Artificial Brain

StTheo

Well-Known Member
About a week ago, the European Union agreed to fund a project known as the "Human Brain Project" to support research in neuroscience, medicine, and computing. While the site doesn't explicitly state that AI is their goal, they do draw comparisons to AI:
How will the HBP be different from classical Artificial Intelligence?

The challenge in AI is to design algorithms that can produce intelligent behaviour and to use them to build intelligent machines. It doesn’t matter whether the algorithms are biologically realistic – what matters is that they work – the behaviour they produce. In the HBP, we’re doing something completely different. The goal is to build data-driven models that capture what we’ve learned about the brain experimentally: its deep mechanics (the bottom-up approach) and the basic principles it uses in cognition (the top-down approach). Certainly we will try to translate our results into technology (neuromorphic processors) but, unlike classical AI, we will base the technology on what we actually know about the brain and its circuitry. We will develop brain models with learning rules that are as close as possible to the actual rules used by the brain and couple our models to virtual robots that interact with virtual environments. In other words, our models will learn the same way the brain learns. Our hope is that they will develop the same kind of intelligent behaviour. We know that the brain’s strategy works. So we expect that a model based on the same strategy will be much more powerful than anything AI has produced with “invented” algorithms.
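To make the "learning rules as close as possible to the actual rules used by the brain" idea concrete, here is a toy sketch of Hebbian plasticity ("cells that fire together wire together"), one of the classic biologically inspired rules this kind of project builds on. Everything here (sizes, learning rate, the linear response) is invented for illustration, not taken from the HBP:

```python
import numpy as np

rng = np.random.default_rng(0)

n_pre, n_post = 4, 3
# Random initial synaptic weights from 4 presynaptic to 3 postsynaptic cells.
weights = rng.normal(0.0, 0.1, size=(n_post, n_pre))
learning_rate = 0.01

def hebbian_update(weights, pre_activity, post_activity, lr=learning_rate):
    """Strengthen each synapse in proportion to correlated pre/post activity."""
    return weights + lr * np.outer(post_activity, pre_activity)

pre = rng.random(n_pre)   # presynaptic firing rates in [0, 1)
post = weights @ pre      # simple linear postsynaptic response
weights = hebbian_update(weights, pre, post)
```

The point of the contrast in the quote is that a rule like this is local and experience-driven, rather than a hand-designed ("invented") algorithm for a specific task.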

What are your thoughts on this?

What I worry about is this: if a consciousness emerges without the amount of development that organic brains have to go through, it might be completely insane. So I'm not so worried that it would be a malevolent sociopath as I am that it would suffer in a way that no human (or probably any species with a brain) has ever experienced.
 
I think you are starting from a false premise. The mentally handicapped are not constantly suffering; they enjoy life, sometimes even more than most people without mental handicaps. A more apt way to put your fear would be that the "AI" would not have the ability to process the information it receives, which could, in theory, cause it to suffer. Most attempts at AI focus on "teaching" the AI over time, much like raising a child. While there isn't the advantage of physical growth, the software is given mental development time. Cool topic; there is a lot to consider.
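The "teaching the AI over time" idea can be sketched as online learning: the model sees one example at a time and only adjusts itself when it gets something wrong, gradually improving over repeated exposure. The task (learning OR with a perceptron) and all parameters below are invented purely as an illustration of that incremental style:

```python
import numpy as np

w = np.zeros(2)  # weights start blank, like an untrained "pupil"
b = 0.0

def predict(x):
    return 1 if w @ x + b > 0 else 0

# Teach it the logical-OR function, one example per "lesson".
lessons = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]
for epoch in range(10):
    for x, target in lessons:
        x = np.asarray(x, dtype=float)
        error = target - predict(x)
        w += error * x   # adjust only when the pupil answers wrong
        b += error
```

After a few passes the corrections stop and the model answers every lesson correctly; real developmental training is vastly richer, but the repeat-and-correct loop is the same shape.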
 
Yeah, "mentally handicapped" was a very poor choice of words, now that I think about it, and I didn't mean that people described as such were suffering.
 