Vorsprungity Thread

Audi

I can do philosophy too, me.
My question for you kids today is:

"Will machines ever become conscious?"

This is an idea that's been explored in science fiction for years. The idea of machines that can adapt to their surroundings and react instinctually has been thrown around by philosophers, programmers, and engineers for decades. But how realistic is this concept? Can machines truly experience consciousness as we define it?

Talking points to consider:

1: Moore's law and its implications
The concept of Moore's law is that the number of transistors on an integrated circuit doubles roughly every two years. In other words, computing technology becomes exponentially more powerful as time goes on. What effect will this have? If technology is improving at such a rate, when will we be able to create machines that can be self-aware? (See the quick sketch after these talking points.)

2: The definition of consciousness
What do we mean by 'conscious' machines? Are we talking about machines that can experience emotion? Or machines with instinct? Is consciousness a strictly functional attribute?

3: The human aspect
Even if machines evolve to a point that they can imitate human experiences near-perfectly, can we then call them conscious? Are they still simply following a set of inputs, albeit a complex combination? What does this mean, then, for humans? Will machines, once conscious, self-replicate? Where does the programmer fit into this?
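
Since point 1 is really just arithmetic, here is a rough back-of-the-envelope sketch of that doubling. The 1971 Intel 4004 starting point (~2,300 transistors) is an illustrative baseline of my own, not part of the question:

```python
# Rough illustration of Moore's law: transistor counts doubling every two years.
# The 1971 Intel 4004 baseline (~2,300 transistors) is an illustrative assumption;
# real chips only loosely track this idealized curve.
def projected_transistors(year: int, base_year: int = 1971, base_count: int = 2300) -> float:
    """Transistor count projected under a strict two-year doubling."""
    return base_count * 2 ** ((year - base_year) / 2)

for y in (1971, 1991, 2011, 2021):
    print(y, f"~{projected_transistors(y):,.0f} transistors")
```

Fifty years of doubling turns ~2,300 transistors into tens of billions, which is why the question of self-aware machines keeps coming back.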


Alright, give me your best, T9K.
Show me that you're conscious.
 
To me, the answer is both obvious and not; I mean, if we were to program a robot/machine with a limited range of intelligence but didn't keep it at that level, I think it could learn more on its own.


If it got to that point, most likely we (by which I mean mankind) would have three options:

Option A) Adapt to this change and allow the machines to integrate into society, in a sense; allow them to be among us without any "leashes or collars".

Option B) As a safeguard (and I know I'm "stealing" this from him), implant each machine with Isaac Asimov's Three Laws of Robotics. They are, for those who don't know (a toy sketch of their precedence follows at the end of this post):



1) A robot may not injure a human being or, through inaction, allow a human being to come to harm.

2) A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.

3) A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

Isaac Asimov was the one who developed these laws, and from what I've read of his work, nine times out of ten they work, or there's some twist to them.

Option C) Succumb to the Frankenstein complex (also from Asimov: the fear of metal men, or the fear of one's own creation) and attack/destroy the machines out of sheer fear.


But something else could happen, related to Option C: the "uncanny valley". That's the effect that occurs when a human encounters an object that is so close to being humanlike, yet not quite there. An interesting thing happens to said human; let's use Paul as an example. He would start to question his beliefs, because how can something not human be so close to being human, especially if humankind is supposedly "unique" and defined by religious beliefs? His concern over his own mortality would rise, and so on; you get the point.

I always love the idea of machines, but I think we should stop working on making them the way we currently are, because hey, you never know.
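
Since Option B is basically a priority scheme, here's the promised toy sketch of that strict precedence. Every flag is a hypothetical boolean of my own; a real robot would need perception and judgment far beyond anything like this:

```python
# Toy sketch (purely illustrative) of the Three Laws as a strict precedence check.
def allowed(action: dict) -> bool:
    # First Law: never injure a human, or allow one to come to harm through inaction.
    if action["injures_human"] or action["lets_human_come_to_harm"]:
        return False
    # Second Law: obey human orders (First Law conflicts were already rejected above).
    if action["disobeys_order"]:
        return False
    # Third Law: protect own existence, unless Law 1 or Law 2 demands otherwise.
    if action["endangers_self"] and not (action["ordered_by_human"] or action["protects_human"]):
        return False
    return True

# Law 2 outranks Law 3: a self-endangering but ordered action is still permitted.
print(allowed({"injures_human": False, "lets_human_come_to_harm": False,
               "disobeys_order": False, "endangers_self": True,
               "ordered_by_human": True, "protects_human": False}))  # True
```

The twists Asimov writes about usually come from exactly these flags being ambiguous in practice.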
 
I truly believe that if we can ever map the human brain down to a 100% predictable and explainable system, we can not only remake it, but we can make it better.
 
I believe we will eventually be able to implant our own consciousness into a machine body. Of course, at this point, the difference between human and machine will be null, and therefore the question here is moot as well.
 
But maybe,
you're wrong

What if the human element is necessary for consciousness?

If you subscribe to the idea that consciousness is defined by self-awareness, then how is a human in robot form necessarily self-aware?
A human pulls himself away from a flame because of the sensation of pain. He knows not to burn himself because of that previous experience of pain.
A robot may pull away from a flame, but only because its sensors tell it to. It is not an instinctual response; it is a pre-programmed reaction.

Where do we draw the line between instinct and programming?
While a robot might be able to respond to a huge range of stimuli, it may never be able to have that instinct humans have.

Not that I necessarily agree with that line of thought,
But this is the more relevant conversation, no?
The question is never moot.
 
Even if machines evolve to a point that they can imitate human experiences near-perfectly, can we then call them conscious? Are they still simply following a set of inputs, albeit a complex combination?

How is this different from humans? Aren't we as humans simply following a set of inputs? Perhaps my point of view as an atheist is what makes this a compelling argument. If you accept that humans are merely flesh, blood, and bone, then it seems we are just products of our environment, just as computers with a sufficient learning capability would be. There are instincts as well... our bodies are born simply knowing how to do some things, such as breathe. However, this is programming that has evolved over billions of years.

If you make the argument that humans have souls... and that's what sets them apart from robots... well... that's a different debate.
 
As I said,
Not that I necessarily agree with that line of thought.
I personally believe that a human is nearly analogous to a machine.
What is an eye but an organic photoreceptor?

And to shoot down my own suggestion of "instinct"...
The best way I think of it is this:
I don't have the instinct to look after children as I don't have any children. However, if I had a child, I'd learn that paternal instinct. It's not something I'm necessarily born with; it's something I learn.
My human experience isn't adversely affected by the fact I don't know how to look after children. I still function perfectly fine.
In the same way, does a robot necessarily need instinct?

I wouldn't go into the argument that humans have souls and whatnot, because I think that's too religious and thus smelly,
But I would consider that the difference between humans and robots is emotion.
How do you program love?

But again, everything's up for debate - I'm just plating up arguments for the punters here. :)
 
I don't believe that romantic love actually exists, as it is a fairly recent invention. Brotherly, or long-term relationship love, now that's different. For simplicity, I'll use that definition of love. Ask yourself why we developed love. It's a survival instinct; we are fragile alone, our species works best when we work together. Would a robot need that? If yes, then programming something similar to love would be the same thing. If not, then why bother?
 
The term "self-aware" is very subjective, as everyone has different opinions and views on it.

If there is a defined set of parameters and prerequisites for consciousness (ones that are not contradictory) that can be checked off a list, then I see that there really isn't a problem with creating a "conscious" robot.
 
All of our responses, reactions, thoughts, decisions and emotions are driven by electrical and chemical signals. Our brain processes all of our intake and converts it into electrical responses and stimuli, which in turn make us react one way or another. I'm thinking about these words that I am typing right now, and so my brain is sending electrical impulses through my nerves to my arms, hands and fingers to accurately input these words on the keyboard. Emotion is no different than something as basic as typing. I see what I would consider an attractive female, and my brain will send impulses to my eyes, heart and other bits as a reaction. Love and other emotions are nothing but reactions to certain objects in our environment that our bodies have adapted to in order to show that object the appropriate response. To show my love and affection for my wife, I have adapted how to kiss and hold her, which is quite different from how I would show my love and affection for my son or my mother. The reason why it's considered 'love' instead of 'hate' or some other emotion is because of how our bodies react to the impulses our brain sends to the different parts of our bodies. Although there are different types of love, they all originate from the same part of the brain, and the overall reaction is the same.

How our bodies and brains learn these reactions and how to interpret them is, I think, the crux of the question of whether machines will ever become conscious. Machines are programmed today to react and respond to stimuli within their environment, and we are slowly figuring out how to program AI to react more organically and realistically to stimuli in video games (that, I think, will be the crowning achievement of this next console generation: more believable AI and environments), and those programs could be modified and loaded into a robot. The learning part of our brains/bodies is what ultimately differentiates us from machines now. We are able to learn and adapt; machines are not. We can program a robot to exhibit signs of love, hate, or any other emotion through pre-programmed inputs, but we cannot program that robot to learn to express those emotions to other objects on its own. It cannot learn what to love or hate. When we can develop an AI that is able to learn and adapt to situations without any additional input from programmers, we will soon see a conscious robot.
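
For what it's worth, the learning part already exists in miniature. Here's a tiny sketch (mine, not anyone's real robot code) of the earlier flame example: the agent is never told to avoid the flame, it only receives a pain signal and learns the avoidance itself:

```python
import random

# Minimal sketch of learning from experience instead of pre-programmed rules.
# Two actions: touch the flame (0) or pull away (1).
values = [0.0, 0.0]   # learned estimate of each action's value
alpha = 0.1           # learning rate
epsilon = 0.2         # exploration probability

for step in range(500):
    if random.random() < epsilon:
        action = random.randrange(2)                     # explore
    else:
        action = max((0, 1), key=lambda a: values[a])    # exploit best guess
    reward = -1.0 if action == 0 else 0.0                # the flame "hurts"
    values[action] += alpha * (reward - values[action])  # update from experience

print(values)  # "touch" drifts toward -1.0; "pull away" stays near 0.0
```

Whether that kind of adaptation ever adds up to consciousness is, of course, the whole debate.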
 

You. Massive loads of respect right now. You and I need to have some chats about robots.
 
The only way I see us creating a "strong AI" (an AI that matches or exceeds human intelligence) is by creating a completely artificial version of the human brain. We'd start with the physical structure, which is just a tad overwhelming. Our brains have around 100,000,000,000 neurons, and each neuron has around 7,000 synaptic connections to other neurons. Luckily, the energy consumption and heat issues of such a large system seem to be manageable in smaller models, as seen with the "Neurogrid" chip. However, there seems to be another large problem with the neural network. From what I understand, the human brain, and its neural network, physically changes with age. We'd have to find a way to simulate not only the monstrosity that is the human brain's neural network, but the way it changes itself in response to learning.
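
To put those numbers in perspective, the raw arithmetic looks like this (the bytes-per-synapse figure is my own assumption, just to get an order of magnitude):

```python
# Back-of-the-envelope scale of the wiring described above.
neurons = 100_000_000_000                        # ~1e11 neurons
synapses_per_neuron = 7_000
total_synapses = neurons * synapses_per_neuron   # 7e14 connections

# Assume (purely illustratively) 4 bytes to store one synaptic weight.
bytes_per_synapse = 4
petabytes = total_synapses * bytes_per_synapse / 1e15
print(f"{total_synapses:.1e} synapses -> ~{petabytes:.1f} PB for static weights alone")
```

And that's just one frozen number per connection; the age-related rewiring mentioned above means all of it has to keep changing.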
 
I was thinking about this a bit further. As I said before, I don't think that we'll ever have a true "strong AI" without a completely synthetic human brain. The first AI of this kind will be either a complete copy of an existing person's brain, or a blank slate. If it's the former, then every neuron and synapse would have to be copied, and I'm unsure whether the AI would be able to adjust to the lack of an organic body. If it's the latter, then the AI would essentially start out with the mind of an unborn child. However, I think that this poses a similar problem. A child's mind doesn't adequately develop without a parent, and without an organic body (or a functionally equivalent synthetic body), I'm thinking that it would experience even more developmental problems.
 
You definitely nailed the bullseye on this one, dude.

You're right. The AI could even malfunction altogether during those developmental stages in that case.
 
I have an issue with the blank-slate statement. In order to learn, or even to have some measure of intelligence at all, the mind cannot be truly blank. This would not be the mind of a newborn, but the mind of someone brain-dead from birth.
 