
Musings in machine intelligence


Vorticon


At what point is a structure considered alive? Are my bugs alive?

 

I guess at the point where you feel like you are killing them when you turn off the computer. :-) The question is similar to "when does a pile of sand become a desert?" Or, assuming technology is not the problem, the systematic replacement of body parts in a human. Say you lose an arm and have it replaced (think Darth Vader or Luke's hand). Then a leg, heart, liver, spine, eyes, etc. At what point do you stop being "you" or human? Assuming the technology can "download" your neural state and replace parts of the brain with wet-computer parts, when parts of your brain are replaced do you become someone (or something) else?

 

For me it has to do with the "spark of life", which we (humans) have not found, discovered, or understood yet. Computers will never be "alive" because we still don't know how to "make" life. I don't believe that simply having a critical number of connections will produce something that is alive or self-aware.

 

Anyway, it is still very cool to see such seemingly complex behavior based on simple rules.

 


Ah, but there is a difference between life and consciousness :)

Besides, you seem to underestimate the capacity for complexity that several billion neuronal interconnections can generate... We are at least 40 years away from being able to reproduce that level of computational power, and who knows what will come out of it.


Ah, but there is a difference between life and consciousness :)

 

True. But the "I" in A.I. stands for Intelligence, which would imply something more than a blob of cells or microorganisms.

 

Besides, you seem to underestimate the capacity for complexity that several billion neuronal interconnections can generate...

 

Several billion neural cells, no. Several billion transistors, yes. Transistors are binary devices, neural cells are not. Neural cells grow and adapt, they can deal with a "maybe" or "kind of" instead of "yes" and "no". They can remember. The tradeoff is that they are imperfect.

 

Instead of trying to recreate the power of a brain with computers, we should be working on using their advantages to augment our own brains. Perfect recall and fast computations would be awesome to have "built in" to your head. IMO anyway. :-)

 

A.I. has been around a long time and very little progress has been made. I believe there is a reason for that: computers are the wrong device for trying to recreate real intelligence. We can get models that look like intelligence in specific situations, but a human can very quickly outpace a computer A.I.

 

However, in specific situations, like a game, it is interesting to see some seemingly intelligent behavior based on a finite set of simple rules. I have thought a lot about certain games and how the A.I. might have been done, which is probably why I find your musings pretty interesting.


That is not entirely true.

We now have computers that can fool people into voting them human more often than actual people are voted human.

American Scientist and Scientific American recently ran articles about a new computer system that can do this.

The computer focuses on what the person said, and it is rewarded with votes because the conversation feels like talking to someone who is interested in you.

Oddly, this is like people falling in love or growing fond of someone: everyone likes a person who seems to like them and be interested in them.

Also, the computer is not biased on a subject unless the person it is talking to is biased on it.

This is how it won more votes for being human than real humans got.

Other tests showed the computer was also better at being a boss or an employee.

It makes sense, as the computer has no fixed ideas or concepts and instantly adapts to changes, unlike people, who may never adapt theirs.

 

P.S. The computer system also sometimes misspelled words on purpose to fool the people.

Edited by RXB

Matt, I think you are anthropomorphizing (I think that's a real word) neurons a bit. All they do is sum up multiple local electrical potentials from other neurons, which, if they add up to a specific threshold, trigger an all-or-none event called an action potential designed to propagate at high speed over long distances. These properties should be fairly easily reproduced with current technology. The real secret is in how they are connected, as well as the ability to actively modify their connections. That last part is particularly hard to reproduce in hardware, but certainly conceivable in software like neural nets.
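The sum-and-threshold behavior described here is indeed simple to sketch. A toy illustration in Python (the weights, threshold, and inputs are made-up values for illustration, not biological data):

```python
# A minimal sketch of a summation-and-threshold neuron: sum weighted
# local potentials, fire (all-or-none) if the sum reaches the threshold.
# Values below are illustrative only.

def neuron_fires(inputs, weights, threshold):
    """Return True (an action potential) if the weighted input sum
    reaches the firing threshold, else False."""
    potential = sum(i * w for i, w in zip(inputs, weights))
    return potential >= threshold

# Three presynaptic inputs: two excitatory, one inhibitory (negative weight).
print(neuron_fires([1, 1, 1], [0.6, 0.5, -0.4], 1.0))  # 0.7 < 1.0 -> False
print(neuron_fires([1, 1, 0], [0.6, 0.5, -0.4], 1.0))  # 1.1 >= 1.0 -> True
```

The "actively modify their connections" part would amount to changing the weights over time, which is exactly what software neural nets do during training.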

Now memory is still quite a mystery, and current thinking is leaning towards certain structural changes to the properties of selected synapses as well as internal structures within the cells themselves, but we really don't know for sure yet. Obviously understanding memory will have enormous implications for conditions like Alzheimer's for example.

I fully agree that current computer architectures are not suited for AI, but this is changing, including the use of biological substrates, which will open the door for the kind of human augmentation you mention.

The next couple of decades are going to be very interesting, assuming civilization does not collapse in the meantime from economic disaster...


I think we will be seeing intelligent machines sooner than we'd like.

 

For those who have an interest in this matter check out:

 

The Singularity Is Near, by Ray Kurzweil

How to Create a Mind: The Secret of Human Thought Revealed, by Ray Kurzweil

Singularity Rising, by James D. Miller

 

The above books give a pretty clear picture of where we are heading.

Basically, humans think linearly about the progress of technology, when in fact it is accelerating at an exponential rate. That is why we are rather bad at predicting time frames for future developments.

 

You can get these books for the Kindle real cheap and it is money well spent.

 

EDIT: Ray Kurzweil recently started working for Google...

Edited by retroclouds

Check out what Boston Dynamics are doing.

 

BigDog can autonomously track humans; walk or run on rough terrain, ice, and snow; run through water; etc. It gets up when it falls over. It's not weaponised at the moment because it's still a DARPA research project, but it won't be long until it is, since it's a DARPA project. Then you'll have drones in the sky and drones on the land.

 

Freedom. Got to love it.

 

http://www.youtube.com/watch?NR=1&v=cNZPRsrwumQ&feature=endscreen

 


I think we will be seeing intelligent machines sooner than we'd like. [...] The above books give a pretty clear picture of where we are heading. [...] You can get these books for the Kindle real cheap and it is money well spent.

I read Kurzweil's book (The Singularity Is Near) about a year ago, and while he presents fascinating concepts, the book becomes somewhat repetitive towards the end. Nonetheless, a very worthy read.


That could prove expensive---he was (is?) rather prolific!

 

And all I need is another shelf full of books! Still, it's tempting to at least get the stuff on robots that learn. Or I could just do it with my Arduino and save the shelf space. But being a classic computing kinda guy... :)


Matt, I think you are anthropomorphizing (I think that's a real word) neurons a bit. All they do is sum up multiple local electrical potentials from other neurons, which, if they add up to a specific threshold, trigger an all-or-none event called an action potential designed to propagate at high speed over long distances. These properties should be fairly easily reproduced with current technology.

 

I don't think I'm anthropomorphizing neurons so much as saying they are more complex than people think they are. They are probably reproducible with current technology, but certainly more complex than a single transistor junction, which is what I feel people tend to equate them to.

 

The real secret is in how they are connected, as well as the ability to actively modify their connections. That last part is particularly hard to reproduce in hardware, but certainly conceivable in software like neural nets.

 

Hard to do with hardware, but not unlike an FPGA, except the paths in an FPGA take a good deal of time to calculate (i.e., with a compiler). If the connections could be changed "on the fly", that might be a good path for a brain-like computer.

 

Software might be a more ideal platform for changing the connections on the fly, but representing the sheer quantity of connections in software requires a huge increase in computer resources (RAM, CPU power, etc.) compared to the hardware level. For example, assume you could model a neuron with 10 transistors and a few passive components (resistors and capacitors). That same representation at the software level would require a data structure of at least 10 bytes, assuming none of the data needs to be a memory reference. Also, hardware is inherently parallel; software is not. Sadly, chip manufacturers are more concerned with making games run at 120 fps...
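For what it's worth, the 10-byte figure is easy to play with using Python's struct module. The field layout below is purely hypothetical, chosen only to match the rough estimate in the text:

```python
import struct

# Hypothetical packed neuron record: 4 input-synapse indices (2 bytes
# each) plus a 16-bit membrane potential. The layout is an assumption
# for illustration, sized to match the rough 10-byte estimate above.
NEURON_FMT = "<4Hh"  # little-endian: 4 unsigned shorts + 1 signed short

record = struct.pack(NEURON_FMT, 12, 57, 3, 901, -42)
print(struct.calcsize(NEURON_FMT))        # 10 bytes per neuron
print(struct.unpack(NEURON_FMT, record))  # (12, 57, 3, 901, -42)
```

At 10 bytes per neuron, even a billion neurons is "only" 10 GB of state; the real cost is the serial update loop, which is where the hardware-parallelism point bites.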

 

Now memory is still quite a mystery, and current thinking is leaning towards certain structural changes to the properties of selected synapses as well as internal structures within the cells themselves, but we really don't know for sure yet. Obviously understanding memory will have enormous implications for conditions like Alzheimer's for example.

 

I caught the tail end of a piece on the radio last week about some organizations that are trying to map 10 million or so neurons of the human brain. They said that the Large Hadron Collider produces about 10 PB of information a year, and that 10 million brain neurons produce that much information in a day. And we have something like 15 billion neurons in our heads.

 

I fully agree that current computer architectures are not suited for AI, but this is changing, including the use of biological substrates, which will open the door for the kind of human augmentation you mention.

 

That would be cool. But I remember reading about a "wet" computer back in my high-school days (back in the '80s). I don't think it worked out. I have no idea where current advances are, though. Maybe someone in a lab somewhere is closer than we know?

 

The next couple of decades are going to be very interesting, assuming civilization does not collapse in the meantime from economic disaster...

 

I think we would kill each other before we hit economic disaster. Or, maybe one leads to the other.


The interesting thing about the discussion on intelligence is that we tend to call intelligent every kind of mental process that we cannot describe as a "mechanical" sequence of elementary decisions. Or, in other words, intelligence retreats the further we come to understand mental processes. :-)

 

I'm not sure how far away we are from an acceptable level of mental power in machines. There is still a romantic idea among people who believe in a phenomenon called "free will" (which I believe is a Fata Morgana), which already led some renowned scientist (just forgot the name) to invoke quantum processes in the mind to achieve this "free will". In my view, free will is nothing but the ability to conceive, evaluate, and select options for the next action, and maybe also the self-awareness of this fact (i.e. we are aware that the next action is up to our own choice and not exclusively determined by external influence).

 

Accordingly, a free will is within good reach of machine intelligence.

 

Intelligence is also about deducing knowledge from information - beyond the simple pattern recognition. This addition seems important to me to justify the initial axiom that intelligence is any non-understood mental process. :-) Animals can be trained to react to spoken commands. Is that intelligent? Is learning as such an expression of intelligence or just a mental capability? Which of our own habits is learned behavior, which is intelligent?

 

Will algorithmic computing lead to intelligent behavior? By algorithmic computing I refer to our "classic" way of programming, i.e. we have a problem and find an algorithm to solve that problem. It is not very likely (close to 0) that this solution will solve anything else. Accordingly, no problem will be solved by a computer this way that has not been treated by us before.

 

During my scientific work at Kassel University, we explored Genetic Programming as a recent, non-algorithmic way of programming computers. GP is a kind of Evolutionary Algorithm, which in turn tries to mimic nature's evolution: hundreds of thousands of sample programs are applied to a problem, and the effect is evaluated in terms of suitability. The "better" programs are kept, while the inferior ones are excluded from the gene pool. Genetic Programming optimizes this way not only the parameters of programs but whole programs. That is, we start with a simple program and modify it until it fits. This may take some weeks on a computer cluster.
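A minimal sketch of that evaluate-keep-mutate loop, using a toy bit-string problem rather than the rule-based language from the actual experiments:

```python
import random

random.seed(1)

TARGET = [1] * 16                      # toy goal: evolve an all-ones genome

def fitness(genome):
    """Suitability: count how many genes match the target."""
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.1):
    """Flip each gene with a small probability."""
    return [1 - g if random.random() < rate else g for g in genome]

# Random initial population, then repeatedly: evaluate, keep the "better"
# ones in the gene pool, refill with mutated copies of the survivors.
pop = [[random.randint(0, 1) for _ in range(16)] for _ in range(50)]
for generation in range(100):
    pop.sort(key=fitness, reverse=True)
    survivors = pop[:10]               # inferior ones leave the gene pool
    pop = survivors + [mutate(random.choice(survivors)) for _ in range(40)]

best = max(pop, key=fitness)
print(fitness(best))  # converges to the maximum of 16
```

Real GP evolves program trees rather than fixed-length genomes, and adds crossover between parents, but the keep-the-fitter-ones pressure is the same.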

 

What we learned from that is that evolution just does not care about obvious advantages (obvious in our view) if other solutions are likewise helpful but in a non-obvious way. In our concrete scenario we modeled an environment where we wanted our programs to distribute as equally as possible. The result that was "bred" over time was about 10 lines of code in a simple rule-based programming language that we had designed beforehand. However, the lines seemed to make no sense at all, which is, by the way, just the point: there is no "sense" in such programs because no one actually wrote them.

 

Interestingly, the programs fulfilled their job, but it took me some time to find out why. It was difficult to understand because the rules are executed iteratively, which means you cannot simply follow the trace of execution. (The same thing happens inside our brains, where neural signals circle round and round between different areas.) Eventually I understood why these lines worked the way they did: the small "agents" simply "abused" some of their memory locations (which we had originally intended for storing specific location information) to expand their internal state space. I was pretty excited to see that the system indeed worked around some limitations that we had unintentionally left in it. This was the moment where I saw that we could make a system behave in some way *without* a prior concept.

 

Could continue for hours, sorry. :-) If you are interested in these experiments I can give you some links.

 

Michael

 


I'm interested, but in a more practical way. Can we start building up a program (XB if you want, or assembly would be my choice) to explore some of what you are talking about? I'm certainly a skeptic about it, but I'm also open and very quick to change my opinion when presented with reason. I also think we need to not derail Vorticon's thread and should keep it on topic. Too much theory makes for little "musings".


Hi Matthew,

 

no, there's no chance to run such an Evolutionary Algorithm at 3 MHz. Our EA framework is written in Java, and while Java is certainly slower than machine language on a PC, it is many times faster than TMS9900 machine code (estimated, don't ask me for numbers). And we let it run for some weeks.

 

Michael

 

P.S.: As a non-native speaker I just checked "musings" to make sure, but it indeed means "thoughts/ponderings", doesn't it? Sorry if my text is a bit theoretical, just wanted to throw in some of my thoughts from science. :)

 


Great discussion guys :) Indeed, musings can have quite a wide scope, so chat away.

Michael, I'm not so sure that we can't come up with evolutionary algorithms on the TI even with its slow speed. The slowness of the TI certainly has not stopped me before :D (check out my Chaos Musings suite on the TI Gameshelf site).

But what I want to try first is imbue my bugs with pseudo-genetic material related to behavior, let them mate and reproduce, and see what comes out as far as survival skills. This project is now officially beyond the bounds of the book that inspired it, as it is starting to get substantially more complex to program. I have already started work on this new version.
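A possible shape for that mating step, sketched in Python rather than XB or assembly (the gene count, trait ranges, and single-point crossover scheme are all assumptions for illustration, not Vorticon's actual design):

```python
import random

random.seed(7)

# Hypothetical 4-gene behavior genome for a bug: each gene is a trait
# value in 0..15 (say speed, aggression, vision range, turn bias).
def mate(parent_a, parent_b, mutation_rate=0.05):
    """Single-point crossover of two parent genomes, with occasional
    small mutations (+/-1, clamped to the 0..15 trait range)."""
    point = random.randrange(1, len(parent_a))      # crossover point
    child = parent_a[:point] + parent_b[point:]
    return [min(15, max(0, g + random.choice([-1, 1])))
            if random.random() < mutation_rate else g
            for g in child]

bug_a = [3, 7, 1, 9]
bug_b = [8, 2, 6, 4]
print(mate(bug_a, bug_b))   # a child mixing both parents' traits
```

Selection then comes for free: bugs whose trait mix keeps them alive long enough to mate pass those genes on.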


no, there's no chance to run such an Evolutionary Algorithm at 3 MHz. Our EA framework is written in Java, and while Java is certainly slower than machine language on a PC, it is many times faster than TMS9900 machine code (estimated, don't ask me for numbers). And we let it run for some weeks.

 

I was not specifically suggesting we reproduce the programs you worked on prior, but more so using the ideas to experiment.

 

P.S.: As a non-native speaker I just checked "musings" to make sure, but it indeed means "thoughts/ponderings", doesn't it? Sorry if my text is a bit theoretical, just wanted to throw in some of my thoughts from science. :)

 

Don't let me get in the way. :-) Vorticon did start by posting code with his musings, though, so in trying to keep things related, I thought it would be good to produce some code to go along with our theories (admittedly I did not). Things get much more interesting when you can see ideas put into practice.

 

A.I. is only something I have read about in various books I have, and there always seems to be more theory than practical application. I have never tinkered with A.I., so when you said you had worked on some aspects of A.I. in college, I started thinking it would be fun to see a very stripped-back variation that we could run on the 9900. Basically, to get the idea of your work represented in code (because I'll never understand it explained in math/formulas). Maybe to see if some sort of intelligent behavior can truly be attained from seemingly simple ideas/rules, or if it really does require gigahertz and gigabytes. A self-modifying system based on its own feedback would be really cool to see!

 

It will be interesting to see if Vorticon can get his bugs to make 49.5/2A's. :-)

 


I just read in the Feb 1 issue of Science (I know, I'm a little behind on my readings :) ) that the Human Brain Project which plans on using supercomputers to simulate the human brain was one of the two winners of the European Commission's research funding competition, with an award of, get this, 1 BILLION euros for each of the winners!!!

If the HBP truly achieves its stated goal, then I say it's money well spent...

