Entries in Strong AI (5)

Monday
Feb 27, 2023

Talking to an AI chatbot about my research on Strong Artificial Intelligence

I had a fascinating conversation with the #Bing #AI #chatbot this evening.

In 2006 I defended my PhD at UNISA on an African intersubjective approach to self-validating consciousness in relation to some of the claims of Strong Artificial Intelligence.
This evening Bing found my work, summarized my thesis (relatively well), and ‘humbly’ conceded its limits.
Goodness, this is quite interesting: some of what I discussed in 2006 is now a reality! What do you think about AI, particularly in relation to the kind of work you do? Can you see possibilities for using it in helpful and responsible ways, or not?
See the screenshots for more on this conversation, and you can find a copy of my 2006 PhD thesis here:

 

Wednesday
Apr 14, 2010

Sci-Fi meets society - my Artificial Intelligence research used...

I have mentioned elsewhere on my blog that I practice a simple discipline of NOT 'googling' myself (sometimes it is called 'vanity searching' - I think that is quite an accurate description).  It is a simple choice not to search for my name on the internet.  It is quite liberating not to worry about what others are saying, or not saying, about me!

However, even though I have chosen this, every now and then someone sends me a note about something I've written, or a comment that someone has made about my research or writing. I'm ashamed to admit that it feels quite good (what Afrikaans speakers would call 'lekker').

This was the case with this particular entry.  A friend sent me a link to point out that my research on 'strong Artificial Intelligence' was quoted in an iTWeb article! Very cool!  

It was quite exciting to read the context in which my ideas were used. The article is entitled 'Sci-fi meets society' and was written by Lezette Engelbrecht. She contacted me some time ago with a few questions which I was pleased to answer via email (and point her to some of my research and publications in this area). Thanks for using my thoughts Lezette - I appreciate it!

You can read the full article after the jump.

As artificially intelligent systems and machines progress, their interaction with society has raised issues of ethics and responsibility.

While advances in genetic engineering, nanotechnology and robotics have brought improvements in fields from construction to healthcare, industry players have warned of the future implications of increasingly “intelligent” machines.

Professor Tshilidzi Marwala, executive dean of the Faculty of Engineering and the Built Environment, at the University of Johannesburg, says ethics have to be considered in developing machine intelligence. “When you have autonomous machines that can evolve independent of their creators, who is responsible for their actions?”

In February last year, the Association for the Advancement of Artificial Intelligence (AAAI) held a series of discussions under the theme “long-term AI futures”, and reflected on the societal aspects of increased machine intelligence.

The AAAI is yet to issue a final report, but in an interim release, a subgroup highlighted the ethical and legal complexities involved if autonomous or semi-autonomous systems were one day charged with making high-level decisions, such as in medical therapy or the targeting of weapons.

The group also noted the potential psychological issues accompanying people's interaction with robotic systems that increasingly look and act like humans.

Just six months after the AAAI meeting, scientists at the Laboratory of Intelligent Systems, at the École Polytechnique Fédérale de Lausanne, Switzerland, conducted an experiment in which robots learned to “lie” to each other, in an attempt to hoard a valuable resource.

The robots were programmed to seek out a beneficial resource and avoid a harmful one, and alert one another via light signals once they had found the good item. But they soon “evolved” to keep their lights off when they found the good resource – in direct contradiction of their original instruction.
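The selection pressure behind that result can be illustrated with a toy evolutionary sketch. To be clear, this is NOT the EPFL team's actual code or experimental setup - just a minimal, invented model in which each robot carries a probability of signalling when it finds the good resource, and signalling carries a fitness cost because it attracts competitors:

```python
import random

# Toy sketch (all payoff numbers are invented for illustration):
# selection gradually suppresses the "honest" light signal because
# signalling attracts crowding at the resource.
random.seed(0)

POP_SIZE, GENERATIONS = 50, 60

def fitness(signal_prob):
    # Finding the resource pays a fixed benefit; lighting up attracts
    # competitors, and that expected cost grows with the signalling rate.
    return 10 - 8 * signal_prob

def evolve():
    pop = [random.random() for _ in range(POP_SIZE)]  # initial signal rates
    for _ in range(GENERATIONS):
        pop.sort(key=fitness, reverse=True)           # select the fitter half
        survivors = pop[:POP_SIZE // 2]
        offspring = [min(1.0, max(0.0, p + random.gauss(0, 0.05)))
                     for p in survivors]              # copy with small mutation
        pop = survivors + offspring
    return sum(pop) / len(pop)                        # mean signalling rate

print(evolve())  # drifts towards 0: the signal is selected away
```

Even this crude model shows the "lights off" behaviour emerging without any robot being told to deceive - suppressing the signal is simply what survives.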

According to AI researcher Dion Forster, the problem, as suggested by Ray Kurzweil, is that when people design self-aggregating machines, such systems could produce stronger, more intricate and effective machines.

“When this is linked to evolution, humans may no longer be the strongest and most sentient beings. For example, we already know machines are generally better at mathematics than humans are, so we have evolved to rely on machines to do complex calculation for us.

“What will happen when other functions of human activity, such as knowledge or wisdom, are superseded in the same manner?” (read the rest of the article here...)

Friday
Mar 19, 2010

Learning to listen to the earth - is it a good thing?

This amazing video shows how we are learning to listen to the earth! Did you know there are more ‘things’ connected to the internet than people!?
There are sensors under the roads, in shops, in our offices, homes, schools and our phones - they’re all reporting things to us and about us.
The key is DIKW, moving from:
  • captured data to
  • usable information to
  • knowledge to
  • wisdom
I suppose that as long as we gather the data in order to glean information that we can use as knowledge, so that we become wiser, it is a good thing.
 
For example, I don't like being filmed by hundreds of closed-circuit TV cameras wherever I go - but I understand why it is necessary to capture that data, so that the information can help the police to gain knowledge about criminals and criminal hot spots, so that they can advise people to be wise about where they go, and what they do when they go out.
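The CCTV example traces the whole DIKW ladder, and it can be sketched as a tiny pipeline. A minimal, illustrative sketch in Python (the sensor names and incident records are invented for the example):

```python
# Data: raw captured events, straight off the sensors.
raw_data = [("cam_03", "2010-03-19T22:14", "incident"),
            ("cam_03", "2010-03-19T23:02", "incident"),
            ("cam_07", "2010-03-19T10:30", "incident")]

# Data -> Information: aggregate raw readings into usable counts.
def to_information(data):
    counts = {}
    for sensor, _timestamp, _event in data:
        counts[sensor] = counts.get(sensor, 0) + 1
    return counts

# Information -> Knowledge: identify the pattern (a crime hot spot).
def to_knowledge(info):
    return max(info, key=info.get)

# Knowledge -> Wisdom: turn the pattern into actionable advice.
def to_wisdom(hot_spot):
    return f"Increase patrols near {hot_spot} and advise the public."

info = to_information(raw_data)
print(to_wisdom(to_knowledge(info)))
```

Each step discards detail and gains usefulness - which is precisely the point of the DIKW progression.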
I've blogged about some of this here (see the post on wolframalpha which I think is very interesting in this regard), and here for a neuroscientific perspective, and this one which deals with strong artificial intelligence.
Tuesday
May 12, 2009

How computers can replicate (but not replace) scientists...

Strong Artificial Intelligence formed a large part of my doctoral research - yes, I've heard most of the jokes about being an 'artificially intelligent' doctor... And, the good news is that most of them are true! ha ha!

I proposed a hypothesis, among other things, based on a mathematical model for the exponential growth of representational and emulative intelligence in machines (showing an exponential increase in computing capacity from data retention, to information processing, to knowledge management, and then to intelligence and finally sentience). In order for this to take place, Moore's law would need to be exceeded (which has happened), and we would need to harness the accuracy and computational power of artificially intelligent machines to create even more intricate and powerful machines (much too complicated for a human person to create in the limited span of our lives, and with the clumsiness of our knowledge and skill). These are likely to be quantum computers, or possibly some form of enzyme-based biomechanical machines...
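The shape of this kind of projection can be sketched in a few lines. The baseline capacity, doubling period, and "brain-scale" threshold below are illustrative assumptions chosen for the sketch, not figures taken from the thesis:

```python
# Back-of-envelope sketch of a Kurzweil-style exponential projection.
# base_ops and TARGET are assumed figures for illustration only.

def capacity(year, base_year=2006, base_ops=1e14, doubling_months=18):
    """Projected operations per second, assuming clean exponential doubling."""
    months = (year - base_year) * 12
    return base_ops * 2 ** (months / doubling_months)

TARGET = 3e18  # assumed brain-scale threshold (operations per second)

year = 2006
while capacity(year) < TARGET:
    year += 1
print(year)  # -> 2029 under these assumed figures
```

The point is not the particular numbers but the behaviour of the curve: under steady doubling, even a four-orders-of-magnitude gap closes within a couple of decades.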

The long and the short of it was that we could see the rise of truly intelligent machines by as early as 2029 (as per Ray Kurzweil's suggestion).

Well, some of this is already taking place in credible scientific research. Simple linear (and some more complex parallel) emulative processes are already being reproduced using supercomputers. However, as the post below suggests, whilst computers can perform comparative tasks between existing models, they are not yet at the place where they can muster the creativity to develop new models by themselves... But, who knows, that may not be too far off! All that we need is some reliable self-aggregating code that gathers knowledge, tests it through a simple Turing test (in comparison to other valid data - of course both of these processes are already possible), and then aggregates and adjusts its code base for increasing accuracy and complexity. If a machine can do this faster and more accurately than a human person, it may just be able to develop more stringent and previously unfathomed models of knowledge, and perhaps even wisdom!
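The propose-test-adjust loop described here can be sketched very simply. This is purely illustrative - a toy hill-climb fitting a straight line, not real self-modifying code - but it shows the same cycle: propose a candidate model, test it against trusted data, keep whichever scores better, and repeat:

```python
import random

random.seed(1)

# The "valid data" the machine tests its candidate models against.
data = [(x, 3 * x + 2) for x in range(10)]

def error(model, dataset):
    # How badly a candidate model y = a*x + b fits the trusted data.
    a, b = model
    return sum((a * x + b - y) ** 2 for x, y in dataset)

def refine(iterations=2000):
    best = (0.0, 0.0)  # a deliberately poor starting model
    for _ in range(iterations):
        # Propose a small random variation of the current best model...
        candidate = (best[0] + random.gauss(0, 0.1),
                     best[1] + random.gauss(0, 0.1))
        # ...and keep it only if it tests better against the data.
        if error(candidate, data) < error(best, data):
            best = candidate  # "adjust the code base" toward accuracy
    return best

a, b = refine()
print(a, b)  # typically close to the true model y = 3x + 2
```

The machine never "understands" the model it converges on - it only compares and adjusts - which is exactly the gap between analysis and meaning that the Cornell researchers describe below.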

But for now, here's what is possible:


In his first column for Seed magazine, my Institute for the Future colleague and pal Alex Pang looks at efforts to create software that doesn't just support scientific discovery, it actually does new science. From Seed:
Older AI projects in scientific discovery tried to model the way scientists think. This approach doesn’t try to imitate an individual scientist’s cognitive processes — you don’t need intuition when you have processor cycles to burn — but it bears an interesting similarity to the way scientific communities work. (Cornell professor Hod) Lipson says it figures out what to look at next “based on disagreement between models, just as a scientist will design an experiment that tests predictions made by competing theories.”

 

But that doesn’t mean it will replace scientists. (Cornell graduate student Michael) Schmidt views it as a tool to see what they can’t: “Something that is not obvious to a human might be obvious to a computer,” he speculates. A program, says Schmidt, may find things “that look really strange and foreign” to a scientist. More fundamentally, the Cornell program can analyze data, build models, and even guess which theories are more powerful, but it can’t explain what its theories mean — and new theories often force scientists to rethink and refine basic assumptions. “E=mc2 looks very simple, but it actually encapsulates a lot of knowledge,” Lipson says. “It overturned a lot of older preconceptions about energy and the speed of light.” Even as computers get better at formulating theories, “you need humans to give meaning to what the system finds.”

Why We're Not Obsolete: Alex Pang in Seed

From boingboing


 

I would be interested to hear your thoughts. Do you think that sentient machines could be a threat to humanity? I once postulated that perhaps the extinction of the human race was part of God's evolutionary plan for the redemption of the cosmos... It would seem that humanity has two radical problems. First, we have a tendency to displace God from the centre of the universe (so much popular theology revolves around humanity, the needs and will of humans, and the actualisation of human desire)... Surely that cannot be right!? Second, humans are clearly a destructive force in the greater scheme of cosmic reality. We fight, we consume, we destroy, and generally seem to be quite bad for the cosmic ecosystem.

Of course the converse argument is that the Gospels show that Jesus died for BOTH humans and the cosmos... But, I could be wrong (or right)! What do you think?

Tuesday
Mar 4, 2008

Singularity and the Matrix... Spiritual Machines... Mmmmm... Contemporary crazies!? Maybe not?

John van de Laar gave an interesting perspective on a 'religious movement' that has formed around the central ideas of the first Matrix movie.
I can certainly understand the appeal - after all, throughout history generations have always attempted to locate the sacred within the tools, symbols, and nomenclature of their contemporary culture. The Matrix seems to be so expressive of some of the existential questions, queries, and framing aspects of our reality (these include such issues as the relationship between humans and our technology, eternal existence in terms that we can understand, issues of good and evil, etc.).

Some scientists have suggested that these issues may have a far greater influence, and in fact be truer, than sociology, theology, and psychology alone could explain. Others have suggested it is in fact nothing more than 'wish projection' (along the lines of Feuerbach's and Freud's theories).

In short, every generation has a built-in need to believe that there is more to life than just being born, living, and dying - we seek a transcendent truth (which you can read about in my Doctoral Thesis, by the way - please see the chapter on Neuroscience (chapter 3, I think it was) where I discuss the holistic and transcendent a priori neural operators that are present in the human brain from birth). In our generation the 'mythology' of our time is intrinsically linked with technology (particularly those technologies that make our lives easier, and in some sense bearable).

A final perspective, which I think is the most rational of them all, is the perspective offered by Professor Cornel du Toit, who suggests that any duality that we create between ourselves and our technology is a false duality. Just think about it: your cell phone is not just an object that performs technological functions, it has become an integral part of your life. For many of us it extends our ability to communicate, it offers us a sense of security, connection with others, and for some (like myself) it even regulates how one lives one's life (e.g., my cell phone has a diary function that alerts me to appointments, etc.). Another example cited by du Toit is contemporary banking. We have created both a hard technology (notes, coins, cards, ATMs) and a soft technology (values, exchanges, commodities, etc., which cannot be felt, weighed, or seen, but which have value). Just try to live your life without money and you will soon see how we have allowed a 'created' technology to become an integral part of our identity. How many people do you know whose identity is formed by what they earn, what they drive, and what they use?

I tend to agree with this - faith and technology are not separate realities that are discovering one another; they are complex, interwoven systems of creating and forming meaning. Both are dependent upon each other.

Anyway, enough of my 'ramblings'....

Read the article below for more on the concept of 'singularity':


Science fiction writer and mathematician Rudy Rucker takes a running swing at the idea of the Singularity, the moment in human history when we disassemble raw matter, turn it into "computronium" and upload ourselves to it, inhabiting a simulation of reality rather than real reality. It's a fine and provocative turn from our Mr Rucker, who has a fine and provocative and deeply weird and wonderful mind.
Although it’s a cute idea, I think computronium is a fundamentally spurious concept, an unnecessary detour. Matter, just as it is, carries out outlandishly complex chaotic quantum computations just by sitting around. Matter isn’t dumb. Every particle everywhere everywhen is computing at the maximum possible rate. I think we tend to very seriously undervalue quotidian reality...

This would be like filling in wetlands to make a multiplex theater showing nature movies, clear-cutting a rainforest to make a destination eco-resort, or killing an elephant to whittle its teeth into religious icons of an elephant god.

This is because there are no shortcuts for nature’s computations. Due to a property of the natural world that I call the “principle of natural unpredictability,” fully simulating a bunch of particles for a certain period of time requires a system using about the same number of particles for about the same length of time. Naturally occurring systems don’t allow for drastic shortcuts.

Link (via Futurismic)

 

By the way, my own doctoral research considered some of the theological issues in relation to these notions - you can download a copy of my Doctoral Thesis here (please see chapter 2). Two other superb books to read are:

The Age of Spiritual Machines and Are We Spiritual Machines? by Ray Kurzweil.
Wiredlife: Who Are We in the Digital Age? by Charles Jonscher.
