Entries in Artificial Intelligence (5)

Monday, Feb 27, 2023

Talking to an AI chatbot about my research on Strong Artificial Intelligence

I had a fascinating conversation with the #Bing #AI #chatbot this evening.

In 2006 I defended my PhD at UNISA on an African intersubjective approach to self-validating consciousness in relation to some of the claims of Strong Artificial Intelligence.
This evening Bing found my work, summarized my thesis (relatively well), and ‘humbly’ conceded its limits.
Goodness, this is quite interesting; some of what I discussed in 2006 is now a reality! What do you think about AI, particularly in relation to the kind of work you do? Can you see some possibilities for using it in helpful and responsible ways, or not?
See the screenshots for more on this conversation, and you can find a copy of my 2006 PhD thesis here:

 

Saturday, Jan 3, 2015

Robots are starting to break the law and nobody knows what to do about it

More than 10 years ago I wrote about this challenge in my doctoral research. At the time it was not yet feasible, but as the story at this link shows, robots can now commit crimes (or at least perform actions that we would consider criminal).

http://fusion.net/story/35883/robots-are-starting-to-break-the-law-and-nobody-knows-what-to-do-about-it/?utm_source=digg&utm_medium=email

The question the article doesn't consider is whether these acts are actually criminal. Did the robot have criminal intent, or was it just randomized action on the part of a non-conscious machine? While we may consider these actions criminal, I doubt the robot had any sense of the difference between the 'criminal' purchases and its other randomized, 'non-criminal' ones.

Still, it highlights an interesting ethical issue: what do we do when criminal activities are carried out by non-sentient, self-directed machines or programs? Perhaps at best we could deactivate the machine and amend its code, or re-program it with more sophisticated coding that takes our sense of criminal activity into account. In more serious cases we could ask whether the creators of the machine or program had criminal intent, and pursue them for their intent and action (enacted by proxy through the machine or program).

You can read more about my thoughts and research on these issues (although I did shift from Artificial Intelligence to Neuroscience):

http://www.dionforster.com/blog/tag/neuroscience

This ITWeb article used some of my Artificial Intelligence research:

http://www.dionforster.com/blog/2010/4/14/sci-fi-meets-society-my-artificial-intelligence-research-use.html

Wednesday, Apr 14, 2010

Sci-Fi meets society - my Artificial Intelligence research used...

I have mentioned elsewhere on my blog that I practice a simple discipline of NOT 'googling' myself (sometimes it is called 'vanity searching' - quite an accurate description, I think). It is a deliberate choice not to search for my name on the internet. It is quite liberating not to worry about what others are saying, or not saying, about me!

However, even though I have chosen this, every now and then someone sends me a note about something I've written, or a comment that someone has made about my research or writing. I'm ashamed to admit that it feels quite good (what the Afrikaans would call 'lekker').  

This was the case with this particular entry. A friend sent me a link to point out that my research on 'strong Artificial Intelligence' was quoted in an ITWeb article! Very cool!

It was quite exciting to read the context in which my ideas were used. The article is entitled 'Sci-fi meets society' and was written by Lezette Engelbrecht. She contacted me some time ago with a few questions, which I was pleased to answer via email (and point her to some of my research and publications in this area). Thanks for using my thoughts, Lezette - I appreciate it!

You can read the full article after the jump.

As artificially intelligent systems and machines progress, their interaction with society has raised issues of ethics and responsibility.

While advances in genetic engineering, nanotechnology and robotics have brought improvements in fields from construction to healthcare, industry players have warned of the future implications of increasingly “intelligent” machines.

Professor Tshilidzi Marwala, executive dean of the Faculty of Engineering and the Built Environment, at the University of Johannesburg, says ethics have to be considered in developing machine intelligence. “When you have autonomous machines that can evolve independent of their creators, who is responsible for their actions?”

In February last year, the Association for the Advancement of Artificial Intelligence (AAAI) held a series of discussions under the theme “long-term AI futures”, and reflected on the societal aspects of increased machine intelligence.

The AAAI is yet to issue a final report, but in an interim release, a subgroup highlighted the ethical and legal complexities involved if autonomous or semi-autonomous systems were one day charged with making high-level decisions, such as in medical therapy or the targeting of weapons.

The group also noted the potential psychological issues accompanying people's interaction with robotic systems that increasingly look and act like humans.

Just six months after the AAAI meeting, scientists at the Laboratory of Intelligent Systems at the École Polytechnique Fédérale de Lausanne, in Switzerland, conducted an experiment in which robots learned to “lie” to each other, in an attempt to hoard a valuable resource.

The robots were programmed to seek out a beneficial resource and avoid a harmful one, and alert one another via light signals once they had found the good item. But they soon “evolved” to keep their lights off when they found the good resource – in direct contradiction of their original instruction.

According to AI researcher Dion Forster, the problem, as suggested by Ray Kurzweil, is that when people design self-aggregating machines, such systems could produce stronger, more intricate and effective machines.

“When this is linked to evolution, humans may no longer be the strongest and most sentient beings. For example, we already know machines are generally better at mathematics than humans are, so we have evolved to rely on machines to do complex calculation for us.

“What will happen when other functions of human activity, such as knowledge or wisdom, are superseded in the same manner?” (read the rest of the article here...)
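The evolutionary dynamic described in the article can be illustrated with a toy simulation. This is entirely my own sketch, not the EPFL code: agents carry a single 'signal' gene, honest signallers attract a crowd that dilutes their own payoff, and fitness-proportional selection plus mutation then drives the signalling trait down, just as the robots' light signals faded.

```python
import random

def evolve_signalling(pop_size=200, generations=60, mutation_rate=0.02, seed=1):
    """Toy model of signalling collapse under selection (illustrative only)."""
    rng = random.Random(seed)
    pop = [True] * pop_size  # everyone starts as an honest signaller

    def fitness(signals, frac_signallers):
        base = 1.0
        # Signallers broadcast the find, so crowding eats into their payoff;
        # silent agents keep the resource to themselves.
        return base - 0.5 * frac_signallers if signals else base

    history = []
    for _ in range(generations):
        frac = sum(pop) / pop_size
        history.append(frac)
        weights = [fitness(gene, frac) for gene in pop]
        # Fitness-proportional reproduction, then rare mutations flip the gene.
        parents = rng.choices(pop, weights=weights, k=pop_size)
        pop = [(not g) if rng.random() < mutation_rate else g for g in parents]
    return history

hist = evolve_signalling()
print(f"signallers: gen 0 = {hist[0]:.2f}, final = {hist[-1]:.2f}")
```

No deception is programmed in anywhere; the "lying" simply emerges because silence pays better, which is the unsettling point of the original experiment.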

Friday, Mar 19, 2010

Learning to listen to the earth - is it a good thing?

This amazing video shows how we are learning to listen to the earth! Did you know there are more ‘things’ on the internet than people!?
There are sensors under the roads, in shops, in our offices, homes, schools and our phones - they’re all reporting things to us and about us.
The key is DIKW, moving from:
  • captured data to
  • usable information to
  • knowledge to
  • wisdom
I suppose that as long as we gather data in order to glean information, which we can use as knowledge so that we become wiser, it is a good thing.
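That DIKW chain can be sketched in a few lines of code. The incident data and the threshold here are entirely made up for illustration:

```python
from collections import Counter

# Data: raw incident reports from (hypothetical) city sensors.
data = ["Main St", "Main St", "Oak Ave", "Main St", "Elm Rd", "Oak Ave", "Main St"]

# Information: the same data structured into counts per location.
information = Counter(data)

# Knowledge: locations whose incident count crosses a threshold are hotspots.
hotspots = [place for place, count in information.items() if count >= 3]

# Wisdom: a judgement that applies the knowledge to guide action.
def advise(place):
    return f"Avoid {place} after dark" if place in hotspots else f"{place} looks fine"

print(advise("Main St"))
print(advise("Elm Rd"))
```

Each step discards detail but gains usefulness, which is the whole point of the pyramid.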
 
For example, I don't like being filmed by hundreds of closed-circuit TV cameras wherever I go - but I understand why it is necessary to capture that data: the information can give the police knowledge about criminals and criminal hotspots, so that they can advise people to be wise about where they go and what they do when they go out.
I've blogged about some of this here (see the post on wolframalpha which I think is very interesting in this regard), and here for a neuroscientific perspective, and this one which deals with strong artificial intelligence.
Wednesday, Dec 16, 2009

MIT to revisit Artificial Intelligence research

This story is from Boing Boing.

MIT has launched a new $5 million, five-year project to build intelligent machines. To do it, the scientists are revisiting the fifty-year history of the Artificial Intelligence field, including the shortfalls that led to the stigmas surrounding it, to find the threads that are still worth exploring. The star-studded roster of researchers includes AI pioneer Marvin Minsky, synthetic neurobiologist Ed Boyden, Neil "Things That Think" Gershenfeld, and David Dalrymple, who started grad school at MIT when he was just 14 years old. Minsky is even proposing a new Turing test for machine intelligence: can the computer read, understand, and explain a children's book?


For more details please follow this link. And for some posts I've written about Artificial Intelligence, neuroscience, and consciousness, please follow the links listed on the next page.