
Monday, 1 March 2010

ITWeb: Sci-fi meets society

And the second article by Lezette (ITWeb):

Sci-fi meets society

As artificially intelligent systems and machines progress, their interaction with society raises issues of ethics and responsibility.

While advances in genetic engineering, nanotechnology and robotics have brought improvements in fields from construction to healthcare, industry players have warned of the future implications of increasingly “intelligent” machines.

Professor Tshilidzi Marwala, executive dean of the Faculty of Engineering and the Built Environment at the University of Johannesburg, says ethics have to be considered in developing machine intelligence. “When you have autonomous machines that can evolve independent of their creators, who is responsible for their actions?”

In February last year, the Association for the Advancement of Artificial Intelligence (AAAI) held a series of discussions under the theme “long-term AI futures”, and reflected on the societal aspects of increased machine intelligence.

The AAAI is yet to issue a final report, but in an interim release, a subgroup highlighted the ethical and legal complexities involved if autonomous or semi-autonomous systems were one day charged with making high-level decisions, such as in medical therapy or the targeting of weapons.

The group also noted the potential psychological issues accompanying people's interaction with robotic systems that increasingly look and act like humans.

Just six months after the AAAI meeting, scientists at the Laboratory of Intelligent Systems at the École Polytechnique Fédérale de Lausanne, in Switzerland, conducted an experiment in which robots learned to “lie” to each other in an attempt to hoard a valuable resource.

The robots were programmed to seek out a beneficial resource and avoid a harmful one, and alert one another via light signals once they had found the good item. But they soon “evolved” to keep their lights off when they found the good resource – in direct contradiction of their original instruction.
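
To make the selection pressure concrete, here's a minimal sketch in Python – emphatically not the EPFL team's actual code – of how “lying” can evolve: each simulated robot carries a single gene for how readily it signals a find, signalling attracts rivals who share the payoff, and selection plus mutation does the rest.

```python
import random

# Toy abstraction of the experiment, not the EPFL code: each robot has one
# "gene" -- its probability of lighting up on finding the good resource.
# Signalling draws in rivals who share the payoff, so selection quietly
# favours robots that keep their lights off.

POP, GENERATIONS, FOOD_VALUE = 50, 200, 10.0

def fitness(p_signal, population):
    # Expected payoff: signalling splits the food among the rivals it
    # attracts; staying dark keeps the whole reward.
    expected_rivals = 1 + sum(population) / 2
    return p_signal * (FOOD_VALUE / expected_rivals) + (1 - p_signal) * FOOD_VALUE

pop = [random.random() for _ in range(POP)]          # honest-ish to start
for _ in range(GENERATIONS):
    ranked = sorted(pop, key=lambda p: fitness(p, pop), reverse=True)
    parents = ranked[: POP // 2]                     # truncation selection
    pop = [min(1.0, max(0.0, random.choice(parents) + random.gauss(0, 0.05)))
           for _ in range(POP)]

print(f"mean signalling probability: {sum(pop) / len(pop):.2f}")  # drifts to ~0
```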

According to AI researcher Dion Forster, the problem, as suggested by Ray Kurzweil, is that once people design self-aggregating machines, such systems could go on to produce stronger, more intricate and more effective machines.

“When this is linked to evolution, humans may no longer be the strongest and most sentient beings. For example, we already know machines are generally better at mathematics than humans are, so we have evolved to rely on machines to do complex calculation for us.

“What will happen when other functions of human activity, such as knowledge or wisdom, are superseded in the same manner?”

Sum of parts

According to Steve Kroon, computer science lecturer at Stellenbosch University, if people ever develop sentient robots – or if non-sentient robots do so themselves – we'll need to decide what rights they should have. “And the lines will be blurred with electronic implants: what are your rights if you were almost killed in an accident, but have been given a second chance with a mechanical leg? A heart? A brain? When do you stop being human and become a robot?”

Healthcare is one area where “intelligent” machines have come to be used extensively, involved in everything from surgery to recovery therapy. Robotic prosthetics aid people's physical functioning, enabling amputees to regain a semblance of their former mobility. The i-Limb bionic hand, for example, uses muscle signals in the remaining portion of the limb to control individual prosthetic fingers.
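
The control loop behind such a hand can be sketched in a few lines. The i-Limb's actual firmware is proprietary, so the thresholds and signal handling below are purely illustrative of the general myoelectric principle: rectify and smooth the muscle signal, then map its level to finger commands.

```python
# Illustrative only -- the i-Limb's real firmware is proprietary. This shows
# the myoelectric idea in caricature: rectify and smooth an EMG trace into
# an activity level, then map that level onto discrete hand commands.

def emg_envelope(samples, window=5):
    """Moving average of rectified EMG samples (a crude envelope detector)."""
    rectified = [abs(s) for s in samples]
    recent = rectified[-window:]
    return sum(recent) / len(recent)

def grip_command(level, close_threshold=0.6, open_threshold=0.2):
    """Map a normalised muscle-activity level to a hand command."""
    if level > close_threshold:
        return "close_fingers"
    if level < open_threshold:
        return "open_fingers"
    return "hold"

trace = [0.7, -0.8, 0.75, 0.9, -0.85]        # pretend sensor readings
print(grip_command(emg_envelope(trace)))      # -> close_fingers
```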

More “behavioural” forms of robotic medical assistance, such as home care robots, are also emerging. GeckoSystems' CareBot acts as a companion to the frail or elderly, “speaking” to them, reminding them to take medication, alerting them to unexpected visitors, and responding to calls for help by notifying designated caregivers.

According to Forster, daily interaction with forms of “intelligent” machines is nothing new. “We are already intertwined with complex technologies (bank cards, cellphones, computers), and all of these simple things are connected to intelligent machines designed to make our lives easier and more productive.

“The question is not ‘should we do this’ – we already do – but ‘how far should we go?’” He adds that this question is most frequently asked when one crosses the line of giving over control to a machine or technology that could cause harm.

While certain technologies, such as pacemakers, artificial heart valves and steel pins inserted to support limbs, are generally beneficial, explains Forster, their progression could bring complexities.

“These are all technologies that make life better, and are designed to respond to environmental changes in order to aid the person in question. But, if my legs, arms, eyes, ears and memory are replaced by technologies, the question is when do I cross the line and stop being human and become a machine?”

Sun Microsystems co-founder William Joy is one of the industry's more outspoken critics of people's increasing dependence on technology, warning in a 2000 Wired article: “The 21st-century technologies – genetics, nanotechnology, and robotics – are so powerful that they can spawn whole new classes of accidents and abuses.

“Most dangerously, for the first time, these accidents and abuses are widely within the reach of individuals or small groups. They will not require large facilities or rare raw materials. Knowledge alone will enable the use of them.”

Drones to decision-makers

Another area of contention regarding the increasing involvement of “intelligent” machines is in military applications. A recent US mandate requires that a third of its military forces be unmanned – remote-controlled or autonomous – in future. While this could significantly reduce the number of human casualties in fighting, it also raises fears around autonomous machines' power to drop bombs and launch missiles.

Some argue that giving robots the ability to use weapons without human intervention is a dangerous move, while others say they will behave more ethically than their human counterparts. While these realities are still a way off, they have raised concerns over what is considered ethical behaviour.

The AAAI writes in a 2007 issue of AI Magazine that there's a considerable difference between a machine that makes ethical decisions by itself, and one that merely gathers the information needed for such decisions and incorporates it into its general behaviour. “Having all the information and facility in the world won't, by itself, generate ethical behaviour in a machine.”

Clifford Foster, CTO of IBM SA, says ethics is something that has to be tackled across industry, with various people, including the public, collaborating to make sure policies are in place. “You can't abdicate responsibility to machines. There are certain cases that present opportunities for technology to assist, such as telemedicine, but then it may be necessary to limit this to certain categories, with final decisions resting with professionals.”

In addition, as machine capabilities increase from automated mechanical tasks to more high-level, skilled ones, it calls into question their competition with humans in the workforce. “I think we've been staring one of the simplest ethical issues in the face for a few centuries already, and we still haven't reached consensus on it,” says Kroon.

“How do we balance the need of unskilled people to be employed and earn a reasonable living with the benefits of cheaper industrialisation and automation?” He adds the issue will only get more glaring in the next few decades, as the skill level needed to contribute meaningfully beyond what automated systems can do increases.

“Things like search engines have already radically changed how children learn in developed countries. If we simply dumb down education, that would be a pity,” notes Kroon. “We need a generation of people who can utilise the new capabilities of tomorrow's machines, rather than a generation of people who can contribute nothing meaningful to society, since any skills they possess have been usurped by machines.”

ITWeb: AI comes of age

First of two articles by Lezette (ITWeb) on Artificial Intelligence that I contributed to:

AI comes of age

The focus of artificial intelligence (AI) research has undergone a shift – from trying to simulate human thinking, to specific “intelligent” functions, like data mining and statistical learning theory.

Steve Kroon, computer science lecturer at the University of Stellenbosch, says, in the past, people were enthusiastic about machines that could think like people. “Now, many researchers figure the challenges of the present day are things we need 'alternative intelligence' for – skills that humans can make use of, but don't have themselves.”

The best examples of these “alternative intelligence” fields, notes Kroon, are data mining and machine learning – using advanced statistical analysis to find patterns in the vast amounts of data we're confronted with today.

“That's not to say research isn't being done on human-like AI, but even tasks like speech recognition and computer vision are more and more being seen as tasks that will yield to statistical analyses.”
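
A toy example makes the contrast with hand-written rules clear. The snippet below – invented data, not any production system – estimates a decision rule from labelled examples instead of having an expert write it down:

```python
# A minimal taste of the statistical approach Kroon describes: a
# nearest-centroid classifier that summarises each class by the mean of
# its examples, then labels new points by the closest class mean.
# The data and feature names are made up for illustration.

def centroid(points):
    n = len(points)
    return [sum(p[i] for p in points) / n for i in range(len(points[0]))]

def train(examples):
    """Group examples by label and compute one centroid per class."""
    by_label = {}
    for x, label in examples:
        by_label.setdefault(label, []).append(x)
    return {label: centroid(pts) for label, pts in by_label.items()}

def predict(model, x):
    dist = lambda a, b: sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(model, key=lambda label: dist(model[label], x))

# e.g. features: [word_count, exclamation_marks] -> spam or ham
examples = [([120, 0], "ham"), ([90, 1], "ham"),
            ([15, 7], "spam"), ([20, 9], "spam")]
model = train(examples)
print(predict(model, [25, 6]))   # -> spam
```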

In the 1950s, researchers began exploring the idea of artificially intelligent systems, with mathematician Alan Turing establishing some of the characteristics of intelligent machines. Subsequently, work on AI spanned a wide range of fields, but soon developed an emphasis on programming computers.

“In the past, there was a lot of research on rule-based systems and expert systems,” notes Kroon. “But now, we're faced with areas where there's so much data, that even the experts are at a loss to explain.”

The triumph of these methods, according to Kroon, is that they're discovering things the experts aren't aware of. “Bio-informatics is a great example of this; analysing the data leads to hypotheses, which the biochemists and biologists can attempt to verify, so this new knowledge can help in the development of new medication.

“We're seeing the shift from computation as simply a tool for use by the researcher to validate his hypotheses, to computation being used to generate sensible hypotheses for investigation – hypotheses humans would probably never have found by manually looking at the data.”
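
A stripped-down sketch of that shift: instead of testing one hypothesis, rank every measured variable by its correlation with an outcome, and hand the top hits to the domain experts as leads. The “gene” readings below are invented for illustration (statistics.correlation needs Python 3.10+).

```python
# Computation proposing hypotheses rather than testing them: rank made-up
# "gene activity" measurements by correlation with a disease marker, and
# pass the strongest candidates to the biologists for verification.

from statistics import correlation   # Python 3.10+

genes = {
    "GENE_A": [0.1, 0.4, 0.35, 0.8, 0.9],
    "GENE_B": [0.7, 0.6, 0.65, 0.2, 0.1],
    "GENE_C": [0.5, 0.5, 0.4, 0.55, 0.45],
}
disease_marker = [0.2, 0.35, 0.4, 0.75, 0.85]

ranked = sorted(genes,
                key=lambda g: abs(correlation(genes[g], disease_marker)),
                reverse=True)
for g in ranked:
    print(g, round(correlation(genes[g], disease_marker), 2))
# The top-ranked genes become candidate hypotheses for the lab.
```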

Information generation

Artificial intelligence has become so ingrained in people's daily lives that it has become ordinary, says Professor Tshilidzi Marwala, executive dean of the Faculty of Engineering and the Built Environment at the University of Johannesburg.

“Fingerprint recognition is now a common technology. Intelligent word processing systems that guess words about to be typed are now common. Face recognition software is used by security agencies – the situation has shifted to more advanced and realistic applications,” he states.
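
The word-guessing Marwala mentions can be boiled down to a frequency table queried by prefix – real predictive-text systems use far richer language models, and the corpus below is invented:

```python
# Hedged sketch of "guess the word being typed": count word frequencies
# in prior text, then suggest the most frequent words matching the prefix.

from collections import Counter

corpus = "the meeting is moved to monday please confirm the meeting room".split()
freq = Counter(corpus)

def suggest(prefix, k=3):
    """Return up to k known words starting with prefix, most frequent first."""
    matches = [w for w in freq if w.startswith(prefix)]
    return sorted(matches, key=freq.get, reverse=True)[:k]

print(suggest("m"))   # -> ['meeting', 'moved', 'monday']
```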

“We're already seeing automated systems for so many things in our daily lives,” notes Kroon. He adds that people are reading and remembering facts less than they did a generation ago, relying instead on being able to look up information on the Internet, whenever they need it.

“Combine that kind of technology with things like recommender systems and location-aware tools, and soon you'll have a constant stream of information relevant to you, available for your consumption as you need it.”
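
A minimal sketch of the recommender half of that combination – user-based collaborative filtering over invented ratings, nothing like a production system's scale:

```python
# Toy user-based collaborative filtering: weight other users by how
# similarly they rated the items you share, then score the items you
# haven't tried yet. All names and ratings are invented.

ratings = {
    "alice": {"hotel_a": 5, "hotel_b": 1, "cafe_c": 4},
    "bob":   {"hotel_a": 4, "hotel_b": 2, "cafe_c": 5, "bistro_d": 5},
    "carol": {"hotel_b": 5, "bistro_d": 1},
}

def similarity(u, v):
    shared = ratings[u].keys() & ratings[v].keys()
    if not shared:
        return 0.0
    # Inverse of mean absolute rating difference on co-rated items.
    diff = sum(abs(ratings[u][i] - ratings[v][i]) for i in shared) / len(shared)
    return 1.0 / (1.0 + diff)

def recommend(user):
    scores = {}
    for other in ratings:
        if other == user:
            continue
        w = similarity(user, other)
        for item, r in ratings[other].items():
            if item not in ratings[user]:
                scores[item] = scores.get(item, 0.0) + w * r
    return max(scores, key=scores.get) if scores else None

print(recommend("alice"))   # -> bistro_d (via taste-alike bob)
```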

A project exemplifying this trend is the Massachusetts Institute of Technology's (MIT's) SixthSense prototype, a gestural interface that projects digital information onto physical surfaces, and lets users interact with it via hand gestures.

“When I look around and see how many people are now using mobile smartphones instead of the desktop computers of a couple of years back, and this SixthSense technology they've been prototyping at MIT, I get the impression that 'augmented intelligence' is going to be a big thing in coming years,” says Kroon.

Everyday AI

Web search technologies are widely seen as an application of AI, adds Kroon, with Wolfram Alpha being a prominent example. The “knowledge engine” answers user queries directly by computing information from a core database, instead of searching the Web and returning links.

“Its premise was that people want answers to questions, not just a list of links. And I think they're right, but there's a long way to go before this is powerful enough to dethrone the classical search engine approach,” states Kroon.

“Understanding the question being asked, and trying to infer context for that question, are difficult challenges in AI before one can even start to construct an answer to the question,” he adds.
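
The contrast with a link-returning search engine can be sketched in miniature: parse a narrow class of questions, then compute the answer from curated, structured facts. The fact table and question pattern below are invented placeholders, not Wolfram Alpha's actual machinery.

```python
# Toy "knowledge engine": answer by looking up and computing over curated
# structured facts, rather than returning documents. Data is invented.

import re

FACTS = {"south africa": {"capital": "Pretoria", "population_m": 49.3},
         "switzerland":  {"capital": "Bern",     "population_m": 7.8}}

def answer(question):
    q = question.lower()
    m = re.search(r"(capital|population) of ([a-z ]+)", q)
    if not m:
        return "Sorry, I can't parse that."
    attr, country = m.group(1), m.group(2).strip()
    facts = FACTS.get(country)
    if not facts:
        return f"No curated data for {country}."
    if attr == "capital":
        return facts["capital"]
    return f"about {facts['population_m']} million"

print(answer("What is the capital of South Africa?"))   # -> Pretoria
```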

Another development in this direction is the new “social search engine” Aardvark, recently acquired by Google. “Aardvark uses machine-learning techniques to understand social networks, and then provides answers to a user's query by passing it on to people that its system believes are the best to answer the query,” explains Kroon.

“So, in this model, Aardvark's 'AI' is simply responsible for teaming up a person with a question and someone who can give that person a good answer. This sort of system works well when you're looking for more personalised responses, like hotel and restaurant recommendations, as opposed to the impersonal information typically served up by a regular search engine.”
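
In caricature – this is not Aardvark's published algorithm – the routing step might look like scoring candidate answerers by topic overlap with the question:

```python
# Hedged sketch of question routing: forward the query to the person whose
# interest profile overlaps it most. Profiles and topics are invented.

profiles = {
    "thandi": {"hotels", "travel", "cape town"},
    "pieter": {"python", "machine learning"},
    "ayesha": {"restaurants", "travel", "wine"},
}

def route(question_topics):
    def overlap(person):
        return len(profiles[person] & question_topics)
    best = max(profiles, key=overlap)
    return best if overlap(best) else None   # None if nobody matches

print(route({"wine", "restaurants", "stellenbosch"}))   # -> ayesha
```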

Mind over matter

Clifford Foster, CTO at IBM SA, says AI offers significant ways of handling the explosion of data in the world. This stems from using computers to simulate intelligent processes and to understand information in context, notes Foster. “This can be applied in a number of areas, such as a recent system to predict and understand the impact of anti-retrovirals on HIV patients.”

The vast amount of information being generated, and the need to process it in near real-time to prevent problems, is simply too much for humans to compute fast enough, explains Foster. “So, if you give machines the ability to analyse and respond to this data, it fundamentally changes the way we manage and use information.”

He points to applications such as orchestrating traffic lights according to traffic flow at various times of the day, or medical diagnosis for people in remote areas.
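
The traffic-light case reduces to a simple allocation rule. A hedged sketch, assuming queue-length sensors on each approach and fixed cycle bounds – no vendor's actual system:

```python
# Flow-responsive signalling in miniature: split a fixed cycle between
# approaches in proportion to measured queue lengths, within bounds.

MIN_GREEN, MAX_GREEN, CYCLE = 10, 60, 90   # seconds

def green_times(queues):
    """Allocate green time per approach according to queue sizes."""
    total = sum(queues.values()) or 1       # avoid division by zero
    times = {}
    for approach, q in queues.items():
        share = CYCLE * q / total
        times[approach] = max(MIN_GREEN, min(MAX_GREEN, round(share)))
    return times

# e.g. sensor counts at morning peak: heavy north-south, light east-west
print(green_times({"north_south": 42, "east_west": 9}))
# -> {'north_south': 60, 'east_west': 16}
```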

“One of the biggest healthcare challenges in Africa is limited access to medical professionals. But if a person could present their problem to a computer capable of understanding the symptoms, it could search medical data banks for related content, ask additional questions for greater accuracy, and provide an informed diagnosis, which could then be passed on to a professional.”
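
At its simplest, the triage idea Foster describes is symptom matching against a condition table, with ranked candidates passed on to a clinician. The table below is an invented placeholder, not medical knowledge:

```python
# Toy triage: score conditions by the fraction of their symptoms the
# patient reports, and return ranked leads for professional review.
# Conditions and symptoms are illustrative placeholders only.

CONDITIONS = {
    "influenza": {"fever", "cough", "aches"},
    "malaria":   {"fever", "chills", "sweats", "headache"},
    "migraine":  {"headache", "nausea", "light sensitivity"},
}

def triage(reported):
    scored = [(len(reported & s) / len(s), name)
              for name, s in CONDITIONS.items()]
    scored.sort(reverse=True)
    return [name for score, name in scored if score > 0]

print(triage({"fever", "cough", "headache"}))
# -> ['influenza', 'malaria', 'migraine'] -- ranked leads for a clinician
```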

Foster believes investment will continue in areas where AI improves people's lives. “It's become less about trying to replicate the brain and more about complementing the way we connect with the world.”

For example, trying to clone the kind of processing involved in catching a ball – calculating the trajectory of the ball, activating the correct muscles to catch it, and absorbing the impact of the catch – requires a phenomenal amount of computing power, notes Foster, but it's not very useful.

“For a long time, people were confined by the idea that AI must simulate the human brain, but where's the value in that? A program that can aid people in solving problems, whether it be running their data centre, or managing traffic flow, or reducing the mortality rate, is much more valuable.”

Marwala argues that intelligent machines will always be created to perform a particular function, or a handful of them. “An intelligent machine that performs many tasks is as elusive a concept as ‘the theory of everything’, but the adaptation and evolution of a machine performing a specific task is perfectly possible.”

Foster believes the intersection of technology, business and people is where AI research is going, and where it can have the most impact. “Using intelligent systems to get things done quicker, to assist in areas such as healthcare and education, could have a profound impact on society, and change people's lives in ways they never thought of.”