Wednesday 17 March 2010

Smarter Cities at the IDC Africa CIO Summit

I was fortunate enough to present at the Africa CIO Summit again this year and have included my presentation (sans video) in this post.

Last year I discussed IBM's Smarter Planet strategy and how technology could enable this vision. In addition, I discussed the role that technologists and strategists play in changing the way that we work and live.

This year I discussed Smarter Cities, as it is in cities that we find a "system of systems" in which key systems interact in unique ways. I focused on examples from around the globe and how these could be applied in Africa.

Wednesday 10 March 2010

Tackling the corporate “skills” challenge

There are two different, but related, issues that require addressing when tackling the so-called “skills shortage” and “skills challenge”.

The first area is addressing the skill and experience level of people currently within an organisation; and the second area is meeting the capacity or demand for people with a specific skill set and experience level.


Skill and experience


This is the easy part, and consequently most organisations focus the majority of their effort on solving this problem. Organisations improve the skill and experience level of their resources through training interventions and by providing the right project/work experience. It is worth noting that the latter (project/work experience) is far more difficult, as it requires organisations to take a slightly longer-term view of their workforce requirements.

Resources can be up-skilled and cross-skilled, and this is certainly a viable strategy: it allows resources to move into new areas, creating a gap at the lower skill level that is easier to fill. This is akin to succession planning, although it is managed at a group level rather than an individual level.

Capacity

Understanding the capacity problem starts with understanding what the gap is and will be. This gap is the difference between what the organisation anticipates it will need per major skill/profession area (to meet current and future demand) and what it currently has at its disposal. Thereafter it is necessary to apply a workforce strategy to determine what percentage of this gap should be filled by permanent hires versus contractors and partners.
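To make this concrete, here is a minimal sketch of the gap calculation; the skill areas, headcounts and the 70/30 permanent-versus-contractor split are purely hypothetical:

    # Hypothetical headcount-gap calculation per major skill/profession area.
    anticipated = {"Java developer": 120, "Business analyst": 45, "Architect": 20}
    current = {"Java developer": 85, "Business analyst": 40, "Architect": 12}

    PERMANENT_SHARE = 0.7  # assumed workforce strategy: 70% permanent hires, 30% contractors/partners

    for skill, needed in anticipated.items():
        gap = max(0, needed - current.get(skill, 0))
        permanent = round(gap * PERMANENT_SHARE)
        contractor = gap - permanent
        print(f"{skill}: gap of {gap} -> {permanent} permanent, {contractor} contractor/partner")

The same calculation can be re-run as demand forecasts change, which is what makes the longer-term view worthwhile.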

The strategy for closing this gap should include defensive (retention) and recruitment/attraction approaches. From my perspective, the most effective retention strategies are those that focus on career and profession development. This is supplemented by meeting hygiene requirements (salary, benefits) and very importantly providing a “sense of belonging”/community involvement.

The other element of addressing capacity requirements is recruitment (in conjunction with securing contractor and partner agreements for that aspect of the workforce). However, if the hypothesis is true, and there is a shortage of adequately skilled resources in the market, then we need to tackle this differently, as it is not cost-effective to over-pay for candidates - at least not as a general rule.

Transformation

This is where it gets really interesting, as the only option is to actively transform the current and potential workforce. Let's look at both of those elements.

Transforming the current workforce means growing your existing resources to create capacity at the bottom of the career ladder and recruiting/training junior people to fill the gap. I believe that this is the long-term intention of our government as it is through this type of transformation that we will provide the country with the professionals it requires to be globally competitive.

Transforming the potential workforce is far more difficult. Corporates and other professional organisations work closely with tertiary education institutions to ensure that the workforce of tomorrow is adequately trained and employable. However, the tertiary institutions themselves have to work closely with secondary and primary education institutions to ensure that they have an adequate pool of candidates to train for employment by the public and private sector.

This means that the supply chain extends all the way to primary schools, and as such it is important to support and actively participate in education programmes at every level in the countries and regions in which we operate.

Friday 5 March 2010

Event Processing for a Smarter Planet

A Smarter Planet

You've heard it before but it warrants repeating: Every natural and man-made system is becoming instrumented, interconnected and intelligent. We now have the ability to measure, sense and monitor the condition of almost everything. Furthermore, people, systems and objects can communicate and interact with each other in entirely new ways. This means that we can respond to changes quickly and get better results by predicting and optimising for future events.

That's the good news but there are challenges.

Today, more than ever, organisations are under pressure to leverage a wealth of information to make more intelligent choices.

Volume of data: Data volumes are expected to grow tenfold in the next three years. This is driven by the proliferation of end-user devices, real-world sensors, lower-cost technologies and population growth.

Velocity of decision-making: The market demands that businesses optimise decisions, take action based on good information and utilise advanced predictive capabilities - all with speed and efficiency.

Variety of information: With the expansion of information comes large variance in the nature of available data - much of it noisy and error-laden, with no opportunity to cleanse it in a world of real-time decision-making.

Shift in what we analyse: Enterprises need a broader, systems-based approach to the information they examine and optimise. Stream computing and event processing capabilities are enabling the analysis of massive volumes.
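To make the shift concrete, the sketch below analyses each reading as it arrives, keeping only a sliding window rather than the full history; the window size, threshold and sample readings are hypothetical:

    from collections import deque

    # Stream-style analysis: evaluate each record on arrival instead of
    # storing everything for later batch processing.
    window = deque(maxlen=1000)  # bounded memory, regardless of stream volume

    def on_reading(value):
        window.append(value)
        moving_avg = sum(window) / len(window)
        if value > 1.5 * moving_avg:  # flag anomalies in real time
            print(f"anomaly: {value} against moving average {moving_avg:.2f}")

    for v in [10, 11, 9, 10, 42, 10]:  # stand-in for an endless sensor feed
        on_reading(v)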

Event Processing

Event processing provides us with an approach to leverage this wealth of information for competitive advantage and for the betterment of people.

An Event: Any signal indicating that a change of state (of the business, the real world or another domain) has occurred, or is contemplated as having occurred.

Simple Event Processing (SEP): Simple event processing is not a new concept. SEP provides functions to detect and respond reactively to a single source, or homogeneous event type.

Complex Event Processing (CEP): CEP detects and responds reactively to patterns among like or related events, missing events and aggregate events. It supports high volumes of homogeneous events within a predictable time and order.

Business Event Processing (BEP): Extends CEP and provides a graphical, non-programmatic user interface that allows business users to manage event processing logic themselves. It supports high volumes of heterogeneous business events and complex patterns that occur in no particular time or order.
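To illustrate the difference between the simple and complex styles, here is a minimal sketch; the event types, the over-90 threshold and the three-failures-in-60-seconds pattern are my own hypothetical examples, not any product's API:

    import time
    from collections import deque

    # Simple event processing: react to each event in isolation.
    def sep_handler(event):
        if event["type"] == "temperature" and event["value"] > 90:
            print("SEP: single reading over threshold -> raise alert")

    # Complex event processing: react to a pattern across related events,
    # e.g. three failed logins for one user within 60 seconds.
    failures = {}

    def cep_handler(event):
        if event["type"] != "login_failed":
            return
        q = failures.setdefault(event["user"], deque())
        q.append(event["ts"])
        while q and event["ts"] - q[0] > 60:
            q.popleft()  # discard events that fall outside the time window
        if len(q) >= 3:
            print(f"CEP: pattern matched -> lock account {event['user']}")

    sep_handler({"type": "temperature", "value": 95})
    now = time.time()
    for i in range(3):
        cep_handler({"type": "login_failed", "user": "amy", "ts": now + i})

BEP would then put a business-friendly front end on rules like these, so that thresholds and patterns are maintained by business users rather than programmers.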

Event processing is complementary to SOA

SOA is an approach to modelling your business, and hence your IT, as a set of reusable activities or services. Services are then orchestrated or composed into coarser-grained services, processes or applications. SOA communication is typically (or perhaps narrowly understood to be) implemented as a request/response pattern.

Event processing adds an asynchronous pattern to SOA components and services. These patterns are complementary. Services can emit events and consume events. The event processing itself is performed as an extended capability of an ESB.
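A minimal sketch of the two interaction styles side by side; the in-process bus below is a hypothetical stand-in for the delivery an ESB would actually provide:

    # Request/response: the caller blocks and waits for an answer.
    def price_service(order_id):
        return {"order": order_id, "price": 99.0}

    # Asynchronous events: emitters and consumers are decoupled by a bus.
    subscribers = {}

    def subscribe(topic, handler):
        subscribers.setdefault(topic, []).append(handler)

    def emit(topic, event):
        for handler in subscribers.get(topic, []):
            handler(event)  # a real ESB would deliver this asynchronously

    subscribe("order.placed", lambda e: print("billing saw", e))
    subscribe("order.placed", lambda e: print("shipping saw", e))

    print(price_service("A-1"))             # synchronous request/response
    emit("order.placed", {"order": "A-1"})  # fire-and-forget event

Note how the same service could both answer the synchronous price request and emit the "order.placed" event, which is the sense in which the two patterns complement each other.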

Conclusion

Event processing, and especially business event processing, begins to deliver on the "Utopian ideals" of real-time analytics and business-configured processing.

This is big ... and it's going to get a lot bigger.

Monday 1 March 2010

Mind the Gap

Change

Businesses are experiencing rapid disruptive change, driven by such shifts as internet-enabled businesses; collapsing time; globalisation of markets and business configurations; social and economic imperatives; financial uncertainty; and changes in industry boundaries and structure.

This is leading to new business styles such as on-demand, value ecosystems, collaborative networks and the globally integrated enterprise, which in turn place very different demands on operations and IT.

As Wytenburg states, "the greater the degree of complexity in an environment, the more various, dynamic, and unpredictable are those situations". We have moved away from a known single future, even away from a range of possible futures, to a truly ambiguous future. There are too many permutations of possible architectures for such an uncertain future.

Significance

As with all uncertainty, we need to focus on the "architecturally significant" elements - in this context, those elements of an organisation that will enable success in a future where the pace of change is accelerating.

Osterwalder's Business Model Ontology provides a good solution context for considering what is significant. I would add an additional component though that I will call "The Core".

The Core

The Core extends the ontology proposed by Osterwalder by adding an additional Core element associated with Core Capabilities. The Core refers to the differentiating aspect of an organisation that transcends products and services and the traditional definition of “core business”. It is not an idealised construct but the tangible essence of an organisation, including its key people and their vision and values.

In this context the Core combines (a) Vision with (b) People. Initially the people are those who formulated the vision for the organisation. These people lead the organisation and are its solid aspect. The core (of the organisation) can, and probably should, be extended during the life of the organisation.

Key Capabilities

Within this construct, the key capabilities of the Core are those that position an organisation architecturally for an uncertain future:
  • Manage core values and value streams: These sit at the heart of what allows an organisation to adapt while maintaining some consistency in the eyes of its key stakeholders (customers, partners, investors, employees). Core values are the internal beliefs which guide the decision-making and behaviour of the organisation.
  • Shed and Attract: Agility is a key attribute of future business, which has resulted in a marked shift from a 'loyalty/purpose'-based business context to an 'opportunity'-based context. As this trend continues, businesses that have a core competence to shed and attract will be able to maintain their competitive position over those that are less able to do so.
  • Communicate: There is a requirement for richer communication with all stakeholders, from employees to customers. The ability to establish trust through one's actions, and to communicate both that trust and a vision for the future, will differentiate the winners from the losers.
  • Sense and Respond: This is the ability to detect change and respond rapidly to it. Included in this capability is the ability to manage an uncertain future through scenario planning and management.
  • Manage Information: Managing master data and information is critical to the decision-process and is a core ability, without which organisations will flounder.

The Gap

It is important for organisations to "mind the gap" when they shed components of their businesses that are better delivered by a supplier or through the network. This gap lies between the formal contracts that bind components together, and it is often covered by the good nature, professionalism or natural instincts of people within a business.

As business components become more logically and physically separated, it will become increasingly important to understand whether the sum of the contracts equals the whole, and whether the whole still delivers what people are accustomed to. Managing that difference will be crucial.

ITWeb: Sci-fi meets society

And the second article by Lezette (ITWeb):

Sci-fi meets society

As artificially intelligent systems and machines progress, their interaction with society has raised issues of ethics and responsibility.

While advances in genetic engineering, nanotechnology and robotics have brought improvements in fields from construction to healthcare, industry players have warned of the future implications of increasingly “intelligent” machines.

Professor Tshilidzi Marwala, executive dean of the Faculty of Engineering and the Built Environment, at the University of Johannesburg, says ethics have to be considered in developing machine intelligence. “When you have autonomous machines that can evolve independent of their creators, who is responsible for their actions?”

In February last year, the Association for the Advancement of Artificial Intelligence (AAAI) held a series of discussions under the theme “long-term AI futures”, and reflected on the societal aspects of increased machine intelligence.

The AAAI is yet to issue a final report, but in an interim release, a subgroup highlighted the ethical and legal complexities involved if autonomous or semi-autonomous systems were one day charged with making high-level decisions, such as in medical therapy or the targeting of weapons.

The group also noted the potential psychological issues accompanying people's interaction with robotic systems that increasingly look and act like humans.

Just six months after the AAAI meeting, scientists at the Laboratory of Intelligent Systems, at the École Polytechnique Fédérale de Lausanne, Switzerland, conducted an experiment in which robots learned to “lie” to each other in an attempt to hoard a valuable resource.

The robots were programmed to seek out a beneficial resource and avoid a harmful one, and alert one another via light signals once they had found the good item. But they soon “evolved” to keep their lights off when they found the good resource – in direct contradiction of their original instruction.

According to AI researcher Dion Forster, the problem, as suggested by Ray Kurzweil, is that when people design self-aggregating machines, such systems could produce stronger, more intricate and effective machines.

“When this is linked to evolution, humans may no longer be the strongest and most sentient beings. For example, we already know machines are generally better at mathematics than humans are, so we have evolved to rely on machines to do complex calculation for us.

“What will happen when other functions of human activity, such as knowledge or wisdom, are superseded in the same manner?”

Sum of parts

According to Steve Kroon, computer science lecturer at Stellenbosch University, if people, or other non-sentient robots, ever develop sentient robots, we'll need to decide what rights they should have. “And the lines will be blurred with electronic implants: what are your rights if you were almost killed in an accident, but have been given a second chance with a mechanical leg? A heart? A brain? When do you stop being human and become a robot?”

Healthcare is one area where “intelligent” machines have come to be used extensively, involved in everything from surgery to recovery therapy. Robotic prosthetics aid people's physical functioning, enabling amputees to regain a semblance of their former mobility. The i-Limb bionic hand, for example, uses muscle signals in the remaining portion of the limb to control individual prosthetic fingers.

More “behavioural” forms of robotic medical assistance, such as home care robots, are also emerging. Gecko Systems' CareBot acts as a companion to the frail or elderly, “speaking” to them, reminding them to take medication, alerting patients about unexpected visitors, and responding to calls for help, by notifying designated caregivers.

According to Forster, daily interaction with forms of “intelligent” machines is nothing new. “We are already intertwined with complex technologies (bank cards, cellphones, computers), and all of these simple things are connected to intelligent machines designed to make our lives easier and more productive.

“The question is not 'should' we do this, we already do, but how far should we go?” He adds this question is most frequently asked when one crosses the line of giving over control to a machine or technology that could cause harm.

While the advent of certain technologies, such as pacemakers, artificial heart valves, or steel pins inserted to support limbs, is generally beneficial, explains Forster, their progression could bring complexities.

“These are all technologies that make life better, and are designed to respond to environmental changes in order to aid the person in question. But, if my legs, arms, eyes, ears and memory are replaced by technologies, the question is when do I cross the line and stop being human and become a machine?”

Sun Microsystems founder William Joy is one of the industry's more outspoken critics of people's increasing dependence on technology, and warns in a 2000 Wired article: “The 21st-century technologies – genetics, nanotechnology, and robotics – are so powerful that they can spawn whole new classes of accidents and abuses.

“Most dangerously, for the first time, these accidents and abuses are widely within the reach of individuals or small groups. They will not require large facilities or rare raw materials. Knowledge alone will enable the use of them.”

Drones to decision-makers

Another area of contention regarding the increasing involvement of “intelligent” machines is in military applications. A recent US mandate requires that a third of its military forces be unmanned – remote controlled or autonomous – in future. While this could significantly reduce the number of human casualties in fighting, it also raises fears around autonomous machines' power to drop bombs and launch missiles.

Some argue that giving robots the ability to use weapons without human intervention is a dangerous move, while others say they will behave more ethically than their human counterparts. While these realities are still a way off, they have raised concerns over what is considered ethical behaviour.

The AAAI writes in a 2007 issue of AI Magazine that there's a considerable difference between a machine making ethical decisions by itself, and merely gathering the information needed to make the decision, and incorporating it into its general behaviour. “Having all the information and facility in the world won't, by itself, generate ethical behaviour in a machine.”

Clifford Foster, CTO of IBM SA, says ethics is something that has to be tackled across industry, with various people, including the public, collaborating to make sure policies are in place. “You can't abdicate responsibility to machines. There are certain cases that present opportunities for technology to assist, such as telemedicine, but then it may be necessary to limit this to certain categories, with final decisions resting with professionals.”

In addition, as machine capabilities increase from automated mechanical tasks to more high-level, skilled ones, it calls into question their competition with humans in the workforce. “I think we've been staring one of the simplest ethical issues in the face for a few centuries already, and we still haven't reached consensus on it,” says Kroon.

“How do we balance the need of unskilled people to be employed and earn a reasonable living with the benefits of cheaper industrialisation and automation?” He adds the issue will only get more glaring in the next few decades, as the skill level needed to contribute meaningfully beyond what automated systems can do increases.

“Things like search engines have already radically changed how children learn in developed countries. If we simply dumb down education, that would be a pity,” notes Kroon. “We need a generation of people who can utilise the new capabilities of tomorrow's machines, rather than a generation of people who can contribute nothing meaningful to society, since any skills they possess have been usurped by machines.”

ITWeb: AI comes of age

First of two articles by Lezette (ITWeb) on Artificial Intelligence that I contributed to:

AI comes of age

The focus of artificial intelligence (AI) research has undergone a shift – from trying to simulate human thinking, to specific “intelligent” functions, like data mining and statistical learning theory.

Steve Kroon, computer science lecturer at the University of Stellenbosch, says, in the past, people were enthusiastic about machines that could think like people. “Now, many researchers figure the challenges of the present day are things we need 'alternative intelligence' for – skills that humans can make use of, but don't have themselves.”

The best examples of these “alternative intelligence” fields, notes Kroon, are data mining and machine learning – using advanced statistical analysis to find patterns in the vast amounts of data we're confronted with today.

“That's not to say research isn't being done on human-like AI, but even tasks like speech recognition and computer vision are more and more being seen as tasks that will yield to statistical analyses.”

In the 1950s, researchers began exploring the idea of artificially intelligent systems, with mathematician Alan Turing establishing some of the characteristics of intelligent machines. Subsequently, work on AI spanned a wide range of fields, but soon developed an emphasis on programming computers.

“In the past, there was a lot of research on rule-based systems and expert systems,” notes Kroon. “But now, we're faced with areas where there's so much data, that even the experts are at a loss to explain.”

The triumph of these methods, according to Kroon, is that they're discovering things the experts aren't aware of. “Bio-informatics is a great example of this; analysing the data leads to hypotheses, which the biochemists and biologists can attempt to verify, so this new knowledge can help in the development of new medication.

“We're seeing the shift from computation as simply a tool for use by the researcher to validate his hypotheses, to computation being used to generate sensible hypotheses for investigation – hypotheses humans would probably never have found by manually looking at the data.”

Information generation

Artificial intelligence has become so ingrained in people's daily lives that it has become ordinary, says professor Tshilidzi Marwala, executive dean of the Faculty of Engineering and the Built Environment, at the University of Johannesburg.

“Fingerprint recognition is now a common technology. Intelligent word processing systems that guess words about to be typed are now common. Face recognition software is used by security agencies – the situation has shifted to more advanced and realistic applications,” he states.

“We're already seeing automated systems for so many things in our daily lives,” notes Kroon. He adds that people are reading and remembering facts less than they did a generation ago, relying instead on being able to look up information on the Internet, whenever they need it.

“Combine that kind of technology with things like recommender systems and location-aware tools, and soon you'll have a constant stream of information relevant to you, available for your consumption as you need it.”

A project exemplifying this trend is the Massachusetts Institute of Technology's (MIT's) SixthSense prototype, a gestural interface that projects digital information onto physical surfaces, and lets users interact with it via hand gestures.

“When I look around and see how many people are now using mobile smartphones instead of the desktop computers of a couple of years back, and this SixthSense technology they've been prototyping at MIT, I get the impression that 'augmented intelligence' is going to be a big thing in coming years,” says Kroon.

Everyday AI

Web search technologies are widely seen as an application of AI, he adds, with Wolfram Alpha being a prominent example. The “knowledge engine” answers user queries directly by computing information from a core database, instead of searching the Web and returning links.

“Its premise was that people want answers to questions, not just a list of links. And I think they're right, but there's a long way to go before this is powerful enough to dethrone the classical search engine approach,” states Kroon.

“Understanding the question being asked, and trying to infer context for that question, are difficult challenges in AI before one can even start to construct an answer to the question,” he adds.

Another development in this direction is the new “social search engine” Aardvark, recently acquired by Google. “Aardvark uses machine-learning techniques to understand social networks, and then provides answers to a user's query by passing it on to people that its system believes are the best to answer the query,” explains Kroon.

“So, in this model, Aardvark's 'AI' is simply responsible for teaming up a person with a question and someone who can give that person a good answer. This sort of system works well when you're looking for more personalised responses, like hotel and restaurant recommendations, as opposed to the impersonal information typically served up by a regular search engine.”

Mind over matter

Clifford Foster, CTO at IBM SA, says AI offers significant ways of handling the explosion of data in the world. This follows from the use of computers to simulate intelligent processes, and understanding information in context, notes Foster. “This can be applied in a number of areas, such as a recent system to predict and understand the impact of anti-retrovirals on HIV patients.”

The vast amount of information being generated, and the need to process that information in near real-time, to prevent problems from happening, is simply too much for humans to compute fast enough, explains Foster. “So, if you give machines the ability to analyse and respond to this data, it fundamentally changes the way we manage and use information.”

He points to applications, such as orchestrating traffic lights according to traffic flow at various times of the day, or medical diagnoses for people in remote areas.

“One of the biggest healthcare challenges in Africa is limited access to medical professionals. But if a person could present their problem to a computer capable of understanding the symptoms, it could search medical data banks for related content, ask additional questions for greater accuracy, and provide an informed diagnosis, which could then be passed on to a professional.”

Foster believes investment will continue in areas where AI improves people's lives. “It's become less about trying to replicate the brain and more about complementing the way we connect with the world.”

For example, trying to clone the kind of processing involved in catching a ball, with the need to calculate the trajectory of the ball, activating the correct muscles to catch it, and absorbing the impact of the catch, requires a phenomenal amount of computing power, notes Foster, but it's not very useful.

“For a long time, people were confined by the idea that AI must simulate the human brain, but where's the value in that? A program that can aid people in solving problems, whether it be running their data centre, or managing traffic flow, or reducing the mortality rate, is much more valuable.”

Marwala argues that intelligent machines will always be created to perform a particular or handful of functions. “An intelligent machine that performs many tasks is as elusive a concept as 'the theory of everything', but the adaptation and evolution of a machine performing a specific task is perfectly possible.”

Foster believes the intersection of technology, business and people is where AI research is going, and where it can have the most impact. “Using intelligent systems to get things done quicker in areas such as healthcare and education could have a profound impact on society, and change people's lives in ways they never thought of.”