Welcome to the future!
by Nelson S. Lima (Science Writer)

How to Make a Mind


The mammalian brain has a distinct aptitude not found in any other class of animal. We are capable of hierarchical thinking, of understanding a structure composed of diverse elements arranged in a pattern, representing that arrangement with a symbol, and then using that symbol as an element in a yet more elaborate configuration.
This capability takes place in a brain structure called the neocortex, which in humans has achieved a threshold of sophistication and capacity such that we are able to call these patterns ideas. We are capable of building ideas that are ever more complex. We call this vast array of recursively linked ideas knowledge. Only Homo sapiens have a knowledge base that itself evolves, grows exponentially, and is passed down from one generation to another.
We are now in a position to speed up the learning process by a factor of thousands or millions once again by migrating from biological to nonbiological intelligence. Once a digital neocortex learns a skill, it can transfer that know-how in minutes or even seconds. Ultimately we will create an artificial neocortex that has the full range and flexibility of its human counterpart.
Consider the benefits. Electronic circuits are millions of times faster than our biological circuits. At first we will have to devote all of this speed increase to compensating for the relative lack of parallelism in our computers. Parallelism is what gives our brains the ability to do so many different types of operations—walking, talking, reasoning—all at once, and perform these tasks so seamlessly that we live our lives blissfully unaware that they are occurring at all. The digital neocortex will be much faster than the biological variety and will only continue to increase in speed.
When we augment our own neocortex with a synthetic version, we won’t have to worry about how much additional neocortex can physically fit into our bodies and brains, as most of it will be in the cloud, like most of the computing we use today. We have about 300 million pattern recognizers in our biological neocortex. That’s as much as could be squeezed into our skulls even with the evolutionary innovation of a large forehead and with the neocortex taking about 80% of the available space. As soon as we start thinking in the cloud, there will be no natural limits—we will be able to use billions or trillions of pattern recognizers, basically whatever we need, and whatever the law of accelerating returns can provide at each point in time.
In order for a digital neocortex to learn a new skill, it will still require many iterations of education, just as a biological neocortex does. Once a single digital neocortex somewhere and at some time learns something, however, it can share that knowledge with every other digital neocortex without delay. We can each have our own private neocortex extenders in the cloud, just as we have our own private stores of personal data today.
Last but not least, we will be able to back up the digital portion of our intelligence. It is frightening to contemplate that none of the information contained in our neocortex is backed up today. There is, of course, one way in which we do back up some of the information in our brains: by writing it down. The ability to transfer at least some of our thinking to a medium that can outlast our biological bodies was a huge step forward, but a great deal of data in our brains continues to remain vulnerable.

The Next Chapter in Artificial Intelligence

Artificial intelligence is all around us. The simple act of connecting with someone via a text message, e-mail, or cell-phone call uses intelligent algorithms to route the information. Almost every product we touch is originally designed in a collaboration between human and artificial intelligence and then built in automated factories. If all the AI systems decided to go on strike tomorrow, our civilization would be crippled: We couldn’t get money from our bank, and indeed, our money would disappear; communication, transportation, and manufacturing would all grind to a halt. Fortunately, our intelligent machines are not yet intelligent enough to organize such a conspiracy.
What is new in AI today is the viscerally impressive nature of publicly available examples. For example, consider Google’s self-driving cars, which as of this writing have gone over 200,000 miles in cities and towns. This technology will lead to significantly fewer crashes and increased capacity of roads, alleviate the requirement of humans to perform the chore of driving, and bring many other benefits.
Driverless cars are actually already legal to operate on public roads in Nevada with some restrictions, although widespread usage by the public throughout the world is not expected until late in this decade. Technology that intelligently watches the road and warns the driver of impending dangers is already being installed in cars. One such technology is based in part on the successful model of visual processing in the brain created by MIT’s Tomaso Poggio. Called MobilEye, it was developed by Amnon Shashua, a former postdoctoral student of Poggio’s. It is capable of alerting the driver to such dangers as an impending collision or a child running in front of the car and has recently been installed in cars by such manufacturers as Volvo and BMW.
I will focus now on language technologies for several reasons: Not surprisingly, the hierarchical nature of language closely mirrors the hierarchical nature of our thinking. Spoken language was our first technology, with written language as the second. My own work in artificial intelligence has been heavily focused on language. Finally, mastering language is a powerfully leveraged capability. Watson, the IBM computer that beat two former Jeopardy! champions in 2011, has already read hundreds of millions of pages on the Web and mastered the knowledge contained in these documents. Ultimately, machines will be able to master all of the knowledge on the Web—which is essentially all of the knowledge of our human–machine civilization.
One does not need to be an AI expert to be moved by the performance of Watson on Jeopardy! Although I have a reasonable understanding of the methodology used in a number of its key subsystems, that does not diminish my emotional reaction to watching it—him?—perform. Even a perfect understanding of how all of its component systems work would not help you to predict how Watson would actually react to a given situation. It contains hundreds of interacting subsystems, and each of these is considering millions of competing hypotheses at the same time, so predicting the outcome is impossible. Doing a thorough analysis—after the fact—of Watson’s deliberations for a single three-second query would take a human centuries.
One limitation of the Jeopardy! game is that the answers are generally brief: It does not, for example, pose questions of the sort that ask contestants to name the five primary themes of A Tale of Two Cities. To the extent that it can find documents that do discuss the themes of this novel, a suitably modified version of Watson should be able to respond to this. Coming up with such themes on its own from just reading the book, and not essentially copying the thoughts (even without the words) of other thinkers, is another matter. Doing so would constitute a higher-level task than Watson is capable of today.
It is noteworthy that, although Watson’s language skills are actually somewhat below those of an educated human, it was able to defeat the best two Jeopardy! players in the world. It could accomplish this because it is able to combine its language ability and knowledge understanding with the perfect recall and highly accurate memories that machines possess. That is why we have already largely assigned our personal, social, and historical memories to them.
Wolfram|Alpha is one important system that demonstrates the strength of computing applied to organized knowledge. Wolfram|Alpha is an answer engine (as opposed to a search engine) developed by British mathematician and scientist Stephen Wolfram and his colleagues at Wolfram Research. For example, if you ask Wolfram|Alpha, “How many primes are there under a million?” it will respond with “78,498.” It did not look up the answer; it computed it, and following the answer it provides the equations it used. If you attempted to get that answer using a conventional search engine, it would direct you to links where you could find the algorithms required. You would then have to plug those formulas into a system such as Mathematica, also developed by Wolfram, but this would obviously require a lot more work (and understanding) than simply asking Alpha.
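That particular figure is easy to check. As a rough, hypothetical stand-in for whatever method Alpha actually uses (Wolfram Research has not published its algorithms), a few lines of Python with a sieve of Eratosthenes reproduce it:

    # Count the primes below one million with a sieve of Eratosthenes.
    # An illustrative stand-in only; Wolfram|Alpha's actual method is not public.
    def count_primes_below(n: int) -> int:
        is_prime = [True] * n
        is_prime[0] = is_prime[1] = False      # 0 and 1 are not prime
        for p in range(2, int(n ** 0.5) + 1):
            if is_prime[p]:
                # Cross off every multiple of p, starting at p*p.
                for m in range(p * p, n, p):
                    is_prime[m] = False
        return sum(is_prime)

    print(count_primes_below(1_000_000))       # prints 78498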
Indeed, Alpha consists of 15 million lines of Mathematica code. What Alpha is doing is literally computing the answer from approximately 10 trillion bytes of data that has been carefully curated by the Wolfram Research staff. You can ask a wide range of factual questions, such as, “What country has the highest GDP per person?” (Answer: Monaco, with $212,000 per person in U.S. dollars), or “How old is Stephen Wolfram?” (he was born in 1959; the answer is 52 years, 9 months, 2 days on the day I am writing this). Alpha is used as part of Apple’s Siri; if you ask Siri a factual question, it is handed off to Alpha to handle. Alpha also handles some of the searches posed to Microsoft’s Bing search engine.
Wolfram reported in a recent blog post that Alpha is now providing successful responses 90% of the time. He also reports an exponential decrease in the failure rate, with a half-life of around 18 months. It is an impressive system, and uses handcrafted methods and hand-checked data. It is a testament to why we created computers in the first place. As we discover and compile scientific and mathematical methods, computers are far better than unaided human intelligence in implementing them. Most of the known scientific methods have been encoded in Alpha, along with continually updated data on topics ranging from economics to physics.
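To make the half-life claim concrete, an exponential decline is simple arithmetic to project. A minimal sketch follows; the 10% starting point is implied by the 90% success figure above, and the projected values are illustrations of the formula, not Wolfram's own data:

    # Project a failure rate that halves every 18 months:
    # f(t) = f0 * 0.5 ** (t / 18), with t in months.
    f0 = 0.10                # 10% failures, i.e., 90% successful responses
    half_life = 18.0         # months
    for t in (0, 18, 36, 54):
        print(f"after {t:2d} months: {f0 * 0.5 ** (t / half_life):.2%} failures")
    # after  0 months: 10.00% failures
    # after 18 months: 5.00% failures
    # after 36 months: 2.50% failures
    # after 54 months: 1.25% failures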
In a private conversation I had with him, Wolfram estimated that self-organizing methods such as those used in Watson typically achieve about an 80% accuracy when they are working well. Alpha, he pointed out, is achieving about a 90% accuracy. Of course, there is self-selection in both of these accuracy numbers, in that users (such as myself) have learned what kinds of questions Alpha is good at, and a similar factor applies to the self-organizing methods. Some 80% appears to be a reasonable estimate of how accurate Watson is on Jeopardy! queries, but this was sufficient to defeat the best humans.
It is my view that self-organizing methods such as those I articulate in the pattern-recognition theory of mind, or PRTM, are needed to understand the elaborate and often ambiguous hierarchies we encounter in real-world phenomena, including human language. Ideally, a robustly intelligent system would combine hierarchical intelligence based on the PRTM (which I contend is how the human brain works) with precise codification of scientific knowledge and data. That essentially describes a human with a computer.
We will enhance both poles of intelligence in the years ahead. With regard to our biological intelligence, although our neocortex has significant plasticity, its basic architecture is limited by its physical constraints. Putting additional neocortex into our foreheads was an important evolutionary innovation, but we cannot now easily expand the size of our frontal lobes by a factor of a thousand, or even by 10%. That is, we cannot do so biologically, but that is exactly what we will do technologically.
Our digital brain will also accommodate substantial redundancy of each pattern, especially ones that occur frequently. This allows for robust recognition of common patterns and is also one of the key methods to achieving invariant recognition of different forms of a pattern. We will, however, need rules for how much redundancy to permit, as we don’t want to use up excessive amounts of memory on very common low-level patterns.

Educating Our Nonbiological Brain

A very important consideration is the education of a brain, whether a biological or a software one. A hierarchical pattern-recognition system (digital or biological) will only learn about two—preferably one—hierarchical levels at a time. To bootstrap the system, I would start with previously trained hierarchical networks that have already learned their lessons in recognizing human speech, printed characters, and natural-language structures.
Such a system would be capable of reading natural-language documents but would only be able to master approximately one conceptual level at a time. Previously learned levels would provide a relatively stable basis to learn the next level. The system can read the same documents over and over, gaining new conceptual levels with each subsequent reading, similar to the way people reread and achieve a deeper understanding of texts. Billions of pages of material are available on the Web. Wikipedia itself has about 4 million articles in the English version.
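Kurzweil leaves the mechanics of this multipass reading open. As a loose illustration of what gaining one conceptual level per reading might mean, here is a toy in Python in which the levels (words, then recurring word pairs) are purely hypothetical stand-ins for conceptual levels:

    # Toy multipass learning: one hierarchical level per pass over the text.
    text = "the cat sat on the mat the cat sat"

    # Pass 1: learn level-1 patterns (words) from the character stream.
    words = text.split()
    level1 = set(words)

    # Pass 2: re-read the same text; with words now stable, learn level-2
    # patterns (recurring word pairs) on top of them.
    pairs = list(zip(words, words[1:]))
    level2 = {p for p in pairs if pairs.count(p) > 1}

    print(sorted(level1))   # ['cat', 'mat', 'on', 'sat', 'the']
    print(sorted(level2))   # [('cat', 'sat'), ('the', 'cat')]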
I would also provide a critical-thinking module, which would perform a continual background scan of all of the existing patterns, reviewing their compatibility with the other patterns (ideas) in this software neocortex. We have no such facility in our biological brains, which is why people can hold completely inconsistent thoughts with equanimity. Upon identifying an inconsistent idea, the digital module would begin a search for a resolution, including its own cortical structures as well as all of the vast literature available to it. A resolution might mean determining that one of the inconsistent ideas is simply incorrect (if contraindicated by a preponderance of conflicting data). More constructively, it would find an idea at a higher conceptual level that resolves the apparent contradiction by providing a perspective that explains each idea. The system would add this resolution as a new pattern and link to the ideas that initially triggered the search for the resolution. This critical thinking module would run as a continual background task. It would be very beneficial if human brains did the same thing.
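Kurzweil describes this module only at the conceptual level. A minimal sketch of the bookkeeping it implies, in which the triple representation, the toy contradiction test, and the example patterns are entirely hypothetical:

    # Toy "critical thinking" background scan over a software neocortex.
    from itertools import combinations

    # Patterns as hypothetical (subject, attribute, value) triples.
    patterns = [
        ("whales", "class", "mammal"),
        ("whales", "class", "fish"),     # inconsistent with the line above
        ("whales", "habitat", "ocean"),
    ]

    def contradicts(a, b):
        # Conflict: same subject and attribute, but different values.
        return a[0] == b[0] and a[1] == b[1] and a[2] != b[2]

    # Background task: flag each conflicting pair for a resolution search.
    for a, b in combinations(patterns, 2):
        if contradicts(a, b):
            print("resolve:", a, "vs", b)
    # resolve: ('whales', 'class', 'mammal') vs ('whales', 'class', 'fish')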
I would also provide a module that identifies open questions in every discipline. As another continual background task, it would search for solutions to them in other disparate areas of knowledge. The knowledge in the neocortex consists of deeply nested patterns of patterns and is therefore entirely metaphorical. We can use one pattern to provide a solution or insight in an apparently disconnected field.
As an example, molecules in a gas move randomly with no apparent sense of direction. Despite this, virtually every molecule in a gas in a beaker, given sufficient time, will leave the beaker. This provides a perspective on an important question concerning the evolution of intelligence. Like molecules in a gas, evolutionary changes also move every which way with no apparent direction. Yet, we nonetheless see a movement toward greater complexity and greater intelligence, indeed to evolution’s supreme achievement of evolving a neocortex capable of hierarchical thinking. So we are able to gain an insight into how an apparently purposeless and directionless process can achieve an apparently purposeful result in one field (biological evolution) by looking at another field (thermodynamics).
We should provide a means of stepping through multiple lists simultaneously to provide the equivalent of structured thought. A list might be the statement of the constraints that a solution to a problem must satisfy. Each step can generate a recursive search through the existing hierarchy of ideas or a search through available literature. The human brain appears to be only able to handle four simultaneous lists at a time (without the aid of tools such as computers), but there is no reason for an artificial neocortex to have such a limitation.
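As a sketch of what such simultaneous list-stepping could look like in software (the lists and constraints below are invented for illustration), note that nothing caps the number of lists at four:

    # Toy structured thought: check each candidate against several
    # constraint lists at once. All constraints are invented examples.
    constraint_lists = [
        [lambda n: n % 2 == 0],      # list 1: must be even
        [lambda n: n % 3 == 0],      # list 2: divisible by 3
        [lambda n: n > 20],          # list 3: large enough
        [lambda n: n < 100],         # list 4: small enough
        [lambda n: n % 5 == 0],      # list 5: no four-list limit here
    ]

    def satisfies_all(candidate, lists):
        # Step through every list simultaneously for this candidate.
        return all(c(candidate) for constraints in lists for c in constraints)

    print([n for n in range(200) if satisfies_all(n, constraint_lists)])
    # [30, 60, 90]: multiples of 30 between 20 and 100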
We will also want to enhance our artificial brains with the kind of intelligence that computers have always excelled in, which is the ability to master vast databases accurately and implement known algorithms quickly and efficiently. Wolfram|Alpha uniquely combines a great many known scientific methods and applies them to carefully collected data. This type of system is also going to continue to improve, given Stephen Wolfram’s observation of an exponential decline in error rates.
Finally, our new brain needs a purpose. A purpose is expressed as a series of goals. In the case of our biological brains, our goals are established by the pleasure and fear centers that we have inherited from the old brain. These primitive drives were initially set by biological evolution to foster the survival of species, but the neocortex has enabled us to sublimate them. Watson’s goal was to respond to Jeopardy! queries. Another simply stated goal could be to pass the Turing test. To do so, a digital brain would need a human narrative of its own fictional story so that it can pretend to be a biological human. It would also have to dumb itself down considerably, for any system that displayed the knowledge of Watson, for instance, would be quickly unmasked as nonbiological.
More interestingly, we could give our new brain a more ambitious goal, such as contributing to a better world. A goal along these lines, of course, raises a lot of questions: Better for whom? Better in what way? For biological humans? For all conscious beings? If that is the case, who or what is conscious?
As nonbiological brains become as capable as biological ones of effecting changes in the world—indeed, ultimately far more capable than unenhanced biological ones—we will need to consider their moral education. A good place to start would be with one old idea from our religious traditions: the golden rule.

About the Author

Ray Kurzweil is an inventor, writer, and futurist. Among his honors are the MIT-Lemelson Prize, the National Medal of Technology, and, in 2002, induction into the U.S. Patent Office’s National Inventors Hall of Fame.
This article was excerpted from his most recent book, How to Create a Mind (Viking, 2012).
From How to Create a Mind by Ray Kurzweil. Copyright © 2012, Ray Kurzweil. Reprinted by arrangement with Viking, a member of Penguin Group (USA) Inc.

Welcome to the monoculture

Here's the local supermarket in a little town, way off the beaten path. And there, right next to the cash register, are Lindt chocolate bars--from Switzerland.
Here's the local radio station, thousands of miles from the epicenters of music culture. And the next song--it's the one that kids in every country in the world are watching right now on YouTube.
Monoculture doesn't always mean the status quo. They sell more salsa than ketchup now. It doesn't mean only the established brands win--you can find Kind bars and Teslas in more and more places.
What monoculture does mean is that the churn isn't local as much as it's national and worldwide now. It means the stakes are far higher, because the step from niche win to worldwide win is smaller than it's ever been before.
Your blog, your line of clothes, your song, your cause--there's more competition than ever before (by a lot) because you compete with the world now. And there's more upside, too.
Posted by Seth Godin

UK Jobs Market In Robust Health



Further evidence of a recovery in the UK jobs market is provided by the latest Recruitment & Employment Confederation (REC)/KPMG Report on Jobs. The report for December shows the strongest rise in permanent placements since March 2010.

With vacancy growth close to its November high, there are increased concerns about the availability of staff to fill permanent roles. The rate of decline in temporary/contract staff availability remained substantial.

The REC's head of policy Kate Shoesmith says: "Growing confidence means more and more employers are willing to invest in their workforce and take on more people. The real concern now is the mismatch between demand and supply, with recruiters reporting that they can't source suitable candidates for vacancies in a whole range of sectors. Companies want to hire more salespeople, accountants and business development staff to help their enterprises grow, but can't find people with the right skills to take the jobs."

One In Five Plan To Leave Job This Year

According to a new survey, nearly one in five people are planning to change jobs in 2014.

Conducted by the Institute of Leadership & Management (ILM), the survey shows that 19% plan to leave their job while almost a third are considering it. Around one in six people surveyed said that they were planning to leave because they do not feel valued by their current organisation.

Charles Elvin, Chief Executive of the Institute of Leadership & Management, said: "The New Year is always a popular time for workers to look ahead and think about how they can progress. Our findings show that UK employees are beginning to reassess the job market and look into a range of new opportunities, from starting a new job to developing a new business."

Elvin adds: "The survey illustrates just how crucial it is that workers feel valued in the workplace. As many workers like to make a change at this time of year, it is important that organisations adapt to this phase by offering the chance to learn new skills and opportunities to progress wherever possible."

Improved UK Confidence Boosts UK Hiring

A broad-based improvement in activity levels across all regions and specialisms of recruitment firm Hays' UK business was a key factor behind a solid performance by the international recruiter, according to the company's group finance director, Paul Venables.

On the back of the improved market conditions, Venables said Hays' income from permanent placements rose by 17%, its highest growth rate in six years. Hays' temporary business also grew, albeit less strongly at 5%, with overall NFI up 10%. The East of the UK, London, the Midlands, Northern Ireland, the North-West and Scotland each grew by more than 10%, while Ireland delivered net fee growth of 30%.

Venables said that construction and property had been particularly strong as the sector recovered after a number of weak years. He added that in order to support the expected continuing growth in the UK business, Hays had targeted a 5-10% increase in UK headcount in the next six months.

Workshops and Seminars about THE FUTURE

Foresight is an indispensable instrument of modern management for making decisions and improving business success. In an age of hyperchange and hypercompetition, developing foresight and seeing our way to the future is harder than ever. But you can now learn more about foresight techniques (such as trend analysis, scanning, scenario analysis, and more) with our help. We offer a variety of lectures, seminars, workshops and courses for individuals, companies and other organizations that want to understand megatrends and explore the future of economic and business development, as well as global trends of general interest.
Workshops, Lectures & Seminars:
- What is Futuring?
- Visions for the 21st century
- How to design future maps
- Foresight, innovation and strategy
- How to think creatively in turbulent times
- The Exploration of Business Today and Tomorrow
- The Future of Globalization
- The Future of Education
- Prepare your workforce for future challenges
- Business Excellence in the Knowledge Economy
- New markets, new business
- New world, new values
- New business, new jobs
- A new mind for future challenges
- The end of an Era: the future of management
- Smart decisions in hypercompetitive markets
- Introduction to Neuroeconomics
- What is Neuromarketing?
- What is Neuroadvertising?
Email us at:
mps@secretary.net
nelsonslima@yahoo.co.uk
info@unifuturo.net

The road to...

These are a few forecasts from members of
The World Future Society:
Forecast 1:
The world will have a billion millionaires by 2025. Globalization and technological innovation are driving this increased prosperity. But challenges to prosperity will also become more acute, such as water shortages that will affect two-thirds of the world's population by 2025.
Forecast 2:
Fashion will go wired as technologies and tastes converge to revolutionize the textile industry. Researchers in smart fabrics and intelligent textiles (SFIT) are working with the fashion industry to bring us color-changing or perfume-emitting jeans, wristwatches that work as digital wallets, and running shoes like the Nike+iPod that watch where you're going (possibly allowing others to do the same). Powering these gizmos remains a key obstacle. But industry watchers estimate that a $400 million market for SFIT is already in place and predict that smart fabrics could revitalize the U.S. and European textile industries.
Forecast 3:
The threat of another cold war with China, Russia, or both could replace terrorism as the chief foreign-policy concern of the United States. Scenarios for what a war with China or Russia would look like make the clashes and wars in which the United States is now involved seem insignificant. The power of radical jihadists is trivial compared with Soviet missile capabilities, for instance. The focus of U.S. foreign policy should thus be on preventing an engagement among Great Powers.
Forecast 4:
Counterfeiting of currency will proliferate, driving the move toward a cashless society. Sophisticated new optical scanning technologies could, in the next five years, be a boon for currency counterfeiters, so societies are increasingly putting aside their privacy fears about going cashless. Meanwhile, cashless technologies are improving, making them far easier and safer to use.
Forecast 5:
The earth is on the verge of a significant extinction event. The twenty-first century could witness a biodiversity collapse 100 to 1,000 times greater than any previous extinction since the dawn of humanity, according to the World Resources Institute. Protecting biodiversity in a time of increased resource consumption, overpopulation, and environmental degradation will require continued sacrifice on the part of local, often impoverished communities. Experts contend that incorporating local communities' economic interests into conservation plans will be essential to species protection in the next century.

Scientists Create Artificial Brain


A network of artificial nerves is evolving right now in a Swiss supercomputer. This bizarre creation is capable of simulating a natural brain, cell-for-cell. The Swiss scientists, who created what they have dubbed "Blue Brain", believe it will soon offer a better understanding of human consciousness. This is no sci-fi flick; it’s an actual ‘computer brain’ that may eventually have the ability to think for itself. Exciting? Scary? It could be a little of both.

The designers say that "Blue Brain" was willful and unpredictable from day one. When it was first fed electrical impulses, strange patterns began to appear with lightning-like flashes produced by ‘cells’ that the scientists recognized from living human and animal processes. Neurons started interacting with one another until they were firing in rhythm. "It happened entirely on its own," says biologist Henry Markram, the project's director. "Spontaneously."

The project essentially has its own factory to produce artificial brains: computers that can clone nerve cells quickly. The system allows for the production of whole series of neurons of all different types. Because no two cells in a natural brain are exactly identical, the scientists make sure the artificial cells used for the project are also random and unique.

Does this ‘Brain’ have a soul? If it does, it is likely to be the shadowy remnants of thousands of sacrificed rats whose brains were almost literally fed into the computer. After opening the rat skulls and slicing their brains into thin sections, the scientists kept the slices alive. Tiny sensors picked up individual neurons, recording how each cell fired and how the adjacent cells responded. In this way the scientists were able to collect entire repertoires of actual rat behavior – basically how a rat would respond in different situations throughout its life.

The researchers say it wouldn't present much of a technological challenge to bring the brain to life. "We could simply connect a robot to the brain model," says Markram. "Then we could see how it reacts to real environments."

Are rats capable of revenge? What I’m wondering is what this brain would do to those researchers if it were attached to a giant metallic rat body and equipped with teeth and claws… now there’s a good sci-fi movie.

Although over ten thousand artificial nerve cells have already been woven in, the researchers plan to increase the number to one million by next year. The researchers are already working with IBM experts on plans for a computer that would operate at inconceivable speeds – something fast enough to simulate the human brain. The project is scheduled to last beyond 2015, at which point the team hopes to be ready for its primary goal: a computer model of an entire human brain. So, whose brain will they be slicing up for that one? Let’s hope it’s not a psychopath.


What is Executive Intelligence?


According to Justin Menkes, executive intelligence is the ability to digest, often with the help of others, large amounts of information in order to make important decisions.
Menkes says, "Personality is not a differentiator of star talent. It is an individual's facility for clear thinking, or intelligence, that largely determines their leadership success."
What do you think is the relative importance of executive intelligence, style, and personality in effective leaders?
Stephen Burkett said: "… I find that the core of the issue remains the definition of executive intelligence … Executive intelligence is less about number-crunching power or one's grasp of advanced concepts, and more about evaluating situations and taking appropriate action." Quinton van Eeden added, "Executive intelligence seems to be the sum of the parts—emotional intelligence, IQ, personality, values, and experience … A demonstration of executive intelligence must lie in the demonstrable ability to act and execute." Paul Jackson took us to the next step in commenting, "Once defined, how do we measure executive intelligence? Once measured, how do we assess its impact or usefulness?" And, we might add, how do we incorporate it into our everyday assessment of potential or actual leadership talent?
There was a full range of opinions regarding the importance of EI, perhaps in part due to the breadth with which the term was defined in each case. For example, Rowland Freeman opined, "Intelligence is of value, but more important is demonstrated common sense. Some of the most intelligent leaders I have known were failures at leadership." As Malvin Bernal put it, "Executive intelligence will only guarantee a sound processing of information that produces decisions … Execution is the basic ingredient that makes a great leader." On the other hand, Philip Derrow argued, "Executive intelligence, particularly as Mr. Menkes defines it, is, I believe, the most important component for long-term leadership effectiveness … Three words that best describe effective people in any organization: smart and happy. Both the order and the conjunction are important …" Harry Tucci went even further, saying: "The concept of executive intelligence is a very useful measure of success … When it comes to meeting earnings and Street expectations I'll take the manager with his nose deep in a book any day."
In: http://hbswk.hbs.edu/item/5449.html
Read more: http://uk.askmen.com/money/successful_150/155_success.html