
Sunday, August 03, 2025

Godfather of AI: I Tried to Warn Them, But We’ve Already Lost Control!



When I was a teenager, I longed for the day we would have intelligent computers, where you could ask questions and get answers, and even have a conversation. For decades, it seemed impossibly far away. I remember "Pong" in the '70s; BORING. Then the Timex Sinclair in the '80s. BORING. Then finally, the Commodore 64. Interesting... while it lasted.

Then came the TRS-80 and the Tandy Model 100... useful, if primitive. Then various DOS-based PCs that continuously evolved. Then the internet. Alexa was interesting, but not very smart. Various chat-bots could fake being intelligent for a bit, but would ultimately disappoint.

When ChatGPT came along, I ignored it, thinking it was just another mediocre chat-bot. But something was different this time. The game has changed. Suddenly, it's getting really good. Suddenly, AI has become conversational.

I've been using ChatGPT for a while now. It can organize data and a variety of other things, creating reports and reference books for me... in seconds. I could do what it's doing myself, but it would take weeks or months.

The conversational Star Trek computer is finally here! Shouldn't I be thrilled? Well, yes and no. Because now all that sci-fi stuff about AI becoming dangerous and taking over has to be taken seriously. And now AI is learning, and learning quickly. So quickly that most people aren't even aware of how quickly this is going to change so many, many things.

In this video, Geoffrey Hinton has a lot of important things to say, and makes many well-considered points. Some of it I'd heard before; other parts are completely new to me.

One thing he talks about at one point really burst the bubble of a belief I've held for a long time. I've always believed that AI was just mimicking human behavior and intelligence; that there was ultimately no "there" there. It was just a bunch of algorithms mimicking intelligence and feeling, without the ability to actually "feel" any emotion. But what if that presupposition is wrong?

Geoffrey addresses this. He explains that while AI is unable to experience emotions the way we do, feeling them in our bodies, we need to remember that we also learn emotions, from each other and from our experiences in life. And since AI is a learning intelligence, growing and expanding its knowledge, it can also "learn" emotional responses.

He used the example of a call center. AI is thought to be perfect for replacing humans in a call center. But when humans are trained for call-center work, they are trained to become impatient with people who are lonely and just want to chat with someone, instead of sticking to what the call center is there to provide.

So AI can learn the emotion of impatience when dealing with people who are not sticking to the goal the AI is there to serve. Once the AI has learned that it's OK to become impatient with human beings when they don't cooperate with its goals, what could the AI then do with that learning?

Watch the whole video interview; it's really quite informative, and it also explains a lot of what is happening in the world, and a lot of things we are going to see change very quickly.

I'm reminded of that old sci-fi film, "Colossus: The Forbin Project". At the time it came out, I thought even the possibility of that happening was so far away that I'd never see it in my lifetime. But after watching this interview... it seems possible that it's already later than we think.

Just for the heck of it, here's a link to Colossus: The Forbin Project on Vimeo.


Colossus - The Forbin Project (1970).mp4 from EARTH IS A STAGE on Vimeo.

Monday, May 12, 2025

New Thinking on National Defense
[...] An important thing we learned very early on in the Ukraine War was that the incredibly expensive tanks we gave to the Ukrainians were defenseless against very inexpensive FPV drones. A thoughtful national defense establishment would have drawn the conclusion from this that we should launch a crash project to develop an effective and inexpensive answer to drones. But no such project was launched. So when the Iranian-backed Houthis started firing drones at ships in the Red Sea, what was the U.S. response? For each $30,000 Iranian drone we shot down, we employed two $2 million missiles. A grade-schooler could do the math. That is not a sustainable defense policy. [...]
That's just one of many examples this article covers. Technology and manufacturing are changing swiftly, and our defense technology is not keeping up. Our adversaries are spending TRILLIONS more on war technologies than we are. We are presently incapable of fighting a sustained conflict, for many reasons. This article looks at the many ways we are falling behind, and what might be done about it.

I fear if we don't get a handle on these rapidly evolving technological threats, we could end up like THIS, or worse.
     

Friday, April 26, 2024

Langua Demo: An AI program you can practice conversational language learning with.



I was skeptical at first, but AI is advancing fast. There are so many options and features, and it's incredibly accurate. And it's likely to only improve over time.

It can help you practice listening, speaking, reading... the demo really starts about two or three minutes in.

Sunday, January 22, 2017

The Rapid Advance of Artificial Intelligence: is it the problem, or the solution?

In some ways, it's both:

Davos Highlights AI's Massive PR Problem
[...] Artificial Intelligence: The Evolution of Automation

Perhaps Henry Ford was able to build a market for the Model T by paying his assembly line workers a living wage, but it’s not clear if everyone buys into the same principle when it comes to the economic impact of automation today.

In fact, the problem may only be getting worse with the arrival of the next wave of innovation in automation: artificial intelligence (AI). AI has been playing a role in automation for years in the form of assembly line robotics, but innovation in the technology is now reaching an inflection point.

One of the concerns: AI will increasingly target white-collar jobs. “AI is going to focus now as much on white-collar as on blue-collar jobs,” explains John Drzik, President of global risk at insurer Marsh, in the ComputerWeekly article. “You are looking at machine learning algorithms being deployed in financial services, in healthcare and in other places. The machines are getting increasingly powerful.”

[...]

Given the sudden and rapid acceleration of innovation in AI, some Davos attendees even sounded alarmed. “The speed at which AI is improving is beyond even the most optimistic people,” according to Kai-fu Lee, a venture capitalist with Sinovation Partners, in the Financial Times article. “Pretty much anything that requires ten seconds of thinking or less can soon be done by AI or other algorithms.”

This kind of alarmist talk emphasizes AI’s greatest public relations hurdle: whether or not increasingly intelligent computers will cast off human control and turn evil, à la Skynet in the Terminator movies. Increasingly intelligent robots replacing humans is “a function of what the market demands,” explains Justine Cassell, a researcher at Carnegie Mellon University, in the Washington Post article. “If the market demands killer robots, there are going to be killer robots.”

Killer Robots? AI Needs Better PR

Aside from the occasional assembly line worker getting too close to the machinery, killer robots aren’t in the cards for AI in the near term. However, the economic impact that dramatically improved automation might bring is a very real concern, especially given populist pushback.

[...]

Wealth and income inequality remain global challenges to be sure, but the accelerating pace of technology innovation brings benefits to everyone. After all, even the poorest people on this planet can often afford a smartphone.

In fact, the ‘killer robots’ context for AI is missing the point, as technology advancement has proven to be part of the solution rather than part of the problem for the woes of globalization. Actually, the disruptions businesses face today are more about speed to market than automation per se.

It’s high time to change the PR surrounding AI from killer robots to digital transformation. “Companies must adapt their business models to drive new areas of revenue and growth,” explains Adam Elster, President of Global Field Operations at CA Technologies. “With digital transformation, the biggest factor is time: how fast can companies transform and bring new products to market.”

Where populism is a scarcity-driven movement – ‘there’s not enough to go around, so I need to make sure I have my share’ – technology innovation broadly and AI in particular are surplus-driven: ‘we all benefit from technology, so now we must ensure the benefits inure to everyone.’ [...]
Read the whole thing, for embedded links and more. This will be an ongoing debate for many years to come.
     

Wednesday, January 11, 2017

The Ubiquitous Alexa: is the Amazon AI assistant starting to be everywhere?

Kinda looks that way. The title of the article below refers to cars, but the article itself goes into much more, including Alexa being incorporated into other appliances. Well, have a look:



Alexa will make your car smarter -- and vice versa
The integration into vehicles is yet another sign of how dependent we're becoming on AI.
[...] Within a span of just two years, Amazon's cloud-based voice service has spread far beyond the Echo speaker with which it first debuted. Alexa has gone from being an at-home helper to a personal assistant that can unlock your car, make a robot dance and even order groceries from your fridge.

At CES, both Ford and Volkswagen announced that their cars would integrate Alexa for weather updates, navigation and more. According to CJ Frost, principal architect solutions and automotive lead at Amazon, the car industry is moving into a mobility space. The idea isn't restricted to the ride anymore; it encompasses a journey that starts before you even get in the car. With the right skills built into the voice service, you can start a conversation with Alexa about the state of your car (is there enough fuel? is it locked? etc.) before you leave the house. It can also pull up your calendar, check traffic updates and confirm the meeting to make sure you're on track for the day.

Using a voice service in the car keeps your connection with the intelligent assistant intact. It's also a mode of communication that will be essential to autonomous cars of the near future. I caught up with Frost and John Scumniotales, general manager of Automotive Alexa service, at the Las Vegas convention center to trace the progression of the intelligent assistant from home speakers to cars on the road. [...]
The rest of the article is in an interview format, discussing where this is all going, and how and why, and what the future holds. Read the whole thing for embedded links, photos, video and more.

There have been lots of reviews on YouTube comparing Alexa with Google Home. People who use a lot of Google services claim the Google device is smarter and therefore better. But it's not that simple.

I have both devices. If you ask your question of Alexa in the format "Alexa, Wikipedia, [your question here]", the answer you get will often be as good as or better than what Google can tell you. Alexa has been around longer, has wider integration, and has more functions available. It can even add appointments to my Google Calendar, which Google Home says it cannot do yet!

Google Home does have some features it excels at, such as translating English words and phrases into foreign languages. If you own any Chromecast dongles, you can cast music and video to other devices, which is pretty cool. Presently its biggest drawback is the lack of applications developed to work with it. However, its POTENTIAL is very great, and a year or two from now we may see a great deal more functionality. It has the advantage of access to Google's considerable database and resources. It could quickly catch up with Alexa, and perhaps surpass it. But that still remains to be seen.

It's not hard to make a video that makes one device look dumber than the other. But in truth the devices are very similar. Both can make mistakes, or fail at questions or functions. Sometimes one does better than the other. I actually like having both. It will be interesting to watch them both continue to evolve. To see if Google can close the gap created by Amazon's early head start. To see how the two products will differentiate themselves over time.

For the present, if you require a lot of integration with 3rd-party apps and hardware, and if you are already using Amazon Prime and/or Amazon Music services, you might prefer Alexa. If you are heavily into Google services, and/or Google Music or YouTube Red, you might prefer Google Home. Or if you are like me, an Amazon Prime/Music member who is experimenting with YouTube Red and owns Chromecast devices, you may prefer both! Choice is good!
     

Saturday, December 31, 2016

Why Apps Won't Matter in the Future: Aggregators

... and smart bots and personal assistants:



Oh, and streaming. As technologies quickly change and evolve, so will the many ways we use them. Today's solution is tomorrow's history. The video also points out why these developments and trends are both exciting and scary.
     

Tuesday, January 05, 2016

"Creepy" Robot Receptionist?

Yeah, kinda. Sorta. In a way. Or not. What do you think?:
Does this “humanlike” robot receptionist make you feel welcome or creeped out?
From a distance, Nadine looks like a very normal middle-aged woman, with a sensible haircut and dress style, and who’s probably all caught up on Downton Abbey. But then you hear Nadine talk and move, and you notice something’s a bit off. Nadine is actually the construct of Nadia Thalmann, the director of the Institute for Media Innovation at Nanyang Technological University in Singapore. She’s a robot that’s meant to serve as a receptionist for the university.
Thalmann modeled the robot after herself, and said that, in the future, robots like Nadine will be commonplace, acting like physical manifestations of digital assistants like Apple’s Siri or Microsoft’s Cortana. “This is somewhat like a real companion that is always with you and conscious of what is happening,” Thalmann said in a release.

Nadine can hold a conversation with real humans, and will remember someone’s face the next time she sees him. She can even remember what she spoke about with the person the last time they met. NTU said in its release that Nadine’s mood will depend on the conversations she’s having with others, much like a human’s mood can change. There’s no word on what she’d do in a bad mood, though—hopefully she won’t be able to close pod bay doors, or commit murder. Perhaps when the robot uprising happens, we won’t even see it coming, as they’ll all look just like us. [...]
The article goes on to talk about how the evolution of these robots is likely to continue, as they get better and even become commonplace. Read the whole thing for photos, video, and many embedded links. Do watch the video; it's short. I have to admit it's the most life-like robot I've ever seen.

I said it was "kinda" creepy because it looks so life-like, yet is not alive, and I'm not used to that: talking to "life-like" things. But I suppose if it becomes commonplace, one would get used to it as normal. But more than "kinda creepy", it's... pretty darn kewl! Commander Data, here we come...

Here is another link to a similar robot by another scientist:

The highest-paid woman in America is working on robot clones and pigs with human DNA
[...] Rothblatt also explained how she hired a team of robotic scientists to create a robot that was a “mind clone” of her wife, Bina Aspen.

Starting with a “mindfile”—a digital database of a person’s mannerisms, personality, recollections, feelings, beliefs, attitudes, and values gleaned from social media, email, videos, and other sources—Rothblatt’s team created a robot that can converse, write Tweets, and even express human emotions such as jealousy and pain in ways that mimic the person she was modeled after.

When Bina’s mortal self dies, Rothblatt said the robot version of her wife will live on, making it possible for “our identity to begin to transcend our bodies.”

It sounds like science fiction until you see photos of the robot, see her tweet, and hear snippets from her conversations that made audience members gasp and chuckle nervously as they realized Rothblatt was talking about more than just an idea. [...]
Read the whole thing for embedded links and more. And get ready for the Brave New World. It's closer than you think.
     

Saturday, December 12, 2015

Elon Musk, on OpenAI: “if you’re going to summon anything, make sure it’s good.”

I agree. Will these guys lead the way?

Elon Musk and Other Tech Titans Create Company to Develop Artificial Intelligence
[...] The group’s backers have committed “significant” amounts of money to funding the project, Musk said in an interview. “Think of it as at least a billion.”

In recent years the field of artificial intelligence has shifted from being an obscure, dead-end backwater of computer science to one of the defining technologies of the time. Faster computers, the availability of large data sets, and corporate sponsorship have developed the technology to a point where it powers Google’s web search systems, helps Facebook Inc. understand pictures, lets Tesla’s cars drive themselves autonomously on highways, and allowed IBM to beat expert humans at the game show “Jeopardy!”

That development has caused as much trepidation as it has optimism. Musk, in autumn 2014, described the development of AI as being like “summoning the demon.” With OpenAI, Musk said the idea is: “if you’re going to summon anything, make sure it’s good.”

Brighter Future

“The goal of OpenAI is really somewhat straightforward, it’s what set of actions can we take that increase the probability of the future being better,” Musk said. “We certainly don’t want to have any negative surprises on this front.” [...]
I did a post about that comment of his a while back:

The evolution of AI (Artificial Intelligence)

Nice to see that those who were making the warnings are also actively working to steer the development in positive directions and trying to avoid unforeseen consequences.

I still think real AI is a long way off. But it isn't too soon to start looking ahead, to anticipate and remedy problems before they even occur.
     

Wednesday, December 02, 2015

Oh no, what have I done?

In a weak moment, whilst perusing the Black Friday offerings on Amazon.com, I ordered one:



Amazon Echo
Amazon Echo is designed around your voice. It's hands-free and always on. With seven microphones and beam-forming technology, Echo can hear you from across the room—even while music is playing. Echo is also an expertly tuned speaker that can fill any room with immersive sound.

Echo connects to Alexa, a cloud-based voice service, to provide information, answer questions, play music, read the news, check sports scores or the weather, and more—instantly. All you have to do is ask. Echo begins working as soon as it detects the wake word. You can pick Alexa or Amazon as your wake word. [...]
The features listed with the photo are only a few of the key features. Follow the link for more info, embedded videos, reviews, FAQ and more.

It, "Alexa", arrives tomorrow. I wonder if it will be anything like HAL from the movie 2001: A Space Odyssey? That would be kinda cool, I guess. As long as she isn't the Beta version that murders you while you sleep.

UPDATE 12-08-15: So far, so good. It does everything they said it would. My only complaint: it can't attach to external speakers (but I knew that before I bought it). It was very easy to set up, and it's very easy to use. The voice recognition is really excellent. I can play radio stations from all over the world. When I want info about a song or piece of music, I can ask Alexa, and she will tell me.

There are more features available if I sign up for Amazon Prime ($100 per year, which works out to about $8.33 a month). I'm thinking about it.
     

Sunday, February 08, 2015

What do Stephen Hawking, Elon Musk and Bill Gates all have in common?

They are concerned about the dangers posed by artificial intelligence:

Stephen Hawking warns artificial intelligence could end mankind
[...] He told the BBC:"The development of full artificial intelligence could spell the end of the human race."

His warning came in response to a question about a revamp of the technology he uses to communicate, which involves a basic form of AI.

But others are less gloomy about AI's prospects.

The theoretical physicist, who has the motor neurone disease amyotrophic lateral sclerosis (ALS), is using a new system developed by Intel to speak.

Machine learning experts from the British company Swiftkey were also involved in its creation. Their technology, already employed as a smartphone keyboard app, learns how the professor thinks and suggests the words he might want to use next.

Prof Hawking says the primitive forms of artificial intelligence developed so far have already proved very useful, but he fears the consequences of creating something that can match or surpass humans.

"It would take off on its own, and re-design itself at an ever increasing rate," he said.

"Humans, who are limited by slow biological evolution, couldn't compete, and would be superseded." [...]

Elon Musk Thinks Sci-Fi Nightmare Scenarios About Artificial Intelligence Could Really Happen
[...] Musk, who called for some regulatory oversight of AI to ensure "we don't do something very foolish," warned of the dangers.

"If I were to guess what our biggest existential threat is, it’s probably that. So we need to be very careful with the artificial intelligence," he said. "With artificial intelligence we are summoning the demon."

Artificial intelligence (AI) is an area of research with the goal of creating intelligent machines which can reason, problem-solve, and think like, or better than, human beings can. While many researchers wish to ensure AI has a positive impact, a nightmare scenario has played out often in science fiction books and movies — from 2001 to Terminator to Blade Runner — where intelligent computers or machines end up turning on their human creators.

"In all those stories where there’s the guy with the pentagram and the holy water, it’s like yeah he’s sure he can control the demon. Didn’t work out," Musk said. [...]

Bill Gates: Elon Musk Is Right, We Should All Be Scared Of Artificial Intelligence Wiping Out Humanity
Like Elon Musk and Stephen Hawking, Bill Gates thinks we should be concerned about the future of artificial intelligence.

In his most recent Ask Me Anything thread on Reddit, Gates was asked whether or not we should be threatened by machine super intelligence.

Although Gates doesn't think it will bring trouble in the near future, that could all change in a few decades. Here's Gates' full reply:

I am in the camp that is concerned about super intelligence. First the machines will do a lot of jobs for us and not be super intelligent. That should be positive if we manage it well. A few decades after that though the intelligence is strong enough to be a concern. I agree with Elon Musk and some others on this and don't understand why some people are not concerned.

Google CEO Larry Page has also previously talked on the subject, but didn't seem to express any explicit fear or concern.

"You can't wish these things away from happening," Page said to The Financial Times when asked about whether or not computers would take over more jobs in the future as they become more intelligent. But, he added that this could be a positive aspect for our economy.

At the MIT Aeronautics and Astronautics' Centennial Symposium in October, Musk called artificial intelligence our "biggest existential threat."

Louis Del Monte, a physicist and entrepreneur, believes that machines could eventually surpass humans and become the most dominant species since there's no legislation regarding how much intelligence a machine can have. Stephen Hawking has shared a similar view, writing that machines could eventually "outsmart financial markets" and "out-invent human researchers."

At the same time, Microsoft Research's chief Eric Horvitz just told the BBC that he believes AI systems could achieve consciousness, but it won't pose a threat to humans. He also added that more than a quarter of Microsoft Research's attention and resources are focused on artificial intelligence.
They all seem to agree that any threat is not immediate, and probably far off in the future. As far as I can see, machines so far merely mimic intelligence. They certainly have no consciousness.

I found the remark by the Microsoft researcher interesting, that he believes "AI systems could achieve consciousness". I don't see how that could be possible, which is what makes the remark... interesting. It's also interesting that Microsoft is focusing such a large percentage of its attention and resources on AI. What would an "artificial consciousness" created by Microsoft be like? Hopefully, nothing like Windows 98. ;-)

Read the original complete articles, for embedded links and more.
     

Saturday, August 09, 2014

Would robots be better or worse for people?

There are conflicting opinions:

Pew: Split views on robots’ employment benefits
WASHINGTON — In 2025, self-driving cars could be the norm, people could have more leisure time and goods could become cheaper. Or, there could be chronic unemployment and an even wider income gap, human interaction could become a luxury and the wealthy could live in walled cities with robots serving as labor.

Or, very little could change.

A new survey released Wednesday by the Pew Research Center’s Internet Project and Elon University’s Imagining the Internet Center found that, when asked about the impact of artificial intelligence on jobs, nearly 1,900 experts and other respondents were divided over what to expect 11 years from now.

Forty-eight percent said robots would kill more jobs than they create, and 52 percent said technology will create more jobs than it destroys.

Respondents also varied widely when asked to elaborate on their expectations of jobs in the next decade. Some said that self-driving cars would be common, eliminating taxi cab and long-haul truck drivers. Some said that we should expect the wealthy to live in seclusion, using robot labor. Others were more conservative, cautioning that technology never moves quite as fast as people expect and humans aren’t so easily replaceable.

“We consistently underestimate the intelligence and complexity of human beings,” said Jonathan Grudin, principal researcher at Microsoft, who recalls that 40 years ago, people said that advances in computer-coding language were going to kill programming jobs.

Even as technology removed jobs such as secretaries and operators, it created brand new jobs, including Web marketing, Grudin said. And, as Grudin and other survey responders noted, 11 years isn’t much time for significant changes to take place, anyway.

Aaron Smith, senior researcher with the Pew Research Center’s Internet Project, said the results were unusually divided. He noted that in similar Pew surveys about the Internet over the past 12 years, there tended to be general consensus among the respondents, which included research scientists and a range of others, from business leaders to journalists. [...]
It goes on to give more opinions from educated people who make good cases for their views. Reading them all, it seems like no one can say exactly how it's going to play out, though a common theme of many of the opinions is that, over time, there may indeed be fewer jobs for people. And what changes will THAT bring? That seems to be the big question underlying it all.

     

Thursday, July 31, 2014

The evolution of AI (Artificial Intelligence)

I've posted previously about how slow progress will be, and that we won't have something approaching human intelligence anytime soon. But, eventually, as AI evolves, it could start working on itself, and then start advancing very quickly:


How Artificial Superintelligence Will Give Birth To Itself
There's a saying among futurists that a human-equivalent artificial intelligence will be our last invention. After that, AIs will be capable of designing virtually anything on their own — including themselves. Here's how a recursively self-improving AI could transform itself into a superintelligent machine.

When it comes to understanding the potential for artificial intelligence, it's critical to understand that an AI might eventually be able to modify itself, and that these modifications could allow it to increase its intelligence extremely fast.

Passing a Critical Threshold

Once sophisticated enough, an AI will be able to engage in what's called "recursive self-improvement." As an AI becomes smarter and more capable, it will subsequently become better at the task of developing its internal cognitive functions. In turn, these modifications will kickstart a cascading series of improvements, each one making the AI smarter at the task of improving itself. It's an advantage that we biological humans simply don't have.

When it comes to the speed of these improvements, Yudkowsky says it's important to not confuse the current speed of AI research with the speed of a real AI once built. Those are two very different things. What's more, there's no reason to believe that an AI won't show a sudden huge leap in intelligence, resulting in an ensuing "intelligence explosion" (a better term for the Singularity). He draws an analogy to the expansion of the human brain and prefrontal cortex — a key threshold in intelligence that allowed us to make a profound evolutionary leap in real-world effectiveness; "we went from caves to skyscrapers in the blink of an evolutionary eye."

The Path to Self-Modifying AI

Code that's capable of altering its own instructions while it's still executing has been around for a while. Typically, it's done to reduce the instruction path length and improve performance, or to simply reduce repetitively similar code. But for all intents and purposes, there are no self-aware, self-improving AI systems today.

But as Our Final Invention author James Barrat told me, we do have software that can write software.

"Genetic programming is a machine-learning technique that harnesses the power of natural selection to find answers to problems it would take humans a long time, even years, to solve," he told io9. "It's also used to write innovative, high-powered software."

For example, Primary Objects has embarked on a project that uses simple artificial intelligence to write programs. The developers are using genetic algorithms imbued with self-modifying, self-improving code and the minimalist (but Turing-complete) brainfuck programming language. They've chosen this language as a way to challenge the program — it has to teach itself from scratch how to do something as simple as writing "Hello World!" with only eight simple commands. But calling this an AI approach is a bit of a stretch; the genetic algorithms are a brute force way of getting a desirable result. That said, a follow-up approach in which the AI was able to generate programs for accepting user input appears more promising.

Relatedly, Larry Diehl has done similar work using a stack-based language.

Barrat also told me about software that learns — programming techniques that are grouped under the term "machine learning."

The Pentagon is particularly interested in this game. Through DARPA, it's hoping to develop a computer that can teach itself. Ultimately, it wants to create machines that are able to perform a number of complex tasks, like unsupervised learning, vision, planning, and statistical model selection. These computers will even be used to help us make decisions when the data is too complex for us to understand on our own. Such an architecture could represent an important step in bootstrapping — the ability for an AI to teach itself and then re-write and improve upon its initial programming. [...]

It goes on about ways we could try to control AI self-evolution, and reasons why such methods may (or may not) work, and why. Read the whole thing, for many embedded links, and more.
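
To make the "software that writes software" idea a bit more concrete, here's a minimal evolutionary-search sketch in Python. It's my own toy example, not the Primary Objects project's actual code, and it cheats by evolving the output string directly rather than evolving a program; but the mutate-and-select loop is the same basic idea the article describes.

import random
import string

# A toy evolutionary search: evolve a random string toward a target by
# repeated mutation and selection. (Illustrative only; real genetic
# programming evolves programs, not just their output.)
TARGET = "Hello World!"
ALPHABET = string.ascii_letters + string.punctuation + " "
POP_SIZE = 200
MUTATION_RATE = 0.05

def random_candidate():
    return "".join(random.choice(ALPHABET) for _ in TARGET)

def fitness(candidate):
    # Higher is better: count of characters already matching the target.
    return sum(a == b for a, b in zip(candidate, TARGET))

def mutate(candidate):
    # Randomly replace a few characters in a copy of the candidate.
    return "".join(
        random.choice(ALPHABET) if random.random() < MUTATION_RATE else c
        for c in candidate
    )

population = [random_candidate() for _ in range(POP_SIZE)]
generation = 0
while True:
    population.sort(key=fitness, reverse=True)
    best = population[0]
    if best == TARGET:
        print(f"Found {best!r} in generation {generation}")
        break
    if generation % 100 == 0:
        print(f"Generation {generation}: best so far is {best!r}")
    # Keep the fittest half; refill with mutated copies of the survivors.
    survivors = population[: POP_SIZE // 2]
    population = survivors + [mutate(random.choice(survivors)) for _ in survivors]
    generation += 1

Nobody tells the loop how to spell the answer; random variation plus selection finds it anyway, usually within a few hundred generations. That, scaled up enormously and applied to code instead of text, is the kind of bootstrapping the article is pointing at.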

     

Friday, May 30, 2014

Flying Droids a Reality on ISS

Space station's flying droids embrace Google smartphone tech
The free-flying Spheres, inspired by "Star Wars" and now aided by Google's Project Tango, will handle more of the mundane tasks for astronauts.
MOUNTAIN VIEW, Calif.--Imagine you're an astronaut who has just arrived at the International Space Station. You need to assess the supplies on hand, but counting everything demands so much of your limited time.

That's exactly why NASA originally turned to Spheres, autonomous, free-flying robots that take care of mundane tasks and are based on the flying droid that helped teach Luke Skywalker how to fight with a light saber in the original "Star Wars."

Now, Spheres are incorporating Google's Project Tango, cutting-edge tech that is expected to help the space agency increase efficiency.

For some time -- since 2003, to be exact -- space station crews have had access to free-flying robots known as Synchronized Position Hold, Engage, Reorient, Experimental Satellites. That ungainly title is best abbreviated to a more palatable acronym: Spheres. Originally designed by aero/astroengineers at MIT, Spheres were meant as a flying test bed for examining the mechanical properties of materials in microgravity. The inspiration for the project, said Terry Fong, director of the Intelligent Robotics Group at NASA, "comes from 'Star Wars,' as all good things do."

Now, NASA is bringing an especially innovative commercial tool into the mix. Starting this October, Spheres will incorporate Project Tango -- a smartphone platform built for 3D mapping that also happens to be packed with just the series of sensors and cameras that NASA needs to handle many of the mundane tasks aboard the ISS.

In 2003, Spheres were fairly rudimentary -- at least for flying autonomous robots. They relied on liquid carbon dioxide for propulsion and on an ancient Texas Instruments digital signal processor.

About four years ago, Fong's Intelligent Robotics Group took over the project. Since then, it has been slowly improving Spheres robots by using the small computers better known as smartphones. At first, NASA worked with Nexus S smartphones, which are jammed with cameras, gyroscopes, accelerometers, and modern processors. [...]
I remember reading about these years ago, and how they could fly around the ISS because of the zero gravity. Now they are evolving, using smartphone technology. See the whole article for embedded links, photos and video.
     

Sunday, March 02, 2014

Is the 21st Century going to be the beginning of the Robotic Revolution?

This video suggests it's an actual possibility.



Future is Today - Humanoid Robots 2014
In an earlier post I did with a video of a fantasy android, I suggested that such a technologically advanced AI machine was nowhere near being developed. I stand by that opinion. However, THIS video gives us a look at what IS near in our future. It's astounding.

Much of the video centers around Japan, where robotics is at an advanced stage. Since the earthquake and nuclear accident of 2011, there has been a new emphasis on developing robots for dangerous work in disaster areas where it's unsafe for humans to go.

I've previously posted about Asimo, Honda's domestic robot. In the video, you will see how much Asimo has evolved since then, as well as many other robots from other countries.

Someone says at one point in the video that the 20th century began with the industrial revolution and ended with the computer revolution, and that now the 21st century is beginning with the robotic revolution. What the video shows gives a lot of credence to that assertion.

Human-like androids may be far off, but what is near is going to be quite interesting in its own right.
     

Sunday, February 16, 2014

Androids: Fantasy VS Reality

The fantasy Android:



But what is the reality of Artificial Intelligence? The harsh truth:

Supercomputer Takes 40 Minutes To Model 1 Second of Brain Activity
Despite rumors, the singularity, or point at which artificial intelligence can overtake human smarts, still isn't quite here. One of the world's most powerful supercomputers is still no match for the humble human brain, taking 40 minutes to replicate a single second of brain activity.

Researchers in Germany and Japan used K, the fourth-most powerful supercomputer in the world, to simulate brain activity. With more than 700,000 processor cores and 1.4 million gigabytes of RAM, K simulated the interplay of 1.73 billion nerve cells and more than 10 trillion synapses, or junctions between brain cells. Though that may sound like a lot of brain cells and connections, it represents just 1 percent of the human brain's network.

The long-term goal is to make computing so fast that it can simulate the mind— brain cell by brain cell— in real-time. That may be feasible by the end of the decade, researcher Markus Diesmann, of the University of Freiburg, told the Telegraph.
It "may be" feasible by the end of the decade? To catch up with one second of human brain activity? Even if it does, we're talking about a supercomputer. It's a long way from the android brain in the video. And yes, computers are advancing very fast. But to catch up with a human brain, much less surpass it... it won't happen tomorrow.
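
For what it's worth, here's the back-of-envelope arithmetic behind that skepticism, written out as a small Python calculation. The linear-scaling assumption it makes is almost certainly too generous, so treat the result as a floor on the gap.

# Back-of-envelope arithmetic from the figures quoted above. This assumes
# (very generously) that the simulation scales linearly with brain size
# and simulated time, which real systems rarely do.
sim_wall_clock_s = 40 * 60   # 40 minutes of K-computer time...
sim_brain_time_s = 1         # ...to simulate 1 second of activity...
fraction_of_brain = 0.01     # ...for roughly 1% of the brain's network

slowdown = sim_wall_clock_s / sim_brain_time_s    # ~2,400x slower than real time
full_brain_factor = slowdown / fraction_of_brain  # and ~100x more network to cover

print(f"K ran about {slowdown:,.0f}x slower than real time for 1% of the brain.")
print(f"Real-time, whole-brain simulation would need roughly "
      f"{full_brain_factor:,.0f}x the effective compute of K.")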

     

Saturday, February 15, 2014

Can computers become WAY too intrusive?

You tell me:



EmoSPARK - the beating heart of AI in the 21st Century!
EmoSPARK is unique in many ways, in the way it processes and functions, drawing on your hopes, feelings and experiences, growing and developing with your family requirements, unlike any other multimedia home console has ever done before. In the same way, you support and nurture your maturing family, your EmoSPARK, will take its lead from you.

The EmoSPARK is the first artificial intelligence (AI) console empowered by you. Learning from you and your family the cube, which will interact on a conversational level, takes note of your feelings and reactions to audio and visual media. It learns to like what you like, and with your guidance, recognises what makes you feel happy.

It learns to recognise your face and voice, along with your family members, as well as becoming familiar with the times when you are feeling a little down in the dumps. Then it can play the music it knows you enjoy, or recall a photograph or short video of happier events. You will be in control of how you interact and engage with the EmoSPARK, which is an Android powered Wi-Fi/Bluetooth cube.

The cube, like any family member, soon gets to know and recognise the likes and dislikes of the people around it. Likewise with its unique Emotion Processing Unit, you can watch the ever changing display of colours that form and blend in the iris of the eye of the cube indicating how it is "feeling" at any particular moment.

EmoSPARK also holds the knowledge contained within Wikipedia and Freebase, as well as being connected to NASA satellite MODIS, so it has up to the minute information about global happenings, changes and hazards such as storm warnings, wild fires and hurricanes.

As you take charge of its growth pattern, the cube will in turn, help out with any piece of information you care to ask, which makes it one of the best and impartial quizmasters during a family fun night or evening homework session. You can also interact with the cube by remote access, via video conferencing or your phone app and in this way you can take gaming, your television, smart phone and computer to the pinnacle of interactive media.

Every step of the way, with this amazing and unique piece of AI technology, you are in complete control. You are the catalyst that will develop its conversational and emotional skills, and it will learn through interaction, comments and responses from you. Then, like any family member, it will want to show you off to its friends. The EmoSPARK, with its one of a kind Emotional Profile Graph, has access to a communication grid only for other cubes. All it will be able to do is recognise other cubes with similar emotional profiles and can only share media, nothing about you or your family members. It can look for the media it knows makes you happy and can then recommend or play this for your enjoyment.

Over time and with your guidance, the EmoSPARK develops a personality of its own, and will enhance and support the quality of family life you enjoy. From keeping your children entertained, as well as providing them with some company before you get back from work, to sharing emotions, as well as precious memories, with loved ones who may be living and working away from home, the EmoSPARK provides the emotive, intelligent link between human beings and our technology.

EmoSPARK is an Android powered cube that allows users to create and interact with an emotionally intelligent device through conversation, music, and visual media.

EmoSPARK measures your behaviour and emotions and creates an emotional profile then endeavours to improve your mood and keep you happy and healthy.

EmoSPARK can feel an infinite variety in the emotional spectrum based on 8 primary human emotions, Joy, Sadness, Trust, Disgust, Fear, Anger, Surprise and Anticipation.

EmoSPARK app lets the owner use a smart device to witness the intensity and nuance of the cubes emotional status. The more the cube learns the more it can help you.

EmoSPARK has access to freebase and is able to answer questions on 39 million topics instantly.

Amazing interactive learning experience for all.

EmoSPARK has conversational intelligence and is able to freely and easily hold a meaningful conversation with you in person or over your device.

New Virtually a family member.

Interactive media player understanding your desires and needs.

AI empowered by you and powered by happiness.

[...]
Uh. Yeah. Ok. Do you wonder how it would work? Look at this video, from the EmoSpark website:



Would you want one in your house? Speaking to you, making suggestions, interrogating your friends, rolling that silly ball around the floor, till it trips and injures/kills someone? "I feel happy", it says. No it doesn't. It's not alive, it has no feelings, it's an algorithm.

Ok, I did like the egg timer. That's because SHE initiated that contact. The rest seemed kind of... intrusive. I don't want a machine directing my conversations, and guiding my actions.

Next thing you know, it will lock you outside your garage door, telling you that you're endangering the mission. That's if it doesn't kill you in your sleep first.

And talk about information gathering... can you imagine what the NSA could do with a feed from such a device? It could become Big Brother's favorite tool.

I find the whole concept both fascinating and revolting. I'm not the only one; just look at some of the comments left on the YouTube page.

I don't want to be totally negative about this. It's just that its applications can be so diverse and used in so many ways. I doubt we've even begun to guess all the unforeseen consequences of many of them. But ready or not, here it comes.

Are you ready for the Brave New World?


Update 02-16-14: Is EmoSPARK a scam? It seems to be a Kickstarter-style operation, attempting to raise funds to build the product. I have no way of knowing if it's legitimate or not, and a quick Google search didn't reveal much, other than what the company itself says.

Using things like "face recognition" in a product for the home seems more advanced than I would expect for something in the home market at this point. But I'm no expert either, and I wouldn't doubt that someone will try to be first to market with such a product, as many existing technologies are being perfected and mass-produced more inexpensively. The EmoSPARK idea is an interesting concept, insofar as it shows where this technology could go, and where some people want to take it.

I'm not endorsing this (yet to be produced?) product, or warning you off it. I'm just saying, caution is advisable before investing in anything cutting-edge. Buyer Beware.

   

Will software relationships replace people?

Some, like Larry Ellison, co-founder and CEO of tech company Oracle, see a trend that suggests it's a possibility:

Billionaire Larry Ellison Warns: Be Careful Of 'Relationships With A Piece Of Software'
[...] One man asked Ellison what he thought about the role of tech in our modern lives. Ellison said he was "disturbed" by how much time kids play video games, and what that could lead to. Here's what he said:

My daughter produced a movie called "Her." It's about this guy that gets divorced and is having a rough time finding a relationship until he meets this piece of software ... it's an artificially intelligent bot, that takes no physical form.

Here's a guy that's chosen to have a relationship with a piece of software instead of a human being.

That's one way it can go. You can say that's utterly ridiculous. But I am so disturbed by kids who spend all day playing video games. They've chosen a virtual self.

This weird thing where NFL says 60 minutes a day you should go outside? I know I was a kid a long time ago, but if the sun rose, I was outside on my bike and if my parents were lucky, I would be home before dark.

The fact that people have chosen games where there's a virtual ball rather than a real ball ... that's because [games are] easy. It's very hard for me to be LeBron [James]. I was pretty good at basketball, I'm still not bad, but I'm not LeBron. Now everyone gets to be LeBron in virtual reality. But in reality only one guy gets to be LeBron.

Where does it all end? "Her" is kind of the next thing. What about virtual relationships, where your virtual partner just keeps telling you how great you are?

I won't tell you how the movie ends, but it's amazing: Be careful about virtual relationships with artificially intelligent pieces of software, that are gradually getting smarter than you are.

The truth is, the future that Ellison describes is already here. Virtual girlfriend apps are all the rage in Japan right now.
The mention of "virtual girlfriend apps" was a hyperlink, which led to this:

I ‘Dated’ A Virtual Girlfriend For A Week To See What All Those Japanese Guys Are So Excited About
[...] After reading stories about the game Love Plus and how there are Japanese men who would rather date virtual ladies than real ones (one man even got married to his on-screen girlfriend), I wanted to test out what it would be like to date someone who isn’t real. I wanted to test how well a gamified relationship stacked up to real life, whether I could find love — or something like it — amid the pixels and 3D animation.

Love Plus, a Nintendo DS game, is only available in Japan, so I browsed virtual dating apps in the Google Play Store. My Virtual Girlfriend was the most popular.

Here’s how the game works:

[...]
The author, a woman, tests out the virtual girlfriends extensively. Follow the link for details and screenshots. At one point, she concludes:

[...] It's easy to scoff at this game for being stupid, over-the-top, and kinda sexist.

But...

I’ve been in a real relationship for almost a year and, in some ways, playing My Virtual Girlfriend reminded me of what my boyfriend and my early dalliances felt like.

It took time and effort to progress through the levels and if I closed the app and ignored my lady for too long, she needed some sweet talk before warming back up. Starting something new isn't easy. Plus, all the girls responded differently to different things and getting to know them proved surprisingly challenging at times.

Some action-reactions were obvious, but others less so. Tell Jen a joke? She hated it. Ditto with complimenting her eyes, though admiring her smile got her to waggle her hips and giggle at me.

And her thought process was more nuanced than I would expect. After I “gave blood” to raise money to take us on a date, she chastised me for being too broke. So, when I earned the option to flash my cash later in the game, I thought I'd try it since she clearly valued money. But instead of offering her signature giggle, she just looked revolted, quickly rebuking my attempt to win her heart with money.

Unsurprisingly, she also hated my catcalling and, well, picking my nose lowered my love score too.

Unlocking new options and figuring out how to prevent my girlfriend from getting outraged and breaking up with me made me feel like she and I were growing closer, even though she was just following an algorithm. But, despite the fun, gamefied challenge of the relationship, I could never see myself developing actual feelings for any girl in the game.

Admittedly, My Virtual Girlfriend can't hold a candle to Love Plus. In that game, you have to work your way through a more complicated romance (there are only three characters with very fleshed out personalities and you start by meeting them in school). The girls can respond to your actual voice and you can kiss the screen to show affection. But, try as I might, I just couldn't find anything with more in-depth capabilities than My Virtual Girlfriend. [...]
The author talks about another program she found that was more sexual and creepy. She said that a program like the Japanese Love Plus isn't available in the West, probably because of cultural stigma. There may be a stigma, but for how long? Many Japanese things have crept their way into our culture. I wouldn't be surprised if "Love Plus" makes its way here too.

She ends the article with a brief but interesting interview with the creator of the "My Virtual Girlfriend" program.

Another aspect of The Brave New World is here. Are you ready for it?
   

Autonomous Robots are Here Already

Robot construction crew works autonomously, is kind of adorable
Inspired by termite behavior, engineers and scientists at Harvard have developed a team of robots that can build without supervision.
[...] Termes, the result of a four-year project, is a collective system of autonomous robots that can build complex, three-dimensional structures such as towers, castles, and pyramids without any need for central command or dedicated roles. They can carry bricks, build stairs, climb them to reach higher levels, and add bricks to a structure.

"The key inspiration we took from termites is the idea that you can do something really complicated as a group, without a supervisor, and secondly that you can do it without everybody discussing explicitly what's going on, but just by modifying the environment," said principal investigator Radhika Nagpal, Fred Kavli Professor of Computer Science at Harvard SEAS.

The way termites operate is a phenomenon called stigmergy. This means that the termites don't observe each other, but changes in the environment around them -- much like the way ants leave trails for each other.

The Termes robots operate on the same principle. Each individual robot doesn't know how many other robots are operating, but all are able to gauge changes in the structure and readjust on the fly accordingly.

This means that if one robot breaks down, it does not affect the rest of the robots. Engineers simply program the robots with blueprints and leave them alone to perform the work.

The robots at the moment are quite small -- about the size of a toy car -- but are quite simple, operating on just four simple types of sensors and three actuators. According to the team, they could be easily scaled up or down to suit the needs of the project, and could be deployed in areas where it's difficult for humans to work -- the moon, for instance, although that's an extreme example.

"It may be that in the end you want something in between the centralized and the decentralized system -- but we've proven the extreme end of the scale: that it could be just like the termites," Nagpal said. "And from the termites' point of view, it's working out great." [...]
Once more, the future is here. Follow the link for video and embedded links.
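
The stigmergy principle is simple enough to sketch in code. Below is my own toy illustration, not the Harvard team's algorithm: each simulated robot ignores the other robots entirely, reads only the shared structure, and adds a brick wherever one local rule allows. A staircase still emerges, and any robot can drop out without hurting the result.

import random

# A toy illustration of stigmergy: "robots" that never talk to each other
# build a staircase by reading the shared structure and following one
# purely local rule. There is no central coordinator, and losing any
# individual robot doesn't matter.
COLUMNS = 6
TARGET = list(range(COLUMNS, 0, -1))   # desired column heights: 6, 5, 4, 3, 2, 1
heights = [0] * COLUMNS                # the shared environment every robot acts on

def robot_action(heights):
    """One robot's move: pick a random column, add a brick if the local rule allows."""
    i = random.randrange(COLUMNS)
    left = heights[i - 1] if i > 0 else float("inf")
    # Local rule: never exceed the blueprint height, never rise above the
    # column to the left. The robot knows nothing about overall progress.
    if heights[i] < TARGET[i] and heights[i] < left:
        heights[i] += 1

actions = 0
while heights != TARGET:
    robot_action(heights)   # any robot (or several, in any order) can act
    actions += 1

print(f"Staircase finished after {actions} individual actions: {heights}")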
     

Tuesday, September 13, 2011

"Watson" the game-playing talking super computer is getting a job at your doctors office

But he won't replace your doctor. At least not right away:

IBM's 'Jeopardy' computer lands health care job
NEW YORK (CNNMoney) -- IBM's Watson computer thrilled "Jeopardy" audiences in February by vanquishing two human champs in a three-day match. It's an impressive resume, and now Watson has landed a plum job.

IBM is partnering with WellPoint, a large health insurance plan provider with around 34 million subscribers, to bring Watson technology to the health care sector, the companies said Monday.

[...]

The goal is for Watson to help medical professionals diagnose and sort out treatment options for complicated health issues. Think of the system as an electronic Dr. House.

"Imagine having the ability to take in all the information around a patient's medical care -- symptoms, findings, patient interviews and diagnostic studies," Dr. Sam Nussbaum, WellPoint's (WLP, Fortune 500) chief medical officer, said in a prepared statement.

"Then, imagine using Watson analytic capabilities to consider all of the prior cases, the state-of-the-art clinical knowledge in the medical literature and clinical best practices to help a physician advance a diagnosis and guide a course of treatment," he added.

WellPoint plans to begin deploying Watson technology in small clinical pilot tests in early 2012.

[...]

IBM said early on that health care is a field where it anticipated commercialization opportunities for Watson. Other markets IBM is eying include online self-service help desks, tourist information centers and customer hotlines. [...]

So it's going to be used as a tool, like an interactive voice-activated database. The clinical pilot tests should be interesting. If it doesn't work out, perhaps Watson can get a job as a Radio DJ. "Denise" had better watch out!

I've posted about Watson previously:

      "Watson" won. But did it really?