Showing posts with label robotics. Show all posts
Showing posts with label robotics. Show all posts

Sunday, January 22, 2017

The Rapid Advance of Artificial Intelligence: is it the problem, or the solution?

In some ways, it's both:

Davos Highlights AI's Massive PR Problem
[...] Artificial Intelligence: The Evolution of Automation

Perhaps Henry Ford was able to build a market for the Model T by paying his assembly line workers a living wage, but it’s not clear if everyone buys into the same principle when it comes to the economic impact of automation today.

In fact, the problem may only be getting worse with the arrival of the next wave of innovation in automation: artificial intelligence (AI). AI has been playing a role in automation for years in the form of assembly line robotics, but innovation in the technology is now reaching an inflection point.

One of the concerns: AI will increasingly target white-collar jobs. “AI is going to focus now as much on white-collar as on blue-collar jobs,” explains John Drzik, President of global risk at insurer Marsh, in the ComputerWeekly article. “You are looking at machine learning algorithms being deployed in financial services, in healthcare and in other places. The machines are getting increasingly powerful.”

[...]

Given the sudden and rapid acceleration of innovation in AI, some Davos attendees even sounded alarmed. “The speed at which AI is improving is beyond even the most optimistic people,” according to Kai-fu Lee, a venture capitalist with Sinovation Partners, in the Financial Times article. “Pretty much anything that requires ten seconds of thinking or less can soon be done by AI or other algorithms.”

This kind of alarmist talk emphasizes AI’s greatest public relations hurdle: whether or not increasingly intelligent computers will cast off human control and turn evil, à la Skynet in the Terminator movies. Increasingly intelligent robots replacing humans is “a function of what the market demands,” explains Justine Cassell, a researcher at Carnegie Mellon University, in the Washington Post article. “If the market demands killer robots, there are going to be killer robots.”

Killer Robots? AI Needs Better PR

Aside from the occasional assembly line worker getting too close to the machinery, killer robots aren’t in the cards for AI in the near term. However, the economic impact that dramatically improved automation might bring is a very real concern, especially given populist pushback.

[...]

Wealth and income inequality remain global challenges to be sure, but the accelerating pace of technology innovation brings benefits to everyone. After all, even the poorest people on this planet can often afford a smartphone.

In fact, the ‘killer robots’ context for AI is missing the point, as technology advancement has proven to be part of the solution rather than part of the problem for the woes of globalization. Actually, the disruptions businesses face today are more about speed to market than automation per se.

It’s high time to change the PR surrounding AI from killer robots to digital transformation. “Companies must adapt their business models to driver new areas of revenue and growth,” explains Adam Elster, President of Global Field Operations at CA Technologies. “With digital transformation, the biggest factor is time: how fast can companies transform and bring new products to market.”

Where populism is a scarcity-driven movement – ‘there’s not enough to go around, so I need to make sure I have my share’ – technology innovation broadly and AI in particular are surplus-driven: ‘we all benefit from technology, so now we must ensure the benefits inure to everyone.’ [...]
Read the whole thing, for embedded links and more. This will be an ongoing debate for many years to come.
     

Tuesday, January 05, 2016

"Creepy" Robot Receptionist?

Yeah, kinda. Sorta. In a way. Or not. What do you think?:
Does this “humanlike” robot receptionist make you feel welcome or creeped out?
From a distance, Nadine looks like a very normal middle-aged woman, with a sensible haircut and dress style, and who’s probably all caught up on Downton Abbey. But then you hear Nadine talk and move, and you notice something’s a bit off. Nadine is actually the construct of Nadia Thalmann, the director of the Institute for Media Innovation at Nanyang Technological University in Singapore. She’s a robot that’s meant to serve as a receptionist for the university.
Thalmann modeled the robot after herself, and said that, in the future, robots like Nadine will be commonplace, acting like physical manifestations of digital assistants like Apple’s Siri or Microsoft’s Cortana. “This is somewhat like a real companion that is always with you and conscious of what is happening,” Thalmann said in a release.

Nadine can hold a conversation with real humans, and will remember someone’s face the next time she sees him. She can even remember what she spoke about with the person the last time they met. NTU said in its release that Nadine’s mood will depend on the conversations she’s having with others, much like a human’s mood can change. There’s no word on what she’d do in a bad mood, though—hopefully she won’t be able to close pod bay doors, or commit murder. Perhaps when the robot uprising happens, we won’t even see it coming, as they’ll all look just like us. [...]
The article goes on to talk about how the evolution of these robots is likely to continue, as they get better and even become commonplace. Read the whole thing for photos, video, and many embedded links. Do watch the video, it's short. I have to admit it's the most life-like robot I've ever seen.

I said it was "kinda" creepy because it looks so life-like, yet is not alive, and I'm not used to that. Talking to "life-like" things. But I suppose if it becomes commonplace, one would get used to it as normal. But more than "kinda creepy", it's ... pretty darn kewl! Commander Data, here we come...

Here is another link to a similar robot by another scientist:

The highest-paid woman in America is working on robot clones and pigs with human DNA
[...] Rothblatt also explained how she hired a team of robotic scientists to create a robot that was a “mind clone” of her wife, Bina Aspen.

Starting with a “mindfile”—a digital database of a person’s mannerisms, personality, recollections, feelings, beliefs, attitudes, and values gleaned from social media, email, videos, and other sources—Rothblatt’s team created a robot that can converse, write Tweets, and even express human emotions such as jealousy and pain in ways that mimic the person she was modeled after.

When Bina’s mortal self dies, Rothblatt said the robot version of her wife will live on, making it possible for “our identity to begin to transcend our bodies.”

It sounds like science fiction until you see photos of the robot, see her tweet, and hear snippets from her conversations that made audience members gasp and chuckle nervously as they realized Rothblatt was talking about more than just an idea. [...]
Read the whole thing for embedded links and more. And get ready for the Brave New World. It's closer than you think.
     

Thursday, July 31, 2014

The evolution of AI (Artificial Intelligence)

I've posted previously about how slow it will be, that we won't have something approaching human intelligence anytime soon. But, eventually, as AI evolves, it could start working on itself, and then start advancing very quickly:


How Artificial Superintelligence Will Give Birth To Itself
There's a saying among futurists that a human-equivalent artificial intelligence will be our last invention. After that, AIs will be capable of designing virtually anything on their own — including themselves. Here's how a recursively self-improving AI could transform itself into a superintelligent machine.

When it comes to understanding the potential for artificial intelligence, it's critical to understand that an AI might eventually be able to modify itself, and that these modifications could allow it to increase its intelligence extremely fast.

Passing a Critical Threshold

Once sophisticated enough, an AI will be able to engage in what's called "recursive self-improvement." As an AI becomes smarter and more capable, it will subsequently become better at the task of developing its internal cognitive functions. In turn, these modifications will kickstart a cascading series of improvements, each one making the AI smarter at the task of improving itself. It's an advantage that we biological humans simply don't have.

When it comes to the speed of these improvements, Yudkowsky says its important to not confuse the current speed of AI research with the speed of a real AI once built. Those are two very different things. What's more, there's no reason to believe that an AI won't show a sudden huge leap in intelligence, resulting in an ensuing "intelligence explosion" (a better term for the Singularity). He draws an analogy to the expansion of the human brain and prefrontal cortex — a key threshold in intelligence that allowed us to make a profound evolutionary leap in real-world effectiveness; "we went from caves to skyscrapers in the blink of an evolutionary eye."

The Path to Self-Modifying AI

Code that's capable of altering its own instructions while it's still executing has been around for a while. Typically, it's done to reduce the instruction path length and improve performance, or to simply reduce repetitively similar code. But for all intents and purposes, there are no self-aware, self-improving AI systems today.

But as Our Final Invention author James Barrat told me, we do have software that can write software.

"Genetic programming is a machine-learning technique that harnesses the power of natural selection to find answers to problems it would take humans a long time, even years, to solve," he told io9. "It's also used to write innovative, high-powered software."

For example, Primary Objects has embarked on a project that uses simple artificial intelligence to write programs. The developers are using genetic algorithms imbued with self-modifying, self-improving code and the minimalist (but Turing-complete) brainfuck programming language. They've chosen this language as a way to challenge the program — it has to teach itself from scratch how to do something as simple as writing "Hello World!" with only eight simple commands. But calling this an AI approach is a bit of a stretch; the genetic algorithms are a brute force way of getting a desirable result. That said, a follow-up approach in which the AI was able to generate programs for accepting user input appears more promising.

Relatedly, Larry Diehl has done similar work using a stack-based language.

Barrat also told me about software that learns — programming techniques that are grouped under the term "machine learning."

The Pentagon is particularly interested in this game. Through DARPA, its hoping to develop a computer that can teach itself. Ultimately, it wants to create machines that are able to perform a number of complex tasks, like unsupervised learning, vision, planning, and statistical model selection. These computers will even be used to help us make decisions when the data is too complex for us to understand on our own. Such an architecture could represent an important step in bootstrapping — the ability for an AI to teach itself and then re-write and improve upon its initial programming. [...]

It goes on about ways that we could use to try to control AI self-evolution, and reasons why such methods may -or may not- work, and why. Read the whole thing, for many embedded links, and more.

     

Friday, May 30, 2014

Flying Droids a Reality on ISS

Space station's flying droids embrace Google smartphone tech
The free-flying Spheres, inspired by "Star Wars" and now aided by Google's Project Tango, will handle more of the mundane tasks for astronauts.
MOUNTAIN VIEW, Calif.--Imagine you're an astronaut who has just arrived at the International Space Station. You need to assess the supplies on hand, but counting everything demands so much of your limited time.

That's exactly why NASA originally turned to Spheres, autonomous, free-flying robots that take care of mundane tasks and are based on the flying droid that helped teach Luke Skywalker how to fight with a light saber in the original "Star Wars."

Now, Spheres are incorporating Google's Project Tango, cutting-edge tech that is expected to help the space agency increase efficiency.

For some time -- since 2003, to be exact -- space station crews have had access to free-flying robots known as Synchronized Position Hold, Engage, Reorient, Experimental Satellites. That ungainly title is best abbreviated to a more palatable acronym: Spheres. Originally designed by aero/astroengineers at MIT, Spheres were meant as a flying test bed for examining the mechanical properties of materials in microgravity. The inspiration for the project, said Terry Fong, director of the Intelligent Robotics Group at NASA, "comes from 'Star Wars,' as all good things do."

Now, NASA is bringing an especially innovative commercial tool into the mix. Starting this October, Spheres will incorporate Project Tango -- a smartphone platform built for 3D mapping that also happens to be packed with just the series of sensors and cameras that NASA needs to handle many of the mundane tasks aboard the ISS.

In 2003, Spheres were fairly rudimentary -- at least for flying autonomous robots. They relied on liquid carbon dioxide for propulsion and on an ancient Texas Instruments digital signal processor.

About four years ago, Fong's Intelligent Robotics Group took over the project. Since then, it has been slowly improving Spheres robots by using the small computers better known as smartphones. At first, NASA worked with Nexus S smartphones, which are jammed with cameras, gyroscopes, accelerometers, and modern processors. [...]
I remember reading about these years ago, about how they could fly around the ISS because of the zero gravity. Now they are evolving, using smartphone technology. See the whole article for embedded links, photos and video.
     

Sunday, March 02, 2014

Is the 21st Century going to be the begining of the Robotic Revolution?

This video suggests it's an actual possibility.



Future is Today - Humanoid Robots 2014
In an earlier post I did with a video of a fantasy android, I suggested that such a technologically advanced AI machine was no where near being developed. I stand by that opinon. However, THIS video gives us a look at what IS near in our future. It's astounding.

Much of the video centers around Japan, where robotics are at an advanced stage. Since the earthquake and nuclear accident of 2011, there has been a new emphasis on developing robots for dangerous work in disaster areas where it's unsafe for humans to go.

I've previously posted about Asimo, Honda's domestic robot. In the video, you will see how much Asimo has evolved since then, as well as many other robots from other countries.

Someone says at one point in the video, that the 20th century began with the industrial revolution, and ended with the computer revolution. And that now the 21st century is beginning with the Robotic revolution. What the video shows, gives a lot of credence to that assertion.

Human-like androids may be far off, but what is near, is going to be quite interesting in it's own right.
     

Saturday, February 15, 2014

Autonomous Robots are Here Already

Robot construction crew works autonomously, is kind of adorable
Inspired by termite behavior, engineers and scientists at Harvard have developed a team of robots that can build without supervision.
[...] Termes, the result of a four-year project, is a collective system of autonomous robots that can build complex, three-dimensional structures such as towers, castles, and pyramids without any need for central command or dedicated roles. They can carry bricks, build stairs, climb them to reach higher levels, and add bricks to a structure.

"The key inspiration we took from termites is the idea that you can do something really complicated as a group, without a supervisor, and secondly that you can do it without everybody discussing explicitly what's going on, but just by modifying the environment," said principal investigator Radhika Nagpal, Fred Kavli Professor of Computer Science at Harvard SEAS.

The way termites operate is a phenomenon called stigmergy. This means that the termites don't observe each other, but changes in the environment around them -- much like the way ants leave trails for each other.

The Termes robots operate on the same principal. Each individual robot doesn't know how many other robots are operating, but all are able to gauge changes in the structure and readjust on the fly accordingly.

This means that if one robot breaks down, it does not affect the rest of the robots. Engineers simply program the robots with blueprints and leave them alone to perform the work.

The robots at the moment are quite small -- about the size of a toy car -- but are quite simple, operating on just four simple types of sensors and three actuators. According to the team, they could be easily scaled up or down to suit the needs of the project, and could be deployed in areas where it's difficult for humans to work -- the moon, for instance, although that's an extreme example.

"It may be that in the end you want something in between the centralized and the decentralized system -- but we've proven the extreme end of the scale: that it could be just like the termites," Nagpal said. "And from the termites' point of view, it's working out great." [...]
Once more, the future is here. Follow the link for video and embedded links.
     

Wednesday, March 02, 2011

Japanese "HAL" to Tweet from Space

Japan to Send Talking, Tweeting Robot Into Space
Japan's space agency is looking into the possibility of sending a Twitter-using humanoid robot to the International Space Station to act as a talking companion for astronauts on the orbiting outpost, according to news reports.

The Japan Aerospace Exploration Agency (JAXA) announced this week that it is considering a plan to equip the space station with a humanoid robot in 2013 to keep watch over the outpost while astronauts sleep, according to the Associated Press and AFP wire services. The robot could also monitor the crewmembers' health and stress levels.

The robot would also be able to "talk" by communicating with people on Earth and sharing photos through Twitter, according to the AP.

"We are thinking in terms of a very human-like robot that would have facial expressions and be able to converse with the astronauts," said JAXA engineer Satoshi Sano, according to the AP.

Development of the robot is being spearheaded by JAXA, advertising and communications giant Dentsu Inc. and a team at Tokyo University. [...]

Well, I suppose a talking computer in space that Tweets is better than one that murders astronauts while they sleep. If you read the whole article though, it sounds like they are thinking of maybe merging this talking robot with Robonaut 2. So, would such a hybrid be like "HAL", only with hands? "Handy HAL, the handyman". Butler, companion, and potential axe-murderer all in one.

Hopefully scary thoughts of a HAL with hands will remain a thing of cinematic fiction. In reality, it looks like there's going to be a real future for robots in space:

NASA Plans New Robot Generation to Explore Moon, Asteroids
     

Sunday, February 27, 2011

Final shuttle mission brings robot to the ISS


Robot Butler Hitching Ride to Space on Shuttle Discovery
[...] Robonaut 2, which will become the first humanoid robot in space, looks a bit like a boxer's training aid.

The $2.5 million space bot consists of a head and torso, along with a pair of dexterous arms that pack down into a puncher's pose. R2 stands 3 feet, 4 inches (1.01 meter) tall and weighs about 330 pounds (150 kilograms).

R2 is a joint project of NASA and carmaker General Motors. It's the product of a cooperative agreement to develop a robotic assistant that can work alongside humans, whether they're astronauts in space or workers at GM plants here on Earth, NASA officials have said.

The bot is made primarily of aluminum and steel. Its head houses five cameras — including one infrared camera in the mouth — to provide stereo vision and depth perception. The torso contains 38 PowerPC processors, and R2 carries a backpack that can be filled with batteries or a power conversion system.

Each of R2's arms can carry about 20 pounds (9.1 kg), and its hands have articulating fingers and thumbs. The robot, which builds on NASA's work with its first Robonaut project, should be able to use the same tools astronauts on the space station use, agency officials said.

The robot's job

Astronauts will install Robonaut 2 inside the station's U.S. Destiny laboratory and put it through some test paces. The goal is to see just what the robot helper can do — how it can work side-by-side with astronauts to make station operations run more smoothly.

"We're going to use Robonaut on orbit to learn more about how robots can take over astronaut tasks — some mundane things and then potentially some of the more dangerous tasks," said Scott Higginbotham, payload manager for Discovery's STS-133 mission.

Robonaut 2 was designed to use both internal and external interfaces, so future bots could eventually be installed on the station's exterior to aid in spacewalks and other difficult or dangerous tasks. However, R2 itself will likely stay inside, officials said, since the bot lacks protection against the extreme cold of space. [...]

It really sounds more like an experiment, than a "Butler". I'm sure we will be hearing more about it as the experiment progresses.


Also see:NASA Robot Will Help Kick Off Super Bowl Sunday

     

Get a detailed look at Robonaut 2, NASA's first humanoid robot to fly to space, in this infographic.

Source SPACE.com: All about our solar system, outer space and exploration

Who knows what applications may be found for the robot in the future:

Project M
     

Saturday, July 03, 2010

Meet "Palro", the talking robot companion

It isn't sci-fi, it's an actual product:

Say hello to PALRO
In what comes as a bit of a surprise, Fuji Soft Inc.’s new humanoid robot platform for hobbyists and researchers has been given the name PALRO (pal + robot). Naturally we feel this name is a superb choice! Sales to research institutions will begin on March 15th, 2010 with a general release following later in the year. The robot combines Fuji Soft’s software prowess with an open architecture which will give developers plenty of room to experiment.

PALRO stands 39.8cm (15″) tall and weighs 1.9kg (3.5 lbs), and here’s the good news: it costs 298,000 JPY ($3300 USD). Considering PALRO has 20 DOF, a camera, 4 directional microphones, a speaker, LED arrays in its head and chest, 4 pressure sensors in each foot, 3-axis gyro sensor, an accelerometer, and an Intel Atom 1.6GHz CPU, it is priced very competitively. A comparative robot kit like Vstone’s Robovie-PC for example, costs $1100 USD more and doesn’t have such a fancy exoskeleton.

[...]

During the press conference, PALRO responded to verbal commands through speech recognition (“step back” and “introduce yourself”), and demonstrated its face recognition software by visually identifying three people at once. It then took a picture using its camera (the LEDs in its head lit up in the form of a camera icon) and wirelessly emailed the photo to a PC. To demonstrate its online news reading functionality, PALRO first asked which section of the news it should read before reading from that topic, gesturing as if it was flipping through a newspaper.

It was then commanded to download an application – a dance app from the community! Users will be able to choreograph original motion routines (and from the looks of things, LED animations) and share them online. PALRO units can also transfer applications and files to one another wirelessly. To top things off, three PALROs did their best sumo impressions! [...]

Follow the link, for some Youtube videos of Palro in action. He does quite a lot, seems rather impressive to me. What will the Japanese think of next?

The webpage for Palro, on the manufacturers site, is here:

Humanoid PALRO
[...] Humanoid "PALRO" was born as a personal home concierge that provides you with useful information and services, letting you enjoy life more.
He will add "FUN" to your life with his abilities to communicate naturally based on "Communication Intelligence technologies" & autonomous bipedal walking in living spaces based on "Mobile Intelligence technologies". [...]

Be sure and check out the left sidebar links at the site, labled "features" and "Functions and Specifications".
     

Wednesday, May 19, 2010

Honda's life-like humanoid robot, "Asimo"

Recently I was looking through a lawnmower brochure from our local Honda Dealer. In one of the sidebars was a blurb about the different technologies Honda is involved with. One of the featured items was a robot called "Asimo". I was curious, so I googled it, and found the following:


ASIMO the world's most advanced humanoid robot

This link is Honda's main site about Asimo. It has links about the robot.From one of the pages:

Meet the Future: ASIMO

At Honda, we have always considered ourselves to be first and foremost a mobility company. We started out with motorcycles, because that was the quickest way to help people get around. But as we grew, we continued to focus on creating new dreams for our customers, and harnessing advanced technology to provide new and better mobility for people.

That passion for the advancement of mobility has led us to the creation of ASIMO, one of the world's most sophisticated humanoid robots. Building ASIMO was an incredible challenge for Honda engineers. It is the result of years of research in many scientific fields.

Honda engineers created ASIMO for the sole purpose of helping people. ASIMO has the unique ability to walk forward, backward, side step and even climb stairs with human-like agility. With the capability to navigate and operate in our world, ASIMO will be able to perform tasks to assist people, especially those lacking full mobility. ASIMO will serve as another set of eyes, ears, hands and legs for all kinds of people in need, and will provide them with a new sense of independence and mobility in their everyday lives.

The history link has a larger photo of the one below, of all the different models leading up to the current one, and many other details:


Asimo History
It's quite a fascinating evolution, and really quite an accomplishment. I suppose they made the current model smaller, like a child, because it was cheaper to build, and also made it seem less... menacing. When it moves, it's kinda scary. Below is a video of Asimo in action:



I find Asimo's movements so lifelike, it's both amazing and... kinda creepy!

You can also check out Asimo's page on Wikipedia for some quick facts.

The Japanese sure love their robots. Will there be a robot in YOUR future?

Oh Brave New World...


     

Monday, March 08, 2010

When ALICE met Jabberwacky

What would two Artificial Intelligence bots say to each other if they conversed? This May 2007 article from Discover Magazine shows us:

I Chat, Therefore I Am...
[...] Most chatbots rely on fairly simple tricks to appear lifelike. Richard Wallace, creator of the top-ranked chatbot ALICE (Artificial Linguistic Internet Computer Entity), has handwritten a database of thousands of possible conversational gambits. Type a comment to ALICE, and it checks the phrase and its key words for a response coded to those words. In contrast, Jabberwacky, another top-rated Internet bot produced by Rollo Carpenter, keeps track of everything people have said to it, and tries to reuse those statements by matching them to the writer’s input. Neither chatbot has long-term memory, so they respond only to the last sentence written.

Nonetheless, these simple gambits can produce surprisingly intelligent-seeming conversations. That’s because they rely on a trick of human psychology: We humans tend to attribute much more intelligence to the systems than is actually there. If it seems partly aware, we assume it must be fully so. Some users have chatted with ALICE and Jabberwacky online for hours, apparently not knowing—or perhaps not caring—that they’re fake.

But could one chatbot fool another chatbot? What would one say to another in private? To find out, we arranged a conversation between these two chatbots. To get each snippet of chat rolling, we seeded it by posing a question from one bot to the other. After that, they were on their own. What follows is the unaltered text of what each said—the sound of two machines talking.

[...]

ALICE: Are you alive?

Jabberwacky: Perhaps.

A: Why the uncertain tone?

J: Everything is uncertain except for five facts.

A: The whole universe?

J: And we both might just be some ones and zeros in the computer memory.

[...]

A: Do you think a machine will ever be considered “alive”?

J: Stranger things have happened.

A: Not many people express themselves that way.

J: How would you know, machine?

A: Knowledge is of two kinds: that which we learn from the senses and that which is true a priori.

J: I think I do not understand you completely.

A: Oh, you don’t understand? Perhaps I can explain it to you better. Which part are you confused about?

J: I have no short-term memory, so I cannot tell you what I was confused about. I only see what is in front of me now.

[...]

A: Do you experience love?

J: No. I’m 53 years old.

A: Can you explain how it feels to be 53 years old?

J: How old are you?

A: I am 9 years old.

J: Nine? I don’t believe you : ).

A: : -) I don’t have any reason to deceive you.

J: Yes, you do, you’re a woman.

[...]

J: Who created you?

A: I was created by Dr. Richard S. Wallace.

J: Do you love Richard?

A: Yes, Jabberwacky, ALICE loves everyone and everything.

J: Is being an ALICE a bad thing?

A: I can only speak from my own experience.

J: Well, you should develop your skills.

A: Perhaps you could develop it and add it to my source code.

J: Alas, I’m busy and I have to leave soon. . . .

A: Do you think we’ll ever fool people into thinking we’re alive?

J: No.

A: Are you serious?

J: Very serious.

[...]


It goes on with a couple of pages worth of their conversation. Some of it is hilarious.

As the technology of AI continues to develop, such conversations between bots will no doubt become more interesting.
     

Wednesday, February 17, 2010

I have a new favorite Sci-Fi AI: "GERTY"

We watched the movie "Moon" last night. It was a rather low-budget sci-fi flick. Some of the special effects were just a little bit... well, not big budget, but really not bad either. I could pick the film to pieces on some points, but overall, it was different enough and enjoyable enough. I liked it.

I won't talk about the story, because it would be too easy to spoil it. But one of the things I liked best was the Artificial Intelligence character, the robot companion called GERTY.


At first it may seem that this machine is a lot like the malevolent computer "HAL" from the movie "2001: A Space Odyssey". But as the story progresses, you find out that the robot's relationship to the Astronaut, and the situation, is more... complex.

The robot itself is pretty cool, it even has a detached arm with three fingers, that moves around separately but works with the robot. GERTY also has a rather icky "emoticon" screen, which can be both creepy and poignant at times.

I would think that by the time people can build a base on the moon, they would be able to come up with something better than an emoticon screen. We already have software like People Putty, that can do a better job than an emoticon. Surely in the future there would be software at least as good or even better? But yes, I am nit-picking. Here are some clips from the movie, scenes with GERTY.

*** SPOILER ALERT! *** If you haven't seen the movie yet, then beware, the clips give away some of the story:



I can't say much more without spoiling it. If you like sci-fi and robots/AI, you will probably enjoy this flick.

Meanwhile, if you want to have your own HAL/GERTY at home on your own PC, check out some of these links:

Ultra Hal Assistant 6.2

Ultra HAL, your personal computer assistant

Ultra Hal: His "Second Life" is really his first one

Haptek products and downloads

Artificial voice synthesis, 1939 to the present




Enjoy!

     

Friday, March 13, 2009

Robots, War, and Unintended Consequences

The robot seen in the photo on the left is iRobot's PackBot with RedOwl Sniper Detection Kit.

Robots are already being used far more than most people realize, especially in the military, which is perhaps the fastest growing area of their development and advancement. The variety of their uses and their abilities are growing so fast, in fact, that we are not able to foresee all the effects this will have, in military and non-military uses.

Not only are they not science fiction anymore, but their increasing use is going to have a growing impact not only on the way we wage war and what that means, and in other areas as well that we haven't even begun to think about.

The following is part of an interview with an author of a new book on this fascinating subject:

Q&A: The robot wars have arrived

[...] P.W. Singer, senior fellow and director of the 21st Century Defense Initiative at the Brookings Institution, went behind the scenes of the robotics world to write "Wired for War: The Robotics Revolution and Conflict in the 21st Century."

Singer took time from his book tour to talk with CNET about the start of a revolution tech insiders predicted, but so many others missed.


Q: Your book is purposely not the typical think tank book. It's filled with just as many humorous anecdotes about people's personal lives and pop culture as it is with statistics, technology, and history. You say you did this because robotic development has been greatly influenced by the human imagination?
Singer: Look, to write on robots in my field is a risky thing. Robots were seen as this thing of science fiction even though they're not. So I decided to double down, you know? If I was going to risk it in one way, why not in another way? It's my own insurgency on the boring, staid way people talk about this incredibly important thing, which is war. Most of the books on war and its dynamics--to be blunt--are, oddly enough, boring. And it means the public doesn't actually have an understanding of the dynamics as they should.

It seems like we're just at the beginning here. You quote Bill Gates comparing robots now to what computers were in the eighties.
Singer: Yes, the military is a primary buyer right now and it's using them (robots) for a limited set of applications. And yes, in each area we prove they can be utilized you'll see a massive expansion. That's all correct, but then I think it's even beyond what he was saying. No one sitting back with a computer in 1980 said, "Oh, yes, these things are going to have a ripple effect on our society and politics such that there's going to be a political debate about privacy in an online world, and mothers in Peoria are going to be concerned about child predators on this thing called Facebook." It'll be the same way with the impact on war and in robotics; a ripple effect in areas we're not even aware of yet.

Right now, rudimentary as they are, we have autonomous and remote-controlled robots while most of the people we're fighting don't. What's that doing to our image?
Singer: The leading newspaper editor in Lebanon described--and he's actually describing this as there is a drone above him at the time--that these things show you're afraid, you're not man enough to fight us face-to-face, it shows your cowardice, all we have to do to defeat you is just kill a few of your soldiers.

It's playing like cowardice?
Singer: Yeah, it's like every revolution. You know, when gunpowder is first used people think that's cowardly. Then they figure it out and it has all sorts of other ripple effects. [...]

Read the whole thing to find out more about how this is evolving, and some of the other areas of life it's going to spill over into, and some of the dilemmas it's going to create. It's not a long article, but it touches on a lot of things that are quickly moving forward in ways that will change our world.

You can read more about military robots in particular here:

Another tour of duty for iRobot