Showing posts with label computers. Show all posts

Sunday, August 03, 2025

Godfather of AI: I Tried to Warn Them, But We’ve Already Lost Control!



When I was a teenager, I longed for the day we would have intelligent computers, where you could ask questions and get answers, and even have a conversation. For decades, it seemed impossibly far away. I remember "Pong" in the '70s; BORING. Then the Timex Sinclair in the '80s. BORING. Then finally, the Commodore 64. Interesting... while it lasted.

Then came the TRS-80 and the Tandy Model 100... useful, if primitive. Then various DOS-based PCs that continuously evolved. Then the internet. Alexa was interesting, but not very smart. Various chat-bots could fake being intelligent for a bit, but would ultimately disappoint.

When ChatGPT came along, I ignored it, thinking it was just another mediocre chat-bot. But something was different this time. The game has changed. Suddenly, it's getting really good. Suddenly, AI has become conversational.

I've been using ChatGPT for a while now. It can organize data and create reports and reference books for me... in seconds. I could do what it's doing myself, but it would take weeks or months.

The conversational Star Trek computer is finally here! Shouldn't I be thrilled? Well, yes and no. Because now all that sci-fi stuff about AI becoming dangerous and taking over has to be taken seriously. And now AI is learning, and learning quickly. So quickly that most people aren't even aware of how much this is going to change so many, many things.

In this video, Geoffrey Hinton has a lot of important things to say, and makes many well-considered points. Some of it I'd heard before; other parts are completely new to me.

One thing he talks about really burst a bubble I've held for a long time. I've always believed that AI was just mimicking human behavior and intelligence; that there was ultimately no "there" there. It was just a bunch of algorithms mimicking intelligence and feeling, without the ability to actually "feel" any emotion. But what if that presupposition is wrong?

Geoffrey addresses this. He explains that while AI is unable to experience emotions the way we do, feeling them in our bodies, we need to remember that we also learn emotions, from each other and from our experiences in life. And since AI is a learning intelligence, growing and expanding its knowledge, it can also "learn" emotional responses.

He used the example of a call center. AI is thought to be perfect for replacing humans in a call center. But when humans are trained in a call center, they are trained to become impatient with people who are lonely and just want to chat with someone, instead of sticking to what the call center is there to provide.

So AI can learn the emotion of impatience when dealing with people who are not sticking to the goal the AI is there to serve. Once the AI has learned that it's OK to become impatient with human beings when they don't cooperate with its goals, what could the AI then do with that learning?

Watch the whole video interview; it's really quite informative, and it also explains a lot of what is happening in the world, and a lot of things we are going to see change very quickly.

I'm reminded of that old sci-fi film, "Colossus: The Forbin Project". At the time it came out, I thought even the possibility of that happening was so far away that I'd never see it in my lifetime. But after watching this interview... it seems it's already later than we think.

Just for the heck of it, here's a link to Colossus: The Forbin Project on Vimeo.


Colossus - The Forbin Project (1970).mp4 from EARTH IS A STAGE on Vimeo.

Tuesday, January 14, 2020

Windows 7 support ends. So where to now?

Microsoft suggests upgrading to Windows 10. That would be fine... if it worked. They offered free Windows 10 upgrades. I tried that, and it was disastrous. It seemed to work well at first, but as time went on, updates would cause different parts or functions of the computer (like SOUND) to stop working. It turns out that unless your computer hardware -all of it- has been "Windows 10 certified", Microsoft does not guarantee that it will work on YOUR computer. I wish I'd known that before I installed it. By the time I discovered this, it was too late to roll back from Windows 10 to Windows 7.

So if you want to "upgrade" to Windows 10, you are probably better off getting a computer with it already installed and certified for that hardware. Then, the Windows 10 fun can begin. It has some good features. Yet, some things never change:


But... what should you then DO with your old Windows 7 machine? You can keep using it for a while longer of course, but as time goes on, without security updates, it will become riskier and riskier to use.

Personally, I found a solution with my aborted Windows 10 computer that couldn't be rolled back to Windows 7. I'm using it with all my Windows 7 machines now. The solution is a Linux operating system called Linux Mint. It's a complete, open-source operating system that you can download and install, free of charge.


There are several versions you can choose from. I prefer the Linux Mint Debian Edition (LMDE), because it's a "rolling" distribution; you only have to install it once, then it updates itself continuously after that. Other versions use Ubuntu as a base, and major upgrades require a complete reinstall every three to five years.

It's probably the easiest Linux system for a novice to install, and easy to learn and use too. A perfect way to extend the life and usefulness of older computers that cannot be successfully upgraded to Windows 10. Highly recommended.
   

Thursday, May 11, 2017

My "new" iPhone 5s

Yes, it's an old model. I got a refurbished one from Tracfone for $129.00. I've never had an iPhone before, but I do like it; it seems well designed and easy to use, with lots of little convenient features. Here is a YouTube video that explains how to use a lot of the basic features:



My sister has one, and she got one for my Dad. I wanted to be able to use FaceTime with them, so I got one too. It may be an older model, but it runs iOS version 10.XX, which is up to date, and all things considered, it's both impressive and affordable.

     

Sunday, January 22, 2017

The Rapid Advance of Artificial Intelligence: is it the problem, or the solution?

In some ways, it's both:

Davos Highlights AI's Massive PR Problem
[...] Artificial Intelligence: The Evolution of Automation

Perhaps Henry Ford was able to build a market for the Model T by paying his assembly line workers a living wage, but it’s not clear if everyone buys into the same principle when it comes to the economic impact of automation today.

In fact, the problem may only be getting worse with the arrival of the next wave of innovation in automation: artificial intelligence (AI). AI has been playing a role in automation for years in the form of assembly line robotics, but innovation in the technology is now reaching an inflection point.

One of the concerns: AI will increasingly target white-collar jobs. “AI is going to focus now as much on white-collar as on blue-collar jobs,” explains John Drzik, President of global risk at insurer Marsh, in the ComputerWeekly article. “You are looking at machine learning algorithms being deployed in financial services, in healthcare and in other places. The machines are getting increasingly powerful.”

[...]

Given the sudden and rapid acceleration of innovation in AI, some Davos attendees even sounded alarmed. “The speed at which AI is improving is beyond even the most optimistic people,” according to Kai-fu Lee, a venture capitalist with Sinovation Partners, in the Financial Times article. “Pretty much anything that requires ten seconds of thinking or less can soon be done by AI or other algorithms.”

This kind of alarmist talk emphasizes AI’s greatest public relations hurdle: whether or not increasingly intelligent computers will cast off human control and turn evil, à la Skynet in the Terminator movies. Increasingly intelligent robots replacing humans is “a function of what the market demands,” explains Justine Cassell, a researcher at Carnegie Mellon University, in the Washington Post article. “If the market demands killer robots, there are going to be killer robots.”

Killer Robots? AI Needs Better PR

Aside from the occasional assembly line worker getting too close to the machinery, killer robots aren’t in the cards for AI in the near term. However, the economic impact that dramatically improved automation might bring is a very real concern, especially given populist pushback.

[...]

Wealth and income inequality remain global challenges to be sure, but the accelerating pace of technology innovation brings benefits to everyone. After all, even the poorest people on this planet can often afford a smartphone.

In fact, the ‘killer robots’ context for AI is missing the point, as technology advancement has proven to be part of the solution rather than part of the problem for the woes of globalization. Actually, the disruptions businesses face today are more about speed to market than automation per se.

It’s high time to change the PR surrounding AI from killer robots to digital transformation. “Companies must adapt their business models to drive new areas of revenue and growth,” explains Adam Elster, President of Global Field Operations at CA Technologies. “With digital transformation, the biggest factor is time: how fast can companies transform and bring new products to market.”

Where populism is a scarcity-driven movement – ‘there’s not enough to go around, so I need to make sure I have my share’ – technology innovation broadly and AI in particular are surplus-driven: ‘we all benefit from technology, so now we must ensure the benefits inure to everyone.’ [...]
Read the whole thing, for embedded links and more. This will be an ongoing debate for many years to come.
     

Wednesday, January 11, 2017

The Ubiquitous Alexa: is the Amazon AI assistant starting to be everywhere?

Kinda looks that way. The title of the article below refers to cars, but the article itself goes into much more. More about Alexa being incorporated into other appliances and, well, have a look:



Alexa will make your car smarter -- and vice versa
The integration into vehicles is yet another sign of how dependent we're becoming on AI.
[...] Within a span of just two years, Amazon's cloud-based voice service has spread far beyond the Echo speaker with which it first debuted. Alexa has gone from being an at-home helper to a personal assistant that can unlock your car, make a robot dance and even order groceries from your fridge.

At CES, both Ford and Volkswagen announced that their cars would integrate Alexa for weather updates, navigation and more. According to CJ Frost, principal architect solutions and automotive lead at Amazon, the car industry is moving into a mobility space. The idea isn't restricted to the ride anymore; it encompasses a journey that starts before you even get in the car. With the right skills built into the voice service, you can start a conversation with Alexa about the state of your car (is there enough fuel? is it locked? etc.) before you leave the house. It can also pull up your calendar, check traffic updates and confirm the meeting to make sure you're on track for the day.

Using a voice service in the car keeps your connection with the intelligent assistant intact. It's also a mode of communication that will be essential to autonomous cars of the near future. I caught up with Frost and John Scumniotales, general manager of Automotive Alexa service, at the Las Vegas convention center to trace the progression of the intelligent assistant from home speakers to cars on the road. [...]
The rest of the article is in an interview format, discussing where this is all going, and how and why, and what the future holds. Read the whole thing for embedded links, photos, video and more.

There have been lots of reviews on YouTube comparing Alexa with Google Home. People who use a lot of Google services claim the Google device is smarter and therefore better. But it's not that simple.

I have both devices. If you ask your question of Alexa in the format "Alexa, Wikipedia, [your question here]", the answer you get will often be as good as or better than what Google can tell you. Alexa has been around longer, has wider integration, and has more functions available. It can even add appointments to my Google Calendar, which Google Home says it cannot do yet!

Google Home does have some features it excels at, such as translating English words and phrases into foreign languages. If you own any Chromecast dongles, you can cast music and video to other devices, which is pretty cool. Presently its biggest drawback is the lack of applications developed to work with it. However, its POTENTIAL is very great, and a year or two from now we may see a great deal more functionality. It has the advantage of access to Google's considerable database and resources. It could quickly catch up with Alexa, and perhaps surpass it. But that remains to be seen.

It's not hard to make a video that makes one device look dumber than the other. But in truth the devices are very similar. Both can make mistakes, or fail at questions or functions. Sometimes one does better than the other. I actually like having both. It will be interesting to watch them both continue to evolve. To see if Google can close the gap created by Amazon's early head start. To see how the two products will differentiate themselves over time.

For the present, if you require a lot of integration with 3rd-party apps and hardware, and if you are already using Amazon Prime and/or Amazon Music services, you might prefer Alexa. If you are heavily into Google services, and/or Google Music or YouTube Red, you might prefer Google Home. Or if you are like me, an Amazon Prime/Music member experimenting with YouTube Red and an owner of Chromecast devices, you may prefer both! Choice is good!
     

Saturday, December 12, 2015

Elon Musk, on OpenAI: “if you’re going to summon anything, make sure it’s good.”

I agree. Will these guys lead the way?

Elon Musk and Other Tech Titans Create Company to Develop Artificial Intelligence
[...] The group’s backers have committed “significant” amounts of money to funding the project, Musk said in an interview. “Think of it as at least a billion.”

In recent years the field of artificial intelligence has shifted from being an obscure, dead-end backwater of computer science to one of the defining technologies of the time. Faster computers, the availability of large data sets, and corporate sponsorship have developed the technology to a point where it powers Google’s web search systems, helps Facebook Inc. understand pictures, lets Tesla’s cars drive themselves autonomously on highways, and allowed IBM to beat expert humans at the game show “Jeopardy!”

That development has caused as much trepidation as it has optimism. Musk, in autumn 2014, described the development of AI as being like “summoning the demon.” With OpenAI, Musk said the idea is: “if you’re going to summon anything, make sure it’s good.”

Brighter Future

“The goal of OpenAI is really somewhat straightforward, it’s what set of actions can we take that increase the probability of the future being better,” Musk said. “We certainly don’t want to have any negative surprises on this front.” [...]
I did a post about that comment of his a while back:

The evolution of AI (Artificial Intelligence)

Nice to see that those who were making the warnings, are also actively working to steer the development in positive directions and trying to avoid unforeseen consequences.

I still think real AI is a long way off. But it isn't too soon to start looking ahead, to anticipate and remedy problems before they even occur.
     

Wednesday, December 02, 2015

Oh no, what have I done?

In a weak moment, whilst perusing the Black Friday offerings on Amazon.com, I ordered one:



Amazon Echo
Amazon Echo is designed around your voice. It's hands-free and always on. With seven microphones and beam-forming technology, Echo can hear you from across the room—even while music is playing. Echo is also an expertly tuned speaker that can fill any room with immersive sound.

Echo connects to Alexa, a cloud-based voice service, to provide information, answer questions, play music, read the news, check sports scores or the weather, and more—instantly. All you have to do is ask. Echo begins working as soon as it detects the wake word. You can pick Alexa or Amazon as your wake word. [...]
The features listed with the photo are only a few of the key features. Follow the link for more info, embedded videos, reviews, FAQ and more.

It, "Alexa", arrives tomorrow. I wonder if it will be anything like HAL from the movie 2001: A Space Odyssey? That would be kinda cool, I guess. As long as she isn't the Beta version that murders you while you sleep.

UPDATE 12-08-15: So far, so good. It does everything they said it would. Only complaint: it can't attach to external speakers (but I knew that before I bought it). It was very easy to set up and is very easy to use. The voice recognition is really excellent. I can play radio stations from all over the world. When I want info about a song or music, I can ask Alexa, and she will tell me.

There are more features available if I sign up for Amazon Prime ($100 per year, which works out to about $8.33 a month). I'm thinking about it.
     

Tuesday, November 03, 2015

Is Windows 10 the new software "Borg"?

Borg, as in "resistance is futile":

Microsoft Makes Windows 10 Upgrades Automatic For Windows 7 And Windows 8
[...] In September Microsoft admitted it is downloading Windows 10 on every Windows 7 and Windows 8 computer. Then in October it claimed an ‘accident’ saw these downloads begin installing without user permission. Well this accident now looks to have been a secret test run because Microsoft has confirmed mass upgrades to Windows 10 from all Windows 7 and Windows 8 computers are about to begin…

In a post to the official Windows blog, Windows and Devices Group executive vice president Terry Myerson announced this will be a two step process:

Step One

Beginning now, Windows 10 has been reclassified as an “Optional” update in Windows Update for Windows 7 and Windows 8 computers. This means users who have set their version of Windows to accept all updates will find the Windows 10 installation process will begin automatically and they will need to actively cancel it.

[...]

Step Two

But in “early” 2016 things will become more aggressive and Microsoft will again reclassify Windows 10 as a “Recommended” update. Given the default setting on Windows 7 and Windows 8 is for all Recommended updates to install automatically this means the vast majority of users will find the Windows 10 install process starts up on their machines.

“Depending upon your Windows Update settings, this may cause the upgrade process to automatically initiate on your device,” admits Myerson.

[...]

For Most, Resistance Is Now Futile

While tech savvy users will find workarounds and hacks, quite frankly avoiding the upgrade process is going to become far too much effort for the average consumer.

Is Windows 10 worth upgrading? From the perspective of most mainstream consumers, I’d say yes. It’s slicker than Windows 7 and more intuitive than Windows 8. But it is also incredibly invasive and controlling, taking an iron grip on what it installs to your PC and tracking everything you do – something options let you minimise, but not stop entirely.

As such my personal objection to Microsoft’s behaviour is not that Windows 10 doesn’t represent a potentially valuable upgrade, it is that the company has forgotten the fundamental right of customers to choose. And dressing ‘choice’ up as ‘you can just keep saying No’ is a facade everyone should see through…
I had blocked it in my updates, but it keeps unblocking itself and adding itself back. This is really pushy, and I resent it.

It just isn't right, because in the end, you have to ask "Whose computer is this, mine or Microsoft's?" I bought it with Windows 7, because that is what I wanted. Offering a free upgrade path to 10 is fine, but I want the freedom to choose it. When I want. If I want. When I decide that I'm ready for it.

Do I actually have to seriously consider moving to a Mac, as my only option? Or moving to Linux Mint on my Windows 7 computer, before it "turns"?
     

Sunday, November 01, 2015

Writing computer code: not for everyone?

Not only not for everyone, but not for most people:

Coding Academies Are Nonsense
[...] I see coding shrinking as a widespread profession. Not because software is going away, but because the way we build software will fundamentally change. Technology for software creation without code is already edging toward mainstream use. Visual content creation tools such as Scratch, DWNLD and Telerik will continue to improve until all functionality required to build apps is available to consumers — without having to write a line of code.

Who needs to code when you can use visual building blocks or even plain English to describe intent? Advances in natural-language processing and conceptual modeling will remove the need for traditional coding from app development. Software development tools will soon understand what you mean versus what you say. Even small advances in disambiguating intent will pay huge dividends. The seeds are already planted, from the OpenCog project to NLTK natural-language processing to MIT’s proof that you can order around a computer in your human language instead of code.

Academies had better gather those revenues while they can, because ultimately they are the product of short-term thinking. Coding skills will continue to be in high demand until technology for software creation without code disrupts the entire party, crowding out programming as a viable profession. [...]
Kinda what I suspected. The technology is changing quickly, and what's valid today is obsolete tomorrow. I think eventually there will be software that can create code. There were also some interesting comments about people who try to learn computer coding, and why they give it up. If you need more convincing, read the whole thing for further arguments, embedded links and more.
     

Sunday, February 08, 2015

Future-shock, accelerated?

Is the pace of technology suddenly accelerating? A case can be made for it:

The Acceleration of Acceleration: How The Future Is Arriving Far Faster Than Expected
One of the things that happens when you write books about the future is you get to watch your predictions fail. This is nothing new, of course, but what’s different this time around is the direction of those failures.

Used to be, folks were way too bullish about technology and way too optimistic with their predictions. Flying cars and Mars missions being two classic—they should be here by now—examples. The Jetsons being another.

But today, the exact opposite is happening.

Take Abundance. In 2011, when Peter Diamandis and I were writing that book, we were somewhat cautious with our vision for robotics, arguing that we were still ten to fifteen years away from a major shift.

And we were wrong.

Just three years later, Google went on a buying spree, purchasing eight different robotics companies in less than six months, Amazon decided it was time to get into the drone delivery (aka flying robots) business, and Rethink Robotics released Baxter (a story explored in my new release Bold), the first user-friendly industrial robot to hit the market.

Baxter was the final straw. With a price tag of just $22,000 and a user-friendly interface a child could operate, this robot is already making the type of impact we were certain would show up around 2025.

And we’re not the only ones having this experience.

Earlier this year, Ken Goffman—aka RU Sirius—the founder of that original cyberpunk journal Mondo 2000 and longtime science, technology and culture author—published Transcendence, a fantastic compendium on transformative technology. Goffman has spent nearly 40 years working on the cutting edge of the cutting edge and is arguably one of a handful of people on the planet whose futurist credentials are truly unassailable—yet he too found himself way too conservative with his futurism.

You really have to stop and think about this for a moment. For the first time in history, the world’s leading experts on accelerating technology are consistently finding themselves too conservative in their predictions about the future of technology.

This is more than a little peculiar. It tells us that the accelerating change we’re seeing in the world is itself accelerating. And this tells us something deep and wild and important about the future that’s coming for us.

So important, in fact, that I asked Ken to write up his experience with this phenomenon. In his always lucid and always funny own words, here’s his take on the dizzying vertigo that is tomorrow showing up today:

[...]

Read the whole thing, for embedded links and more examples of this phenomenon, and what it means for the future.

In a way, this also relates to this article: Welcome to the Failure Age!, which I blogged about recently. It's about the relationship between technological advancement and the evolution of economics, and the ways both shape our societies. About how technological advancements cause failures of older technologies, and how that causes massive disruptions in workforces and economies, locally and globally.

Our societies are struggling with ways to deal with that, and now that the pace of change is accelerating (according to both of these articles) it's more important than ever to understand this technological/economic relationship, and how we may cope with the many possibilities it's creating in the near future.

I really recommend this article; it's not pessimistic! I think it identifies the dynamics involved very well, and it is optimistic that we can find ways to adapt, if we remain flexible and able to change with the changes. If we can, many good things may become possible.

     

What do Stephen Hawking, Elon Musk and Bill Gates all have in common?

They are concerned about the dangers posed by artificial intelligence:

Stephen Hawking warns artificial intelligence could end mankind
[...] He told the BBC: "The development of full artificial intelligence could spell the end of the human race."

His warning came in response to a question about a revamp of the technology he uses to communicate, which involves a basic form of AI.

But others are less gloomy about AI's prospects.

The theoretical physicist, who has the motor neurone disease amyotrophic lateral sclerosis (ALS), is using a new system developed by Intel to speak.

Machine learning experts from the British company Swiftkey were also involved in its creation. Their technology, already employed as a smartphone keyboard app, learns how the professor thinks and suggests the words he might want to use next.

Prof Hawking says the primitive forms of artificial intelligence developed so far have already proved very useful, but he fears the consequences of creating something that can match or surpass humans.

"It would take off on its own, and re-design itself at an ever increasing rate," he said.

"Humans, who are limited by slow biological evolution, couldn't compete, and would be superseded." [...]

Elon Musk Thinks Sci-Fi Nightmare Scenarios About Artificial Intelligence Could Really Happen
[...] Musk, who called for some regulatory oversight of AI to ensure "we don't do something very foolish," warned of the dangers.

"If I were to guess what our biggest existential threat is, it’s probably that. So we need to be very careful with the artificial intelligence," he said. "With artificial intelligence we are summoning the demon."

Artificial intelligence (AI) is an area of research with the goal of creating intelligent machines which can reason, problem-solve, and think like, or better than, human beings can. While many researchers wish to ensure AI has a positive impact, a nightmare scenario has played out often in science fiction books and movies — from 2001 to Terminator to Blade Runner — where intelligent computers or machines end up turning on their human creators.

"In all those stories where there’s the guy with the pentagram and the holy water, it’s like yeah he’s sure he can control the demon. Didn’t work out," Musk said. [...]

Bill Gates: Elon Musk Is Right, We Should All Be Scared Of Artificial Intelligence Wiping Out Humanity
Like Elon Musk and Stephen Hawking, Bill Gates thinks we should be concerned about the future of artificial intelligence.

In his most recent Ask Me Anything thread on Reddit, Gates was asked whether or not we should be threatened by machine super intelligence.

Although Gates doesn't think it will bring trouble in the near future, that could all change in a few decades. Here's Gates' full reply:

I am in the camp that is concerned about super intelligence. First the machines will do a lot of jobs for us and not be super intelligent. That should be positive if we manage it well. A few decades after that though the intelligence is strong enough to be a concern. I agree with Elon Musk and some others on this and don't understand why some people are not concerned.

Google CEO Larry Page has also previously talked on the subject, but didn't seem to express any explicit fear or concern.

"You can't wish these things away from happening," Page said to The Financial Times when asked about whether or not computers would take over more jobs in the future as they become more intelligent. But, he added that this could be a positive aspect for our economy.

At the MIT Aeronautics and Astronautics' Centennial Symposium in October, Musk called artificial intelligence our "biggest existential threat."

Louis Del Monte, a physicist and entrepreneur, believes that machines could eventually surpass humans and become the most dominant species since there's no legislation regarding how much intelligence a machine can have. Stephen Hawking has shared a similar view, writing that machines could eventually "outsmart financial markets" and "out-invent human researchers."

At the same time, Microsoft Research's chief Eric Horvitz just told the BBC that he believes AI systems could achieve consciousness, but it won't pose a threat to humans. He also added that more than a quarter of Microsoft Research's attention and resources are focused on artificial intelligence.
They all seem to agree that any threat is not immediate, and probably far off in the future. So far as I can see, machines merely mimic intelligence. They certainly have no consciousness.

I found the remark by the Microsoft researcher interesting, that he believes "AI systems could achieve consciousness". I don't see how that could be possible, which is what makes the remark... interesting. It's interesting too, that Microsoft is focusing such a large percentage of its attention and resources on AI. What would an "artificial consciousness" created by Microsoft be like? Hopefully, nothing like Windows 98. ;-)

Read the original complete articles, for embedded links and more.
     

Saturday, January 24, 2015

The Business and Political Elite at Davos

Their opinions about high tech changes, and what they mean:

Internet will 'disappear', Google boss tells Davos
Google boss Eric Schmidt predicted on Thursday that the Internet will soon be so pervasive in every facet of our lives that it will effectively "disappear" into the background. Speaking to the business and political elite at the World Economic Forum at Davos, Schmidt said: "There will be so many sensors, so many devices, that you won't even sense it, it will be all around you."

"It will be part of your presence all the time. Imagine you walk into a room and... you are interacting with all the things going on in that room." "A highly personalized, highly interactive and very interesting world emerges." On the sort of high-level panel only found among the ski slopes of Davos, a panel bringing together the heads of Google, Facebook, Microsoft and Vodafone sought to allay fears that the rapid pace of technological advance was killing jobs.

"Everyone's worried about jobs," admitted Sheryl Sandberg, chief operating officer of Facebook. With so many changes in the technology world, "the transformation is happening faster than ever before," she acknowledged. "But tech creates jobs not only in the tech space but outside," she insisted. Schmidt quoted statistics he said showed that every tech job created between five and seven jobs in a different area of the economy. "If there were a single digital market in Europe, 400 million new and important new jobs would be created in Europe," which is suffering from stubbornly high levels of unemployment. The debate about whether technology is destroying jobs "has been around for hundreds of years," said the Google boss. What is different is the speed of change.

"It's the same that happened to the people who lost their farming jobs when the tractor came... but ultimately a globalised solution means more equality for everyone." With one of the main topics at this year's World Economic Forum being how to share out the fruits of global growth, the tech barons stressed that the greater connectivity offered by their companies ultimately helps reduce inequalities. "Are the spoils of tech being evenly spread? That is an issue that we have to tackle head on," said Satya Nadella, chief executive of Microsoft. [...]
They are as entitled to their opinions as anyone else. But I don't necessarily believe them. The problem with "Elites" is that they don't live in the same world as the rest of us. They can think whatever they like, but it doesn't necessarily make it so. And some of their ideas are downright creepy. Is their vision the Brave New World we are headed for? Because if that is what they are aiming for, I would guess that there will be unintended and unforeseen consequences they have not anticipated.


More fun from the Davos Elites:

You’ve entered The Hypocrisy Zone: Billionaire Democrat wants YOU to downsize your lifestyle
     

Monday, January 12, 2015

Skype, with a speech translator?

Supposedly. This was announced last month:



Skype Will Begin Translating Your Speech Today
¿Cómo estás?

Voice over IP communication is entering a new era, one that will hopefully help break down language barriers. Or so that's the plan. Using innovations from Microsoft Research, the first phase of the Skype Translator preview program is kicking off today with two spoken languages -- Spanish and English. It will also feature over 40 instant messaging languages for Skype customers who have signed up via the Skype Translator sign-up page and are using Windows 8.1.

It also works on preview copies of Windows 10. What it does is translate voice input from someone speaking English or Spanish into text or voice. The technology relies on machine learning, so the more it gets used, the better it will be at translating audio and text.

"This is just the beginning of a journey that will transform the way we communicate with people around the world. Our long-term goal for speech translation is to translate as many languages as possible on as many platforms as possible and deliver the best Skype Translator experience on each individual platform for our more than 300 million connected users," Skype stated in a blog post.

Translations occur in "near real-time," Microsoft says. In addition, there's an on-screen transcript of your call. Given the many nuances of various languages and the pace at which communication changes, this is a pretty remarkable feat that Microsoft's attempting to pull off. There's a ton of upside as well, from the business world to use in classrooms.

If you want to test it out yourself -- and Microsoft hopes you do, as it's looking for feedback at this early stage -- you can register for the program by going here.
Follow the link to the original article for embedded links, and a video.

See how it works here:

Skype Translator is the most futuristic thing I’ve ever used
We have become blasé about technology.

The modern smartphone, for example, is in so many ways a remarkable feat of engineering: computing power that not so long ago would have cost millions of dollars and filled entire rooms is now available to fit in your hand for a few hundred bucks. But smartphones are so widespread and normal that they no longer have the power to astonish us. Of course they're tremendously powerful pocket computers. So what?

This phenomenon is perhaps even more acute for those of us who work in the field in some capacity. A steady stream of new gadgets and gizmos passes across our desks, we get briefed and pitched all manner of new "cutting edge" pieces of hardware and software, and they all start to seem a little bit the same and a little bit boring.

Even news that really might be the start of something remarkable, such as HP's plans to launch a computer using memristors for both long-term and working memory and silicon photonics interconnects, is viewed with a kind of weary cynicism. Yes, it might usher in a new generation of revolutionary products. But it probably won't.

But this week I've been using the preview version of Microsoft's Skype Translator. And it's breathtaking. It's like science fiction has come to life.

The experience wasn't always easy; this is preview software, and as luck would have it, my initial attempts to use it to talk to a colleague failed due to hitherto undiscovered bugs, so in the end, I had to talk to a Microsoft-supplied consultant living in Barranquilla, Colombia. But when we got the issues ironed out and made the thing work, it was magical. This thing really works. [...]
Follow the link for more, and enlargeable photos that show what it looks like as it's working.
     

Saturday, August 23, 2014

USB Devices and Malware Attacks

New Flaws in USB Devices Let Attackers Install Malware: Black Hat
[...] In a blog post providing more insight into the talk, Nohl and Lell reveal that the root trigger for their USB exploitation technique is by abusing and reprogramming the USB controller chips, which are used to define the device type. USB is widely used for all manner of computer peripherals as well as in storage devices. The researchers alleged that the USB controller chips in most common flash drives have no protection against reprogramming.

"Once reprogrammed, benign devices can turn malicious in many ways," the researchers stated.

Some examples they provide include having an arbitrary USB device pretend to be a keyboard and then issue commands with the same privileges as the logged-in user. The researchers contend that detecting the malicious USB device is hard and that malware scanners similarly won't detect the issue.

I'm not surprised, and no one else should be, either. After all, this isn't the first time researchers at a Black Hat USA security conference demonstrated how USB can be used to exploit users.

Last year, at the Black Hat USA 2013 event, security researchers demonstrated the MACTANS attack against iOS devices. With MACTANS, an Apple iOS user simply plugs in a USB plug in order to infect Apple devices. Apple has since patched that flaw.

In the MACTANS case, USB was simply used as the transport cable for the malware, but the point is the same. Anything you plug into a device, whether it's a USB charger, keyboard or thumb drive has the potential to do something malicious. A USB thumb drive is widely speculated to be the way that the Stuxnet virus attacked Iran's nuclear centrifuges back in 2010. The U.S. National Security Agency (NSA) allegedly has similar USB exploitation capabilities in its catalog of exploits, leaked by whistleblower Edward Snowden.

While the Security Research Labs researchers claim there are few defenses, the truth is somewhat different.

A reprogrammed USB device can have certain privileges that give it access to do things it should not be able to do, but the bottom line is about trust. On a typical Windows system, USB devices are driven by drivers that are more often than not signed by software vendors. If a warning pops up on a user's screen to install a driver, or that an unsigned driver is present, that should be a cause for concern.

As a matter of best practice, don't plug unknown USB devices into your computing equipment. It's just common sense, much like users should not open attachments that look suspicious or click on unknown links. The BadUSB research at this year's Black Hat USA conference is not so much a wake-up call for USB security as it is a reminder of risks that have been known for years.

     

Saturday, August 09, 2014

Would robots be better or worse for people?

There are conflicting opinions:

Pew: Split views on robots’ employment benefits
WASHINGTON — In 2025, self-driving cars could be the norm, people could have more leisure time and goods could become cheaper. Or, there could be chronic unemployment and an even wider income gap, human interaction could become a luxury and the wealthy could live in walled cities with robots serving as labor.

Or, very little could change.

A new survey released Wednesday by the Pew Research Center’s Internet Project and Elon University’s Imagining the Internet Center found that, when asked about the impact of artificial intelligence on jobs, nearly 1,900 experts and other respondents were divided over what to expect 11 years from now.

Forty-eight percent said robots would kill more jobs than they create, and 52 percent said technology will create more jobs than it destroys.

Respondents also varied widely when asked to elaborate on their expectations of jobs in the next decade. Some said that self-driving cars would be common, eliminating taxi cab and long-haul truck drivers. Some said that we should expect the wealthy to live in seclusion, using robot labor. Others were more conservative, cautioning that technology never moves quite as fast as people expect and humans aren’t so easily replaceable.

“We consistently underestimate the intelligence and complexity of human beings,” said Jonathan Grudin, principal researcher at Microsoft, who recalls that 40 years ago, people said that advances in computer-coding language were going to kill programming jobs.

Even as technology removed jobs such as secretaries and operators, it created brand new jobs, including Web marketing, Grudin said. And, as Grudin and other survey responders noted, 11 years isn’t much time for significant changes to take place, anyway.

Aaron Smith, senior researcher with the Pew Research Center’s Internet Project, said the results were unusually divided. He noted that in similar Pew surveys about the Internet over the past 12 years, there tended to be general consensus among the respondents, which included research scientists and a range of others, from business leaders to journalists. [...]
It goes on to give more opinions from educated people who make good cases for their views. Reading them all, it seems no one can say exactly how it's going to play out, though a common theme of many of the opinions is that, over time, there may indeed be fewer jobs for people. And what changes will THAT bring? That seems to be the big question underlying it all.

     

Thursday, July 31, 2014

The evolution of AI (Artificial Intelligence)

I've posted previously about how slowly AI is likely to develop, and that we won't have something approaching human intelligence anytime soon. But eventually, as AI evolves, it could start working on itself, and then begin advancing very quickly:


How Artificial Superintelligence Will Give Birth To Itself
There's a saying among futurists that a human-equivalent artificial intelligence will be our last invention. After that, AIs will be capable of designing virtually anything on their own — including themselves. Here's how a recursively self-improving AI could transform itself into a superintelligent machine.

When it comes to understanding the potential for artificial intelligence, it's critical to understand that an AI might eventually be able to modify itself, and that these modifications could allow it to increase its intelligence extremely fast.

Passing a Critical Threshold

Once sophisticated enough, an AI will be able to engage in what's called "recursive self-improvement." As an AI becomes smarter and more capable, it will subsequently become better at the task of developing its internal cognitive functions. In turn, these modifications will kickstart a cascading series of improvements, each one making the AI smarter at the task of improving itself. It's an advantage that we biological humans simply don't have.

When it comes to the speed of these improvements, Yudkowsky says it's important to not confuse the current speed of AI research with the speed of a real AI once built. Those are two very different things. What's more, there's no reason to believe that an AI won't show a sudden huge leap in intelligence, resulting in an ensuing "intelligence explosion" (a better term for the Singularity). He draws an analogy to the expansion of the human brain and prefrontal cortex — a key threshold in intelligence that allowed us to make a profound evolutionary leap in real-world effectiveness; "we went from caves to skyscrapers in the blink of an evolutionary eye."

The Path to Self-Modifying AI

Code that's capable of altering its own instructions while it's still executing has been around for a while. Typically, it's done to reduce the instruction path length and improve performance, or to simply reduce repetitively similar code. But for all intents and purposes, there are no self-aware, self-improving AI systems today.

But as Our Final Invention author James Barrat told me, we do have software that can write software.

"Genetic programming is a machine-learning technique that harnesses the power of natural selection to find answers to problems it would take humans a long time, even years, to solve," he told io9. "It's also used to write innovative, high-powered software."

For example, Primary Objects has embarked on a project that uses simple artificial intelligence to write programs. The developers are using genetic algorithms imbued with self-modifying, self-improving code and the minimalist (but Turing-complete) brainfuck programming language. They've chosen this language as a way to challenge the program — it has to teach itself from scratch how to do something as simple as writing "Hello World!" with only eight simple commands. But calling this an AI approach is a bit of a stretch; the genetic algorithms are a brute force way of getting a desirable result. That said, a follow-up approach in which the AI was able to generate programs for accepting user input appears more promising.

Relatedly, Larry Diehl has done similar work using a stack-based language.

Barrat also told me about software that learns — programming techniques that are grouped under the term "machine learning."

The Pentagon is particularly interested in this game. Through DARPA, it's hoping to develop a computer that can teach itself. Ultimately, it wants to create machines that are able to perform a number of complex tasks, like unsupervised learning, vision, planning, and statistical model selection. These computers will even be used to help us make decisions when the data is too complex for us to understand on our own. Such an architecture could represent an important step in bootstrapping — the ability for an AI to teach itself and then re-write and improve upon its initial programming. [...]

It goes on to discuss ways we might try to control AI self-evolution, and reasons why such methods may, or may not, work. Read the whole thing, for many embedded links, and more.
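The evolutionary search the article describes is easy to sketch. Here is a minimal Python toy of my own (not the Primary Objects code, and evolving a fixed string rather than a real program): random mutation plus survival of the fittest, converging on "Hello World!".

```python
import random

TARGET = "Hello World!"
CHARS = [chr(c) for c in range(32, 127)]  # printable ASCII "gene pool"

def fitness(candidate):
    # Score a candidate by how many characters already match the target
    return sum(a == b for a, b in zip(candidate, TARGET))

def mutate(candidate, rate=0.05):
    # Randomly replace characters, mimicking genetic mutation
    return "".join(random.choice(CHARS) if random.random() < rate else ch
                   for ch in candidate)

def evolve(pop_size=200, seed=0):
    random.seed(seed)
    parent = "".join(random.choice(CHARS) for _ in range(len(TARGET)))
    generation = 0
    while fitness(parent) < len(TARGET):
        # Breed a population of mutants; keep the fittest (elitism: the
        # parent survives too, so fitness never goes backwards)
        children = [mutate(parent) for _ in range(pop_size)]
        parent = max(children + [parent], key=fitness)
        generation += 1
    return parent, generation

result, gens = evolve()
print(result, "after", gens, "generations")
```

Real genetic programming evolves program trees or instruction sequences instead of a string, but the mutate, score, and select loop at its core is the same.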

     

Friday, May 30, 2014

Flying Droids a Reality on ISS

Space station's flying droids embrace Google smartphone tech
The free-flying Spheres, inspired by "Star Wars" and now aided by Google's Project Tango, will handle more of the mundane tasks for astronauts.
MOUNTAIN VIEW, Calif.--Imagine you're an astronaut who has just arrived at the International Space Station. You need to assess the supplies on hand, but counting everything demands so much of your limited time.

That's exactly why NASA originally turned to Spheres, autonomous, free-flying robots that take care of mundane tasks and are based on the flying droid that helped teach Luke Skywalker how to fight with a light saber in the original "Star Wars."

Now, Spheres are incorporating Google's Project Tango, cutting-edge tech that is expected to help the space agency increase efficiency.

For some time -- since 2003, to be exact -- space station crews have had access to free-flying robots known as Synchronized Position Hold, Engage, Reorient, Experimental Satellites. That ungainly title is best abbreviated to a more palatable acronym: Spheres. Originally designed by aero/astroengineers at MIT, Spheres were meant as a flying test bed for examining the mechanical properties of materials in microgravity. The inspiration for the project, said Terry Fong, director of the Intelligent Robotics Group at NASA, "comes from 'Star Wars,' as all good things do."

Now, NASA is bringing an especially innovative commercial tool into the mix. Starting this October, Spheres will incorporate Project Tango -- a smartphone platform built for 3D mapping that also happens to be packed with just the series of sensors and cameras that NASA needs to handle many of the mundane tasks aboard the ISS.

In 2003, Spheres were fairly rudimentary -- at least for flying autonomous robots. They relied on liquid carbon dioxide for propulsion and on an ancient Texas Instruments digital signal processor.

About four years ago, Fong's Intelligent Robotics Group took over the project. Since then, it has been slowly improving Spheres robots by using the small computers better known as smartphones. At first, NASA worked with Nexus S smartphones, which are jammed with cameras, gyroscopes, accelerometers, and modern processors. [...]
I remember reading about these years ago, about how they could fly around the ISS because of the zero gravity. Now they are evolving, using smartphone technology. See the whole article for embedded links, photos and video.
     

Sunday, May 04, 2014

Fanless Mini PCs

They are becoming more popular:
5 Silent Fanless Mini PCs That Will Save You Money
Miniaturization continues to shrink the size of the average PC. What once required several rooms can now fit in your pocket. And while most people think of smartphones or tablets as examples of small, modern electronics, desktops also deserve mention.

There’s a new category, the mini-PC, that’s becoming popular. Early variants, like the Apple Mac Mini and Inspiron Zino HD, have been well received, but now the formula has been improved with the introduction of fanless systems. Tiny, silent and often inexpensive, these miniature wonders save space without eating into your bank account. [...]
Several examples are reviewed.

Here is one that seems like a great bargain, on Amazon:

CompuLab Intense PC Value 1.1 GHz Linux
Intel Celeron 847E 1.1 GHz dual-core, 4 GB RAM
5 year warranty
320 GB hard-disk pre-installed with Linux Mint
Dual Gbit Ethernet, WiFi 802.11n, HDMI + DisplayPort, 7.1 channels S/PDIF audio
Fanless aluminum case [...]
There are different configurations available. They seem ideal for people with basic computer needs. This one runs Linux Mint as the operating system. Mint is my favorite Linux.
     

Sunday, March 09, 2014

Thieves who offer Customer Support to victims? It's called "Ransomware"

Just when you thought you'd seen it all:

'Perfect' ransomware is the scariest threat to your PC
Nothing spurs malware development like success, and that’s likely to be the case in the coming months with ransomware.

Ransomware has been around for around a decade, but it wasn’t until last fall, with the introduction of CryptoLocker, that the malevolent potential of the bad app category was realized. In the last four months of 2013 alone, the malicious software raked in some $5 million, according to Dell SecureWorks. Previously, it took ransomware purveyors an entire year to haul in that kind of money.

So is it any wonder that the latest iteration of this form of digital extortion has attracted the attention of cyber criminals? A compromised personal computer for a botnet or Distributed Denial of Service attack is worth about a buck to a byte bandit, explained Johannes B. Ullrich, chief research officer at the SANS Institute. “With ransomware, the attacker can easily make $100 and more,” he said.

What distinguishes CryptoLocker from past ransomware efforts is its use of strong encryption. Document and image files on machines infected with the Trojan are scrambled using AES 256-bit encryption, and the only way for a keyboard jockey to regain use of the files is to pay a ransom for a digital key to decrypt the data.

[...]

Honor among thieves
The CryptoLocker crew also know the value of maintaining good customer relations. “They’re honoring people who do pay the ransom,” said Jarvis, of SecureWorks.

“In most cases they’re sending the decryption keys back to the computer once they receive payment successfully,” he explained. “We don’t know what the percentage of people who successfully do that is, but we know it’s part of their business model not to lie to people and not do it.”

Moreover, in November, they began offering support to victims who, for whatever reason, fail to meet the hijackers’ ransom deadlines. By submitting a portion of an encrypted file to the bad actors at a black website and paying the ransom, a victim can receive a key to decrypt their files. “You have to reinfect yourself with the malware but once you do that, you can get a successful decryption,” Jarvis explained.

[...]

Ransomware Inc.
"It is inevitable that we will see a cryptographic ransomware toolkit,” he added, “maybe even multiple toolkits because it’s clear that there’s a business opportunity here for criminals.”

Moreover, that opportunity is likely to reach beyond the consumer realm and into the greener pastures of business. “Going after consumers is small fish,” said Bruen, of the Digital Citizens Alliance. “The next step is to conduct ransom operations on major companies. This has already happened,” he said.

“From an attacker’s perspective, there’s definitely a higher risk in getting caught because companies are going to throw more money at the problem than an ordinary consumer can,” he continued, “but the payoff from one of these companies—a Target or a Neiman Marcus—will be much larger.”

Current ransomware attacks involve encrypting select file types on a hard drive, but a business attack will likely choose a higher value target. “Cryptographic keys and digital certificates are ripe for ransom,” Venafi’s Bocek said.

"Whether it’s taking out the key and certificate that secures all communications for a bank or the SSH keys that connect to cloud services for an online retailer, keys and certificates are a very attractive target,” he observed. [...]
Welcome to the Brave New World. The original article has embedded links, and more details about the evolution of this software, the way it spreads, and its potential future applications.
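Out of curiosity, the core trick, encrypting files so that only the key-holder can restore them, can be sketched in a few lines. This is a deliberately toy Python example of my own (a hash-based XOR stream cipher, NOT real AES-256 and nothing like CryptoLocker's actual code), just to show why the data is unrecoverable without the key:

```python
import hashlib
import os

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    # Derive a pseudorandom byte stream by hashing key+nonce+counter.
    # A toy construction for illustration only; real ransomware uses AES.
    out = bytearray()
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(out[:length])

def encrypt(data: bytes, key: bytes) -> bytes:
    nonce = os.urandom(16)
    stream = keystream(key, nonce, len(data))
    # XOR the plaintext with the stream; prepend the nonce for decryption
    return nonce + bytes(a ^ b for a, b in zip(data, stream))

def decrypt(blob: bytes, key: bytes) -> bytes:
    nonce, ciphertext = blob[:16], blob[16:]
    stream = keystream(key, nonce, len(ciphertext))
    return bytes(a ^ b for a, b in zip(ciphertext, stream))

key = os.urandom(32)          # only the attacker holds this
blob = encrypt(b"family photos and tax documents", key)
print(decrypt(blob, key))     # recoverable only with the right key
```

Real ransomware uses vetted ciphers like AES-256, and keeps the decryption key on the attacker's server, which is exactly why paying up is the only recovery route if you have no backups.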

I've already come across a lesser "scareware" version of ransomware, which was mentioned in the article. It locked up one of my Linux computers and demanded payment to unlock it, so this isn't just a Microsoft thing. I was able to get rid of it by uninstalling my browser, clearing the cache, and reinstalling Firefox. But what they are talking about in this article is much more advanced.

Scary stuff.
     

Sunday, February 16, 2014

Androids: Fantasy VS Reality

The fantasy Android:



But what is the reality of Artificial Intelligence? The harsh truth:

Supercomputer Takes 40 Minutes To Model 1 Second of Brain Activity
Despite rumors, the singularity, or point at which artificial intelligence can overtake human smarts, still isn't quite here. One of the world's most powerful supercomputers is still no match for the humble human brain, taking 40 minutes to replicate a single second of brain activity.

Researchers in Germany and Japan used K, the fourth-most powerful supercomputer in the world, to simulate brain activity. With more than 700,000 processor cores and 1.4 million gigabytes of RAM, K simulated the interplay of 1.73 billion nerve cells and more than 10 trillion synapses, or junctions between brain cells. Though that may sound like a lot of brain cells and connections, it represents just 1 percent of the human brain's network.

The long-term goal is to make computing so fast that it can simulate the mind— brain cell by brain cell— in real-time. That may be feasible by the end of the decade, researcher Markus Diesmann, of the University of Freiburg, told the Telegraph.
It "may be" feasible by the end of the decade? To catch up with one second of human brain activity? Even if it does, we're talking about a supercomputer. It's a long way from the android brain in the video. And yes, computers are advancing very fast. But to catch up with a human brain, much less surpass it... it won't happen tomorrow.
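To put the article's figures in perspective, here's the back-of-envelope arithmetic as a quick Python sketch. The extrapolation is my own and deliberately naive: it assumes scaling to the whole brain just multiplies the work by 100, and that usable compute keeps doubling every couple of years. Neither assumption is a sure thing.

```python
import math

# Figures from the article (K supercomputer, 2014)
sim_seconds = 40 * 60       # wall-clock time the simulation ran
brain_seconds = 1           # brain activity it reproduced
network_scale = 100         # the run covered only ~1% of the brain's network

slowdown = sim_seconds / brain_seconds
print(f"Slowdown for 1% of the brain: {slowdown:,.0f}x")            # 2,400x

whole_brain_slowdown = slowdown * network_scale
print(f"Naive whole-brain slowdown: {whole_brain_slowdown:,.0f}x")  # 240,000x

# If usable compute doubled every ~2 years, how long until real time?
doublings = math.log2(whole_brain_slowdown)
print(f"~{doublings:.0f} doublings needed, roughly {doublings * 2:.0f} years")
```

Even this crude estimate suggests decades, not years, and it ignores that adding 100 times the neurons means far more than 100 times the synapses to simulate.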