Showing posts with label Science. Show all posts

Monday, May 15, 2017

How realistic is the Alien Language Hacking in the movie "Arrival"?



Ask a Linguist. That is what this article does:

How Realistic Is the Way Amy Adams’ Character Hacks the Alien Language In Arrival? We Asked a Linguist.
Denis Villeneuve’s Arrival makes being a linguist look pretty cool—its hero Louise (Amy Adams) gets up close and personal with extraterrestrials and manages to save the entire world with her translation skills (and lives in a chic, glass-walled modernist palace all by herself). But how realistic were her methods? We talked to Betty Birner, a professor of linguistics and cognitive science at Northern Illinois University, to find out what she thinks of the movie’s use of language, its linguist heroine, and how we might someday learn to communicate with aliens in real life.

What was it like to watch Arrival as an actual linguist?

I loved the movie. It was a ton of fun to see a movie that’s basically all about the Sapir-Whorf hypothesis. On the other hand, they took the hypothesis way beyond anything that is plausible.

In the movie they kind of gloss over the hypothesis, explaining it as the idea that the language you speak can affect the way you think. Is that accurate?

There are two ways of thinking about the Sapir-Whorf hypothesis, and scholars have argued over which of these two Sapir and/or Whorf actually intended. The weaker version is linguistic relativity, which is the notion that there’s a correlation between language and worldview. “Different language communities experience reality differently.”

The stronger view is called linguistic determinism, and that’s the view that language actually determines the way you see reality, the way you perceive it. That’s a much stronger claim. At one point in the movie, the character Ian [Jeremy Renner] says, “The Sapir-Whorf hypothesis says that if you immerse yourself in another language, you can rewire your brain.” And that made me laugh out loud, because Whorf never said anything about rewiring your brain. But since this wasn’t the linguist speaking, it’s fine that another character is misunderstanding the Sapir-Whorf.

But the movie accepts that as true! By learning the aliens’ language, Louise completely alters her brain.

Oh yeah, the movie is clearly on board with linguistic determinism, which is funny because most linguists these days would not accept that.

So in real life, learning another language can’t suddenly alter how you perceive time?

No linguist would ever buy into the notion that the minute you understand something about this second language, you get sort of a lightbulb going off and say, “Oh my gosh, I completely see how the speakers of Swahili view plant life now.” It’s just silly, and it’s false. It makes for a rollicking good story, but I would never want somebody to come away from a movie like this with the notion that that’s actually a power that language can bestow.

Is there anything to the idea at all?

There have been studies about speakers of languages that have classifier markers—suffixes, for example, that go on to every noun to indicate what class they’re in. Some languages mark round things differently than they mark long things, soft things differently than rigid things. If you ask speakers of such a language to sort a big heap of stuff into piles, they will tend to sort them based on what classifier they take.

Whorf argued that because the Hopi [the Native American group he was studying] have verbs for certain concepts that English speakers use nouns for, such as thunder, lightning, storm, and noise, the speakers view those things as events in a way that we don’t. We view lightning, thunder, and storms as things. He argued that we objectify time because we talk about hours and minutes and days as things that you can count or save or spend.

It was funny in this movie to see this notion of the cyclicity of time. That’s really central in Whorf’s writings, that English speakers have a linear view of time, and it’s made up in individually packaged objects, days, hours, and minutes that march along from past to future, while the Hopi have a more cyclical notion that days aren’t separate things but that “day” is something that comes and goes.

So tomorrow isn’t another day. Tomorrow is day returning. You see that concept coming from Whorf into this movie was actually kind of fun. I thought, well they got that right! They took it in a really weird direction, but ...

Someone did their homework.

Exactly. [...]
If you like linguistics, read the whole thing. I found it very interesting; the linguistics professor says mostly positive things about the movie and discusses some parallels with Earth-based languages (written languages that don't phonetically represent spoken language) and other linguistic concepts. Lots of interesting observations and food for thought.

Also see: The Sapir-Whorf Hypothesis
   

Monday, April 17, 2017

How our sleep patterns change throughout our lives, and how to cope with the changes

And don't I know it. This article explains a lot:

Sleep Patterns Make Steep Changes During Your Life
[...] MIDLIFE SLEEP CRISIS

A lot of accomplished people claim not to need a lot of sleep. Household arts maven Martha Stewart purports to get only four hours a night. So does Tonight Show host Jay Leno. Napoleon, Winston Churchill, John F. Kennedy, Salvador Dali and Leonardo da Vinci didn’t get much shut-eye either. So television journalist Pamela Wallin, who also averages only four hours a night, is in august company. “I’ve been an insomniac for as long as I can remember,” says Wallin, a Saskatchewan native who lives in Toronto. “I’ve tried herbal remedies and chamomile tea. I avoid prescription drugs because I can’t afford to lose my sharpness the next day.” Ultimately, Wallin regards her chronic insomnia as something she just has to live with. “If I needed more sleep,” she reasons, “I probably wouldn’t have gotten done what I have done in my life.”

Sixty-two per cent of Americans experience a sleep problem a few nights a week, according to a National Sleep Foundation study released last month. Two-thirds say sleepiness interferes with their concentration. “We should really get nine or 10 hours of sleep,” says psychologist Coren. “But we’re only getting seven. Sleep is not something we value.” Family stresses, the frenetic pace of life and poor bedtime habits all contribute to an epidemic of sleeplessness. Among modern complications: the wired world. “I know people who have a fax machine at the foot of their bed with a little bleeper so they can get up in the middle of the night to read their faxes,” says Coren. “The pressure to lead a 24-hour life is getting worse.”

At least many poor sleepers know they need help. About 2,000 people a year use the sleep clinic at UBC run by psychiatrist Jon Fleming. Thirty-five per cent of them complain of insomnia, a disorder that often runs in families. Others attend the clinic because of sleep apnea (troubled breathing) and narcolepsy (an overwhelming desire to sleep), among other sleep disorders. “The causes of insomnia are legion,” says Fleming. “It can be caused by psychiatric conditions or drug and alcohol abuse. But the leading cause is stress.” When Vancouver children’s bookstore owner Phyllis Simon can’t sleep, she gets out of bed for a while and writes a list of all the things she has to do. “I try to transfer my anxieties to the list. Then I’ll make myself a cup of warm milk.”

But waking up in the middle of the night and then going back to sleep, as Simon sometimes does, can be harder on cognition than not sleeping at all, says University of Montreal psychiatrist Roger Godbout. “Your performance the next day will be worse than if you stay up all night,” he explains. While insomnia may lead to fuzzy thinking, those who short-circuit sleep by working long hours could also be compromising their physical health. Research at the University of Chicago shows adults who get fewer than seven hours of sleep are more prone to diabetes, high blood pressure and endocrine dysfunction.

Women also report more sleep problems than men — a consequence, often, of their biology. Just before menstruation, says Toronto Western Hospital sleep researcher Helen Driver, “there is a withdrawal of hormones that triggers poor sleep.” Entering menopause doesn’t make it better. Thirty-six per cent of menopausal women polled by the National Sleep Foundation said hot flashes interfered with their night’s rest. Sleep investigators are becoming more aware of the effects of the female hormones, estrogen and progesterone, says Driver. “Progesterone,” she says, “interacts with a receptor in the brain that seems to have sleep-inducing qualities.” [...]
I used the "Midlife Stage" as an excerpt for this blogpost, because that is about where I am at now. But the entire article starts with infancy, childhood, teen years, all the way through to old age. Something for everyone! Read the whole thing, for embedded links and advice for improving your sleep, whatever stage you may be in.
     

Wednesday, January 11, 2017

The Ubiquitous Alexa: is the Amazon AI assistant starting to be everywhere?

Kinda looks that way. The title of the article below refers to cars, but the article itself goes into much more. More about Alexa being incorporated into other appliances and, well, have a look:



Alexa will make your car smarter -- and vice versa
The integration into vehicles is yet another sign of how dependent we're becoming on AI.
[...] Within a span of just two years, Amazon's cloud-based voice service has spread far beyond the Echo speaker with which it first debuted. Alexa has gone from being an at-home helper to a personal assistant that can unlock your car, make a robot dance and even order groceries from your fridge.

At CES, both Ford and Volkswagen announced that their cars would integrate Alexa for weather updates, navigation and more. According to CJ Frost, principal solutions architect and automotive lead at Amazon, the car industry is moving into a mobility space. The idea isn't restricted to the ride anymore; it encompasses a journey that starts before you even get in the car. With the right skills built into the voice service, you can start a conversation with Alexa about the state of your car (is there enough fuel? is it locked? etc.) before you leave the house. It can also pull up your calendar, check traffic updates and confirm the meeting to make sure you're on track for the day.

Using a voice service in the car keeps your connection with the intelligent assistant intact. It's also a mode of communication that will be essential to autonomous cars of the near future. I caught up with Frost and John Scumniotales, general manager of Automotive Alexa service, at the Las Vegas convention center to trace the progression of the intelligent assistant from home speakers to cars on the road. [...]
The rest of the article is in an interview format, discussing where this is all going, and how and why, and what the future holds. Read the whole thing for embedded links, photos, video and more.

There have been lots of reviews on YouTube comparing Alexa with Google Home. People who use a lot of Google services claim the Google device is smarter and therefore better. But it's not that simple.

I have both devices. If you ask your question of Alexa in the format "Alexa, Wikipedia, [your question here]", the answer you get will often be as good as or better than what Google can tell you. Alexa has been around longer, has wider integration, and has more functions available. It can even add appointments to my Google Calendar, which Google Home says it cannot do yet!

Google Home does have some features it excels at, such as translating English words and phrases into foreign languages. If you own any Chromecast dongles, you can cast music and video to other devices, which is pretty cool. Presently its biggest drawback is the lack of applications developed to work with it. However, its potential is very great, and a year or two from now we may see a great deal more functionality. It has the advantage of access to Google's considerable database and resources. It could quickly catch up with Alexa, and perhaps surpass it. But that remains to be seen.

It's not hard to make a video that makes one device look dumber than the other. But in truth the devices are very similar. Both can make mistakes, or fail at questions or functions. Sometimes one does better than the other. I actually like having both. It will be interesting to watch them both continue to evolve: to see if Google can close the gap created by Amazon's early head start, and to see how the two products differentiate themselves over time.

For the present, if you require a lot of integration with 3rd-party apps and hardware, and if you are already using Amazon Prime and/or Amazon Music, you might prefer Alexa. If you are heavily into Google services, and/or Google Music or YouTube Red, you might prefer Google Home. Or if you are like me, an Amazon Prime/Music member experimenting with YouTube Red and an owner of Chromecast devices, you may prefer both! Choice is good!
     

Saturday, May 07, 2016

What a real spaceship would look like

Or could look like, based on technology we already have or have within our grasp:



The video is from 2011, so no doubt there have been many revisions since. A similar but more advanced-looking ship was used in the movie The Martian, no doubt based on this design.



So when are we going to see this ship for real? Not in my lifetime, I expect. In a world where industrialized, technologically advanced nations are over budget, bordering on bankruptcy and/or currency collapse, I don't realistically see funding for projects like this for a long, long time. If ever. It may remain just a dream, only fulfilled in movies. CGI special effects are so much cheaper than reality.

For more photos from the movie, and commentary of the science, follow this link: SCIENCING THE MARTIAN
     

Monday, September 07, 2015

Naked Chicks ... that Glow

This is creepy:



Glowing in the dark, GMO chickens shed light on bird flu fight
In the realm of avian research, the chicks with the glow-in-the-dark beaks and feet might one day rock the poultry world.

British scientists say they have genetically modified chickens in a bid to block bird flu and that early experiments show promise for fighting off the disease that has devastated the U.S. poultry and egg industries.

Their research, which has been backed by the UK government and top chicken companies, could potentially prevent repeats of this year's wipeout: 48 million chickens and turkeys killed because of the disease since December in the United States alone.

But these promising chickens - injected with a fluorescent protein to distinguish them from normal birds in experiments - won't likely gatecrash their way into poultry production any time soon. Health regulators around the world have yet to approve any animals bred as genetically modified organisms (GMOs) for use in food because of long-standing safety and environmental concerns.

Bird flu has become a global concern among researchers over the past decade because of its threat to poultry and human health, and UK researchers have been toiling in genetic engineering for years to control its spread.

People who are in close contact with infected poultry are most at risk for flu infections, and scientists are concerned about the risk for a human pandemic if the virus infects someone and then mutates. No humans have been infected in the latest U.S. outbreak, but there have been cases in Asia in recent years.

"The public is obviously aware of these outbreaks when they're reported and wondering why there's not more done to control it," said Laurence Tiley, a senior lecturer in molecular virology at the University of Cambridge, who is involved in the experiments.

[...]

At Cambridge and the University of Edinburgh's Roslin Institute, scientists are using genetic engineering to try to control bird flu in two ways: by blocking initial infections in egg-laying chickens and preventing birds from transmitting the virus if they become infected.

[...]

To genetically engineer chickens, the UK researchers inject a "decoy" gene into a cluster of cells on the yolk of a newly laid egg. The egg will hatch into a chick containing the decoy gene, which it will be able to pass on to its offspring.

The decoy gene is injected into the chicken chromosome alongside the fluorescent protein that makes the birds glow under ultraviolet light, similar to glow-in-the-dark posters in college dorm rooms. The birds would not be bred to glow if they are commercialized.

When the modified birds come into contact with the flu, their genetic code is designed to trick the virus into copying the decoy and to inhibit the virus' ability to reproduce itself.

In one study with a form of decoy, scientists put 16 infected conventional chickens in contact with a mixture of 16 normal and 16 GMO chickens that contained a decoy. The GMO birds were found to be less susceptible and succumbed to infection more slowly than the conventional birds, said Tiley.

FARMER PROTECTIONS

A more flu-resistant bird could be a notable advance from the basic steps that farmers now rely on to avoid infections in barns, including banning visitors and disinfecting vehicle wheels.

Wild ducks, which can carry the virus, are thought to have spread the disease in the United States by dropping contaminated feces and feathers on farms. Humans can then transport the disease on their boots and trucks. [...]
I wish I could be more enthusiastic. The problem is, when you start genetically modifying plants or animals, you may solve a problem in the short term. But in the longer term, you may be creating bigger problems, caused by unforeseen side effects of deliberate genetic modifications, and by worse threats from diseases or insect predators that evolve or change their behavior to adapt to the new genetically altered plant or animal.

Scientists may keep altering the plant or animal in response, till it becomes so modified from the original that it becomes degraded and vulnerable to something the original never had a problem with. And if the genetically modified mix with the originals, that vulnerability spreads to all of them. Our food supply could die out.

With so many people experiencing unemployment, we would be better off using people to go back to smaller farms using tried and true methods that don't degrade our food supply. But I don't see that happening, because:

1.) Agribusiness wants to keep their monopoly.
2.) Farming is hard work, and most people in advanced Western societies won't do it.

So we do the easy thing and let this continue, only to pay a worse price down the road. There has to be a better way.

     

Tuesday, June 23, 2015

Auroras tonight, and Wednesday?

Looks like it. This was Iowa from the early morning hours today:



Look up! Another solar storm may supercharge auroras Wednesday
While a "severe" solar storm that sparked dazzling auroras around the world on Monday through Tuesday morning is dying down now, skywatchers shouldn't stop looking up quite yet.

Another potentially powerful solar tempest is expected to impact Earth on Wednesday into Thursday, and it could create more amazing auroras for people in the Northern and Southern Hemispheres.

In particular, the next solar storm is especially well aimed to enhance aurora activity over North America, according to experts at the National Space Weather Prediction Center (SWPC) in Boulder, Colorado.

Monday's solar storm hit the G4 or "severe" level, a relatively rare class of storm that can create bright auroras in relatively low latitudes. Such G4 storms — the rating scale goes up to G5 — can also cause problems with power grids on Earth and harm satellites in space.

And another storm of that severe magnitude is likely on its way to Earth now.

Scientists at the SWPC are anticipating that the solar storm predicted to arrive Wednesday could, yet again, produce beautiful auroras in relatively low latitudes.

At the moment, the SWPC is predicting a G3 or "strong" storm on Wednesday and Thursday, but that was the forecast for Monday, as well. [...]

See the whole article for embedded links, photos, videos and more.

For more technical details, and an Aurora Prediction map, see the NOAA website: http://www.swpc.noaa.gov/




   

Friday, May 01, 2015

Underwater volcano active off Oregon coast

A volcano may be erupting off the Oregon coast, scientists say

Three hundred miles off the Pacific Northwest coast, the seafloor has been rumbling.

Over the past five months, there were hundreds of small earthquakes on most days at Axial Seamount.

Then on April 24, there was a spike: nearly 8,000 earthquakes. The seafloor level dropped more than two meters. Temperatures rose.

Scientists believe an underwater volcano is erupting.

An eruption is not a threat to coastal residents, researchers say, because the earthquakes are small, mostly magnitude 1 or 2, and the seafloor movements are relatively gradual, so they won't cause a tsunami.

The volcanic activity has no relationship to the Cascadia Subduction Zone, which scientists watch closely for signs of a much larger and more destructive earthquake.

To Bill Chadwick, an Oregon State University geologist, the eruption at Axial Seamount was not a surprise.

He had predicted it would happen this year. He predicted the previous eruption, in 2011, too.

Chadwick hopes the lessons he and his collaborator, Scott Nooner at the University of North Carolina Wilmington, learn from Axial Seamount can eventually be applied to volcanoes on land.

Land volcanoes have thicker crusts and are influenced by large earthquakes and other nearby volcanoes, among other things, so predictions are more difficult, Chadwick said.

"Axial Seamount is a pure example, if you will," he said. "It has relatively simple plumbing."

Chadwick and other scientists watch the signals at Axial Seamount in real-time via a cable laid out on the seafloor. The cable is part of the Ocean Observatories Initiative funded by the National Science Foundation. [...]
I doubt that it has nothing to do with the Cascadia Subduction Zone, since it is practically right on top of it. I presume they mean to say that the volcano isn't signaling an imminent earthquake. As far as they can tell.

Read the whole thing for embedded links, photos and more.

     

Thursday, March 12, 2015

First Big Solar Flare of 2015


Active sunspot unleashes X-class solar flare, high-latitude aurora possible Friday
[...] What’s particularly interesting about this week’s eruptions is that the parent region is now near the center of the sun as we look at it, and it’s likely that a coronal mass ejection (CME) is now headed toward Earth thanks to the X2 flare.

Region 2297’s earlier eruptions occurred when it was in a less central position, so the launched CME would be, at worst, a side swipe for Earth’s magnetic field. The event on the afternoon of March 11, though, is much more likely to hit nearly head on.

High-latitude aurora watchers take note — the Space Weather Prediction Center is looking for minor magnetic storm activity on March 13. Plus, the sky will be relatively dark with the moon in its last quarter, so lunar light pollution is minimal. Get away from city lights for your best chance of seeing a glow.

The days following may be even more disturbed if Region 2297 has more in it.

The Ides of March? The Roman soothsayers made dire predictions for Caesar. For us, just a heads-up that some nice northern lights may be coming.

Follow the link for a larger photo. I like how they put the earth on there, for scale. The flare itself is much larger than our small world.

If you are science-minded and want to monitor the progress of this sunspot, you can do so here.
     

Friday, February 13, 2015

The mental/emotional effects of isolation

An article by Felicity Aston, the first woman to ski cross-country across the Antarctic continent, completely alone. She talks about her experience of isolation, and how her experience might compare to what future astronauts might face:

How will space explorers cope with isolation?
[...] It was the alone-ness itself that was frightening and my subsequent 59-day ski across the continent was dominated by my battle to deal with the shock of it.

I imagine that the first humans to visit Mars might experience a similar state of shock at their disconnection from human society. It is intriguing to wonder whether there might be parallels between the psychology involved in exploring Mars and exploring Antarctica.

Could potential astronauts preparing for long space missions across the solar system learn anything useful from experiences like mine in Antarctica?

As I began my loneliest of expeditions I had the benefit of more than a decade of previous polar journeys to draw from. In addition I had carefully prepared for the psychological stress of isolation, consulting a specialist sport psychologist.

Yet, I was taken aback by the range of ways in which the alone-ness affected me. I became increasingly emotional. With no one to witness my behaviour, I allowed inner feelings to flow into outward expression without check. If I felt angry, I shouted. If I felt upset, I cried.

Self-discipline became much harder. Surrounded by others, taking risky short-cuts isn't a possibility, largely because of the embarrassment of being discovered. But alone, with no-one to observe your laziness, the voice of temptation was always present. I found that ignoring the voice of temptation was an extra drain on mental energy that simply hadn't existed on team expeditions.

My brain, starved of any input by the lack of colour, shape or form in my largely blizzard-obscured world began to fill in the gaps by creating hallucinations.

I was surprised to find that we can hallucinate not just with our visual sense but with all our senses. I hallucinated strange forms in the gloom of regular whiteouts that took the shape of floating hands and small bald men on dinosaurs, but I also hallucinated smells, tastes and sounds that all seemed very real.

As I skied, I began to direct my internal monologue at the sun (when it was visible through the bad weather) and was slightly perturbed when eventually the sun began to talk back to me in my mind. It took on a very distinct character and even though I knew on some level that it wasn't real, the sun played an important part in my coping strategies.

Routine became increasingly important to me in overcoming these damaging responses to alone-ness. When everything else in my landscape and daily experience was so surreal, routine became the rhythm that I clung to. I performed every task in exactly the same way, every time it had to be done. I repeated chores in the same order again and again until I reached the point that I barely had to think about them. Reducing the thought required seemed to simultaneously reduce the emotion.

This was despite the fact that I did have some connection to the outside world during my expedition. I carried a satellite phone which was capable of calling anyone in the world at any time from my tent -- and yet, largely, I decided not to.

I was scared of the emotional high that speaking to loved ones might bring, knowing that it would inevitably be followed by a crushing emotional low as I was forced to end the call. [...]
To read it is to almost be there in her shoes. Fascinating. See the links for pics and more.
     

Saturday, January 03, 2015

Watch out for the Quadrantids

If you are lucky enough to have a clear sky to see them:



First meteor shower of 2015 peaks Saturday night
Grab a coat and head outside: The first meteor shower of the new year is set to peak tonight.

The annual Quadrantid meteor shower is popular with star watchers, who can see up to 80 meteors an hour, NASA says. Plus, Quadrantids are known for their “fireball meteors,” which are brighter and last longer than an average meteor streak.

This year, however, a bright, near-full moon is complicating matters for those who would like to wish on a shooting star – as are forecasts of cloudy skies in the eastern United States. Central and Southwest stargazers should still have a clear view, The Washington Post reports.

[...]

The Quadrantids aren’t the only items of interest visible in the night sky this weekend. Comet Lovejoy, which was discovered in August, is visible with binoculars and, for those far away from city lights, the naked eye, the Monitor’s Pete Spotts reports. The comet, which is expected to make its closest approach to Earth on Jan. 7, is rising higher in the northern sky. By Jan. 7, it will be appearing to the right of the bottom half of Orion’s bow, above the constellation Eridanus.

To view Saturday night’s meteor shower, NASA suggests finding a spot away from street lights.

“Lay flat on your back with your feet facing northeast and look up, taking in as much of the sky as possible. In less than 30 minutes in the dark, your eyes will adapt and you will begin to see meteors,” NASA says. “Be patient – the show will last until dawn, so you have plenty of time to catch a glimpse.” [...]
Looks like we'll have clouds moving in tonight, darn it. Maybe there will be some breaks in the clouds. And there will still be comet Lovejoy to look for in a few days.
     

Thursday, September 04, 2014

Language learning, before and after puberty

One can learn a 2nd language, before or after, but the way the brain accomplishes that may change:

Why Can't I Speak Spanish?: The Critical Period Hypothesis of Language Acquisition
"Ahhhhh!" I yell in frustration. "I've been studying Spanish for seven years, and I still can't speak it fluently."

"Well, honey, it's not your fault. You didn't start young enough," my mom says, trying to comfort me.

Although she doesn't know it, she is basing her statement on the Critical Period Hypothesis. The Critical Period Hypothesis proposes that the human brain is only malleable, in terms of language, for a limited time. This can be compared to the critical period seen in the imprinting of some species, such as geese. During a short period of time after a gosling hatches, it begins to follow the first moving object that it sees. This is its critical period for imprinting. (1) The theory of a critical period of language acquisition is influenced by this phenomenon.

This hypothetical period is thought to last from birth to puberty. During this time, the brain is receptive to language, learning rules of grammar quickly through a relatively small number of examples. After puberty, language learning becomes more difficult. The Critical Period Hypothesis attributes this difficulty to a drastic change in the way that the brain processes language after puberty. This makes reaching fluency during adulthood much more difficult than it is in childhood.

[...]

Noam Chomsky suggests that the human brain also contains a language acquisition device (LAD) that is preprogrammed to process language. He was influential in extending the science of language learning to the languages themselves. (4) (5) Chomsky noticed that children learn the rules of grammar without being explicitly told what they are. They learn these rules through examples that they hear, and, amazingly, the brain pieces these samples together to form the rules of the grammar of the language they are learning. This all happens very quickly, much more quickly than seems logical. Chomsky's LAD contains a preexisting set of rules, perfected by evolution and passed down through genes. This system, which contains the boundaries of natural human language and gives a language learner a way to approach language before being formally taught, is known as universal grammar.

The common grammatical units of languages around the world support the existence of universal grammar: nouns, verbs, and adjectives all exist in languages that have never interacted. Chomsky would attribute this to the universal grammar. The numerous languages and infinite number of word combinations are all governed by a finite number of rules. (6) Charles Henry suggests that the material nature of the brain lends itself to universal grammar. Language, as a function of a limited structure, should also be limited. (7) Universal grammar is the brain's method for limiting and processing language.

A possible explanation for the critical period is that as the brain matures, access to the universal grammar becomes restricted, and the brain must use different mechanisms to process language. Some suggest that the LAD needs daily use to prevent the degenerative effects of aging. Others say that the brain filters input differently during childhood, giving the LAD a different type of input than it receives in adulthood. (8) Current research has challenged the critical period altogether. In a recent study, adults learning a second language were able to process it (as shown through event-related potentials) in the same way that another group of adults processed their first language. (9)

So where does this leave me? Is my mom right, or has she been misinformed? The observation that children learn languages (especially their first) at a remarkable rate cannot be denied. But the lack of uniformity in the success rate of second language learning leads me to believe that the Critical Period Hypothesis is too rigid. The difficulty in learning a new language as an adult is likely a combination of a less accessible LAD, a brain out of practice at accessing it, a complex set of input, and the self-consciousness that comes with adulthood. This final reason is very important. We interact with language differently as children, because we are not as afraid of making mistakes and others have different expectations of us, resulting in a different type of linguistic interaction. [...]
I enjoyed this, because I'm attempting to learn Spanish, and I've been reading a lot about the differences in the ways children learn a 2nd language, compared to the ways adults learn. Both can be successful, but it's important to find the right approach, particularly for adults, I think. Read the whole thing, for the many embedded footnotes and reference links.

And if you think you are too old to learn a 2nd language, you should read these links too:

Why adults are better learners than kids (So NO, you’re not too old)

The linguistic genius of adults: Research confirms we’re better learners than kids!

Breaking Down the Language Barriers

     

Saturday, August 09, 2014

Would robots be better or worse for people?

There are conflicting opinions:

Pew: Split views on robots’ employment benefits
WASHINGTON — In 2025, self-driving cars could be the norm, people could have more leisure time and goods could become cheaper. Or, there could be chronic unemployment and an even wider income gap, human interaction could become a luxury and the wealthy could live in walled cities with robots serving as labor.

Or, very little could change.

A new survey released Wednesday by the Pew Research Center’s Internet Project and Elon University’s Imagining the Internet Center found that, when asked about the impact of artificial intelligence on jobs, nearly 1,900 experts and other respondents were divided over what to expect 11 years from now.

Forty-eight percent said robots would kill more jobs than they create, and 52 percent said technology will create more jobs than it destroys.

Respondents also varied widely when asked to elaborate on their expectations of jobs in the next decade. Some said that self-driving cars would be common, eliminating taxi cab and long-haul truck drivers. Some said that we should expect the wealthy to live in seclusion, using robot labor. Others were more conservative, cautioning that technology never moves quite as fast as people expect and humans aren’t so easily replaceable.

“We consistently underestimate the intelligence and complexity of human beings,” said Jonathan Grudin, principal researcher at Microsoft, who recalls that 40 years ago, people said that advances in computer-coding language were going to kill programming jobs.

Even as technology removed jobs such as secretaries and operators, it created brand new jobs, including Web marketing, Grudin said. And, as Grudin and other survey responders noted, 11 years isn’t much time for significant changes to take place, anyway.

Aaron Smith, senior researcher with the Pew Research Center’s Internet Project, said the results were unusually divided. He noted that in similar Pew surveys about the Internet over the past 12 years, there tended to be general consensus among the respondents, which included research scientists and a range of others, from business leaders to journalists. [...]
It goes on to give more opinions from educated people who make good cases for their positions. Reading them all, it seems like no one can say exactly how it's going to play out, though a common theme of many of the opinions is that, over time, there may indeed be fewer jobs for people. And what changes will THAT bring? That seems to be the big question underlying it all.

     

A real "Warp Drive" for Space Travel

I had posted about this previously. Here is a video, talking about a possible prototype, if experiments on earth justify further research:



     

Saturday, August 02, 2014

A propulsion drive without fuel?

Yes, and it may take us to Mars:

EmDrive Is an Engine That Breaks the Laws of Physics and Could Take Us to Mars
An experimental engine is gaining acceptance among scientists, and could introduce a new era of space travel — it only had to break a law of physics to do so.

The picture, below, is of the EmDrive. It uses electricity to generate microwaves, which then bounce around in a closed space and generate thrust. The drive does not need propellant, an important part of current space-travel mechanics.


The force generated by the drive is not particularly strong, but the implications are big. Multiple independent experiments have now replicated the drive's ability to generate thrust, albeit with varying success. Using panels to convert solar energy into electricity and then into thrust opens the door to perpetual space travel fueled by the stars.

Scientists were slow to warm up to the EmDrive since it violates the law of the conservation of momentum. In addition to not being sure why it works — current theories rely on quantum mechanics — scientists also have some pretty good ideas why it shouldn't work. [...]
Follow the link for pics, video and embedded links.
     

Thursday, July 31, 2014

The evolution of AI (Artificial Intelligence)

I've posted previously about how slowly AI is progressing, and how we won't have something approaching human intelligence anytime soon. But, eventually, as AI evolves, it could start working on itself, and then start advancing very quickly:


How Artificial Superintelligence Will Give Birth To Itself
There's a saying among futurists that a human-equivalent artificial intelligence will be our last invention. After that, AIs will be capable of designing virtually anything on their own — including themselves. Here's how a recursively self-improving AI could transform itself into a superintelligent machine.

When it comes to understanding the potential for artificial intelligence, it's critical to understand that an AI might eventually be able to modify itself, and that these modifications could allow it to increase its intelligence extremely fast.

Passing a Critical Threshold

Once sophisticated enough, an AI will be able to engage in what's called "recursive self-improvement." As an AI becomes smarter and more capable, it will subsequently become better at the task of developing its internal cognitive functions. In turn, these modifications will kickstart a cascading series of improvements, each one making the AI smarter at the task of improving itself. It's an advantage that we biological humans simply don't have.

When it comes to the speed of these improvements, Yudkowsky says it's important to not confuse the current speed of AI research with the speed of a real AI once built. Those are two very different things. What's more, there's no reason to believe that an AI won't show a sudden huge leap in intelligence, resulting in an ensuing "intelligence explosion" (a better term for the Singularity). He draws an analogy to the expansion of the human brain and prefrontal cortex — a key threshold in intelligence that allowed us to make a profound evolutionary leap in real-world effectiveness; "we went from caves to skyscrapers in the blink of an evolutionary eye."

The Path to Self-Modifying AI

Code that's capable of altering its own instructions while it's still executing has been around for a while. Typically, it's done to reduce the instruction path length and improve performance, or to simply reduce repetitively similar code. But for all intents and purposes, there are no self-aware, self-improving AI systems today.

But as Our Final Invention author James Barrat told me, we do have software that can write software.

"Genetic programming is a machine-learning technique that harnesses the power of natural selection to find answers to problems it would take humans a long time, even years, to solve," he told io9. "It's also used to write innovative, high-powered software."

For example, Primary Objects has embarked on a project that uses simple artificial intelligence to write programs. The developers are using genetic algorithms imbued with self-modifying, self-improving code and the minimalist (but Turing-complete) brainfuck programming language. They've chosen this language as a way to challenge the program — it has to teach itself from scratch how to do something as simple as writing "Hello World!" with only eight simple commands. But calling this an AI approach is a bit of a stretch; the genetic algorithms are a brute force way of getting a desirable result. That said, a follow-up approach in which the AI was able to generate programs for accepting user input appears more promising.
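To make the genetic-algorithm idea concrete, here's a minimal toy sketch (my own illustration, not the Primary Objects code): a population of random strings is repeatedly mutated, and the candidates closest to "Hello World!" are kept, until the target emerges. It's brute force, just as the article says, but it shows why the technique works at all.

```python
import random

TARGET = "Hello World!"
CHARSET = [chr(c) for c in range(32, 127)]  # printable ASCII

def fitness(candidate):
    # Count how many characters already match the target.
    return sum(a == b for a, b in zip(candidate, TARGET))

def mutate(candidate, rate=0.1):
    # Randomly replace characters; mutation drives the search.
    return "".join(random.choice(CHARSET) if random.random() < rate else ch
                   for ch in candidate)

def evolve(pop_size=100, seed=0):
    random.seed(seed)
    population = ["".join(random.choice(CHARSET) for _ in TARGET)
                  for _ in range(pop_size)]
    generation = 0
    while True:
        population.sort(key=fitness, reverse=True)
        best = population[0]
        if best == TARGET:
            return best, generation
        # Keep the top 10% and refill the rest with mutated copies;
        # elitism guarantees fitness never goes backwards.
        elite = population[:pop_size // 10]
        population = elite + [mutate(random.choice(elite))
                              for _ in range(pop_size - len(elite))]
        generation += 1

best, gens = evolve()
print(best, "found after", gens, "generations")
```

Nothing here "understands" the problem; the selection pressure alone converges on the answer, which is exactly why calling this sort of thing AI is a bit of a stretch.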

Relatedly, Larry Diehl has done similar work using a stack-based language.

Barrat also told me about software that learns — programming techniques that are grouped under the term "machine learning."

The Pentagon is particularly interested in this game. Through DARPA, it's hoping to develop a computer that can teach itself. Ultimately, it wants to create machines that are able to perform a number of complex tasks, like unsupervised learning, vision, planning, and statistical model selection. These computers will even be used to help us make decisions when the data is too complex for us to understand on our own. Such an architecture could represent an important step in bootstrapping — the ability for an AI to teach itself and then re-write and improve upon its initial programming. [...]

It goes on to discuss ways we might try to control AI self-evolution, and reasons why such methods may - or may not - work. Read the whole thing, for many embedded links, and more.

     

A Chinese Fly from Hell

This looks scary:


Giant flying bug with fangs discovered in China
Researchers in China have found what is clearly the most frightening looking insect ... ever.

What's being called the Giant Dobsonfly has an 8.3-inch wingspan and snake-like fangs.

It's not entirely clear how much force would have to be applied to kill the dobsonfly or what sound is made by the insect as it is squashed.

But, we're entirely sure that the sound upon discovering a dobsonfly is a loud shriek, scream or cry.

How long until this fly is found in the states?

Exactly. [...]
It looks like something from the SyFy channel. It doesn't say that it can harm people, so perhaps it just looks scary? But what are those fangs for?

Apparently it lives in pristine water, and is very sensitive to water changes, even changes in pH. Follow the link, for video and more.

     

Friday, June 20, 2014

Buddhism: a philosophy for the 21st century?

NOT a religion, but a philosophy. I've recently finished reading this book:


Awakening the Buddha Within: Tibetan Wisdom for the Western World
Lama Surya Das, the most highly trained American lama in the Tibetan tradition, presents the definitive book on Western Buddhism for the modern-day spiritual seeker.

The radical and compelling message of Buddhism tells us that each of us has the wisdom, awareness, love, and power of the Buddha within; yet most of us are too often like sleeping Buddhas. In Awakening the Buddha Within, Surya Das shows how we can awaken to who we really are in order to lead a more compassionate, enlightened, and balanced life. It illuminates the guidelines and key principles embodied in the noble Eight-Fold Path and the traditional Three Enlightenment Trainings common to all schools of Buddhism:

Wisdom Training: Developing clear vision, insight, and inner understanding -- seeing reality and ourselves as we really are.

Ethics Training: Cultivating virtue, self-discipline, and compassion in what we say and do.

Meditation Training: Practicing mindfulness, concentration, and awareness of the present moment.

With lively stories, meditations, and spiritual practices, Awakening the Buddha Within is an invaluable text for the novice and experienced student of Buddhism alike.
I actually struggled with this book quite a bit. There were several times where I almost quit reading it.

There was much I simply could not agree with. In fact, it very much reminded me of why I never pursued Buddhism, even though I like a lot of the things the Buddha is said to have taught. Every time I've tried to learn more about Buddhism, there would be something irrational that would put me off.

I felt that many times while reading this book, but it was a mixture of things, it wasn't all off-putting. I persevered with it, and by the end I was glad I did. I bought the book in the first place because I was hoping that it would:

A.) Teach me about Tibetan Buddhism.

B.) Be an interesting story of the life Surya Das (formerly known as Jeffrey Miller from Long Island) chose for himself, becoming a Tibetan Buddhist Lama.

C.) Teach me some things I could integrate into my own life.

In the end, I have to say it did all of those things. Surya Das has led a life I would not have liked to have had, but thankfully he did it and I got to read about it and get the benefit of his insights from it, without having to do it myself. Sometimes you can learn a lot from a book, even if you don't agree with much of what it says. It challenges your ideas and makes you think. This was one of those books.

While reading the book, I found myself looking up a lot of things he was referring to on the internet. It was on-line that I found this essay by Sam Harris. I found myself agreeing with much of it:

Killing the Buddha
“Kill the Buddha,” says the old koan. “Kill Buddhism,” says Sam Harris, author of The End of Faith, who argues that Buddhism’s philosophy, insight, and practices would benefit more people if they were not presented as a religion.

The ninth-century Buddhist master Lin Chi is supposed to have said, “If you meet the Buddha on the road, kill him.” Like much of Zen teaching, this seems too cute by half, but it makes a valuable point: to turn the Buddha into a religious fetish is to miss the essence of what he taught. In considering what Buddhism can offer the world in the twenty-first century, I propose that we take Lin Chi’s admonishment rather seriously. As students of the Buddha, we should dispense with Buddhism.

[...]

For the fact is that a person can embrace the Buddha’s teaching, and even become a genuine Buddhist contemplative (and, one must presume, a buddha) without believing anything on insufficient evidence. The same cannot be said of the teachings for faith-based religion. In many respects, Buddhism is very much like science. One starts with the hypothesis that using attention in the prescribed way (meditation), and engaging in or avoiding certain behaviors (ethics), will bear the promised result (wisdom and psychological well-being). This spirit of empiricism animates Buddhism to a unique degree. For this reason, the methodology of Buddhism, if shorn of its religious encumbrances, could be one of our greatest resources as we struggle to develop our scientific understanding of human subjectivity.

[...]

Religion is also the only area of our discourse in which people are systematically protected from the demand to give evidence in defense of their strongly held beliefs. And yet, these beliefs often determine what they live for, what they will die for, and—all too often—what they will kill for. This is a problem, because when the stakes are high, human beings have a simple choice between conversation and violence. At the level of societies, the choice is between conversation and war. There is nothing apart from a fundamental willingness to be reasonable—to have one’s beliefs about the world revised by new evidence and new arguments—that can guarantee we will keep talking to one another. Certainty without evidence is necessarily divisive and dehumanizing.

Therefore, one of the greatest challenges facing civilization in the twenty-first century is for human beings to learn to speak about their deepest personal concerns—about ethics, spiritual experience, and the inevitability of human suffering—in ways that are not flagrantly irrational. Nothing stands in the way of this project more than the respect we accord religious faith. While there is no guarantee that rational people will always agree, the irrational are certain to be divided by their dogmas.

[...]

What the world most needs at this moment is a means of convincing human beings to embrace the whole of the species as their moral community. For this we need to develop an utterly nonsectarian way of talking about the full spectrum of human experience and human aspiration. We need a discourse on ethics and spirituality that is every bit as unconstrained by dogma and cultural prejudice as the discourse of science is. What we need, in fact, is a contemplative science, a modern approach to exploring the furthest reaches of psychological well-being. It should go without saying that we will not develop such a science by attempting to spread “American Buddhism,” or “Western Buddhism,” or “Engaged Buddhism.”

If the methodology of Buddhism (ethical precepts and meditation) uncovers genuine truths about the mind and the phenomenal world—truths like emptiness, selflessness, and impermanence—these truths are not in the least “Buddhist.” No doubt, most serious practitioners of meditation realize this, but most Buddhists do not. Consequently, even if a person is aware of the timeless and noncontingent nature of the meditative insights described in the Buddhist literature, his identity as a Buddhist will tend to confuse the matter for others.

There is a reason that we don’t talk about “Christian physics” or “Muslim algebra,” though the Christians invented physics as we know it, and the Muslims invented algebra. Today, anyone who emphasizes the Christian roots of physics or the Muslim roots of algebra would stand convicted of not understanding these disciplines at all. In the same way, once we develop a scientific account of the contemplative path, it will utterly transcend its religious associations. Once such a conceptual revolution has taken place, speaking of “Buddhist” meditation will be synonymous with a failure to assimilate the changes that have occurred in our understanding of the human mind.

It is as yet undetermined what it means to be human, because every facet of our culture—and even our biology itself—remains open to innovation and insight. We do not know what we will be a thousand years from now—or indeed that we will be, given the lethal absurdity of many of our beliefs—but whatever changes await us, one thing seems unlikely to change: as long as experience endures, the difference between happiness and suffering will remain our paramount concern. We will therefore want to understand those processes—biochemical, behavioral, ethical, political, economic, and spiritual—that account for this difference. [...]

Read the whole essay for embedded links and more. Harris expounds further on some of the ideas mentioned in the above excerpts, as he makes his case, and it's a good read. But back to the "Awakening the Buddha Within" book:

At the end of that book, even Jeffrey - oops, excuse me, "Surya Das" - said there were many types of Buddhism, and that one didn't have to embrace or believe in many of the beliefs held by Buddhists, or even believe in God. In the end, he said you could take from it what you wanted or needed.

I appreciated the lack of insistence on following dogma, but also found it a little ironic that he seemed to be indirectly supporting at least a portion of Sam Harris's essay; that Buddhist teachings don't have to be mixed up with religion.

I would not go so far as to say the two authors agree, but they seem close to agreement on some points. I think perhaps that Das is saying the teachings don't have to be mixed with religion, whereas Harris is more forcefully arguing that they should not be. That's not agreement, but pretty darn close.


Also see:

On criticizing fellow Buddhists
The tyranny of "Consensus Buddhism"!

     

Sunday, February 16, 2014

Androids: Fantasy VS Reality

The fantasy Android:



But what is the reality of Artificial Intelligence? The harsh truth:

Supercomputer Takes 40 Minutes To Model 1 Second of Brain Activity
Despite rumors, the singularity, or point at which artificial intelligence can overtake human smarts, still isn't quite here. One of the world's most powerful supercomputers is still no match for the humble human brain, taking 40 minutes to replicate a single second of brain activity.

Researchers in Germany and Japan used K, the fourth-most powerful supercomputer in the world, to simulate brain activity. With more than 700,000 processor cores and 1.4 million gigabytes of RAM, K simulated the interplay of 1.73 billion nerve cells and more than 10 trillion synapses, or junctions between brain cells. Though that may sound like a lot of brain cells and connections, it represents just 1 percent of the human brain's network.

The long-term goal is to make computing so fast that it can simulate the mind— brain cell by brain cell— in real-time. That may be feasible by the end of the decade, researcher Markus Diesmann, of the University of Freiburg, told the Telegraph.
It "may be" feasible by the end of the decade? To catch up with one second of human brain activity? Even if it becomes feasible, we're talking about a supercomputer. It's a long way from the android brain in the video. And yes, computers are advancing very fast. But to catch up with a human brain, much less surpass it... it won't happen tomorrow.
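A quick back-of-the-envelope calculation, using just the figures quoted above and the (unrealistic) assumption that compute scales linearly, shows how wide the gap still is:

```python
# Figures from the article: 40 minutes of supercomputer time simulated
# 1 second of activity for about 1% of the brain's network.
sim_wall_clock_s = 40 * 60          # 2400 seconds of compute...
sim_brain_time_s = 1                # ...for 1 second of brain activity
slowdown = sim_wall_clock_s / sim_brain_time_s
print(f"Slowdown vs. real time: {slowdown:.0f}x")

fraction_of_brain = 0.01
# Naive linear scaling: a real-time, whole-brain simulation would need
# roughly this factor more compute than the K computer delivered.
shortfall = slowdown / fraction_of_brain
print(f"Total compute shortfall: ~{round(shortfall):,}x")
```

So even by this crude estimate, K would need on the order of 240,000 times more effective compute to run a whole brain in real time. Real brain simulation scales worse than linearly, so the true gap is bigger still.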

     

If cosmic rays could play classical music instruments...

Oh, wait a minute! They can:

NASA Moon Probe Broadcasts Space Weather Symphony Live Online
A NASA probe orbiting the moon is broadcasting live cosmic tunes from a computer near you.

NASA's Lunar Reconnaissance Orbiter (LRO) has a new internet radio station for people who want to check out space weather through music. Operating in real time — as long as the craft isn't behind the moon — the station plays music that changes in pitch and instrument based on how much radiation the spacecraft experiences.

"Our minds love music, so this offers a pleasurable way to interface with the data," project leader Mary Quinn of the University of New Hampshire, Durham, said in a statement. "It also provides accessibility for people with visual impairment."



Cloudy, with a chance of B-flat

Launched in 2009, LRO orbits the moon as it maps its surface. The craft carries with it a Cosmic Ray Telescope for the Effects of Radiation, or CRaTER. Six detectors on the instrument measure the radiation from solar activity and galactic cosmic rays.

The detectors measure how many energetic particles are registered each second and send the information to CRaTER Live Radio, where software converts the measurements into pitches in a four-octave scale. Six pitches are played each second — one for each detector. Low pitches indicate high activity, while higher pitches indicate lower counts.

As activity increases, the musical instruments scale as well. The main instrument at the lowest level of activity is a piano. Two instruments up, it becomes a marimba. Further activity is indicated by a steel drum or a guitar, while the peak of normal activity is indicated by the strum of a banjo.

During the course of a significant solar event such as a solar flare, radiation activity may exceed the normal operating range. In such a case, the software creates a second operating range, with the piano at the bottom and banjo at the top, but with the background violin and cello shifted in scale. A drop in pitch for the background instruments indicates a move to the secondary range.
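The mapping the article describes — counts per second to pitches on a four-octave scale, low pitch meaning high activity, and instruments switching as activity rises — can be sketched in a few lines. Note this is a hypothetical reconstruction: the thresholds, ranges, and band boundaries below are invented for illustration, not taken from NASA's software.

```python
# Hypothetical sketch of a CRaTER-style sonification mapping.
# All constants here are made-up illustrations, not real mission values.

SCALE_NOTES = 4 * 12 + 1            # four octaves of semitones
MAX_COUNT = 1000                    # assumed top of the "normal" range

INSTRUMENTS = ["piano", "marimba", "steel drum", "guitar", "banjo"]

def count_to_pitch(count):
    """High particle counts map to LOW pitches, as the article describes."""
    count = max(0, min(count, MAX_COUNT))
    step = (MAX_COUNT - count) / MAX_COUNT     # invert: high count -> low pitch
    return round(step * (SCALE_NOTES - 1))     # semitone index, 0 = lowest

def count_to_instrument(count):
    """Busier radiation weather picks an instrument further up the list."""
    band = min(int(count / MAX_COUNT * len(INSTRUMENTS)), len(INSTRUMENTS) - 1)
    return INSTRUMENTS[band]

# One second of output: one pitch per detector, six detectors.
detector_counts = [12, 250, 980, 40, 600, 330]
second = [(count_to_instrument(c), count_to_pitch(c)) for c in detector_counts]
print(second)
```

In the real system the instrument presumably tracks overall activity rather than each detector individually, but the sketch captures the inverted count-to-pitch idea.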

24-hours of space tunes

LRO broadcasts 24 hours a day, and is live at all times except when the craft travels behind the moon. During this blackout period, the station reuses the previous hour's activity, changing the sound of the background bongo drum and muting the chiming triangle.

The process, known as sonification, converts data into sound and has been utilized in a number of fields on a variety of missions, including Voyager 1, Voyager 2 and Kepler. [...]
The website where you can listen to it live is here:

CRaTER Live Internet Radio Station Sonification/Music Design

Give the page a minute or two to load. In the upper left-hand corner is a sound bar that controls the music; it should start playing automatically. The site has a lot of detailed information about how it all works.

I've checked it out a few times. The "Music" is probably more ambient than musical, though it can vary to a considerable degree, depending on the space weather. Sometimes it sounds more pleasant than others.

Your mileage may vary! ;-)


     

Saturday, February 15, 2014

Autonomous Robots are Here Already

Robot construction crew works autonomously, is kind of adorable
Inspired by termite behavior, engineers and scientists at Harvard have developed a team of robots that can build without supervision.
[...] Termes, the result of a four-year project, is a collective system of autonomous robots that can build complex, three-dimensional structures such as towers, castles, and pyramids without any need for central command or dedicated roles. They can carry bricks, build stairs, climb them to reach higher levels, and add bricks to a structure.

"The key inspiration we took from termites is the idea that you can do something really complicated as a group, without a supervisor, and secondly that you can do it without everybody discussing explicitly what's going on, but just by modifying the environment," said principal investigator Radhika Nagpal, Fred Kavli Professor of Computer Science at Harvard SEAS.

The way termites operate is a phenomenon called stigmergy. This means that the termites don't observe each other, but changes in the environment around them -- much like the way ants leave trails for each other.

The Termes robots operate on the same principle. Each individual robot doesn't know how many other robots are operating, but all are able to gauge changes in the structure and readjust on the fly accordingly.

This means that if one robot breaks down, it does not affect the rest of the robots. Engineers simply program the robots with blueprints and leave them alone to perform the work.
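The stigmergy principle — coordination through the shared structure rather than through communication — is easy to model. Here's a toy sketch in that spirit; it's my own illustration, not the Termes control code:

```python
import random

# Toy stigmergy model: each builder sees only the blueprint and the
# current state of the wall, never the other builders. Coordination
# emerges from the shared environment alone.

blueprint = [3, 1, 4, 1, 5]        # target brick height per column
wall = [0] * len(blueprint)        # shared environment the robots modify

def builder_step(rng):
    """One robot action: inspect the wall, add a brick where one is needed."""
    unfinished = [i for i, h in enumerate(wall) if h < blueprint[i]]
    if not unfinished:
        return False               # structure complete; nothing to do
    wall[rng.choice(unfinished)] += 1
    return True

def run(num_robots=4, seed=1):
    rng = random.Random(seed)
    rounds = 0
    while True:
        # Every robot takes one independent look at the shared wall.
        acted = [builder_step(rng) for _ in range(num_robots)]
        if not any(acted):
            break                  # every column has reached blueprint height
        rounds += 1
    return rounds

run()
print("built:", wall)              # matches the blueprint
```

Because all the state lives in the wall itself, removing a robot (or adding more) changes nothing but the pace — the same property the article notes: if one robot breaks down, the rest carry on.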

The robots at the moment are quite small -- about the size of a toy car -- but are quite simple, operating on just four simple types of sensors and three actuators. According to the team, they could be easily scaled up or down to suit the needs of the project, and could be deployed in areas where it's difficult for humans to work -- the moon, for instance, although that's an extreme example.

"It may be that in the end you want something in between the centralized and the decentralized system -- but we've proven the extreme end of the scale: that it could be just like the termites," Nagpal said. "And from the termites' point of view, it's working out great." [...]
Once more, the future is here. Follow the link for video and embedded links.