Wednesday, November 27, 2013

Is that you, HAL? No, it's NEIL: Google, US Navy pour money into 'associative' AI brain • The Register

"...The Never Ending Image Learner is a new approach to weak artificial intelligence systems that piggybacks on the immense tech fielded by companies like Google, and represents the bleeding edge of computer science research...."


Friday, November 15, 2013

AI research at Google

Google AI robot leaked/hoaxed via Reddit

On December 5, 2011, someone posted anonymously to reddit claiming to be a disgruntled former employee of the Google X Lab. See the discussion thread: I recently left Google X. The original post by J32PMXR was deleted after two hours, but here's a saved copy of what they wrote:
"This is in total violation of the NDA, but I don't care anymore. Sue me.
The central focus of Google X for the past few years has been a highly advanced artificial intelligence robot that leverages the underlying technology of many popular Google programs. As of October [2011] (the last time I was around the project), the artificial intelligence had passed the Turing Test 93% of the time via an hour long IM style conversation. IM was chosen to isolate the AI from the speech synthesizer and physical packaging of the robot.
The robot itself isn't particularly advanced because the focus was not on mechanics, but rather the software. It is basically a robotish looking thing on wheels. Speech recognition is somewhat better than what you would get with normal speech input, mostly because of the use of high quality microphones and lip-reading assistance.
I have had the chance to interact with the robot personally and it is honestly the most amazing thing that I have ever seen. I like to think of it like Stephen Hawking because it is extremely smart and you can interact with it naturally, but it is incapable of physically doing much. There is a planned phase two for development of an advanced robotics platform."
Follow-up comments by J32PMXR say the robot has a suite of sensors including optical, laser, infrared, ultrasonic, and depth cameras. It can supposedly lip-read, although that might be restricted to detecting expressions such as a smile. Most of the processing is done onboard; the internet is used for external information. A stated goal is to enable intelligent conversation with your mobile phone, similar to Apple's Siri but more advanced.
Replies to the posting express skepticism, but they also admit that Google is likely working in a similar direction. Another former Google employee replies that the idea of using a robot is non-Googly. They say Google would approach AI via statistical methods and by throwing massive computing power at the problem. Larry Page has hinted at this approach himself, see the quotes above.
Consensus in the reddit discussion is that this was probably a hoax. It's quite possible that Google does indeed have some kind of AI-enabled robot in its lab. But the chances seem very low that a group of staff were fired and that one of them posted about it in public.

Monday, July 1, 2013

Semantic satiation

Semantic satiation - Wikipedia, the free encyclopedia:

I've come across this before... "chicken"


Semantic satiation (also semantic saturation) is a psychological phenomenon in which repetition causes a word or phrase to temporarily lose meaning for the listener, who then processes the speech as repeated meaningless sounds.

History and research

The phrase "semantic satiation" was coined by Leon Jakobovits James in his doctoral dissertation at McGill University, Montreal, Canada, awarded in 1962.[1] Prior to that, the expression "verbal satiation" had been used along with terms that express the idea of mental fatigue. The dissertation listed many of the names others had used for the phenomenon:
"Many other names have been used for what appears to be essentially the same process: inhibition (Herbert, 1824, in Boring, 1950), refractory phase and mental fatigue (Dodge, 1917; 1926a), lapse of meaning (Bassett and Warne, 1919), work decrement (Robinson and Bills, 1926), cortical inhibition (Pavlov, 192?), adaptation (Gibson, 1937), extinction (Hilgard and Marquis, 1940), satiation (Kohler and Wallach, 1940), reactive inhibition (Hull, 19113 [sic]), stimulus satiation (Glanzer, 1953), reminiscence (Eysenck, 1956), verbal satiation (Smith and Raygor, 1956), and verbal transformation (Warren, 1961b)." (From Leon Jakobovits James, 1962)
The dissertation presents several experiments that demonstrate the operation of the semantic satiation effect in various cognitive tasks such as rating words and figures that are presented repeatedly in a short time, verbally repeating words then grouping them into concepts, adding numbers after repeating them out loud, and bilingual translations of words repeated in one of the two languages. In each case subjects would repeat a word or number for several seconds, then perform the cognitive task using that word. It was demonstrated that repeating a word prior to its use in a task made the task somewhat more difficult.
The explanation for the phenomenon was that verbal repetition repeatedly aroused a specific neural pattern in the cortex which corresponds to the meaning of the word. Rapid repetition causes both the peripheral sensorimotor activity and the central neural activation to fire repeatedly, which is known to cause reactive inhibition, hence a reduction in the intensity of the activity with each repetition.
Jakobovits James (1962) calls this conclusion the beginning of "experimental neurosemantics."

Saturday, June 29, 2013

A revolutionary new 3D digital brain atlas : McGill Reporter

A revolutionary new 3D digital brain atlas : McGill Reporter:

Cool.
By Anita Kar:
"Imagine being able to zoom into the brain to see various cells the way we zoom into Google maps of the world and to look at houses on a street. Although the brain is considered the most complex structure in the universe with 86 billion neurons, zooming in on it is now possible thanks to a new brain atlas with unprecedented resolution."

Friday, June 21, 2013

What do memories look like? | KurzweilAI

What do memories look like? | KurzweilAI:

"imaged through cranial windows"

oi..

Creating a IWin32Window from a Win32 Handle

Creating a IWin32Window from a Win32 Handle: "Creating the IWin32Window wrapper class"

Thank you, sir. Here's my code, based on yours.

using System;
using System.Windows.Forms;

// Wraps a raw Win32 window handle so it can be passed to WinForms APIs
// that expect an IWin32Window (e.g. as the owner argument of MessageBox.Show).
// Usage: MessageBox.Show( WindowWrapper.CreateWindowWrapper( hwnd ), "Hello" );
public class WindowWrapper : IWin32Window {
    public static WindowWrapper CreateWindowWrapper( IntPtr handle ) {
        return new WindowWrapper( handle );
    }

    // Private: create instances through the factory method above.
    private WindowWrapper( IntPtr handle ) {
        this.Handle = handle;
    }

    // The wrapped native window handle, as required by IWin32Window.
    public IntPtr Handle { get; private set; }
}

Crowdfunded project to create "world's smartest robot" - Boing Boing

Crowdfunded project to create "world's smartest robot" - Boing Boing:

Well, it's a start at least.

Wednesday, June 12, 2013

CHRIS 4th year demo, Scenario 1b - YouTube

CHRIS 4th year demo, Scenario 1b - YouTube:

Description:
Excerpt of the EU FP7 CHRIS project 4th year demo.
The robot is able to segment the object using motion and learns to associate a given name with it through an interactive phase with the user. To successfully grasp the novel object, the robot is trained via kinesthetic teaching.

Credits: Ciliberto C., Lalleé S., Natale L., Pattacini U., Tikhanoff V.

Tuesday, June 11, 2013

Artificial

Artificial intelligence: on the human brain

33rd Square | Verbal IQ of a Four-Year Old Achieved in AI System

33rd Square | Verbal IQ of a Four-Year Old Achieved in AI System:


Again, a cool concept (no pun intended), but this is still preloading an AI with information and rules.
A truer AI should be able to learn like a child does.

AI made of paper.

Sunday, June 2, 2013

Artificial intelligence ‘sees’ visual illusion « Mind Hacks

Artificial intelligence ‘sees’ visual illusion « Mind Hacks:


A study just published in PLoS Computational Biology reports that an artificial intelligence system trained to make sense of a simulated natural environment is susceptible to some of the same visual illusions that humans fall for.
In one of these, the ‘Hermann grid’ illusion, you may be able to ‘see’ fuzzy patches of grey in the white stripes, despite the fact that there is no grey in the image.
David Corney and Beau Lotto, researchers working in the Lotto Lab (which has a wonderful website by the way), have been training artificial intelligence systems to distinguish surfaces in a simulated natural environment with lots of ‘dead leaf’-like shapes.
When training these sorts of systems, the idea is not to program them with specific rules, but to present an image and let the neural network make a guess.
The researchers then ‘tell’ the AI system whether it is correct in its guess, and it adjusts itself to try and reduce the extent of the error on the next guess. After many learning trials, these sorts of ‘back propagation‘ neural networks can make distinctions between quite complex stimuli.
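The guess-and-correct loop described above can be sketched in a few lines. This is a generic toy illustration, not the Corney and Lotto code: a single sigmoid neuron trained with the delta rule (the one-layer special case of back-propagation) learning logical AND, with made-up names and parameters throughout:

```python
import math
import random

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Toy task: learn logical AND from two binary inputs.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

# One neuron: two weights plus a bias, started at small random values.
w = [random.uniform(-0.5, 0.5) for _ in range(2)]
b = 0.0
lr = 0.5  # learning rate

for epoch in range(5000):
    for (x1, x2), target in data:
        guess = sigmoid(w[0] * x1 + w[1] * x2 + b)
        # "Tell" the network how wrong it was, and nudge the weights
        # in the direction that reduces the error (gradient descent).
        err = target - guess
        grad = err * guess * (1 - guess)
        w[0] += lr * grad * x1
        w[1] += lr * grad * x2
        b += lr * grad

print(round(sigmoid(w[0] * 1 + w[1] * 1 + b)))  # → 1  (1 AND 1)
print(round(sigmoid(w[0] * 0 + w[1] * 0 + b)))  # → 0  (0 AND 0)
```

A full back-propagation network does the same thing, but propagates each output error backwards through hidden layers of neurons rather than adjusting a single one.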
In this case, Corney and Lotto decided that once the system was fully trained to complete its task successfully, they would test it with some visual illusions experienced by humans.
Interestingly, the AI system was susceptible to the Hermann grid illusion, sensing ‘grey’ where there was none. Other illusions produced similar results.
The fact that both humans and the AI system ‘fall’ for the same illusions suggests that they take advantage of visual abilities that have been shaped by our experience of the visual world.
Link to paper in PLoS Computational Biology (thanks Matt!).
Link to study write-up from the university’s news site.
Link to Lotto Lab website (with loads of cool images and demos).

Friday, May 31, 2013

Minor Status Update

Just a minor status update for AIBrain.

Been working on it when I have the time.
Got a few things figured out in the years since I conceived the initial seed idea of this project.

Sorry I can't share more of the inner workings of AIBrain yet. Keep checking back [every year] though!

AIBrain's HomePage around August 31st, 2009

AIBrain's HomePage:

Heh, more flashbacks...

AIBrain Agent Project around Oct 3rd, 2002

AIBrain Agent Project:


What is AIBrain?
Officially: A program that will learn and respond using inputs and give outputs.
Realistically: A program that doesn't do anything too cool by itself... yet.

A fake interview (humor) with the author, Rick Harker.

Q: Why would someone [like you] want to make an artificial intelligence?
A: To be the first to make a real one.
A: For the glory (social status).
A: For company.
A: Just to do it.
A: To learn more about ourselves.
A: For the money.

Q: What progress have you made?
A: Not too much. I've rebuilt the framework to use threads more easily, made the AIBrain agent more modular, and made the agent internet-capable.

Q: What have you learned?
A: I have learned a great deal about human languages, especially English and all of its quirks. I also had to learn the C++ language in order to implement AIBrain properly.

Q: Do you have any pet iguanas?
A: No.

Q: What are you currently working on?
A: I am trying to combine the internal language I am developing into the memory, grammar, input, output, and learning systems.

Q: How many hours each day do you spend on this program?
A: I don't know. I get dizzy when the sun keeps spinning through the sky like that.

Q: May I ask a question?
A: You just did.

Q: Okay, then can I ask you another question after this one?
A: Yes.

Q: So what is your theory behind your program? Why do you think you will succeed when others haven't?
A: Hey, that's two questions! Oh well. My idea is that since language comes from intelligence, it might be possible to simulate intelligence using language. Kinda like a mirror effect, except that mirror isn't quite the correct word.
A: Because I have a different perspective on life than most people.

Q: When will the rest of the world see a working version of your program?
A: I don't know. Maybe half a year, ten years, or never... I don't know.

Q: What popular personality do you compare to?
A: Dave Barry. "That newspaper funny guy." - Rick Harker

Q: Wh@]]
Error 411: connection to host lost. Please hang up and dial again or your neighbor's dog will be kicked by the IRS.

Q: When did you start working on this program?
A: I started on parts of it back in the spring of 1995.

AIBrain Home Page Archive around December 10, 2002


AIBrain Home Page
Electronics
Robotics
Family & Friends

Monday, April 1, 2013

Cloud Robotics

Cloud Robotics:

...In 2010, James Kuffner at Google introduced the term "Cloud Robotics" to describe a new approach to robotics that takes advantage of the Internet as a resource for massively parallel computation and sharing of vast data resources....

Thursday, March 7, 2013

"Rome is a cartridges Shakespeare dalmatian" - AIBrain 2013/03/07.

Saturday, February 23, 2013

How Fast Is a Blink of an Eye?

How Fast Is a Blink of an Eye? | eHow.com:


Speed

  • The human eye can complete a blink in much less than a second, and, in fact, it's perfectly possible to blink several times in a single second. On average, a human eye takes between 300 and 400 milliseconds to complete a single blink.
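As a quick sanity check on the quoted figures (just throwaway arithmetic, not from the article): at 300 to 400 ms per blink, "several times in a single second" works out to two or three back-to-back blinks.

```python
# Sanity check: at 300-400 ms per blink, how many back-to-back
# blinks fit in one second?
blink_ms_fast, blink_ms_slow = 300, 400

max_blinks = 1000 // blink_ms_fast  # fastest quoted blink
min_blinks = 1000 // blink_ms_slow  # slowest quoted blink

print(max_blinks)  # → 3
print(min_blinks)  # → 2
```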

Wednesday, February 20, 2013

IF STAFF BY A YEAR


IF STAFF
BY A YEAR
I DON'T YOU SHARE
IDEAS SHARE
HIS OWN SHARE IS BUSY
TO MAKE A
SPLENDID BRAIN
THIS ONE IN A
TEN
LET THEM MAKE THIS ONE
OF MY RELATIVES OF THE BRAIN
THIS SHOULD STOP SPRING FOR

Sunday, February 17, 2013

Ultima Online - Wikipedia, the free encyclopedia

Ultima Online - Wikipedia, the free encyclopedia:

Artificial Life Engine

Starr Long, the game's associate producer, explained in 1996:
Nearly everything in the world, from grass to goblins, has a purpose, and not just as cannon fodder either. The 'virtual ecology' affects nearly every aspect of the game world, from the very small to the very large. If the rabbit population suddenly drops (because some gung-ho adventurer was trying out his new mace) then wolves may have to find different food sources (e.g., deer). When the deer population drops as a result, the local dragon, unable to find the food he’s accustomed to, may head into a local village and attack. Since all of this happens automatically, it generates numerous adventure possibilities.
However, this feature never made it beyond the game's beta stage. As Richard Garriott explained:
We thought it was fantastic. We'd spent an enormous amount of time and effort on it. But what happened was all the players went in and just killed everything; so fast that the game couldn't spawn them fast enough to make the simulation even begin. And so, this thing that we'd spent all this time on, literally no-one ever noticed – ever – and we eventually just ripped it out of the game, you know, with some sadness.[11]
So sad that this got ripped out.
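The food-chain fallback Starr Long describes can be sketched in a few lines. This is a hypothetical toy model with invented names and numbers; the real Ultima Online engine is not public.

```python
# Each species' current population in the toy world.
populations = {"rabbit": 100, "deer": 50, "villager": 20}

# Each predator eats its preferred prey first, falling back down
# the list when the preferred population is exhausted.
prey_preferences = {
    "wolf": ["rabbit", "deer"],
    "dragon": ["deer", "villager"],
}

def hunt(predator, needed):
    """Consume `needed` prey, preferring earlier entries in the list."""
    for prey in prey_preferences[predator]:
        taken = min(needed, populations[prey])
        populations[prey] -= taken
        needed -= taken
        if needed == 0:
            return

# A gung-ho adventurer wipes out the rabbits...
populations["rabbit"] = 0
hunt("wolf", 30)    # wolves fall back to deer
hunt("dragon", 30)  # deer are now scarce, so the dragon raids the village

print(populations)  # → {'rabbit': 0, 'deer': 0, 'villager': 10}
```

The "adventure possibilities" emerge from nothing more than this cascade of substitutions.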

Saturday, February 2, 2013

Dead PSU

Argh! The PSU on the database server for AIBrain has died... time to RMA it, because it's still under warranty.

Extricating the power supply from the case is going to be 'fun'... I should upload a pic of before and after. :)