22 July 2014

Talk breathes new life into the alternative communication device market

Talk is an augmentative and alternative communication device that allows people to spell out words using different lengths of breath

For people with disabilities that affect their ability to speak, communicating with others can be very difficult. A new device known as Talk is designed to help: it senses the dots and dashes a person makes with their breath in order to spell out words.

Talk was produced for the Google Science Fair, and is one of the regional finalists from the Asia Pacific and Japan section in the 15-16 age group. It is also up for two additional awards within the Science Fair. The inventor of the device, 16-year-old Arsh Shah Dilbagi, says he believes it is the only augmentative and alternative communication (AAC) device to use breath as an input, that it is the most affordable AAC device available, and that it has the fastest speaking rate of any AAC device.

The Google Science Fair is a science and technology competition open to individuals and teams from ages 13 to 18 years old. It seeks to find projects that have the potential to change the world. Viewing the submissions of the young entrants will fill you with a mixed sense of inadequacy, resigned awe and hope for humanity. Many of them display the kind of lateral thinking long since ground out of adults.

Talk is no different. Conscious that AAC devices can be costly, slow and bulky, Dilbagi set himself the task of improving upon existing products. The design, he felt, should be generic, affordable, faster than comparable devices, portable and should consume less power.

"After quite some research, I hypothesized that a pressure sensor can be used to monitor variations in breath and generate two distinguishable signals," explains Dilbagi in his project description. "These signals can be further processed as a binary language and synthesized into speech accordingly."

Dilbagi decided to go forward with his idea of using breath as an input, having found that it was the input method controllable by the highest proportion of people. A MEMS microphone was found to be sensitive enough to recognize the different breath types, and Dilbagi chose International Morse Code as the device language because it allows users to dictate words with the fewest required signals.

An algorithm is used to distinguish between short and long exhales and a computing engine is used to synthesize the inputted words into speech. Dilbagi opted against using a display in conjunction with the device to ensure it remained simple and portable.
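
The device's actual signal processing is not published, but the decoding step described above, classifying each exhale as a dot or a dash and turning the resulting Morse sequence into a letter, can be sketched roughly as follows. The duration inputs and the 0.4-second threshold are made-up placeholders:

```python
# Illustrative sketch only: assumes exhale durations (in seconds) have
# already been extracted from the breath sensor signal.

MORSE_TO_LETTER = {
    ".-": "A", "-...": "B", "-.-.": "C", "-..": "D", ".": "E",
    "..-.": "F", "--.": "G", "....": "H", "..": "I", ".---": "J",
    "-.-": "K", ".-..": "L", "--": "M", "-.": "N", "---": "O",
    ".--.": "P", "--.-": "Q", ".-.": "R", "...": "S", "-": "T",
    "..-": "U", "...-": "V", ".--": "W", "-..-": "X", "-.--": "Y",
    "--..": "Z",
}

def classify_exhales(durations, threshold=0.4):
    """Turn a list of exhale durations into a dot/dash string."""
    return "".join("." if d < threshold else "-" for d in durations)

def decode_letter(durations):
    """Decode one letter's worth of exhales into a character."""
    return MORSE_TO_LETTER.get(classify_exhales(durations), "?")
```

The same two-symbol alphabet could also drive the command mode the article mentions, where short abbreviated sequences trigger predefined words.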

Talk was tested amongst family and friends before being tested successfully with a person with a speech impairment. In his own controlled tests, Dilbagi found that the device had a near 100 percent accuracy level.

The final design comprises a combined lightweight metal ear-clip and microphone. The microphone can be placed in front of the user's nose or mouth depending on what they find most comfortable. It can be used in communication mode for spelling out words, or command mode for triggering predefined words with abbreviated forms (such as "W" for "water"). An accompanying piece of software helps users to learn how to use the device and what the different Morse Code signals are.

Dilbagi is raising money for its production on Indiegogo where, if the campaign is successful, individuals can secure a Talk ear-clip for US$99. He will find out if his project is a global finalist in the Google Science Fair in August.

Source: Talk, via Gizmag

New technique could boost internet speed 5 to 10 times

random linear network coding

Mathematical equations can make Internet communication via computer, mobile phone or satellite many times faster and more secure than today. Results with software developed by researchers from Aalborg University in collaboration with the US universities the Massachusetts Institute of Technology (MIT) and California Institute of Technology (Caltech) are attracting attention in the international technology media.

New technique could boost internet speed 5 to 10 times
Researchers Morten Videbæk Pedersen and Janus Heide (Photo: Aalborg University)

A new study uses a four-minute-long mobile video as an example. The method used by the Danish and US researchers in the study resulted in the video being downloaded five times faster than with state-of-the-art technology. The video also streamed without interruptions. In comparison, the original video got stuck 13 times along the way.

"This has the potential to change the entire market. In experiments with our network coding of Internet traffic, equipment manufacturers experienced speeds that are five to ten times faster than usual. And this technology can be used in satellite communication, mobile communication and regular Internet communication from computers," says Frank Fitzek, Professor in the Department of Electronic Systems and one of the pioneers in the development of network coding.

switch to 5G networks might bring more intelligence at the node level
The switch to 5G networks might bring more intelligence at the node level (Image: Frank Fitzek)

Goodbye to the packet principle

Internet communication formats data into packets. Error control ensures that the signal arrives in its original form, but it often means that it is necessary to send some of the packets several times and this slows down the network. The Danish and US researchers instead are solving the problem with a special kind of network coding that utilizes clever mathematics to store and send the signal in a different way. The advantage is that errors along the way do not require that a packet be sent again. Instead, the upstream and downstream data are used to reconstruct what is missing using a mathematical equation.

"With the old systems you would send packet 1, packet 2, packet 3 and so on. We replace that with a mathematical equation. We don’t send packets. We send a mathematical equation. You can compare it with cars on the road. Now we can do without red lights. We can send cars into the intersection from all directions without their having to stop for each other. This means that traffic flows much faster," explains Frank Fitzek.
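
Steinwurf's RLNC works over larger finite fields, but the core idea, sending random linear combinations of packets instead of the packets themselves and decoding with Gaussian elimination, can be sketched over GF(2), where addition is just XOR. This is a toy illustration, not the patented implementation; packets are modelled as plain integers:

```python
import random

def encode(packets, n_coded, rng=random.Random(1)):
    """Systematic network-coding sketch: send the k originals first,
    then random XOR combinations. Each coded packet carries its
    coefficient vector so any receiver can decode."""
    k = len(packets)
    coded = [([int(i == j) for j in range(k)], packets[i]) for i in range(k)]
    while len(coded) < n_coded:
        coeffs = [rng.randint(0, 1) for _ in range(k)]
        if not any(coeffs):
            continue  # skip the useless all-zero combination
        payload = 0
        for c, p in zip(coeffs, packets):
            if c:
                payload ^= p
        coded.append((coeffs, payload))
    return coded

def decode(coded, k):
    """Gaussian elimination over GF(2): recover the k source packets
    from any k linearly independent coded packets."""
    rows = [(list(c), p) for c, p in coded]
    for col in range(k):
        pivot = next((i for i in range(col, len(rows)) if rows[i][0][col]), None)
        if pivot is None:
            raise ValueError("not enough independent packets")
        rows[col], rows[pivot] = rows[pivot], rows[col]
        for i in range(len(rows)):
            if i != col and rows[i][0][col]:
                rows[i] = ([a ^ b for a, b in zip(rows[i][0], rows[col][0])],
                           rows[i][1] ^ rows[col][1])
    return [rows[i][1] for i in range(k)]
```

The point of the scheme is visible in `decode`: it does not matter *which* coded packets arrive, only that enough independent ones do, so a lost packet never has to be retransmitted.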

Network coding has a broad range of applications: the Internet of Things (IoT), 5G communication systems, software-defined networks (SDN) and content-centric networks (CCN). Beyond transport, it also has implications for distributed storage solutions.

eavesdropper would need to intercept all the packets to decode the information
The system is much safer than the current Internet protocols, because an eavesdropper would need to intercept all the packets to decode the information (Image: Frank Fitzek)

Presence in Silicon Valley

In order for this to work, however, the data must be coded and decoded with the patented technology. The professor and two of his former students from Aalborg University, developers Janus Heide and Morten Videbæk Pedersen, along with their US colleagues, founded the software company "Steinwurf." The company makes the RLNC (Random Linear Network Coding) technology available to hardware manufacturers and is in confidential negotiations to bring the improvements to consumers. As part of the effort, Steinwurf has established an office in Silicon Valley, but the company is still headquartered in Aalborg.

"I think the technology will be integrated in most products because it has some crucial and necessary functions. The only thing that can stop the development is patents. Previously, individual companies had a solid grip on patents for coding. But our approach is to make it as accessible as possible. Among other things, we are planning training courses in these technologies," says Frank Fitzek.

Source

20 July 2014

MIT adds two robotic fingers to the human hand to enhance its grasping motion

supernumerary robotic fingers
Faye Wu uses the supernumerary robotic fingers

New wrist-mounted device augments the human hand with two robotic fingers.

Twisting a screwdriver, removing a bottle cap, and peeling a banana are just a few simple tasks that are tricky to pull off single-handedly. Now a new wrist-mounted robot can provide a helping hand, or rather, helping fingers.

Researchers at MIT have developed a robot that enhances the grasping motion of the human hand. The device, worn around one’s wrist, works essentially like two extra fingers adjacent to the pinky and thumb. A novel control algorithm enables it to move in sync with the wearer’s fingers to grasp objects of various shapes and sizes. Wearing the robot, a user could use one hand to, for instance, hold the base of a bottle while twisting off its cap.

“This is a completely intuitive and natural way to move your robotic fingers,” says Harry Asada, the Ford Professor of Engineering in MIT’s Department of Mechanical Engineering. “You do not need to command the robot, but simply move your fingers naturally. Then the robotic fingers react and assist your fingers.”

Ultimately, Asada says, with some training people may come to perceive the robotic fingers as part of their body “like a tool you have been using for a long time, you feel the robot as an extension of your hand.” He hopes that the two-fingered robot may assist people with limited dexterity in performing routine household tasks, such as opening jars and lifting heavy objects. He and graduate student Faye Wu presented a paper on the robot this week at the Robotics: Science and Systems conference in Berkeley, Calif.

supernumerary robotic fingers
They could allow users to perform tasks that usually require two hands, using only one


Biomechanical synergy

The robot, which the researchers have dubbed “supernumerary robotic fingers,” consists of actuators linked together to exert forces as strong as those of human fingers during a grasping motion.

To develop an algorithm to coordinate the robotic fingers with a human hand, the researchers first looked to the physiology of hand gestures, learning that a hand’s five fingers are highly coordinated. While a hand may reach out and grab an orange in a different way than, say, a mug, just two general patterns of motion are used to grasp objects: bringing the fingers together, and twisting them inwards. A grasp of any object can be explained through a combination of these two patterns.

The researchers hypothesized that a similar “biomechanical synergy” may exist not only among the five human fingers, but also among seven. To test the hypothesis, Wu wore a glove outfitted with multiple position-recording sensors, and attached to her wrist via a light brace. She then scavenged the lab for common objects, such as a box of cookies, a soda bottle, and a football.

Wu grasped each object with her hand, then manually positioned the robotic fingers to support the object. She recorded both hand and robotic joint angles multiple times with various objects, then analyzed the data, and found that every grasp could be explained by a combination of two or three general patterns among all seven fingers.

The researchers used this information to develop a control algorithm to correlate the postures of the two robotic fingers with those of the five human fingers. Asada explains that the algorithm essentially “teaches” the robot to assume a certain posture that the human expects the robot to take.
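
The paper's actual algorithm is not given here, but one plausible minimal reading of "correlate the postures of the two robotic fingers with those of the five human fingers" is a least-squares fit of a linear synergy matrix. Everything below is hypothetical: the joint counts, angle ranges and synthetic "recordings" exist only so the sketch runs end to end:

```python
import numpy as np

# Stand-in for the recorded data: 10 human joint angles and 4 robotic
# joint angles per grasp. The synthetic recordings are generated from a
# made-up synergy matrix purely for illustration.
rng = np.random.default_rng(0)
human_postures = rng.uniform(0.0, 1.5, size=(50, 10))   # 50 recorded grasps
true_synergy = rng.uniform(-0.5, 0.5, size=(10, 4))     # unknown in reality
robot_postures = human_postures @ true_synergy

# Fit the linear synergy W minimising ||human @ W - robot||^2.
W, *_ = np.linalg.lstsq(human_postures, robot_postures, rcond=None)

def predict_robot_posture(hand_angles):
    """Map a human hand posture to the robotic fingers' joint angles."""
    return np.asarray(hand_angles) @ W
```

With such a mapping, the robot "assumes the posture the human expects" simply by continuously evaluating the fitted map on the glove's sensor readings.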


Researchers at MIT have developed a device, worn around the wrist, that enhances the grasping motion of the human hand with two robotic fingers.

Bringing robots closer to humans

For now, the robot mimics the grasping of a hand, closing in and spreading apart in response to a human’s fingers. But Wu would like to take the robot one step further, controlling not just position, but also force.

“Right now we’re looking at posture, but it’s not the whole story,” Wu says. “There are other things that make a good, stable grasp. With an object that looks small but is heavy, or is slippery, the posture would be the same, but the force would be different, so how would it adapt to that? That’s the next thing we’ll look at.”

Wu also notes that certain gestures such as grabbing an apple may differ slightly from person to person, and ultimately, a robotic aid may have to account for personal grasping preferences. To that end, she envisions developing a library of human and robotic gesture correlations. As a user works with the robot, it could learn to adapt to match his or her preferences, discarding others from the library. She likens this machine learning to that of voice-command systems, like Apple’s Siri.

“After you’ve been using it for a while, it gets used to your pronunciation so it can tune to your particular accent,” Wu says. “Long-term, our technology can be similar, where the robot can adjust and adapt to you.”

“This is breaking new ground on the question of how humans and robots interact,” says Matthew Mason, director of the Robotics Institute at Carnegie Mellon University, who was not involved in the research. “It is a novel vision, and adds to the many ways that robotics can change our perceptions of ourselves.”

Down the road, Asada says the robot may also be scaled down to a less bulky form.

“This is a prototype, but we can shrink it down to one-third its size, and make it foldable,” Asada says. “We could make this into a watch or a bracelet where the fingers pop up, and when the job is done, they come back into the watch. Wearable robots are a way to bring the robot closer to our daily life.”


Source

Salt water-powered Quant e-Sportlimousine nanoFLOWCELL car

Salt water-powered Quant e-Sportlimousine nanoFLOWCELL car
Nunzio La Vecchia presents the e-Sportlimousine to the crowd of journalists

The QUANT e-Sportlimousine with nanoFLOWCELL® drivetrain concept has been approved by TÜV Süd in Munich for use on public roads. Nunzio La Vecchia, chief technical officer at nanoFLOWCELL AG, was handed the official registration plate with number ROD-Q-2014. Following an in-depth inspection by TÜV Süd, the vehicle with its nanoFLOWCELL® is now officially approved for use on public roads in Germany and in Europe as a whole.

“This is a historic moment and a milestone not only for our company but perhaps even for the electro-mobility of the future. For the first time an automobile featuring flow-cell electric drive technology will appear on Germany's roads. Today we have put the product of 14 years’ hard development work on the road. This is a moment for us to celebrate. We are extremely proud that as a small company we have developed such visionary technology as the nanoFLOWCELL® and are now also able to put it into practice. But this is only the beginning of our journey of discovery,” is Nunzio La Vecchia’s delighted comment on this important step in the development of the company.

Currently the nanoFLOWCELL AG team and its partners are working at top speed on the homologation of the QUANT e-Sportlimousine with nanoFLOWCELL® for series production.

Salt water powered car
Nunzio La Vecchia accepts the TÜV registration

“At the car’s world premiere in Geneva a large number of investors and automobile manufacturers showed tremendous interest in the new QUANT e-Sportlimousine and the nanoFLOWCELL® drivetrain concept, together with its wide range of possible applications. Now that the automobile has been approved for use on public roads in Germany and Europe we can enter into detailed planning with our partners, adding an exciting new chapter to the future of electro-mobility,” states chief technical officer Nunzio La Vecchia, commenting on the wide interest in the innovative drive technology represented by nanoFLOWCELL AG. “The attention received by the nanoFLOWCELL® and the positive response to it has encouraged us to think about investment possibilities in the project, right up to a possible initial public offering. This would enable us to drive forward the wide range of possible applications and potential of the nanoFLOWCELL® on an international scale. Initial planning and discussions are already taking place,” Nunzio La Vecchia explains.

NanoFlowcell Quant e-Sportlimousine
The NanoFlowcell Quant e-Sportlimousine at the 2014 Geneva show

“Here in Munich in particular, where other prestigious automobile manufacturers are advertising their electric vehicle with the slogan ‘ERSTER EINER NEUEN ZEIT’ (‘FIRST IN A NEW AGE’) or ‘THE MOST PROGRESSIVE SPORTS CAR’, we are delighted as pioneers to be able to present an automobile driven by flow cell technology on public roads, and one which achieves not only fantastic performance values but also zero emissions. With a projected top speed of over 350 km/h, acceleration from 0-100 in 2.8 seconds, a torque of four times 2,900 NM, a length of over 5.25 m and a range of more than 600 km the four-seater QUANT e-Sportlimousine is not only a highly competitive sports car but also SEINER ZEIT VORAUS – UND ZWAR SCHON HEUTE (‘WELL AHEAD OF ITS TIME – IN FACT TODAY’),” is how technical manager Nunzio La Vecchia assesses the future possibilities of the QUANT e-Sportlimousine, with a reference to other cars produced by German automobile manufacturers in Bavaria.

Quant e-Sportlimousine make it to production, it will be a green supercar
Should the Quant e-Sportlimousine make it to production, it will be a green supercar in the mold of the Porsche 918 Spyder

“Approval of the QUANT e-Sportlimousine with the nanoFLOWCELL® drivetrain concept is a vital step forward for nanoFLOWCELL AG. What began as the vision of Nunzio La Vecchia has now become reality. The fact that only four months after the car's world premiere in Geneva we have received approval for the use of the QUANT e-Sportlimousine with nanoFLOWCELL® drivetrain concept on the road in Germany and in Europe indicates the dynamism with which our entire team is working on this project. And we are very much looking forward to the next stages of this exciting and promising journey,” states Prof. Jens-Peter Ellermann, chairman of the board of directors at nanoFLOWCELL AG. “We've got major plans, and not just within the automobile industry. The potential of the nanoFLOWCELL® is much greater, especially in terms of domestic energy supplies as well as in maritime, rail and aviation technology. The nanoFLOWCELL® offers a wide range of applications as a sustainable, low cost and environmentally-friendly source of energy,” is how Prof. Jens Ellermann describes the diversity of potential uses for the nanoFLOWCELL®.


Source

Drone that locates survivors through their mobile phones

Computer science student Jonathan Cheseaux has developed a system for locating a person via his or her mobile phone using a drone. The device could be used to find victims of natural disasters.

Drone that locates survivors through their mobile phones
Jonathan Cheseaux with the drone that uses a Wi-Fi antenna to locate mobile phones (Photo: Alain Herzog)

A drone makes large circles in the sky. With two powerful antennas, it sniffs the data packets emitted by mobile phones. On the ground, an interface developed specifically for this project makes it possible to track the flight of a small robotic aircraft in real time from a computer. Colored dots visible on the screen map indicate the spotted phones. The vehicle tightens its flight around the selected device to indicate its position. “In the best tests we have performed, the place indicated was within 10 meters,” says Jonathan Cheseaux who worked on this project for his master’s degree.

Following an earthquake or another natural disaster, it is often difficult to know the position of victims under the rubble. At a time when most people, even in poor countries, have a mobile phone, the team at the Mobile Communications Laboratory had the idea of using phones to determine the position of victims and thereby facilitate a search. When WiFi mode is activated, a device emits data packets at regular intervals, making it possible to measure various parameters, including the power received at the antenna. That power can vary depending on the surrounding terrain, the weather or interference. It also grows weaker as the layer of rubble over a person gets thicker, another important factor.

Drone locates survivors by their mobile phones
The EPFL students also developed an interface that allows those on the ground to track the aircraft in real time (Photo: Alain Herzog)

These signals do not translate directly into a distance in meters that would reveal the position of the device. Instead, the drone records the GPS coordinates of the points where signals are captured. Each of these points is treated as the center of a circle on which the phone could lie, and the intersection of the circles determines the location of the phone and, therefore, probably the person. “By refining the system to automatically eliminate weaker signals, the system has become even more accurate,” explains the master’s student. “Flight tests have located a cell phone on campus with high accuracy.”
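
The circle-intersection approach described here corresponds to classic multilateration. Below is a minimal sketch: a hypothetical log-distance path-loss model turns a signal strength into a radius, and the circle equations are linearised and solved by least squares. The path-loss constants are placeholders, not values from the project:

```python
import numpy as np

def path_loss_distance(rssi_dbm, rssi_1m=-40.0, n=2.5):
    """Log-distance path-loss model (hypothetical constants): convert a
    received power in dBm into an estimated distance in metres."""
    return 10 ** ((rssi_1m - rssi_dbm) / (10 * n))

def multilaterate(points, dists):
    """Least-squares intersection of circles centred at `points` with
    radii `dists`: subtract the last circle's equation from the others
    to obtain a linear system in the unknown position."""
    points = np.asarray(points, float)
    dists = np.asarray(dists, float)
    A = 2 * (points[:-1] - points[-1])
    b = (dists[-1] ** 2 - dists[:-1] ** 2
         + np.sum(points[:-1] ** 2, axis=1) - np.sum(points[-1] ** 2))
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos
```

Discarding the weakest measurements before solving, as Cheseaux describes, simply removes the noisiest circles from the system.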

“The drone’s WiFi antenna could be replaced by Avalanche Victim Detectors (DVA) which would enable the rapid and inexpensive deployment of the first avalanche searches,” predicts the student, who is also an amateur mountaineer.

In the second part of his work, Jonathan Cheseaux tackled the challenge of connecting to a device's WiFi network through the drone without human intervention. An antenna on the aircraft allows it to guess the identity of the router the phone normally connects to and then impersonate it, thereby establishing communication. “In the case of the natural disasters mentioned above, it would provide a substitute network when connections have been destroyed,” he says. For now, however, this system only works if the network is open and not password protected. The work has also highlighted issues of confidentiality and the protection of private data. Recovering the names of the WiFi networks (SSIDs) registered by a smartphone can reveal the habits of its owner: the list might include “EPFL,” “Fitness,” “Café-so,” “House,” et cetera. The MAC address, a unique identifier, can also be recovered by the drone. It reveals the device's manufacturer and can be used to build statistics on the distribution of smartphone, router and printer brands.
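
Identifying a manufacturer from a MAC address comes down to matching the address's first three octets, the Organizationally Unique Identifier (OUI), against the IEEE registry. A minimal sketch with a deliberately fictional vendor table:

```python
def oui(mac):
    """Return the OUI, the first three octets of a MAC address, normalised."""
    return mac.upper().replace("-", ":")[:8]

# Placeholder table: a real tool would load the full IEEE OUI registry,
# which maps each registered prefix to its manufacturer.
VENDORS = {"AA:BB:CC": "ExampleCorp"}

def vendor(mac, table=VENDORS):
    """Best-effort manufacturer lookup for a captured MAC address."""
    return table.get(oui(mac), "unknown")
```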



Source

Jaguar Land Rover self-learning smart car

Jaguar Land Rover self-learning smart car
The Smart Assistant learns from you to anticipate your needs while driving

  • The intelligent car will have its own on-board 'Smart Assistant' to carry out a host of functions to allow the driver to concentrate on driving
  • The ground-breaking system will minimise driver distraction to reduce the potential for accidents - with an eventual goal of zero accidents
  • New state-of-the-art software recognises the driver and learns their preferences. It can then predict their routine and changing preferences based on variables such as the weather and their schedule for the day
  • The 'Smart Assistant' will check your calendar in advance and remind you to take your child's sports kit to sports day
  • The self-learning car will learn an individual's driving style and apply it when Auto Adaptive Cruise Control (AACC) is engaged

Cutting-edge technology is being pioneered by researchers at Jaguar Land Rover to develop a truly intelligent self-learning vehicle that will offer a completely personalised driving experience and help prevent accidents by reducing driver distraction.

Using the latest machine learning and artificial intelligence techniques, Jaguar Land Rover's self-learning car will offer a comprehensive array of services to the driver, courtesy of a new learning algorithm that recognises who is in the car and learns their preferences and driving style. The software then applies this learning by using a range of variables including your calendar, the time of day, traffic conditions and the weather to predict driver behaviour and take over many of the daily driving 'chores', allowing the driver to concentrate on the road ahead.

Dr Wolfgang Epple, Director of Research and Technology for Jaguar Land Rover, said:

"The aim of our self-learning technology is to minimise driver distraction, which will help reduce the risk of accidents. Presenting the driver with information just at the right time whilst driving will reduce both cognitive distraction and the need for the driver to look away from the road to scroll through phone lists, or adjust mirrors, temperature or seat functions while on the road.

"Up until now most self-learning car research has only focused on traffic or navigation prediction. We want to take this a significant step further and our new learning algorithm means information learnt about you will deliver a completely personalised driving experience and enhance driving pleasure."

The intelligent car will recognise the driver by the smartphone or other device in their pocket and by the time the driver has opened the car door, the mirrors, steering wheel and seat settings will all be set to the individual's preferences. The cabin will be pre-set to the desired temperature - and be intelligent enough to change it if it is snowing or raining.

Through the 'Smart Assistant', the car will also review your schedule for the day and intelligently pre-set the navigation depending on traffic conditions to avoid congestion. It will also predict your next destination based on your schedule.

The self-learning car will also know if you are going to the gym, and will have learnt that you prefer a certain temperature on the way there to warm-up, and a different temperature to cool down on your way home. If you always use the massage function at a particular time or location on a journey, the car will be able to predict this as well.

Jaguar Land Rover self-learning intelligent car
Smart Assistant technology gets the car ready when it detects the driver

If you are taking the children to school, the car will recognise every passenger and offer each their own preferred infotainment options - and the 'Smart Assistant' will review your calendar and remind you before you leave the house - by sending a note to your smartphone - to collect your children's sport kit as it knows you are going to their sports day.

If you usually make a phone call at a certain time or on a particular journey, the car will predict this and will offer to make the call. If you are going to be late for your next appointment, the car will offer to email or call ahead with minimal or no interaction from the driver.

The self-learning car will also be able to learn an individual's driving style in a range of traffic conditions and on different types of road. When the driver activates Auto Adaptive Cruise Control (AACC) the car will be able to apply these learned distance settings and acceleration profiles to automated cruise control.

"By developing a learning function for Adaptive Cruise Control, it is technology concepts like the self-learning car that will ensure any future intelligent car remains fun and rewarding to drive as we move closer to more autonomous driving over the next 10 years," added Dr Epple. "This is important because in the future customers will still want an emotional connection and a thrilling drive - with the ability to drive autonomously when required."

The personalised experience would also not be limited to the car owned by the driver. If you hire an intelligent Jaguar or Land Rover in the future, the car will recognise the driver and passengers and offer them the same preferences learned by their vehicle at home.

Some of the features included in the Self-Learning Car concept:

  • Vehicle Personalisation - climate, seat, steering wheel, mirrors and infotainment settings.
  • Destination Prediction - automatic destination entry to navigation system based on historical usage.
  • Fuel Assist - suggests fuel stations offering the driver's preferred brand and location, based on historical usage. For long journeys, the car will check the day before you travel whether you have enough fuel.
  • Predictive Phone Call - predicts who you are likely to call in a certain situation.
  • Passenger Awareness - will activate passenger preferred infotainment settings and personal climate zones.
  • Intelligent Notifications - based on traffic situation, the car can alert people that you will be late or provide relevant contextual updates such as flight delays on your drive to the airport.
  • Auto Adaptive Cruise Control (AACC) - when AACC is activated, the car applies the distance setting and acceleration profile it has learned when the driver is driving the vehicle. 
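
Jaguar Land Rover has not published how Destination Prediction works, but one minimal way to realise "automatic destination entry based on historical usage" is to bucket past trips by weekday and hour and suggest the most frequent destination for the current bucket. A toy sketch, with all data hypothetical:

```python
from collections import Counter, defaultdict

class DestinationPredictor:
    """Toy history-based destination prediction: count past trips per
    (weekday, hour) bucket and suggest the most frequent one. This only
    illustrates the idea, not JLR's actual algorithm."""

    def __init__(self):
        self.history = defaultdict(Counter)

    def record_trip(self, when, destination):
        """Log a completed trip with its departure time."""
        self.history[(when.weekday(), when.hour)][destination] += 1

    def predict(self, when):
        """Return the most likely destination for this time, or None."""
        bucket = self.history[(when.weekday(), when.hour)]
        if not bucket:
            return None
        return bucket.most_common(1)[0][0]
```

A production system would layer on traffic data, calendar entries and decay of stale habits, but the counting core is the same.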


Source

17 July 2014

Tegra K1 GPU posts some beastly benchmarks, but can Nvidia get devs to actually use it?

Tegra K1

The first benchmarks on Nvidia’s quad-core Tegra K1 have begun to appear, and the performance profile is looking excellent. When Nvidia announced the K1 at CES this year, the company’s CEO, Jen-Hsun Huang, declared that the chip would ship in two variants: a dual-core 64-bit version based on Nvidia’s Project Denver, and a quad-core Cortex-A15 version that combined the Tegra 4’s CPU with an updated programmable GPU. Now, new benchmarks of production K1 hardware are starting to pop up, and the new device looks downright formidable.

The boutique manufacturer Digital Storm got its hands on a Xiaomi Mi Pad, which is powered by the quad-core, 32-bit K1 variant, and took it for a spin against a handful of phones, the iPad Mini with Retina display, and Nvidia’s own Shield (powered by Tegra 4). The Shield is the platform to watch here: it’s the only system with active cooling, and it’s clocked more aggressively as a result.

Xiaomi Mi Pad
Xiaomi Mi Pad Powered by Tegra K1

In CPU-specific tests, the Tegra K1 is modestly faster than its predecessor thanks to a slightly newer revision of its CPU (the Cortex-A15 r3). In graphics tests, K1 absolutely blows the doors off Tegra 4 (and everything else).

Tegra K1 benchmarks
Image courtesy of Digital Storm

This kind of performance lead is evident in every test, but it’s not particularly surprising when you consider the graphics advance between Tegra 4 and K1. Tegra 4 was essentially a power-optimized circa-2004 GPU: it had some improvements and additional capabilities, but the core architecture dated back to the PS3. While it’s easy to snort at some of Nvidia’s more outlandish comparisons to full-size hardware, the company’s graphics core really has just leapt from a 2004-era solution to a 2012 design. Those eight years buy a lot of advances, even in a 5W TDP envelope.

Tegra K1: Nvidia’s mobile ambitions are at a crossroads

It’s no secret that Nvidia has pivoted away from phones and towards other devices; Jen-Hsun Huang told CNET as much in an interview earlier this year, where he said Nvidia had little interest in commodity phones or even mainstream devices. Instead, he wants to pioneer new experiences in visual computing and focus on the lucrative automotive market. He described Tegra 4 and Tegra 4i as “learning experiences,” admitted they didn’t pan out, and stated that “our strategy is to focus on performance-oriented, visual computing-oriented, gaming-oriented devices where we can add a lot of value.”

Unreal Engine 4 running on Tegra K1
Unreal Engine 4 running on Tegra K1

Adding that value is going to mean finding a degree of gaming success for Tegra K1 that has generally eluded Nvidia to date. This has less to do with intrinsic hardware capabilities, and more to do with the difficulty of establishing a strong Android-centric gaming market. It’s a difficult challenge: Nvidia has previously experimented with sponsoring titles on the Tegra platform, but it’s not clear if it will push ahead with that model once K1 hardware is shipping.

Tegra K1 SDK resources

Nvidia, to its credit, is aware of the problem: much of the whitepaper on Tegra K1 is actually devoted to discussing developer resources and opportunities. Google’s Android L demonstration was anchored by Tegra K1 hardware (64-bit this time), and the company has talked up its cooperation with Epic and Unreal Engine 4 on multiple occasions.

I don’t think it’s an exaggeration to say that this is sink-or-swim time for Nvidia’s mobile division. Pivoting to focus on high-end gaming and the tablet market is a valid move, but only if the company can hack out a new market for itself. The automotive space is an interesting market and possibly a good long-term play, but all of Nvidia’s current automotive wins are shipping Tegra 2-class hardware. The company has already done much of the R&D it would need to pursue follow-ups, but the low volumes of the car market won’t sustain the major R&D push necessary to keep up with the consumer tablet space, even given much higher margins.



Courtesy ExtremeTech

Meet Jibo, the world’s first family robot built by MIT’s social robotics master

Jibo robot

Cynthia Breazeal, an MIT professor and one of the pioneers of social robotics, has unveiled “the world’s first family robot.” Called Jibo, the all-white desktop-sitting robot has more than a passing resemblance to a certain robot from a recent animated Pixar movie. The robot, which will cost around $500 when it’s released, will have a range of abilities that will hopefully make it the perfect companion to have around the house: telling stories to kids, automatically taking photos when you pose, easy messaging and video calling, providing reminders for calendar entries, and companionship through emotional interaction.

Jibo is about 11 inches (28cm) tall, with a 6-inch base. He (yes, it’s a he) weighs around six pounds (2.7kg) and is mostly made of aluminium and white plastic. Jibo’s face mainly consists of a 5.7-inch 1920×1080 touchscreen, but there are a couple of stereo cameras, stereo speakers, and stereo microphones hidden away in there too. Jibo’s body is separated into three regions, all of which can be motor-driven through 360 degrees, and it’s all fully touch-sensitive, too, so you can interact by patting him on the head, poking his belly, etc.


While its hardware is pretty impressive for $500, a companion robot is nothing without some really, really good software. Fortunately, it sounds like Jibo will deliver on that front as well. Jibo will recognize and track the faces of family members; allow for natural language input from anywhere in the room; and proactively offer help when it recognizes you’re doing a task that it can assist with (e.g. cooking). Judging by the video, Jibo has some pretty nice speech synthesis software, too.

Perhaps most importantly, though, Jibo’s operating system (Linux-based) is being built from the ground up to be extensible with apps. Jibo will ship with a number of default apps called “skills” but there’s also an SDK that will allow developers to create (and sell) their own apps/skills to extend the robot’s functionality. For example, out of the box, Jibo will be able to tell bedtime stories to kids but you might then download a third-party app that gives Jibo the additional ability to help kids with their homework.
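The SDK itself hadn’t been released at the time of writing, so none of the names below come from Jibo; this is just a minimal, hypothetical sketch of how a plugin-style skill system like the one described might be structured:

```python
# Hypothetical sketch of an extensible "skill" system like the one Jibo's
# SDK is described as enabling. All class and method names are invented.

class Skill:
    """Base class: each skill declares the phrases that trigger it."""
    triggers = ()

    def run(self, request):
        raise NotImplementedError


class SkillRegistry:
    """Holds installed skills and routes a spoken request to one of them."""

    def __init__(self):
        self._skills = []

    def install(self, skill):
        self._skills.append(skill)

    def dispatch(self, request):
        # First skill whose trigger phrase appears in the request wins.
        for skill in self._skills:
            if any(t in request.lower() for t in skill.triggers):
                return skill.run(request)
        return "Sorry, I don't know how to do that yet."


class BedtimeStorySkill(Skill):
    """A built-in skill; a third-party homework helper would plug in the same way."""
    triggers = ("bedtime story",)

    def run(self, request):
        return "Once upon a time..."


registry = SkillRegistry()
registry.install(BedtimeStorySkill())
print(registry.dispatch("Jibo, tell me a bedtime story"))  # "Once upon a time..."
```

The point of the design is that the homework-helper example in the paragraph above is just another `Skill` subclass installed into the same registry, with no changes to the core robot software.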

Jibo prototype, brain surgery
One wonders if Jibo is programmed to say things like “ow, you’re hurting me” when you cut open his head.

Now the big caveat: Jibo is currently just a prototype, and Breazeal and her team are using Indiegogo to raise funds and interest for a commercial release in late 2015/early 2016. It costs $500 for a consumer version, or $600 for a developer Jibo with the SDK. Early reports from a handful of tech sites say that Jibo currently only performs a few predefined actions (but apparently his movements are pretty slick). Considering the all-star team that Breazeal has enlisted to help her commercialize her extensive knowledge of social robotics, and the fact that Jibo has already raised $70,000 of its $100,000 funding goal in the time it took me to write this story, I am pretty confident that the “world’s first family robot” will actually come to market.


Do you actually want a family robot?

As with any shiny new gizmo, it’s all too easy to be distracted by hardware and software technicalities. Just for a moment, push aside the specs and consider a much more pertinent question: Do you actually want or need a family robot?

This is a complex question. I think most people would love to have a “guardian angel” robot that magically turns up when you really need it (say, when you fall down some stairs or misplace your keys). I’m not so sure people want a glorified, static Furby that is only questionably useful for a limited set of tasks.

As always with social robots, I think it will come down to Jibo’s usefulness and how it actually feels to use and interact with him. It is very, very hard to craft an artificial intelligence that is enjoyable to interact with. But who knows; hopefully Breazeal has cracked it. Otherwise, Jibo will be just like every other expensive robot that you’ve purchased over the last decade or so: kind of cool for a few days or weeks, but ultimately a dust-collecting bookend.

ExtremeTech

15 July 2014

Phase-changing material could allow robots to switch between hard and soft states

Squishy robots


Two 3D-printed soft, flexible scaffolds: The one on the left is maintained in a rigid, bent position via a cooled, rigid wax coating, while the one on the right is uncoated and remains compliant (here, it collapses under a wrench).

In the movie “Terminator 2,” the shape-shifting T-1000 robot morphs into a liquid state to squeeze through tight spaces or to repair itself when harmed.

Now a phase-changing material built from wax and foam, and capable of switching between hard and soft states, could allow even low-cost robots to perform the same feat.

The material, developed by Anette Hosoi, a professor of mechanical engineering and applied mathematics at MIT, and her former graduate student Nadia Cheng, alongside researchers at the Max Planck Institute for Dynamics and Self-Organization and Stony Brook University, could be used to build deformable surgical robots. The robots could move through the body to reach a particular point without damaging any of the organs or vessels along the way.

Robots built from the material, which is described in a new paper in the journal Macromolecular Materials and Engineering, could also be used in search-and-rescue operations to squeeze through rubble looking for survivors, Hosoi says.

Follow that octopus

Working with robotics company Boston Dynamics, based in Waltham, Mass., the researchers began developing the material as part of the Chemical Robots program of the Defense Advanced Research Projects Agency (DARPA). The agency was interested in “squishy” robots capable of squeezing through tight spaces and then expanding again to move around a given area, Hosoi says, much as octopuses do.

But if a robot is going to perform meaningful tasks, it needs to be able to exert a reasonable amount of force on its surroundings, she says. “You can’t just create a bowl of Jell-O, because if the Jell-O has to manipulate an object, it would simply deform without applying significant pressure to the thing it was trying to move.”

What’s more, controlling a very soft structure is extremely difficult: It is much harder to predict how the material will move, and what shapes it will form, than it is with a rigid robot.

So the researchers decided that the only way to build a deformable robot would be to develop a material that can switch between a soft and hard state, Hosoi says. “If you’re trying to squeeze under a door, for example, you should opt for a soft state, but if you want to pick up a hammer or open a window, you need at least part of the machine to be rigid,” she says.


A new phase-changing material built from wax and foam developed by researchers at MIT is capable of switching between hard and soft states. Video: Melanie Gonick/MIT

Compressible and self-healing

To build a material capable of shifting between squishy and rigid states, the researchers coated a foam structure in wax. They chose foam because it can be squeezed into a small fraction of its normal size, but once released will bounce back to its original shape.

The wax coating, meanwhile, can change from a hard outer shell to a soft, pliable surface with moderate heating. This could be done by running a wire along each of the coated foam struts and then applying a current to heat up and melt the surrounding wax. Turning off the current again would allow the material to cool down and return to its rigid state.
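The soften/stiffen cycle described above amounts to simple on/off control of the heating wire around the wax’s melting point. As a purely illustrative sketch, not anything from the paper (the temperature threshold, function names, and simulated hardware are all invented):

```python
# Minimal sketch of the heat-to-soften, cool-to-stiffen cycle: run current
# through a strut's heating wire until the wax coating passes its melting
# point, then cut power and let it cool back into a rigid shell.
# The sensor/actuator callables are stand-ins for real hardware I/O.

WAX_MELT_C = 60.0  # assumed melting point; depends on the wax actually used

def soften(read_temp, set_current):
    """Heat until the coating is molten, i.e. the strut is compliant."""
    set_current(True)
    while read_temp() < WAX_MELT_C:
        pass  # keep heating
    set_current(False)  # molten: stop heating, strut is now soft

def stiffen(read_temp):
    """Wait for the wax to cool and re-solidify into a rigid shell."""
    while read_temp() >= WAX_MELT_C:
        pass  # passive cooling

# Simulated strut for demonstration: temperature rises 1 C per reading
# while current is on, and falls 1 C per reading while it is off.
class FakeStrut:
    def __init__(self):
        self.temp = 25.0
        self.heating = False
    def read_temp(self):
        self.temp += 1.0 if self.heating else -1.0
        return self.temp
    def set_current(self, on):
        self.heating = on

strut = FakeStrut()
soften(strut.read_temp, strut.set_current)
print(f"after soften: {strut.temp:.0f} C")   # at the melting point, compliant
stiffen(strut.read_temp)
print(f"after stiffen: {strut.temp:.0f} C")  # back below it, rigid again
```

A real controller would add hysteresis and duty-cycle the current rather than busy-wait, but the state machine is the same: one threshold, two states.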

In addition to switching the material to its soft state, heating the wax in this way would also repair any damage sustained, Hosoi says. “This material is self-healing,” she says. “So if you push it too far and fracture the coating, you can heat it and then cool it, and the structure returns to its original configuration.”

To build the material, the researchers simply placed the polyurethane foam in a bath of melted wax. They then squeezed the foam to encourage it to soak up the wax, Cheng says. “A lot of materials innovation can be very expensive, but in this case you could just buy really low-cost polyurethane foam and some wax from a craft store,” she says.

In order to study the properties of the material in more detail, they then used a 3-D printer to build a second version of the foam lattice structure, to allow them to carefully control the position of each of the struts and pores.

When they tested the two materials, they found that the printed lattice was more amenable to analysis than the polyurethane foam, although the latter would still be fine for low-cost applications, Hosoi says.

The wax coating could also be replaced by a stronger material, such as solder, she adds.

Hosoi is now investigating the use of other unconventional materials for robotics, such as magnetorheological and electrorheological fluids. These materials consist of a liquid with particles suspended inside, and can be made to switch from a soft to a rigid state with the application of a magnetic or electric field.

When it comes to artificial muscles for soft and biologically inspired robots, we tend to think of controlling shape through bending or contraction, says Carmel Majidi, an assistant professor of mechanical engineering in the Robotics Institute at Carnegie Mellon University, who was not involved in the research. “But for a lot of robotics tasks, reversibly tuning the mechanical rigidity of a joint can be just as important,” he says. “This work is a great demonstration of how thermally controlled rigidity-tuning could potentially be used in soft robotics.”

Source

13 July 2014

Sand Could Improve Battery Performance Threefold

Researchers develop a low-cost, environmentally friendly way to produce sand-based lithium-ion batteries that outperform the current standard by three times

sand lithium ion battery
Researchers have developed a lithium ion battery made of sand that outperforms the current standard by three times.

Researchers at the University of California, Riverside’s Bourns College of Engineering have created a lithium ion battery that outperforms the current industry standard by three times. The key material: sand. Yes, sand.

“This is the holy grail: a low-cost, non-toxic, environmentally friendly way to produce high-performance lithium-ion battery anodes,” said Zachary Favors, a graduate student working with Cengiz and Mihri Ozkan, both engineering professors at UC Riverside.

The idea came to Favors six months ago. He was relaxing on the beach after surfing in San Clemente, Calif., when he picked up some sand, took a close look at it, and saw that it was made up primarily of quartz, or silicon dioxide.

His research is centered on building better lithium-ion batteries, primarily for personal electronics and electric vehicles. He is focused on the anode, or negative side of the battery. Graphite is the current standard material for the anode, but as electronics have become more powerful, graphite’s potential for improvement has been virtually tapped out.

Researchers are now focused on using silicon at the nanoscale (billionths of a meter) as a replacement for graphite. The problem with nanoscale silicon is that it degrades quickly and is hard to produce in large quantities.

Favors set out to solve both these problems. He researched sand deposits to find a spot in the United States with a high percentage of quartz. That took him to the Cedar Creek Reservoir, east of Dallas, where he grew up.

sand to pure nano-silicon
From left, (b) unpurified sand, (c) purified sand, and (d) vials of unpurified sand, purified sand, and nano silicon.

Sand in hand, he came back to the lab at UC Riverside and milled it down to the nanometer scale, followed by a series of purification steps changing its color from brown to bright white, similar in color and texture to powdered sugar.

After that, he ground salt and magnesium, both very common elements found dissolved in seawater, into the purified quartz. The resulting powder was then heated. With the salt acting as a heat absorber, the magnesium worked to remove the oxygen from the quartz, resulting in pure silicon.
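The reaction being described is a magnesiothermic reduction, SiO2 + 2 Mg → Si + 2 MgO, with the salt serving only as a heat sink. A quick back-of-the-envelope stoichiometry check (the 10 g batch size is arbitrary, and complete reaction is assumed):

```python
# Back-of-the-envelope stoichiometry for the magnesiothermic reduction
# described above: SiO2 + 2 Mg -> Si + 2 MgO. The salt in the mix acts
# only as a heat absorber and doesn't appear in the reaction.

M_SIO2 = 60.08  # g/mol, quartz (silicon dioxide)
M_MG = 24.31    # g/mol, magnesium
M_SI = 28.09    # g/mol, silicon

def reduce_quartz(grams_sio2):
    """Return (grams of Mg consumed, grams of Si produced) for a given
    mass of quartz, assuming the reaction runs to completion."""
    mol = grams_sio2 / M_SIO2          # moles of SiO2
    return 2 * mol * M_MG, mol * M_SI  # 2 mol Mg per mol SiO2; 1 mol Si out

mg_needed, si_out = reduce_quartz(10.0)
print(f"10 g quartz needs {mg_needed:.1f} g Mg and yields {si_out:.1f} g Si")
# -> 10 g quartz needs 8.1 g Mg and yields 4.7 g Si
```

In other words, a little under half the starting quartz mass comes out as silicon, which is why the purity of the sand feedstock matters so much.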

The Ozkan team was pleased with how the process went, and they also encountered a positive surprise: the pure nano-silicon formed with a very porous, sponge-like 3-D consistency. That porosity has proved to be the key to improving the performance of the batteries built with the nano-silicon.

sand into pure nano-silicon
Schematic showing how sand is turned into pure nano-silicon.

The improved performance could mean increasing the expected lifespan of silicon-based electric vehicle batteries up to three times or more, which would be significant for consumers, considering replacement batteries cost thousands of dollars. The energy density is more than three times higher than that of traditional graphite-based anodes, which means cell phones and tablets could last three times longer between charges.

The findings were just published in a paper, “Scalable Synthesis of Nano-Silicon from Beach Sand for Long Cycle Life Li-ion Batteries,” in the journal Scientific Reports. In addition to Favors and the Ozkans, the authors were Wei Wang, Hamed Hosseini Bay, Zafer Mutlu, Kazi Ahmed and Chueh Liu, all graduate students working in the Ozkans’ labs.

Now, the Ozkan team is trying to produce larger quantities of the nano-silicon from beach sand and is planning to move from coin-size batteries to the pouch-size batteries used in cell phones.



University of California, Riverside

Copyright © 2014 Tracktec. All rights reserved.
