Channel: 2045 Initiative

Japanese robots earn their keep


It was not quite as memorable as “That’s one small step for man, one giant leap for mankind,” as uttered by Neil Armstrong in 1969. But the matter-of-fact statement was thrilling for the creators of a new breed of astronaut sent into space this summer: “On August 21, 2013, a robot took one small step toward a brighter future for all.”

Kirobo – a combination of the Japanese words for “hope” and “robot” – is a machine with a difference: not only does it provide technical assistance, it is also designed to provide companionship to human astronauts who spend months working in space.

“I believe robots will be the next smartphone just like Google believes Google Glass will be the next smartphone,” says Kirobo’s creator, Tomotaka Takahashi, referring to the efforts made to enable the machine to hold basic conversations.

Kirobo was joined last month by Koichi Wakata, a Japanese astronaut; the robot had travelled ahead to the International Space Station on an unmanned rocket that blasted off from a Japanese island earlier this year.

Mr Takahashi, a robotics engineer at the University of Tokyo, was inspired by the manga character Astro Boy, a classic cartoon robot, and worked on the project in collaboration with Toyota and Dentsu, an advertising firm.

Kirobo has been programmed to communicate in Japanese and recognise voices and faces. He is capable of holding a conversation and improvising basic responses. On the more practical side, he is there to act as an observer and recorder, and can verbally relay instructions sent from Earth to Mr Wakata. But despite the attention the project received and the high level of Japanese robotics expertise, other manufacturers are unlikely to follow suit, say those working in the industry.

“They don’t recognise space robotics as a big area for the business,” says Hiroki Kato, an engineer at the Japan Aerospace Exploration Agency (Jaxa). He says projects such as Kirobo make great PR – the little space companion generated headlines worldwide – but are not the basis for a sustained commercial programme unless they can earn their keep by performing a wide range of tasks.

Jaxa has created prototype robots and sent them into orbit on several missions for tasks such as refuelling and maintenance, but it is not currently working on a humanoid robot, although it has not ruled out that prospect. The US regards itself as leading the field.

“Japan has significant experience with its advanced robotics but only Robonaut [the first humanoid robot in space] has worked side by side with astronauts performing tasks that currently only humans perform,” says Ron Diftler, manager of the Robonaut project at Nasa.

Japanese roboticists have sometimes come under fire for making machines that entertain rather than pursuing serious engineering projects.

Honda’s Asimo robot, the “world’s most advanced humanoid robot” according to the carmaker, can walk like a human and is the size of a small adult. But it has usually made headlines for conducting symphony orchestras, performing dances and greeting royalty.

The March 2011 tsunami and subsequent Fukushima nuclear disaster were a wake-up call. The event ought to have been an opportunity for roboticists to show what their technology could do, given the dangers for humans during the clean-up operation at the stricken power plant.

But few robots were capable of responding to the nuclear accident, a situation that has since encouraged many to focus on developing robots that can perform rescue operations.

After Fukushima, space is no longer regarded as a priority for Japanese roboticists – but some believe robot astronauts will eventually come into their own.

Technology used in Honda’s Asimo and other robots can easily be applied to machines created with space missions in mind, says Mr Kato. But for those that decide to go down this path, profits may not materialise for 50 years, he predicts.

“Interest in space robots is growing more quickly now than 10 years ago,” says Mr Diftler. “The space station is a very busy place and having an extra set of hands, in this case robot hands that can handle maintenance tasks, frees up the crew for more science.”

Mr Takahashi argues the good publicity is enough at this stage. “A lot of people are interested in this project, and that’s the important thing.”


Robot Designed to Guide Insertion of Needles, Catheters


This robot has a hybrid powertrain structure that allows precise, repeatable, planned and controlled insertions, improving on the procedures used so far. It is also cheaper and requires only a brief learning period. The design, which has been patented, is the result of a collaboration between researchers at the School of Industrial Engineering of the Universidad Politécnica de Madrid (UPM) and the company Gliatech S.L.

Such guided procedures, in which needles, catheters and stents are inserted, are used to take samples of fluids and tissues for biopsies and diagnosis. The robot can make precise, controlled insertions, deposit markers and administer drugs at specific sites. All of this is made possible by virtual planning based on the analysis of medical images previously captured with a scanner.

The development of robots for surgical use responds to the need to make surgical procedures more effective. Technologies based on automation and robotics can improve effectiveness because they integrate diverse sources of information, such as medical images and their processing, and can perform complex tasks in real time.

The patented robotic device has a hybrid powertrain structure with six degrees of freedom, combining the features of serial and parallel mechanisms, and its structure is specially designed for guiding and insertion tasks. In addition, a system of lasers and inertial units allows doctors to calibrate and correct the position and orientation of the surgical instruments.

These surgical instruments are attached to the distal end of the mechanism by using a coupling device that can be uncoupled depending on whether the insertion task is done by an automated robot (active) or by an expert manually guided by the robot (passive).
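The calibration step described above amounts to comparing the instrument's measured pose against the planned one and computing a correction to apply. A minimal sketch of that idea, with a translation plus a single rotation angle standing in for the full six degrees of freedom (all names and numbers here are illustrative, not from the UPM device):

```python
import math

def pose_error(planned, measured):
    """Return (dx, dy, dz, dtheta): the correction that moves the
    measured pose onto the planned one. Poses are (x, y, z, theta)."""
    dx = planned[0] - measured[0]
    dy = planned[1] - measured[1]
    dz = planned[2] - measured[2]
    # smallest signed angular difference, wrapped into (-pi, pi]
    dtheta = math.atan2(math.sin(planned[3] - measured[3]),
                        math.cos(planned[3] - measured[3]))
    return dx, dy, dz, dtheta
```

A real system would express this as a 6-DOF rigid transform and feed the correction back to the actuators continuously, but the comparison-then-correct loop is the same.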

Both GE and Rolls Royce Are To Use 3D Printing To Make Jet Engines And Violate Engineering's Prime Commandment


There is an old and important saying in engineering: faster, better, cheaper. The point is that you can only ever have two of the three. But in this pair of tales about how both GE and Rolls-Royce are to use 3D printing to produce their respective jet engines, we have an interesting violation of that basic engineering commandment.

Here’s the GE story:

General Electric (GE), on the hunt for ways to build more than 85,000 fuel nozzles for its new Leap jet engines, is making a big investment in 3D printing. Usually the nozzles are assembled from 20 different parts. 3D printing, also known as additive manufacturing, can create the units as one metal piece through a successive layering of materials. The process is more efficient and can be used to create designs that can’t be made using traditional techniques, GE says. The finished product is stronger and lighter than those made on the assembly line and can withstand the extreme temperatures (up to 2,400F) inside an engine.

This is 3D printing using metals of course, not the plastics that most of the home and small business printers are currently using. But do note that they are claiming that the new process is both more efficient (that is, cheaper) and also better, in that they can create more complex parts this way. And then there’s the Rolls Royce side of the story:

Rolls-Royce is looking to use 3D printers to make lighter components for its aircraft engines, the company’s head of technology strategy has said.

Henner Wapenhans said the new technology could allow the manufacturer to produce parts more quickly, slashing lead times, the Financial Times reported.

“3D printing opens up new possibilities, new design space,” Dr Wapenhans said. “Through the 3D printing process, you’re not constrained [by] having to get a tool in to create a shape. You can create any shape you like.

The point here being that they can now do things faster. For, in order to make these metal parts in the traditional manner you need first to have the tool made, that is, the form by which you will make the part. And that process can take 18 months to go through all of the necessary iterations.

Putting the two stories together we can thus see that 3D printing is going to allow faster, better and cheaper: a direct violation of that basic engineering commandment. But no, this isn’t a miracle, nor even a refutation of the rule. For what is left unsaid in the rule that you can only have two of the three is “using current technologies”. We are limited to gaining only two of the three things we desire when we use traditional techniques, yes, but as every engineer knows, if you can bring in an entirely new method of doing things you can indeed gain the entire trinity.

Which brings us to an interesting little end note. If traditional techniques can only bring us two of the three, and we would need a breakthrough in technology to gain all three, then a technology that provides all three is, by definition, a breakthrough in technology. I’ve previously been rather dismissive of 3D printing, as I’ve thought that there’s a limit, and a low one, to the number of things that people will want to print out of plastic at home. But seeing it being used at the very esoteric end of the jet engine business is revising that view somewhat. I now think it’s going to be a bigger thing than I previously did.

How 3D Printers Are Cranking Out Eyes, Bones, and Blood Vessels


At the dawn of rapid prototyping, a common prediction was that 3D printing would transform manufacturing, spurring a consumer revolution that would put a printer in every home. That hasn't quite happened—and like so many emerging technologies, rapid prototyping has found its foothold in a surprisingly different field: medicine.

The following studies and projects represent some of the most fascinating examples of "bioprinting," or using a computer-controlled machine to assemble biological matter using organic inks and super-tough thermoplastics. They range from reconstructing major sections of skull to printing scaffolding upon which stem cells can grow into new bones.

Skulls

Osteofab is a product made by Oxford Performance Materials, a company based in Connecticut. OPM got into the business by selling a high-performance polymer often used in medical implants—a thermoplastic called polyetherketoneketone—in raw form. But over the past few years, the company has also pioneered applications of the material, primarily through additive manufacturing. In February, an American patient received an FDA-approved skull patch made of the material, which had been carefully modeled and printed to fit 75 percent of his unique skull geometry. [Osteofab]

Skin

A big problem with the idea of "printing" new skin is how difficult it is to recreate a particular skin tone in every kind of light: because our skin is so unique, thin, and mutable, it's hard to produce an exact replica. There are too many interesting studies to discuss in a short paragraph, but two highlights: Wake Forest scientist James Yoo is working on a machine that can print skin directly onto burn victims as part of a DoD-funded grant, while scientists at the University of Liverpool are using carefully calibrated 3D scanners to capture samples of each subject's existing skin, which allows them to print a more accurate patch.

The research is ongoing, but the team plans to create a "skin database" of the captured samples, which could be tapped into from remote hospitals without the cameras needed to capture a subject's own skin. [Gizmodo; PhysOrg]

Noses and Ears

Creating prosthetic ears, noses, and chins is often a painful, expensive, and laborious experience for patient and doctor alike. A UK industrial designer named Tom Fripp has spent the past few years collaborating with University of Sheffield scientists to 3D print cheaper, easier-to-make facial prosthetics. Their process involves 3D scanning a patient's face (much less invasive than casting it), modeling a replacement part, and printing it using pigment, starch, and medical-grade silicone.

An added bonus: When the prosthetic wears out (inevitably, they do), the part can be cheaply re-printed. [The Guardian]

Eyes

Last week, Fripp and the team at Sheffield unveiled the results of testing the same process—on eyes. Prosthetic eyes are expensive to make, since they're hand-painted, and can often take months to complete. Fripp's printer can turn out 150 eyes an hour—and the details, like iris color, size, and blood vessels, can be easily customized based on each patient's needs. [PhysOrg]

Medical Implants

As electronic devices—from drones to medical implants—get smaller, scientists have struggled to manufacture batteries small enough to power them. But a team of Harvard engineers is 3D printing microscopic batteries as small as a grain of sand. The team explains:

… the researchers created an ink for the anode with nanoparticles of one lithium metal oxide compound, and an ink for the cathode from nanoparticles of another. The printer deposited the inks onto the teeth of two gold combs, creating a tightly interlaced stack of anodes and cathodes. Then the researchers packaged the electrodes into a tiny container and filled it with an electrolyte solution to complete the battery.

They could eventually power medical implants that are currently held up by power issues. [Harvard]

Bones


3D-printed implants—like jawbones—have been around for several years. But a handful of researchers are experimenting with printing actual replacement bones. For example, a University of Nottingham scientist named Kevin Shakeshaff has developed a bioprinter that creates a scaffold of polylactic acid and gelatinous alginate—which is then coated in adult stem cells. According to Forbes, the scaffolding will dissolve and be replaced by new bone growth within roughly three months. [Forbes]

Blood Vessels and Cells

We may be able to print organs, but part of the problem with this manufactured tissue is creating a functioning circulatory system to go with it. Günter Tovar, a German scientist who heads the Fraunhofer Institute for Interfacial Engineering and Biotechnology, is leading a project called BioRap that's developing 3D-printed blood vessels using a mix of synthetic polymers and biomolecules. These printed systems are being tested in animals—they aren't yet ready for humans—but they could eventually enable printed organ transplants. [Fraunhofer Institute]

Google Puts Money on Robots, Using the Man Behind Android


PALO ALTO, Calif. — In an out-of-the-way Google office, two life-size humanoid robots hang suspended in a corner.

If Amazon can imagine delivering books by drones, is it too much to think that Google might be planning to one day have one of the robots hop off an automated Google Car and race to your doorstep to deliver a package?

Google executives acknowledge that robotic vision is a “moonshot.” But it appears to be more realistic than Amazon’s proposed drone delivery service, which Jeff Bezos, Amazon’s chief executive, revealed in a television interview the evening before one of the biggest online shopping days of the year.

Over the last half-year, Google has quietly acquired seven technology companies in an effort to create a new generation of robots. And the engineer heading the effort is Andy Rubin, the man who built Google’s Android software into the world’s dominant force in smartphones.

The company is tight-lipped about its specific plans, but the scale of the investment, which has not been previously disclosed, indicates that this is no cute science project.

At least for now, Google’s robotics effort is not something aimed at consumers. Instead, the company’s expected targets are in manufacturing — like electronics assembly, which is now largely manual — and competing with companies like Amazon in retailing, according to several people with specific knowledge of the project.

A realistic case, according to several specialists, would be automating portions of an existing supply chain that stretches from a factory floor to the companies that ship and deliver goods to a consumer’s doorstep.

“The opportunity is massive,” said Andrew McAfee, a principal research scientist at the M.I.T. Center for Digital Business. “There are still people who walk around in factories and pick things up in distribution centers and work in the back rooms of grocery stores.”

Google has recently started experimenting with package delivery in urban areas with its Google Shopping service, and it could try to automate portions of that system. The shopping service, available in a few locations like San Francisco, is already making home deliveries for companies like Target, Walgreens and American Eagle Outfitters.

Perhaps someday, there will be automated delivery to the doorstep, which for now is dependent on humans.

“Like any moonshot, you have to think of time as a factor,” Mr. Rubin said. “We need enough runway and a 10-year vision.”

Mr. Rubin, the 50-year-old Google executive in charge of the new effort, began his engineering career in robotics and has long had a well-known passion for building intelligent machines. Before joining Apple Computer, where he initially worked as a manufacturing engineer in the 1990s, he worked for the German manufacturing company Carl Zeiss as a robotics engineer.

“I have a history of making my hobbies into a career,” Mr. Rubin said in a telephone interview. “This is the world’s greatest job. Being an engineer and a tinkerer, you start thinking about what you would want to build for yourself.”

He used the example of a windshield wiper that has enough “intelligence” to operate when it rains, without human intervention, as a model for the kind of systems he is trying to create. That is consistent with a vision put forward by the Google co-founder Larry Page, who has argued that technology should be deployed wherever possible to free humans from drudgery and repetitive tasks.
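Mr. Rubin's wiper example reduces to a trivial sensor-to-actuator mapping. The sketch below is purely illustrative; the thresholds and names are invented for the example, not taken from any real wiper system:

```python
def wiper_setting(rain_intensity: float) -> str:
    """Map a normalised rain-sensor reading (0.0 to 1.0) to a wiper mode."""
    if rain_intensity < 0.05:
        return "off"            # dry, or just road spray
    if rain_intensity < 0.5:
        return "intermittent"   # light rain
    return "continuous"         # heavy rain
```

The point of the example is that the system acts on its own sensing, with no human in the loop; the "intelligence" is simply a policy wired between sensor and actuator.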

A veteran of a number of previous Silicon Valley start-ups and twice a chief executive, Mr. Rubin said he had pondered the possibility of a commercial effort in robotics for more than a decade. Only recently has he come to think that a range of technologies have matured to the point where new kinds of automated systems can be commercialized.


Future Robots Will Be Soft And Shaped Like Squids


Someday soon, your life may be saved by a weird-looking octopus, squid, or caterpillar – a squishy, form-changing, animal-like device that’s actually a soft robot.

Scientists are exploring the fluidity and deformable nature of animals like those mentioned above, as well as insects, starfish, and lizards. Their goal is to combine the maneuverability of these creatures with the autonomous nature of the hard-shelled robots we’re all familiar with—think R2-D2.

The ability of soft robots to climb onto textured surfaces and irregular shapes, crawl along wires and ropes, and burrow into complex, confined spaces will take them to places the hard robots of today can’t venture. In the biomedical field, they could assist in surgeries, while in search-and-rescue missions they could crawl into hazardous situations to aid victims.

While a life-saving soft robot may not be a reality for several years, teams from various disciplines—computer science, organic chemistry, biomechanics, biomimetic robotics, flexible electronics, mechanical engineering, and materials development—are all making advances. Contributions to this field are also being made by neuroscience, polymer chemistry, control systems, and biomedical engineering.

Two groups at the forefront of this research are located in the Boston area. At Tufts University, professor Barry Trimmer heads one of the first groups to explore soft animal neuromechanics and how soft structures can be controlled through electrical motors.

At Harvard, a team of researchers led by professor George Whitesides has developed a soft, silicone-based robot that looks much like an octopus. With a focus more on chemistry, the Whitesides Research Group is exploring elastomers, such as silicone polymers, and how pneumatics—inflation and pressure—can change the shape of soft robots and power them.

According to Trimmer, soft robotics requires a new perspective for engineers. Typically, engineering theory deals with stiff materials and engineers have been trained to make sure what they build—whether it’s a bridge or a car—has minimal deformity under normal activity.

Soft materials, because they can change shape, are often considered to be a problem that needs solving. “Soft-material engineering is not taught much, and the engineering world needs to catch up fast,” Trimmer says. “Rarely are engineers encouraged to think about how they can build out of completely different materials.

“This would give them access to a whole new world of capabilities. Think of the proteins and sugars found in human bodies. These are amazing materials with fantastic properties that have never been used or exploited,” Trimmer says.

While this type of study won’t supersede current engineering, it will be an important part of structural design in the future. The promised applications, however, are still some way off.

“There are still huge issues that need to be addressed,” Trimmer notes, “particularly in terms of developing a good framework and tools to support our theoretical approach.”

One area that needs to advance is soft-material simulation tools. There is as yet no way to model a soft device in a computer and simulate how it will behave, as engineers currently do with structures such as aircraft.

Another challenge is the electronics that power the soft robots. “A rotary electric motor won’t work because it’s built of stiff materials and will limit the capability of a soft robot,” Trimmer says.

The Harvard team is using pressurized gas or liquid to power its robots, but it still has to use pumps, valves and other equipment to drive the pressure. Trimmer’s group is exploring micro-coiled shape-memory alloy wire, which can pull with great force when current is passed through it. Trimmer and his team are also looking for a solution that behaves more like muscle, using electrically active polymers; the group is also using stem cells to grow muscle for its research.

The final major challenge is the control of movement of the robots. “Currently, engineering theory around controlling movement is for rigid systems,” Trimmer says. “This fails miserably when applied to deformable structures.”


The Heart's Own Stem Cells Play Their Part in Regeneration


Until a few years ago, the common school of thought held that the mammalian heart had very little regenerative capacity. However, scientists now know that heart muscle cells constantly regenerate, albeit at a very low rate. Researchers at the Max Planck Institute for Heart and Lung Research in Bad Nauheim have identified a stem cell population responsible for this regeneration. Hopes are growing that it will become possible to stimulate the self-healing powers of patients with diseases and disorders of the heart muscle, and thus to develop new treatments.

Some vertebrates seem to have found the fountain of youth, at least when it comes to their hearts. In many amphibians and fish, for example, this vital organ has a marked capacity for regeneration and self-healing. Some species in these two animal groups have even perfected this capability and can completely repair damage to heart tissue, maintaining the organ's full functionality.

The situation is different for mammals, whose hearts have a very low regenerative capacity. According to the school of thought that prevailed until recently, the reason for this deficit is that heart muscle cells in mammals cease dividing shortly after birth. It was also assumed that the mammalian heart had no stem cells that could form new heart muscle cells. In fact, new studies show that aged muscle cells are replaced in mammalian hearts as well. Experts estimate, however, that only between one and four percent of heart muscle cells are replaced every year.

Scientists in Thomas Braun's Research Group at the Max Planck Institute for Heart and Lung Research have succeeded in identifying a stem cell population in mice that plays a key role in this regeneration of heart muscle cells. Experiments conducted by the researchers in Bad Nauheim on genetically modified mice show that Sca-1 stem cells in a healthy heart are involved in the ongoing replacement of heart muscle cells. The Sca-1 cells increase their activity if the heart is damaged, with the result that significantly more new heart muscle cells are formed.

Since Sca-1 stem cells account for just a tiny proportion of the cells in the heart muscle compared with the large number of heart muscle cells, searching for them is like searching for a needle in a haystack. "We also faced the problem that Sca-1 is no longer available as a stem cell marker protein after the cells have changed into heart muscle cells. To prove this, we had to be inventive," says project leader Shizuka Uchida. The Max Planck researchers genetically modified the stem cells so that, in addition to Sca-1, they produced another visible marker. Even once Sca-1 was no longer detectable, this second marker could still be detected permanently.

"In this way, we were able to establish that the proportion of heart muscle cells originating from Sca-1 stem cells increased continuously in healthy mice. Around five percent of the heart muscle cells regenerated within 18 months," says Uchida. Moreover, mice with experimentally induced heart disease had up to three times more of these newly formed heart muscle cells.
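As a rough consistency check of the figures quoted here (our arithmetic, not the researchers'): five percent replacement over 18 months corresponds, under a simple linear approximation, to roughly 3.3 percent per year, which sits inside the one-to-four percent annual range cited earlier.

```python
def annualised_rate(fraction_replaced: float, months: float) -> float:
    """Linear approximation: fraction of cells replaced per 12 months."""
    return fraction_replaced * 12.0 / months

rate = annualised_rate(0.05, 18.0)  # ~0.033, i.e. ~3.3% per year
```

This ignores compounding and assumes a constant replacement rate, which is adequate at such small fractions.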

"The data shows that, in principle, the mammalian heart is able to trigger regeneration and renewal processes. Under normal circumstances, however, these processes are not enough to ultimately repair cardiac damage," says Braun. The aim is to find ways in which the formation of new heart muscle cells from heart stem cells can be improved and thereby strengthen the heart's self-healing powers.

How to Change Cell Types by Flipping a Single Switch


With few exceptions, cells don't change type once they have become specialized -- a heart cell, for example, won't suddenly become a brain cell. However, new findings by researchers at UC Santa Barbara have identified a method for changing one cell type into another in a process called forced transdifferentiation. Their work appears today in the journal Development.

With C. elegans as the animal model, lead author Misty Riddle, a Ph.D. student in the Rothman Lab, used the transcription factor ELT-7 to change the roundworm's pharynx cells into intestine cells in a single-step process. Every cell has the genetic potential to become any kind of cell. However, a cell's history and the signals it receives change the transcription factors it contains, and thus determine what kind of cell it will become. A transcription factor is a protein that turns genes on.

"This discovery is quite surprising because it was previously thought that only early embryonic cells could be coaxed into changing their identity this readily," Riddle said. "The committed cells that we switched are completely remodeled and reprogrammed in every way that we tested."

Switching one cell type into another to replace lost or damaged tissue is a major focus of regenerative medicine. The stumbling block is that cells are very resistant to changing their identity once they've committed to a specific kind.

"Our discovery means it may become possible to create a tissue or organ of one type directly out of one of another type," says Joel Rothman, professor in UCSB's Department of Molecular, Cellular and Developmental Biology, who heads the lab.

Riddle and her colleagues challenged all C. elegans cells to make the switch to intestine, but only the pharynx cells were able to do so. "We asked skin cells, muscles, neurons to change but found that only the cells in the pharynx were able to transform," Riddle explained. "So this brings up some big questions. Why aren't other cells changing their identities? What is special about the cells in the pharynx that allow them to change their identity into intestine?

"Since C. elegans is such an incredible model system we can really tackle these questions," she continued. "By knocking down certain genes and manipulating the animal, we can begin to better understand the conditions under which skin cells and muscles cells might change their identities. That will help us figure out what is special about the cells in the pharynx."

Previous studies in the Rothman lab revealed the cascade of transcription factors required for the proper development of the C. elegans intestine. Used in the later stage of intestine development, ELT-7 continues to be expressed for the life of the animal and has important functions not only in gut development but also in gut function.

This study is revolutionary in that researchers have clearly demonstrated that cells are not limited to their original identities. "Think of them as different rooms in a house," Riddle said.

"Like cells, different rooms in your house have different structures and functions. Changing the function of a room is likely to be easier if the structures are similar, say, turning a bedroom into a living room or vice versa. But changing the bathroom into a living room presents a bigger challenge," Riddle explained. "Just as some rooms in a house are more easily converted to others, some cell types may be more easily coaxed into changing their identity to another specific type. This doesn't seem to depend on the relatedness of the cells in terms of when they were born or how closely related they are in their lineage."

Maybe the heart cell can become a brain cell after all.

As demonstrated by another important finding in the UCSB study, the cells remodeled themselves in a continuous process; there were stages in the remodeling process during which the identity of the cell was mixed. "Going back to the home remodeling example," Riddle said, "the couch and television were added to the bedroom before the bed and dresser were removed."

"The key importance of our finding is that we have observed cells undergoing a process of morphing in which one specialized cell type is converted into another of an entirely different type," Rothman said. "This means that it may be possible to turn any cell into any other cell in a direct conversion. In terms of our understanding of biological constraints over cell identity, we've shown that a barrier we believed absolutely prevented cells from switching their identity does not exist. It may one day be possible to switch an entire organ from one kind to another."


One Million Chips Mimic One Percent Of The Brain: A Robot's Neural Network


Last month, the Human Brain Project kicked off a 10-year study to deepen our understanding of the brain. It is considered one of the most advanced neuroscience projects in the world.

As part of that project, Steve Furber, the ICL Professor of Computer Engineering in the School of Computer Science at the University of Manchester, discussed how the processor chips that power our smartphones and other consumer devices could be applied to modelling the brain.

He’s not just another professor speculating about the possibilities: with Sophie Wilson, he co-invented the ARM chip, which powers about 10 billion consumer devices (roughly 35% of the worldwide total). ARM is a semiconductor company whose licensees have shipped more than 20 billion ARM-based chips since 1990.

And, here is where it might start to sound like a scene from the Matrix – our brains and computer chips.

SpiNNaker — Inspired by the workings of the human brain, powered by 1 million ARM processors.

Furber is working on the SpiNNaker Project, run within the Advanced Processor Technologies (APT) Research Group at the University of Manchester as part of the Human Brain Project.

SpiNNaker is a new computer architecture inspired by the human brain; from it, the team built the SpiNNaker Machine, a parallel computing platform made up of one million ARM processor chips. The SpiNNaker Machine will be applied to three main research areas: robotics, computer science and neuroscience.

A good demonstration of the SpiNNaker machine is this robot, whose SpiNNaker chips simulate a neural network, processing data from its silicon retinas much the way neurons in the brain would.

The board on the robot's back is a SpiNNaker board carrying 48 of the chips developed by Furber and the APT group. The robot itself was developed by Prof. Jörg Conradt at Technische Universität München.

The robot's eyes are a 'silicon retina'. The robot has been programmed to recognize one of two shapes, a '+' and an 'x'. When it recognizes either shape, it takes a specific action: for the '+' it advances toward the symbol; for the 'x' it retreats.

The video shows the robot being baited with a piece of paper showing the symbols; simply rotating the paper changes the symbol from a '+' to an 'x'.
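The decision logic just described (advance on '+', retreat on 'x') can be sketched as a simple control loop. The classifier below is a hypothetical stand-in for the spiking network running on the SpiNNaker board, and all function names are invented for illustration:

```python
# Hypothetical sketch of the robot's shape-response loop.
# classify_frame() stands in for the spiking neural network on the
# SpiNNaker board; drive() stands in for the motor interface.

def classify_frame(events):
    """Return '+', 'x', or None from silicon-retina events (stub)."""
    # In the real system this decision is made by a spiking network.
    plus_score = sum(1 for e in events if e == "+")
    cross_score = sum(1 for e in events if e == "x")
    if plus_score > cross_score:
        return "+"
    if cross_score > plus_score:
        return "x"
    return None

def drive(symbol):
    """Map the recognized symbol to an action."""
    if symbol == "+":
        return "advance"   # move toward the '+' target
    if symbol == "x":
        return "retreat"   # back away from the 'x' target
    return "idle"          # no confident detection

print(drive(classify_frame(["+", "+", "x"])))  # advance
```

Rotating the baiting paper flips the classifier's output, which is what flips the robot between advancing and retreating.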

“The SpiNNaker machine will incorporate one million ARM processor cores, but the brain is incredibly complicated. We will only be able to get to 1% of the scale [of the brain] with one million chips,” says Furber. “We hope that down the road our findings and research could provide new insight to pharma companies to help them develop better drugs for brain diseases or mental disorders.”
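Furber's "1% of the brain" figure can be sanity-checked with back-of-envelope arithmetic. The neuron count of the brain and the per-processor simulation capacity below are assumptions for illustration, not figures from the article:

```python
# Back-of-envelope check of the "1% of the brain" claim, assuming a
# human brain of roughly 86 billion neurons (a commonly cited
# estimate) and one million processors, each simulating on the order
# of a thousand simple neuron models.

NEURONS_IN_BRAIN = 86e9      # assumption: ~86 billion neurons
PROCESSORS = 1e6
NEURONS_PER_PROCESSOR = 1e3  # assumption: ~1,000 neuron models each

simulated = PROCESSORS * NEURONS_PER_PROCESSOR
fraction = simulated / NEURONS_IN_BRAIN
print(f"{simulated:.0e} neurons simulated = {fraction:.1%} of the brain")
```

Under these assumptions the machine simulates about a billion neurons, which is close to the quoted 1% of the brain.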

For example, Alzheimer’s is the sixth leading cause of death in the United States. In 2013, Alzheimer’s will cost the US $203 billion, a figure expected to rise to $1.2 trillion by 2050.

Mr. Furber and Ms. Wilson recently received The Economist Innovation Award in the telecoms category for the ARM processor.

You can read my full story on the Human Brain Project here. 

Gene Found to Be Crucial for Formation of Certain Brain Circuitry


Using a powerful gene-hunting technique for the first time in mammalian brain cells, researchers at Johns Hopkins report they have identified a gene involved in building the circuitry that relays signals through the brain. The gene is a likely player in the aging process in the brain, the researchers say. Additionally, in demonstrating the usefulness of the new method, the discovery paves the way for faster progress toward identifying genes involved in complex mental illnesses such as autism and schizophrenia -- as well as potential drugs for such conditions. A summary of the study appears in the Dec. 12 issue of Cell Reports.

"We have been looking for a way to sift through large numbers of genes at the same time to see whether they affect processes we're interested in," says Richard Huganir, Ph.D., director of the Johns Hopkins University Solomon H. Snyder Department of Neuroscience and a Howard Hughes Medical Institute investigator, who led the study. "By adapting an automated process to neurons, we were able to go through 800 genes to find one needed for forming synapses -- connections -- among those cells."

Although automated gene-sifting techniques have been used in other areas of biology, Huganir notes, many neuroscience studies instead build on existing knowledge to form a hypothesis about an individual gene's role in the brain. Traditionally, researchers then disable or "knock out" the gene in lab-grown cells or animals to test their hypothesis, a time-consuming and laborious process.

In this study, Huganir's group worked to test many genes all at once using plastic plates with dozens of small wells. A robot was used to add precise allotments of cells and nutrients to each well, along with molecules designed to knock out one of the cells' genes -- a different one for each well.

"The big challenge was getting the neurons, which are very sensitive, to function under these automated conditions," says Kamal Sharma, Ph.D., a research associate in Huganir's group. The team used a trial-and-error approach, adjusting how often the nutrient solution was changed and adding a washing step, and eventually coaxed the cells to thrive in the wells. In addition, Sharma says, they fine-tuned an automated microscope used to take pictures of the circuitry that had formed in the wells and calculated the numbers of synapses formed among the cells.
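The screening logic the team describes, one silenced gene per well followed by a comparison of synapse counts against controls, can be sketched as follows. All well counts, gene names other than LRP6, and the hit threshold are invented for illustration:

```python
# Hypothetical sketch of per-well analysis in a knockout screen:
# each well silences one gene; wells whose synapse counts fall far
# below the control mean are flagged as hits. Numbers are invented.

from statistics import mean, stdev

control_counts = [98, 102, 95, 101, 104]           # control wells
knockout_counts = {"GENE_A": 97, "LRP6": 41, "GENE_B": 99}

mu, sigma = mean(control_counts), stdev(control_counts)

def is_hit(count, z_threshold=3.0):
    """Flag wells more than z_threshold SDs below the control mean."""
    return (mu - count) / sigma > z_threshold

hits = [gene for gene, c in knockout_counts.items() if is_hit(c)]
print(hits)  # ['LRP6'] with these invented numbers
```

The same comparison, scaled up to 800 wells imaged by the automated microscope, is what singled out LRP6 in the study.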

The team screened 800 genes in this way and found big differences in the well of cells with a gene called LRP6 knocked out. LRP6 had previously been identified as a player in a biochemical chain of events known as the Wnt pathway, which controls a range of processes in the brain. Interestingly, Sharma says, the team found that LRP6 was only found on a specific kind of synapse known as an excitatory synapse, suggesting that it enables the Wnt pathway to tailor its effects to just one synapse type.

"Changes in excitatory synapses are associated with aging, and changes in the Wnt pathway in later life may accelerate aging in general. However, we do not know what changes take place in the synaptic landscape of the aging brain. Our findings raise intriguing questions: Is the Wnt pathway changing that landscape, and if so, how?" says Sharma. "We're interested in learning more about what other proteins LRP6 interacts with, as well as how it acts in different types of brain cells at different developmental stages of circuit development and refinement."

Another likely outcome of the study is wider use of the gene-sifting technique, he says, to explore the genetics of complex mental illnesses. The automated method could also be used to easily test the effects on brain cells of a range of molecules and see which might be drug candidates.

Artificial Euglenids: Smaller, Softer Robots Have A Cuter Image


The image of robotics in popular culture is classic science fiction: cogwheels, pistons and levers, perhaps with a layer of rubberized skin. But the miniaturized robots of the future will be "soft".

"If I think of the robots of tomorrow, what comes to mind are the tentacles of an octopus or the trunk of an elephant rather than the mechanical arm of a crane or the inner workings of a watch. And if I think of micro-robots then I think of unicellular organisms moving in water. The robots of the future will be increasingly like biological organisms," explains Antonio De Simone of the International School for Advanced Studies (SISSA).

De Simone and his team at SISSA have been studying the movement of euglenids, unicellular aquatic animals, for several years. One of the aims of De Simone's research – which has recently been awarded a European Research Council Advanced Grant of 1,300,000 euro – is to transfer the knowledge acquired in euglenids to micro-robotics, a field that represents a promising challenge for the future. Micro-robots may in fact carry out a number of important functions, for example for human health, by delivering drugs directly to where they are needed, re-opening occluded blood vessels, or helping to close wounds, to name just a few.

A simulation of euglenid movement. Credit: SISSA

To do this, these tiny robots will have to be able to move around efficiently. "Imagine trying to miniaturize a device made up of levers and cogwheels: you can't go below a certain minimal size. Instead, by mimicking biological systems we can go all the way down to cell size, and this is exactly the direction research is taking. We, in particular, are working on movement and studying how certain unicellular organisms with highly efficient locomotion move".

In their study, De Simone and his co-author Arroyo simulated euglenid species with different shapes and locomotion methods, based chiefly on deformation and swelling of the cell body, to describe in detail the mechanics and characteristics of the resulting movement.

"Our work not only helps to understand the movement mechanism of these unicellular organisms, but it provides a knowledge base to plan the locomotion system of future micro-robots".

An Artificial Hand with Real Feelings


There have been remarkable mechanical advances in prosthetic limbs in recent years, including rewiring nerve fibers to control sophisticated mechanical arms (see “A Lifelike Prosthetic Arm”), and brain interfaces that allow for complicated thought control (see “Brain Helps Quadriplegics Move Robotic Arms with Their Thoughts”). But for all this progress, prosthetic limbs cannot send back sensory information to the wearer, making it harder for them to do tasks like pick up objects without crushing them or losing their grip.

Now researchers at the Cleveland Veterans Affairs Medical Center and Case Western Reserve University have developed a new kind of interface that can convey a sense of touch from 20 spots on a prosthetic hand. It does this by directly stimulating nerve bundles—known as peripheral nerves—in the arms of patients; two people have so far been fitted with the interface. What’s more, the implants continue to work after 18 months, a noteworthy milestone given that electrical interfaces to nerve tissue can gradually degrade in performance.

A video produced several weeks ago shows a 48-year-old Ohio man, who lost his right hand in an accident three years ago, using his prosthetic hand to pick up cherries and remove their stems without crushing them. This was thanks to the new technology, which allowed force detectors on the digits of his prosthetic hand to convey touch information directly to three pea-sized nerve interfaces surgically implanted in his lower right arm. He controls the hand through a standard technology called a myoelectric interface, which uses signals from the muscles in his lower arm to govern prosthetic hand movements.

The work opens up the possibility that prosthetic limbs could one day provide enduring and nuanced feedback to humans, says Dustin Tyler, the Case Western professor behind the project. 

Lee Miller, a professor of neuroscience at Northwestern University who was not involved in the research, says the achievement appears remarkable. “This is the greatest number of distinct touch sensations generated by peripheral nerve stimulation that I know of, and the 18-month-long stability is also unsurpassed,” Miller says. A paper on the work is being prepared, Tyler says.

Nerve center: This seven-millimeter-long device, called a cuff electrode, can convey feelings from sensors on a prosthetic hand or fingers when attached to a peripheral nerve in the arm stump.

At the heart of the technology is a custom version of an interface known as a cuff electrode. Three nerve bundles in the arm—radial, median, and ulnar—are held in the seven-millimeter cuffs, which gently flatten them, putting the normally round bundles in a more rectangular configuration to maximize surface area. 

Then a total of 20 electrodes on the three cuffs deliver electrical signals to nerve fibers called axons from outside a protective sheath of living cells that surround those nerve fibers. This approach differs from other experimental technologies, which penetrate the sheath in order to directly touch the axons. These sheath-penetrating interfaces are thought to offer higher resolution, at least initially, but with a potentially higher risk of signal degradation or nerve damage over the long term. And so they have not been tested for longer than a few weeks.
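Conceptually, the feedback path maps each fingertip force reading onto a stimulation level for one of the 20 electrode sites. The linear encoding and every parameter value below are hypothetical illustrations, not the team's actual scheme:

```python
# Hypothetical sketch: map a fingertip force-sensor reading to a
# stimulation intensity on the matching cuff-electrode channel.
# The linear encoding and the parameter values are invented.

MAX_FORCE_N = 10.0     # assumed full-scale sensor reading, newtons
MAX_INTENSITY = 1.0    # normalized stimulation amplitude

def force_to_intensity(force_n):
    """Clamp the force into range and scale it linearly."""
    force_n = max(0.0, min(force_n, MAX_FORCE_N))
    return MAX_INTENSITY * force_n / MAX_FORCE_N

def encode(readings):
    """readings: {channel: force in N} -> per-channel intensity."""
    return {ch: force_to_intensity(f) for ch, f in readings.items()}

print(encode({0: 2.5, 7: 12.0}))  # {0: 0.25, 7: 1.0}
```

In the real system, Tyler tunes these signal parameters per site, which is how the same electrode can evoke sensations ranging from cotton balls to sandpaper.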

The cherry-grabbing test subject is Igor Spetic of Madison, Ohio. He lost his hand at his job when it was crushed in a drop-forging hammer while he was making an aluminum fitting for a jet engine. Now he’s got two small wiring harnesses protruding from a port in his upper right arm. At the lab, Tyler connects those harnesses to a device that generates the electrical signals that are sent to the cuff. This device, in turn, receives triggering information from sensors on his prosthetic hand.

His sensory feedback has only been felt in the lab so far, but Spetic professes amazement at his unique trial. “It’s real exciting to see what they are doing, and I hope it can help other people,” Spetic says. “I know that science takes a long time. If I don’t get something to take home, but the next person does, it’s all to the better.”

Researchers carefully place the implants to determine where Spetic perceives forces on his missing limb. Once installed, Spetic can detect sensations on several fingers and on the back and side of his missing hand, corresponding to inputs from the prosthetic. Once the implant is in place, the sensations always seem to arise in the same spots, and do not shift around, Spetic says.

Tyler can tune the electrical signals sent to the cuff to produce a variety of sensations. Spetic says sometimes it feels like he’s touching a ball bearing, other times like he’s brushing against cotton balls, sandpaper, or hair.

Tyler says the sensations Spetic reports are more natural and useful than the vague buzzing feeling that earlier experimental technologies had often produced. This means it may be possible to customize a sensation so that the patient feels like he’s touching a point of a pen, for example. Other groups and companies are working on better force detectors to attach to a prosthetic hand and generate such nuanced signals.

“[The] research is truly cutting edge and leading the world in terms of providing direct sensory feedback to amputees,” says Jack Judy, director of the Nanoscience Institute for Medical and Engineering Technology at the University of Florida in Gainesville. Judy recently served as a program manager for the Defense Advanced Research Projects Agency, where he ran a program that sought to improve the performance and reliability of neural interfaces used to control prosthetic limbs for soldier amputees. At least 1,715 soldiers suffered amputations in the wars in Iraq and Afghanistan.

Other existing technologies provide sensory inputs directly to nerves. Cochlear implants, for example, stimulate the auditory nerve to restore hearing, and technologies for stimulating the vagus nerve—which runs from the brain stem to the chest and abdomen—can be used to treat epilepsy and even depression. However, “when it comes to stimulating nerves to provide an effective sense of touch in humans, there has been far less progress,” Judy says.

There are other approaches to sensory feedback, including efforts to do this directly through brain implants (see “Giving Prosthetics a Sense of Touch”). But brain implants are considered farther off because of the heightened safety concerns from opening the skull. The Case Western work is in a pilot feasibility trial, and Tyler says that if all goes well, a device could be on the market in five to 10 years.

Living without a pulse: Engineering a better artificial heart


(CNN) -- The human heart beats 60 to 100 times a minute, more than 86,000 times a day, 35 million times a year. A single beat pushes about 6 tablespoons of blood through the body.

An organ that works that hard is bound to fail, says Dr. Billy Cohn, a heart surgeon at the Texas Heart Institute. And he's right. Heart failure is the leading cause of death in men and women, killing more than 600,000 Americans every year.

For a lucky few, a heart transplant will add an average of 10 years to their lives. For others, technology that assists a failing heart -- called "bridge-to-transplant" devices -- will keep them alive as they wait for a donor heart.

Unfortunately, more often than not, the new heart doesn't arrive in time.

That's why Cohn and his mentor -- veteran heart surgeon Dr. O.H "Bud" Frazier -- are working to develop a long-term, artificial replacement for the failing human heart. Unlike existing short-term devices that emulate the beating organ, the new machine would propel blood through the body at a steady pace so that its recipients will have no heartbeat at all.

The concept of a pulseless heart is difficult to fathom. Cohn often compares it to the development of the airplane propeller. When people started to develop flying machines, he says, they first tried to emulate the way birds fly -- by flapping the wings aggressively.

"It wasn't until they decided, 'We can't do this the way Mother Nature did,' and came up with the rapidly spinning propeller that the Wright Brothers were able to fly," Cohn says.


The idea of an artificial heart goes back decades.

Frazier began medical school in what he describes as "the Kennedy Era."

"We were going to the moon; we were going to achieve world peace," and Frazier wanted to develop the first artificial heart. In 1968, he left for Vietnam as a flight surgeon. Thirteen months later, his helicopter was shot down, and he nearly died.

"That experience convinced me I should stick to something more meaningful for the rest of my life."

That he did. The veteran surgeon, inventor and researcher has devoted the last half century to developing technologies to fix or replace the human heart, the most notable of which is the newest generation of continuous flow Left Ventricular Assist Devices, known as LVADs.

Modeled after an Archimedes Screw, a machine that raises water to fill irrigation ditches, the continuous flow LVAD is a pump that helps failing hearts push additional blood through the body with a rapidly spinning impeller.

Today, the continuous flow LVAD has been implanted in 20,000 people worldwide, including former Vice President Dick Cheney, who relied on one for nearly two years before receiving a heart transplant.

In some cases, the LVAD's turbine has essentially taken over the pumping process entirely from the biological heart. In these instances, the implant recipient barely has any pulse at all.

Observing what happened in these patients led Frazier to one compelling question: If the LVAD can take over for a weakened heart, could it replace the organ entirely?

In 2004, Frazier asked Cohn to collaborate on a new research project. Cohn's interest in heart surgery dates back to his boyhood, when he read articles about world-renowned heart surgeons Dr. Michael E. DeBakey and Dr. Denton Cooley, who developed, and played a role in implanting, the first artificial heart in a human in 1969.

Now the holder of some 70-odd U.S. patents, Cohn says his work with Frazier to build an artificial heart is the most ambitious project of his career.

The surgeons set out to combine two LVADs to replicate the functions of the heart's right and left ventricles. Using two commercially available LVAD turbines, Frazier and Cohn combined the devices with plastics and other material used for implants: hernia mesh, Dacron cardiovascular patches and medical silicone. Everything met FDA standards, but Cohn describes the final product as "rather kludged together."

The surgeons tested their invention by installing it in around 70 calves. All of the calves produced a flat line on an EKG, which measures the heart's electrical activity, yet they stood, ate and walked around, seemingly oblivious to a small technicality: they had no heartbeat.

In order for the FDA to approve the device for clinical trials, the calves needed to live for at least one month. Cohn and Frazier's device exceeded that standard, with many calves living healthily through full 90-day studies.

Cohn and Frazier were encouraged, and in March 2011, put their artificial heart into a human patient.

Craig Lewis, 55, was admitted to the Texas Heart Institute with amyloidosis, a rare autoimmune disease that fills internal organs with a viscous protein, causing rapid heart, kidney and liver failure. Without intervention, Lewis would have been dead in days. Frazier and Cohn decided it was the right moment to test their device, and they undertook the lengthy procedure.

Less than 48 hours later, Lewis was sitting up, talking and using his laptop. When doctors put the stethoscope to Lewis's heart, all they heard was a steady whir of what sounded like a boat propeller. Lewis survived for six weeks until his failing kidneys and liver got the best of him and his family asked doctors to unplug the device.

I quickly realized this is the most sophisticated and elegant device I've ever seen.
Dr. Billy Cohn

Lewis's case proved what Frazier and Cohn had dreamed of for nearly half a century: Humans can indeed survive without a pulse.

But there are some potential downsides to having a heart that doesn't beat.

Michael Garippa, CEO of Syncardia, maker of the first FDA-approved mechanical artificial heart, says that beating up the blood "in a blender" can trigger internal bleeding in other organs. Gastrointestinal bleeding and strokes are a high risk for patients on other heart-assist devices, he says.

Cohn describes the Syncardia, which has been implanted in nearly 1,300 people to date, as a "brilliant and elegant device," but it is intended for short-term use only and is cumbersome: patients "have to carry around a compressor and have two air hoses going in and out of [their] chest."

Frazier and Cohn see a pulseless device as the only way to build an artificial heart that is both efficient and durable.

Two years ago, Daniel Timms, a 35-year-old Australian biomedical engineer, walked through the door of Cohn's office at the Texas Heart Institute on an Australian government-funded trip to Houston. He was wearing blue jeans and a T-shirt and carrying, in his backpack, a heart device he had been working on for the past 10 years.

Cohn was skeptical at first: "A lot of people come to our door with devices and prototypes, and they range from moderately interesting to laughably stupid. ... My expectations were very low. He pulls this thing out and starts telling me about it, and I quickly realized this is the most sophisticated and elegant device I've ever seen."

Made up of one moving part that rotates within a can-like exterior no bigger than a fist, the device has a large and small blade on opposing sides of the rotor. The small blade pushes blood through the heart's right chamber, to the lungs, and the larger blade pumps out blood through the left chamber to the rest of the body.

What fascinated the two surgeons most was that the device operates suspended in a powerful magnetic field, which prevents the wear and tear common in technology designed for pumping blood. Two magnetic fields also control the blades, each of which rotates an average of 2,000 times a minute, varying with whether a person is standing, sitting up, exercising or coughing.

The excitement over the technology drew a $2.4 million donation from Houston furniture-store owner James Mackingvale, allowing Timms and a team of seven researchers from Australia, Germany, Japan and Brazil to relocate to Houston in January to collaborate with the Texas Heart Institute.

Timms' group also brought a 3-D printer, which enables the medical staff to quickly make its own parts for the artificial heart. Within days, the doctors can print a new part that pumps the blood and then can evaluate its performance, a process that once took months.

In July, the doctors even tried a plastic 3-D-printed version on a calf. The calf survived for several days and was able to move around. They're now working on a titanium version of the device as a prototype for a more durable technology. Once that device is developed, they will begin animal testing, using the results to determine whether the technology is ready to propose for implantation in terminally ill human patients.

If all goes well, Cohn, Frazier and Timms would be able to submit the device for FDA approval within the next few years.

Frazier believes this artificial heart will save a lot of patients he loses today, particularly those who suffer from premature heart failure.

"This is not something ready for prime-time yet," he says, but for those suffering now "we hope to give them hope."


Artificial intelligence, algorithms shed light on breast cancer in Alberta research


EDMONTON - Researchers from the University of Alberta and Alberta Health Services have figured out a faster, cheaper, more accurate way of understanding breast cancer cells.

They have developed a computer algorithm that helps researchers predict whether estrogen is sending signals to cancer cells to grow into tumours in the breast. The algorithm was 93-per-cent accurate. That could help patients get better treatment.

The results of their study were published Monday in PLOS ONE, an international, peer-reviewed online publication.

Identifying the specific genes involved in cancer growth is challenging because each cell in the body contains 23,000 genes, said Russ Greiner, a computing science professor at the U of A.

He led a team of researchers that used artificial intelligence to analyze data and find patterns, finally pinpointing three genes that determine whether a tumour is fed by estrogen.
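A toy version of a three-gene predictor shows the general idea: combine a handful of expression values into a score and threshold it. The gene names, expression values, weights, and decision rule below are all invented; the study's actual model is not described here:

```python
# Toy illustration of predicting estrogen-receptor status from three
# gene-expression values. The gene names, weights, and threshold are
# invented; the Alberta team's actual algorithm differs.

WEIGHTS = {"GENE1": 0.9, "GENE2": -0.4, "GENE3": 0.6}  # invented
THRESHOLD = 1.0

def predict_er_positive(expression):
    """expression: {gene: normalized level}; True if score > threshold."""
    score = sum(WEIGHTS[g] * expression.get(g, 0.0) for g in WEIGHTS)
    return score > THRESHOLD

sample = {"GENE1": 1.8, "GENE2": 0.5, "GENE3": 0.3}
print(predict_er_positive(sample))  # True: 0.9*1.8 - 0.4*0.5 + 0.6*0.3 = 1.6
```

The hard part, which the team used artificial intelligence for, was narrowing 23,000 candidate genes down to the three whose values carry the signal.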

Estrogen receptors provide doctors with critical information about the biology of breast cancer. They can then prescribe anti-estrogen drug therapies and provide patients with more personalized care.

“This doesn’t tell us how to prevent a cancer from developing, but it does tell us how to test a tumour that’s already there and give us information on how we should treat it,” said John Mackey, co-author of the study and director of the Cross Cancer Institute Clinical Trials Unit.

While the algorithm won’t replace traditional lab tests just yet, that could change in the next five to eight years as new technologies become more affordable.

“Cancers are common, pathologists are not. They’re in short supply. So we need to figure out ways to leverage the understanding of biology and the computerized systems to give us that information without monopolizing our pathologists’ day,” Mackey said.

The algorithm analyzed data gathered from 176 frozen tumour samples stored at the Cross Cancer Institute. It was later tested on other data sets available online with similar success.

Ultrathin 'Diagnostic Skin' Allows Continuous Patient Monitoring


It is likely that at your next visit to the doctor, a medical practitioner will start by taking your temperature. This has been part of medical practice for so long that we may see it as antiquated, with little value. However, the routine nature of the ritual belies the critical importance of obtaining accurate body temperature to assess the health of a patient. In fact, subtle variations in temperature can indicate potentially harmful underlying conditions such as constriction or dilation of blood vessels, or dehydration. Even changes in mental activity, such as increased concentration while solving a mathematical equation, are accompanied by measurable changes in body temperature.

Accordingly, a number of technologies have been developed to detect skin temperature changes that can serve as early indicators of disease development and progression. For example, sophisticated infrared digital cameras can detect, in high resolution, temperature changes across large areas of the body. At the other end of the technology spectrum, paste-on temperature sensors provide simple, single-point measurements. Although both technologies are accurate, infrared cameras are expensive and require the patient to remain completely still, and while paste-on sensors allow free movement, they provide limited information. Now, an international multidisciplinary team including researchers at the University of Illinois at Urbana/Champaign and the National Institute of Biomedical Imaging and Bioengineering (NIBIB) has developed an entirely new approach: a sophisticated "electronic skin" that adheres non-invasively to human skin, conforms well to contours, and provides a detailed temperature map of any surface of the body.

How it works

The temperature sensor array is a variation of a novel technology, originally developed in the lab of Professor John Rogers at the University of Illinois at Urbana/Champaign, called "epidermal electronics," consisting of ultrathin, flexible skin-like arrays, which resemble a tattoo of a micro-circuit board. The arrays developed with NIBIB contain sensors and heating elements. The technology offers the potential for a wide range of diagnostic and therapeutic capabilities with little patient discomfort. For example, sensors can be incorporated that detect different metabolites of interest. Similarly, the heaters can be used to deliver heat therapy to specific body regions; actuators can be added that deliver an electrical stimulus or even a specific drug. Future versions will have a wireless power coil and an antenna for remote data transfer. The development of this new thermal technology was reported in the October 23 issue of Nature Materials.

Testing the new device

In this study, the array contained heat sensors so that it could be tested for its ability to accurately detect variations in localized skin temperature when compared to the "gold standard," the infrared camera. A number of separate physical and mental stimulus tests were performed to compare the two. The subject wore a heat sensing array on the palm and also had heat measurements obtained with an infrared camera placed 16 inches above the same region. The profiles of temperature changes were virtually identical with the two methods.

The investigators also performed a test that is used as a cardiovascular screening procedure. Blood flow changes are detected by changes in skin temperature as blood moves through the forearm while a blood pressure cuff on the upper arm is inflated and deflated. Once again, the infrared camera and the array technology showed virtually identical temperature change profiles. Temperature was reduced when blood flow was blocked and it increased as blood was released. Slow return of blood to the forearm can indicate potential cardiovascular abnormalities.
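The "virtually identical profiles" claim amounts to a high correlation between the two temperature time series. A minimal way to quantify that is the Pearson correlation coefficient; the temperature values below are invented for illustration:

```python
# Sketch: quantify agreement between the skin-array and the
# infrared-camera temperature profiles with Pearson correlation.
# Temperature values are invented.

from statistics import mean

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length series."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    var_x = sum((a - mx) ** 2 for a in x)
    var_y = sum((b - my) ** 2 for b in y)
    return cov / (var_x ** 0.5 * var_y ** 0.5)

# Invented cuff-test profiles: temperature dips during occlusion,
# then recovers when blood flow is released.
array_c  = [33.0, 32.4, 31.9, 31.8, 32.6, 33.1]
camera_c = [33.1, 32.5, 32.0, 31.9, 32.7, 33.2]

print(f"r = {pearson(array_c, camera_c):.3f}")
```

An r near 1.0 would correspond to the near-identical profiles the investigators report for the array and the gold-standard camera.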

Beyond serving as a test to validate the accuracy of the skin array, this experiment demonstrated that the device could potentially be used as a rapid screening tool to determine whether an individual should be further tested for disorders, such as diabetes or cardiovascular disease, that cause abnormal peripheral blood flow. It could also be a signal to doctors and patients about effects of certain medications.

The final experiment addressed a feature unique to the skin array technology: delivery of a stimulus, such as heat. The researchers sent precise pulses of heat to the skin to measure skin perspiration, which indicates a person's overall hydration. Taken together, the test results demonstrated the ability of the array technology to obtain a range of accurate, clinically useful measurements, and deliver specific stimuli, with a single, convenient, and relatively inexpensive device.

Potential applications

The researchers say the current version of the array that senses and delivers heat only hints at the vast possibilities for this technology. For example, in theory, any type of sensors can be included, such as sensors that reveal glucose levels, blood oxygen content, blood cell counts, or levels of a circulating medication. Also, instead of delivering heat, an element could be included in the circuit that delivers a medication, an essential micro-nutrient, or various stimuli to promote rapid wound healing. This ability to sense and deliver a wide range of stimuli makes the system useful for diagnostic, therapeutic and experimental purposes.

The technology has the potential to carry out such therapeutic and diagnostic functions while patients go about their daily business, with the data being delivered remotely via a cell phone to a physician -- saving the expense of obtaining the same diagnostic measurements, or performing the same therapeutic stimulus, in the clinic.

Alexander Gorbach, Ph.D., one of the co-investigators from NIBIB, and head of the Infrared Imaging and Thermometry Unit, says, "We are very excited about the unique potential of this technology to vastly improve healthcare at multiple levels. Continuous monitoring outside of a hospital setting will be more convenient and cost-effective for patients. Additionally, access to data collected over extended periods, while a patient is going about a normal routine, should improve the practice of medicine by enabling physicians to adjust a treatment regimen '24/7' as needed."

The investigators are already receiving requests from other clinical research labs to use this technology, and plan to expand collaboration with academia and industry. The hope is that the research community's interest in epidermal electronics will accelerate the development and validation of this technology and hasten its incorporation into clinical care.


Scientists grow artificial skin from stem cells of umbilical cord

Scientists have developed a breakthrough technique to grow artificial skin - using stem cells taken from the umbilical cord. The new method means major burn patients could benefit from faster skin grafting, the researchers say, as the artificial skin can be stored and used when needed.

According to the World Health Organization (WHO), there were approximately 410,000 burn injuries in the US in 2008, of which around 40,000 required hospitalization.

Patients who have suffered severe burns may require skin grafts. At present, this involves the growth of artificial skin using healthy skin from the patients' own bodies. But the researchers note this process can take weeks.

"Creating this new type of skin using stem cells, which can be stored in tissue banks, means that it can be used instantly when injuries are caused, and which would bring the application of artificial skin forward many weeks," says study author Antonio Campos, professor of histology at the University of Granada in Spain.

To create the new technique, details of which are published in the journal Stem Cells Translational Medicine, the scientists used Wharton's jelly mesenchymal stem cells from the human umbilical cord.

Previous research from the team had already led them to believe that stem cells from the umbilical cord could be turned into epithelial cells (tissue lining cells).

The investigators note that the stem cells are "excellent candidates" for tissue engineering due to their "proliferation and differentiation capabilities," but that their potential to turn into epithelial cells had not been explored, until now.

Umbilical cord 'novel cell source' for tissue engineering

The scientists combined the umbilical cord stem cells with a biomaterial made of fibrin - a protein found in the clotting of blood - and agarose - a polymer usually extracted from seaweed.

The researchers found that when tested in vivo, the combination of the Wharton's jelly mesenchymal stem cells and biomaterial led to the growth of artificial skin and oral mucosa - a mucous membrane lining the inside of the mouth.

Explaining their findings, the researchers say:

"Electron microscopy analysis confirmed the presence of epithelial cell-like layers and well-formed cell-cell junctions.

These results suggest that HWJSCs (human umbilical cord Wharton's jelly stem cells) have the potential to differentiate to oral mucosa and skin epithelial cells in vivo and could be an appropriate novel cell source for the development of human oral mucosa and skin in tissue engineering protocols."

Medical News Today recently reported on a study revealing that scientists have created "mini-kidneys" using human stem cells, while other research detailed the discovery of a gene that may be responsible for severe scarring of tissue.

Written by Honor Whiteman

Two-legged robot walks outside at U-Michigan

With prosthetic feet and hips that can swing sideways for stability, the University of Michigan's newest two-legged robot has taken its first steps outside.

The machine named MARLO is the third-generation bipedal robot for Jessy Grizzle, a U-M professor of electrical engineering and computer science. While its predecessors were connected to lateral support booms and confined to the lab, MARLO can venture out into the sunlight.

Preparations for its stroll began just after dawn on a recent Saturday morning, early enough to beat the snow and the foot traffic. Researchers put on their winter coats, hats and gloves and guided MARLO through the exterior door of the lab where it had been walking indoors for several weeks.

The experiments started on a sidewalk, but the walkway proved too narrow for the gantry that follows MARLO to catch it if it falls (the contraption doesn't support the robot). So they moved to a courtyard nearby. But that was uphill, so the researchers then had to adjust the robot's control settings. After several tries, MARLO took 15 steps.

"There were smiles all around," said Brian Buss, a doctoral student in electrical engineering and computer science. "We were happy to see it do so well given that the control was designed and tested on a flat laboratory floor, we are using so few measuring devices at the present time, and the feet are not powered at all. There is clearly room for improvement and we look forward to the challenge."

After two hours of testing, MARLO broke a knee, which isn't uncommon in this line of work. The team moved inside just before the flurries started falling.

MARLO literally adds a new dimension to the bipedal research Grizzle's group has been conducting for more than a decade. The robot MABEL, retired last year, jogged at a nine-minute-mile pace to become the world's fastest two-legged robot with knees. It was also able to maintain footing on uneven terrain and recover from tripping, among other accomplishments—but all on a circular track. MABEL was connected to a bar that prevented it from tipping over sideways.

Before MABEL was the French robot Rabbit, which was similarly mounted to a boom. Rabbit hasn't been used for research since 2005, but in its day, it inspired a new family of gait control algorithms that allowed it to walk at a specific speed and keep its balance through surprise shoves and across changes in terrain.

"My Ph.D. students have accomplished amazing things for robots inside the lab. Now it's time to demonstrate these feats for robots walking outdoors," said Grizzle, the Jerry W. and Carol L. Levin Professor of Engineering.

"MABEL gave us some good hints about what can be done and how to do it. We have to take it from 2D, which refers to a planar robot attached to a boom, to 3D—no boom and no lateral support. It's an enormous challenge, but one my team is ready to take on."

Enabling 3D movement is crucial if the rescue robots Grizzle and his colleagues envision are ever to become reality. While wheels move well across flat surfaces, they can't climb stairs or step over wreckage. Researchers believe that fast, two-legged machines with human-like running form could eventually travel over rough ground and inside the remains of places built for people—burning or collapsed houses, for example.

In the shorter term, the work could lead to advanced prosthetic legs with powered, coordinated knees and ankles. Algorithms developed for Rabbit and MABEL are being adapted for this right now at the University of Texas at Dallas.

Brian Buss, Electrical Systems Engineering PhD Student, readjusts the legs of MARLO, a bipedal robot, in a lab located in the Environmental and Water Resources Engineering Building on November 19, 2013. Image credit: Joseph Xu

MARLO is one of three robots in the "ATRIAS" series designed by Jonathan Hurst, an assistant professor at Oregon State University and Grizzle's long-time collaborator. ATRIAS stands for "Assume the Robot is a Sphere," a rule that Hurst follows during design. One of the three is in Hurst's lab, where researchers are improving its energy efficiency and designing gaits for soft as well as hard ground. The other is at Carnegie Mellon University, where Assistant Professor Hartmut Geyer's group is focusing on running. The U-M team is working on stability outside the lab.

MARLO and its counterparts represent the second bipedal robot model in the world—and the first at a university—with a gait that isn't flat-footed, Grizzle says. The other is Boston Dynamics' ATLAS. In these two machines, the robots' steps start with a heel-strike followed by a roll to the toe. This fluid action is vital for real-world stability, the researchers say.

"If we want robots that can walk outside, over roots, stones, uneven sidewalks or steps, they need a walking gait that allows the foot to have only partial contact with the ground," Grizzle said.

Over the coming year, Grizzle says, MARLO will become a fixture on U-M's campus.

MARLO is funded by the National Science Foundation and the Defense Advanced Research Projects Agency.

Now DHL tests a delivery drone: Airborne robots could be used to deliver medicine to hard-to-reach places

Germany’s express delivery company, Deutsche Post (DHL), is testing a drone that could be used to deliver urgently needed goods such as medicine to remote locations in the future.

The debut of the yellow remote-controlled helicopter follows Amazon's unveiling of its octocopter, which could be used to deliver packages to its customers in the future, replacing postmen and cutting the delivery times of its goods.

The German firm’s small quadcopter flew a package of medicine from a pharmacy in the city of Bonn to the company’s headquarters on the other side of the Rhine river.

The aircraft can carry approximately six-and-a-half pounds (three kilograms) and has four propellers, while Amazon’s robot has eight.

DHL’s yellow drone is known as the ‘Paketkopter’ and flew at a height of 50 metres for one kilometre, taking just two minutes to complete its journey, The Local reported. 

Two men controlled the vehicle using a remote control, but the company said technology is available to send the drones to a specified location using GPS alone.

DHL’s spokesman, Thomas Kutsch, said the flights all this week are strictly a research project to see if the technology works and there are no plans yet to start actual drone deliveries.

The test flights required permission from local aviation authorities.

Amazon plans to deliver goods to customers by drone within five years, despite legal obstacles in the U.S. 

Jeff Bezos, CEO of Amazon, said that he wants to use octocopters to replace postmen and cut delivery times to just 30 minutes.

Customers would have their order dropped onto their front lawn by a machine that would fly from a nearby warehouse with the package clasped in a metal grabber.

Speaking to US TV network CBS, Bezos said: ‘I know this looks like science fiction. It’s not.’

Bezos’ claims raise the prospect of a future where drones constantly criss-cross the sky ferrying parcels around - and perhaps one day even letters, too.

In the interview Bezos said that the drones would be able to carry goods up to 5lb in weight, which covers 86 per cent of the items that the company delivers.

Bezos said that he wants to launch the ‘Amazon Prime Air’ service within four to five years, though that will almost certainly be in the US before anywhere else. 

Courant professors create flying robot

Most flying robot models currently base their design on insect-like wing motions, like those of bees, hummingbirds and moths. Instead of flapping up and down, insects sweep their wings forward, flip them over and bring them back the other way.

Leif Ristroph, assistant professor of Mathematics at NYU’s Courant Institute, has designed and created a jellyfish-like robot with flying capabilities, along with Steve Childress, retired NYU Professor Emeritus of Mathematics, and the assistance of other professors in the Department of Mathematics.

The robot, which Ristroph began building after receiving a postdoctoral fellowship from the National Science Foundation, consists of four Mylar plastic wings framed by carbon fiber rods. A tiny direct-current electric motor spins a crankshaft, pulling the wings in and out to make the robot fly. A gearbox on the motor reduces the rotation speed so that the wings flap 20 times per second, a frequency suitable for flight.

Ristroph started the project about two years ago in the Courant Institute's Applied Mathematics lab. He and Childress iterated through many ideas, testing them by trial and error. Jun Zhang, professor of physics and math, and Mike Shelley, professor of math and neuroscience, contributed their ideas to assist the process. Zhang and Shelley are co-directors of the lab and Childress is a co-founder.

Ristroph’s flying robot is the first to differ from insect-like models, instead using a closing and opening motion to move itself.

Flight stability is a main advantage of the jellyfish design. From studying the robot, Ristroph discovered that if the model is knocked over while in flight, perhaps by a gust of wind, it tends to right itself.

“[Our model] is a much simpler way to fly, where keeping upright is automatically taken care of by the aerodynamics and doesn’t need any sensors or any neural circuits or anything like that,” Ristroph said.

Zhang explained how the robot’s method of flying differs from current flight technology.

“Swimming and flying is a natural phenomenon, and we always find them fascinating. But real-world fliers like an airplane or a ship that involve a steady state approach, they don’t have many moving parts,” Zhang said. “[The model] is important in sense because we better understand nature.”

The robot's flight was filmed using two high-speed cameras to capture all of its wing motions. The team can reconstruct the robot's 3-D flight path based on the two camera views of its flight.
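The reconstruction step can be sketched simply if one assumes the two cameras are calibrated and mounted at right angles, so that the side view yields (x, z) coordinates and the front view yields (y, z). The function and sample data below are hypothetical illustrations, not the lab's actual pipeline:

```python
# Hypothetical sketch: recovering a 3-D flight path from two orthogonal,
# calibrated camera views whose pixel coordinates have already been
# converted to metres. The side camera observes (x, z) per frame and
# the front camera observes (y, z).

def reconstruct_path(side_view, front_view):
    """Merge per-frame (x, z) and (y, z) observations into (x, y, z).

    The z coordinate is seen by both cameras, so it is averaged to
    smooth small calibration disagreements between the two views.
    """
    path = []
    for (x, z_side), (y, z_front) in zip(side_view, front_view):
        path.append((x, y, (z_side + z_front) / 2.0))
    return path

side = [(0.00, 1.00), (0.01, 0.98), (0.02, 0.97)]   # (x, z) per frame
front = [(0.00, 1.00), (0.00, 0.99), (0.01, 0.97)]  # (y, z) per frame
print(reconstruct_path(side, front))
```

A real setup would first undistort each view and convert pixels to metres using the camera calibration; the merge step itself stays this simple only while the views remain orthogonal.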

In the future, miniature fliers could be used for search and rescue, air quality monitoring or surveillance.

Ristroph and Childress continue to analyze the experiment even though they built a functioning flying model.

“Where we’d like to go next is actually understanding how it works. We don’t really understand the aerodynamics of it,” Ristroph said. “We got it to work, we don’t really know why in terms of the physics, so we’re still after that.”

A version of this article appeared in the Tuesday, Dec. 10 print edition. Nicole Del Mauro is a staff writer. Email her at features@nyunews.com.

Artificial Creativity – Rise of the Idea Machines

In the not-too-distant future, machines and robots will not only become more advanced; they will also begin to exhibit aspects of creativity, and may soon exceed people in the ability to produce simple creative outputs. However, while I believe robots will be able to imitate a human's ability to craft creative work, I don't believe this is the same as true creativity.

Skeptical? Let me outline the three technological advances which will lead to these breakthroughs, and then see my predictions of the jobs robots will soon steal from creative people:

1. Modelling of the human mind

A lot of advances in robot technology have been about making robots more independent (able to move through a new space on their own, recognise faces and commands, and so on). The big upcoming leaps come from research into how machines can emulate the human thought process. The EU is investing €1 billion into modelling the human brain over the next 10 years, which will likely include experiments in modelling thought processes.

Even before that, IBM created a new type of knowledge supercomputer called Watson, which managed to win the game show 'Jeopardy!'. Unlike previous supercomputers, which simply searched data faster, Watson had to cope with Jeopardy questions that are often ambiguous and rely on cryptic connotations, so it needed to analyse queries in a more human-like manner, and it did so very successfully.

 

Finally, one of my favorite experiments is called Yossarianlives, a metaphorical search engine. Instead of searching for specific data, it lets you search for a concept and returns what its databank of internet searches says are related metaphorical concepts. It's close to a digital brainstorming session.

2. Machine Learning

In order to make machines more independent, many researchers are looking into robots building their own awareness of their surroundings over time. The aim is to reduce the need for humans to programme all of the information robots need in advance. So now there are machines which learn new information in the same way that toddlers do, and which learn about their own bodies in order to work out how to move. Some can even begin to model what is going on in the minds of the people they are interacting with.

While that is interesting, the real changes will come from letting learning computers loose on the internet's data so that they can learn human concepts. Last year, Google created a neural network of 16,000 computers and fed it random image thumbnails from YouTube. Without any previous knowledge, it was able to form a concept of similarity between many images and to learn what the most common object was. In case you were guessing, it was a cat. Thanks, YouTube. Given more processing power and time, these machines will soon look at objects and see not only the descriptions humans have programmed, but the meaning people give to them.

3. Big data, predictions and instant experimentation

‘Big Data’ is one of the biggest trends in analytics of the past few years, already doing everything from predicting what you will search for in Google Autocomplete, to which toaster Amazon should recommend to you, to which movies Netflix thinks you would like to see on a Tuesday evening. By feeding a system enough data, it is able to discern underlying trends more effectively than a person ever could and to make predictions about what may work in the future. It is already predicting what music you will listen to.

Pandora's Music Genome Project gets input from music experts on thousands of songs, including how the lyrics work, aspects of the melody, genre, style, speed, and impact. It also runs thousands of experiments with its millions of users when producing a personal track list, streamed as a radio station, and gets real-time feedback on how successful it was from how each user interacts with the suggested music. This helps it figure out how people react to and enjoy aspects of music in different settings, and so it is able to produce a list of new music a customer may like.
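The feature-vector idea behind such systems can be illustrated in a few lines. This is a toy sketch in the spirit of the Music Genome Project, not Pandora's actual algorithm; the song names and feature values are invented:

```python
# Toy recommender: rank songs by cosine similarity over
# expert-annotated feature vectors. All data here is invented.
import math

def cosine(a, b):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Hypothetical features: [tempo, lyric density, distortion, major-key feel]
catalogue = {
    "song_a": [0.9, 0.2, 0.8, 0.1],
    "song_b": [0.8, 0.3, 0.7, 0.2],
    "song_c": [0.1, 0.9, 0.0, 0.8],
}

def recommend(seed, n=2):
    """Return the n catalogue songs most similar to the seed song."""
    ranked = sorted(
        (s for s in catalogue if s != seed),
        key=lambda s: cosine(catalogue[seed], catalogue[s]),
        reverse=True,
    )
    return ranked[:n]

print(recommend("song_a"))  # song_b ranks far above song_c
```

A production system layers live listener feedback on top of this static similarity, reweighting features per user, which is where the thousands of experiments come in.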

But what about the next evolution of big data? Computers are already able to understand voice, language structure and word meanings. If big data analysed the lyrics of every song released in the last 100 years and saw how each fared, it could likely find the underlying patterns and predict new lyrics. More than that, it could instantly test them with people to see how they fared. Imagine a programme able to take a concept, find metaphors for it, use big data to predict potential lyrics which would be popular, and then produce 100 slightly different versions. It could produce a song by “singing” the lyrics with a computer voice over a synthesised track, and release each version on YouTube or a radio streaming service. Based on user feedback, it would then amend the content and style, run the experiment again, get more feedback, and repeat until it had a song which users loved, then release it to its iTunes account, without any human ever writing a note.
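The generate-test-refine loop described above can be caricatured in a few lines. The word list, the "audience" function, and its hidden preferences below are all invented stand-ins for a real generative model and live listener feedback:

```python
# Toy sketch of a generate-test-refine lyric loop. A real system would
# generate lyrics with a learned model and score them with live user
# feedback; here the "audience" secretly prefers certain words.
import random

random.seed(0)
WORDS = ["love", "baby", "night", "heart", "fire"]
ON_TREND = {"love", "baby", "heart"}  # the mock audience's hidden taste

def audience_score(lyric):
    """Stand-in for real user feedback: fraction of words 'on trend'."""
    return sum(1 for w in lyric if w in ON_TREND) / len(lyric)

def mutate(lyric):
    """Produce a variant by swapping one randomly chosen word."""
    variant = list(lyric)
    variant[random.randrange(len(variant))] = random.choice(WORDS)
    return variant

best = [random.choice(WORDS) for _ in range(3)]
for _ in range(200):  # release variants, keep the best-received one
    variant = mutate(best)
    if audience_score(variant) >= audience_score(best):
        best = variant

print(best, audience_score(best))
```

Because the loop only ever keeps a variant that scores at least as well as the incumbent, the measured popularity of the "song" can never decrease, which is the essential property of the iterate-and-refine scheme sketched in the text.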

Similarly, big data could be used to analyse links between all forms of media and internet chatter, and their effect on the success of media released afterwards. Would there have been a way to predict the success of vampire-based media earlier? Could it predict the rise of a music genre representing the attitudes of a demographic, as grunge did in the 90s? How far in advance could you predict what will be popular? Big data will eventually enable all of this.

So what comes next?

While I do believe that machines will soon replace certain aspects of the creative process, I don't think they will ever be truly creative. This is due to the distinct difference between creativity (the generation of new and valuable ideas) and craft (turning those ideas into something tangible). Machines will overtake humans in craft, and in many cases already have (manufacturing); they can produce the 'What?' and the 'How?', but not the 'Why?'. Until there is a machine which has gone beyond treating inputs as data to treating data as experiences, all of its information, no matter how much analysis went into it from however many millions of sources, is still second-hand from humans.

That being said, here are my predictions of creative jobs that will be at least partially replaced by machines in the next decade:

  • Advertising: Programs will produce hundreds or thousands of designs, slogans and so on, and try them out at small scale on the internet before a full campaign launch. Based on user reaction they will refine the campaign and iterate until an ideal message is found.
  • Music: The first fully digitally written, sung and produced song will be released. It will likely have very generic lyrics about ‘Love’, ‘Beauty’ and use the word ‘Baby’ a lot. But the second album will show a lot more nuance and variety. And the live performances will have a lot of lighting effects but not much soul.
  • Architecture & Design: By providing the exact functionality required from a building or product, a programme will produce several very different designs which all meet the underlying requirements.
  • Writing, Screenwriting & TV: By finding underlying trends in public opinion, software will be able to predict which books, films and TV shows will be popular in one, two and three years' time. It will then compare this against previous films to suggest story arcs which the book, film or TV show should follow to enhance the likelihood of success.