
Made In Space and NanoRacks Take First Steps Towards On-Orbit Satellite Manufacturing, Assembly and Deployment


Made In Space, the space manufacturing company, and NanoRacks, the premier provider of commercial low-Earth orbit services, are partnering to provide a transformative new service for CubeSat developers: the Stash & Deploy satellite deployment service. The Stash & Deploy service will leverage NanoRacks’ heritage in CubeSat deployment and Made In Space’s in-space additive manufacturing capabilities to deliver on-demand satellite manufacturing, assembly, and deployment in the space environment. A variety of standard and customer-specific satellite components will be cached aboard a satellite deployment platform, such as the International Space Station. These components are “stashed” for rapid manufacture of CubeSats. Made In Space’s Additive Manufacturing Facility will be used to create custom structures, optimized for both the space environment and customer needs.

“This is a fundamental shift for satellite production,” says Andrew Rush, president of Made In Space. “In the near future, we envision that satellites will be manufactured quickly and to the customer’s exact needs, without being overbuilt to survive launch or have to wait for the next launch.”

As envisioned, customers will easily and quickly design their satellite or request a satellite be designed based on their requirements. Once designed, the optimized structure is created on orbit and the necessary components are integrated. The satellite will then be deployed into low Earth orbit. The entire assembly and deployment process will occur in a fraction of the time necessary to build, manifest, launch and deploy satellites from the ground. For the first time, incredibly valuable responsiveness will be available to satellite operators. “Stash and deploy opens a new chapter in space utilization,” believes Jeffrey Manber, CEO of NanoRacks. “Looking out a few years, this option may be more desirable than launch and deploy.”

The Stash & Deploy service makes on-orbit assembly and deployment of small satellites a powerful option for operators looking to push the envelope of modern space development and deploy hardware faster than traditional CubeSat deployment. “Made In Space was founded with the belief that one day entire spacecraft will be manufactured in space. With Stash & Deploy, NanoRacks and Made In Space make the first step towards this goal,” says Made In Space’s CTO and Co-Founder, Jason Dunn.

The first elements of the Stash & Deploy service will be available in Q1 2016. To learn more about the Stash & Deploy service, contact rpournelle@nanoracks.com or business@madeinspace.us.

About Made In Space

Made In Space, Inc. (MIS) was founded in 2010 as the world’s first space manufacturing company. MIS was contracted by NASA to design, build, and operate the 3D Printing In Zero-G Experiment (3D Print) on the International Space Station (ISS) in 2014. 3D Print became the first machine to manufacture off-Earth. Controlled from a mission operations center at MIS HQ in the NASA Ames Research Park, the device allows for hardware to be digitally sent to space and printed out. By the end of 2015 the company will launch a new 3D printer, the Additive Manufacturing Facility (AMF), to provide hardware manufacturing services to both NASA and the U.S. National Laboratory onboard the ISS. As the first commercially available manufacturing service in space, the AMF will put the capability of off-world manufacturing in the hands of space developers everywhere.

About NanoRacks

NanoRacks LLC was formed in 2009 to provide commercial hardware and services for the U.S. National Laboratory onboard the International Space Station via a Space Act Agreement with NASA. NanoRacks’ main office is in Houston, Texas, right alongside the NASA Johnson Space Center. The Business Development office is in Washington, DC, and NanoRacks now has a new office in Silicon Valley, California. The Company has grown into the Operating System for Space Utilization by having the tools, the hardware and the services to allow other companies, organizations and governments to realize their own space plans.
To date, over 200 payloads have been deployed by the Company on the International Space Station, and our customer base includes the European Space Agency (ESA), the German Space Agency (DLR), the American space agency (NASA), US government agencies, Planet Labs, Urthecast, Space Florida, NCESSE, Virgin Galactic, pharmaceutical drug companies, and organizations in Vietnam, the UK, Romania and Israel. Our customer base has propelled NanoRacks into a leadership position in understanding the emerging commercial market for low-Earth orbit utilization.


3D-printed brain tissue


Researchers in Australia have developed a new way of printing 3D structures that closely resemble layered brain tissue

In the latest effort to build an artificial laboratory model of the brain, Australian researchers have developed a novel method for constructing layered biological structures that look just like cerebral cortex tissue, using a handheld 3D printer.

Neuroscientists rarely get the opportunity to study the human brain directly, and so work on cells or tissue slices that have been dissected from animals and grown in Petri dishes. These in vitro methods are useful for studying development and processes such as neurodegeneration and cell-to-cell signalling, but are severely limited in that they do not resemble the complex three-dimensional structure of the brain.

To overcome these obstacles, Rodrigo Lozano of the University of Wollongong in Australia and his colleagues used 3D printing, a manufacturing process that involves creating three-dimensional objects by laying down successive layers of material one on top of the other.

The researchers harvested immature cortical neurons from embryonic mice and encapsulated them within a natural gellan gum polymer hydrogel to create a ‘bio-ink’ cell suspension. As well as being cheap and biocompatible, gellan gum protects the cells it encapsulates, is porous enough for them to exchange nutrients and waste materials with the surrounding growth medium, and solidifies effectively at room temperature.

Lozano and his colleagues fabricated the brain tissue with a simple handheld 3D printer, then used scanning electron microscopy to probe the internal structure of the printed structures, and fluorescent antibody staining combined with confocal microscopy to examine the cells within them.

This revealed that the hydrogel supported the survival and attachment of the neurons, allowing them to grow and extend their fibres over distances of several hundred microns, so that five days later they had an appearance characteristic of mature cortical cells and had formed layered structures resembling the cerebral cortex.

Recently, several groups have succeeded in growing artificial miniature models of the brain called cerebral organoids, but these can only grow to about 4mm in diameter because they lack an organized blood supply and, because they self-assemble from stem cells, do not lend themselves to being examined in any detail.

The new method is, therefore, something of an advance on this, but Lozano and his colleagues emphasise that it was not developed in order to grow replacement brain parts in the lab. Rather, it could offer researchers a cheap new method for testing drugs and studying nerve cell behaviour, injury, and disease.

Reference

Lozano, R., et al. (2015). 3D printing of layered brain-like structures using peptide modified gellan gum substrates. Biomaterials, 67: 264-273.

Scientists make a robot that can have babies


Everyone who thinks robots are going to take over the world might be getting a lot more frightened: Scientists have created a machine that's able to have babies. Sort of.

In an experiment designed to show how robots can learn and evolve, researchers in Cambridge and Zurich programmed a robot arm—or "mother"—with an algorithm to create a device made out of blocks containing motors—its "child".

The blocks are assembled into a structure by the robot arm, and the motors are turned on. A camera detects how far the blocks are able to travel. The robot arm sees this, and then modifies the next "baby" to try to make it go further, learning from the mistakes and good traits of the last one.
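To make that evolutionary loop concrete, here is a minimal sketch in Python. The genome encoding, population size, mutation rate and the toy fitness function are all illustrative assumptions; in the real experiment the "mother" physically builds each child and a camera measures how far it travels.

```python
import random

GENOME_LEN = 3    # motorized blocks per "child" (illustrative)
POPULATION = 10   # children evaluated per generation (illustrative)

def random_genome():
    # Hypothetical genome: one motor parameter per block.
    return [random.uniform(-1.0, 1.0) for _ in range(GENOME_LEN)]

def mutate(genome, rate=0.2):
    # Perturb each motor parameter with probability `rate`.
    return [g + random.gauss(0.0, 0.1) if random.random() < rate else g
            for g in genome]

def distance_travelled(genome):
    # Stand-in for the physical test: the real system assembles the
    # blocks, switches the motors on, and a camera measures travel.
    return sum(abs(g) for g in genome) - 0.5 * max(abs(g) for g in genome)

best = random_genome()
for generation in range(10):
    # The "mother" builds mutated variants of the best child so far,
    # measures each one, and carries the fittest traits forward.
    children = [mutate(best) for _ in range(POPULATION)]
    best = max(children + [best], key=distance_travelled)
    print(generation, round(distance_travelled(best), 3))
```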

This is all done without human intervention. The research was published in the journal PLOS One.

Luzius Brodbeck, one of the researchers from the Institute of Robotics and Intelligent Systems at ETH Zurich, said robots are normally programmed to do just one thing.

"Machines usually build the same thing and what it will do and it will do it over again. What we did here was use a genetic algorithm so each operation is different," Brodbeck told CNBC by phone.

The scientist said that the technology could be used in areas where robots need to carry out autonomous tasks; for example in remote locations and even in disaster response.

The experiment may seem like something out of a science fiction film, and technologists have expressed concerns about the future of robotics.

Elon Musk and Bill Gates are just two figures concerned about the rise of artificial intelligence (AI). Last month, a letter signed by figures including Musk and Stephen Hawking warned about the potential damage AI-controlled weapons could cause.

But Brodbeck said the fear about systems like his may be overblown.

"I think it makes sense to think about this, but I personally am not afraid that robots will take over the world," the scientist told CNBC.

Activity of entire central nervous system captured on film for first time


The neural activity of an entire central nervous system has been captured in a fairly complex animal for the first time.

The video footage shows neurons firing in the nervous system of a fruit fly larva, Drosophila melanogaster, as it crawls back and forth.

This video shows the neural activity (yellow/red) throughout the entire central nervous system (grey) of a Drosophila larva as it crawls backwards. 
Credit: Keller et al. Nature Communications

Scientists at the Howard Hughes Medical Institute in Ashburn, Virginia, imaged patterns of motor neuron activity in the organism as it crawled backwards and forwards. 

The images were captured five times per second for up to an hour, at a resolution high enough to see single neurons firing.

“We are curious to see neural activity as behaviours are being produced,” said Philipp Keller, an author of the study in Nature Communications.

A 2D video showing motor neurons firing (yellow/red) throughout the entire central nervous system (grey) of a Drosophila larva. Credit: Keller et al. Nature Communications

“By imaging different parts of the nervous system at the same time, we can see how behaviours are controlled and then build models of how it all works.”

The study advances on previous work, which was limited to whole-animal imaging of tiny creatures, such as nematode worms, or to parts of more complex animals, such as the brains of zebrafish larvae.

The team measured the neural activity of the millimetre-long larva with a technique called light-sheet microscopy. The procedure illuminates the specimen with laser light from both sides, while twin cameras record images from the front and back. The scientists genetically modified the larva’s neurons to make them fluoresce when they fire. 

Light-sheet microscopy produces two views of the specimen, usually its back and belly, which build up to produce high-resolution 3D images. They could help scientists understand how the brain and nerve cord interact to generate behaviour.

The work lays the foundations for further studies in fruit flies and larger organisms by providing a new method to image their entire central nervous systems in real time.

The researchers are now looking at neural activity in adult fly brains, zebrafish and early-stage mouse embryos.

The 3D Printed OctaWorm Robot Can Go Where No Other Robot Can


Imagine a collapsed building that has been reduced to a pile of tangled rubble, steel beams and debris. Now try to imagine how many living people may be trapped under the thousands of tons of that collapsed building, clinging desperately to life. How would you reach them? How would you even know where they are? How would you keep them alive while you shifted piles of rubble off of them? This is, of course, a problem that rescue workers have faced far too many times, and while they do their very best, even the best rescue worker will tell you that there are no easy answers to those questions; realistically, quite a few of the people trapped will not make it out of the rubble alive.

But imagine a small robot that can squeeze itself into the gaps, crevices and cracks of piles of rubble. A robot that can shift its actual size, expanding when it needs to, and constricting when things get tight. That is the concept behind the OctaWorm, a small, deformable octahedron robot that is capable of exploring and negotiating all sorts of spaces that most traditional rescue robots are incapable of traversing. The project is the result of an international collaboration between the University of Chile and the University of Akron, and OctaWorm itself was designed by Juan Cristóbal Zagal.

As the director of the robotics laboratory at the University of Chile in Santiago, Zagal is charged with creating and developing new types of robotic mechanisms and technology. The goal of the OctaWorm project is to develop a new way to use robotic motion to access and navigate confined spaces such as cracks and voids found in disaster environments, as well as pipes and air ducts. Zagal also envisions tiny, future versions that can be used for medical applications, such as navigating inside of the human body. Although robots of that size are quite a ways away from the current prototype, the design is remarkably scalable.

“The current version of the robot is capable of traveling inside a pipe. It is also capable of dealing with changes on the internal diameter of the pipe. The functional symmetry of the robot allows it to travel along T, L and Y joints in pipelines. Traditional in-pipe robots have many problems for dealing with these types of junctions. In contrast the deformable octahedral robot can simply squeeze into junctions,” Zagal told us via email.

The OctaWorm that you see in the video is the third prototype of the deformable robot, and it has gone through a lot of changes since Zagal and his team’s first attempt. The first iteration of OctaWorm moved its joints into place with small syringe-driven hydraulics, while the second used electronic actuators. The current version upgrades the joints to much more reliable pneumatically driven servo motors. While the robot is currently operated via a wired controller created by Zagal and his University of Akron partners Jeff Davis and Daniel Deckler, eventually OctaWorm will be controlled wirelessly.

The basic structure of the robot itself is constructed with mostly 3D printed parts and some aluminum rods to add durability to the legs. It is run with an Arduino board, an Arduino-compatible shield that controls the relays and three pneumatic 5-way solenoid valves. Because the robot is pneumatically driven, Zagal also used high-quality rapid pneumatic connectors and durable plastic tubing to connect it to the controller. But of course the star of the show, and the key to being able to make a working prototype, are the 3D printed components.

“The use of 3D printing was critical for producing joinery parts that allow connecting the linear actuators. We also used 3D printing for producing high definition ball joints. The ball joints were fabricated using a Stratasys Objet 3D printer. The remaining 3D printed parts were produced with an FDM 3D printer,” Zagal explained.

The rubbery balls on the end of each leg are what provide OctaWorm with the ability to grip onto and traverse a wide variety of terrains and materials. The 3D printed ball joints control the deformation motion and allow it to assume a wide variety of shapes and configurations, letting it squeeze into virtually any space.

Just imagine rescue workers equipped with an army of OctaWorms, each outfitted with an infrared camera and medical equipment. They could be set loose in a disaster area and explore the ruins looking for signs of life. A fleet of small, inexpensive OctaWorm robots could potentially save countless lives that could otherwise be lost while waiting for rescue workers to find them by randomly digging through rubble. They could even be outfitted to carry small parcels of water or medical supplies to help trapped survivors last long enough to be saved.

You can read more about the OctaWorm project over on Juan Cristóbal Zagal’s website, and make sure that you let us know what you think of this new pipe-crawling robot over on our 3D Printed OctaWorm Robot forum thread at 3DPB.com.

Robot Weapons: What’s the Harm?


LAST month over a thousand scientists and tech-world luminaries, including Elon Musk, Stephen Hawking and Steve Wozniak, released an open letter calling for a global ban on offensive “autonomous” weapons like drones, which can identify and attack targets without having to rely on a human to make a decision.

The letter, which warned that such weapons could set off a destabilizing global arms race, taps into a growing fear among experts and the public that artificial intelligence could easily slip out of humanity’s control — much of the subsequent coverage online was illustrated with screen shots from the “Terminator” films.

The specter of autonomous weapons may evoke images of killer robots, but most applications are likely to be decidedly more pedestrian. Indeed, while there are certainly risks involved, the potential benefits of artificial intelligence on the battlefield — to soldiers, civilians and global stability — are also significant.

The authors of the letter liken A.I.-based weapons to chemical and biological munitions, space-based nuclear missiles and blinding lasers. But this comparison doesn’t stand up under scrutiny. However high-tech those systems are in design, in their application they are “dumb” — and, particularly in the case of chemical and biological weapons, impossible to control once deployed.

A.I.-based weapons, in contrast, offer the possibility of selectively sparing the lives of noncombatants, limiting their use to precise geographical boundaries or times, or ceasing operation upon command (or the lack of a command to continue).

Consider the lowly land mine. Those horrific and indiscriminate weapons detonate when stepped on, causing injury, death or damage to anyone or anything that happens upon them. They make a simple-minded “decision” whether to detonate by sensing their environment — and often continue to do so, long after the fighting has stopped.

Now imagine such a weapon enhanced by an A.I. technology less sophisticated than what is found in most smartphones. An inexpensive camera, in conjunction with other sensors, could discriminate among adults, children and animals; observe whether a person in its vicinity is wearing a uniform or carrying a weapon; or target only military vehicles, instead of civilian cars.

This would be a substantial improvement over the current state of the art, yet such a device would qualify as an offensive autonomous weapon of the sort the open letter proposes to ban.

Then there’s the question of whether a machine — say, an A.I.-enabled helicopter drone — might be more effective than a human at making targeting decisions. In the heat of battle, a soldier may be tempted to return fire indiscriminately, in part to save his or her own life. By contrast, a machine won’t grow impatient or scared, be swayed by prejudice or hate, willfully ignore orders or be motivated by an instinct for self-preservation.


Indeed, many A.I. researchers argue for speedy deployment of self-driving cars on similar grounds: Vigilant electronics may save lives currently lost because of poor split-second decisions made by humans. How many soldiers in the field might die waiting for the person exercising “meaningful human control” to approve an action that a computer could initiate instantly?

Neither human nor machine is perfect, but as the philosopher B. J. Strawser has recently argued, leaders who send soldiers into war “have a duty to protect an agent engaged in a justified act from harm to the greatest extent possible, so long as that protection does not interfere with the agent’s ability to act justly.” In other words, if an A.I. weapons system can get a dangerous job done in the place of a human, we have a moral obligation to use it.


Of course, there are all sorts of caveats. The technology has to be as effective as a human soldier. It has to be fully controllable. All this needs to be demonstrated, of course, but presupposing the answer is not the best path forward. In any case, a ban wouldn’t be effective. As the authors of the letter recognize, A.I. weapons aren’t rocket science; they don’t require advanced knowledge or enormous resource expenditures, so they may be widely available to adversaries that adhere to different ethical standards.

The world should approach A.I. weapons as an engineering problem — to establish internationally sanctioned weapons standards, mandate proper testing and formulate reasonable post-deployment controls — rather than by forgoing the prospect of potentially safer and more effective weapons.

Instead of turning the planet into a “Terminator”-like battlefield, machines may be able to pierce the fog of war better than humans can, offering at least the possibility of a more humane and secure world. We deserve a chance to find out.

Jerry Kaplan, who teaches about the ethics and impact of artificial intelligence at Stanford, is the author of “Humans Need Not Apply: A Guide to Wealth and Work in the Age of Artificial Intelligence.”

A version of this op-ed appears in print on August 17, 2015, on page A19 of the New York edition with the headline: Robot Weapons: What’s the Harm?

NASA testing robot with sticky gecko feet (VIDEO)


NASA, the US space agency, has equipped a climbing robot with sticky feet – a technology modeled on geckos and spiders. The robots could do repair work on spaceships, and be used back on Earth.

NASA’s Jet Propulsion Laboratory has already successfully used its “gecko grippers” to stick a 100-kilogram person to a wall and manipulate a 10-kilogram cube during tests conducted in micro-gravity. At Earth gravity, the latest generation grippers are able to hold up an object weighing up to 16 kg. The grippers, which are specially designed flat panels, are to be attached to a LEMUR 3 (Limbed Excursion Mechanical Utility Robot) prototype.

The gecko gripper technology, which is being developed simultaneously at various cutting-edge institutions around the world, including the US defense research agency DARPA and Stanford University, appears at first glance to be a work of magic. Unlike conventional sticky tape, there is no stickiness, and the gripper does not wear out with use – as NASA discovered after sticking a surface to foreign objects 30,000 times in a row with barely any loss of function. And in contrast to Velcro, there is no need for a second “mating” surface to make the object stick.

The principle behind the technology is van der Waals forces. Like the feet of spiders, the gecko gripper actually consists of thousands of microscopic hairs. Because electrons are not always evenly distributed around an atom, molecules form fleeting electrical dipoles, and the stickiness is created by the attraction between these temporary positive and negative charges. These forces only operate within nanometers, hence the need for tiny hairs. When millions of these work together, they can create a powerful cumulative force: a spider can lift 170 times its own weight while climbing along a smooth vertical surface.
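For a sense of scale, a standard textbook expression (not taken from the article) for the van der Waals attraction between a single hemispherical hair tip of radius R and a flat surface at separation D is

```latex
F_{\mathrm{vdW}} \approx \frac{A R}{6 D^{2}}
```

where A is the material-dependent Hamaker constant, typically of order 10^-19 J. With R and D measured in nanometres, each hair contributes only nanonewtons of force, which is why millions of hairs acting in parallel are needed to support macroscopic loads.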

The stickiness is increased by simply pushing down harder, and making the hairs bend more.

“This is how the gecko does it, by weighting its feet,” said NASA engineer Aaron Parness.

The properties of the wonder-material, whose effectiveness is not hampered by temperature, radiation or other factors, would come in very handy for astronauts working at the International Space Station. NASA has already developed a series of patches that can be used to stick various objects, like clipboards and handheld devices, to a wall in zero-gravity conditions.

Besides being used in developing more capable robots, NASA believes the technology could even revolutionize how spaceships interact with their outside environment.

“We might eventually grab satellites to repair them, service them, and we also could grab space garbage and try to clear it out of the way,” explained Parness.

Google's Atlas robot takes a forest hike


Google's Boston Dynamics division has released footage of its Atlas humanoid robot taking a walk in a forest.

The 6ft 2in (1.9m) machine required a power tether to keep it charged, but was able to keep itself balanced despite the unpredictable terrain.

"Our focus is on balance and dynamics - working a little bit the way people and animals do, where you move quickly in order to keep yourself stabilised," explained Boston Dynamics' founder Marc Raibert to the Fab 11 conference earlier this month.

"Out in the world is just a totally different challenge than in the lab. You can't predict what it's going to be like.

"We're working on a version that doesn't have that [power tether] and we're making pretty good progress on making it so it has mobility that's within shooting range of [a human's]."

Boston Dynamics has shown off other models tackling similar outside courses in the past, but they had more legs to help them stay upright.

Earlier in the year, other versions of Atlas were entered into the Pentagon's Darpa Robotics Challenge - a contest that also involved navigating rough terrain - however, another model from South Korea took the top prize.


Scientists discover atomic-resolution details of brain signaling


Scientists have revealed never-before-seen details of how our brain sends rapid-fire messages between its cells. They mapped the 3-D atomic structure of a two-part protein complex that controls the release of signaling chemicals, called neurotransmitters, from brain cells. Understanding how cells release those signals in less than one-thousandth of a second could help launch a new wave of research on drugs for treating brain disorders.

The experiments, at the Linac Coherent Light Source (LCLS) X-ray laser at the Department of Energy's SLAC National Accelerator Laboratory, build upon decades of previous research at Stanford University, Stanford School of Medicine and SLAC. Researchers reported their latest findings today in the journal Nature.

"This is a very important, exciting advance that may open up possibilities for targeting new drugs to control neurotransmitter release. Many mental disorders, including depression, schizophrenia and anxiety, affect neurotransmitter systems," said Axel Brunger, the study's principal investigator. He is a professor at Stanford School of Medicine and SLAC and a Howard Hughes Medical Institute investigator.

"Both parts of this protein complex are essential," Brunger said, "but until now it was unclear how its two pieces fit and work together."

Unraveling the Combined Secrets of Two Proteins

The two protein parts are known as neuronal SNAREs and synaptotagmin-1.

Earlier X-ray studies, including experiments at SLAC's Stanford Synchrotron Radiation Lightsource (SSRL) nearly two decades ago, shed light on the structure of the SNARE complex, a helical protein bundle found in yeasts and mammals. SNAREs play a key role in the brain's chemical signaling by joining, or "fusing," little packets of neurotransmitters to the outer edges of neurons, where they are released and then dock with chemical receptors in another neuron to trigger a response.

A 'Smoking Gun' for Neurotransmitter Release

In this latest research, the scientists found that when the SNAREs and synaptotagmin-1 join up, they act as an amplifier for a slight increase in calcium concentration, triggering a gunshot-like release of neurotransmitters from one neuron to another. They also learned that the proteins join together before they arrive at a neuron's membrane, which helps to explain how they trigger brain signaling so rapidly.

"The neuron is not building the 'gun' as it sits there on the membrane - it's already there," Brunger said.

The team speculates that several of the joined protein complexes may group together and simultaneously interact with the same vesicle to efficiently trigger neurotransmitter release, an exciting area for further studies.

"The structure of the SNARE-synaptotagmin-1 complex is a milestone that the field has awaited for a long time, and it sets the framework for a better understanding of the system," said James Rothman, a professor at Yale University who discovered the SNARE proteins and shared the 2013 Nobel Prize in Physiology or Medicine.

Thomas C. Südhof, a professor at the Stanford School of Medicine and Howard Hughes Medical Institute investigator who shared that 2013 Nobel Prize with Rothman, discovered synaptotagmin-1 and showed that it plays an important role as a calcium sensor and calcium-dependent trigger for neurotransmitter release.

"The new structure has identified unanticipated interfaces between synaptotagmin-1 and the neuronal SNARE complex that change how we think about their interaction by revealing, in atomic detail, exactly where they bind together," Südhof said. "This is a new concept that goes much beyond previous general models of how synaptotagmin-1 functions."

Using Crystals, Robotics and X-rays to Advance Neuroscience

To study the joined protein structure, researchers in Brunger's laboratory at the Stanford School of Medicine found a way to grow crystals of the complex. They used a robotic system developed at SSRL to study the crystals at SLAC's LCLS, an X-ray laser that is one of the brightest sources of X-rays on the planet. SSRL and LCLS are DOE Office of Science User Facilities.

The researchers combined and analyzed hundreds of X-ray images from about 150 protein crystals to reveal the atomic-scale details of the joined structure.

SSRL's Aina Cohen, who oversaw the development of the highly automated platform used for the neuroscience experiment, said, "This experiment was the first to use this robotic platform at LCLS to determine a previously unsolved structure of a large, challenging multi-protein complex." The study was also supported by X-ray experiments at SSRL and at Argonne National Laboratory's Advanced Photon Source.

"This is a good example of how advanced tools, instruments and X-ray methods are providing us new insights into what are truly complex mechanisms," Cohen said.

Brunger said future studies will explore other protein interactions relevant to neurotransmitter release. "What we studied is only a subset," he said. "There are many other factors interacting with this system and we want to know what these look like. This by no means is the end of the story."


More information: Architecture of the synaptotagmin-SNARE machinery for neuronal exocytosis, DOI: 10.1038/nature14975 

Journal reference: Nature  

Provided by: SLAC National Accelerator Laboratory 

A brain-computer interface for controlling an exoskeleton


Scientists working at Korea University in Seoul and TU Berlin in Germany have developed a brain-computer control interface for a lower limb exoskeleton by decoding specific signals from within the user's brain.

Using an electroencephalogram (EEG) cap, the system allows users to move forwards, turn left and right, sit and stand simply by staring at one of five flickering light emitting diodes (LEDs).

The results are published today (Tuesday 18th August) in the Journal of Neural Engineering.

Each of the five LEDs flickers at a different frequency, and when the user focusses their attention on a specific LED this frequency is reflected within the EEG readout. This signal is identified and used to control the exoskeleton.
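In essence, the decoder has to find which flicker frequency dominates the EEG spectrum. A minimal sketch of that step in Python follows; the sampling rate, the five candidate frequencies and the single-channel FFT approach are illustrative assumptions, not the paper's actual values (and, as described below, the real system must also reject exoskeleton noise).

```python
import numpy as np

FS = 256  # EEG sampling rate in Hz (assumed)
# Flicker frequency -> exoskeleton command (frequencies are hypothetical).
COMMANDS = {9.0: "forward", 11.0: "turn left", 13.0: "turn right",
            15.0: "sit", 17.0: "stand"}

def decode_command(eeg_segment):
    """Return the command whose flicker frequency dominates the spectrum."""
    power = np.abs(np.fft.rfft(eeg_segment)) ** 2
    freqs = np.fft.rfftfreq(len(eeg_segment), d=1.0 / FS)

    def band_power(f0, width=0.5):
        # Sum spectral power in a narrow band around one candidate frequency.
        band = (freqs > f0 - width) & (freqs < f0 + width)
        return power[band].sum()

    return COMMANDS[max(COMMANDS, key=band_power)]

# Example: a noisy 2-second segment containing an 11 Hz SSVEP component.
t = np.arange(2 * FS) / FS
segment = np.sin(2 * np.pi * 11.0 * t) + 0.8 * np.random.randn(t.size)
print(decode_command(segment))  # usually prints "turn left"
```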

A key problem has been separating these precise brain signals from those associated with other brain activity, and the highly artificial signals generated by the exoskeleton.

"Exoskeletons create lots of electrical 'noise'" explains Klaus Muller, an author on the paper. "The EEG signal gets buried under all this noise -- but our system is able to separate not only the EEG signal, but the frequency of the flickering LED within this signal."

Although the paper reports tests on healthy individuals, the system has the potential to aid sick or disabled people.

"People with amyotrophic lateral sclerosis (ALS) [motor neuron disease], or high spinal cord injuries face difficulties communicating or using their limbs" continues Muller. "Decoding what they intend from their brain signals could offer means to communicate and walk again."

The control system could serve as a technically simple and feasible add-on to other devices, with EEG caps and hardware now emerging on the consumer market.

It only took volunteers a few minutes to be trained to operate the system. Because of the flickering LEDs, they were carefully screened for epilepsy prior to taking part in the research. The researchers are now working to reduce the 'visual fatigue' associated with longer-term use of such systems.

"We were driven to assist disabled people, and our study shows that this brain control interface can easily and intuitively control an exoskeleton system -- despite the highly challenging artefacts from the exoskeleton itself" concludes Muller.

Story Source:

The above post is reprinted from materials provided by the Institute of Physics. Note: Materials may be edited for content and length.

Journal Reference:

  1. No-Sang Kwak, Klaus-Robert Müller, Seong-Whan Lee. A lower limb exoskeleton control system based on steady state visual evoked potentials. Journal of Neural Engineering, 2015; 12 (5): 056009. DOI: 10.1088/1741-2560/12/5/056009

Scientists stimulate mouse brains with wireless 'charger'


For reasons we'll soon explain, turning on a light inside a mouse's head can help scientists map brain function. It's easy to implant an LED in a mouse's brain, but how to power it? Until now, the mice either needed to be tethered to a fiberoptic cable or fitted with heavy wireless charging devices. However, Stanford scientists managed to build an implant that's not only lightweight, but able to receive consistent amounts of wireless energy.

How can light change brain function? Using "optogenetics," scientists can genetically alter neurons with green algae genes to make them responsive to light. By modifying only select parts of the brain, researchers can see how those regions affect behavior. The Stanford team created peppercorn-sized implants that contain a power receiving coil, circuit and LED, all weighing a nearly negligible 20 to 50 milligrams. When the mouse is placed in an electromagnetic chamber, the implant coil harvests RF energy to power the light, which in turn stimulates the targeted brain region.
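The receiving coil only harvests energy efficiently when it is tuned to the chamber's transmission frequency. As a general illustration (the study's actual circuit values aren't given here), a coil of inductance L paired with a tuning capacitance C resonates at

```latex
f_{0} = \frac{1}{2\pi\sqrt{LC}}
```

so the smaller the coil, the higher the radio frequency at which it must be driven; operating at a suitably high frequency is part of what allows the whole implant to stay in the 20 to 50 milligram range.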

To prove that the wireless implants worked, the researchers tested them on neurons and spinal cord nerves. When the system is powered on, the mouse walks in circles, as shown in the video below. When the power is shut off, the behavior stops, proving that the concept works. Further experiments could lead to insight on neurological conditions like Parkinson's disease, blindness or mental health issues. The research also gives new meaning to the term "wireless mouse" (sorry).

Russian scientists create artificial brain that can educate itself


In a step closer to developing artificial intellect, Russian scientists have created a physical model of a brain that is able to educate itself.

An international team of scientists at a laboratory in Tomsk State University in western Siberia has created a device that could be an artificial carrier of a natural mind, able to learn and react to the environment, according to a press release published by the university on Monday.

Russian scientists have teamed up with colleagues from Germany, Bulgaria, Ukraine, Belarus and Kazakhstan to tackle a problem that has bothered researchers for decades: the process apparently requires copying 100 million brain neurons and one trillion of their connections. The main system of a robotic complex is currently being developed as an intellectual control center.

“First, we built mathematic and computer models of the human brain,” head of the laboratory Vladimir Syryamkin said in the press release. “Afterwards an electronic device with perceptrons was constructed. It is capable of processing diverse information (video, sound, etc.).”

The physical prototype can accumulate the life experience it gains from various external stimuli, for example by turning away from a source of light or moving away from it. If successful, the artificial mind is able to memorize that experience and use it in similar situations.

“In the end, an artificial brain could become an analogue of the biological model,” main developer Vladimir Shumilov said. “We’ve got a colossal scale of work in front of us, but one major step is already done – we have managed to crack a mystery of the brain’s neural system.”

“The creation of new neuron nets and degeneration of the already existing ones takes place in our physical model, as in the human brain. It is the process of forgetting in humans,” Shumilov added.

In the future, the project will be overseen by biologists and psychologists, but its major application is seen in the field of healthcare. The artificial brain could be used to model the pathological states of various dementias, such as Alzheimer’s disease and Parkinson’s disease, and to choose methods of drug correction.

Another field of future use is robotic systems and neurocomputers.

First almost fully-formed human brain grown in lab, researchers claim


An almost fully-formed human brain has been grown in a lab for the first time, claim scientists from Ohio State University. The team behind the feat hope the brain could transform our understanding of neurological disease.

Though not conscious, the miniature brain, which resembles that of a five-week-old foetus, could potentially be useful for scientists who want to study the progression of developmental diseases. It could also be used to test drugs for conditions such as Alzheimer’s and Parkinson’s, since the regions they affect are in place during an early stage of brain development.

The brain, which is about the size of a pencil eraser, is engineered from adult human skin cells and is the most complete human brain model yet developed, claimed Rene Anand of Ohio State University, Columbus, who presented the work today at the Military Health System Research Symposium in Fort Lauderdale, Florida.

Previous attempts at growing whole brains have at best achieved mini-organs that resemble those of nine-week-old foetuses, although these “cerebral organoids” were not complete and only contained certain aspects of the brain. “We have grown the entire brain from the get-go,” said Anand.

Anand and his colleagues claim to have reproduced 99% of the brain’s diverse cell types and genes. They say their brain also contains a spinal cord, signalling circuitry and even a retina.

The ethical concerns were non-existent, said Anand. “We don’t have any sensory stimuli entering the brain. This brain is not thinking in any way.”

Anand claims to have created the brain by converting adult skin cells into pluripotent cells: stem cells that can be programmed to become any tissue in the body. These were then grown in a specialised environment that persuaded the stem cells to grow into all the different components of the brain and central nervous system.

According to Anand, it takes about 12 weeks to create a brain that resembles the maturity of a five-week-old foetus. To go further would require a network of blood vessels that the team cannot yet produce. “We’d need an artificial heart to help the brain grow further in development,” said Anand.

Several researchers contacted by the Guardian said it was hard to judge the quality of the work without access to more data, which Anand is keeping under wraps due to a pending patent on the technique. Many were uncomfortable that the team had released information to the press without the science having gone through peer review.

Zameel Cader, a consultant neurologist at the John Radcliffe Hospital, Oxford, said that while the work sounds very exciting, it’s not yet possible to judge its impact. “When someone makes such an extraordinary claim as this, you have to be cautious until they are willing to reveal their data.”

If the team’s claims prove true, the technique could revolutionise personalised medicine. “If you have an inherited disease, for example, you could give us a sample of skin cells, we could make a brain and then ask what’s going on,” said Anand.

You could also test the effect of different environmental toxins on the growing brain, he added. “We can look at the expression of every gene in the human genome at every step of the development process and see how they change with different toxins. Maybe then we’ll be able to say ‘holy cow, this one isn’t good for you.’”

For now, the team say they are focusing on using the brain for military research, to understand the effects of post-traumatic stress disorder and traumatic brain injuries.

Scientists discover electrical control of cancer cell growth


The molecular switches regulating human cell growth do a great job of replacing cells that die during the course of a lifetime. But when they misfire, life-threatening cancers can occur. Research led by scientists at The University of Texas Health Science Center at Houston (UTHealth) has revealed a new electrical mechanism that can control these switches.

This information is seen as critical in developing treatments for some of the most lethal types of cancer including pancreatic, colon and lung, which are characterized by uncontrolled cell growth caused by breakdowns in cell signaling cascades.

The research focused on a molecular switch called K-Ras. Mutated versions of K-Ras are found in about 20 percent of all human cancers in the United States and these mutations lock the K-Ras switch in the on position.

"When K-Ras is locked in the on position, it drives cell division, which leads to the production of a cancer," said John Hancock, M.B., B.Chir, Ph.D., ScD, the study's senior author and chairman of the Department of Integrative Biology and Pharmacology at UTHealth Medical School. "We have identified a completely new molecular mechanism that further enhances the activity of K-Ras."

Findings appear in Science, a journal of the American Association for the Advancement of Science.

The study focused on the tiny electrical charges that all cells carry across their limiting (plasma) membrane. "What we have shown is that the electrical potential (charge) that a cell carries is inversely proportional to the strength of a K-Ras signal," Hancock said.

With the aid of a high-powered electron microscope, the investigators observed that certain lipid molecules in the plasma membrane respond to an electrical charge, which in turn amplifies the output of the Ras signaling circuit. This is exactly like a transistor in an electronic circuit board.

Yong Zhou, Ph.D., first author and assistant professor of integrative biology and pharmacology at UTHealth Medical School, said, "Our results may finally account for a long-standing but unexplained observation that many cancer cells actively try to reduce their electrical charge."

Initial work was done with human and animal cells, and the findings were subsequently confirmed in a fruit fly model of membrane organization.

"This has huge implications for biology," Hancock said. "Beyond the immediate relevance to K-Ras in cancer, it is a completely new way that cells can use electrical charge to control a multitude of signaling pathways, which may be particularly relevant to the nervous system."

Hancock's co-authors at UTHealth include Ching-On Wong Ph.D., Kwang-Jin Cho, Ph.D., Dharini van der Hoeven, Ph.D., Hong Liang, M.D., Dhananjay Thakur, Jialie Luo, Ph.D., Michael Zhu, Ph.D., Hongzhen Hu, Ph.D., and Kartik Venkatachalam, Ph.D.

Co-authors from The University of Arizona include Milos Babic, Ph.D., and Konrad Zinsmaier Ph.D.

At UTHealth Medical School, Hancock is the vice dean for basic research, executive director of the Brown Foundation Institute of Molecular Medicine for the Prevention of Human Diseases and holder of the John S. Dunn Distinguished University Chair in Physiology and Medicine.

Hancock and Venkatachalam are on the faculty of The University of Texas Graduate School of Biomedical Sciences at Houston.

Story Source:

The above post is reprinted from materials provided by University of Texas Health Science Center at Houston. Note: Materials may be edited for content and length.

Journal Reference:

  1. Y. Zhou, C.-O. Wong, K.-j. Cho, D. van der Hoeven, H. Liang, D. P. Thakur, J. Luo, M. Babic, K. E. Zinsmaier, M. X. Zhu, H. Hu, K. Venkatachalam, J. F. Hancock. Membrane potential modulates plasma membrane phospholipid dynamics and K-Ras signaling. Science, 2015; 349 (6250): 873 DOI: 10.1126/science.aaa5619

Watching MIT's Glass 3D Printer Is Absolutely Mesmerizing


MIT’s Mediated Matter Group made a video showing off their first-of-its-kind optically transparent glass printing process. It will soothe your soul.

GLASS from Mediated Matter Group on Vimeo.

 

Called G3DP (Glass 3D Printing) and developed in collaboration with MIT’s Glass Lab, the process is an additive manufacturing platform with dual heated chambers. The upper chamber is a “Kiln Cartridge” operating at a mind-boggling 1900°F, while the lower chamber anneals the printed glass, cooling it slowly and evenly to relieve internal stresses. The special 3D printer is not creating glass from scratch, but rather working with the preexisting substance, layering and building out fantastical shapes like a robot glassblower.

It’s wonderfully soothing to watch in action—and strangely delicious-looking. “Like warm frosting,” my colleague Andrew Liszewski confirmed. “Center of the Earth warm frosting.”


Team Designs Robots to Build Things in Messy, Unpredictable Situations


Researchers at Harvard University and SUNY at Buffalo are designing robots to function outside of ideal, predictable environments such as warehouses or factories and instead work in places where there may be unexpected obstructions, and where predictive algorithms can’t be used to plan several thousand steps ahead. The goal for such “builder bots,” which are designed to handle inconsistent and malleable building materials, is to be deployed as disaster relief agents.

Radhika Nagpal, a professor of computer science at Harvard, and Nils Napp, an assistant professor of computer science at SUNY at Buffalo and a former postdoctoral fellow in Nagpal’s lab, have designed two robots: one that deposits expandable, self-hardening foam and another that drags and piles up sandbags.

Robots built for construction can usually handle only discrete materials, such as blocks or bricks. The materials these new robots build with are useful in a range of real-world environments, but they are highly unpredictable. The foam can stick to most surfaces and expand to fill holes, but it starts off as a liquid, so it’s impossible to know exactly how far it’ll run before it hardens; sandbags are frequently used in disaster relief as retaining walls, but the granules inside them have a tendency to shift around when manipulated.

To combat this unpredictability, Nagpal and Napp’s robots are equipped with an infrared sensor that scans and assesses the environment between passes of laying down building material. The scan is integral to making the bots so adaptable.

“These robots need to continuously monitor and replan while they work,” says Napp. “That’s something that animals do, and that robots often don’t do.”

Using an algorithm that functions as a loop—scan, assess the environment, lay the material, scan again, assess the changes to the environment, lay more material, etc.—the robots are able to iteratively build as they go, taking into account any changes in the environment as well as any changes to the material they’re using.
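A minimal sketch of that loop in Python, under strong simplifying assumptions: a 1-D heightmap stands in for the infrared scan, and the deposit step adds a random amount of material (with some slumping onto the neighbouring cell) to mimic foam or sandbags. None of this is the authors' actual code.

```python
import random

def scan(world):
    # Stand-in for the infrared scan: read back the current heightmap.
    return list(world)

def deposit(world, i):
    # Amorphous materials are unpredictable: the amount laid down varies,
    # and some of it slumps onto the neighbouring cell.
    world[i] += random.uniform(0.5, 1.5)
    if i + 1 < len(world):
        world[i + 1] += random.uniform(0.0, 0.3)

def build_ramp(cells=6, slope=1.0, tol=0.25):
    world = [0.0] * cells
    target = [slope * i for i in range(cells)]  # desired ramp profile
    while True:
        heights = scan(world)                                 # 1. scan
        deficits = [t - h for t, h in zip(target, heights)]   # 2. assess
        worst = max(range(cells), key=lambda i: deficits[i])
        if deficits[worst] < tol:             # ramp is close enough everywhere
            return heights
        deposit(world, worst)                 # 3. lay material, then rescan

print([round(h, 2) for h in build_ramp()])
```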

The team is currently focusing this adaptable system on building ramps, relatively simple structures with practical applications: they can be used to connect two points, and they also allow a lot of flexibility in design.

The system is applicable to any climbing, manipulating robot that's using any unpredictable materials, not just foam or sandbags. Nagpal also says that the system can work with multi-robot teams. Because the algorithm is adaptable, it doesn’t matter whether the uncertainty that a robot confronts comes from the environment, a material, or another robot’s behavior.

The researchers are just starting to test out the system in increasingly unpredictable environments. The next stage will be to configure a robot to build in situations where it doesn’t know what materials will be available.

Scientists unveil world’s first 3D printer that can print 10 materials at once


As scientists figure out how to use awesome new materials in 3D printers, one big problem remains: the vast majority of 3D printers only work with one kind of source material (for example, plastic), which severely limits the complexity of the objects you can make.

But now, for the first time, scientists in the US have developed a 3D printer that can print with a record-breaking 10 materials at a time, which will allow a whole new range of objects to be printed quickly and easily without requiring any manual post-print assembly of individually printed components.

While some existing industrial 3D printers can also print with multiple materials, they aren’t exactly cheap (costing upwards of US$250,000). And even then, they’ve been limited to a maximum of three materials at once.

The MultiFab printer on the other hand, developed by researchers at the Computer Science and Artificial Intelligence Lab (CSAIL) at the Massachusetts Institute of Technology (MIT), only cost its makers US$7,000 to build from scratch and features innovations that make 3D printing of complex objects easier than ever.

In addition to its ability to print with up to 10 source materials at once, the MultiFab uses machine vision 3D-scanning techniques that allow the printer to self-calibrate and guide itself as it prints.

“The platform opens up new possibilities for manufacturing, giving researchers and hobbyists alike the power to create objects that have previously been difficult or even impossible to print,” said Javier Ramos, one of the CSAIL research engineers who worked on the project, in a press release.

Unlike material extrusion methods of 3D printing, which the researchers liken to using a syringe to apply cake frosting, the MultiFab printer mixes microscopic droplets of photopolymers together and prints with much finer printheads.

Together with the machine vision, this enables the printer to 3D-scan an object and then print directly around it with extreme precision. The researchers say you could place an iPhone inside the printer and tell it to print a case directly onto the phone. Or you could start off with circuits and sensors and print around them, effectively embedding a functional electronic device at the heart of a new 3D-printed object.
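A minimal sketch of the per-layer feedback idea in Python; the layer thickness, the noise model and the scan interface are assumptions for illustration, not MultiFab's actual pipeline.

```python
import numpy as np

LAYER = 0.03  # nominal layer thickness in mm (illustrative value)
rng = np.random.default_rng(0)

def scan(part):
    # Stand-in for the printer's machine-vision 3D scan of the part so far.
    return part

def jet_layer(part, correction):
    # Each pass deposits a slightly uneven film of photopolymer; the
    # correction term compensates for deviations measured by the scan.
    noise = rng.normal(0.0, 0.005, part.shape)
    return part + np.clip(LAYER + correction + noise, 0.0, 2 * LAYER)

part = np.zeros((8, 8))  # height map of the growing part
for n in range(10):
    # Before printing layer n, the part should already be n * LAYER tall.
    correction = n * LAYER - scan(part)
    part = jet_layer(part, correction)

# Height errors stay bounded instead of accumulating layer over layer.
print(round(float(part.std()), 4))
```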

“Right now a big portion of 3D printing, the hardware that’s available, is focused on printing form and objects for prototype,” says Ramos in the video below. “The holy grail is to print things that are fully functional right out of the printer, combining multiple materials with many different properties, but also existing objects that have some inherent functionality.”

The team’s paper, presented this month at the 42nd International Conference and Exhibition on Computer Graphics and Interactive Techniques in the US, could have major implications for the way people make objects, with potential applications for researchers, personalised consumer items, or even mass-manufactured products.

“Picture someone who sells electric wine-openers, but doesn’t have $7,000 to buy a printer like this. In the future they could walk into a FedEx with a design and print out batches of their finished product at a reasonable price,” said Ramos. “For me, a practical use like that would be the ultimate dream.”

Cheap 3D-printed bionic hand wins James Dyson Award, could bring robotic limbs to world


A robotics graduate has created new, far cheaper robotic hands — winning the UK James Dyson Award for an invention that uses 3D printing to transform the bionics industry.

The devices usually cost as much as £60,000 — but by using 3D printing and other techniques, 25-year-old Plymouth University graduate Joel Gibbard and his company Open Bionics have been able to slash that to only £3,000.

That could allow far more people to get the hands — which mostly use old technology, but remain out of reach for many because of their cost.

The economics of robotics will now work for children, who have often been unable to get the hands: because they need to be replaced every six months to a year as a child grows, they haven't been viable at the usual prices. Even the most expensive ones for adults tend to need to be replaced every three to five years because of wear and tear.

It could also help the technology spread internationally, overcoming the price barriers in countries where bionic hands have simply been too expensive to use.

“By using rapid prototyping techniques, Joel has initiated a step-change in the development of robotic limbs,” James Dyson, the British inventor behind the award, said in a statement. “Embracing a streamlined approach to manufacturing allows Joel's design to be highly efficient, giving more amputees access to advanced prosthetics.”

The hands work by connecting to the muscles in the upper forearm. Myoelectric sensors take messages from the muscles and use them to control the hand.
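A minimal sketch of how such a myoelectric signal might be mapped to a hand command, in Python; the sampling rate, window length and threshold are illustrative assumptions, not figures from Open Bionics (real controllers are calibrated per user and support multiple grip patterns).

```python
import numpy as np

FS = 1000          # EMG sampling rate in Hz (assumed)
WINDOW = 200       # samples per analysis window, i.e. 200 ms (assumed)
THRESHOLD = 0.15   # activation level; tuned per user in practice (assumed)

def hand_command(emg_window):
    """Map one window of raw forearm EMG to an open/close command."""
    centred = emg_window - emg_window.mean()   # remove DC offset
    envelope = np.abs(centred)                 # rectify
    activation = envelope.mean()               # crude amplitude estimate
    return "close" if activation > THRESHOLD else "open"

# Example: a resting window vs. one containing a muscle contraction.
rng = np.random.default_rng(1)
rest = rng.normal(0.0, 0.05, WINDOW)
flex = rng.normal(0.0, 0.30, WINDOW)
print(hand_command(rest), hand_command(flex))  # -> open close
```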

Gibbard’s innovation is using 3D printing — first, the arm is scanned, and then the hand can be custom printed to fit. That means that they can be precisely created for whoever is wearing them, but at a fraction of the usual cost.

The same technology could eventually come to be used in most things that people wear, Gibbard predicts.

“This has tons of applications, like prosthetics and orthotics,” which are used for modifying the neuromuscular and skeletal system, says Gibbard. “But also earphones, or any other wearable, should be custom fitted.

“Even a phone could be custom fitted to your hand. There are so many things where we use a standardised thing, where in reality, a customised thing would be better.

“You can imagine a world in the future where you scan your whole body in — and then use it whenever you buy clothes, or a desk chair,” with the manufacturers custom making it around a person’s scan.

Gibbard has been working on prosthetic hands since he was 17, initially considering it “just as a fun thing to do”. But once he’d arrived at university and took on the project more seriously, he found there was a need that wasn’t being fulfilled, for cheap prosthetics that could be more easily made and bought.

A video of his proposal was uploaded to YouTube. “I had tons of comments, saying: I need this, I want to buy this,” says Gibbard. “At that point I realised that I wanted to share the idea and designs further.”

That interest helped spur interest in a crowdfunding campaign — titled “The Open Hand Project: A Low Cost Robotic Hand” — that raised almost £44,000 in one month.

Initially Gibbard’s plan was to release a 3D-printable design that could be shared across the world, so that anyone, anywhere, could make their own. But he realised that a company, one that could ensure quality control, guarantee its products and make sure they work well, would be a better way of getting the new product out into the world, and he founded his firm Open Bionics in April 2014.

From here, the company has both technological and business hurdles to overcome.

First, the product needs to be tested in the real world. It has already been developed with several amputees, but now those tests must move into the field, with people wearing the hands for a few weeks at a time and providing feedback on what they’re like.

There is business work to do, too. Gibbard’s company will use the money from the award to buy another 3D printer, letting it print twice as quickly and so increase production substantially straight away, which in turn means more field testing, done faster.

But Gibbard says the award will also help as the company goes on to grow. “The James Dyson award comes with such huge prestige that, when you’re trying to get funding and credibility, it’s a massive, massive boost.”

Open Bionics currently has six members of staff, enough to keep working on the project for the moment. But by the time that it comes to market — which is set to happen before the end of 2016 — the staff and resources will need to grow.

Joel Gibbard and Open Bionics will join the rest of the UK runners-up in the next stage of the James Dyson Award, where Dyson engineers whittle the list of 100 down to 20. That shortlist will then be passed to Dyson himself, who will pick the overall winner.

Josh Cathcart becomes first Brit to be fitted with new child-sized bionic hand


A nine-year-old who was bullied for having only one hand has become the first boy in the UK to be fitted with a new child-sized bionic hand.

Josh Cathcart, from Dalgety Bay in Fife, was tormented by classmates at school after he was born without his right hand. 

Now he has become the first child in Britain to be fitted with the i-limb Quantum, an extra-small version of maker Touch Bionics' prosthetic hand, to give him extra dexterity.

It means the youngster can build Lego, give a thumbs up and open bottles or packets with ease for the first time.

Josh said: 'I got it put on about two days ago. It feels quite heavy. I can stick my thumb up. I can make a pinch grip, I can get a grip for cutting with a knife.

'I made myself a bagel yesterday. I can open bottles and packets with it, I can stack up blocks, I can build Lego with it and I can pull my trousers up.'

His worried parents Clare and James came across Touch Bionics in Livingston, West Lothian, after they became concerned their son was being bullied.

Mrs Cathcart said: 'Josh had been getting picked on and became quite withdrawn and upset, so we started looking for something a bit more advanced, something that moved.

'So, we had chats with him and then went on the internet and came across this company.

'He was born missing a hand. At first, I didn't really give it much thought, but as time went on I blamed myself for it.'

Wiping away tears, she said: 'Now I can see him with two hands.'

To Josh's visible protestations, she added: 'It gives him his independence, so he can now make his own food and tidy his own room.'

Mr Cathcart said: 'Obviously his socket's going to grow, so he'll get about nine months to a year out of this one and then he will have to come again and get a new socket.

'I think it's great. Just to see him pull his trousers up this morning, it was just something that he had never done, and he has been shown how to cut with a knife and fork.

'It just looked so natural for him. He can do things for himself without us helping him.'

Alison Goodwin, prosthetist at Touch Bionics, said: 'Josh has spent this week with us being fitted with the Touch Bionics i-limb quantum prosthesis.

'He's the youngest we've fitted so far because of the extra small hand that we now have available, so it's been great to now have the experience this week of fitting the youngest-ever person with the i-limb hand.

'We do fit the hand worldwide but he's the first one that we have fitted here in Scotland, so it's great that he's a local lad.

'We only released the i-limb Quantum in June, so it's brand new and offers some new features such as what we call i-mo technology, which allows him to do forward, backward and side-to-side movements, so he can enter grips such as a pinch grip or a lateral grip.

'It's a much easier way to control the prosthesis and we've been able to see this by somebody so young being able to pick it up so well.

'It works from electrodes which are positioned on the surface of his skin within the socket of his prosthesis, so this is the custom-made part which is fitted on to his residual limb.

'When he tenses those muscles, the electrodes open and close the hand.

'He's not worked these muscles because he has not used this type of prosthesis before, and obviously without having a hand he has spent about nine years not using those muscles, but he has developed them very well this week and has been working great with them.

'Josh was born without a right hand, as simple as that, it was just one of those things. He has a bit of a residual limb just below the elbow, but no wrist or hand.

'He's taken to it really positively this week, and he's looking forward to being able to integrate it into his daily life so he's really motivated to learn and use it day-to-day, so that's really helped the learning process because he is actually so positive and so motivated.' 

Tiny, 3D-Printed Fish to Swim in Bloodstream, Deliver Drugs


New 3D-printed fish-shaped microbots — called microfish — could one day transport drugs to specific places in the human body and be able to sense and remove toxins.

These microfish, smaller than the width of a human hair, are groundbreaking for two reasons: they’re simple to create, but remarkably high-tech in what they can do, doubling as toxin sensors and detoxifying robots, according to researchers at the University of California, San Diego.


Researchers used a high-resolution 3D printing technology called microscale continuous optical printing to create the microfish. The custom computer-aided design (CAD) program behind it could let researchers experiment with other shapes, such as sharks or birds, in the future.

The microfish have platinum nanoparticles in their tails which, when placed in a solution of hydrogen peroxide, undergo a chemical reaction that propels them forward. They are steered magnetically by way of iron oxide nanoparticles in their heads.
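
A toy model helps make the two mechanisms concrete: constant forward thrust along the fish's heading (the catalytic tail) plus a heading that relaxes toward an applied magnetic field direction (the magnetic head). The Python sketch below is a minimal 2D kinematic simulation; the speed, steering gain and timestep are invented for illustration, not taken from the study.

```python
import math

# Toy 2D kinematics for a catalytically propelled, magnetically steered
# microswimmer. Speed, steering gain and timestep are invented numbers;
# the UC San Diego paper reports the real measured dynamics.

def simulate(steps=5, dt=0.1, speed=60e-6, field_angle=math.pi / 4, gain=5.0):
    x = y = heading = 0.0  # metres, metres, radians
    for _ in range(steps):
        # Magnetic torque aligns the iron-oxide-loaded head with the field.
        heading += gain * math.sin(field_angle - heading) * dt
        # Catalytic thrust from the platinum tail drives forward motion.
        x += speed * math.cos(heading) * dt
        y += speed * math.sin(heading) * dt
        print(f"pos = ({x * 1e6:6.2f}, {y * 1e6:6.2f}) um, "
              f"heading = {math.degrees(heading):5.1f} deg")

simulate()
```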

Even more impressively, when the microfish are placed in a solution filled with toxins, they turn fluorescent red and glow more intensely as their toxin-neutralizing nanoparticles chemically bind with toxin molecules.
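
That behaviour amounts to a binding curve: the more toxin molecules are captured by the nanoparticles, the brighter the glow. The sketch below models it with a generic Langmuir-style saturation function; the dissociation constant and intensity scale are made-up constants for illustration, not the paper's calibration.

```python
# Generic Langmuir-style saturation model for the fluorescent readout:
# intensity scales with the fraction of nanoparticle binding sites
# occupied by toxin. K_D and MAX_INTENSITY are made-up constants.

MAX_INTENSITY = 100.0  # fluorescence at full saturation, arbitrary units
K_D = 5.0              # dissociation constant, arbitrary concentration units

def fluorescence(toxin_concentration):
    """Brightness proportional to the bound fraction of sites."""
    bound = toxin_concentration / (K_D + toxin_concentration)
    return MAX_INTENSITY * bound

for c in (0.0, 1.0, 5.0, 20.0, 100.0):
    print(f"toxin = {c:6.1f} -> intensity = {fluorescence(c):5.1f}")
```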


These proof-of-concept synthetic microfish will inspire a new generation of ‘smart’ microrobots with capabilities such as detoxification, sensing and directed drug delivery, according to the researchers.

“Another exciting possibility we could explore is to encapsulate medicines inside the microfish and use them for directed drug delivery,” Jinxing Li, a co-first author of the study, said in a statement.

via UC San Diego
