IEEE University of Lahore

Archive for the ‘Uncategorized’ Category

Three Steps to a Moon Base

Saturday, July 13th, 2019

Space agencies and private companies are working on rockets, landers, and other tech for lunar settlement

In 1968, NASA astronaut Jim Lovell gazed out of a porthole from lunar orbit and remarked on the “vast loneliness” of the moon. It may not be a lonely place for much longer. Today, a new rush of enthusiasm for lunar exploration has swept up government space agencies, commercial space companies funded by billionaires, and startups that want in on the action. Here’s the tech they’re building that may enable humanity’s return to the moon and the construction of the first permanent moon base.

How NASA Recruited Snoopy and Drafted Barbie

Saturday, July 13th, 2019

The space agency has long relied on kid-friendly mascots to make the case for space

In the comic-strip universe of Peanuts, Snoopy beat Neil Armstrong to the moon. It was in March 1969—four months before Armstrong would take his famous small step—that the intrepid astrobeagle and his flying doghouse touched down on the lunar surface. “I beat the Russians…I beat everybody,” Snoopy marveled. “I even beat that stupid cat who lives next door!”

The comic-strip dog had begun a formal partnership with NASA the previous year, when Charles Schulz, the creator of Peanuts, and its distributor United Feature Syndicate, agreed to the use of Snoopy as a semi-official NASA mascot.

Snoopy was already a renowned World War I flying ace—again, within the Peanuts universe. Clad in a leather flying helmet, goggles, and signature red scarf, he sat atop his doghouse, reenacting epic battles with his nemesis, the Red Baron. Just as NASA had turned to real-life fighter pilots for its first cohort of astronauts, the space agency also recruited Snoopy.

Two months after the comic-strip Snoopy’s lunar landing, a second, real-world Snoopy buzzed the surface of the moon, as part of Apollo 10. This mission was essentially a dress rehearsal for Apollo 11. The crew was tasked with skimming, or “snooping,” the surface of the moon, so they nicknamed the lunar module “Snoopy.” It logically followed that Apollo 10’s command module was “Charlie Brown.”

On 21 May, as the astronauts settled in for their first night in lunar orbit, Snoopy’s pilot, Eugene Cernan, asked ground control to “watch Snoopy well tonight, and make him sleep good, and we’ll take him out for a walk and let him stretch his legs in the morning.” The next day, Cernan and Tom Stafford descended in Snoopy, stopping some 14,000 meters above the surface.

Since then, Snoopy and NASA have been locked in a mutually beneficial orbit. Schulz, a space enthusiast, ran comic strips about space exploration, and the moon shot in particular, which helped excite popular support for the program. Commercial tie-ins extended well beyond the commemorative plush toy shown at top. Over the years, Snoopy figurines, music boxes, banks, watches, pencil cases, bags, posters, towels, and pins have all promoted a fun and upbeat attitude toward life beyond Earth’s atmosphere.

There’s also a serious side to Snoopy. In the wake of the tragic Apollo 1 fire, which claimed the lives of three astronauts, NASA wanted to promote greater flight safety and awareness. Al Chop, director of public affairs for the Manned Spacecraft Center (now the Lyndon B. Johnson Space Center), suggested using Snoopy as a symbol for safety, and Schulz agreed. 

NASA created the Silver Snoopy Award to honor ground crew who have contributed to flight safety and mission success. The recipient’s prize? A silver Snoopy lapel pin, designed by Schulz and presented by an astronaut, in appreciation for the person’s efforts to preserve astronauts’ lives.

Snoopy was by no means the only popularizer of the U.S. space program. Over the years, there have been GI Joe astronauts, LEGO astronauts, and Hello Kitty astronauts. Not all of these came with the NASA stamp of approval, but even unofficially they served as tiny ambassadors for space.

Of all the astronautical dolls, I’m most intrigued by Astronaut Barbie, of which there have been numerous incarnations over the years. The first was Miss Astronaut Barbie, who debuted in 1965—13 years before women were accepted into NASA’s astronaut classes and 18 years before Sally Ride flew in space.

Miss Astronaut Barbie might have been ahead of her time, but she was also a reflection of that era’s pioneering women. Cosmonaut Valentina Tereshkova became the first woman to go to space on 16 June 1963, when she completed a solo mission aboard Vostok 6. Meanwhile, American women were training for space as early as 1960, through the privately funded Women in Space program. The Mercury 13 endured the same battery of tests that NASA used to train the all-male astronaut corps and were celebrated in the press, but none of them ever went to space.

In 2009, Mattel reissued Miss Astronaut of 1965 as part of the celebration of Barbie’s 50th anniversary. “Yes, she was a rocket scientist,” the packaging declares, “taking us to new fashion heights, while firmly placing her stilettos on the moon.” For the record, Miss Astronaut Barbie wore zippered boots, not high heels.

Other Barbies chose careers in space exploration, always with a flair for fashion. A 1985 Astronaut Barbie modeled a hot pink jumpsuit, with matching miniskirt for attending press conferences. Space Camp Barbie, produced through a partnership between Mattel and the U.S. Space & Rocket Center in Huntsville, Ala., wore a blue flight suit, although a later version sported white and pink. An Apollo 11 commemorative Barbie rocked a red- and silver-trimmed jumpsuit and silver boots and came with a Barbie flag, backpack, and three glow-in-the-dark moon rocks. (Scientific accuracy has never been Mattel’s strong suit, at least where Barbie is concerned.) And in 2013, Mattel collaborated with NASA to create Mars Explorer Barbie, to mark the first anniversary of the rover Curiosity’s landing.

More recently, Mattel has extended the Barbie brand to promote real-life role models for girls. In 2018, as part of its Inspiring Women series, the toymaker debuted the Katherine Johnson doll, which pays homage to the African-American mathematician who calculated the trajectory for NASA’s first crewed spaceflight. Needless to say, this Barbie is also clad in pink, with era-appropriate cat-eye glasses, a double strand of pearls, and a NASA employee ID tag.

Commemorative dolls and stuffed animals may be playthings designed to tug at our consumerist heartstrings. But let’s suspend the cynicism for a minute and imagine what goes on in the mind of a young girl or boy who plays with a doll and dreams of the future. Maybe we’re seeing a recruit for the next generation of astronauts, scientists, and engineers.

An abridged version of this article appears in the July 2019 print issue as “The Beagle Has Landed.”

Part of a continuing series looking at photographs of historical artifacts that embrace the boundless potential of technology.

About the Author

Allison Marsh is an associate professor of history at the University of South Carolina and codirector of the university’s Ann Johnson Institute for Science, Technology & Society.

How High Fives Help Us Get in Touch With Robots

Friday, July 12th, 2019

Social touch is a cornerstone of human interaction, and robots are learning how to do it too

The human sense of touch is so naturally ingrained in our everyday lives that we often don’t notice its presence. Even so, touch is a crucial sensing ability that helps people to understand the world and connect with others. As the market for robots grows, and as robots become more ingrained into our environments, people will expect robots to participate in a wide variety of social touch interactions. At Oregon State University’s Collaborative Robotics and Intelligent Systems (CoRIS) Institute, I research how to equip everyday robots with better social-physical interaction skills—from playful high-fives to challenging physical therapy routines.  

Lunar Pioneers Will Use Lasers to Phone Home

Friday, July 12th, 2019

NASA’s Orion and Gateway will try out optical communications gear for a high-speed connection to Earth

With NASA making serious moves toward a permanent return to the moon, it’s natural to wonder whether human settlers—accustomed to high-speed, ubiquitous Internet access—will have to deal with mind-numbingly slow connections once they arrive on the lunar surface. The vast majority of today’s satellites and spacecraft have data rates measured in kilobits per second. But long-term lunar residents might not be as satisfied with the skinny bandwidth that, say, the Apollo astronauts contended with.

To meet the demands of high-definition video and data-intensive scientific research, NASA and other space agencies are pushing the radio bands traditionally allocated for space research to their limits. For example, the Orion spacecraft, which will carry astronauts around the moon during NASA’s Artemis 2 mission in 2022, will transmit mission-critical information to Earth via an S-band radio at 50 megabits per second. “It’s the most complex flight-management system ever flown on a spacecraft,” says Jim Schier, the chief architect for NASA’s Space Communications and Navigation program. Still, barely 1 Mb/s will be allocated for streaming video from the mission. That’s about one-fifth the speed needed to stream a high-definition movie from Netflix.

To boost data rates even higher means moving beyond radio and developing optical communications systems that use lasers to beam data across space. In addition to its S-band radio, Orion will carry a laser communications system for sending ultrahigh-definition 4K video back to Earth. And further out, NASA’s Gateway will create a long-term laser communications hub linking our planet and its satellite.

Laser communications are a tricky proposition. The slightest jolt to a spacecraft could send a laser beam wildly off course, while a passing cloud could interrupt it. But if they work, robust optical communications will allow future missions to receive software updates in minutes, not days. Astronauts will be sheltered from the loneliness of working in space. And the scientific community will have access to an unprecedented flow of data between Earth and the moon.

Today, space agencies prefer to use radios in the S band (2 to 4 gigahertz) and Ka band (26.5 to 40 GHz) for communications between spacecraft and mission control, with onboard radios transmitting course information, environmental conditions, and data from dozens of spaceflight systems back to mission control. The Ka band is particularly prized—Don Cornwell, who oversees radio and optical technology development at NASA, calls it “the Cadillac of radio frequencies”—because it can transmit up to gigabits per second and propagates well in space.

Any spacecraft’s ability to transmit data is constrained by some unavoidable physical truths of the electromagnetic spectrum. For one, radio spectrum is finite, and the prized bands for space communications are equally prized by commercial applications. Bluetooth and Wi-Fi use the S band, and 5G cellular networks use the Ka band.

The second big problem is that radio signals disperse in the vacuum of space. By the time a Ka-band signal from the moon reaches Earth, it will have spread out to cover an area about 2,000 kilometers in diameter—roughly the size of India. By then, the signal is a lot weaker, so you’ll need either a sensitive receiver on Earth or a powerful transmitter on the moon.

Laser communications systems also have dispersion issues, and beams that intersect can muddle up the data. But a laser beam sent from the moon would cover an area only 6 km across by the time it arrives on Earth. That means it’s much less likely for any two beams to intersect. Plus, they won’t have to contend with an already crowded chunk of spectrum. You can transmit a virtually limitless quantity of data using lasers, says Cornwell. “The spectrum for optical is unconstrained. Laser beams are so narrow, it’s almost impossible [for them] to interfere with one another.”
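
As a rough sanity check on those footprint numbers, diffraction spreads a beam by roughly 1.22 times the wavelength divided by the transmitting aperture. The aperture sizes in the sketch below (a roughly 2-meter Ka-band dish and a roughly 10-centimeter laser telescope) are illustrative assumptions, not NASA hardware specs, but they land in the same ballpark as the figures above:

# Rough diffraction-limited beam footprints from the moon (illustrative only).
# Aperture sizes are assumed values, not NASA specifications.
EARTH_MOON_DISTANCE_M = 3.84e8   # average Earth-moon distance, meters

def footprint_diameter_m(wavelength_m, aperture_m, distance_m=EARTH_MOON_DISTANCE_M):
    """Beam spread of roughly 1.22 * wavelength / aperture (radians), projected onto Earth."""
    divergence_rad = 1.22 * wavelength_m / aperture_m
    return divergence_rad * distance_m

# Ka-band radio: ~1 cm wavelength, assumed ~2.3 m dish
print(footprint_diameter_m(1e-2, 2.3) / 1e3, "km")      # ~2,000 km, roughly the size of India

# Laser: 1,550 nm wavelength, assumed ~10 cm telescope
print(footprint_diameter_m(1.55e-6, 0.10) / 1e3, "km")   # on the order of 6 km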

Higher frequencies also mean shorter wavelengths, which bring more benefits. Ka-band signals have wavelengths from 7.5 millimeters to 1 centimeter, but NASA plans to use lasers that have a 1,550-nanometer wavelength, the same wavelength used for terrestrial optical-fiber networks. Indeed, much of the development of laser communications for space builds on existing optical-fiber engineering. Shorter wavelengths (and higher frequencies) mean that more data can be packed into every second.

The advantages of laser communications have been known for many years, but it’s only recently that engineers have been able to build systems that outperform radio. In 2013, for example, NASA’s Lunar Laser Communications Demonstration proved that optical signals can reliably send information from lunar orbit back to Earth. The month-long experiment used a transmitter on the Lunar Atmosphere and Dust Environment Explorer to beam data back to Earth at speeds of 622 Mb/s, more than 10 times as fast as Orion’s S-band radio will.

“I was shocked to learn [Orion was] going back to the moon with an S-band radio,” says Bryan Robinson, an optical communications expert at MIT Lincoln Laboratory in Lexington, Mass. Lincoln Lab has played an important role in developing many of the laser communications systems on NASA missions, starting with the early optical demonstrations of the classified GeoLITE satellite in 2001. “Humans have gotten used to so much more, here on Earth and in low Earth orbit. I was glad they came around and put laser comm back on the mission.”

As a complement to its S-band radio, during the Artemis 2 mission Orion will carry a laser system called Optical to Orion, or O2O. NASA doesn’t plan to use O2O for any mission-critical communications. Its main task will be to stream 4K ultrahigh-definition video from the moon to a curious public back home. O2O will receive data at 80 Mb/s and transmit at 20 Mb/s while in lunar orbit. If you’re wondering why O2O will transmit at 20 Mb/s when a demonstration project six years ago was able to transmit at 622 Mb/s, it’s simply because the Orion developers “never asked us to do 622,” says Farzana Khatri, a senior staff member in Lincoln Lab’s optical communications group. Cornwell confirms that the uplink from Earth to O2O will deliver a minimum of 80 Mb/s, though the system is capable of higher data rates.

If successful, O2O will open the door for data-heavy communications on future crewed missions, allowing for video chats with family, private consultations with doctors, or even just watching a live sports event during downtime. The more time people spend on the moon, the more important all of these connections will be to their mental well-being. And eventually, video will become mission critical for crews on board deep-space missions.

Before O2O can even be tested in space, it first has to survive the journey. Laser systems mounted on spacecraft use telescopes to send and receive signals. Those telescopes rely on a fiddly arrangement of mirrors and other moving parts. O2O’s telescope will use an off-axis Cassegrain design, a type of telescope with two mirrors to focus the captured light, mounted on a rotating gimbal. Lincoln Lab researchers selected the design because it will allow them to separate the telescope from the optical transceiver, making the entire system more modular. The engineers must ensure that the Space Launch System rocket carrying Orion won’t shake the whole delicate arrangement apart. The researchers at Lincoln Lab have developed clasps and mounts that they hope will reduce vibrations and keep everything intact during the tumultuous launch.

Once O2O is in space, it will have to be precisely aimed. It’s hard to miss a receiver when your radio signal has a cross section the size of a large country. A 6-km-diameter signal, on the other hand, could miss Earth entirely with just a slight bump from the spacecraft. “If you [use] a laser pointer when you’re nervous and your hand is shaking, it’s going to go all over the place,” says Cornwell.

Orion’s onboard equipment will also generate constant minuscule vibrations, any one of which would be enough to throw off an optical signal. So engineers at NASA and Lincoln Lab will place the optical system on an antijitter platform. The platform measures the jitters from the spacecraft and produces an opposite pattern of vibrations to cancel them out—“like noise-canceling headphones,” Cornwell says.
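
Cornwell’s noise-canceling analogy boils down to feed-forward cancellation: measure the disturbance and inject its inverse. The toy sketch below illustrates only that idea, with made-up jitter frequencies and amplitudes and a perfect inverting actuator, nothing like the actual platform’s control law:

import numpy as np

# Toy feed-forward jitter cancellation (illustrative; not the real antijitter platform).
t = np.linspace(0, 1, 1000)
jitter = 5e-6 * np.sin(2 * np.pi * 40 * t) + 2e-6 * np.sin(2 * np.pi * 90 * t)  # pointing error, radians

measured = jitter + np.random.normal(0, 2e-7, t.size)  # imperfect measurement of the disturbance
correction = -measured                                  # drive the platform with the opposite pattern
residual = jitter + correction

print("rms pointing error before: %.2e rad" % np.std(jitter))
print("rms pointing error after:  %.2e rad" % np.std(residual))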

One final hurdle for O2O will be dealing with any cloud cover back on Earth. Infrared wavelengths, like the O2O’s 1,550 nm, are easily absorbed by clouds. A laser beam might travel the nearly 400,000 km from the moon without incident, only to be blocked just above Earth’s surface. Today, the best defense against losing a signal to a passing stratocumulus is to beam transmissions to multiple receivers. O2O, for example, will use ground stations at Table Mountain, Calif., and White Sands, N.M.
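
Site diversity helps because the link only drops when every ground station is clouded over at once. A minimal sketch, with made-up cloud probabilities for the two sites and the simplifying assumption that their weather is independent:

# Assumed, illustrative cloud-cover probabilities; real site statistics differ,
# and the two sites' weather is not perfectly independent.
p_cloudy_table_mountain = 0.25
p_cloudy_white_sands = 0.20

p_single_site_available = 1 - p_cloudy_table_mountain
p_both_blocked = p_cloudy_table_mountain * p_cloudy_white_sands
p_two_site_available = 1 - p_both_blocked

print(f"one site:  {p_single_site_available:.0%} availability")   # 75%
print(f"two sites: {p_two_site_available:.0%} availability")      # 95%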

The Gateway, scheduled to be built in the 2020s, will present a far bigger opportunity for high-speed laser communications in space. NASA, with help from its Canadian, European, Japanese, and Russian counterparts, will place this space station in orbit around the moon; the station will serve as a staging area and communications relay for lunar research.

NASA’s Schier suspects that research and technology demonstrations on the Gateway could generate 5 to 8 Gb/s of data that will need to be sent back to Earth. That data rate would dwarf the transmission speed of anything in space right now—the International Space Station (ISS) sends data to Earth at 25 Mb/s. “[Five to 8 Gb/s is] the kind of thing that if you turned everything on in the [ISS], you’d be able to run it for 2 seconds before you overran the buffers,” Schier says.

The Gateway offers an opportunity to build a permanent optical trunk line between Earth and the moon. One thing NASA would like to use the Gateway for is transmitting positioning, navigation, and timing information to vehicles on the lunar surface. “A cellphone in your pocket needs to see four GPS satellites,” says Schier. “We’re not going to have that around the moon.” Instead, a single beam from the Gateway could provide a lunar rover with accurate distance, azimuth, and timing to find its exact position on the surface.

What’s more, using optical communications could free up radio spectrum for scientific research. Robinson points out that the far side of the moon is an optimal spot to build a radio telescope, because it would be shielded from the chatter coming from Earth. (In fact, radio astronomers are already planning such an observatory: Our article “Rovers Will Unroll a Telescope on the Moon’s Far Side” explains their scheme.) If all the communication systems around the moon were optical, he says, there’d be nothing to corrupt the observations.

Beyond that, scientists and engineers still aren’t sure what else they’ll do with the Gateway’s potential data speeds. “A lot of this, we’re still studying,” says Cornwell.

In the coming years, other missions will test whether laser communications work well in deep space. NASA’s mission to the asteroid Psyche, for instance, will help determine how precisely an optical communications system can be pointed and how powerful the lasers can be before they start damaging the telescopes used to transmit the signals. But closer to home, the communications needed to work and live on the moon can be provided only by lasers. Fortunately, the future of those lasers looks bright.

This article appears in the July 2019 print issue as “Phoning Home, With Lasers.”

Robots Will Navigate the Moon With Maps They Make Themselves

Thursday, July 11th, 2019

Astrobotic’s autonomous navigation will help lunar landers, rovers, and drones find their way on the moon

Neil Armstrong made it sound easy. “Houston, Tranquility Base here. The Eagle has landed,” he said calmly, as if he had just pulled into a parking lot. In fact, the descent of the Apollo 11 lander was nerve-racking. As the Eagle headed to the moon’s surface, Armstrong and his colleague Buzz Aldrin realized it would touch down well past the planned landing site and was heading straight for a field of boulders. Armstrong started looking for a better place to park. Finally, at 150 meters, he leveled off and steered to a smooth spot with about 45 seconds of fuel to spare.

“If he hadn’t been there, who knows what would have happened?” says Andrew Horchler, throwing his hands up. He’s sitting in a glass-walled conference room in a repurposed brick warehouse, part of Pittsburgh’s Robotics Row, a hub for tech startups. This is the headquarters of space robotics company Astrobotic Technology. In the coming decades, human forays to the moon will rely heavily on robotic landers, rovers, and drones. Horchler leads a team whose aim is ensuring those robotic vessels—including Astrobotic’s own Peregrine lander—can perform at least as well as Armstrong did.

Astrobotic’s precision-navigation technology will let both uncrewed and crewed landers touch down exactly where they should, so a future Armstrong won’t have to strong-arm her landing vessel’s controls. Once they’re safely on the surface, robots like Astrobotic’s will explore the moon’s geology, scout out sites for future lunar bases, and carry equipment and material destined for those bases, Horchler says. Eventually, rovers will help mine for minerals and water frozen deep in craters and at the poles.

Astrobotic was founded in 2007 by roboticists at Carnegie Mellon University to compete for the Google Lunar X Prize, which challenged teams to put a robotic spacecraft on the moon. The company pulled out of the competition in 2016, but its mission has continued to evolve. It now has a 20-person staff and contracts with a dozen organizations to deliver payloads to the moon, at US $1.2 million per kilogram, which the company says is the lowest in the industry. Late last year, Astrobotic was one of nine companies that NASA chose to carry payloads to the moon for its 10-year, $2.6 billion Commercial Lunar Payload Services (CLPS) program. The space agency announced the first round of CLPS contracts in late May, with Astrobotic receiving $79.5 million to deliver its payloads by July 2021.

Meanwhile, China, India, and Israel have all launched uncrewed lunar landers or plan to do so soon. The moon will probably be a much busier place by the 60th anniversary of Apollo 11, in 2029.

The moon’s allure is universal, says John Horack, an aerospace engineer at Ohio State University. “The moon is just hanging in the sky, beckoning to us. That beckoning doesn’t know language or culture barriers. It’s not surprising to see so many thinking about how to get to the moon.”

On the moon, there is no GPS, compass-enabling magnetic field, or high-resolution maps for a lunar craft to use to figure out where it is and where it’s going. Any craft will also be limited in the computing, power, and sensors it can carry. Navigating on the moon is more like the wayfinding of the ancient Polynesians, who studied the stars and ocean currents to track their boats’ trajectory, location, and direction.

A spacecraft’s wayfinders are inertial measurement units that use gyroscopes and accelerometers to calculate attitude, velocity, and direction from a fixed starting point. These systems extrapolate from previous estimates, so errors accumulate over time. “Your knowledge of where you are gets fuzzier and fuzzier as you fly forward,” Horchler says. “Our system collapses that fuzziness down to a known point.”
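
That growing fuzziness is easy to reproduce in a toy dead-reckoning simulation: integrating a noisy accelerometer makes the position estimate wander further and further from the truth until an external fix resets it. The noise level and trajectory below are arbitrary; only the shape of the error growth matters:

import numpy as np

# Toy 1-D dead reckoning: integrate a noisy accelerometer and watch position error grow.
rng = np.random.default_rng(0)
dt = 0.1                 # seconds per step
steps = 600              # one simulated minute
accel_noise = 0.02       # m/s^2, arbitrary

true_pos = true_vel = est_pos = est_vel = 0.0
errors = []
for k in range(steps):
    a_true = 0.5 * np.sin(0.05 * k)                  # arbitrary true acceleration profile
    a_meas = a_true + rng.normal(0.0, accel_noise)   # what the IMU actually reports
    true_vel += a_true * dt
    true_pos += true_vel * dt
    est_vel += a_meas * dt
    est_pos += est_vel * dt
    errors.append(abs(est_pos - true_pos))

print("position error after 10 s: %.2f m" % errors[99])
print("position error after 60 s: %.2f m" % errors[-1])

# A terrain-relative fix "collapses that fuzziness down to a known point":
est_pos = true_pos   # position reset by an external measurement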

A conventional guidance system can put a vessel down within an ellipse that’s several kilometers long, but Astrobotic’s system will land a craft within 100 meters of its target. This could allow touchdowns near minable craters, at the heavily shadowed icy poles, or on a landing pad next to a moon base. “It’s one thing to land once at a site, a whole other thing to land repeatedly with precision,” says Horchler.

Astrobotic’s terrain-relative navigation (TRN) sensor contains all the hardware and software needed for smart navigation. It uses 32-bit processors that have worked well on other missions and FPGA hardware acceleration for low-level computer-vision processing. The processors and FPGAs are all radiation hardened. The brick-size unit can be bolted to any spacecraft. The sensor will take a several-megapixel image of the lunar surface every second or so as the lander approaches. Algorithms akin to those for facial recognition will spot unique features in the images, comparing them with stored maps to calculate lunar coordinates and orientation.
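
In conventional computer-vision terms, that matching step is keypoint detection and descriptor matching against a georeferenced map tile. Here is a minimal sketch of the idea using OpenCV’s ORB features and a RANSAC homography; the file names are placeholders, and this illustrates the general technique, not Astrobotic’s radiation-hardened flight code:

import cv2
import numpy as np

# Register a descent-camera frame against a stored, georeferenced map tile.
# File names are placeholders; this is an illustrative sketch, not flight software.
frame = cv2.imread("descent_frame.png", cv2.IMREAD_GRAYSCALE)
map_tile = cv2.imread("map_tile.png", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(nfeatures=2000)
kp_f, des_f = orb.detectAndCompute(frame, None)
kp_m, des_m = orb.detectAndCompute(map_tile, None)

matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des_f, des_m), key=lambda m: m.distance)[:200]

src = np.float32([kp_f[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
dst = np.float32([kp_m[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
H, inliers = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

# Where does the center of the camera frame land in map (hence lunar) coordinates?
h, w = frame.shape
center = cv2.perspectiveTransform(np.float32([[[w / 2, h / 2]]]), H)
print("frame center maps to tile pixel:", center.ravel())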

Those stored maps are a computing marvel. Images taken by NASA’s Lunar Reconnaissance Orbiter (LRO), which has been mapping the moon since 2009, have very different perspectives and shadows from what the lander will see as it descends. This is especially true at the poles, where the angle of the sun changes the lighting dramatically.

So software wizards at Astrobotic are creating synthetic maps. Their software starts with elevation models based on LRO data. It fuses those terrain models with data on the relative positions of the sun, moon, and Earth; the approximate location of the lander; and the texture and reflectiveness of the lunar soil. Finally, a physics-based ray-tracing system, similar to what’s used in animated films to create synthetic imagery, puts everything together.
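
A physics-based ray tracer is far more than a snippet, but the core step of relighting an elevation model for a given sun position can be approximated with a simple Lambertian hillshade. Treat this as a heavily simplified stand-in for Astrobotic’s rendering pipeline; the terrain array and sun angles are made up:

import numpy as np

def hillshade(elevation, pixel_size_m, sun_azimuth_deg, sun_elevation_deg):
    """Lambertian relighting of a terrain model for a given sun position (simplified)."""
    az = np.radians(sun_azimuth_deg)
    zen = np.radians(90.0 - sun_elevation_deg)
    dz_dy, dz_dx = np.gradient(elevation, pixel_size_m)
    slope = np.arctan(np.hypot(dz_dx, dz_dy))
    aspect = np.arctan2(-dz_dx, dz_dy)
    shaded = (np.cos(zen) * np.cos(slope) +
              np.sin(zen) * np.sin(slope) * np.cos(az - aspect))
    return np.clip(shaded, 0, 1)

# Illustrative 100-m-per-pixel synthetic terrain; near the poles the sun sits only a
# few degrees above the horizon, so shadows dominate the rendered image.
dem = np.random.default_rng(1).normal(0, 30, (512, 512)).cumsum(axis=0)
image = hillshade(dem, pixel_size_m=100.0, sun_azimuth_deg=180.0, sun_elevation_deg=3.0)
print(image.shape, image.min(), image.max())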

Horchler pulls up two images of a 50-by-200-kilometer patch near the moon’s south pole. One is a photo taken by the LRO. The other is a digitally rendered version created by the Astrobotic software. I can’t tell them apart. Future TRN systems may be able to build high-fidelity maps on the fly as the lander descends, but that’s impossible with current onboard computing power, Horchler says.

To validate the TRN’s algorithms, Astrobotic has run tests in the Mojave Desert. A 2014 video shows the TRN sensor mounted on a vertical-takeoff-and-landing vehicle made by Masten Space Systems, another company chosen for NASA’s CLPS program. Astrobotic engineers had mapped the scrubby area beforehand, including a potential landing site littered with sandbags to mimic large rocks. In the video, the vehicle takes off without a programmed destination. The navigation sensor scans the ground, matching what it sees to the stored maps. The hazard-detection sensor uses lidar and stereo cameras to map shapes and elevation on the rocky terrain and track the lander’s distance to the ground. The craft lands safely, avoiding the sandbags.

Astrobotic expects its first CLPS mission to launch in July 2021, aboard a United Launch Alliance Atlas V rocket. The 28 payloads aboard the stout Peregrine lander will include NASA scientific instruments, another scientific instrument from the Mexican Space Agency, rovers from startups in Chile and Japan, and personal mementos from paying customers.

In a space that Astrobotic employees call the Tiger’s Den, a large plush tiger keeps an eye on aerospace engineer Jeremy Hardy, who looks like he’s having too much fun. He’s flying a virtual drone onscreen through a landscape of trees and rocks. When he switches to a drone’s-eye view, the landscape fills with green dots, each a unique feature that the drone is tracking, like a corner or an edge.

The program Hardy is using is called AstroNav, which will guide propulsion-powered drones as they fly through the moon’s immense lava tubes. These temperature-stable tunnels are believed to be tens of kilometers long and “could fit whole cities within them,” Horchler says. The drones will map the tunnels as they fly, coming back out to recharge and send images to a lunar station or to Earth.

Hardy’s drone is flying in uncharted territory. AstroNav uses a simultaneous localization and mapping (SLAM) algorithm, a heavyweight technology also used by self-driving cars and office delivery robots to build a map of their surroundings and compute their own location within that map. AstroNav blends data from the drone’s inertial measurement units, stereo-vision cameras, and lidar. The software tracks the green-dotted features across many frames to calculate where the drone is.
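
Full SLAM with IMU, stereo, and lidar fusion is well beyond a short example, but the step of tracking those green-dotted features between frames to estimate motion is classic visual odometry. A minimal monocular sketch with OpenCV follows, assuming known camera intrinsics and placeholder image files; it recovers only the relative rotation and the direction of travel:

import cv2
import numpy as np

# Minimal visual-odometry step: track features between two frames and recover
# the relative camera motion. Placeholder file names and assumed camera intrinsics.
K = np.array([[700.0, 0, 320.0],
              [0, 700.0, 240.0],
              [0, 0, 1.0]])

prev = cv2.imread("frame_000.png", cv2.IMREAD_GRAYSCALE)
curr = cv2.imread("frame_001.png", cv2.IMREAD_GRAYSCALE)

# The "green dots": distinctive corners tracked from one frame to the next.
pts_prev = cv2.goodFeaturesToTrack(prev, maxCorners=800, qualityLevel=0.01, minDistance=7)
pts_curr, status, _ = cv2.calcOpticalFlowPyrLK(prev, curr, pts_prev, None)
good_prev = pts_prev[status.ravel() == 1]
good_curr = pts_curr[status.ravel() == 1]

# Relative pose (rotation R, unit-scale translation direction t) between the frames.
E, mask = cv2.findEssentialMat(good_curr, good_prev, K, method=cv2.RANSAC, threshold=1.0)
_, R, t, _ = cv2.recoverPose(E, good_curr, good_prev, K, mask=mask)
print("rotation:\n", R, "\ntranslation direction:", t.ravel())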

The company has tested AstroNav-guided hexacopters in West Virginian caves, craters in New Mexico, and the Lofthellir lava tube of Iceland. Similar SLAM techniques could guide autonomous lunar rovers as they explore permanently shadowed regions at the poles.

Astrobotic has plenty of competition. Another CLPS contractor is Draper Laboratory, which helped guide Apollo missions. The lab’s navigation system, also built around image processing and recognition, will take Japanese startup Ispace’s lander to the moon.

Draper’s “special sauce” is software developed for the U.S. Army’s Joint Precision Airdrop System, which delivers supplies via parachute in war zones, says space systems program manager Alan Campbell. Within a box called an aerial guidance unit is a downward-facing camera, motors, and a small computer running Draper’s software. The software determines the parachute’s location by comparing terrain features in the camera’s images with commercial satellite images to land the parachute within 50 meters of its target.

The unit also uses Doppler lidar, which detects hazards and measures relative velocity. “When you’re higher up, you can compare images to maps,” says Campbell. At lower altitudes, a different method tracks features and how they’re moving. “Lidar will give you a finer-grain map of hazards.”
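
Doppler lidar turns a frequency shift in the returned light into a line-of-sight velocity: the round trip doubles the shift, so velocity is half the shift times the wavelength. A quick worked example with an assumed 1,550-nanometer lidar and an illustrative shift:

# Line-of-sight velocity from a Doppler lidar frequency shift.
# Wavelength and shift values are illustrative assumptions.
wavelength_m = 1.55e-6          # assumed lidar wavelength
doppler_shift_hz = 6.45e6       # assumed measured frequency shift

velocity_m_s = doppler_shift_hz * wavelength_m / 2   # factor of 2 for the round trip
print(f"line-of-sight velocity: {velocity_m_s:.1f} m/s")   # ~5 m/s for these numbers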

Draper’s long experience dating back to Apollo gives the lab an edge, Campbell adds. “We’ve landed on the moon before, and I don’t think our competitors can say that.”

Other nations with lunar aspirations are also relying on autonomous navigation. China’s Chang’e 4, for example, became the first craft to land on the far side of the moon, in early January. In its landing video, the craft hovers for a few seconds above the surface. “That indicates it has lidar or [a] camera and is taking an image of the field to make sure it’s landing on a safe spot,” says Campbell. “It’s definitely an autonomous system.”

Israel’s lunar spacecraft Beresheet was also expected to make a fully automated touchdown in April. It relied on image-processing software run on a computer about as powerful as a smartphone, according to reports. However, just moments before it was to land, it crashed on the lunar surface due to an apparent engine failure.

In the race to the moon, there will be no one winner, Ohio State’s Horack says. “We need a fair number of successful organizations from around the world working on this.”

Astrobotic is also looking further out. Its AstroNav could be used on other cosmic bodies for which there are no high-resolution maps, like the moons of Jupiter and Saturn. The challenge will be scaling back the software’s appetite for computing power. Computing in space lags far behind computing on Earth, Horchler notes. Everything needs to be radiation tolerant and designed for a thermally challenging environment. “It tends to be very custom,” he says. “You don’t have a new family of processors every two years. An Apple Watch has more computing power than a lot of spacecraft out there.”

The moon will be a crucial test-bed for precision landing and navigation. “A lot of the technology that it takes to land on the moon is similar to what it takes to land on Mars or icy moons like Europa,” Horchler says. “It’s much easier to prove things out at our nearest neighbor than at bodies halfway across the solar system.”

This article appears in the July 2019 print issue as “Turn Left at Tranquility Base.”

Humanoid Robots Teach Coping Skills to Children With Autism

Thursday, July 11th, 2019

Roboticist Ayanna Howard explains what inspired her to work on assistive technologies for kids

THE INSTITUTE

Children with autism spectrum disorder can have a difficult time expressing their emotions and can be highly sensitive to sound, sight, and touch. That sometimes restricts their participation in everyday activities, leaving them socially isolated. Occupational therapists can help them cope better, but the time they’re able to spend is limited and the sessions tend to be expensive.

Roboticist Ayanna Howard, an IEEE senior member, has been using interactive androids to guide children with autism on ways to socially and emotionally engage with others—as a supplement to therapy. Howard is chair of the School of Interactive Computing and director of the Human-Automation Systems Lab at Georgia Tech. She helped found Zyrobotics, a Georgia Tech VentureLab startup that is working on AI and robotics technologies to engage children with special needs. Last year Forbes named Howard, Zyrobotics’ chief technology officer, one of the Top 50 U.S. Women in Tech.

In a recent study, Howard and other researchers explored how robots might help children navigate sensory experiences. The experiment involved 18 participants between the ages of 4 and 12; five had autism, and the rest were meeting typical developmental milestones. Two humanoid robots were programmed to express boredom, excitement, nervousness, and 17 other emotional states. As children explored stations set up for hearing, seeing, smelling, tasting, and touching, the robots modeled what the socially acceptable responses should be.

“If a child’s expression is one of happiness or joy, the robot will have a corresponding response of encouragement,” Howard says. “If there are aspects of frustration or sadness, the robot will provide input to try again.” The study suggested that many children with autism exhibit stronger levels of engagement when the robots interact with them at such sensory stations.
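
At its core, the interaction Howard describes is a mapping from a recognized emotional state to a coaching behavior. The sketch below is only a toy version of that mapping; the state labels and phrasings are invented, not taken from the study:

# Toy mapping from a recognized emotional state to a robot coaching response.
# State names and responses are illustrative, not from the study.
RESPONSES = {
    "happiness": "That looked great! Want to try the next station?",
    "joy": "Nice job! Let's keep going.",
    "frustration": "That's okay. Let's take a breath and try it again together.",
    "sadness": "It's all right. Would you like to try again with me?",
    "nervousness": "We can go slowly. I'll do it first and you can copy me.",
}

def respond(detected_state: str) -> str:
    return RESPONSES.get(detected_state, "Tell me how that felt.")

print(respond("frustration"))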

It is one of many robotics projects Howard has tackled. She has designed robots for researching glaciers, and she is working on assistive robots for the home, as well as an exoskeleton that can help children who have motor disabilities.

 Howard spoke about her work during the Ethics in AI: Impacts of (Anti?) Social Robotics panel session held in May at the IEEE Vision, Innovation, and Challenges Summit in San Diego. You can watch the session on IEEE.tv.

In this interview with The Institute, Howard talks about how she got involved with assistive technologies, the need for a more diverse workforce, and ways IEEE has benefited her career.

FOCUS ON ACCESSIBILITY

Howard was inspired to work on technology that can improve accessibility in 2008 while teaching high school students at a summer camp devoted to science, technology, engineering, and math.

“A young lady with a visual impairment attended camp. The robot programming tools being used at the camp weren’t accessible to her,” Howard says. “As an engineer, I want to fix problems when I see them, so we ended up designing tools to enable access to programming tools that could be used in STEM education.

“That was my starting motivation, and this theme of accessibility has expanded to become a main focus of my research. One of the things about this world of accessibility is that when you start interacting with kids and parents, you discover another world out there of assistive technologies and how robotics can be used for good in education as well as therapy.”

DIVERSITY OF THOUGHT

The Institute asked Howard why it’s important to have a more diverse STEM workforce and what could be done to increase the number of women and others from underrepresented groups.

“The makeup of the current engineering workforce isn’t necessarily representative of the world, which is composed of different races, cultures, ages, disabilities, and socio-economic backgrounds,” Howard says. “We’re creating products used by people around the globe, so we have to ensure they’re being designed for a diverse population. As IEEE members, we also need to engage with people who aren’t engineers, and we don’t do that enough.”

Educational institutions are doing a better job of increasing diversity in areas such as gender, she says, adding that more work is needed because the enrollment numbers still aren’t representative of the population and the gains don’t necessarily carry through after graduation.

 “There has been an increase in the number of underrepresented minorities and females going into engineering and computer science,” she says, “but data has shown that their numbers are not sustained in the workforce.”

ROLE MODEL

Because there are more underrepresented groups on today’s college campuses that can form a community, the lack of engineering role models—although a concern on campuses—is more extreme for preuniversity students, Howard says.

 “Depending on where you go to school, you may not know what an engineer does or even consider engineering as an option,” she says, “so there’s still a big disconnect there.”

Howard has been involved for many years in math- and science-mentoring programs for at-risk high school girls. She tells them to find what they’re passionate about and combine it with math and science to create something. She also advises them not to let anyone tell them that they can’t.

Howard’s father is an engineer. She says he never encouraged or discouraged her to become one, but when she broke something, he would show her how to fix it and talk her through the process. Along the way, he taught her a logical way of thinking she says all engineers have.

“When I would try to explain something, he would quiz me and tell me to ‘think more logically,’” she says.

Howard earned a bachelor’s degree in engineering from Brown University, in Providence, R.I., then she received both a master’s and doctorate degree in electrical engineering from the University of Southern California. Before joining the faculty of Georgia Tech in 2005, she worked at NASA’s Jet Propulsion Laboratory at the California Institute of Technology for more than a decade as a senior robotics researcher and deputy manager in the Office of the Chief Scientist.

ACTIVE VOLUNTEER

Howard’s father was also an IEEE member, but that’s not why she joined the organization. She says she signed up when she was a student because, “that was something that you just did. Plus, my student membership fee was subsidized.”

She kept the membership as a grad student because of the discounted rates members receive on conferences.

Those conferences have had an impact on her career. “They allow you to understand what the state of the art is,” she says. “Back then you received a printed conference proceeding and reading through it was brutal, but by attending it in person, you got a 15-minute snippet about the research.”

Howard is an active volunteer with the IEEE Robotics and Automation and the IEEE Systems, Man, and Cybernetics societies, holding many positions and serving on several committees.

“I value IEEE for its community,” she says. “One of the nice things about IEEE is that it’s international.”

A Q&A with Cruise’s head of AI, Hussein Mehanna

Wednesday, July 10th, 2019

AI engineers of all descriptions—the autonomous vehicle industry wants you

In 2016, Cruise, an autonomous vehicle startup acquired by General Motors, had about 50 employees. At the beginning of 2019, the headcount at its San Francisco headquarters—mostly software engineers, mostly working on projects connected to machine learning and artificial intelligence—hit around 1000. Now that number is up to 1500, and by the end of this year it’s expected to reach about 2000, sprawling into a recently purchased building that had housed Dropbox. And that’s not counting the 200 or so tech workers that Cruise is aiming to install in a Seattle, Wash., satellite development center and a handful of others in Phoenix, Ariz., and Pasadena, Calif.

Cruise’s recent hires aren’t all engineers—it takes more than engineering talent to manage operations. And there are hundreds of so-called safety drivers that are required to sit in the 180 or so autonomous test vehicles whenever they roam the San Francisco streets. But that’s still a lot of AI experts to be hiring in a time of AI engineer shortages.

Hussein Mehanna, head of AI/ML at Cruise, says the company’s hiring efforts are on track, due to the appeal of the challenge of autonomous vehicles in drawing in AI experts from other fields. Mehanna himself joined Cruise in May from Google, where he was director of engineering at Google Cloud AI. Mehanna had been there about a year and a half, a relatively quick career stop after a short stint at Snap following four years working in machine learning at Facebook.

Mehanna has been immersed in AI and machine learning research since his graduate studies in speech recognition and natural language processing at the University of Cambridge. I sat down with Mehanna to talk about his career, the challenges of recruiting AI experts and autonomous vehicle development in general—and some of the challenges specific to San Francisco. We were joined by Michael Thomas, Cruise’s manager of AI/ML recruiting, who had also spent time recruiting AI engineers at Google and then Facebook.

IEEE Spectrum: When you were at Cambridge, did you think AI was going to take off like a rocket?

Mehanna: Did I imagine that AI was going to be as dominant and prevailing and sometimes hyped as it is now? No. I do recall in 2003 that my supervisor and I were wondering if neural networks could help at all in speech recognition. I remember my supervisor saying if anyone could figure out how to use a neural net for speech he would give them a grant immediately. So he was on the right path. Now neural networks have dominated vision, speech, and language [processing]. But that boom started in 2012.

I didn’t [expect it], but I certainly aimed for it when [I was at] Microsoft, where I deliberately pushed my career towards machine learning instead of big data, which was more popular at the time. And [I aimed for it] when I joined Facebook.

In the early days, Facebook wasn’t that open to PhDs or researchers. It actually had a negative sentiment about researchers. And then Facebook shifted to becoming one of the key places where PhD students wanted to do internships or join after they graduated. It was a mindset shift; they were [once] at a point in time where they thought what was needed for success wasn’t research, but now it’s different.

There was definitely an element of risk [in taking a machine learning career path], but I was very lucky, things developed very fast.

IEEE Spectrum: Is it getting harder or easier to find AI engineers to hire, given the reported shortages?

Mehanna: There is a mismatch [between job openings and qualified engineers], though it is hard to quantify it with numbers. There is good news as well: I see a lot more students diving deep into machine learning and data in their [undergraduate] computer science studies, so it’s not as bleak as it seems. But there is massive demand in the market.

Here at Cruise, demand for AI talent is just growing and growing. It might be saturating or slowing down at other kinds of companies, though, [which] are leveraging more traditional applications—ad prediction, recommendations—that have been out there in the market for a while. These are more mature, better understood problems.

I believe autonomous vehicle technology is the most difficult AI problem out there. The magnitude of the challenge of these problems is 1000 times more than other problems. They aren’t as well understood yet, and they require far deeper technology. And also the quality at which they are expected to operate is off the roof.

The autonomous vehicle problem is the engineering challenge of our generation. There’s a lot of code to write, and if we think we are going to hire armies of people to write it line by line, it’s not going to work. Machine learning can accelerate the process of generating the code, but that doesn’t mean we aren’t going to have engineers; we actually need a lot more engineers.

Sometimes people worry that AI is taking jobs. It is taking some developer jobs, but it is actually generating other developer jobs as well, protecting developers from the mundane and helping them build software faster and faster.

IEEE Spectrum: Are you concerned that the demand for AI in industry is drawing out the people in academia who are needed to educate future engineers, that is, the “eating the seed corn” problem?

Mehanna: There are some negative examples in the industry, but that’s not our style. We are looking for collaborations with professors, we want to cultivate a very deep and respectful relationship with universities.

And there’s another angle to this: Universities require a thriving industry for them to thrive. It is going to be extremely beneficial for academia to have this flourishing industry in AI, because it attracts more students to academia. I think we are doing them a fantastic favor by building these career opportunities. This is not the same as in my early days, [when] people told me “don’t go to AI; go to networking, work in the mobile industry; mobile is flourishing.”

IEEE Spectrum: Where are you looking as you try to find a thousand or so engineers to hire this year?

Thomas: We look for people who want to use machine learning to solve problems. They can be in many different industries—in the financial markets, in social media, in advertising. The autonomous vehicle industry is in its infancy. You can compare it to mobile in the early days: When the iPhone first came out, everyone was looking for developers with mobile experience, but you weren’t going to find them unless you went straight to Apple, [so you had to hire other kinds of engineers]. This is the same type of thing: it is so new that you aren’t going to find experts in this area, because we are all still learning.

Mehanna: Because autonomous vehicle technology is the new frontier for AI experts, [the number of] people with both AI and autonomous vehicle experience is quite limited. So we are acquiring AI experts wherever they are, and helping them grow into the autonomous vehicle area. You don’t have to be an autonomous vehicle expert to flourish in this world. It’s not too late to move; even though there is a lot of great tech developed, there’s even more innovation ahead, so now would be a great time for AI experts working on other problems or applications to shift their attention to autonomous vehicles.

It feels like the Internet in 1980. It’s about to happen, but there are endless applications [to be developed over] the next few decades. Even if we can get a car to drive safely, there is the question of how can we tune the ride comfort, and then applying it all to different cities, different vehicles, different driving situations, and who knows to what other applications.

I can see how I can spend a lifetime career trying to solve this problem.

IEEE Spectrum: Why are you doing most of your development in San Francisco?

Mehanna: I think the best talent of the world is in Silicon Valley, and solving the autonomous vehicle problem is going to require the best of the best. It’s not just the engineering talent that is here, but [also] the entrepreneurial spirit. Solving the problem just as a technology is not going to be successful; you need to solve the product and the technology together. And the entrepreneurial spirit is one of the key reasons Cruise secured $7.5 billion in funding [besides GM, the company has a number of outside investors, including Honda and Softbank]. That [funding] is another area where Cruise is ahead of many others, because this problem requires deep resources.

[And then there is the driving environment.] When I speak to my peers in the industry, they have a lot of respect for us, because the problems to solve in San Francisco technically are an order of magnitude harder. It is a tight environment, with a lot of pedestrians, and driving patterns that, let’s put it this way, are not necessarily the best in the nation. Which means we are seeing more problems ahead of our competitors, which gets us to better [software]. I think if you can do an autonomous vehicle in San Francisco you can do it almost anywhere.

Robots Have a Hard Time Grasping These "Adversarial Objects"

Wednesday, July 10th, 2019

To make robot grasping more robust, researchers are designing objects that are as difficult as possible for robots to manipulate

There’s been a bunch of research recently into adversarial images, which are images of things that have been modified to be particularly difficult for computer vision algorithms to accurately identify. The idea is that these kinds of images can be used to help design more robust computer vision algorithms, because their “adversarial” nature is sort of a deliberate worst-case scenario—if your algorithm can handle adversarial images, then it can probably handle most other things.
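
For readers who haven’t run into adversarial images, the classic construction is the fast gradient sign method: nudge every pixel a small step in whichever direction increases the classifier’s loss. The sketch below shows that generic technique (it is not the Berkeley group’s method for grasping), with an untrained model and random tensors standing in for a real classifier, image, and label:

import torch
import torch.nn.functional as F
import torchvision.models as models

# Fast gradient sign method (FGSM): the generic adversarial-image construction.
# Untrained model and random tensors are placeholders; in practice you would use
# a pretrained classifier, a real preprocessed image, and its true label.
model = models.resnet18().eval()
image = torch.rand(1, 3, 224, 224)
label = torch.tensor([207])          # placeholder class index
epsilon = 0.01                       # per-pixel perturbation budget

image.requires_grad_(True)
loss = F.cross_entropy(model(image), label)
loss.backward()

# Step every pixel slightly in the direction that increases the loss.
adversarial = (image + epsilon * image.grad.sign()).clamp(0, 1).detach()

with torch.no_grad():
    print("loss on original image:   ", loss.item())
    print("loss on adversarial image:", F.cross_entropy(model(adversarial), label).item())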

Researchers at UC Berkeley have been extending this concept to robot grasping, with physical adversarial objects carefully designed to be tricky for conventional robot grippers to pick up. All it takes is a slight tweak to straightforward three-dimensional shapes, and a standard two-finger gripper will have all kinds of trouble finding a solid grasp.

Watch This Drone Explode Into Maple Seed Microdrones in Midair

Tuesday, July 9th, 2019

Starting out together and then splitting apart makes these bio-inspired drones fly farther and more precisely

As useful as conventional fixed-wing and quadrotor drones have become, they still tend to be relatively complicated, expensive machines that you really want to be able to use more than once. When a one-way trip is all that you have in mind, you want something simple, reliable, and cheap, and we’ve seen a bunch of different designs for drone gliders that more or less fulfill those criteria. 

For an even simpler gliding design, you want to minimize both airframe mass and control surfaces, and the maple tree provides some inspiration in the form of samara, those distinctive seed pods that whirl to the ground in the fall. Samara are essentially just an unbalanced wing that spins, and while the natural ones don’t steer, adding an actuated flap to the robotic version and moving it at just the right time results in enough controllability to aim for a specific point on the ground.
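
The phrase “at just the right time” is the crux: because the whole wing spins, the flap steers only if it is deflected during the slice of each rotation when the wing sweeps past the direction you want to go. The sketch below is a toy illustration of that phase-gated actuation, with invented angles and no aerodynamics; it is not SUTD’s controller:

# Toy phase-gated flap command for a spinning samara-style glider.
# Angles and the gate width are illustrative; no aerodynamics are modeled.
def flap_command(rotor_azimuth_deg, bearing_to_target_deg, gate_half_width_deg=45):
    """Deflect the flap only while the wing sweeps through the sector facing the target."""
    error = (rotor_azimuth_deg - bearing_to_target_deg + 180) % 360 - 180
    return 1.0 if abs(error) < gate_half_width_deg else 0.0

# One spin revolution sampled every 30 degrees, with the target due east (90 degrees).
for az in range(0, 360, 30):
    print(f"azimuth {az:3d} deg -> flap {'down' if flap_command(az, 90) else 'neutral'}")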

Roboticists at the Singapore University of Technology and Design (SUTD) have been experimenting with samara-inspired drones, and in a new paper in IEEE Robotics and Automation Letters they explore what happens if you attach five of the drones together and then separate them in midair.

Simulating a Medical Device Interaction with a Biological System Webinar

Tuesday, July 9th, 2019

If you are interested in learning how to model a medical device interacting with physiology, then tune into this webinar

If you are interested in learning how to model a medical device interacting with physiology, then tune into this webinar featuring guest speaker Paul Belk from Boston Scientific Corporation.

Modeling physiologic systems uses the same principles applied to other multiphysics applications, but it is often complicated by the challenges in characterizing the properties of the biological tissues and processes involved. These challenges make it even more important to be able to analyze quantitatively through numerical simulation the interactions between the variable biological phenomena and the device.

In this webinar, we will present a model of catheter ablation from a large vessel. We will begin by setting up the coupled physics, including electric currents, laminar flow of blood, and heat transfer by conduction and convection. We will then show how to characterize the properties of the tissues involved and how the COMSOL Multiphysics® software can be used to simulate a closed-loop control system to stabilize the energy flow delivered to the surrounding tissues. The simulation results will be used to characterize how intended physiologic results can be affected by uncontrolled physiologic changes and which control systems are most robust.
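
The closed-loop idea, holding the delivered energy steady as tissue conditions drift, can be illustrated outside COMSOL with a generic proportional-integral controller driving a crude first-order thermal model. Every number below (time constant, gains, setpoint) is an assumption for illustration, not a value from the webinar’s model:

# Generic PI control of delivered power against a crude first-order thermal model.
# All parameters are illustrative assumptions, not values from the webinar's model.
setpoint_c = 60.0        # target tissue temperature at the catheter tip, deg C
temp_c = 37.0            # starting tissue temperature
tau_s = 5.0              # assumed thermal time constant
gain_c_per_w = 1.2       # assumed steady-state deg C of heating per watt delivered
kp, ki, dt = 2.0, 0.8, 0.05
integral = 0.0

for step in range(int(30 / dt)):                                 # 30 simulated seconds
    error = setpoint_c - temp_c
    integral += error * dt
    power_w = max(0.0, min(50.0, kp * error + ki * integral))    # clamp the actuator
    # First-order response of tissue temperature to the delivered power.
    temp_c += dt / tau_s * (37.0 + gain_c_per_w * power_w - temp_c)

print(f"temperature after 30 s: {temp_c:.1f} deg C, power: {power_w:.1f} W")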

You can ask questions at the end of the webinar during the Q&A session.

PRESENTERS:

Paul Belk, Fellow, Process Engineering, Boston Scientific Corporation

Paul Belk has a PhD in medical physics and is a Fellow in process engineering at Boston Scientific Corporation, where he works on the development of diagnostic and therapeutic medical devices. He has been using simulation of all types for more than 20 years as an integral part of the research and development process. For the past six years, he has been using the COMSOL Multiphysics® software (whenever he gets a chance) to study problems including heat transfer and fluid dynamics in tissue, field distributions, and electrochemical processes at metal surfaces.

Aline Tomasian, Applications Engineer, COMSOL

Aline Tomasian is an applications engineer at COMSOL, specializing in high- and low-frequency electromagnetics. She holds a BS in physics from Worcester Polytechnic Institute.

Attendees of this IEEE Spectrum webinar have the opportunity to earn PDHs or Continuing Education Certificates! To request your certificate, you will need a webinar code: once you have registered and viewed the webinar, send a request to gs-webinarteam@ieeeglobalspec.com. Then complete the form here: http://innovationatwork.ieee.org/spectrum/

Attendance is free. To access the event please register.

NOTE: By registering for this webinar you understand and agree that IEEE Spectrum will share your contact information with the sponsors of this webinar and that both IEEE Spectrum and the sponsors may send email communications to you in the future.