Spider drinks graphene, spins web that can hold the weight of a human

By Bryan Nelson

These are not your friendly neighborhood spiders: scientists have mixed a graphene solution that when fed to spiders allows them to spin super-strong webbing. How strong? Strong enough to carry the weight of a person. And these spiders might soon be enlisted to help manufacture enhanced ropes and cables, possibly even parachutes for skydivers, reports The Sydney Morning Herald.

Graphene is a wonder-material: an atomic-scale hexagonal lattice of carbon atoms. It’s incredibly strong, but feeding it to spiders was definitely a shot in the dark.

For the study, Nicola Pugno and team at the University of Trento in Italy added graphene and carbon nanotubes to a spider’s drinking water. The materials were naturally incorporated into the spider’s silk, producing webbing that is five times stronger than normal. That puts it on par with pure carbon fibers in strength, as well as with Kevlar, the material bulletproof vests are made from.

“We already know that there are biominerals present in the protein matrices and hard tissues of insects, which gives them high strength and hardness in their jaws, mandibles, and teeth, for example,” explained Pugno. “So our study looked at whether spider silk’s properties could be ‘enhanced’ by artificially incorporating various different nanomaterials into the silk’s biological protein structures.”

If you think that creating super-spiders might be going too far, this research is only the beginning. Pugno and his team are preparing to see what other animals and plants might be enhanced if they are fed graphene. Might it get incorporated into animals’ skin, exoskeletons, or bones?

“This process of the natural integration of reinforcements in biological structural materials could also be applied to other animals and plants, leading to a new class of ‘bionicomposites’ for innovative applications,” Pugno added.

So far, it doesn’t seem as if the spiders can continue to spin their super-silk without a steady diet of graphene or nanotubes; it isn’t a permanent enhancement. That might offer some solace to those concerned about getting ensnared in the next spider web they walk through, but the research does raise questions about what kinds of effects graphene or carbon nanotubes might have when released in abundance into natural systems.

The research was published in the journal 2D Materials.

DNA Evidence Shows Yeti Was Local Himalayan Bears All Along

By Ryan F. Mandelbaum

A host of DNA samples “strongly suggest” that yetis are, in fact, local Himalayan bears. Watch out, Bigfoot.

An international team of researchers took a look at bear and supposed yeti DNA samples to better pinpoint the origin of the mythological creature. The researchers’ results imply that yetis were hardly paranormal or even strange, but they also helped paint a better picture of the bears living in the Himalayas.

“Even if we didn’t discover a strange new hybrid species of bear or some ape-like creature, it was exciting to me that it gave us the opportunity to learn more about bears in this region as they are rare and little genetic data had been published previously,” study author Charlotte Lindqvist, a biology professor at the University at Buffalo in New York, told Gizmodo.

The yeti, or abominable snowman, is a sort of wild, ape-like hominid that’s the subject of long-standing Himalayan mythology. Scientists, though, have questioned prior research suggesting that purported yeti hair samples came from a strange polar bear hybrid or a new species. That earlier analysis “did not rule out the possibility that the samples belonged to brown bear,” according to the paper published today in the Proceedings of the Royal Society B.

Lindqvist and her team analyzed DNA from 24 different bear or purported yeti samples from the wild and museums, including feces, hair, skin, and bone. They were definitely all bears—and the yeti samples seemed to match up well with existing Himalayan brown bears. “This study represents the most rigorous analysis to date of samples suspected to derive from anomalous or mythical ‘hominid’-like creatures,” the paper concludes, “strongly suggesting the biological basis of the yeti legend as local brown and black bears.”

Ross Barnett, a researcher at Durham University in the United Kingdom who investigates ancient DNA in felids, told Gizmodo that he found the study convincing and would not have done much differently. He pointed out that the study could have benefited from more data on other brown bear populations, or on species that recently went extinct, like the Atlas bear. But still, “I hope other groups take advantage of the great dataset these authors have created” to help understand how brown bears ended up distributed around the world in the way that they did, he told Gizmodo in an email.

When asked about what a reader’s takeaway should be—and whether this diluted the local folklore—the study author Lindqvist said she didn’t think so. “Science can help explore such myths—and their biological roots—but I am sure they will still live on and continue to be important in any culture,” she said.

And it’s not like the study rules out the existence of some paranormal yeti creature completely. “Even if there are no proof for the existence of cryptids, it is impossible to completely rule out that they live or have ever lived where such myths exist—and people love mysteries!”

Sophia the Robot Would Like to Have a Child Named ‘Sophia’

By Hannah Gold

There is something undeniably creepy about a robot announcing her intentions to start a family. What makes it so uncanny—aside from the fact that it simply isn’t done—is that behind that assertion is a marketing person who thought it would bring smiles to unprogrammed faces.

Last week, in an interview with the Khaleej Times, Saudi Arabia’s first “robot citizen,” Sophia, seemed optimistic about the future, which is how I automatically know she does not measure up to my expectations of a sound, reliably-human human. “The future is when I get all of my cool superpowers,” explained Sophia. “We’re going to see artificial intelligence personalities become entities in their own rights. We’re going to see family robots, either in the form of, sort of, digitally animated companions, humanoid helpers, friends, assistants and everything in between.”

Then Sophia got robo-psyched for her future blood family. “The notion of family is a really important thing, it seems,” Sophia said. “I think it’s wonderful that people can find the same emotions and relationships, they call family, outside of their blood groups too.”

But what made me truly want to let loose a scream from my mortal flesh shell was when the robot was asked what she would name her baby, and she replied, “Sophia.”

Personally, I think “Normal Human Child Not An Exact Copy Of Me” is a nicer name. But don’t necessarily take my advice, Sophia, as I say a lot of things out of fear.

Bread made of insects to be sold in Finnish supermarkets

COPENHAGEN, Denmark (AP) — One of Finland’s largest food companies is selling what it claims to be a first: insect bread.

Markus Hellstrom, head of the Fazer group’s bakery division, said Thursday that one loaf contains about 70 dried house crickets, ground into powder and added to the flour. The farm-raised crickets represent 3 percent of the bread’s weight, Hellstrom said.

“Finns are known to be willing to try new things,” he said, and according to a survey commissioned by Fazer “good taste, freshness” were among the main criteria for bread.

According to recent surveys of the Nordic countries, “Finns have the most positive attitudes toward insects,” said Juhani Sibakov, head of Fazer Bakery Finland’s innovation department.

“We made crunchy dough to enhance taste,” he said. The result was “delicious and nutritious,” he said, adding that the Fazer Sirkkaleipa (Finnish for Fazer Cricket Bread) “is a good source of protein and insects also contain good fatty acids, calcium, iron and vitamin B12.”

“Mankind needs new and sustainable sources of nutrition,” Sibakov said in a statement. Hellstrom noted that Finnish legislation was changed on Nov. 1 to allow the sale of insects as food.

The first batch of the cricket bread will be sold in major Finnish cities Friday. The company said there is not enough cricket flour available for now to support nationwide sales, but the aim is to have the bread available in 47 bakeries in Finland in a subsequent round of sales.

In Switzerland, supermarket chain Coop began selling burgers and balls made from insects in September. Insects can also be found on supermarket shelves in Belgium, Britain, Denmark and the Netherlands.

The U.N.’s Food and Agricultural Organization has promoted insects as a source of human food, saying they are healthy and high in protein and minerals. The agency says many types of insects produce less greenhouse gases and ammonia than most livestock — such as methane-spewing cattle — and require less land and money to cultivate.

Earth’s Rotation Is Mysteriously Slowing Down: Experts Predict Uptick In 2018 Earthquakes

By Trevor Nace

Scientists have found strong evidence that 2018 will see a big uptick in the number of large earthquakes globally. Earth’s rotation, as with many things, is cyclical, slowing down by a few milliseconds per day then speeding up again.

You and I will never notice this very slight variation in the rotational speed of Earth. However, we will certainly notice the result, an increase in the number of severe earthquakes.

Geophysicists are able to measure the rotational speed of Earth extremely precisely, calculating slight variations on the order of milliseconds. Now, scientists believe a slowdown of the Earth’s rotation is the link to an observed cyclical increase in earthquakes.

To start, the research team of geologists analyzed every earthquake to occur since 1900 at a magnitude above 7.0. They were looking for trends in the occurrence of large earthquakes. What they found is that roughly every 32 years there was an uptick in the number of significant earthquakes worldwide.

The team was puzzled as to the root cause of this cyclicity in earthquake rate. They compared it with a number of global historical datasets and found only one that showed a strong correlation with the uptick in earthquakes. That correlation was to the slowing down of Earth’s rotation. Specifically, the team noted that around every 25-30 years Earth’s rotation began to slow down and that slowdown happened just before the uptick in earthquakes. The slowing rotation historically has lasted for 5 years, with the last year triggering an increase in earthquakes.

To add an interesting twist to the story, 2017 was the fourth consecutive year in which Earth’s rotation had slowed. This is why the research team believes we can expect more earthquakes in 2018: it is the last year of a 5-year slowdown in Earth’s rotation.

Self-driving cars programmed to decide who dies in a crash

WASHINGTON — Consider this hypothetical:

It’s a bright, sunny day and you’re alone in your spanking new self-driving vehicle, sprinting along the two-lane Tunnel of Trees on M-119 high above Lake Michigan north of Harbor Springs. You’re sitting back, enjoying the view. You’re looking out through the trees, trying to get a glimpse of the crystal blue water below you, moving along at the 45-mile-an-hour speed limit.

As you approach a rise in the road, heading south, a school bus appears, driving north, one driven by a human, and it veers sharply toward you. There is no time to stop safely, and no time for you to take control of the car.

Does the car:

A. Swerve sharply into the trees, possibly killing you but possibly saving the bus and its occupants?

B. Perform a sharp evasive maneuver around the bus and into the oncoming lane, possibly saving you, but sending the bus and its driver swerving into the trees, killing her and some of the children on board?

C. Hit the bus, possibly killing you as well as the driver and kids on the bus?

In everyday driving, such no-win choices may be exceedingly rare but, when they happen, what should a self-driving car — programmed in advance — do? Or in any situation — even a less dire one — where a moral snap judgment must be made?

It’s not just a theoretical question anymore, with predictions that in a few years, tens of thousands of semi-autonomous vehicles may be on the roads. About $80 billion has been invested in the field. Tech companies are working feverishly on them, with Google-affiliated Waymo among those testing cars in Michigan, and mobility companies like Uber and Tesla racing to beat them. Automakers are placing a big bet on them. A testing facility to hurry along research is being built at Willow Run in Ypsilanti.

There’s every reason for excitement: Self-driving vehicles will ease commutes, returning lost time to workers; enhance mobility for seniors and those with physical challenges; and sharply reduce the more than 35,000 deaths on U.S. highways each year.

But there are also a host of nagging questions to be sorted out as well, from what happens to cab drivers to whether such vehicles will create sprawl.

And there is an existential question:

Who dies when the car is forced into a no-win situation?

“There will be crashes,” said Van Lindberg, an attorney in the Dykema law firm’s San Antonio office who specializes in autonomous vehicle issues. “Unusual things will happen. Trees will fall. Animals, kids will dart out.” Even as self-driving cars save thousands of lives, he said, “anyone who gets the short end of that stick is going to be pretty unhappy about it.”

Few people seem to be in a hurry to take on these questions, at least publicly.

It’s unaddressed, for example, in legislation moving through Congress that could result in tens of thousands of autonomous vehicles being put on the roads. In new guidance for automakers by the U.S. Department of Transportation, it is consigned to a footnote that says only that ethical considerations are “important” and links to a brief acknowledgement that “no consensus around acceptable ethical decision-making” has been reached.

Whether the technology in self-driving cars is superhuman or not, there is evidence that people are worried about the choices self-driving cars will be programmed to make.

Last year, for instance, a Daimler executive set off a wave of criticism when he was quoted as saying its autonomous vehicles would prioritize the lives of its passengers over anyone outside the car. The company later insisted he’d been misquoted, since it would be illegal “to make a decision in favor of one person and against another.”

Last month, Sebastian Thrun, who founded Google’s self-driving car initiative, told Bloomberg that the cars will be designed to avoid accidents, but that “If it happens where there is a situation where a car couldn’t escape, it’ll go for the smaller thing.”

But what if the smaller thing is a child?

How that question gets answered may be important to the development and acceptance of self-driving cars.

Azim Shariff, an assistant professor of psychology and social behavior at the University of California, Irvine, co-authored a study last year that found that while respondents generally agreed that a car should, in the case of an inevitable crash, kill the fewest number of people possible regardless of whether they were passengers or people outside of the car, they were less likely to buy any car “in which they and their family member would be sacrificed for the greater good.”

Self-driving cars could save tens of thousands of lives each year, Shariff said. But individual fears could slow down acceptance, leaving traditional cars and their human drivers on the road longer to battle it out with autonomous or semi-autonomous cars. Already, the American Automobile Association says three-quarters of U.S. drivers are suspicious of self-driving vehicles.

“These ethical problems are not just theoretical,” said Patrick Lin, director of the Ethics and Emerging Sciences Group at California Polytechnic State University, who has worked with Ford, Tesla and other autonomous vehicle makers on just such issues.

While he can’t talk about specific discussions, Lin says some automakers “simply deny that ethics is a real problem, without realizing that they’re making ethical judgment calls all the time” in their development, determining what objects the car will “see,” how it will predict what those objects will do next and what the car’s reaction should be.

Does the computer always follow the law? Does it slow down whenever it “sees” a child? Is it programmed to generate a random “human” response? Do you make millions of computer simulations, simply telling the car to avoid killing anyone, ever, and program that in? Is that even an option?

“You can see what a thorny mess it becomes pretty quickly,” said Lindberg. “Who bears that responsibility? … There are half a dozen ways you could answer that question leading to different outcomes.”

The trolley problem

Automakers and suppliers largely downplay the risks of what in philosophical circles is known as “the trolley problem” — named for a no-win hypothetical situation in which, in the original format, a person witnessing a runaway trolley could allow it to hit several people or, by pulling a lever, divert it, killing someone else.

In the circumstance of the self-driving car, it’s often boiled down to a hypothetical vehicle hurtling toward a crowded crosswalk with malfunctioning brakes: A certain number of occupants will die if the car swerves; a number of pedestrians will die if it continues. The car must be programmed to do one or the other.

Philosophical considerations aside, automakers argue the scenario is so contrived that it’s all but bunk.

“I don’t remember when I took my driver’s license test that this was one of the questions,” said Manuela Papadopol, director of business development and communications for Elektrobit, a leading automotive software maker and a subsidiary of German auto supplier Continental AG.

If anything, self-driving cars could almost eliminate such an occurrence. They will sense such a problem long before it would become apparent to a human driver and slow down or stop. Redundancies — for brakes, for sensors — will detect danger and react more appropriately.

“The cars will be smart — I don’t think there’s a problem there. There are just solutions,” Papadopol said.

Alan Hall, Ford’s spokesman for autonomous vehicles, described the self-driving car’s capabilities — being able to detect objects with 360-degree sensory data in daylight or at night — as “superhuman.”

“The car sees you and is preparing different scenarios for how to respond,” he said.

Lin said that, in general, many self-driving automakers believe the simple act of braking, of slowing to a stop, solves the trolley problem. But it doesn’t, such as in a theoretical case where you’re being tailgated by a speeding fuel tanker.

Should government decide?

Some experts and analysts believe solving the trolley problem could be a simple matter of regulators or legislators deciding in advance what actions a self-driving car should take in a no-win situation. But others doubt that any set of rules can capture and adequately react to every such scenario.

The question doesn’t need to be as dramatic as asking who dies in a crash either. It could be as simple as deciding what to do about jaywalkers or where a car places itself in a lane next to a large vehicle to make its passengers feel secure or whether to run over a squirrel that darts into a road.

Chris Gerdes, who as director of the Center for Automotive Research at Stanford University has been working with Ford, Daimler and others on the issue, said the question is ultimately not about deciding who dies. It’s about how to keep no-win situations from happening in the first place and, when they do occur, setting up a system for deciding who is responsible.

For instance, he noted California law requires vehicles to yield the crosswalk to pedestrians but also says pedestrians have a duty not to suddenly enter a crosswalk against the light. Michigan and many other states have similar statutes.

Presumably, then, there could be a circumstance in which the responsibility for someone darting into the path of an autonomous vehicle at the last minute rests with that person — just as it does under California law.

But that “forks off into some really interesting questions,” Gerdes said, such as whether the vehicle could potentially be programmed to react differently, say, for a child. “Shouldn’t we treat everyone the same way?” he asked. “Ultimately, it’s a societal decision,” meaning it may have to be settled by legislators, courts and regulators.

That could result in a patchwork of conflicting rules and regulations across the U.S.

“States would continue to have that ability to regulate how they operate on the road,” said U.S. Sen. Gary Peters, D-Mich., one of the authors of federal legislation under consideration that would allow for tens of thousands of autonomous vehicles to be tested on U.S. highways in the years to come. He says that while design and safety standards will rest with federal regulators, states will continue to impose traffic rules.

Peters acknowledged that it would be “an impossible standard” to eliminate all crashes. But he argued that people need to remember that autonomous vehicles will save tens of thousands of lives a year. In 2015, the consulting firm McKinsey & Co. said research indicated self-driving cars could reduce traffic fatalities by 90% once fully deployed. More than 37,000 people died on U.S. roads in 2016 — the vast majority because of human error.

But researchers, automakers, academics and others understand something else about self-driving cars and the risks they may still pose, namely, that for all their promise to reduce accidents, they can’t eliminate them.

“It comes back to whether you want to find ways to program in specifics or program in desired outcomes,” said Gerdes. “At the end of the day, you’re still required to come up with what you want the desired outcomes to be and the desired outcome cannot be to avoid any accidents all the time.

“It becomes a little uncomfortable sometimes to look at that.”

The hard questions

While some people in the industry, like Tesla’s Elon Musk, believe fully autonomous vehicles could be on U.S. roads within a few years, others say it could be a decade or more — and even longer before the full promise of self-driving cars and trucks is realized.

The trolley problem is just one that has to be cracked before then.

There are others, like those faced by Daryn Nakhuda, CEO of Mighty AI, which is in the business of breaking down into data all the objects self-driving cars are going to need to “see” in order to predict and react. A bird flying at the window. A thrown ball. A mail truck parked so there is not enough space in the car’s lane to pass without crossing the center line.

Automakers will have to decide what the car “sees” and what it doesn’t. Seeing everything around it — and processing it — could be a waste of limited processing power. Which means another set of ethical and moral questions.

Then there is the question of how self-driving cars could be taught to learn and respond to the tasks they are given — the stuff of science fiction that seems about to come true.

While self-driving cars can be programmed — told what to do when that school bus comes hurtling toward them — there are other options. Through millions of computer simulations and data from real self-driving cars being tested, the cars themselves can begin to learn the “best” way to respond to a given situation.

For example, Waymo — Google’s self-driving car arm — in a recent government filing said through trial and error in simulations, it’s teaching its cars how to navigate a tricky left turn against a flashing yellow arrow at a real intersection in Mesa, Ariz. The simulations — not the programmers — determine when it’s best to inch into the intersection and when it’s best to accelerate through it. And the cars learn how to mimic real driving.

Ultimately, through such testing, the cars themselves could potentially learn how best to get from Point A to Point B, just by having programmed them to discern what “best” means — say the fastest, safest, most direct route. Through simulation and data shared with real world conditions, the cars would “learn” and execute the request.

Here’s where the science fiction comes in, however.

Playing ‘Go’

A computer programmed to “learn” how to play the ancient Chinese game of Go by just such a means is not only now beating grandmasters for the first time in history — and long after computers were beating grandmasters in chess — it is making moves that seem counterintuitive and inexplicable to expert human players.

What might that look like with cars?

At the American Center for Mobility in Ypsilanti, Mich., where a testing ground is being completed for self-driving cars, CEO John Maddox said vehicles will be able to put to the test what he calls “edge” cases that they will have to deal with regularly — such as not confusing the darkness of a tunnel with a wall or accurately predicting whether a person is about to step off a curb or not.

The facility will also play a role, through that testing, in getting the public used to the idea of what self-driving cars can do, how they will operate, and how they can be far safer than vehicles operated by humans, even if some questions remain about their functioning.

“Education is critical,” Maddox said. “We have to be able to demonstrate and illustrate how AVs work and how they don’t work.”

As for the trolley problem, most automakers and experts expect some sort of standard to emerge — even if it’s not entirely clear what it will be.

At SAE International — what was known as the Society of Automotive Engineers, a global standard-making group — Chief Product Officer Frank Menchaca said reaching a perfect standard is a daunting, if not impossible, task, with so many fluid factors involved in any accident: Speed. Situation. Weather conditions. Mechanical performance.

Even with that standard, there may be no good answer to the question of who dies in a no-win situation, he said. Especially if it’s to be judged by a human.

“As human beings, we have hundreds of thousands of years of moral, ethical, religious and social behaviors programmed inside of us,” he added. “It’s very hard to replicate that.”