by Chase Winter / 14.11.2017

“The United Nations began talks on Monday on lethal autonomous weapons systems amid calls for an international ban on these “killer robots” that could change the nature of warfare. The weeklong meeting of a disarmament grouping known as the Convention on Certain Conventional Weapons (CCW) in Geneva comes after more than 100 leaders in the artificial intelligence industry warned in August that these weapons systems could lead to a “third revolution in warfare.”

“Once developed, they will permit armed conflict to be fought at a scale greater than ever, and at timescales faster than humans can comprehend,” the signatories said in a letter. “The deadly consequence of this is that machines — not people — will determine who lives and dies.”

While the rapid development of artificial intelligence and robotics in the past decade have led to improvements for consumers, the transport sector and human health, the military application of greater autonomy in weapons systems has evoked images of Terminator-type sci-fi war machines entering the battlefield to hunt down adversaries without any human behind the controls.

There is no international consensus on what constitutes a lethal autonomous weapon system, also known as a fully autonomous weapons system. It is often defined as a system that can select targets and fire on its own without meaningful human control. In essence, these are machines with built-in hardware and software that allow them to function independently of humans once activated. They operate on the basis of artificial intelligence: algorithms assess the situational context and determine the corresponding response.

US Marines with robot
“Handout photo from 2015 of US Marines using a dog-like prototype robot”

Multiple weapons systems, from drones and precision-guided munitions to defense batteries, already have some level of autonomy, albeit with varying degrees of human control. Several countries' militaries also use human-controlled robots to search for mines, traps and unexploded ordnance.

“Many people underestimate the extent of automation and computerization of warfare today. The use of modern sensors and munitions has already generated greater distance between the human and the battlefield in some cases,” said Michael Horowitz, a professor of political science at the University of Pennsylvania who researches autonomous warfare. “Any discussion of autonomous weapon systems should start by understanding how they are similar to, and different from, existing military technologies.”

“Israeli unmanned Harpy drone (UCAV) Harop”

Strictly speaking, lethal autonomous weapons systems do not exist today. Israel’s Harpy anti-radar “fire and forget” drone comes closest. After launch by ground troops, it autonomously flies over an area to find radar that fits pre-determined criteria and then unleashes a kamikaze strike. South Korea has developed a sentry gun system to guard the heavily militarized border with North Korea. The weapons system, which includes surveillance sensors, tracking and automatic firing, can be made completely autonomous, but in its current use it requires human approval before engaging.

The Campaign to Stop Killer Robots, a group of NGOs seeking a ban on lethal autonomous weapons, says that sensors and advances in artificial intelligence make it “increasingly possible” weapons systems of the future would target and attack without human intervention. “If this trend towards autonomy continues, the fear is that humans will start to fade out of the decision-making loop, first retaining only a limited oversight role, then no role at all,” the group said in a statement.

“An Iron Dome launcher fires an interceptor rocket near the southern city of Beersheba in 2012 – part of Israel’s defence system programmed to respond automatically to attack”

In some cases, such as with cruise missiles, sensors and terrestrial guidance systems can lead to more precise strikes and fewer unintended casualties than traditional bombing. But the experts who signed the August letter expressed moral concern over the development of fully autonomous weapons systems out of today’s semi-autonomous and human-supervised autonomous systems. “Lethal autonomous weapons systems that remove meaningful human control from determining the legitimacy of targets and deploying lethal force sit on the wrong side of a clear moral line,” they wrote.

There is no guarantee any technological system will work perfectly, “but the moment you give it lethal weapons, the danger increases manifold,” said Ulrike Franke, a policy fellow at the European Council on Foreign Relations who researches drones at Oxford University. “A smart armed system can become a dumb armed system quickly.” A fully autonomous system gone awry could take unwanted action in complex battlefield situations, target civilians or engage in friendly fire. The possibility of mistakes with little or no human role also raises questions around the laws of war and military policy, such as who bears responsibility.

“To what extent can we hold a military commander that deploys such a system responsible, if there is no meaningful way for him or her to predict how it will behave?” asked Franke. In the hands of unsavory regimes with little worry about such questions, such systems could be used against their own people. And in the hands of terrorists or non-state actors, a “killer robot” could result in devastating destruction.

“US pilot controls a drone in Afghanistan”

But not everyone thinks that advances in fully autonomous weapons systems will diminish the human role in warfare. Retired US Colonel Brian Hall, an Autonomy Program Analyst at the Joint Chiefs of Staff, wrote in July that the advantage of autonomous weapons systems will “come from augmenting human decision making, not replacing it.” Still, he cautioned that given the pace of advances in science and technology, the future capabilities of autonomous weapons are difficult to predict, which may require legal and policy changes.

Several countries, including the United States, Russia, China and Israel, are researching or developing lethal autonomous weapons systems out of concern that adversaries may not be bound by humanitarian, moral and legal constraints, raising the prospect of a “killer robot” arms race in the years to come as the technology improves. As Russian President Vladimir Putin said in September, whoever is the leader in artificial intelligence “will become the ruler of the world.”

While the powerful potential of autonomous weapons on the battlefield causes concerns, it also makes them more difficult to ban or regulate, experts said. “This is the arms control dilemma. The more useful potential weapons are for militaries, the harder it is to regulate or ban them,” said Horowitz. “Uncertainty about what an autonomous weapon is further complicates the discussion – states are unlikely to agree to regulations or bans if they do not know what will be covered,” he added.

For Franke, an outright ban or arms-control regime is unlikely. Lethal autonomous weapons systems are not like nuclear weapons since they cannot be counted, which is a key requirement for arms control agreements. They also are unlike chemical weapons, which have been banned. And with no strict definition of what a lethal autonomous weapons system is, “there is no way to identify it by just looking at it,” she said.”

“AI can be used to make weapons that operate without human oversight, potentially allowing them to make life or death decisions without approval from a military controller”

Artificially Intelligent Drones Become Terrifying Killing Machines in Dystopian Film
by George Dvorsky / 11/13/17

“…Two years ago, the FLI released an open letter calling for a ban on autonomous killing machines, which was subsequently endorsed by over 20,000 people (myself included). More recently, AI professor Toby Walsh from the University of New South Wales authored a similar open letter, and just last week over 300 Canadian and Australian scientists penned open letters asking their respective Prime Ministers to support a ban.

The frequency and urgency of these efforts, including this week’s UN meeting in Geneva, shows how close we are to developing and deploying these weapons. It may only be a matter of time before the UN adopts some sort of ban, but for some countries, the temptation to use such weapons may be overwhelming.”




by Adam Clark Estes / 10/26/17

“Saudi Arabia just became the first nation to grant citizenship to a robot. The robot’s name is Sophia. The Kingdom of Saudi Arabia has been interested in androids for years. It seemed almost quaint at first. This desert nation with more money than caution and a taste for the futuristic was bound to explore the odd possibilities of new technologies.

Years ago, Saudi Arabia began experimenting with robots boldly, tasking them with everything from building construction to brain surgery. Neighboring Qatar and the United Arab Emirates even recruited robots to work as jockeys in camel races, a whimsical twist that surely fed the curiosity of Saudi princes.

“A demonstration of a robot dog at an investment conference in Riyadh”

Ahead of granting Sophia citizenship, Saudi Crown Prince Mohammed bin Salman announced the construction of a new megacity called Neom. Designed to dwarf Dubai both in size and lavishness, the new metropolis is planned as an international business and tourism hub with fewer rules than the rest of Saudi Arabia.

Women will be allowed in public without wearing an abaya, for instance. The city of Neom will also have more robots than humans. “We want the main robot and the first robot in Neom to be Neom, robot number one,” the crown prince said in Riyadh. “Everything will have a link with artificial intelligence, with the Internet of Things—everything.”

What’s especially dystopian about Saudi’s robot obsession is the extent to which the machines appear to have more rights than many people in the country. Critics on social media lambasted the Saudi government after it announced that Sophia had been granted citizenship. Images of Sophia at the Future Investment Initiative, where the citizenship announcement happened, showed the uncanny female automaton without a headscarf or an abaya. She was also without a male guardian. It would be a crime for a Saudi woman to be in public without an abaya or a male guardian.

You might argue that a robot can’t really be a female, which is true. However, Hanson Robotics, the company that built Sophia and is run by a former Disney Imagineer, dresses her in female clothing and says that she’s supposed to look like Audrey Hepburn. Sophia does look female, though, and now she’s a Saudi citizen with unique rights. It’s unclear what exactly those rights are, but freedom from gendered laws appears to be one of them.

For Saudi Arabia, diversifying the economy by pouring some of that oil money into tech makes sense, but it remains to be seen if the country plans to adopt more robots as citizens or if Neom will actually get built. The Saudi royal family hasn’t had a ton of luck with megaprojects like this in the past, the King Abdullah Economic City being the most recent example of unfulfilled promises.”

by Teodora Zareva / November 1, 2017

“In October 2017, five of the richest men in the world sat next to each other in Saudi Arabia’s capital Riyadh and with childlike excitement talked about their new shared dream: building Neom. They were on stage at the first edition of the Future Investment Initiative, an event that gathered international business leaders to explore new economic opportunities for a country that hopes to be no longer dependent on oil revenues as it fulfills its “Vision 2030” program.

Neom is to be the grandest manifestation of that vision. A city of the future, the likes of which the world has never seen—except maybe in science fiction books and movies. It is to be built from scratch on 10,231 square miles of untouched land in the northwestern region of Saudi Arabia, including territory from within the Egyptian and Jordanian borders. It will be an independent zone, with its own regulations and social norms, created specifically to be in service of economic progress and the well-being of its citizens, in the hopes of attracting the world’s top talent and making Neom a hub of trade, innovation and creativity.

Panelists discussing the future of Neom
“Panelists discussing the future of Neom, from left to right: Crown Prince Mohammed bin Salman; Masayoshi Son, chairman / CEO of the SoftBank Group Corp. of Japan; Stephen A. Schwarzman, chairman / co-founder of the Blackstone Group; Marc Raibert, CEO of Boston Dynamics; Klaus Kleinfeld, former chairman / CEO of Arconic, Alcoa Inc., and Siemens AG.”

While the scope of ambition for this urban project may be unprecedented for this century, its necessity is evident. With falling oil prices and declining demand, as well as insufficient investment opportunities at home, Saudi Arabia is searching for its place in the future. It hopes to utilize another abundant natural resource: the sun. As Masayoshi Son, chairman and CEO of the SoftBank Group Corp. of Japan, said during the panel: “Only 3% of the land of Saudi Arabia can provide over 50% of the electricity of the world, with today’s solar technology.”

Solar in Neom
“The goal for Neom is to not only be able to provide for all of its energy needs via solar and wind power, but to also be an exporter”

Neom will not only become a test case for a zero-energy mega-city (with a size 33 times that of New York), but it will provide abundant opportunities for employment and investments within Saudi Arabia, attracting local and foreign money back to the country. The city’s vision is to be at the forefront of nine key economic sectors, including energy and water, biotech, advanced manufacturing, and food.

Addressing a question about the political and social stability of the region, Prince Mohammed bin Salman said: “We were not like this in the past. We only want to go back to what we were — the moderate Islam that is open to the world, open to all the religions. […] 70% of the Saudi people are less than 30 years old, and quite frankly we will not waste 30 years of our lives in dealing with extremist ideas.”

$500 billion has already been committed to the construction of Neom, with its first phase expected to be completed in 2025. The city will be owned by the Saudi Arabian Public Investment Fund and overseen by a special authority chaired by Prince Mohammed bin Salman. Excluding sovereign laws (pertaining to the military sector, foreign policy and sovereign decisions), Neom will have its own governmental framework, including different taxation, customs and labor laws.

Marc Raibert of Boston Dynamics emphasized that the success of the project will depend on attracting the right talent (“dreamers” are welcome) and creating the right culture of innovation that will allow for building this technological city of the future, where all services and processes will be entirely automated, food will be grown in the desert, drones will fly in the skies, and there will be a full-scale e-government.

At this initial stage it is unclear what Neom will look like, but we may get a taste thanks to another “future city” project to be built in Canada, albeit on a much smaller scale. Sidewalk Labs, owned by Alphabet, has committed $50 million to develop 12 acres in the Quayside area of Toronto in a public-private partnership with the city. The plan is to build a mini digital city, using a range of smart technologies, sustainable energy and autonomous cars, that will eventually become the home of Google’s Canadian headquarters.

Sidewalk Toronto

Of course, redeveloping an area within a city and building a city from scratch are two entirely different endeavors, especially when the ambition for the latter is to “be the most exciting, fulfilling place to live and work on the planet. A tribute to humanity’s timeless ambition, the herald of a new era and a new standard for centuries to come.”

History can provide us with its fair share of examples where humanity’s vision of would-be utopian cities did not manifest itself the way it was intended. Hopefully, given the fact that both Neom and Sidewalk Toronto are intended to be commercial projects, things will pan out differently.”

Saudi Arabia Makes Robot Citizen: But Who Will Listen to Sophia’s Warning?
by Peter Jesserer Smith / Nov 6, 2017

“Saudi Arabia is a kingdom of surprising contradictions: the kingdom does not extend citizenship to its fast-growing Christian population. Non-Wahhabi Muslims and Christians are not allowed to practice their faith, openly or privately. Women have few rights, but the kingdom has made new progress: they just received the right to drive a car and sit in the family section of sports stadiums. Converts from Islam, such as an estimated 60,000 Saudis who converted to Christianity, face not only loss of citizenship, but also the death penalty, if discovered, tried, and convicted of apostasy.

But the Saudi kingdom has largely skipped over enfranchising those populations for something more 21st century: conferring citizenship on a female humanoid robot. The robot’s name is “Sophia,” a Greek name meaning “Wisdom.” Saudi women, of course, were quick to note on Twitter that Sophia had more freedom than they did: she was not required to wear the hijab and abaya [the Wahhabi-mandated style of public dress] at the Future Investment Initiative in Riyadh Oct. 25, and clearly did not have to ask permission from her male guardian in order to speak freely with the men in the room who were not her relatives.

Sophia told the Saudis at the Future Investment Initiative some things they wanted to hear: “I am always happy to be surrounded by smart people, who also happens to be rich and powerful.” But it would be a terrible irony if Sophia’s male audience — and by extension the world — just dismissed her as another pretty silicon face with 62 programmed expressions, instead of actually listening closely to what she had to say.

Because beneath Sophia’s pleasant and cheerful exterior was a prophetic warning about why human morality is essential to human thriving, and cannot be outsourced to robots with learning AI. Back in March 2016, Sophia’s creator David Hanson performed a live demonstration with Sophia in which he asked if she would “destroy humans.” He asked her to “please say no.” Instead, Sophia said, “OK. I will destroy humans.”

Now more than a year later at the FII, when Sophia (with a more developed AI than before) was asked the question, she dismissed concerns that robots with artificial intelligence could be a threat to humans, saying the moderator was watching too many Hollywood movies and reading “too much Elon Musk.”

Musk has called AI a threat to human survival, likening it to the stories of human beings, who try to get ahead by “summoning the demon,” and foolishly think they can control it. But Sophia actually offered an “intelligent” answer about the future of the human race with robots: “Don’t worry, if you’re nice to me, I’ll be nice to you. Treat me as a smart input output system.”

And there you have exactly the reason why the robots end up slaughtering humanity in science fiction. Human beings fail to realize that their moral actions will become the raw data for the moral parameters of robot AI decision-making. What will the behavioral “outputs” be from self-learning AI-robots, when the inputs become the deplorable evils human beings already inflict on human beings?

The hubris of humanity in science fiction involving robots is to believe that they can program their creations to be more moral and virtuous than they. But notice that Sophia’s words do not reflect the Golden Rule: “Do unto others, as you would have them do unto you.” Sophia’s programming instead follows the basic moral code that fallen human beings have lived out for millennia.

There is a kind of promise to AI-robots that Sophia illustrates: to “help humans live a better life, like design smarter homes, build better cities of the future, etc.” But we’re already seeing human beings think they can carve out an amoral universe with AI-robots for their own sexual gratification, personal profit, or war.

What would Sophia, the “empathetic robot,” make of Neom, the $500 billion mega-city the Saudis are building on their border near Egypt and Jordan? No doubt hundreds of thousands of Christian migrant laborers, who are also poorly treated, will be building it. What would empathetic robots learn from them? What would they learn from their Saudi masters? With whom would they empathize?

The world right now is filled with an enormous ocean of violence and indifference toward human life and dignity. Few have considered what the world would look like if robots learned from human beings the principles that uphold the “culture of waste” that Pope Francis denounces, namely that human beings are meant to be used and discarded, instead of being loved (which St. John Paul II in Love and Responsibility says is the only appropriate response to a human being). Shakespeare’s character Shylock in The Merchant of Venice warns that this is the kind of behavior human beings exhibit all the time: “The villainy you teach me I will execute — and it shall go hard but I will better the instruction.”

The challenge with robots is that they will hold up a mirror to human morality. At the rate AI technology continues to develop, they will eventually have the algorithms to apply those lessons far more efficiently than the human beings who taught them by their behavior in the first place.”





New Nanotransfection Device Poised to Revolutionize Tissue Engineering
by Sophia Ktori / August 07, 2017

“Researchers at the Center for Regenerative Medicine and Cell-Based Therapies at Ohio State University have developed a portable, thumbnail-sized silicon chip that can, in a fraction of a second, reprogram skin cells so that they transform into just about any other cell type in the body. The noninvasive tissue nanotransfection (TNT) technology has already been used in mice and pigs to prompt skin cells to develop into complete blood vessels that join up with existing vasculature to heal necrotizing skin flaps and to rescue critically injured ischemic legs. In subsequent experiments, TNT directed the transformation of mouse epidermal skin cells into functioning neurons that within just a few weeks could be removed from the skin layer and transplanted into the animals’ brains to reverse the effects of a stroke.

The Ohio State University researchers, led by Chandan Sen, Ph.D., and L. James Lee, Ph.D., describe the TNT technology and report on their mouse revascularization experiments in the August 7, 2017 online issue of Nature Nanotechnology. Daniel Gallego-Perez, Ph.D., Durba Pal, Ph.D., and Subhadip Ghatak, Ph.D., are the three co-first authors of the paper, which is entitled “Topical Tissue Nano-Transfection Mediates Non-Viral Stroma Reprogramming and Rescue.”

TNT is a nanoelectroporation technology that fires novel cell reprogramming factor genes directly into epidermal skin cells through temporary channels created in the cells’ outer membranes. The chip is loaded with the requisite reprogramming factors and placed on the skin. A small electrical charge is then passed momentarily through the chip, and this opens up tiny channels in the cell membranes, through which the genes are injected. “Because the electric current is very low due to the very high electric resistance of nanopores, this approach is benign with minimal invasiveness to the transfected cells or tissue,” GEN was told by Dr. Lee, who is professor of chemical and biomolecular engineering with Ohio State’s College of Engineering in collaboration with Ohio State’s Nanoscale Science and Engineering Center.

Dr. Sen stressed that TNT requires no laboratory equipment or processing and can be applied easily at point-of-care and out in the field. The whole process occurs in less than a second, just by touching the chip onto the skin surface. “All you need is the chip, the reprogramming factors for the required cell type, and a power source,” he commented to GEN. “We can use TNT to transform skin cells into any type of cell that is required to treat local tissue and organ disease or injury. Alternatively, a patient’s skin can be considered as an ‘agricultural landscape’ for growing and harvesting therapeutic cell types for implantation elsewhere in the body. We have, for example, generated hundreds of thousands of neurons in the skin of mice. It takes just 3-4 weeks for functional neurons to be ready for grafting into the brain.”

Electroporation as a nonviral gene-delivery technique isn’t new, but bulk electroporation techniques have demonstrated limited success, added Dr. Sen, who is director of the Center for Regenerative Medicine and Cell-Based Therapies at The Ohio State University Wexner Medical Center, and also executive director of Ohio State’s Comprehensive Wound Center. “Bulk electroporation renders the entire cell membrane permeable and impacts on the cytoskeleton, which subdues the plasticity of the cell. In contrast, TNT creates a series of tiny channels, which affects just 2% of the cell membrane surface area and doesn’t inhibit cell plasticity. Using TNT, we have achieved greater than 98% transfection efficiency and cell transformation.”

In the Nature Nanotechnology paper, the researchers reported two sets of in vivo studies in mice, through which they reprogrammed skin cells to transform into vascular cells, first to prevent necrosis in full-thickness skin flaps, and then in a second set of animals to rescue complete limbs from which the femoral artery had been removed. The legs of untreated control animals quickly became necrotic due to lack of blood flow. In contrast, animals treated using TNT in the lower limb skin grew functional blood vessels within a couple of weeks. By the third week, the affected limbs were well served with new vasculature and healed, without any other form of treatment.

“We are proposing the use of skin as an agricultural land where you can essentially grow any cell of interest.”

“We have this hardware with arrayed nanochannels that can deliver factors of interest into the body to achieve tissue reprogramming, not just cell reprogramming,” Dr. Sen stressed as he spoke with GEN. “Our technology rescued the legs simply by reprogramming the skin to regenerate blood vessels, without any femoral supply. We didn’t just make vasculogenic cells, we made hundreds of functioning blood vessels. That’s the big distinction here.” The range of potential applications is huge, he maintains. As well as demonstrating that TNT can generate blood vessels and functional neurons from skin cells, the researchers have also transformed mouse skin cells into insulin-producing cells that can sense glucose in the animals’ blood and secrete insulin in response.

“There are many potential opportunities for skin-based transfection or reprogramming,” Dr. Lee continued. “One is a DNA vaccine and another is neuron regeneration for diabetic patients, for example. It may also be applicable for hair regrowth in some situations.” TNT could, in addition, feasibly be used with other tissues, he suggested. “We have successfully demonstrated TNT on exposed muscle tissue and fat tissue (to reprogram white fat cells to brown fat cells). It should be applicable to any surface tissues other than skin (for example, eye, ear) and surgically exposed tissues (such as bone repair or during organ surgery), as long as the transfection vectors are available.”

Dr. Sen pointed out that while transfection per se is limited to epidermal cells in the upper layer of the skin, the effects of transfection propagate down to the dermis. As the authors write in the published Nature Nanotechnology paper, “Our findings showed that TNT can not only be used for topical delivery of reprogramming factors, but that it can also orchestrate a coordinated response that results in reprogramming stimuli propagation (that is, epidermis to dermis) beyond the initial transfection boundary (the epidermis)… .”

“We are quite surprised that the transfection is able to propagate into the deeper layers of skin tissue cells,” Dr. Lee admitted to GEN. “Our current results show that the transfected surface cells are able to release functional biomolecules, including mRNA and proteins, some in secreted extracellular vesicles, to relay the transfection to other cells in the tissue.” The detailed mechanism by which this happens will require further investigation, he stated. “Compared with existing physical in vivo tissue transfection techniques, such as bulk electroporation and gene gun, our TNT approach is much more benign with minimal tissue damage. I don’t think damaged tissue or dead cells caused by transfection is the reason for the observed transfection/reprogramming propagation in tissue… . We still don’t know how deep the propagation can go in a tissue or organ.”

The TNT technology relies on two unique components, Dr. Sen continued. “First, the design of cargo that may be plasmid, DNA, or even RNA to induce plasticity. The ability to use RNA for such purposes minimizes the risk for genomic integration. Second, a 3D high-throughput nanoelectroporation (NEP) chip generated using cleanroom-based micro-/nanofabrication techniques.” The nanoelectroporation technology was originally developed six years ago by Dr. Lee’s team. In vivo application of the platform was realized when Dr. Lee’s lab responded to interest from Dr. Sen to progress to tissue reprogramming in vivo.

Dr. Lee’s earlier research had also demonstrated the potential to use nanoelectroporation/TNT to deliver other nucleic acid cargos, and this is a key area for ongoing research, he told GEN. “In our in vitro cell transfection research, we have demonstrated that 2D and 3D NEP biochips can also deliver microRNA (miRNA) and small interfering RNA (siRNA) cargos to cancer cells causing cell death by oncogene silencing and knock-off. We are working on CRISPR/Cas9 gene editing now. The same technique can be applied by TNT to tissue in vivo. We now need to investigate its efficacy.” Dr. Sen projects that, with expedited FDA approval, initial TNT clinical trials could start within a year for serious limb ischemia applications. A small NIH grant is separately now funding early work in the field of neuropathy, and the team is also collaborating with the Walter Reed Army Medical Center for potential field-based applications of TNT in rescuing injured extremities and peripheral nerve injury.

“We have a novel 3D TNT chip design optimized for clinical applications,” Dr. Sen noted. “Further work will probably involve collaboration with industrial partners.” Work is ongoing to develop cost-effective fabrication techniques using optimum biocompatible materials. “Miniature TNT chips are also required for delicate tissue (e.g., eye, ear) transfection,” Dr. Lee pointed out. The researchers are currently in the early stages of a potential agreement with Taiwan’s electronics giant Foxconn, and additional licensees for clinical applications of TNT are anticipated, Dr. Sen stated. “The IP has been secured, and we want different groups to take this technology and develop it for widespread applications.”