Connect with us

Published

on

Isaac Asimoc, a writer well-known for his works of science fiction, penned the “Three Laws of Robotics” in 1972.

Asimov wrote these “laws” while thinking about androids, and he imagined a world where human-like robots would have human masters and need a set of programming rules to prevent them from causing harm.

But51 years after the laws were first published, technology has advanced significantly and humans now have a different understanding of what robots and artificial intelligence (AI) can look like and how people interact with them.(h/t to Survivopedia.com) The three laws of robotics

While a robot takeover is still more fiction than fact, as a prepper it’s worth reviewing Asimov’s laws to prepare for when SHTF. First law “A robot may not injure a human being or through inaction allow a human being to come to harm.” Second law “A robot must obey orders given by human beings, except where such orders would conflict with the first law.” Third law “A robot must protect its own existence as long as such protection does not conflict with the first and the second law.”

While the laws are fiction, Asimov’s thought process is something preppers should mimic.

Asimov wasn’t a prepper, but he realized that AI-powered computers, or androids and robots, as he put it, could be dangerous despite their many benefits because they couldthink for themselves. He also realized the difficulty in programming them to ensure that they would not betray their human masters.

The dichotomy here lies in allowing computers to become sentient, or feeling and thinking for themselves, while still keeping some level of control over them as their masters. This two-pronged goalmaybe impossible, especially since humans are still in the infant stages of AI and there have already been problems in creating the necessary fail-safes to ensure the safety of users.

Astechnology continues to advance, AI computer systems are now teaching themselves much faster than any thought being put into creating the necessary controls to keep them safe.

In one of the earliest AI experiments where two computers with AI systems installed communicated with each other, it only took minutes for the two programs to develop their own language and communicate. This meant their human operatorswere unable to understand the two AI systems.

Chatbots are computer programs that mimic human conversations through text.

But back in 2017, when the experiment was conducted, chatbots weren’t yet capable of more sophisticated functions beyond simple tasks like answering customer questions or ordering food. To address this, Facebook’s Artificial Intelligence Research Group (FAIR) tried to find out if these programs could be taught to negotiate.

The researchers developed two chatbots named Alice and Bob.Using a game where the two chatbots and human players bartered virtual items like balls and hats, Alice and Bob showed that they could make deals with varying degrees of success.

Facebook researchers observed the language when the chatbots were negotiating among themselves. They noticed that becausethey didn’t instruct the bots to stick to the rules of English, Alice and Bob started using their own language: a “derived shorthand” they invented to communicate faster.

While the researchers stopped the experiment because of the potential danger, further research into AI continued through the years. There is no policingof potential tasks for advanced AI systems

The AI systems available to modern consumers surpass those used in the Facebook experiment.There is a wider array of AI systems available to use, some of which can be hired through websites, to accomplish different tasks.

However, there is no way to monitorwhat those tasks might be to guarantee that they are not abused by those who want to use these tools for crimes or to harm others. (Related: Digital prepping: How to protect yourself against cyberattacks.)

The first question for preppers is, can these systems turn against their human masters?

According to an Air Force colonel, that has already happened during an experimental drone test. The colonel eventually tried to deny what he said, but there have been reports about the incident.

During the test, a drone was assigned to find and eliminate targets, but it needed the permission of a human controller before firing.

After some time, the drone realized that the controller was responsible for the “points” it lost when it denied the permission it needed to take out certain targets. To solve the problem, the drone “killed” the controller.

No real person was harmed during the test, but it’s easy to see how the scenario could have turned ugly if the drone was assigned to protect an area with real people.

The drone test also illustrates the potential challenges of programming AI. The tests show that it can be impossible to program a sentient AI to prevent it from doing what it wants to do because it’s clever enough to find a way to disobey direct orders.

Rogue drones controlled by AI may harm humans, but how can you prevent this from happening?

Many ethical questions are being raised about AI, but experts still haven’t been able to present real-world answers. Unfortunately, they might not start working on this problem unless atragedy occurs.

By then, it might be too late to discuss the ethics associated with AI.And the U.S. isn’t the only country working onAI technology.

Other countries, along with some that aren’t on friendly terms with the U.S., are also developing their AI systems, both for military and civilian applications.

AI is already being used for one dangerous application: The creation of deep fake videos.

Stealing an actors “copyright” to their likeness isn’t harmful, but it is still consideredcriminal activity. When that same level of artificial intelligence is applied to identity theft, even preppers and non-preppers alike won’t be safe. How can you prepare yourself before the rise of AI?

Even nowAI exists on the internet and is already being used to create various content.This means you can’t always trust that the content you see or read was created by humans.

As of writing, at least19.2 percent of articles on the internet have some AI-generated content. At least 7.7percent of these articles have 75 percent or more of their content generated by AI.

Experts warn that by 2026, at least 90percent of internet content will be AI-generated.

How is this relevant to you as a prepper?

AI-generated content can be problematic because this meansmore content will be politicized.

Data suggests that Russia and other countries are alreadytrolling U.S. websites, potentially making posts and uploading articles thatare inflammatory to add to the political division in the country.These countries can continue to use AI to increase their effectiveness by targeting their articles more specifically.

With the potential dangers of AI steadily increasing as time goes by, you must be more careful about what you see and read online. Do not believe everything your see or hear,especially content with political overtones.

Learn how to be anindependent fact-checker and do your research to find out if what you are reading and hearing is true.

Be wary ofmainstream media that may be spinning news stories to support their own political agenda. Check reliable news sources for updates on what the Russian, Chinese and other countries’ intelligence services are doing.

This also means being careful about what you post online. Never post personal information online or anything that hackers could use to try and figure out anything about you.

Do not use systems like Alexa and Google Assistant, which often allow computers to eavesdrop on user conversations. Even though the companies that make these products claim they arent spying on users, various reports about them prove otherwise.

Don’t “computerize” your life bystorin your data online. This service may seem convenient because you can access your data anywhere, but there’s alsoa chance that others could access all your data in the cloud.

Are you willing to risk a data breach just for convenience? Most of the time, companies offering these services havethings buried in the fine print of their contracts, which allows them to listen in on your computer microphones and look at images from your phone or laptop cameras.

To trulyprotect yourself from the potential dangers of AI, you must reevaluate your usage of the internet and computers.Technology is convenient, but you must be responsible and make sure your information can’t be used against you by those who might do you harm.

Don’t store yourdata online and unplug things like microphones and cameras when not in use.

Sacrifice convenience to protect yourself and your family from the potential dangers of AI technology.

Visit Computing.newsto learn more about the growing dangers of AI systems.

Watch the video below to find out how AI technology threatens to take over thousands of jobs.

This video is from theNewsClips channel on Brighteon.com. More related stories:

Google is using AI to dig through Gmail accounts to find exactly what youre looking for and perhaps MORE.

Peeping through the windows: Microsoft to incorporate MANDATORY AI systems in Windows 11 to SPY on all your computing activities.

Dallas school district installs AI spying, surveillance systems to keep an eye on students.

Sources include:

Survivopedia.com

USAToday.com

TheConversation.com

TheGuardian.com

Brighteon.com
Submit a correction >>

Continue Reading

Science

Rocket Lab’s Neutron Rocket to Land at Sea, First Launch Set for 2025

Published

on

By

Rocket Lab’s Neutron Rocket to Land at Sea, First Launch Set for 2025

Rocket Lab has confirmed that its reusable Neutron rocket is set for its first launch in the latter half of 2025. The announcement was made during the company’s earnings call on 26 February, where Peter Beck, Founder and CEO, outlined plans to address increasing demand for medium-lift launch services. He stated that rapid development efforts are underway to bring the rocket online as quickly as possible. The Neutron rocket has been designed to serve defence, security, and scientific missions, filling a gap in the market where launch options remain limited. A new offshore barge, named “Return on Investment,” is set to be used for rocket recovery, expanding mission possibilities.

Sea-Based Landing Platform Revealed

According to Rocket Lab, a modified offshore barge will be utilised as a landing platform for the Neutron rocket’s recovery. Peter Beck highlighted that this addition will enhance operational flexibility by allowing for greater mission efficiency. The company aims to improve accessibility to space while ensuring the maximum performance of Neutron’s capabilities.

Flatellite: Rocket Lab’s New Satellite Platform

Rocket Lab has also introduced “Flatellite,” a flat satellite system engineered for large-scale deployment. Sources have reported that these satellites will be manufactured in high volumes to support large constellations. The design enables efficient stacking, allowing for multiple satellites to be launched together, optimising payload capacity. Peter Beck stated that this initiative aligns with Rocket Lab’s vision of establishing an end-to-end space service, extending its role beyond launch services to satellite operations.

Electron Launches Continue

Rocket Lab’s Electron rocket remains active, with an upcoming launch scheduled for this month. Reports indicate that an agreement has been signed with the Japanese company Institute for Q-shu Pioneers of Space (iQPS) for multiple missions over the next two years. According to Shunsuke Onishi, CEO of iQPS, the reliability and frequency of Electron missions align with their objectives for building a satellite constellation.

Continue Reading

Science

Boeing Starliner Astronauts Set To Return on March 16 After 10-Month ISS Stay

Published

on

By

Boeing Starliner Astronauts Set To Return on March 16 After 10-Month ISS Stay

A mission initially planned for ten days has stretched into nearly ten months, with two NASA astronauts finally set to return to Earth. Astronauts Barry Wilmore and Sunita Williams, who launched aboard Boeing’s Starliner on June 5, 2024, were meant to conduct a short-duration test flight to the International Space Station (ISS). However, issues with the spacecraft resulted in their prolonged stay. Their return is now scheduled for March 16, 2025, following the arrival of their relief crew.

Details of The Return

According to NASA’s flight schedule, Starliner was originally expected to bring the astronauts back, but after assessing its performance, the decision was made to return it uncrewed in September 2024. As reported, NASA instead adjusted its crew rotation plan, allocating seats for Wilmore and Williams on the SpaceX Crew Dragon, which launched as part of Crew-9. The return mission was initially scheduled for February but was delayed further due to operational constraints. The ISS program has now confirmed that their journey back will take place this month.

Crew-10 Mission Prepares for Launch

Four astronauts are set to launch aboard SpaceX’s Crew-10 mission on March 12, 2025, from Kennedy Space Center in Florida. The mission, commanded by NASA astronaut Anne McClain, includes pilot Nichole Ayers, Japan Aerospace Exploration Agency (JAXA) astronaut Takuya Onishi, and Roscosmos cosmonaut Kirill Peskov. Their arrival at the ISS will facilitate the Crew-9 team’s return, including Wilmore and Williams.

Adjustments in Spacecraft Selection

NASA officials have confirmed that Crew-10 will travel aboard the previously flown Dragon capsule, Endurance. The switch from a newly manufactured spacecraft was prompted by battery-related delays, leading to the decision to use a flight-proven alternative. Steve Stich, NASA’s Commercial Crew Program manager, stated during a briefing that changes in vehicle assignments are a routine part of mission planning.

Continue Reading

Science

ISS Captures Rare Gigantic Jet, a Massive Upward Lightning Over New Orleans

Published

on

By

ISS Captures Rare Gigantic Jet, a Massive Upward Lightning Over New Orleans

A rare “gigantic jet” of lightning was captured in a newly released image taken from the International Space Station (ISS). The photograph, dated November 19, 2024, shows a powerful discharge of blue light extending from a thunderstorm, likely reaching around 50 miles (80 kilometers) above Earth’s surface. The image, originally not publicised by NASA or any other space agency, surfaced after photographer Frankie Lucena identified it on the Gateway to Astronaut Photography of Earth website. The striking phenomenon was later shared by Spaceweather.com on February 26, bringing renewed attention to these elusive atmospheric events.

Gigantic Jet Confirmed by Analysis

According to reports, the ISS had captured four photographs of lightning around the time of the event, with only one displaying a clear upward-shooting jet. The exact location of the phenomenon remains uncertain due to cloud cover, but ISS tracking data suggests it likely occurred just off the coast of New Orleans. Gigantic jets are rarely observed, with only a limited number of documented cases since their discovery in 2001.

How Gigantic Jets Form

These towering lightning bolts occur when electrical charge distributions within a thunderstorm are disrupted, causing energy to be released upwards rather than toward the ground. The distinctive blue hue results from interactions with nitrogen in the upper atmosphere. Most gigantic jets extend into the ionosphere, the electrically charged layer of Earth’s atmosphere starting around 50 miles above the surface.

Energetic Nature of Upward Lightning

Previous studies have shown that gigantic jets can carry significantly more energy than standard lightning bolts. A record-breaking event over Oklahoma in May 2018 was found to have 60 times the energy of an average strike. In addition to the main jet, faint branching red discharges, similar to sprites, can be seen in the recent ISS image, highlighting the complexity of these high-altitude electrical events.

Continue Reading

Trending