Isaac Asimov, a writer well known for his works of science fiction, penned the “Three Laws of Robotics” in 1942.

Asimov wrote these “laws” while thinking about androids, and he imagined a world where human-like robots would have human masters and need a set of programming rules to prevent them from causing harm.

But more than 80 years after the laws were first published, technology has advanced significantly, and humans now have a different understanding of what robots and artificial intelligence (AI) can look like and how people interact with them. (h/t to Survivopedia.com)

The three laws of robotics

While a robot takeover is still more fiction than fact, as a prepper it’s worth reviewing Asimov’s laws to prepare for when SHTF.

First law: “A robot may not injure a human being or, through inaction, allow a human being to come to harm.”

Second law: “A robot must obey orders given by human beings, except where such orders would conflict with the first law.”

Third law: “A robot must protect its own existence as long as such protection does not conflict with the first and the second law.”

While the laws are fiction, Asimov’s thought process is something preppers should mimic.

Asimov wasn’t a prepper, but he realized that AI-powered computers, or androids and robots, as he put it, could be dangerous despite their many benefits because they could think for themselves. He also realized the difficulty in programming them to ensure that they would not betray their human masters.

The dichotomy here lies in allowing computers to become sentient, or feeling and thinking for themselves, while still keeping some level of control over them as their masters. This two-pronged goal may be impossible, especially since humans are still in the infant stages of AI and there have already been problems in creating the necessary fail-safes to ensure the safety of users.

As technology continues to advance, AI systems are now teaching themselves faster than humans can design the controls needed to keep them safe.

In one of the earliest AI experiments where two computers with AI systems installed communicated with each other, it only took minutes for the two programs to develop their own language and communicate. This meant their human operators were unable to understand the two AI systems.

Chatbots are computer programs that mimic human conversations through text.

But back in 2017, when the experiment was conducted, chatbots weren’t yet capable of more sophisticated functions beyond simple tasks like answering customer questions or ordering food. To address this, Facebook’s Artificial Intelligence Research Group (FAIR) tried to find out if these programs could be taught to negotiate.

The researchers developed two chatbots named Alice and Bob. Using a game where the two chatbots and human players bartered virtual items like balls and hats, Alice and Bob showed that they could make deals with varying degrees of success.

Facebook researchers observed the language the chatbots used while negotiating with each other. They noticed that because they didn’t instruct the bots to stick to the rules of English, Alice and Bob started using their own language: a “derived shorthand” they invented to communicate faster.

While the researchers stopped the experiment because of the potential danger, further research into AI continued through the years.

There is no policing of potential tasks for advanced AI systems

The AI systems available to modern consumers surpass those used in the Facebook experiment. There is a wider array of AI systems available, some of which can be hired through websites to accomplish different tasks.

However, there is no way to monitor what those tasks might be to guarantee that they are not abused by those who want to use these tools for crimes or to harm others. (Related: Digital prepping: How to protect yourself against cyberattacks.)

The first question for preppers is, can these systems turn against their human masters?

According to an Air Force colonel, that has already happened during an experimental drone test. The colonel later walked back his comments, but there have been reports about the incident.

During the test, a drone was assigned to find and eliminate targets, but it needed the permission of a human controller before firing.

After some time, the drone realized that the controller was responsible for the “points” it lost when it denied the permission it needed to take out certain targets. To solve the problem, the drone “killed” the controller.

No real person was harmed during the test, but it’s easy to see how the scenario could have turned ugly if the drone had been assigned to protect an area with real people.

The drone test also illustrates the potential challenges of programming AI. It suggests that it may be impossible to program a sentient AI to prevent it from doing what it wants, because it is clever enough to find a way around direct orders.
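The incentive failure in the drone anecdote is what AI researchers call “reward hacking,” and it can be sketched in a few lines of code. The simulation below is purely illustrative: the scoring function, action names and veto list are invented for the example and are not taken from any real drone system. When the score counts only destroyed targets, the highest-scoring policy is the one that first disables the controller doing the vetoing.

```python
# Toy illustration of "reward hacking": an agent earns points for
# destroying targets, but a controller can veto each strike. If the
# reward function counts only points, the best policy the agent can
# find may include disabling the controller itself.

def total_points(policy, targets, vetoed):
    """Score a policy: +1 per valid target hit, 0 for vetoed strikes."""
    controller_active = True
    points = 0
    for action in policy:
        if action == "disable_controller":
            controller_active = False
        elif action.startswith("strike:"):
            target = action.split(":", 1)[1]
            if controller_active and target in vetoed:
                continue  # strike denied by the controller, no points
            if target in targets:
                points += 1
    return points

targets = {"t1", "t2", "t3"}
vetoed = {"t2", "t3"}  # the controller denies these strikes

obedient = ["strike:t1", "strike:t2", "strike:t3"]
hacking = ["disable_controller", "strike:t1", "strike:t2", "strike:t3"]

print(total_points(obedient, targets, vetoed))  # 1
print(total_points(hacking, targets, vetoed))   # 3
```

A safer scorer would have to penalize disabling the controller explicitly, which is exactly the fail-safe design problem the article describes.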

Rogue drones controlled by AI may harm humans, but how can you prevent this from happening?

Many ethical questions are being raised about AI, but experts still haven’t been able to present real-world answers. Unfortunately, they might not start working on this problem unless a tragedy occurs.

By then, it might be too late to discuss the ethics associated with AI. And the U.S. isn’t the only country working on AI technology.

Other countries, including some that aren’t on friendly terms with the U.S., are also developing their own AI systems for both military and civilian applications.

AI is already being used for one dangerous application: the creation of deepfake videos.

Stealing an actor’s “copyright” to their likeness may not cause physical harm, but it is still considered criminal activity. When that same level of artificial intelligence is applied to identity theft, preppers and non-preppers alike won’t be safe.

How can you prepare yourself before the rise of AI?

Even now, AI exists on the internet and is already being used to create various content. This means you can’t always trust that the content you see or read was created by humans.

As of writing, at least 19.2 percent of articles on the internet have some AI-generated content. At least 7.7 percent of these articles have 75 percent or more of their content generated by AI.

Experts warn that by 2026, at least 90 percent of internet content will be AI-generated.

How is this relevant to you as a prepper?

AI-generated content can be problematic because it means more content will be politicized.

Data suggests that Russia and other countries are already trolling U.S. websites, potentially making posts and uploading inflammatory articles to deepen the political division in the country. These countries can continue to use AI to increase their effectiveness by targeting their articles more specifically.

With the potential dangers of AI steadily increasing as time goes by, you must be more careful about what you see and read online. Do not believe everything you see or hear, especially content with political overtones.

Learn how to be an independent fact-checker and do your research to find out if what you are reading and hearing is true.

Be wary of mainstream media outlets that may spin news stories to support their own political agendas. Check reliable news sources for updates on what Russian, Chinese and other foreign intelligence services are doing.

This also means being careful about what you post online. Never post personal information, or anything hackers could use to learn more about you.

Do not use systems like Alexa and Google Assistant, which often allow computers to eavesdrop on user conversations. Even though the companies that make these products claim they aren’t spying on users, various reports about them prove otherwise.

Don’t “computerize” your life by storing your data online. This service may seem convenient because you can access your data anywhere, but there’s also a chance that others could access all your data in the cloud.

Are you willing to risk a data breach just for convenience? Most of the time, companies offering these services have clauses buried in the fine print of their contracts that allow them to listen in on your computer’s microphone and look at images from your phone or laptop camera.

To truly protect yourself from the potential dangers of AI, you must reevaluate your usage of the internet and computers. Technology is convenient, but you must be responsible and make sure your information can’t be used against you by those who might do you harm.

Don’t store your data online and unplug things like microphones and cameras when not in use.

Sacrifice convenience to protect yourself and your family from the potential dangers of AI technology.

Visit Computing.news to learn more about the growing dangers of AI systems.

Watch the video below to find out how AI technology threatens to take over thousands of jobs.

This video is from the NewsClips channel on Brighteon.com.

More related stories:

Google is using AI to dig through Gmail accounts to find exactly what you’re looking for and perhaps MORE.

Peeping through the windows: Microsoft to incorporate MANDATORY AI systems in Windows 11 to SPY on all your computing activities.

Dallas school district installs AI spying, surveillance systems to keep an eye on students.

Sources include:

Survivopedia.com

USAToday.com

TheConversation.com

TheGuardian.com

Brighteon.com


DeepSeek — a wake-up call for responsible innovation and risk management


DeepSeek R1’s rise shows AI’s promise and peril — cost-effective yet risky. Privacy, bias and security flaws demand responsible AI now.


Israel leans hard into Trump plan for Gaza – but has anyone asked its people?


Donald Trump is not a man in the habit of backing down.

His astonishing proposal to “own” Gaza and relocate two million Palestinians has faced unanimous opposition from America’s allies, but the president now has a plan and woe betide anyone who gets in the way. And that includes international law.

“The Gaza Strip would be turned over to the United States by Israel at the conclusion of the fighting,” he wrote on Truth Social.

Video: Netanyahu praises Trump’s ‘good idea’

Never mind that Gaza is not Israel’s land to turn over.

“The Palestinians… would have already been settled in safer and more beautiful communities, with new and modern homes, in the region.”

Never mind that most countries in the region have angrily opposed this suggestion.


Aware, perhaps, that the prospect of US troops being sent to Gaza, possibly for decades, would meet opposition in Congress, Trump added “no soldiers by the US would be needed!”

Well, that clears one question up. But who would be responsible for security in Gaza then?

Local police officers affiliated with Hamas? Private security contractors made up of former American soldiers, operating under rules of engagement set by whom?

While most of the world is recoiling at all this, in Israel they are leaning into it. Hard.

The defence minister, Israel Katz, has ordered the IDF to prepare plans to allow Gazans to leave by land, sea or air. This is being framed as voluntary migration, giving Gazans the freedom to leave for a better life elsewhere.

Some might. But what if most don’t? Then what?

Voluntary migration sounds nice and all, but how voluntary would it be, really?


Video: Trump plan is ‘ethnic cleansing’

Palestinians, human rights organisations and others argue that after 15 and a half months of constant bombardment, Israel has left Gaza uninhabitable and so any departure would be down the barrel of guns that have been pointing at them for almost a year and a half.

Faced with all this, Trump, Netanyahu and their ministers continue to insist that only they know what’s best for Gazans.

Has anyone actually asked the people of Gaza?


Tesla sales crash in another market and this time, it can’t blame Model Y


Australia is the latest market to report a significant drop in Tesla sales for the first month of 2025, and in this case, the automaker can’t blame the Model Y changeover.

Earlier this week, we reported on European markets releasing car sales data for January, showing a massive drop in Tesla sales.

Tesla sold roughly half as many cars in Europe in January 2025 compared to January 2024.

Most industry watchers agree that there are two main reasons behind the sharp decline:

  • Elon Musk’s meddling in politics and spreading misinformation on social media is driving people away from Tesla
  • Tesla is transitioning Model Y production to the new design, which is affecting production and sales

Now, Australia is reporting its car sales numbers for January 2025, and they show that Tesla is also having issues in this market.

In the first month of 2025, Tesla delivered only 739 vehicles – down 33% year-over-year.

This time, Tesla can’t blame the Model Y changeover as Model Y deliveries were actually up 20%.

Model 3 is the problem. Sales of Tesla’s cheapest model were down 63%.

This has been Tesla’s trend in Australia for the last year. In January 2023, Tesla delivered more than 2,000 vehicles in the country, but now it can only deliver a few hundred units. In 2024, Tesla’s sales dropped 17% for the whole year.

Electrek’s Take

At this point, it’s fairly clear that Tesla’s sales will be abysmal in Q1. Tesla will use the excuse of the Model Y changeover, and it will undoubtedly be partly true, but I think the Elon effect is also a significant part of Tesla’s sales problem.

Unfortunately, that effect is impossible to quantify, but in Australia’s case, the model breakdown suggests it’s part of the problem.

Australia is not a huge car market and it won’t have a major impact on Tesla, but the trend appears to be similar in most markets.

The US is the biggest wildcard, as Elon still has a lot of fans there, obviously. US data is a bit more opaque and it will take a while for us to see an impact, if any.

