Connect with us

Published

on

Isaac Asimoc, a writer well-known for his works of science fiction, penned the “Three Laws of Robotics” in 1972.

Asimov wrote these “laws” while thinking about androids, and he imagined a world where human-like robots would have human masters and need a set of programming rules to prevent them from causing harm.

But51 years after the laws were first published, technology has advanced significantly and humans now have a different understanding of what robots and artificial intelligence (AI) can look like and how people interact with them.(h/t to Survivopedia.com) The three laws of robotics

While a robot takeover is still more fiction than fact, as a prepper it’s worth reviewing Asimov’s laws to prepare for when SHTF. First law “A robot may not injure a human being or through inaction allow a human being to come to harm.” Second law “A robot must obey orders given by human beings, except where such orders would conflict with the first law.” Third law “A robot must protect its own existence as long as such protection does not conflict with the first and the second law.”

While the laws are fiction, Asimov’s thought process is something preppers should mimic.

Asimov wasn’t a prepper, but he realized that AI-powered computers, or androids and robots, as he put it, could be dangerous despite their many benefits because they couldthink for themselves. He also realized the difficulty in programming them to ensure that they would not betray their human masters.

The dichotomy here lies in allowing computers to become sentient, or feeling and thinking for themselves, while still keeping some level of control over them as their masters. This two-pronged goalmaybe impossible, especially since humans are still in the infant stages of AI and there have already been problems in creating the necessary fail-safes to ensure the safety of users.

Astechnology continues to advance, AI computer systems are now teaching themselves much faster than any thought being put into creating the necessary controls to keep them safe.

In one of the earliest AI experiments where two computers with AI systems installed communicated with each other, it only took minutes for the two programs to develop their own language and communicate. This meant their human operatorswere unable to understand the two AI systems.

Chatbots are computer programs that mimic human conversations through text.

But back in 2017, when the experiment was conducted, chatbots weren’t yet capable of more sophisticated functions beyond simple tasks like answering customer questions or ordering food. To address this, Facebook’s Artificial Intelligence Research Group (FAIR) tried to find out if these programs could be taught to negotiate.

The researchers developed two chatbots named Alice and Bob.Using a game where the two chatbots and human players bartered virtual items like balls and hats, Alice and Bob showed that they could make deals with varying degrees of success.

Facebook researchers observed the language when the chatbots were negotiating among themselves. They noticed that becausethey didn’t instruct the bots to stick to the rules of English, Alice and Bob started using their own language: a “derived shorthand” they invented to communicate faster.

While the researchers stopped the experiment because of the potential danger, further research into AI continued through the years. There is no policingof potential tasks for advanced AI systems

The AI systems available to modern consumers surpass those used in the Facebook experiment.There is a wider array of AI systems available to use, some of which can be hired through websites, to accomplish different tasks.

However, there is no way to monitorwhat those tasks might be to guarantee that they are not abused by those who want to use these tools for crimes or to harm others. (Related: Digital prepping: How to protect yourself against cyberattacks.)

The first question for preppers is, can these systems turn against their human masters?

According to an Air Force colonel, that has already happened during an experimental drone test. The colonel eventually tried to deny what he said, but there have been reports about the incident.

During the test, a drone was assigned to find and eliminate targets, but it needed the permission of a human controller before firing.

After some time, the drone realized that the controller was responsible for the “points” it lost when it denied the permission it needed to take out certain targets. To solve the problem, the drone “killed” the controller.

No real person was harmed during the test, but it’s easy to see how the scenario could have turned ugly if the drone was assigned to protect an area with real people.

The drone test also illustrates the potential challenges of programming AI. The tests show that it can be impossible to program a sentient AI to prevent it from doing what it wants to do because it’s clever enough to find a way to disobey direct orders.

Rogue drones controlled by AI may harm humans, but how can you prevent this from happening?

Many ethical questions are being raised about AI, but experts still haven’t been able to present real-world answers. Unfortunately, they might not start working on this problem unless atragedy occurs.

By then, it might be too late to discuss the ethics associated with AI.And the U.S. isn’t the only country working onAI technology.

Other countries, along with some that aren’t on friendly terms with the U.S., are also developing their AI systems, both for military and civilian applications.

AI is already being used for one dangerous application: The creation of deep fake videos.

Stealing an actors “copyright” to their likeness isn’t harmful, but it is still consideredcriminal activity. When that same level of artificial intelligence is applied to identity theft, even preppers and non-preppers alike won’t be safe. How can you prepare yourself before the rise of AI?

Even nowAI exists on the internet and is already being used to create various content.This means you can’t always trust that the content you see or read was created by humans.

As of writing, at least19.2 percent of articles on the internet have some AI-generated content. At least 7.7percent of these articles have 75 percent or more of their content generated by AI.

Experts warn that by 2026, at least 90percent of internet content will be AI-generated.

How is this relevant to you as a prepper?

AI-generated content can be problematic because this meansmore content will be politicized.

Data suggests that Russia and other countries are alreadytrolling U.S. websites, potentially making posts and uploading articles thatare inflammatory to add to the political division in the country.These countries can continue to use AI to increase their effectiveness by targeting their articles more specifically.

With the potential dangers of AI steadily increasing as time goes by, you must be more careful about what you see and read online. Do not believe everything your see or hear,especially content with political overtones.

Learn how to be anindependent fact-checker and do your research to find out if what you are reading and hearing is true.

Be wary ofmainstream media that may be spinning news stories to support their own political agenda. Check reliable news sources for updates on what the Russian, Chinese and other countries’ intelligence services are doing.

This also means being careful about what you post online. Never post personal information online or anything that hackers could use to try and figure out anything about you.

Do not use systems like Alexa and Google Assistant, which often allow computers to eavesdrop on user conversations. Even though the companies that make these products claim they arent spying on users, various reports about them prove otherwise.

Don’t “computerize” your life bystorin your data online. This service may seem convenient because you can access your data anywhere, but there’s alsoa chance that others could access all your data in the cloud.

Are you willing to risk a data breach just for convenience? Most of the time, companies offering these services havethings buried in the fine print of their contracts, which allows them to listen in on your computer microphones and look at images from your phone or laptop cameras.

To trulyprotect yourself from the potential dangers of AI, you must reevaluate your usage of the internet and computers.Technology is convenient, but you must be responsible and make sure your information can’t be used against you by those who might do you harm.

Don’t store yourdata online and unplug things like microphones and cameras when not in use.

Sacrifice convenience to protect yourself and your family from the potential dangers of AI technology.

Visit Computing.newsto learn more about the growing dangers of AI systems.

Watch the video below to find out how AI technology threatens to take over thousands of jobs.

This video is from theNewsClips channel on Brighteon.com. More related stories:

Google is using AI to dig through Gmail accounts to find exactly what youre looking for and perhaps MORE.

Peeping through the windows: Microsoft to incorporate MANDATORY AI systems in Windows 11 to SPY on all your computing activities.

Dallas school district installs AI spying, surveillance systems to keep an eye on students.

Sources include:

Survivopedia.com

USAToday.com

TheConversation.com

TheGuardian.com

Brighteon.com
Submit a correction >>

Continue Reading

Business

Ryanair and easyJet cancel hundreds of flights over air traffic control strike

Published

on

By

Ryanair and easyJet cancel hundreds of flights over air traffic control strike

Ryanair and easyJet have cancelled hundreds of flights as a French air traffic controllers strike looms.

Ryanair, Europe’s largest airline by passenger numbers, said it had axed 170 services amid a plea by French authorities for airlines to reduce flights at Paris airports by 40% on Friday.

EasyJet said it was cancelling 274 flights during the action, which is due to begin later as part of a row over staffing numbers and ageing equipment.

Money latest: Bond market fires warning shot at Downing St

The owner of British Airways, IAG, said it was planning to use larger aircraft to minimise disruption for its own passengers.

The industrial action is set to affect all flights using French airspace, leading to wider cancellations and delays across Europe and the wider world.

Ryanair said its cancellations, covering both days, would hit services to and from France, and also flights over the country to destinations such as the UK, Greece, Spain and Ireland.

More from Money

Group chief executive Michael O’Leary has campaigned for a European Union-led shake-up of air traffic control services in a bid to prevent such disruptive strikes, which have proved common in recent years.

He described the latest action as “recreational”.

Michael O'Leary. Pic: Reuters
Image:
Michael O’Leary. Pic: Reuters

“Once again, European families are held to ransom by French air traffic controllers going on strike,” he said.

“It is not acceptable that overflights over French airspace en route to their destination are being cancelled/delayed as a result of yet another French ATC strike.

“It makes no sense and is abundantly unfair on EU passengers and families going on holidays.”

Ryanair is demanding the EU ensure that air traffic services are fully staffed for the first wave of daily departures, as well as to protect overflights during national strikes.

“These two splendid reforms would eliminate 90% of all ATC delays and cancellations, and protect EU passengers from these repeated and avoidable ATC disruptions due to yet another French ATC strike,” Mr O’Leary added.

Continue Reading

Business

CBI kicks off search for successor to ‘saviour’ Soames

Published

on

By

CBI kicks off search for successor to 'saviour' Soames

The CBI has begun a search for a successor to Rupert Soames, its chairman, as it continues its recovery from the crisis which brought it to the brink of collapse in 2023.

Sky News has learnt that the business lobbying group’s nominations committee has engaged headhunters to assist with a hunt for its next corporate figurehead.

Mr Soames, the grandson of Sir Winston Churchill, was recruited by the CBI in late 2023 with the organisation lurching towards insolvency after an exodus of members.

Money latest: Has bond market calmed after chancellor’s tears?

The group’s handling of a sexual misconduct scandal saw it forced to secure emergency funding from a group of banks, even as it was frozen out of meetings with government ministers.

One prominent CBI member described Mr Soames on Thursday as the group’s “saviour”.

“Without his ability to bring members back, the organisation wouldn’t exist today,” they claimed.

More from Money

Rupert Soames
Image:
Rupert Soames. Pic: Reuters

Read more:
Starmer could be ousted as PM ‘within months’
Reeves’s tears a hard watch but reminder of her challenges

Mr Soames and Rain Newton-Smith, the CBI chief executive, have partly restored its influence in Whitehall, although many doubt that it will ever be able to credibly reclaim its former status as ‘the voice of British business’.

Its next chair, who is also likely to be drawn from a leading listed company boardroom, will take over from Mr Soames early next year.

Egon Zehnder International is handling the search for the CBI.

“The CBI chair’s term typically runs for two years and Rupert Soames will end his term in early 2026,” a CBI spokesperson said.

“In line with good governance, we have begun the search for a successor to ensure continuity and a smooth transition.”

Continue Reading

Technology

Apple’s China iPhone sales grows for the first time in two years

Published

on

By

Apple's China iPhone sales grows for the first time in two years

People stand in front of an Apple store in Beijing, China, on April 9, 2025.

Tingshu Wang | Reuters

Apple iPhone sales in China rose in the second quarter of the year for the first time in two years, Counterpoint Research said, as the tech giant looks to turnaround its business in one of its most critical markets.

Sales of iPhones in China jumped 8% year-on-year in the three months to the end of June, according to Counterpoint Research. It’s the first time Apple has recorded growth in China since the second quarter of 2023.

Apple’s performance was boosted by promotions in May as Chinese e-commerce firms discounted Apple’s iPhone 16 models, its latest devices, Counterpoint said. The tech giant also increased trade-in prices for some iPhone.

“Apple’s adjustment of iPhone prices in May was well timed and well received, coming a week ahead of the 618 shopping festival,” Ethan Qi, associate director at Counterpoint said in a press release. The 618 shopping festival happens in China every June and e-commerce retailers offer heavy discounts.

Apple’s return to growth in China will be welcomed by investors who have seen the company’s stock fall around 15% this year as it faces a number of headwinds.

U.S. President Donald Trump has threatened Apple with tariffs and urged CEO Tim Cook to manufacture iPhones in America, a move experts have said would be near-impossible. China has also been a headache for Apple since Huawei, whose smartphone business was crippled by U.S. sanctions, made a comeback in late 2023 with the release of a new phone containing a more advanced chip that many had thought would be difficult for China to produce.

Since then, Huawei has aggressively launched devices in China and has even begun dipping its toe back into international markets. The Chinese tech giant has found success eating away at some of Apple’s market share in China.

Huawei’s sales rose 12% year-on-year in the second-quarter, according to Counterpoint. The firm was the biggest player in China by market share in the second quarter, followed by Vivo and then Apple in third place.

“Huawei is still riding high on core user loyalty as they replace their old phones for new Huawei releases,” Counterpoint Senior Analyst Ivan Lam said.

Continue Reading

Trending