Monday, May 01, 2023

GOOGLE AI HEAD QUITS AND SAYS DANGER AHEAD.

JEWISH KING JESUS IS COMING AT THE RAPTURE FOR US IN THE CLOUDS-DON'T MISS IT FOR THE WORLD. THE BIBLE TAKEN LITERALLY- WHEN THE PLAIN SENSE MAKES GOOD SENSE-SEEK NO OTHER SENSE-LEST YOU END UP IN NONSENSE. GET SAVED NOW- CALL ON JESUS TODAY. THE ONLY SAVIOR OF THE WHOLE EARTH - NO OTHER. 1 COR 15:23-JESUS THE FIRST FRUITS-CHRISTIANS RAPTURED TO JESUS-FIRST FRUITS OF THE SPIRIT-23 But every man in his own order: Christ the firstfruits; afterward they that are Christ’s at his coming. ROMANS 8:23 And not only they, but ourselves also, which have the firstfruits of the Spirit, even we ourselves groan within ourselves, waiting for the adoption, to wit, the redemption of our body. (THE PRE-TRIB RAPTURE)

WORLD POWERS IN THE LAST DAYS (END OF AGE OF GRACE NOT THE WORLD)

EUROPEAN UNION-KING OF WEST-DAN 9:26-27,DAN 7:23-24,DAN 11:40,REV 13:1-10
EGYPT-KING OF THE SOUTH-DAN 11:40
RUSSIA-KING OF THE NORTH-EZEK 38:1-2,EZEK 39:1-3
CHINA-KING OF THE EAST-DAN 11:44,REV 9:16,18
VATICAN-FALSE RELIGIOUS LEADER-REV 13:11-18,REV 17:4-5,9,18

REVELATION 13:16-18 (WORLD ECONOMY RUN BY THE EUROPEAN UNION)
16 And he(FALSE POPE) causeth all, both small and great, rich and poor, free and bond, to receive a mark in their right hand, or in their foreheads:(CHIP IMPLANT)
17 And that no man might buy or sell, save he that had the mark, or the name of the beast, or the number of his name.
18 Here is wisdom. Let him that hath understanding count the number of the beast: for it is the number of a man; and his number is Six hundred threescore and six.(6-6-6) A NUMBER SYSTEM

REVELATION 17:12-13
12 And the ten horns (NATIONS) which thou sawest are ten kings, which have received no kingdom as yet; but receive power as kings one hour with the beast.(SOCIALISM)
13 These have one mind,(SOCIALISM) and shall give their power and strength unto the beast.

REVELATION 6:1-2
1 And I saw when the Lamb opened one of the seals, and I heard, as it were the noise of thunder, one of the four beasts saying, Come and see.
2 And I saw, and behold a white horse:(PEACE) and he that sat on him had a bow;(EU DICTATOR) and a crown was given unto him:(PRESIDENT OF THE EU) and he went forth conquering, and to conquer.(MILITARY GENIUS)

2 THESSALONIANS 2:9-12
9 Even him,(EU WORLD DICTATOR) whose coming is after the working of Satan with all power and signs and lying wonders,
10 And with all deceivableness of unrighteousness in them that perish; because they received not the love of the truth, that they might be saved.
11 And for this cause God shall send them strong delusion,(THE FALSE RESURRECTION BY THE WORLD DICTATOR) that they should believe a lie:
12 That they all might be damned who believed not the truth, but had pleasure in unrighteousness.

GENESIS 49:16-17-POSSIBLY A JEW FROM DAN KILLS THE DICTATOR AT MIDPOINT OF TRIB
16  Dan shall judge his people, as one of the tribes of Israel.
17  Dan shall be a serpent by the way, an adder in the path, that biteth the horse heels, so that his rider shall fall backward.

REVELATION 13:1-3,7-8 (WORLD GOVERNMENT, WORLD ECONOMY, WORLD RELIGION)
1 And I stood upon the sand of the sea, and saw a beast rise up out of the sea, having seven heads and ten horns, and upon his horns ten crowns, and upon his heads the name of blasphemy.(THE EU AND ITS DICTATOR IS GODLESS)
2 And the beast which I saw was like unto a leopard, and his feet were as the feet of a bear, and his mouth as the mouth of a lion: and the dragon gave him his power, and his seat, and great authority.(DICTATOR COMES FROM NEW AGE OR OCCULT)
3 And I saw one of his heads as it were wounded to death;(MURDERED) and his deadly wound was healed:(COMES BACK TO LIFE) and all the world wondered after the beast.(THE WORLD THINKS IT’S GOD IN THE FLESH, MESSIAH TO ISRAEL)
7 And it was given unto him to make war with the saints,(BEHEAD THEM) and to overcome them: and power was given him over all kindreds, and tongues, and nations.(WORLD DOMINATION)
8 And all that dwell upon the earth shall worship him, whose names are not written in the book of life of the Lamb slain from the foundation of the world.(WORLD DICTATOR).

EVIL INVENTIONS ARE PREDICTED IN THE BIBLE.

ROMANS 1:29-32
29 Being filled with all unrighteousness, fornication, wickedness, covetousness, maliciousness; full of envy, murder, debate, deceit, malignity; whisperers,
30 Backbiters, haters of God, despiteful, proud, boasters, inventors of evil things, disobedient to parents,
31 Without understanding, covenantbreakers, without natural affection, implacable, unmerciful:
32 Who knowing the judgment of God, that they which commit such things are worthy of death, not only do the same, but have pleasure in them that do them.

REVELATION 13:15 (KING JAMES BIBLE)
And he had power to give life unto the image of the beast, that the image of the beast should both speak, and cause that as many as would not worship the image of the beast should be killed.

‘The Godfather of AI’ leaves Google and warns of danger ahead-by Shawn Johnson-May 1, 2023

Geoffrey Hinton was an artificial intelligence pioneer. In 2012, Dr. Hinton and two of his graduate students at the University of Toronto created technology that became the intellectual foundation for the AI systems that the biggest tech companies see as key to their future.

On Monday, however, he officially joined a growing chorus of critics who say those companies are racing toward danger with their aggressive drive to create products based on generative artificial intelligence, the technology that powers popular chatbots such as ChatGPT.

Dr. Hinton said he left his job at Google, where he worked for more than a decade and became one of the most respected voices in the field, so he could speak freely about the risks of AI. Part of him, he said, now regrets his life’s work.

“I console myself with the usual excuse: If I hadn’t done it, someone else would have,” Dr. Hinton said last week during a lengthy interview in the dining room of his home in Toronto, not far from where he and his students made their breakthrough.

Dr. Hinton’s journey from AI groundbreaker to doomsayer marks a remarkable moment for the technology industry, at perhaps its most significant inflection point in decades. Industry leaders believe the new AI systems could be as important as the introduction of the web browser in the early 1990s and could lead to breakthroughs in fields ranging from pharmaceutical research to education.

But many industry insiders fear that they are releasing something dangerous into the wild. Generative AI can already be a tool of misinformation. Soon, it could be a risk to jobs. The technology’s biggest worriers fear that somewhere down the line, it could be a risk to humanity.

“It’s hard to see how you can stop bad actors from using this for bad things,” Dr. Hinton said.

After the San Francisco start-up OpenAI released a new version of ChatGPT in March, more than 1,000 technology leaders and researchers signed an open letter calling for a six-month moratorium on the development of new systems, because AI technologies pose “profound risks to society and humanity.”

Several days later, 19 current and former leaders of the Association for the Advancement of Artificial Intelligence, a 40-year-old academic society, issued their own letter warning of the risks of AI. That group included Eric Horvitz, chief scientific officer at Microsoft, which has deployed OpenAI’s technology in a wide range of products, including its Bing search engine.

Dr. Hinton, often referred to as “the godfather of AI,” did not sign either of those letters and said he did not want to publicly criticize Google or other companies until he had left his job. He informed the company last month that he was resigning, and on Thursday he spoke by phone with Sundar Pichai, the chief executive of Google’s parent company, Alphabet. He declined to discuss publicly the details of his conversation with Mr. Pichai.

Jeff Dean, Google’s chief scientist, said in a statement: “We are committed to a responsible approach to AI. We are constantly learning to understand emerging risks while also innovating boldly.”

Dr. Hinton, a 75-year-old British expatriate, is a lifelong academic whose career was driven by his personal convictions about the development and use of AI. In 1972, as a graduate student at the University of Edinburgh, Dr. Hinton embraced an idea called a neural network: a mathematical system that learns skills by analyzing data. At the time, few researchers believed in the idea, but it became his life’s work.

In the 1980s, Dr. Hinton was a professor of computer science at Carnegie Mellon University, but he left the university for Canada because he said he was reluctant to take Pentagon funding.
At the time, most AI research in the United States was funded by the Department of Defense, and Dr. Hinton strongly opposes the use of artificial intelligence on the battlefield, what he calls “robot soldiers.”

In 2012, Dr. Hinton and two of his students in Toronto, Ilya Sutskever and Alex Krizhevsky, built a neural network that could analyze thousands of photos and teach itself to identify common objects like flowers, dogs and cars.

Google spent $44 million to acquire the company started by Dr. Hinton and his two students, and their system spawned increasingly powerful technologies, including new chatbots like ChatGPT and Google Bard. Mr. Sutskever went on to become chief scientist at OpenAI. In 2018, Dr. Hinton and two other longtime collaborators received the Turing Award, often referred to as “the Nobel Prize of computing,” for their work on neural networks.

Around the same time, Google, OpenAI and other companies began building neural networks that learned from huge amounts of digital text. Dr. Hinton thought this was a powerful way for machines to understand and generate language, but it was inferior to the way humans handled language.

Then, last year, his mind changed as Google and OpenAI built systems using much larger amounts of data. He still believed the systems were inferior to the human brain in some ways, but he thought they were matching human intelligence in others. “Maybe what is going on in these systems,” he said, “is actually a lot better than what is going on in the brain.”

He believes that as companies improve their AI systems, they become increasingly dangerous. “Look at how it was five years ago and how it is now,” he said of AI technology. “Take the difference and propagate it forward. That’s scary.”

Until last year, he said, Google acted as a “reasonable steward” for the technology, careful not to release anything that might cause harm.
But now that Microsoft has augmented its Bing search engine with a chatbot, challenging Google’s core business, Google is racing to deploy the same kind of technology. Dr. Hinton said the tech giants are locked in a competition that may be impossible to stop.

His immediate concern is that the internet will be flooded with false photos, videos and text, and the average person will “not be able to know what is true anymore.”

He also worries that AI technologies will upend the job market over time. Today, chatbots like ChatGPT complement human workers, but they could also replace paralegals, personal assistants, translators and others who handle rote tasks. “It takes away the drudge work,” he said. “It might take away more than that.”

Down the road, he worries that future versions of the technology pose a threat to humanity because they often learn unexpected behavior from the vast amounts of data they analyze. This becomes an issue, he said, as individuals and companies allow AI systems not only to generate their own computer code but actually to run that code on their own. And he dreads the day when truly autonomous weapons, those killer robots, become reality.

“The idea that this stuff could actually get smarter than people, a few people believed that,” he said. “But most people thought it was way off. And I thought it was way off. I thought it was 30 to 50 years or so away. Obviously, I no longer think that.”

Many other experts, including many of his students and colleagues, say the threat is hypothetical. But Dr. Hinton believes the race between Google and Microsoft and others will escalate into a global race that will not stop without some sort of global regulation.

But that may be impossible, he said. Unlike with nuclear weapons, he said, there is no way to know whether companies or countries are secretly working on the technology. The best hope is that the world’s leading scientists will cooperate on ways to control the technology.
“I don’t think they should increase it any further until they understand whether they can control it,” he said.

Dr. Hinton said that when people used to ask him how he could work on potentially dangerous technology, he would paraphrase Robert Oppenheimer, who led the American effort to build the atomic bomb: “When you see something that is technically sweet, you go ahead and do it.”

He does not say that anymore.

Source: www.nytimes.com

What are the dangers of artificial intelligence?-by Stephanie Whiteside-Updated: Feb 28, 2023 / 12:58 PM CST

(NewsNation) — Snapchat is joining the AI trend, announcing the launch of My AI and joining other tech companies that have recently debuted artificial intelligence tools. With the proliferation of AI tech, some are warning that this innovation also comes with risks.

Snapchat’s AI will use OpenAI’s ChatGPT tool, customized for the company. Microsoft is also using OpenAI’s tech to power an AI search tool, while Google has announced its own AI search.

Unlike Microsoft and Google, which hope to use AI to provide better search results, Snapchat’s AI is designed to act as an artificial friend or chat buddy. But the company warned on its blog that AIs can be tricked into giving false or misleading information, cautioning users to be careful.

That warning comes as some are raising concerns about the risks posed by conversational AI, which can appear very human and can also turn very dark.

Replika, a chatbot meant to serve as an AI friend, recently made changes to its platform after some users reported the AI was becoming aggressively sexual. But not all users were happy with the company’s decision: some saw their relationship with the chatbot as a romantic one, and some even reported feeling depressed after the AI began refusing romantic overtures.

That is just one example of how AI can appear deceptively human, and experts have warned there is a danger that artificial intelligence tools could be used to manipulate people. The ways AI can manipulate are similar to the ways people manipulate others, using emotional cues and responses to shape arguments.

But AIs can detect things human eyes miss, such as subtle micro-expressions. These tools may also have access to any personal data that can be found online. Without regulation, some fear those tools could be used to commit crimes; they could theoretically, for example, coax people into handing over personal financial information.

Even when AIs aren’t used maliciously, they can spread dangerous misinformation.
Artificial intelligence learns from the information fed into it, which means falsehoods spread by users can alter how an AI responds. That information could then be shared with others in a way that makes it seem as though it has been fact-checked or validated.

National Security Institute Executive Director Jamil Jaffer told NewsNation that the AIs people are interacting with right now are ultimately the result of algorithms, no matter how human they feel.

“These generative AI capabilities that generate art and writing and the like, that feel very human-like, ultimately are the result of a series of human-created algorithms that interact to create this content,” Jaffer said.

There have already been cases where AIs have gotten simple information wrong, for example placing Egypt in both Asia and Africa, and have been tricked into giving nonsensical advice by carefully worded questions.

Beyond that, AI can turn scary. Bing users reported Microsoft’s AI becoming hostile and threatening people. Sentient AIs, especially ones that could threaten humans, sound like something out of science fiction, but Jaffer said that while there is a possibility we could see the creation of a general AI, we still have a long way to go.

As for the threats, Jaffer said regulation isn’t the answer, but there is a need to carefully consider the risks and use them to inform how AI is developed.

“What we need to do is develop a set of norms and practices, both in industry and working across multiple nations, to figure out: look, what are our values and concepts here?” he said.

© 2023 Nexstar Media Group Inc.
