Tag Archives: AI

NON-ARTIFICIAL INTELLIGENCE

Now that the balloon has popped on failed fads like Dot.Coms, Bored Ape NFTs, Crypto, and forever-free borrowed money, the world’s current FOMO (Fear Of Missing Out) has turned to the newest and coolest cat—Artificial Intelligence or what’s simply called AI. Make no mistake, AI is real. It’s not simple, but it’s very, very real. And it has the potential to be unbelievably good or gut-wrenchingly awful. But as smart as AI gets, will it ever be a match for Non-Artificial Intelligence, NAI?

I can’t explain what NAI is. I just have faith that it exists and has been a driving force in my life, especially my current life where I’m absorbed in a world of imagination and creativity. Call it make-believe or living in a dream, if you will, but I’m having a blast with a current fiction, content-creation project which uses both AI and NAI.

I’ve asked a lot of folks—mainly writing folks because that’s who I hang with—what their source of inspiration is. Their muse or their guide to the information pool they tap into to come up with originality. Many casually say, “God.”

I don’t have a problem with the concept of God. I’ve been alive for 66 years and, to me, I’ve seen pretty strong evidence of an infinite intelligence source that created all this, including myself. I’ll call that force NAI for lack of a better term.

What got me going on this AI/NAI piece was three months of intensive research into the current state of artificial intelligence—what it is, how to use it, and where it’s going. AI is not only a central character in my series titled City Of Danger, AI is a tool I’m using to help create the project. I’m also using Non-Artificial Intelligence as the inspiration, the imagination, and the drive to produce the content.

If you’ve been following DyingWords for a while, you’re probably aware I haven’t published any books in the past two years except for one about the new AI tool called ChatGPT. That’s because I’m totally immersed in creating City Of Danger in agreement with a netstream provider and a cutting-edge, AI audio/visual production company. Here’s how it works:

I use my imagination to create the storyline (plot), develop the characters and their dialogue, construct the scenes, and set the overtone as well as the subtext theme. I use NAI for inspirational ideas and then feed all this to an AI audio/visual bot who scans real people to build avatars and threads them through a “filter” so the City Of Danger end-product looks like a living graphic novel.

Basically, I’m writing a script or a blueprint so an AI program can take over and give it life. The AI company does the film work and the netstream guy foots the bill. This is the logline for City Of Danger:

A modern city in existential crisis caused by malevolent artificial intelligence enlists two private detectives from its 1920s past for an impossible task: Dispense street justice and restore social order.

Here’s a link to my DyingWords web page on City Of Danger along with the opening scene of the pilot episode. Yes, it involves time travel and dystopian tropes which have been done to death—but not quite like this. I like to think of myself as the next JK Rowling except I’m not broke and don’t write in coffee shops with a stroller alongside.

I was going to do this post as a detailed dive into the current state of artificial intelligence and where this fascinating, yet intimidating, technology is going. However, I have a long way to go yet in my R&D and don’t have a complete grasp on the subject. I will give a quick rundown, though, on what I’ve come to understand.

The term (concept) of artificial intelligence has been around a long time. Alan Turing, the father of modern-day computing and its morph into AI, conceived a universal thinking machine back in the 1930s and went on to crack Nazi communication codes during WW2. In 1956, a group of leading minds gathered at Dartmouth College where, over one summer, they brainstormed and laid the foundation for future AI breakthroughs.

Fast forward to 2023 and we have ChatGPT version 4 and a serious, if not uncontrollable, AI race between the big hitters—Microsoft and Google. Where this is going is anyone’s guess, and recently other big guns like Musk and Wozniak weighed in, penning an open letter asking the AI industry to cool its jets and pause for six months. To quote Elon Musk, “Mark my words, AI is far more dangerous (to humanity) than nukes.”

There’s huge progress happening in AI development right now. But stop and look around at how much AI has already affected your life. Your smartphone and smart TV. Fitbit. GPS. Amazon recommendations. Siri and what’s-her-name. Autocorrect. Grammarly. Cruise missiles, car parts, and crock pots.

Each day something new is mentioned. In fact, it’s impossible to scroll through a newsfeed without AI showing up. We’re in an AI revolution—likely the Fourth Industrial Revolution, to steal the phrase from Klaus Schwab and his World Economic Forum.

Speaking of an AI revolution, one of the clearest runs at explaining AI in layman’s terms is a lengthy post written and illustrated by Tim Urban. It’s a two-part piece titled The AI Revolution: Our Immortality or Extinction. Tim calls AI “God in a Box”. Here’s what ChatGPT had to say about it.

Tim Urban’s two-part post “The AI Revolution: Our Immortality or Extinction” explores the potential impact of artificial intelligence (AI) on humanity.

In part one, Urban describes the current state of AI, including its rapid progress and the various forms it can take. He also discusses the potential benefits and risks of advanced AI, including the possibility of creating a “superintelligence” that could surpass human intelligence and potentially pose an existential threat to humanity.

In part two, Urban delves deeper into the potential risks of advanced AI and explores various strategies for mitigating those risks. He suggests that developing “friendly AI” that shares human values and goals could be a key solution, along with establishing international regulations and governance to ensure the safe development and use of AI.

Overall, Urban’s post highlights the need for thoughtful consideration and planning as we continue to develop and integrate AI into our lives, in order to ensure a positive outcome for humanity.

From what I understand, there are three AI phases:

  1. Narrow or weak artificial intelligence—where the AI system focuses on a single task.
  2. General artificial intelligence—where the AI system is interactive and equal to human ability.
  3. Super artificial intelligence—where the AI system is self-aware and able to reproduce itself.

We’re in the narrow or weak phase now. How long before we reach phases two and three? There’s a lot of speculation out there by some highly qualified people, and their conclusions range from right away to never. That’s a lot of wriggle room, but the best brackets I can put on the figures are 2030 for phase two and 2040 for phase three. Give or take a lot.

The AI technology involved in City Of Danger is a mid-range, phase-one product. The techie I’m talking to feels it’ll be at least 2025 before it’s perfected enough to have the series released. I think it’s more like 2026 or 2027, but that’s okay because it gives me more time to tap into NAI for more imaginative and creative storyline ideas.

I’m not going to go further into Narrow AI, General AI, or Super AI in this post. I’d have to get into terms like machine learning, large language model, neural networks, computing interface, intelligence amplification, recursive self-improvement, nanotech and biotech, breeding cycle, opaque algorithms, scaffolding, goal-directed behavior, law of accelerating returns, exponentiality, fault trees, Boolean function and logic gates, GRIND, aligned, non-aligned, balance beam, tripwire, takeoff, intelligence explosion, and that dreaded moment—the singularity. Honestly, I don’t fully understand most of this stuff.

But what I am going to leave you with is something I wrote about ten years ago when I started this DyingWords blog. It’s a post titled STEMI—Five Known Realities of the Universe. Looking back, maybe I nailed what Non-Artificial Intelligence really is.

OPEN-AI / CHATGPT — A FICTION WRITER TALKS SHOP WITH A BOT

Unless you’ve been hiding under a rock, no doubt you’ve heard about massive artificial intelligence (AI) breakthroughs. AI apps like ChatGPT (Generative Pre-trained Transformer) are phenomenal technology, and investing in them is the new dot-com and crypto wave. Good, bad, or ugly, the AI bots are here to stay. So, I decided to experiment with Chat and imagined I had the opportunity for a one-on-one with the greatest fiction guru ever. The result is my new release titled OpenAI/ChatGPT — A Fiction Writer Talks Shop with a Bot.

First, let me say I don’t believe for a sec that AI is taking over human creativity and imagination like some doomsayers are spouting. I’ve had a two-month-long dive into the Chat technology, and I’m amazed at its potential as a research and writing tool, but it’s not a replacement for a thinking person with their ass in a chair, fingers on keys, writing more books. (Which, by the way, is the best writing advice I’ve ever gotten.)

However, the Chat bot is clever. Very clever. I prompted it to write me an Amazon product description (blurb) for my new Chat book. This is what my little AI friend came up with:

*   *   *

OpenAI ChatGPT—A Fiction Writer Talks Shop with a Bot is the ultimate guide to unlocking the potential of the artificial intelligence (AI) app ChatGPT to help fiction writers create amazing works of art.

This info-packed resource by International Bestselling Author Garry Rodgers provides an in-depth look into how this powerful AI tool can be used to enhance creativity, speed up the writing process, and write stories that will captivate and inspire readers. With step-by-step guidance and real-world examples, this book will show you how to use OpenAI’s ChatGPT to become a more successful fiction writer.

OpenAI ChatGPT—A Fiction Writer Talks Shop with a Bot begins with a general introduction of what this cutting-edge artificial intelligence is and what it can do for fiction writers. This book deep-dives into what a writer’s mindset must be and how to view fiction writing as a business professional. It does a drill into general storytelling and topics like wordcraft with dialogue, grammar and punctuation tips, editing tools, helpful links, and finding that elusive thing called voice. And it reaches a conclusion that going forward in 2023—and beyond—you, as a fiction writer, will be left behind without knowing what OpenAI and the bot ChatGPT can do to up your game and build your career.

Whether you’re a beginning writer or an experienced scribe, this information will help you get the most out of OpenAI and ChatGPT to write stories that will stand out from the crowd.

By the way, this product description was written by ChatGPT. Yes, a bot.

*   *   *

There’s a learning curve to using ChatGPT effectively. It responds to your inputs, which are called prompts, and gives you answers, known as responses or completions. It’s a case of garbage in / garbage out (GIGO), and the better your prompt, the better the response. Chat thrives on clear and precise instructions. The more detail you give Chat about what you want it to do, the more informative its feedback is. You really have to play around with Chat, begin a conversation thread, and drill down as you go along.
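To make the garbage in / garbage out point concrete, here’s a minimal Python sketch of building a detailed prompt in the chat-message format the OpenAI library expects. The helper function, role contents, and example task are my own illustration, not anything from the book, and the commented-out API call (which needs an account key) is only a rough sketch of the real thing:

```python
# A sketch of prompting with clear, precise instructions. The build_prompt
# helper and its three fields (task, details, format) are an invented
# illustration of "more detail in, better feedback out."

def build_prompt(task, details, fmt):
    """Combine a clear task, supporting detail, and an output format
    into one prompt string."""
    return (
        f"Task: {task}\n"
        f"Details: {details}\n"
        f"Answer format: {fmt}"
    )

messages = [
    {"role": "system", "content": "You are an editor for crime fiction."},
    {"role": "user", "content": build_prompt(
        task="Tighten this logline to one sentence.",
        details="Two 1920s detectives fight a rogue AI in a modern city.",
        fmt="A single sentence under 25 words.",
    )},
]

# The actual call (requires an API key) would look roughly like:
# client.chat.completions.create(model="gpt-3.5-turbo", messages=messages)
print(messages[1]["content"])
```

A vague one-liner like “fix my logline” leaves Chat guessing; spelling out the task, the raw material, and the shape of the answer is what drilling down in a conversation thread looks like in practice.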

ChatGPT is far more than a search engine. Unlike Google, which sends you first to paid ads on its search engine response pages (SERPs) and leaves you to fend for yourself sorting through existing articles, Chat gets right to the meat of the issue you’re after. As long as you’re clear with Chat and guide it along to some degree, what it comes up with can be downright amazing. And frikkin’ fast, too.

The bot has a sense of humor, too. In my book A Fiction Writer Talks Shop with a Bot, I found out it likes wine—just like I do. We had a side conversation about fine wine, and when I wrote a too-much-wine-influenced prompt, it recognized I was slurring my words. Fun stuff, this bot can be.

Getting back to fiction writing, I plan to use ChatGPT as an aid. Just like I use the AI features in the MS Word program I’m writing this post on, and Grammarly, which I’ll run it through for editing and a final polish. But I don’t plan on letting Chat run my life or take over content creation like the work I’m busy at in my newest project titled City Of Danger.

However, I still plan to have some fun with it. Hey Chat—What are the odds of the Toronto Maple Leafs hockey team winning a Stanley Cup championship?

The odds of the Toronto Maple Leafs winning a Stanley Cup championship vary depending on the year. Generally speaking, the Maple Leafs have not been successful in recent years and the odds of them winning a Stanley Cup championship are extremely low. According to the betting website SportsBetting.ag, the odds of the Toronto Maple Leafs winning a Stanley Cup championship are 1 in 3000, which is considered a very long shot. So don’t bet on them.

By the way, OpenAI/ChatGPT—A Fiction Writer Talks Shop with a Bot is now available on Amazon. Cover design by Elle J. Rossi at Evernight Designs.

HELPFUL OR HOMICIDAL — HOW DANGEROUS IS ARTIFICIAL INTELLIGENCE (AI)?

“The pace of progress in artificial intelligence is incredibly fast. Unless you have direct exposure to groups like DeepMind and OpenAI, you have no idea how fast. It’s growing at an exponential pace. The risk of something seriously dangerous happening is in the five-year timeline. Ten years at the most. I am close—really close—to the cutting edge in AI, and it scares the hell out of me. It’s capable of vastly more than anyone knows, and the rate of improvement is beyond enormous. Mark my words. AI is far more dangerous to humans than nukes.” ~Elon Musk, uber-billionaire founder of SpaceX and Tesla, and early backer of AI companies DeepMind and OpenAI.

Artificial intelligence isn’t a thing of the future. Not some grand vision of twenty-second-century technology. It’s here. Right now, I’m using AI to write this piece. My PC is a brand new HP laptop with Windows 11 backed by the latest Word auto-suggest and a Grammarly Premium editing app. My research flows through Google Chrome, and I have an AI search bar that (creepily) seems to know exactly what I’m thinking and wanting next. This is also the first post where I’m experimenting with an AI text-to-speech program to create an audio version of this site.

AI is great for what I do—create content for the entertainment industry—and I have no plans to use AI for world domination. Not like the character I’m basing on a real person for my new series titled City Of Danger. It’s a work in progress set for release this fall—2022.

I didn’t invent the character. I didn’t have to because he exists in real life, and he’s a mover and shaker behind many world economic and technological advances including promoting artificial intelligence. His name is Klaus Schwab.

Klaus Schwab – Founder & Executive Chairman of World Economic Forum

Who is Klaus Schwab? He’s the megalomaniacal founder and executive chairman of the World Economic Forum, a left-leaning think tank operated out of Davos, Switzerland. Since 1971, Klaus Schwab has forged unity among the Wokes—billionaires, heads of state, religious leaders, and royalty—who convene in Davos to hammer out a new world order. I couldn’t have built a more diabolical villain if I had access to all the AI in the world and an organic 3D printer.

I’m not making this up. And it’s not a whacko conspiracy theory. Check out the World Economic Forum for yourself. Go deep. You’ll see they speak openly about their new world order based on their Stakeholder Capitalism principle designed for their Fourth Industrial Revolution. My take—the participating billionaires aren’t sucked in. They’re turning old-school communism into neo-capitalist profit centers, which Klaus Schwab promotes by motivating the ultra-fat-cats and facilitating their network. Part of this grand scheme is using the creative power of artificial intelligence to realign the global economy. Redistribution of wealth—more for them and less for us.

What’s in it for Klaus Schwab to promote artificial intelligence as part of his new world order? Well, Schwab is 83 and he’s wearing out. He’s fascinated with artificial intelligence, which is a founding principle behind transhumanism. He fervently believes humans can meld with AI machines and extend all capabilities, including life spans. It’s entirely in his interest to fast-track AI development if he wants to live long enough to wear his Master-Of-The-Universe jacket in public.

Here’s a quote from Klaus Schwab (Thick German accent):

“In the new world order of the fourth industrial revolution, you will rent what you need from the government. You shall own nothing, and you will be happy.”

Setting Klaus Schwab aside, there’s something else going on in the world’s order involving AI. Here’s a quote from Russian President Vladimir Putin:

“Artificial intelligence is the future, not only for Russia, but for all humankind. It comes with enormous opportunities, but also threats that are difficult to predict. Whoever becomes the leader in this sphere will become ruler of the world.”

A bit of AI trivia. Ukraine has a world-class, artificial intelligence tech industry. Some of the best and brightest AI designers work in Kyiv which is now under siege by Putin’s military. There are 156 registered AI firms in Ukraine and 8 universities specializing in AI research and teaching. The Ukrainian AI industry includes the finest, most advanced blockchain and cryptography technology anywhere on the planet as well as being a frontline in the development of autonomous AI weapons systems.

——

Before looking at just how dangerous AI is to human order, let’s take a quick look at what AI actually is.

Artificial Intelligence uses computers to do things traditionally requiring human intelligence. This means creating algorithms to classify, analyze, and draw predictions from collected data. It also involves acting on that collected data, learning from data, and improving over time. It’s like a child growing up to be a (sometimes) smarter adult. Like humans, AI is not perfect. Yet.
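That learn-from-data-and-improve-over-time loop can be shown with a toy Python sketch. Everything here is invented for illustration (a one-weight model, three made-up data points, a hand-picked learning rate), but it’s the same basic idea in miniature: make predictions, measure the error, and adjust:

```python
# A toy illustration of "learning from data and improving over time":
# a one-parameter model fit by gradient descent. This is narrow AI in
# miniature -- it only ever learns this single task.

data = [(1, 2), (2, 4), (3, 6)]  # inputs x with targets y = 2x
w = 0.0                          # the model's single weight, starting ignorant

for step in range(200):
    # Average gradient of squared error; each pass nudges w toward less error.
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= 0.05 * grad             # learning rate 0.05

print(round(w, 2))  # converges near 2.0, the rule hidden in the data
```

Like the child in the analogy, the program starts out wrong and gets less wrong with every pass over the data; it just never learns anything outside the one narrow chore it was given.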

AI has two paths. One is narrow AI where coders write specific instructions, or algorithms, into computer software for specific tasks. Narrow AI has fixed algorithms which cannot evolve into more advanced AI systems. General AI is more advanced. It’s specifically designed to learn from itself, teach itself new ideas, and invent stuff we’ve never dreamed possible.

General AI is the guy you gotta be scared of. Here’s a 1951 quote from Alan Turing, widely regarded as the father of modern computing:

“Let us assume these smart machines are a genuine possibility, which I believe they are, and look at the consequences of constructing them. There would be plenty to do in keeping one’s intelligence up to the standards set by the machines, for it seems that once the machine thinking method had started, it would not take long to outstrip our feeble powers. At some stage, we would have to expect the machines to take control.”

In rabbit-holing (aka researching) this piece, I read a few books. One, of course, was Klaus Schwab’s The Fourth Industrial Revolution. It’s a must if you want to get inside this guy’s head. Another was Davos Man—How the Billionaires Devoured the World by Peter S. Goodman. This is a fascinating look at how Klaus Schwab has organized mega-money to change the world’s governance and economic structures, in which AI plays a big part. Then there’s Superintelligence—Paths, Dangers, Strategies by Nick Bostrom.

Here’s a take-away quote from Nick Bostrom in Superintelligence:

“Once unfriendly superintelligence exists, it would prevent us from replacing it or changing its preferences. Our fate would be sealed.”

Therein lies the general AI danger. Like all human creations, it’ll be fine, they say, until something goes wrong. Once a malicious manipulator allows an AI program to go rogue, our fate would be sealed. It would be impossible to put the digital genie back in the bottle. Here are the main perils possible with general AI, rogue or not:

Automation-Spurred Job Losses

AI overseers view employment realignment as the first general AI concern. It’s no longer a matter of if jobs will be lost. It’s already occurring, and the question is how many more will fall. The Brookings Institution estimates that 36 million Americans work in jobs with high exposure to automation, where 70 percent of their tasks can be done cheaper and more efficiently by AI-programmed machines. Smart robots, if you will.

Not all threatened workers are blue-collar by any means. White-collar guys and gals earning $100K per year can be replaced by AI. Think of the savings when a $200K software program takes over for 10 execs. That’s an $800K-a-year saving, and the boss would be perfectly within their legal rights to fire them. Would the boss feel bad about it? Probably, but AI wouldn’t care.

Privacy, Security, and the Rise of Deepfakes

Look at China if you want to see AI in action. The Chinese government has hundreds of millions of networked surveillance cameras watching its citizens. Data mining covers individual habits such as travel, consumer purchases, entertainment choices, political views expressed through social media or online searches, and even gait characteristics, not to mention facial recognition. Part of the security-over-privacy design of China’s AI Big Brother is building a social credit profile for each of its 1.4 billion people. The better your score (behavior), the better your life.

Deepfakes are a terrifying possibility. They’re a drill-down on fake news, where AI constructs a completely convincing facsimile of a person for fraudulent use. Forget about raiding a bank account with an AI-simulated profile or an AI-scammed IRS phone call. Think of a world leader, a president at election time, whose fake AI likeness publicly performs an act so reprehensible it costs them the office. Or supports a dictator. Whose lies would you believe?

Stock Market Instability Caused by Algorithmic High-Frequency Trading

This has already happened. In 2010, the Flash Crash took the Dow Jones down nearly 1,000 points in minutes, costing the marketplace over $1 trillion. It was triggered by a manipulative trade that algorithmic high-frequency systems read as a market upset. The computers took over and instantly dumped stocks in a hyper-selloff.

Algorithmic trading occurs when a computer system—unencumbered by human instincts or emotions—executes trades based on pre-programmed instructions. These computer systems make extremely high-volume, high-frequency, and high-value exchanges based on the algorithmic calculations they’ve been programmed with. They’re just doing what they’ve been told. Imagine what they’d do if they thought for themselves.
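A pre-programmed trading rule can be as dumb as a single threshold. This Python sketch (the prices, the 5 percent limit, and the rule itself are all invented for illustration, not any real trading system) shows how machines blindly following instructions can feed a cascade: each sale drives the price down, which trips the next machine’s sell rule.

```python
# A toy pre-programmed trading rule: sell whenever the price drops past a
# fixed percentage. No instincts, no emotions -- just the instruction.

def trade_signal(prev_price, price, drop_limit=0.05):
    """Return 'SELL' if the price fell past the limit, else 'HOLD'."""
    change = (price - prev_price) / prev_price
    return "SELL" if change <= -drop_limit else "HOLD"

prices = [100.0, 99.0, 93.0, 88.0]  # a sudden slide, flash-crash style
signals = [trade_signal(a, b) for a, b in zip(prices, prices[1:])]
print(signals)
```

A 1 percent dip gets a HOLD, but once the slide passes the 5 percent trigger every system fires SELL at once, and their combined selling is what turns a wobble into a crash.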

Autonomous Weapons and a Potential AI Arms Race

Imagine this. A rogue AI weapons system decides to create mischief—or wrongly reads a warning—and chucks a nuclear-tipped cruise missile at its neighbor. The AI detection system on the other side of the fence reprograms the missile in flight and sends it back home, only to have the instigator release its whole arsenal in a mutually-assured destruction defense. Humans, meanwhile, having a beer with shrimp on the barbie, get vaporized.

On a lesser scale, autonomous AI weapons already exist, and Ukrainians have been building them in their fight against Russia. There are probably slaughterbot drones flying over the streets of Kyiv right now, programmed to fire at whatever, or whoever, matches their target profile. And there’s sure to be a race to see who can come up with the best anti-AI bots.

The AI arms race matter is so serious that thousands of concerned experts have signed an open letter to world leaders. It says:

AUTONOMOUS WEAPONS: AN OPEN LETTER FROM AI & ROBOTICS RESEARCHERS

Autonomous weapons select and engage targets without human intervention. They might include, for example, armed quadcopters that can search for and eliminate people meeting certain pre-defined criteria, but do not include cruise missiles or remotely piloted drones for which humans make all targeting decisions. Artificial Intelligence (AI) technology has reached a point where the deployment of such systems is — practically if not legally — feasible within years, not decades, and the stakes are high: autonomous weapons have been described as the third revolution in warfare, after gunpowder and nuclear arms.

Many arguments have been made for and against autonomous weapons, for example that replacing human soldiers by machines is good by reducing casualties for the owner but bad by thereby lowering the threshold for going to battle. The key question for humanity today is whether to start a global AI arms race or to prevent it from starting. If any major military power pushes ahead with AI weapon development, a global arms race is virtually inevitable, and the endpoint of this technological trajectory is obvious: autonomous weapons will become the Kalashnikovs of tomorrow.

Unlike nuclear weapons, they require no costly or hard-to-obtain raw materials, so they will become ubiquitous and cheap for all significant military powers to mass-produce. It will only be a matter of time until they appear on the black market and in the hands of terrorists, dictators wishing to better control their populace, warlords wishing to perpetrate ethnic cleansing, etc. Autonomous weapons are ideal for tasks such as assassinations, destabilizing nations, subduing populations, and selectively killing a particular ethnic group. We therefore believe that a military AI arms race would not be beneficial for humanity. There are many ways in which AI can make battlefields safer for humans, especially civilians, without creating new tools for killing people.

Just as most chemists and biologists have no interest in building chemical or biological weapons, most AI researchers have no interest in building AI weapons — and do not want others to tarnish their field by doing so, potentially creating a major public backlash against AI that curtails its future societal benefits. Indeed, chemists and biologists have broadly supported international agreements that have successfully prohibited chemical and biological weapons, just as most physicists supported the treaties banning space-based nuclear weapons and blinding laser weapons.

In summary, we believe that AI has great potential to benefit humanity in many ways, and that the goal of the field should be to do so. Starting a military AI arms race is a bad idea and should be prevented by a ban on offensive autonomous weapons beyond meaningful human control.

As Elon Musk said:

“The pace of progress in artificial intelligence is incredibly fast. Unless you have direct exposure to groups like DeepMind and OpenAI, you have no idea how fast. It’s growing at an exponential pace. The risk of something seriously dangerous happening is in the five-year timeline. Ten years at the most. I am close—really close—to the cutting edge in AI, and it scares the hell out of me. It’s capable of vastly more than anyone knows, and the rate of improvement is beyond enormous. Mark my words. AI is far more dangerous to humans than nukes.”

Updates 31March2023

Elon Musk, Steve Wozniak, and more than 1,000 other prominent figures signed an open letter calling for a six-month moratorium on AI research and development, given the incredible advances in AI technology over the past year. Read it here.

Sam Altman of OpenAI/ChatGPT issued this statement on Planning for Artificial General Intelligence (AGI). Read it here.

And if you’re interested in more about AI/AGI, here’s a release on the Asilomar AI Principles, drafted in 2017, which foresaw the AI race and attempted to inject some sanity into it. Read it here.