AI Archives | KnowTechie: Daily Tech News for the Non-Techie
https://knowtechie.com/tag/ai/
Last updated: Sun, 18 Dec 2022 23:22:20 +0000

What is ChatGPT and how does it work?
https://knowtechie.com/what-is-chatgpt-ai-chatbot/
Wed, 14 Dec 2022 18:59:32 +0000

ChatGPT is the latest AI-powered tool to use the power of GPT, the natural language AI model created by OpenAI. But what exactly is it?

AI-powered tools are everywhere, from image retouching to art creation. AI can also have conversations, with ChatGPT being one of the latest chatbots.

The most important thing about ChatGPT is that a human didn’t script it. There are no pre-written answers, databases, or preset tools.

The chatbot can hold conversations, look up facts, review your code (or write it), and attempt to answer anything you ask.

The question is, should we be worried about teaching computers to think in this way?

What is ChatGPT?

chatgpt question prompt
Image: KnowTechie

The natural language model tends to be on the chatty side. The chatbot could have said things more succinctly.

That said, the response did answer our question. It kept everything in the active voice, a struggle for many writers.

If you ask ChatGPT what it is, you get a pretty comprehensive answer:

ChatGPT is a large language model trained by OpenAI to generate human-like text. It is designed to be able to hold conversations with people, using its knowledge of language and natural communication skills to provide relevant and engaging responses.

With ChatGPT, users can ask questions and have a conversation with a virtual assistant, learning more about a variety of topics and getting answers to their questions in real-time. Whether you’re looking for information, entertainment, or just someone to talk to, ChatGPT is ready and able to assist.

Think of ChatGPT as a general-use Alexa, Siri, or Google Assistant. It understands conversational English, humor, essay structure, and more.

It’s also insanely popular, with over a million people logging into the free service in the first week it was open.

And don’t forget, the more we talk to it, the better it gets as it learns from our questions.

How does ChatGPT work?

futuristic blue illustration of a human brain and technology
Image: Pixabay

If you’ve ever typed a message to a chatbot on a company website, you already know how to use ChatGPT. Those chatbots are scripted or monitored by real humans, but ChatGPT is not.

The AI model can “answer followup questions, admit its mistakes, challenge incorrect premises, and reject inappropriate requests.” The questions can be of almost any form, topic, or task.

OpenAI used books, websites, and articles to train the AI behind the chatbot. Those sources must have covered a wide range of subjects, as you can get the AI to do your coding work for you.

The answers you get are generated from probabilities based on your input questions. You can trip it up by asking recent history questions, however, as the dataset only goes up to 2021.
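That probabilistic generation can be illustrated with a toy example. ChatGPT uses a vastly larger transformer model, but the core idea, picking each next word by sampling from a learned probability distribution over what tends to follow, can be sketched with a tiny bigram model (everything below is illustrative, not OpenAI's actual code):

```python
import random
from collections import defaultdict

def train_bigram(text):
    """Count how often each word follows each other word."""
    counts = defaultdict(lambda: defaultdict(int))
    words = text.split()
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts

def generate(counts, start, length=8, seed=0):
    """Sample a continuation, picking each next word in proportion to its count."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length):
        followers = counts.get(out[-1])
        if not followers:
            break
        words = list(followers)
        weights = [followers[w] for w in words]
        out.append(rng.choices(words, weights=weights)[0])
    return " ".join(out)

corpus = "the model predicts the next word and the next word follows the model"
model = train_bigram(corpus)
print(generate(model, "the"))
```

A model trained on essentially the whole web does the same thing at a different scale, which is also why it can confidently produce wrong answers: it emits what is probable, not what is verified.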

What can you use ChatGPT for?

Honestly, the better question is – what can’t you use it for? ChatGPT can write stories, songs, code, and more.

People have been using it to generate AI art prompts to feed into Midjourney and DALL-E. That gives better-quality images, as the prompts generated are oddly specific.

It might even be better than human-written prompts. After all, the AI training models that power the majority of AI-generated art are the same models powering ChatGPT.

Another impressive use we noticed was creating a virtual machine inside ChatGPT. That’s right, inside the chat prompt, you can create another computer.

That means you can create a fully-featured computer with an internet connection, a web browser, and more.

When it comes to coding or science questions, the results can be even better than what the number one search engine, Google Search, returns.

Do we have to worry about ChatGPT?

chatgpt danger question
Image: KnowTechie

Maybe not. While we shouldn’t take ChatGPT’s word on its own potential danger, we can look to standardized tests.

Take a standard IQ test, on which ChatGPT scored a “low average” 83. Hardly super-villain genius level, although it has only been ‘alive’ for a few months, so who knows once it gets more training data.

We’re not sure AI will put people out of work just yet. The number of times it returns incorrect data is still too high. In all cases, you already need an idea of the answer to be able to evaluate ChatGPT’s output.

The AI is only as good as the data sets it is trained on. Those include inherent bias, as does the world it came from.

It could change how work is done. Imagine writing code, then using the AI to debug it. Or the converse, with the AI writing it and you debugging to ensure no errors.

Have any thoughts on this? Carry the discussion over to our Twitter or Facebook.

DoNotPay’s new GPT-powered chatbot negotiates bills for you
https://knowtechie.com/donotpay-has-a-new-gpt-powered-chatbot-for-bill-negotiation/
Wed, 14 Dec 2022 14:32:41 +0000

Never waste time arguing with customer service again.

DoNotPay, the robotic lawyer, has launched a new AI-powered chatbot to talk to customer service representatives.

CEO Joshua Browder posted a demo of the new GPT-powered chatbot on Monday. The chatbot negotiates a discount on Xfinity’s internet services in the demo.

It does this with the same AI that powers ChatGPT, the chatbot taking the internet by storm.

All the user needs to do is give some information about their account, and DoNotPay’s chatbot does the rest.

Until now, DoNotPay’s services have all been form-based. These forms work because companies respond favorably to a specific way of writing.

This new chatbot is not scripted. Instead, it uses AI to talk to customer service agents.

Eventually, the company wants to make the tool more independent, so the user doesn’t have to sit there and monitor the chat.

DoNotPay is adding features all the time

donotpay document creator
Image: DoNotPay

That’s impressive and will be another powerful tool in DoNotPay’s legal bag of tricks.

The service already offers many tools to save you money and time.

There’s the free subscription sign-up that cancels things before you get the first bill, and a robo-dialer that will wait on hold until a customer service representative is free.

Additionally, there is a robot that sues scammers and one that scans your inbox to get refunds, cancel trials, and more.

Another feature lets you tweak your photos so that AI facial recognition won’t recognize you.

You can also use DoNotPay to sign up for sweepstakes, create legal documents, and find clinical trials.

The new GPT-powered chatbot will be open for testing within the next two weeks. The CEO says it will work with any company based in the US.

AI can now create a better profile picture for your Tinder account
https://knowtechie.com/ai-can-now-create-a-better-profile-picture-for-your-tinder-account/
Wed, 16 Nov 2022 20:24:51 +0000

No photographer needed.

You can now use AI to generate new profile pictures because it’s better at selfies than you are.

A new service called PhotoAI costs $19, takes between 10 and 20 of your badly-taken selfies, and creates a pack of AI-created images for you.

You can choose pop art, polaroid-style, royals, LinkedIn, or Tinder, all of which give you thirty tailored images to use.

In the future, the service will use the same AI to put your face into movie scenes and popular memes, and even alongside your favorite celebrities.

You’ll get your images roughly 12 hours after your order. The site says it deletes the selfies you upload after seven days.

nine meme images created by AI
Image: KnowTechie

We’ve not tested it ourselves, but Motherboard has, with images of their executive editor. They did the “Tinder pack,” which says it generates 30 images, although they received 78.

The AI-generated images are passable on the first view, and perhaps they’d work for profile pictures on social media.

Looking closer, you can see fingers that aren’t attached to anything, over-realistic mouths, and facial expressions that look like the person washes with botox.

ai royal pack for profile picture
Image: KnowTechie

The thing is, all of this could be done by the end-user for free. The PhotoAI site uses Stable Diffusion, a free-to-use AI model. It also uses a couple of other tools to train the AI model based on the images the user uploads.

You could set all of this up on your computer. Depending on your views on value, paying PhotoAI $19 could be cheaper than the headache of trying to set the AI models up and training them to create better selfies.

New Rewind Mac app is a search engine for your digital life
https://knowtechie.com/new-mac-app-will-remember-everything-so-you-dont-have-to/
Wed, 02 Nov 2022 16:40:02 +0000

Rewind AI is Spotlight search on steroids.

A new Mac app called Rewind wants to make searching through your digital life easier.

Think of it as a helper for your memory or your organizational skills. When installed, it indexes everything you work on, read online, type in chat apps, or say in video meetings.

Then it creates a searchable timeline, so you can always find those pieces of information, even if you can’t remember where you saw them.

Rewind was created by Dan Siroker, of Optimizely fame, and Brett Bejcek. Here’s what you need to know.

This Mac app is the ‘search engine for your life’

So you know, only some Macs are going to be compatible. Rewind’s AI uses every single part of Apple silicon to create the searchable index.

That means you need a Mac with either an M1 or M2 chip; Intel Macs won’t be powerful enough.

All you have to do after installing is use your Mac as you normally would. Rewind uses APIs to determine which app is in focus on your Mac and then records the activity from that app.

That could be your daily video conference with your workmates, your research on your browser, or creating files in cloud services. Rewind takes note of everything and creates a timeline you can search.

rewind mac app searching "tps reports"
Screenshot: Rewind

Those recordings never make it off your Mac, with all the processing done locally using the power of Apple Silicon.

Users can pause or delete recordings anytime or exclude sensitive apps like password managers from being recorded.

Rewind is currently in early access. You can sign up on their website to get access once the company increases availability to more users.

What Joe Rogan’s AI podcast with Steve Jobs means for the world
https://knowtechie.com/what-joe-rogans-ai-podcast-with-steve-jobs-means-for-the-world/
Wed, 12 Oct 2022 19:38:17 +0000

It feels real. Except, it isn’t.

Canadian writer Margaret Atwood is best known as the author of The Handmaid’s Tale — a dystopic, nightmarish vision of the future that later enjoyed success as a Hulu series. But there’s another item of note on her resume. 

Atwood is the inventor of The LongPen — a computer-controlled pen that allows authors to remotely sign books from anywhere in the world.

Conceptually, it’s a fascinating idea. It bestows an element of uniqueness on a mass-produced book, but also doesn’t. An autograph produced by the LongPen is just as inauthentic as one created by a random eBay scammer. 

I don’t know why, but I thought of The LongPen earlier this week when I learned someone created an authentic-seeming interview between Joe Rogan and the late Steve Jobs using the power of AI. You can listen to it below. 

It feels unerringly real. Except, it isn’t. 

Fake Steve Jobs

steve jobs
Image: Work of the World

Joe Rogan has never interviewed Steve Jobs.

Although the Joe Rogan Experience debuted two years before Jobs’ untimely death from pancreatic cancer, it only achieved widespread success (and cultural cachet) in the past seven years or so. 

Despite that, it’s easy to forget that the interview is completely fake. You’ll know what I mean if you’ve listened to the episode. 

Podcast.ai — the outfit behind the show — didn’t merely replicate Jobs’ and Rogan’s voices. They matched their intonation and vocal cadence step-for-step. 

The fake Jobs speaks with the same energetic enthusiasm demonstrated over countless WWDC keynotes. It replicates Rogan’s trademark interviewing style, defined by simple, open-ended questions delivered in a terse clip. 

It’s eerie. And it’s a sign of things to come. 

Bubble after bubble

blockchain sphere in a blue color
Image: Unsplash

I’ve worked as a tech journalist for almost a decade. I’ve seen the rise of hype bubbles, and the inevitable deflation that follows. 

AI was one such hype bubble. I remember working at The Next Web around the decade’s halfway point. Every day, I received hundreds of email pitches from fledgling apps and consumer tech startups. Most, in some form, mentioned AI. 

It wasn’t merely a buzzword. That’s too simplistic. AI was an essential attribute. Something your product simply must include to compete, like airbags and seatbelts in a new car.

Research from the management firm McKinsey illustrated this trend. The number of tech press articles referencing AI doubled from 2015 to 2016. It was a fever pitch, driven by founders desperate to capitalize on the latest tech zeitgeist.

But here’s the kicker: most products didn’t actually use AI, or used it in a relatively trivial way. As disillusionment grew, people moved on to the next trend. Crypto. NFTs. And now, the metaverse? 

The times they are a-changing

hologram tupac performing
Holographic Tupac performs at Coachella (Screenshot: YouTube)

Now? We can’t ignore AI. 

First, let’s lay the groundwork. AI-centric businesses have access to a previously-unthinkable amount of computational power. Systems capable of quickly running complex AI models aren’t simply cheap. They’re also commoditized.

Over the past few years, companies like Amazon Web Services and Microsoft AI have launched a series of AI-specific platforms, allowing smaller companies to run complex models and tasks with almost no upfront cost. 

And because these products are inherently cloud-centric, it’s possible to scale to meet demand. 

Or, to put it another way: Upstart tech companies and individual artists can access the computational and technical resources to create unique AI-generated content. 

I haven’t even mentioned other major trends, like the commoditization of AI models specifically geared towards machine-driven content creation, like GPT-3. Those are of equal importance. They’re also a relatively new development. 

We don’t realize it yet, but AI will play a massive role in how entertainment is created. This isn’t some futuristic pipe dream. We aren’t in the realm of speculative fiction. This isn’t an Isaac Asimov novel. This is reality. 

Remember how everyone lost their minds in 2012, when the long-deceased Tupac Shakur performed at Coachella in hologram form?

Next year, that will feel quaint. Mark my words.

Big questions lie ahead

the longpen robot
Margaret Atwood’s LongPen (Image: NBC News)

I think you get where I’m heading. AI-created content across all verticals will soon be the norm. It’ll be a seismic change, but one society will swiftly adapt to, much as with other technological leaps. 

But this won’t be without questions. Ethical. Practical. Even legal. 

AI can resurrect the dead. But can this happen in a respectful way? One that honors not merely the memory of the person but also their surviving family? 

Copyright law is pretty clear about the notion of ownership. If you create a piece of art, it’s legally yours. But these advances seek to appropriate not merely the things we create, but who we are. 

Our intonation. The way our voices rise and fall throughout a sentence. The ineffable ways we construct sentences and articulate complex ideas. Even something as minor as our posture. Who owns that? 

And as society wrestles with misinformation and “alternative facts,” is it sensible to blur the lines between reality and fantasy even further? 

If someone can create an authentic-seeming interview between two people who likely never met, how can we trust anything? The news? The viral videos on our social media timelines? 

Atwood’s LongPen was an invention of artifice, but it was also pretty harmless. Nobody was under the illusion that it was anything other than a facsimile of a real signature. 

The rise of AI-generated entertainment, however, is an entirely different matter. It promises to upend our understanding of the world, ownership, and even the meaning of death. And that’s a genuinely scary proposition. 

Just a heads up, if you buy something through our links, we may get a small share of the sale. It’s one of the ways we keep the lights on here. Click here for more.

Researchers reveal how they detect deepfake audio – here’s how
https://knowtechie.com/researchers-reveal-how-they-detect-deepfake-audio-heres-how/
Sun, 25 Sep 2022 14:11:00 +0000

With deepfake audio, that familiar voice on the other end of the line might not even be human, let alone the person you think it is.

Imagine the following scenario. A phone rings. An office worker answers it and hears his boss, in a panic, tell him that she forgot to transfer money to the new contractor before she left for the day and needs him to do it.

She gives him the wire transfer information, and with the money transferred, the crisis has been averted.

The worker sits back in his chair, takes a deep breath, and watches as his boss walks in the door. The voice on the other end of the call was not his boss.

In fact, it wasn’t even a human. The voice he heard was that of an audio deepfake, a machine-generated audio sample designed to sound exactly like his boss.

Attacks like this using recorded audio have already occurred, and conversational audio deepfakes might not be far off.

Deepfakes, both audio and video, have been possible only with the development of sophisticated machine learning technologies in recent years. Deepfakes have brought with them a new level of uncertainty around digital media.

To detect deepfakes, many researchers have turned to analyzing visual artifacts – minute glitches and inconsistencies – found in video deepfakes.

Audio deepfakes potentially pose an even greater threat, because people often communicate verbally without video.

For example, via phone calls, radio and voice recordings. These voice-only communications greatly expand the possibilities for attackers to use deepfakes.

To detect audio deepfakes, we and our research colleagues at the University of Florida have developed a technique that measures the acoustic and fluid dynamic differences between voice samples created organically by human speakers and those generated synthetically by computers.

Organic vs. synthetic voices

Humans vocalize by forcing air over the various structures of the vocal tract, including vocal folds, tongue and lips.

By rearranging these structures, you alter the acoustical properties of your vocal tract, allowing you to create over 200 distinct sounds, or phonemes.

However, human anatomy fundamentally limits the acoustic behavior of these different phonemes, resulting in a relatively small range of correct sounds for each.

In contrast, audio deepfakes are created by first allowing a computer to listen to audio recordings of a targeted victim speaker.

Depending on the exact techniques used, the computer might need to listen to as little as 10 to 20 seconds of audio. This audio is used to extract key information about the unique aspects of the victim’s voice.

The attacker selects a phrase for the deepfake to speak and then, using a modified text-to-speech algorithm, generates an audio sample that sounds like the victim saying the selected phrase.

This process of creating a single deepfaked audio sample can be accomplished in a matter of seconds, potentially allowing attackers enough flexibility to use the deepfake voice in a conversation.

Detecting audio deepfakes

The first step in differentiating speech produced by humans from speech generated by deepfakes is understanding how to acoustically model the vocal tract.

Luckily, scientists have techniques to estimate what someone – or some being such as a dinosaur – would sound like based on anatomical measurements of its vocal tract.

We did the reverse. By inverting many of these same techniques, we were able to extract an approximation of a speaker’s vocal tract during a segment of speech.

This allowed us to effectively peer into the anatomy of the speaker who created the audio sample.

diagram displaying how researchers can spot deepfake audio on purple background
Image: KnowTechie

From here, we hypothesized that deepfake audio samples would fail to be constrained by the same anatomical limitations humans have.

In other words, we expected analysis of deepfaked audio samples to produce simulated vocal tract shapes that do not exist in people.

Our testing results not only confirmed our hypothesis but revealed something interesting. When extracting vocal tract estimations from deepfake audio, we found that the estimations were often comically incorrect.

For instance, it was common for deepfake audio to result in vocal tracts with the same relative diameter and consistency as a drinking straw, in contrast to human vocal tracts, which are much wider and more variable in shape.

This realization demonstrates that deepfake audio, even when convincing to human listeners, is far from indistinguishable from human-generated speech.

By estimating the anatomy responsible for creating the observed speech, it’s possible to identify whether the audio was generated by a person or a computer.
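The article doesn’t spell out the researchers’ exact math, but a classic textbook route from a speech signal to a vocal tract shape is linear predictive coding: estimate reflection coefficients from the signal’s autocorrelation via the Levinson-Durbin recursion, then map them onto the relative cross-sectional areas of a lossless tube model. The sketch below is that standard technique, offered as an illustration of the idea rather than the paper’s actual pipeline:

```python
import random

def autocorrelation(signal, max_lag):
    """Biased autocorrelation estimates r[0..max_lag]."""
    n = len(signal)
    return [sum(signal[i] * signal[i + lag] for i in range(n - lag))
            for lag in range(max_lag + 1)]

def reflection_coefficients(r, order):
    """Levinson-Durbin recursion: reflection coefficients from autocorrelation."""
    a = [1.0] + [0.0] * order      # prediction filter, grown one tap per step
    error = r[0]                   # residual energy
    ks = []
    for i in range(1, order + 1):
        acc = sum(a[j] * r[i - j] for j in range(i))
        k = -acc / error
        ks.append(k)
        a = [a[j] + k * a[i - j] if 1 <= j <= i else a[j]
             for j in range(order + 1)]
        error *= (1.0 - k * k)
    return ks

def tube_areas(ks, lip_area=1.0):
    """Map reflection coefficients to relative areas of a lossless tube model."""
    areas = [lip_area]
    for k in ks:
        areas.append(areas[-1] * (1.0 - k) / (1.0 + k))
    return areas

# Demo on a synthetic "voiced" signal: white noise shaped by a one-pole filter.
rng = random.Random(0)
x, sig = 0.0, []
for _ in range(2000):
    x = 0.6 * x + rng.gauss(0, 1)
    sig.append(x)
ks = reflection_coefficients(autocorrelation(sig, 8), 8)
print([round(area, 3) for area in tube_areas(ks)])
```

A detector in this spirit would compare estimated area profiles against the range seen in human speech; the “drinking straw” shapes the researchers describe would show up as implausibly narrow, uniform tubes.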

Why this matters

Today’s world is defined by the digital exchange of media and information. Everything from news to entertainment to conversations with loved ones typically happens via digital exchanges.

Even in their infancy, deepfake video and audio undermine the confidence people have in these exchanges, effectively limiting their usefulness.

If the digital world is to remain a critical resource for information in people’s lives, effective and secure techniques for determining the source of an audio sample are crucial.

Editor’s Note: This article was written by Logan Blue, PhD student in Computer & Information Science & Engineering, University of Florida, and Patrick Traynor, Professor of Computer and Information Science and Engineering, University of Florida, and republished from The Conversation under a Creative Commons license. Read the original article.

Review: Autoblow A.I. – This was the hardest review I’ve ever written
https://knowtechie.com/review-autoblow-a-i-this-was-the-hardest-review-ive-ever-written/
Sun, 11 Sep 2022 23:14:00 +0000

I came, I saw, I reviewed.

The Good: It did what it set out to do; multiple modes; easy to use
The Bad: A bit loud and clunky; has to be plugged into an outlet
Overall: 8.5

What started as a conversation on Twitter ended with me being sent an advanced sex toy. Isn’t the internet an amazing thing?

Before this, the closest I’d ever been to any type of self-pleasure device was an unusually crusty makeshift Fleshlight.

Composed of previously frozen hot dog buns preserved in maple syrup in two Ziploc bags rubberbanded together, lying outside of Kevin’s house.

Why was it outside, Kevin? Why did you brandish it like a sword?

The Autoblow A.I. is for those with a penis, and like Curtis, I would prefer more devices of this nature to be inclusive.

But I do understand the angle the company was going for with this.

Unlike those primitive Fleshlights and tube socks of yesteryear, the Autoblow A.I. uses technology™ to simulate fellatio and, somewhat surprisingly, does a pretty decent job at it.

The Good

autoblow ai machine
Image: Josiah Motley / KnowTechie

The Autoblow A.I. features ten different modes, all of which focus on different parts of your member.

Some combine multiple, others alternate, and you can adjust the speed to get the experience just right.

There’s also a pause button – referred to as an Edge button – if the experience gets too intense and you aren’t quite ready for that glorious conclusion to your Tuesday afternoon.

What I’m trying to say is that it will diddle your dongle and do so in a manner that pleases you.

The makers of the machine also note that they used artificial intelligence to really nail the feeling.

This was accomplished through analyzing “100s of hours of blowjob videos,” and while I’m hard-pressed to call that true A.I. and not machine learning, Autoblow A.I. sounds a lot better than Autoblow M.L.

The silicone the machine uses is soft to the touch and made of quality material.

I’m sure there’s a Silence of the Lambs joke here about lotion and skin, but I’ll leave that to you to figure out.

Cleanup is also easy: simply remove the sleeve and run it under some warm water with dish soap.

It is also recommended you use a renewal powder to keep the quality of the silicone intact.

The Bad

There are a (literal) handful of issues that really keep the Autoblow A.I. from being great.

The main one being the size of this monster. It’s not small and can make the experience an awkward one, depending on how you like to get down when you get off.

Combine that with the need to have the machine plugged in while using it, and an intimate moment becomes a weird dance of machine and flesh that is better left to Love, Death & Robots on Netflix.

A rechargeable battery pack of some sort would really help on that end, but I imagine the weight it would add would be a deterrent to many.

Finally, it’s a bit loud.

Autoblow notes that it is at least two times quieter than the previous version, so at least the company is making strides to quiet the machine.

The Sticky

All in all, if you are looking to add some spice to your personal time or time with a partner, the Autoblow A.I. is a formidable device.

At approximately $300 (currently on sale for $219), you’ll definitely need to decide how much your climax is worth to you.

Still, if you are worried about this just being an overpriced gimmick, you can confidently put those worries to rest.


DALL-E 2, the AI that creates images for you, expands beta tests
https://knowtechie.com/openai-dall-e-2-expands-beta-access/
Thu, 21 Jul 2022 15:15:30 +0000

And users get to keep commercial rights for any images the program creates.

DALL-E 2, the AI program from research lab OpenAI that generates images from typed prompts, is looking to expand its user base. The platform has entered a beta testing phase, inviting up to 1 million waitlisted users to try out the program.

The company announced its new expansion earlier this week. Previously, the platform was limited to just 100,000 customers. But now, the program is ready to expand, opening itself up to 1 million potential users.

By now, you’ve probably seen some of DALL-E 2’s creations. The program can create impressive artwork by interpreting typed prompts from its users. Just check out this image of Donald Duck performing at a rap concert.

DALL-E 2 initially took the internet by storm back in April. At that time, OpenAI limited the program to just 100,000 users. The program is still limited to invite-only, but OpenAI is accepting tons more members for its beta testing period.

READ MORE: Given the right description, this AI can create wild works of art

You can sign up for the waitlist on the OpenAI website. If accepted for the beta, you’ll get 15 free credits that you can use to create your own images.

You will use one credit when you enter a prompt, but you’ll get four different image options for each prompt you enter.

After your 15 credits run out in a month, you’ll have to pay $15 for more. That $15 buys 115 credits, which could mean up to 460 images.
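The pricing math is worth a quick sanity check: one credit per prompt, four images back per prompt.

```python
IMAGES_PER_PROMPT = 4  # each prompt spends one credit and returns four image options

def images_for_credits(credits):
    """Maximum images you can generate from a given credit balance."""
    return credits * IMAGES_PER_PROMPT

print(images_for_credits(15))   # the free beta allocation: 60 images
print(images_for_credits(115))  # the $15 top-up: 460 images
```

At 460 images for $15, that works out to a little over three cents per generated image.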

READ MORE: PicWish is one of the best image background removers out there

Oh yeah, and you’ll get full commercial rights to any of the images that DALL-E 2 creates from your prompts. That certainly sounds like a pretty good deal.

Google researcher goes on leave after claiming AI is sentient
https://knowtechie.com/google-researcher-goes-on-leave-after-claiming-ai-is-sentient/
Sun, 17 Jul 2022 12:57:00 +0000

It will likely be a long time before we get truly sentient AI, if that ever happens at all.

Can a machine think and feel like a human? Questions and fears over the subject predate artificial intelligence (AI) itself, and recent events at Google have stirred them up again.

Blake Lemoine, an engineer with Google’s Responsible AI department, claims its LaMDA chatbot is sentient after talking with it for several months.

After Lemoine went public with his claim, Google placed him on paid administrative leave for breaking their confidentiality agreement. The story has quickly gained traction, re-sparking the debate over AI sentience.

What is LaMDA?

google lamda

LaMDA, short for language model for dialogue applications, is an AI model aimed at developing better chatbots.

Unlike most similar models, LaMDA trained on actual conversations, helping it gain a more conversational tone. As a result, reading LaMDA’s responses feels a lot like chatting with an actual person.

READ MORE: DALL-E 2, the AI that creates images for you, expands beta tests

Lemoine started chatting with the bot in the Fall of 2021. His job was to see if it had picked up on any discriminatory or hate speech, something chatbots have struggled with in the past.

Instead, he came away with the impression that LaMDA could express thoughts and emotions on the same level as a human child.

Is LaMDA sentient?

google lamda
Image: KnowTechie

So, is LaMDA actually sentient? If you read its conversation with Lemoine, it certainly sounds like it. However, most experts who’ve weighed in on the matter say the bot isn’t actually thinking.

Google says they’ve reviewed Lemoine’s claims with both ethicists and technologists and found the evidence doesn’t support them.

Spokesperson Brian Gabriel points out how systems like LaMDA imitate exchanges you’d find in millions of sentences, so they can sound convincing, but sounding like a person and being a person are different things.

Others have pointed out that humans naturally tend to project their own characteristics onto other things. Think of how you can see faces in inanimate objects.

This tendency makes it easy to fall into the trap of thinking a chatbot is a real person when in reality, it’s just good at parroting one.

The consequences of “sentient” AI

LaMDA may not be sentient, but the case raises some questions about AI’s impact on humans.

Even if these machines can’t actually think and feel for themselves, does it matter if they’re convincing enough to fool people?

You can fall into some ethical and legal grey areas here. For example, under current law, AI cannot own the copyright for anything it produces, but if people increasingly treat AI as human, that could shift.

The danger is that a computer program that merely seems human could end up with the rights to something a human artist created using AI as a tool. Focusing on AI rights risks trampling human rights.

Author David Brin emphasizes how AI could lead to scams as it becomes more convincing. If people can’t tell the difference between chatbots and real users, criminals could use these bots to manipulate them.

Companies may also claim their bots are sentient to gain legal protection, creating a digital scapegoat for any issues that arise.

AI sentience is a tricky topic

It will likely be a long time before we get truly sentient AI, if that ever happens at all.

However, the LaMDA case highlights how bots don’t necessarily need to be emotionally intelligent to have serious consequences. 

AI sentience is difficult to nail down, and it may be a distraction from more important issues. Chatbots may not be people, but they deserve careful thought and ethical questioning before they reach that point.

FIFA is going to use AI for offside calls at the 2022 World Cup https://knowtechie.com/fifa-is-going-to-use-ai-for-offside-calls-at-the-2022-world-cup/ Wed, 06 Jul 2022 14:35:44 +0000 https://knowtechie.com/?p=214051 It will be used to help referees make decisions.

The post FIFA is going to use AI for offside calls at the 2022 World Cup appeared first on KnowTechie.

The 2022 World Cup will use AI-powered cameras to help officiate the offside rule. That’s the latest news from FIFA, football’s governing body, about this year’s tournament in Qatar.

The system won’t be making refereeing decisions. Instead, the semi-automated system generates alerts, which go to a control room. The officials in that room can then confer and tell the referees on the field if they need to make a call.

The system uses a sensor inside the ball that sends its position on the field 500 times a second. Twelve cameras under the stadium’s roof track the players’ positions.

The cameras also track up to 29 points on each player’s body, so the system can accurately determine whether a player is offside. It can pinpoint the position of every player on the pitch, including each body part that counts for offside calls.
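The geometry of the decision itself can be sketched like this. This is a simplified illustration of the offside rule’s logic with made-up coordinates and helper names, not FIFA’s actual implementation:

```python
# Toy sketch of the offside geometry. Coordinates: larger x = closer to the
# defended goal line. Each player is a list of x-positions for the tracked
# body points that count for offside (head, torso, legs; not hands or arms).

def is_offside(attacker_points, defender_players, ball_x):
    """Return True if the attacker's forward-most relevant body point is
    beyond both the ball and the second-last defender at the moment of
    the pass -- the core of the offside law."""
    attacker_front = max(attacker_points)
    # Each defender's nearest-to-goal relevant point; index 1 after sorting
    # is the second-last opponent (index 0 is usually the goalkeeper).
    defender_lines = sorted((max(p) for p in defender_players), reverse=True)
    second_last = defender_lines[1]
    return attacker_front > second_last and attacker_front > ball_x

# The attacker's shoulder (30.2) is past the second-last defender (29.5)
# and the ball (25.0), so the system would raise an alert:
print(is_offside([30.2, 29.8], [[45.0], [29.5, 28.9], [20.0]], ball_x=25.0))  # True
```

The real system has to do this in three dimensions, synchronized to the exact frame the ball is kicked, which is why the high-frequency ball sensor matters.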

It’s a complicated system, but then the offside rule is a complicated one. Video assistant technology made its World Cup debut in Russia in 2018, and FIFA decided to push it even further.

It’s not just decision-making that the system aids. When a call is made, the system also creates a 3D animation of the call, which can be played back by broadcasters or on the big screens in the stadium. That level of transparency helps the fans understand what’s going on.

The most important thing to remember is that it’s still human officials making the final call. The AI-powered offside system is a tool to aid them, not one to make the decisions. At least, not at this stage.

Amazon wants to make Alexa sound like your dead relatives https://knowtechie.com/amazon-alexa-dead-relatives-voice/ Thu, 23 Jun 2022 14:22:19 +0000 https://knowtechie.com/?p=210278 The idea that someone could grab a one-minute snippet of my voice and permanently turn me into a digital assistant is horrifying.

The post Amazon wants to make Alexa sound like your dead relatives appeared first on KnowTechie.

Amazon is developing technology that would allow Alexa to mimic the voice of anyone it hears, based on a one-minute recording.

The company announced the feature on Wednesday at its Re:MARS conference, which is currently taking place in Las Vegas. The Re:MARS event is a showcase for Amazon’s AI tech. MARS itself is an acronym and stands for Machine Learning, Automation, Robotics, and Space.

Rohit Prasad, Amazon’s head scientist for Alexa, said the feature could be used to replicate the voices of deceased relatives. In one demonstration, the reconstituted voice of an older woman is heard reading her presumed grandson a bedtime story. Watch it below.

Computers have long enjoyed the ability to mimic human voices. In fact, the technology is well established and increasingly commoditized.

In addition to commercial tools like Resemble AI and Lyrebird, you can find several free, open-source packages offering similar voice-cloning functionality.

This Alexa update merely builds on this. It lowers the barrier to entry dramatically, making it possible for anyone to create faithful renditions of their loved ones’ voices. But it’s not without its ethical questions.

The Thorny Ethical Questions

First, there’s the thorny issue of consent. I’m not discounting the possibility that people will gain a sense of comfort from being able to hear their loved ones’ voices. But would you want to be turned into a voice assistant after you die?

It feels almost like a discarded plot line from Black Mirror. You die and suddenly you’re encased in a small plastic sphere, dutifully performing any task barked at you.

Dead people can’t consent. Users have no way of knowing whether this feature goes against the wishes of their relatives. Additionally, how will Amazon determine whether a voice belongs to a dead person or a living person?

Again, the idea that someone could grab a one-minute snippet of my voice and permanently turn me into a subservient digital assistant is horrifying.

Then there are the other, more serious — and less philosophical — concerns.

The Problem of Vishing

https://youtu.be/Wgc8EEKtpK4

As mentioned, the ability to make a computer sound like a person is nothing new. This Alexa update would simply lower the barrier to entry. With this in mind, it’s not hard to see how a malicious third party could weaponize this.

I’m talking about “vishing,” of course. Vishing stands for “voice phishing.” It’s a relatively new take on something most of us have witnessed, if not directly fallen victim to.

The premise is simple: someone mimics another person’s voice and uses it to get a victim to do something, like transfer a sum of money to an offshore bank account or hand over their login credentials.

This approach is invariably devastating. For example, in 2019, scammers tricked a UK energy firm into transferring €220,000 (around $240,000) to a foreign bank account under their control after successfully impersonating a company executive’s voice.

Speaking to the Washington Post, the company’s insurer described the terrifying accuracy of the deepfake used. “The software was able to imitate the voice, and not only the voice: the tonality, the punctuation, the German accent,” they said.

It’s reasonable to worry that, with this feature, Amazon is opening Pandora’s box with serious ethical and security ramifications. Without any guardrails, the consequences could prove dire.

Given the right description, this AI can create wild works of art https://knowtechie.com/given-the-right-description-this-ai-can-create-wild-works-of-art/ Sun, 12 Jun 2022 13:46:00 +0000 https://knowtechie.com/?p=206733 What does it mean to make art when an algorithm automates so much of the creative process itself?

The post Given the right description, this AI can create wild works of art appeared first on KnowTechie.

A picture may be worth a thousand words, but thanks to an artificial intelligence program called DALL-E 2, you can have a professional-looking image with far fewer.

DALL-E 2 is a new neural network algorithm that creates a picture from a short phrase or sentence that you provide. The program, which was announced by the artificial intelligence research laboratory OpenAI in April 2022, hasn’t been released to the public.

But a small and growing number of people – myself included – have been given access to experiment with it.

As a researcher studying the nexus of technology and art, I was keen to see how well the program worked. After hours of experimentation, it’s clear that DALL-E – while not without shortcomings – is leaps and bounds ahead of existing image generation technology.

READ MORE: Here’s why everyone hates those annoying cookie notifications

It raises immediate questions about how these technologies will change how art is made and consumed. It also raises questions about what it means to be creative when DALL-E 2 seems to automate so much of the creative process itself.

A staggering range of style and subjects

OpenAI researchers built DALL-E 2 from an enormous collection of images with captions. They gathered some of the images online and licensed others.

Using DALL-E 2 looks a lot like searching for an image on the web: you type a short phrase into a text box, and it gives back six images.

READ MORE: Tesla Autopilot crashes have big implications for self-driving cars

But instead of being culled from the web, the program creates six brand-new images, each of which reflect some version of the entered phrase. (Until recently, the program produced 10 images per prompt.)

For example, when some friends and I gave DALL-E 2 the text prompt “cats in devo hats,” it produced 10 images that came in different styles.

Nearly all of them could plausibly pass for professional photographs or drawings.

While the algorithm did not quite grasp “Devo hat” – the strange helmets worn by the New Wave band Devo – the headgear in the images it produced came close.

Over the past few years, a small community of artists have been using neural network algorithms to produce art.

Many of these artworks have distinctive qualities that almost look like real images, but with odd distortions of space – a sort of cyberpunk Cubism.

The most recent text-to-image systems often produce dreamy, fantastical imagery that can be delightful but rarely looks real.

DALL-E 2 offers a significant leap in the quality and realism of the images. It can also mimic specific styles with remarkable accuracy.

If you want images that look like actual photographs, it’ll produce six life-like images. If you want prehistoric cave paintings of Shrek, it’ll generate six pictures of Shrek as if they’d been drawn by a prehistoric artist.

It’s staggering that an algorithm can do this. Each set of images takes less than a minute to generate. Not all of the images will look pleasing to the eye, nor do they necessarily reflect what you had in mind.

READ MORE: DALL-E 2, the AI that creates images for you, expands beta tests

But, even with the need to sift through many outputs or try different text prompts, there’s no other existing way to pump out so many great results so quickly – not even by hiring an artist.

And, sometimes, the unexpected results are the best.

In principle, anyone with enough resources and expertise can make a system like this. Google Research recently announced an impressive, similar text-to-image system, and one startup, Hugging Face, is publicly developing its own version that anyone can try on the web right now, although it’s not yet as good as DALL-E or Google’s system.

It’s easy to imagine these tools transforming the way people make images and communicate, whether via memes, greeting cards, advertising – and, yes, art.

Where’s the art in that?

I had a moment early on while using DALL-E to generate different kinds of paintings, in all different styles – like “Odilon Redon painting of Seattle” – when it hit me that this was better than any painting algorithm I’ve ever developed. Then I realized that it is, in a way, a better painter than I am.

In fact, no human can do what DALL-E does: create such a high-quality, varied range of images in mere seconds. If someone told you that a person made all these images, of course you’d say they were creative.

But this does not make DALL-E an artist. Even though it sometimes feels like magic, under the hood it is still a computer algorithm, rigidly following instructions from the algorithm’s authors at OpenAI.

If these images succeed as art, they are products of how the algorithm was designed, the images it was trained on, and – most importantly – how artists use it.

You might be inclined to say there’s little artistic merit in an image produced by a few keystrokes. But in my view, this line of thinking echoes the classic take that photography cannot be art because a machine did all the work.

Today the human authorship and craft involved in artistic photography are recognized, and critics understand that the best photography involves much more than just pushing a button.

Even so, we often discuss works of art as if they directly came from the artist’s intent. The artist intended to show a thing, or express an emotion, and so they made this image.

DALL-E does seem to shortcut this process entirely: you have an idea and type it in, and you’re done.

But when I paint the old-fashioned way, I’ve found that my paintings come from the exploratory process, not just from executing my initial goals. And this is true for many artists.

Take Paul McCartney, who came up with the track “Get Back” during a jam session. He didn’t start with a plan for the song; he just started fiddling and experimenting and the band developed it from there.

Picasso described his process similarly: “I don’t know in advance what I am going to put on canvas any more than I decide beforehand what colors I am going to use… Each time I undertake to paint a picture I have a sensation of leaping into space.”

In my own explorations with DALL-E, one idea would lead to another which led to another, and eventually I’d find myself in a completely unexpected, magical new terrain, very far from where I’d started.

Prompting as art

I would argue that the art, in using a system like DALL-E, comes not just from the final text prompt, but in the entire creative process that led to that prompt.

Different artists will follow different processes and end up with different results that reflect their own approaches, skills and obsessions.

I began to see my experiments as a set of series, each a consistent dive into a single theme, rather than a set of independent wacky images.

Ideas for these images and series came from all around, often linked by a set of stepping stones. At one point, while making images based on contemporary artists’ work, I wanted to generate an image of site-specific installation art in the style of the contemporary Japanese artist Yayoi Kusama.

After trying a few unsatisfactory locations, I hit on the idea of placing it in La Mezquita, a former mosque and church in Córdoba, Spain.

I sent the picture to an architect colleague, Manuel Ladron de Guevara, who is from Córdoba, and we began riffing on other architectural ideas together.

This became a series on imaginary new buildings in different architects’ styles.

So I’ve started to consider what I do with DALL-E to be both a form of exploration as well as a form of art, even if it’s often amateur art like the drawings I make on my iPad.

Indeed some artists, like Ryan Murdoch, have advocated for prompt-based image-making to be recognized as art. He points to the experienced AI artist Helena Sarin as an example.

“When I look at most stuff from Midjourney” – another popular text-to-image system – “a lot of it will be interesting or fun,” Murdoch told me in an interview.

“But with [Sarin’s] work, there’s a through line. It’s easy to see that she has put a lot of thought into it, and has worked at the craft, because the output is more visually appealing and interesting, and follows her style in a continuous way.”

Working with DALL-E, or any of the new text-to-image systems, means learning its quirks and developing strategies for avoiding common pitfalls.

It’s also important to know about its potential harms, such as its reliance on stereotypes, and potential uses for disinformation.

Using DALL-E 2, you’ll also discover surprising correlations, like the way everything becomes old-timey when you use an old painter, filmmaker or photographer’s style.

When I have something very specific I want to make, DALL-E often can’t do it. The results would require a lot of difficult manual editing afterward.

It’s when my goals are vague that the process is most delightful, offering up surprises that lead to new ideas that themselves lead to more ideas and so on.

Crafting new realities

These text-to-image systems can help users imagine new possibilities as well.

Artist-activist Danielle Baskin told me that she always works “to show alternative realities by ‘real’ example: either by setting scenarios up in the physical world or doing meticulous work in Photoshop.”

DALL-E, however, “is an amazing shortcut because it’s so good at realism. And that’s key to helping others bring possible futures to life – whether it’s satire, dreams or beauty.”

She has used it to imagine an alternative transportation system and plumbing that transports noodles instead of water, both of which reflect her artist-provocateur sensibility.

Similarly, artist Mario Klingemann’s architectural renderings with the tents of homeless people could be taken as a rejoinder to my architectural renderings of fancy dream homes.

It’s too early to judge the significance of this art form. I keep thinking of a phrase from the excellent book “Art in the After-Culture” – “The dominant AI aesthetic is novelty.”

Surely this would be true, to some extent, for any new technology used for art. The first films by the Lumière brothers in the 1890s were novelties, not cinematic masterpieces; it amazed people to see images moving at all.

AI art software develops so quickly that there’s continual technical and artistic novelty. It seems as if, each year, there’s an opportunity to explore an exciting new technology – each more powerful than the last, and each seemingly poised to transform art and society.

Editor’s Note: This article was written by Aaron Hertzmann, Affiliate Faculty of Computer Science, University of Washington, and republished from The Conversation under a Creative Commons license. Read the original article.

Shady AI company agrees to limit sales of facial recognition tech https://knowtechie.com/shady-ai-company-agrees-to-limit-sales-of-facial-recognition-tech/ Tue, 10 May 2022 14:17:07 +0000 https://knowtechie.com/?p=200389 The company can no longer sell its data to private companies or users in the US.

The post Shady AI company agrees to limit sales of facial recognition tech appeared first on KnowTechie.

Clearview AI, the controversial facial recognition company, was just crippled by a recent legal settlement. The settlement means that Clearview AI will end sales of its biometric data to private companies and individuals in the United States.

The American Civil Liberties Union (ACLU) claims that Clearview AI had violated BIPA, the Illinois Biometric Information Privacy Act. The law requires permission before a company can collect a person’s biometric data.

The ACLU has been fighting Clearview AI since May 2020, and the announcement earlier this week finally brings the case to a close, with Clearview agreeing to halt its sales to private companies, among other consequences.

“Clearview can no longer treat people’s unique biometric identifiers as an unrestricted source of profit. Other companies would be wise to take note, and other states should follow Illinois’ lead in enacting strong biometric privacy laws,” said Nathan Freed Wessler, deputy director of the ACLU Speech, Privacy and Technology Project.  

READ MORE: This search engine is basically Google but for facial recognition

In addition to stopping private company sales, Clearview AI must end its free trial for police officers. The company must also build a page where Illinois residents can opt out and block their biometric information from Clearview’s database.

Additionally, the company cannot sell information to Illinois law enforcement agencies for the next five years. BIPA contains an exception for government contractors, but Clearview won’t be able to take advantage of it during that period.

The last few months have already been expensive for Clearview AI. The company was fined €20 million in Italy earlier this year. And UK regulators issued a £17 million fine back in November.

While this settlement with the ACLU doesn’t directly hit the company’s wallet like those fines, it will certainly have an effect in the long run. And things will look even worse for the company if US lawmakers are successful in their endeavors to ban the company outright.

Chipotle’s chip-producing robot makes more per month than a human employee https://knowtechie.com/chipotles-chip-producing-robot-makes-more-per-month-than-a-human-employee/ Thu, 17 Mar 2022 13:16:00 +0000 https://knowtechie.com/?p=190874 Chippy will, you guessed it, make the chips.

The post Chipotle’s chip-producing robot makes more per month than a human employee appeared first on KnowTechie.

We’re facing a crisis in the United States at the moment. While unemployment is at record lows, inflation is at its highest point in decades.

Wages have not kept up with the rising cost of goods, rent is out of control, and we’re still fighting for a living wage. So it makes perfect sense that a major restaurant chain would seek to augment its workforce with a robot that makes chips.

Chipotle has partnered with Miso Robotics to create “Chippy.” It’s an AI-enabled robot that will learn the recipe for chips, and season them appropriately.

READ MORE: Google shares a first look at its snack-delivery robot

You may remember Miso Robotics’ Flippy robotic fryer being tested at Buffalo Wild Wings or a version of it being implemented at White Castle.

In both cases, the robot isn’t there to take jobs. Instead, they are there to give employees more time for customer service by taking over the repetitive tasks.

There is a monetary cost to putting robotics in the kitchen, however

chipotle worker with chippy robot
Image: Miso Robotics

The upkeep of a single Chippy will cost Chipotle roughly $3,000 a month, which is much more than most burrito-stuffing workers take home.

While the chips (and salsa) are probably the best thing about Chipotle, it seems counter-intuitive to pay a robot to produce them when a human costs half as much. Or, conversely, pay humans as much as the damn robot.

That’s the solution to the “now hiring” signs found in almost every business around the country, especially fast food: pay a higher wage, and you’ll attract workers. Instead, companies like Chipotle are investing in robotics.

Aside from Chippy, Chipotle has invested in an autonomous delivery vehicle company and uses AI for its website and app chatbot. It is also looking at other things in the kitchen, such as dishwashing, that could be automated.

Chipotle is focusing on automation, not worker compensation. That’s why Chippy exists.

It’s not all doom and gloom

To be fair, Chipotle is one of the better-paying companies in the fast food realm. The company originally just wanted to better predict when restaurants would run out of chips. That somehow led to a robot making the chips, as long as it could memorize the seasoning recipe.

There’s a general fear as we move forward into the future of robotics that “robots are gonna take our jerbs.”

That might hold true in some industries where robotics just offer a more efficient production value. In Chipotle restaurants, however, the hope is that Chippy will simply give employees time to do other things instead of sitting around seasoning chips.

This especially holds true during peak hours, when it’s hard to leave the customer line to make more chips.

“I think we remain in a really strong place as it relates to labor,” Chipotle CTO Curt Garner told CNBC. “We didn’t approach this from a lens of trying to solve for a labor problem. We approached it from a lens of what would make it easier, more fun, more rewarding, and how do we take away some of the tasks that team members don’t like and give them more time to focus on the tasks that they do?”

In perspective, robotics with the purpose of augmenting human labor rather than replacing it is a smart path for businesses like Chipotle.

We’ve all been in line, watching employees rush around. That’s because, aside from customer service, there is a litany of mundane tasks to be performed. Making chips is one of them. Washing dishes is another. If Chipotle wants to solve that by dropping coin on a chip-making robot, so be it.

In a cynical reality, it’s only a matter of time before robots are rolling our burritos after they get done with a batch of perfectly seasoned chips. In our actual reality, some kid has once again put too much stuff inside my burrito. There is no reality in which I eat a burrito with a fork.

London and other UK cities are using AI-powered cameras to monitor social distancing https://knowtechie.com/london-and-other-uk-cities-are-using-ai-powered-cameras-to-monitor-social-distancing/ Wed, 14 Oct 2020 16:58:16 +0000 https://knowtechie.com/?p=131087 Right now, there are over 1,000 Vivacity sensors collecting social distancing data.

The post London and other UK cities are using AI-powered cameras to monitor social distancing appeared first on KnowTechie.

In the battle against COVID-19, London and other UK cities have started using AI-powered cameras to measure social distancing.

However, this is not something new as Vivacity, the company that developed these systems, installed these additional sensors on their AI-powered tracking cameras in early March.

According to Vivacity, the camera systems are not there to record people or for surveillance purposes; they only track social distancing and gather raw data. The company says its tracking systems don’t identify individuals or collect any kind of personal data, and that no one watches the streams or stores the footage; only the aggregated data is kept.

The data is collected to help the UK government make informed policies related to the coronavirus pandemic. Right now, over 1,000 Vivacity sensors are collecting social distancing data in cities such as London, Nottingham, Cambridge, and Oxford.

According to Peter Mildon, Vivacity’s COO, the company’s algorithm can easily distinguish a pedestrian from a cyclist or a vehicle. He believes this data can help the UK government update its measures to stop the spread of the coronavirus.
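The “aggregates only” design Vivacity describes can be sketched roughly like this, assuming a per-frame object detector that emits class labels (a hypothetical illustration, not Vivacity’s actual pipeline):

```python
from collections import Counter

def aggregate_counts(frame_detections):
    """frame_detections: per-frame lists of class labels (e.g. 'pedestrian',
    'cyclist', 'car') from an object detector. Only per-class totals leave
    this function; the frames themselves, and anything that could identify
    an individual, are never retained."""
    totals = Counter()
    for labels in frame_detections:
        totals.update(labels)
    return dict(totals)

frames = [["pedestrian", "pedestrian", "cyclist"], ["pedestrian", "car"]]
print(aggregate_counts(frames))  # {'pedestrian': 3, 'cyclist': 1, 'car': 1}
```

The privacy claim rests entirely on this kind of design choice: the video exists only transiently in memory, and what gets reported is counts per class per time window.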

Mildon also said that the data they collected through these cameras reveal how pedestrians were using the roads. That can have a significant impact on any upcoming coronavirus strategy, too.

The Department for Transport in the UK, together with several other government agencies, regularly receives data from Vivacity. The reports they receive don’t contain any personal data or anything that can be considered a violation of people’s rights.

However, Mildon didn’t rule out that the cameras could one day be used for enforcement purposes if the government asked. For now, though, the company is solely focused on data collection.

Some historians are unhappy that old video footage is being upscaled to 4K https://knowtechie.com/some-historians-are-unhappy-that-old-video-footage-is-being-upscaled-to-4k/ Tue, 06 Oct 2020 20:03:13 +0000 https://knowtechie.com/?p=130539 Historians believe the footage is not always an accurate portrayal of what it was like when the footage was first recorded.

The post Some historians are unhappy that old video footage is being upscaled to 4K appeared first on KnowTechie.

Last week, I wrote a piece showing the unique work some YouTubers are doing to bring new life to old video footage. Today, I’m going to give the other side of that argument. We’re going to talk about the historians who want them to stop. Either because I like to present both sides of a debate, or because I’ll have a kick-off about anything. Even if the argument is with myself. I’m going with the latter, to be honest.

Anyway, as we’ve already seen, the work that these channels do is nothing short of spectacular. Neural Networks and Deep Learning used AI to give us a full-color look at San Francisco in the 1940s. Gdansk-based Denis Shiryaev uses his channel as a showcase for his company Neural Love, and upscaled the world’s oldest video footage, filmed in 1888, from 12 to 60fps.

It’s always a vivid look into the past, and something we wouldn’t usually be able to see without a DeLorean and a flux capacitor.
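To see why this counts as best-guess enhancement rather than restoration, it helps to look at the simplest possible version of frame interpolation. Channels like Shiryaev’s use learned motion estimation; the naive baseline is just cross-fading neighbouring frames. Everything below (the function, the toy clip) is an illustrative sketch, not anyone’s actual pipeline:

```python
import numpy as np

def interpolate_frames(frames, factor):
    """Naively upsample a frame sequence by blending neighbours.

    Real upscaling pipelines estimate motion with neural networks;
    this linear cross-fade is only the simplest possible baseline.
    """
    out = []
    for a, b in zip(frames, frames[1:]):
        a = a.astype(np.float32)
        b = b.astype(np.float32)
        for i in range(factor):
            t = i / factor  # 0.0 is frame a, approaching frame b
            out.append(((1 - t) * a + t * b).astype(np.uint8))
    out.append(frames[-1])  # keep the final original frame
    return out

# 12fps -> 60fps means five output frames per original gap
clip = [np.full((2, 2), v, dtype=np.uint8) for v in (0, 50, 100)]
smooth = interpolate_frames(clip, factor=5)
print(len(smooth))  # 2 gaps * 5 frames + final frame = 11
```

The in-between frames here are pure invention, blended from what surrounds them, which is exactly the historians’ point about any "restored" pixel.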

Making the old feel new

Image: YouTube

A lot of the time, feedback is positive. Elizabeth Peck, one of Shiryaev’s colleagues at Neural Love, believes “it brings you more into that real-life feeling of, ‘I’m here watching someone do this’, whereas before you’re looking more at something more artistic or cinematic.”

It’s not technically restoration, though; it’s enhancement. The removal of scratches, noise, dust, and other imperfections is done via neural networks, as a best-guess estimation based on what’s present in the film, which, in the eyes of some historians, creates a whole new set of problems.

Criticizing the colorization

Speaking to Wired, Emily Mark-FitzGerald, Associate Professor at University College Dublin’s School of Art History and Cultural Policy said that “the problem with colourisation is it leads people to just think about photographs as a kind of uncomplicated window onto the past, and that’s not what photographs are.” 

Her worry is that while neural networks and open source programs like DeOldify can make photographs look amazing, they may not give a true representation of what’s being presented. “I look at them and think, oh, wow, that’s quite an arresting image,” she says. But obviously, that’s just a first impression. Mark-FitzGerald goes on to explain, “my next impulse is to say, ‘Well, why am I having that response? And what is the person who’s made this intervention on the restoration actually doing? What information has this person added? What have they taken away?’”

Criticism of colorized footage isn’t a new thing either. Writing in 2018, Luke McKernan, lead curator of news and moving images at the British Library, pulled apart the 2003 television series World War I in Colour. He explained that “the authentic colour could not be digitally deduced from the monochrome,” before going on to say it was “very pretty sometimes, but quite untrue.”

Neural Love has never claimed to be perfect, though. Their site specifically says, “while not historically accurate, the colorization appears natural.” Their tools are simply a way to make old, stuttery footage and imagery feel contemporary.

The original videos still exist, of course, they haven’t been lost to the ages just yet. But it’s a little bit like Disney. While live-action remakes are a thing of beauty to see, some people think the best way to look at things is from the original perspective.

What do you think? Do you agree with the arguments outlined in this article? Let us know down below in the comments or carry the discussion over to our Twitter or Facebook.

Microsoft just released a much-needed tool that sniffs out deepfakes ahead of the election https://knowtechie.com/microsoft-just-released-a-much-needed-tool-that-sniffs-out-deepfakes-ahead-of-the-election/ Wed, 02 Sep 2020 15:41:42 +0000 https://knowtechie.com/?p=128280 This is definitely going to come in handy ahead of the US election.

The post Microsoft just released a much-needed tool that sniffs out deepfakes ahead of the election appeared first on KnowTechie.

Worried about the growing trend of deepfakes? These digitally altered images, videos, and sounds use artificial intelligence (AI) to change the original in ways that range from subtly adding words to completely replacing the face of the person talking. With the election coming up, Microsoft has just released some new tools to help political campaigns and media organizations spot altered media.

The detection tool is called Microsoft Video Authenticator (MVA), and it analyzes videos and assigns a confidence score that tells the user how likely it is that the footage was manipulated. The tool is powered by algorithms created by Microsoft’s Responsible AI team and the Microsoft AI, Ethics, and Effects in Engineering (AETHER) Committee.

So, how does it work? Everyone can tell if a popular video is heavily deepfaked, right? Well, this tool is designed to spot the subtle tweaks that you might overlook at first glance. Check out the image below for an example of how it works:

The other half of this equation is a tool powered by Microsoft’s Azure cloud, which lets content creators upload hashes of their videos, giving Microsoft a database of known-good values to check against. That pairs with a reader that can scan a video’s signature and compare it against the database, letting the user know whether a piece of media has been tampered with. It’s unclear at the moment whether Microsoft will make these tools public or offer them only to approved sources like media organizations.
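Microsoft hasn’t published the internals, but the general idea of a known-good hash database is easy to sketch. The snippet below uses a plain SHA-256 over raw bytes purely for illustration; a production system like Microsoft’s would need robust, perceptual fingerprints, since an exact cryptographic hash stops matching the moment a video is re-encoded:

```python
import hashlib

def fingerprint(data: bytes) -> str:
    """Hash a media file's raw bytes (SHA-256 here for illustration only)."""
    return hashlib.sha256(data).hexdigest()

# Publisher side: register hashes of original, trusted videos.
trusted = {fingerprint(b"original-campaign-video-bytes")}

# Reader side: re-hash whatever copy you received and compare.
def looks_authentic(data: bytes) -> bool:
    return fingerprint(data) in trusted

print(looks_authentic(b"original-campaign-video-bytes"))            # True
print(looks_authentic(b"original-campaign-video-bytes-but-edited")) # False
```

Even this toy version shows the division of labor: the cloud database only has to store short fingerprints, never the videos themselves.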

Oh, and if you think you’re up to the challenge of spotting deepfakes? Microsoft has a website where you can play a game called Spot the Deepfake. Go try it out, and see exactly how difficult the problem is for human moderators to deal with.

Have any thoughts on this? Let us know down below in the comments or carry the discussion over to our Twitter or Facebook.

We can’t even make self-driving cars work, but Airbus just landed a self-flying plane https://knowtechie.com/we-cant-even-make-self-driving-cars-work-but-airbus-just-landed-a-self-flying-plane/ Tue, 28 Jul 2020 15:36:52 +0000 https://knowtechie.com/?p=125358 Emergency Pilot on board.

The post We can’t even make self-driving cars work, but Airbus just landed a self-flying plane appeared first on KnowTechie.

The last thing you want to worry about when you fasten your seatbelt on board an airplane is who’s actually flying it, but maybe that’s just one more item for the nervous flyer’s checklist.

Airbus completed a successful flight test of its Autonomous Taxi, Take-off, and Landing project in June, in which every stage of the flight was handled by the computers while the pilots sat back and watched the show.

Airbus installed cameras on the new Airbus A350-1000 XWB, then trained the AI on over 500 flights, which gave it all the training it needed to fly the plane on its own. That’s comparable to the number of flight hours that human commercial pilots need to get their Airline Transport Pilot certification. Once trained, the AI system uses the A350’s external cameras to fly the plane as if the pilots sitting in the cockpit had their hands on the controls.

That’s some pretty impressive stuff, but it’s even more impressive considering the first test flight of the system was only in December, when the A350 completed a successful autonomous take-off before handing the reins back to the human co-pilots to land the plane. Back then, the pilots also had to line the plane up on the runway before the jet took over; now the system handles that itself, including any course correction due to crosswinds.

Now the jet can do the whole flight, from navigating the taxiways to takeoff, to flight, to landing, and the taxi back to the gate. Whewww, that’s impressive. The same tech can also autonomously refuel jets in the air, no small feat.

Don’t worry though, the tech won’t be replacing the pilots completely, it’ll be there to lessen the load on pilots with the aim of improving the overall safety of flying. Awesome.

Would you trust a self-flying plane? Have any thoughts on this? Let us know down below in the comments or carry the discussion over to our Twitter or Facebook.

These OpenCV AI camera modules put the power of computer vision in anyone’s hands for under $150 https://knowtechie.com/these-opencv-ai-camera-modules-put-the-power-of-computer-vision-in-anyones-hands-for-under-150/ Tue, 14 Jul 2020 17:33:10 +0000 https://knowtechie.com/?p=124452 These AI camera modules live on the edge

The post These OpenCV AI camera modules put the power of computer vision in anyone’s hands for under $150 appeared first on KnowTechie.

It’s been twenty years since OpenCV started with the aim of building an open-source, common infrastructure for computer vision. Perhaps that anniversary is the perfect time for the OpenCV Artificial Intelligence Kit (OAK) to release two 4K/30fps spatial AI camera modules that do their processing on-device instead of in the cloud, an approach also known as edge computing.

Each module has built-in chips for artificial intelligence processing, so no precious time is lost sending data off to a remote server. That can be the crucial difference when you’re trying to detect an object, read a license plate before the car speeds off, or do whatever else you have in mind for the camera.

The company says that they’re “absurdly easy to use,” with the ability to get up and running in under 30 seconds. OAK-1 uses a single USB-C port for both data and power, while the more powerful OAK-D requires a 5V cable, making confusing setups a thing of the past. The OAK units ship with multiple neural nets for things like mask/no-mask detection, emotion recognition, facial landmarks, pedestrian detection, and vehicle detection. The team will add to this list over time, or you can upload your own trained models to the devices.

OAK-1 can do lossless motion-based zoom when it detects moving objects, and OAK-D can do stereo depth, 3D object localization, and object tracking in 3D space. Nifty.
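The motion-based zoom boils down to finding where pixels changed between frames and cropping to that region. Here’s a deliberately tiny, NumPy-only sketch of the idea; the function name, threshold, and toy frames are all made up for illustration, and on the real OAK-1 this kind of work runs on the module’s own chip, not the host:

```python
import numpy as np

def motion_bbox(prev, curr, thresh=25):
    """Return the bounding box (x0, y0, x1, y1) of pixels that changed
    between two grayscale frames, or None if nothing moved enough.

    A toy stand-in for the motion detection behind a motion-based zoom:
    the returned box is the region you would crop and enlarge.
    """
    diff = np.abs(curr.astype(np.int16) - prev.astype(np.int16))
    ys, xs = np.nonzero(diff > thresh)
    if len(xs) == 0:
        return None
    return int(xs.min()), int(ys.min()), int(xs.max()) + 1, int(ys.max()) + 1

prev = np.zeros((8, 8), dtype=np.uint8)
curr = prev.copy()
curr[2:4, 5:7] = 200            # a small bright object appears
print(motion_bbox(prev, curr))  # (5, 2, 7, 4)
```

Because the full-resolution sensor feed never leaves the device, cropping from a 4K frame like this can stay “lossless” at the output resolution, which is the trick the OAK-1 feature relies on.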

The company tells KnowTechie that they’ve already passed their crowdfunding goal and have gone over $100K, with more than 500 backers. Whew, AI tech is hot nowadays.

If you’re interested in grabbing either of the two OAK cameras, head on over to Kickstarter where you can still get the OAK-1 for $79 or the OAK-D for $149.

Have any thoughts on this? Let us know down below in the comments or carry the discussion over to our Twitter or Facebook.

Q&A: How Jumpstory is using AI to make visual marketing more human  https://knowtechie.com/qa-how-jumpstory-is-using-ai-to-make-visual-marketing-more-human/ Tue, 23 Jun 2020 13:04:11 +0000 https://knowtechie.com/?p=122993 To learn more about Jumpstory, we spoke with Co-founder, serial entrepreneur, and bestselling author Jonathan Low. 

The post Q&A: How Jumpstory is using AI to make visual marketing more human  appeared first on KnowTechie.

When we think of stock photography, the images that come to mind typically lack diversity and authenticity. There’s the smile-perfect woman pointing at some chart behind her, or the businessman in his tailored suit staring sternly at the portfolio in his hands. For decades, this has been the selection available to marketers, and unlike choosing your headline or SEO text, there aren’t any formulas that reveal how these photos will perform.

It looks like the times are finally changing. The Denmark-based startup JumpStory is the world’s first AI-based image platform, offering over 25 million high-performing, impactful visuals. The company uses the latest knowledge in neuromarketing to train its AI, so its visuals are genuine and, it says, improve results in both communication and marketing by up to 80%.

To learn more about the platform, we spoke with Co-founder, serial entrepreneur, and bestselling author Jonathan Low. 

What inspired you to launch JumpStory? 

We looked at the market of photos & videos online, and we felt that there was a huge gap in the market. Most images that you find on stock photo websites look fake and too picture-perfect. There is a huge lack of authenticity and real, powerful images, and at the same time the picture industry has a lot of legal pitfalls and complex rules.

There was a need for a simple and powerful solution, where you only find authentic and high-performing images – and with one simple license and one price. That is what we have created with JumpStory.

What are some of the key differences between JumpStory and other stock photography platforms?

JumpStory uses AI to get rid of all the bad and fake-looking photos. We focus on impact – not just images.

We source from more than 500 million images from hundreds of thousands of photographers and video-makers. We then use machine learning to automatically remove images that don’t look authentic, and we use neuromarketing and AI to reduce the library to only high-performing visuals.

This means that digital marketers only get authentic and high-performing images, so they save both time and money by using JumpStory. We transform an entire industry from just being about images to images with impact.

How are you using machine learning to disrupt the industry?

Machine learning is at the core of everything we do. When people hear the term, they may think of something artificial, but in fact, we use AI to make image search more human. We train the machine to look for real humans and real emotions, and we also use ML to scour the web for the best possible marketing images. This makes us totally unique in the industry.

What do you believe is in store for the future of stock photography? 

At JumpStory, we don’t believe in a future where stock platforms are the winners. Instead, companies like ourselves and our major competitors need to understand that in the future, their products should work where people do. This means we integrate with all the leading platforms out there – whether that’s CMS, marketing automation, landing page builders, content software, etc. People are too busy to go to an image platform to look for images. They want an instant and seamless experience, like we already know from social media.

If you want to understand the future of stock photography, look at how simple companies like Dropbox and Slack have made people’s lives. The future of the industry is not about rights, legal or licenses – it’s about user design and experience.

What’s next for JumpStory? 

Conquering the world. Nothing less. We’re still a small mosquito compared to elephants like Adobe, Shutterstock, and Getty, but we believe we have the right kind of poison to really hit them where it hurts. We’re in this industry to win it – not to become another stock photo platform the world can easily continue without.

Is there anything else you’d like to share with our audience? 

We believe in total transparency. That is why we’ve shared our product roadmap for the next six months openly on our platform.

If your audience has ideas on how to create the future of images, we would love to hear their ideas and opinions!

Have any thoughts on this? Let us know down below in the comments or carry the discussion over to our Twitter or Facebook.
