
Dark Horse AI Gets Passing Grade in Law Exam

An artificial intelligence dubbed Claude, developed by AI research firm Anthropic, got a “marginal pass” on a recent blind-graded law and economics exam at George Mason University, according to a recent blog post by economics professor Alex Tabarrok.

It’s yet another warning shot that AI is experiencing a moment of explosive growth in capability — and it’s not just OpenAI’s ChatGPT that we have to worry about.

Anthropic — which according to Insider secured funding from disgraced crypto exec Sam Bankman-Fried and his alleged romantic partner, former Alameda Research CEO Caroline Ellison — made a big splash with its new AI earlier this week.

Those Schools Banning Access To Generative AI ChatGPT Are Not Going To Move The Needle And Are Missing The Boat, Says AI Ethics And AI Law

To ban, or not to ban, that is the question. I would guess that if Shakespeare were around nowadays, he might have said something like that about the recent efforts to ban the use of a type of AI known as Generative AI.

Here’s the deal.


Some rather high-profile bans have been announced regarding the use of generative AI such as ChatGPT. We need to closely examine these bans and decide whether they make any sense. Here’s the scoop.



AI art tools Stable Diffusion and Midjourney targeted with copyright lawsuit

The suit claims generative AI art tools violate copyright law by scraping artists’ work from the web without their consent.

A trio of artists have launched a lawsuit against Stability AI and Midjourney, creators of AI art generators Stable Diffusion and Midjourney, and artist portfolio platform DeviantArt, which recently created its own AI art generator, DreamUp.

The artists — Sarah Andersen, Kelly McKernan, and Karla Ortiz — allege that these organizations have infringed the rights of “millions of artists” by training their AI tools on five billion images scraped from the web “without the consent of the original artists.”


AI art gets its first major copyright lawsuit.

Role Playing Via Generative AI ChatGPT Conjures Up Mental Health Questions, Stirring AI Ethics And AI Law Scrutiny

They say that actors ought to fully immerse themselves into their roles. Uta Hagen, acclaimed Tony Award-winning actress and a legendary acting teacher said this: “It’s not about losing yourself in the role, it’s about finding yourself in the role.”

In today’s column, I’m going to take you on a journey of looking at how the latest in Artificial Intelligence (AI) can be used for role-playing. This is not merely play-acting. Instead, people are opting to use a type of AI known as Generative AI including the social media headline-sparking AI app ChatGPT as a means of seeking self-growth via role-playing.


You might be wondering why I didn’t showcase a more alarming example of generative AI role-playing. I could do so, and you can readily find such examples online. For example, there are fantasy-style role-playing games that have the AI portray a magical character with amazing capabilities, all of which occur in written fluency on par with a human player. The AI in its role might for example try to (in the role-playing scenario) expunge the human player or might berate the human during the role-playing game.

My aim here was to illuminate the notion that role-playing doesn’t have to necessarily be the kind that clobbers someone over the head and announces itself to the world at large. There are subtle versions of role-playing that generative AI can undertake. Overall, whether the generative AI is full-on role-playing or performing in a restricted mode, the question still stands as to what kind of mental health impacts might this functionality portend. There are the good, the bad, and the ugly associated with generative AI and role-playing games.

On a societal basis, we ought to be deciding what makes the most sense. Otherwise, the choices are left in the hands of those who happen to be programming and devising generative AI. It takes a village to make sure that AI is developed and fielded in an ethically sound manner, and likewise abides by pertinent AI laws, if such laws are established.

First AI lawyer to appear in U.S. court

In the first case of its kind, artificial intelligence (AI) will be present throughout an entire U.S. court proceeding, when it helps to defend against a speeding ticket.

San Francisco-based DoNotPay has developed “the world’s first robot lawyer” – an AI that can be installed on a mobile device. The company’s stated goal is to “level the playing field and make legal information and self-help accessible to everyone.”

Reactions as First Robot Lawyer Is Set to Launch, Will Appear in Court Next Month

The company has built similar tools before: in the past it has used AI-generated form letters and chatbots to help people recover funds, for instance for onboard wifi that failed to work.

Many people have reacted to the innovation, arguing that it may hurt lawyers’ business, particularly lawyers who have no knowledge of artificial intelligence.

My lawyer, the robot

The eerie new capabilities of artificial intelligence are about to show up inside a courtroom — in the form of an AI chatbot lawyer that will soon argue a case in traffic court.

That’s according to Joshua Browder, the founder of a consumer-empowerment startup who conceived of the scheme.

Sometime next month, Browder is planning to send a real defendant into a real court armed with a recording device and a set of earbuds. Browder’s company will feed audio of the proceedings into an AI that will in turn spit out legal arguments; the defendant, he says, has agreed to repeat verbatim the outputs of the chatbot to an unwitting judge.

Conscious Robots: Scientists Fervently Trying To Create Them Now

The biggest obstacle is that each robotics lab has its own idea of what a conscious robot looks like. There are also moral implications to building robots that have consciousness. Will they have rights, like in Bicentennial Man?

Considerations about conscious robots have been the domain of science fiction for decades. Isaac Asimov wrote several novels, including I, Robot, that examined the implications from the perspectives of law, society, and family, raising a lot of moral questions. Experts in ethical technology have considered and expanded upon these questions as scientists like those in the Columbia University lab work toward building more intelligent machines.

Science fiction has also brought us killer machines like those in The Terminator, and conscious robots sound like one plausible path to them. Humans can learn bad ideas and act upon them, and there is no reason to believe that robots would not fall into the same trap. Some of science’s greatest minds have warned against getting carried away with artificial intelligence.

Top US court backs WhatsApp suit over Pegasus spyware

The US Supreme Court on Monday rejected a bid by NSO Group to block a WhatsApp lawsuit accusing the Israeli tech firm of allowing mass cyberespionage of journalists and human rights activists.

The Supreme Court denied NSO’s plea for legal immunity and ruled that the case, which targets the company’s Pegasus software, can continue in a California court, a court filing showed.

Pegasus gives its government customers—which have allegedly included Mexico, Hungary, Morocco and India—near-complete access to a target’s device, including their personal data, photos, messages and location.