Three innovation areas in AI that everyone is fighting for

Whenever a new, popular and eye-grabbing tool comes to market, tech companies rush to replicate it and create their own renditions.
AI-based coding assistants can surprisingly assist in genetic programming

Program evolution using large language model-based perturbation bridges the gap between evolutionary algorithms and those that operate on the level of human thought.
OpenAI trains neural network to play Minecraft like a pro

The inverse dynamics model (IDM) can use past and future information to guess the action at each step.
When AI has a secret language

Absurd prompts that consistently generate images challenge our confidence in these big generative models.
Microsoft expands Azure OpenAI service availability with new features

The changes have been made with the help of InstructGPT, a group of GPT-3-based models that are less flawed and don’t generate text as problematic as their counterparts.
DALL·E 2 is open to select users and that might be good

In February, OpenAI invited 23 external researchers to “red team” DALL·E 2 to surface its inherent flaws and vulnerabilities.
What can we expect from GPT-4?

GPT-4 will not have 100 trillion parameters.
Why is OpenAI slow in releasing its innovations to the public?

It was not until November 2021 that GPT-3 had a public release. In fact, DALL·E 2’s predecessor, DALL·E, is yet to be publicly released.
OpenAI’s DALL·E 2 can churn out hi-res conceptual art from text commands and edit it too!

DALL·E 2 was preferred over DALL·E 1 for its caption matching and photorealism when evaluators were asked to compare 1,000 image generations from each model.
Diffusion models vs GANs: Which one to choose for image synthesis?

Both have found wide usage in image, video and voice generation, leading to a debate on which produces better results: diffusion models or GANs.
After grade-school math, OpenAI now tackles high-school Math Olympiad problems

OpenAI said that it had achieved a new state of the art (41.2 per cent vs the previous 29.3 per cent) on the miniF2F benchmark.
OpenAI’s neural theorem prover can solve Math Olympiad problems

The theorem prover achieved 41.2% (vs the previous state of the art of 29.3%) on the miniF2F benchmark, a challenging collection of high-school Olympiad problems.