TechTrendFeed
I Asked ChatGPT, Claude and DeepSeek to Build Tetris

January 5, 2026
Image by Author

 

# Introduction

 
It seems like almost every week, a new model claims to be state-of-the-art, beating existing AI models on all benchmarks.

I get free access to the latest AI models at my full-time job within weeks of release. I usually don't pay much attention to the hype and just use whichever model is auto-selected by the system.

However, I know developers and friends who want to build software with AI that can be shipped to production. Since these projects are self-funded, their challenge lies in finding the best model for the job. They have to balance cost with reliability.

As a result, after the release of GPT-5.2, I decided to run a practical test to see whether this model was worth the hype, and whether it really was better than the competition.

Specifically, I chose to test flagship models from each provider: Claude Opus 4.5 (Anthropic's most capable model), GPT-5.2 Pro (OpenAI's latest extended-reasoning model), and DeepSeek V3.2 (one of the latest open-source alternatives).

To put these models to the test, I asked each one to build a playable Tetris game from a single prompt.

These were the metrics I used to evaluate each model:

 

| Criteria | Description |
| --- | --- |
| First-attempt success | With just one prompt, did the model deliver working code? Multiple debugging iterations drive up cost over time, which is why this metric was chosen. |
| Feature completeness | Were all the features mentioned in the prompt built by the model, or was anything missed? |
| Playability | Beyond the technical implementation, was the game actually smooth to play? Or were there issues that created friction in the user experience? |
| Cost-effectiveness | How much did it cost to get production-ready code? |

 

# The Prompt

 
Here is the prompt I entered into each AI model:

Build a fully functional Tetris game as a single HTML file that I can open directly in my browser.

Requirements:

GAME MECHANICS:
– All 7 Tetris piece types
– Smooth piece rotation with wall kick collision detection
– Pieces should fall automatically, increasing the speed gradually as the user's score increases
– Line clearing with visual animation
– "Next piece" preview box
– Game over detection when pieces reach the top

CONTROLS:
– Arrow keys: Left/Right to move, Down to drop faster, Up to rotate
– Touch controls for mobile: Swipe left/right to move, swipe down to drop, tap to rotate
– Spacebar to pause/unpause
– Enter key to restart after game over

VISUAL DESIGN:
– Gradient colors for each piece type
– Smooth animations when pieces move and lines clear
– Clean UI with rounded corners
– Update scores in real time
– Level indicator
– Game over screen with final score and restart button

GAMEPLAY EXPERIENCE AND POLISH:
– Smooth 60fps gameplay
– Particle effects when lines are cleared (optional but impressive)
– Increase the score based on the number of lines cleared simultaneously
– Grid background
– Responsive design

Make it visually polished and satisfying to play. The code should be clean and well-organized.
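To make the wall-kick requirement concrete: when a rotation would collide with a wall or with stacked blocks, the piece is nudged sideways by small offsets before the rotation is rejected. Here is a minimal sketch of that logic in plain JavaScript (my own illustration, not taken from any model's output):

```javascript
// Minimal wall-kick rotation sketch. Board cells: 0 = empty, 1 = filled.
// A piece is { shape, x, y } where shape is a small 0/1 matrix.
function rotateMatrix(shape) {
  // 90-degree clockwise rotation of the piece's matrix
  return shape[0].map((_, col) => shape.map(row => row[col]).reverse());
}

function collides(board, shape, x, y) {
  for (let r = 0; r < shape.length; r++) {
    for (let c = 0; c < shape[r].length; c++) {
      if (!shape[r][c]) continue;
      const bx = x + c, by = y + r;
      if (bx < 0 || bx >= board[0].length || by >= board.length) return true;
      if (by >= 0 && board[by][bx]) return true;
    }
  }
  return false;
}

function tryRotate(board, piece) {
  const rotated = rotateMatrix(piece.shape);
  // Try the rotation in place first, then "kick" off the walls.
  for (const kick of [0, -1, 1, -2, 2]) {
    if (!collides(board, rotated, piece.x + kick, piece.y)) {
      return { ...piece, shape: rotated, x: piece.x + kick };
    }
  }
  return piece; // rotation rejected
}
```

Real Tetris implementations (the SRS guideline) use per-piece kick tables, but this simpler offset list captures the behavior the prompt asks for.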

 

 

# The Results

 

// 1. Claude Opus 4.5

The Opus 4.5 model built exactly what I asked for.

The UI was clean, and instructions were displayed clearly on the screen. All the controls were responsive and the game was fun to play.

The gameplay was so smooth that I actually ended up playing for quite a while and got sidetracked from testing the other models.

Opus 4.5 also took less than 2 minutes to produce this working game, leaving me impressed on the first attempt.

 

Tetris game built by Opus 4.5

 

// 2. GPT-5.2 Pro

GPT-5.2 Pro is OpenAI's latest model with extended reasoning. For context, GPT-5.2 has three tiers: Instant, Thinking, and Pro. At the time of writing, GPT-5.2 Pro is their most intelligent model, providing extended thinking and reasoning capabilities.

It is also 4x more expensive than Opus 4.5.

There was a lot of hype around this model, leading me to go in with high expectations.

Unfortunately, I was underwhelmed by the game this model produced.

On the first attempt, GPT-5.2 Pro produced a Tetris game with a layout bug. The bottom rows of the game were outside the viewport, and I couldn't see where the pieces were landing.

This made the game unplayable, as shown in the screenshot below:

 

Tetris game built by GPT-5.2

 

I was especially surprised by this bug, since the model took around 6 minutes to produce this code.

I decided to try again with this follow-up prompt to fix the viewport problem:

The game works, but there is a bug. The bottom rows of the Tetris board are cut off at the bottom of the screen. I can't see the pieces when they land, and the canvas extends beyond the visible viewport.

Please fix this by:
1. Making sure the entire game board fits in the viewport
2. Adding proper centering so the full board is visible

The game should fit on the screen with all rows visible.
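For what it's worth, this class of layout bug is avoidable by deriving the cell size from the viewport instead of hardcoding it, so the whole board always fits. A minimal sketch of the idea (my own, not GPT-5.2 Pro's eventual fix):

```javascript
// Size the board from the viewport so every row stays visible.
const ROWS = 20, COLS = 10;

function boardSize(viewportWidth, viewportHeight, margin = 40) {
  // Largest integer cell size that fits both dimensions
  const cell = Math.floor(Math.min(
    (viewportWidth - margin) / COLS,
    (viewportHeight - margin) / ROWS
  ));
  return { cell, width: cell * COLS, height: cell * ROWS };
}

// In the page, something like:
// const { width, height } = boardSize(window.innerWidth, window.innerHeight);
// canvas.width = width; canvas.height = height;
```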

 

After the follow-up prompt, the GPT-5.2 Pro model produced a functional game, as seen in the screenshot below:

 

Tetris second try by GPT-5.2

 

However, the gameplay wasn't as smooth as that of the Opus 4.5 version.

When I pressed the "down" arrow to drop a piece, the next piece would sometimes plummet instantly at high speed, not giving me enough time to think about how to place it.

The game ended up being playable only if I let each piece fall on its own, which wasn't the best experience.

(Note: I tried the GPT-5.2 standard model too, which produced similarly buggy code on the first attempt.)

 

// 3. DeepSeek V3.2

DeepSeek's first attempt at building this game had two issues:

  • Pieces started disappearing when they hit the bottom of the screen.
  • The "down" arrow that is used to drop pieces faster ended up scrolling the entire webpage rather than just moving the game pieces.
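The page-scrolling symptom has a well-known cause: arrow keys scroll the browser window by default, so a game's key handler must call preventDefault() on the keys it consumes. A sketch of the usual fix (my guess at the root cause; the game-input plumbing is hypothetical):

```javascript
// Arrow keys scroll the page by default; consume them before the browser does.
const GAME_KEYS = new Set(['ArrowLeft', 'ArrowRight', 'ArrowDown', 'ArrowUp']);

function onKeyDown(event) {
  if (!GAME_KEYS.has(event.key)) return false; // let the browser handle it
  event.preventDefault();                      // stop the page from scrolling
  // ...move, rotate, or soft-drop the current piece here...
  return true;                                 // key was consumed by the game
}

// Wire it up in the page with:
// document.addEventListener('keydown', onKeyDown);
```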

 

Tetris game built by DeepSeek V3.2

 

I re-prompted the model to fix these issues, and the gameplay controls ended up working correctly.

However, some pieces still disappeared before they landed. This made the game completely unplayable even after the second iteration.

I'm sure this issue could be fixed with 2–3 more prompts, and given DeepSeek's low pricing, you could afford 10+ debugging rounds and still spend less than one successful Opus 4.5 attempt.

 

# Summary: GPT-5.2 vs Opus 4.5 vs DeepSeek V3.2

 

// Cost Breakdown

Here is a cost comparison across the models:
 

| Model | Input (per 1M tokens) | Output (per 1M tokens) |
| --- | --- | --- |
| DeepSeek V3.2 | $0.27 | $1.10 |
| GPT-5.2 | $1.75 | $14.00 |
| Claude Opus 4.5 | $5.00 | $25.00 |
| GPT-5.2 Pro | $21.00 | $84.00 |

 

DeepSeek V3.2 is the cheapest alternative, and you can also download the model's weights for free and run it on your own infrastructure.

GPT-5.2 is almost 7x more expensive than DeepSeek V3.2, followed by Opus 4.5 and GPT-5.2 Pro.

For this specific task (building a Tetris game), each model consumed roughly 1,000 input tokens and 3,500 output tokens.

For each additional debugging round, I estimate an extra 1,500 tokens. Here is the total cost incurred per model:

 

| Model | Total Cost | Outcome |
| --- | --- | --- |
| DeepSeek V3.2 | ~$0.005 | Game is not playable |
| GPT-5.2 | ~$0.07 | Playable, but poor user experience |
| Claude Opus 4.5 | ~$0.09 | Playable with a good user experience |
| GPT-5.2 Pro | ~$0.41 | Playable, but poor user experience |
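The totals above follow from straightforward arithmetic on the token estimates and the per-1M-token prices. A quick sketch (iteration counts are my reading of the runs described):

```javascript
// Rough cost of one build: base tokens (~1,000 in, ~3,500 out) at each
// model's per-1M-token price, plus ~1,500 extra output tokens per
// debugging iteration.
function runCost(inPricePerM, outPricePerM, iterations = 0) {
  const inputTokens = 1000;
  const outputTokens = 3500 + iterations * 1500;
  return (inputTokens * inPricePerM + outputTokens * outPricePerM) / 1e6;
}

console.log(runCost(0.27, 1.10, 1));   // DeepSeek V3.2, one retry
console.log(runCost(1.75, 14.00, 1));  // GPT-5.2, one retry
console.log(runCost(5.00, 25.00));     // Claude Opus 4.5, first try
console.log(runCost(21.00, 84.00, 1)); // GPT-5.2 Pro, one retry
```

These come out at roughly the figures in the table; the small differences are down to rounding the token estimates.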

 

# Takeaways

 
Based on my experience building this game, I would stick with the Opus 4.5 model for everyday coding tasks.

Although GPT-5.2 is cheaper than Opus 4.5, I personally wouldn't use it to code, since the iterations required to reach the same result would likely end up costing the same amount of money.

DeepSeek V3.2, however, is far more affordable than the other models on this list.

If you're a developer on a budget and have time to spare on debugging, you'll still end up saving money even if it takes you over 10 tries to get working code.

I was surprised at GPT-5.2 Pro's inability to produce a working game on the first attempt, since it took around 6 minutes to think before coming up with flawed code. After all, this is OpenAI's flagship model, and Tetris should be a relatively simple task.

That said, GPT-5.2 Pro's strengths lie in math and scientific research, and it is specifically designed for problems that don't rely on pattern recognition from training data. Perhaps this model is over-engineered for simple day-to-day coding tasks, and should instead be used for building something complex that requires novel architecture.

The practical takeaways from this experiment:

  • Opus 4.5 performs best at day-to-day coding tasks.
  • DeepSeek V3.2 is a budget alternative that delivers reasonable output, although it requires some debugging effort to reach the desired outcome.
  • GPT-5.2 (standard) didn't perform as well as Opus 4.5, while GPT-5.2 Pro is probably better suited to complex reasoning than quick coding tasks like this one.

Feel free to replicate this test with the prompt I've shared above, and happy coding!

Natassha Selvaraj is a self-taught data scientist with a passion for writing. Natassha writes on everything data science-related, a true master of all data topics. You can connect with her on LinkedIn or check out her YouTube channel.
