DeepSeek: Hype or Gamechanger?
Promising, but a "Sputnik moment"? I'm skeptical. Here's why.
So I was quite busy yesterday, and late in the evening I posted a quick review of Gab AI’s new additional models, one of which was the DeepSeek R1 that had everyone flipping out. Only, I had not checked the news prior to posting. Ironically, I wrote:
“I had not heard of DeepSeek before, and I was eager to try it.”
I saw the news about DeepSeek being this massive “gamechanger” right after I posted the Gab AI review, of course.
DeepSeek R1 was not working yesterday on Gab AI, probably because DeepSeek was getting hit with a cyberattack after its launch tanked the tech market. In fact, I tried last night to sign up for DeepSeek but could not, because their registration system was overloaded. Here is the message from DeepSeek’s website:
Due to large-scale malicious attacks on DeepSeek's services, registration may be busy. Please wait and try again. Registered users can log in normally. Thank you for your understanding and support.
Finally, I got my login code today and I’m in. R1 is still hanging sometimes on the DeepSeek website, but I got some decent output on R1 via Gab AI’s interface today, which I’ll share below.
At any rate, since the launch of DeepSeek’s latest model, R1, American tech companies have been scrambling (DeepSeek is a Chinese company). Today OpenAI even launched a government version of GPT, seemingly to save face.
Why did DeepSeek suddenly become so important that US tech stocks got hammered yesterday? Some say it could put other AI companies like OpenAI out of business. For more background, here’s a good video by Dave, a retired software engineer from Microsoft:
Key points:
Introduction
Dave discusses a significant development in the technology sector: China's release of an open-source AI model called "DeepSeek R1."
This model is considered a "Sputnik Moment," similar to the impact of the Soviet Union's launch of Sputnik on American technological dominance in the 20th century.
DeepSeek R1 is challenging the established players in AI, like OpenAI and Anthropic.
Technological Impact
DeepSeek R1 matches or exceeds the performance of top American AI models, such as OpenAI's GPT-4, but at a fraction of the cost ($6 million vs. billions).
The model was reportedly developed without access to the latest Nvidia chips, highlighting its efficiency and cost-effectiveness.
It is designed to be cheap and efficient, leveraging existing large AI models to create something powerful and accessible.
Model Details
DeepSeek R1 is a distilled language model, meaning it is trained using larger models to guide its development.
It can run on smaller devices, like consumer-grade CPUs and laptops, making it a potential game-changer.
The open-source nature allows for transparency and adaptability.
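The "distilled" part is worth unpacking. In distillation, a small student model is trained to imitate the full output distribution of a large teacher, not just its top answers. Here is a minimal sketch of the core idea in plain Python; this is illustrative only (the function names and toy logits are mine), and the real R1 pipeline is of course far more involved:

```python
import math

def softmax(logits, temperature=1.0):
    """Convert raw logits to probabilities, optionally softened by a temperature."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL divergence from the teacher's softened distribution to the student's.

    The student is trained to minimize this, i.e. to imitate the teacher's
    whole probability distribution rather than just its single best answer.
    """
    p = softmax(teacher_logits, temperature)  # teacher "soft labels"
    q = softmax(student_logits, temperature)  # student predictions
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

# A student that matches the teacher exactly has zero loss;
# a student that disagrees has a positive loss to minimize.
teacher = [2.0, 1.0, 0.1]
print(distillation_loss(teacher, teacher))          # 0.0
print(distillation_loss(teacher, [0.1, 1.0, 2.0]))  # positive: distributions disagree
```

The temperature softens both distributions so the student also learns from the teacher's "near miss" answers, which is part of why a small distilled model can punch above its weight.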
Potential and Challenges
The model lowers the barrier to entry for AI development, allowing smaller companies and researchers to experiment without significant infrastructure costs.
Smaller models like DeepSeek R1 may struggle with depth of knowledge and can produce incorrect responses.
Despite these challenges, it represents a significant step towards democratizing AI.
Implications for the U.S. and Global Market
The release of DeepSeek R1 could undermine the competitive advantage of proprietary models in research and enterprise adoption.
U.S. companies relying on subscription-based revenue might feel competitive pressure.
The open-source nature could accelerate global AI adoption but might reduce demand for U.S.-developed models.
OK, so a few points here.
First, DeepSeek is not by any stretch of the imagination the first LLM that can run on a desktop computer. I’ve had FreedomGPT installed on my computer for a while now, and it has local models that don’t require Internet access. On my computer, which has no GPU, the local AI has been super slow, so I don’t use it. It’s possible it has improved since I first tried it, though. Is DeepSeek’s better? I don’t know yet, but I’ll check it out soon.
Second, much is being said about the alleged lower cost of developing DeepSeek R1. Well, yeah, when you piggyback off of other people’s large language models, it will probably be cheaper. Point being, DeepSeek would not exist without GPT and others, and I won’t be surprised to hear about lawsuits on this point soon.
Also, much has been said about DeepSeek not needing as much computing power. The question came up in conversation yesterday: “Why does the US need to invest in so much server infrastructure if DeepSeek doesn’t need a lot of computing power?” Well, as I was explaining, DeepSeek using fewer resources doesn’t mean you don’t need servers to run it, especially at scale. The more you use it, the more computing power it will eat up; perhaps less than another model would, but power it will need.
That DeepSeek is crashing right now is proof of this. More servers are always needed the more people try to use a computing resource.
Finally, is DeepSeek’s output so much better that it’s going to destroy all other AI models? I’m not so sure about this.
The proof is in the pudding…how is DeepSeek’s output?
The first thing to understand is that the DeepSeek website or the chat app does not use R1 by default. It uses DeepSeek-V3, the older version. To use R1, you need to click the button “DeepThink R1” under the chat box. However, given the high server loads right now, you’re likely to get this as an answer:
The server is busy. Please try again later.
When working, R1 gives you all its thinking process before outputting the answer. You’ll see an example below, which I got from Gab AI’s R1 implementation.
The bad: the thinking process takes a lot of time and frankly slows the whole thing down. I’d like to be able to turn that part off. It’s interesting the first time and tiresome afterwards.
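If you consume R1 through an API or its open weights rather than a chat interface, one workaround is to strip the reasoning block yourself. A minimal sketch, assuming the raw output wraps its chain of thought in `<think>...</think>` tags (as DeepSeek's released R1 weights do; hosted interfaces like Gab AI may format it differently):

```python
import re

def split_reasoning(raw: str):
    """Separate R1-style chain-of-thought from the final answer.

    Assumes the model wraps its reasoning in <think>...</think> tags.
    Returns a (reasoning, answer) pair; if no tags are found, the whole
    text is treated as the answer.
    """
    match = re.search(r"<think>(.*?)</think>", raw, flags=re.DOTALL)
    if not match:
        return "", raw.strip()
    reasoning = match.group(1).strip()
    answer = raw[match.end():].strip()
    return reasoning, answer

sample = "<think>The user wants a short post...</think>Here is the blog post."
thoughts, answer = split_reasoning(sample)
print(answer)  # Here is the blog post.
```

That way you can log the reasoning for debugging but show readers only the answer.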
The good: I like that the output sounds more human and casual. GPT tends to be formal and clichéd and doesn’t do natural human writing very well. On that note, R1 is better, especially for small businesses that want to add content marketing to their websites with a more “human(like)” touch.
Final verdict: DeepSeek is interesting and useful. I doubt it will kill all other AI projects. Remember, AI is more than just generating blog posts. Increasingly, multimedia outputs are being developed. When DeepSeek can output a full two-hour action movie based on my prompts that is actually entertaining and interesting, then maybe other AI companies need to fold up shop. That won’t happen for a while at least.
DeepSeek DeepThink R1 Test Output
write a brief blog post on why generative AI is great from a first person perspective
<Thinking> Alright, the user wants a brief blog post on why generative AI is great, written from a first-person perspective. Let me start by understanding the requirements. First, "brief" means it shouldn't be too long, maybe a few paragraphs. The first-person perspective requires using "I" and personal experiences or opinions.
The user is probably looking to showcase the benefits of generative AI in a relatable way. They might be targeting readers who are unfamiliar with AI or skeptical about its value. The key is to highlight practical advantages without getting too technical.
I should think about the main points to cover. Common positives of generative AI include creativity, efficiency, accessibility, learning enhancement, and collaboration. Personal anecdotes would make it more engaging. Maybe start with a personal story or realization about AI's impact.
Structure-wise, an introduction with a hook, a few body paragraphs each focusing on a benefit, and a conclusion that reinforces the message. Use conversational language to keep it approachable.
Possible outline:
Introduction: Personal experience discovering generative AI.
Creativity boost: How AI helps generate ideas or content.
Efficiency: Saving time on tasks like writing or data analysis.
Accessibility: Democratizing tools for people without technical skills.
Learning and growth: AI as a tool for education and skill development.
Collaboration: Partnering with AI rather than replacing humans.
Conclusion: Summarizing the positive impact and future potential.
Need to ensure each point is concise and ties back to the first-person narrative. Avoid jargon. Use examples like drafting emails, creating art, or learning new topics with AI assistance. Maybe address common concerns subtly, like job displacement fears, by emphasizing collaboration.
Check for a natural flow. Start with a personal anecdote, then explain how AI has been beneficial in various aspects, and wrap up with a forward-looking statement. Keep paragraphs short for readability.
Also, think about the title. Should be catchy and reflective of the positive angle. Maybe something like "Why I Believe Generative AI is a Game-Changer" or "My Journey with Generative AI: Unleashing Creativity and Efficiency."
Finally, proofread to ensure clarity and maintain a friendly, enthusiastic tone throughout. Highlight the transformative aspects without sounding overly technical or salesy. </Thinking>
Title: "Why I’m (Cautiously) Obsessed with Generative AI"
Let me confess: I used to roll my eyes at the AI hype. “Another tool to replace human creativity?” I’d grumble, clinging to my notebooks and caffeine like relics of a purer time. Then I tried using generative AI to brainstorm ideas for a children’s book—and it changed my mind.
It’s Not About Replacement. It’s About Reinvention.
Generative AI isn’t the death of creativity; it’s a wildfire that clears space for new growth. When I’m stuck, tools like ChatGPT or Midjourney act like a caffeine shot for my brain. Last week, I asked an AI to generate metaphors for “loneliness” in 10 wildly different tones. One response: “Loneliness is a library where every book has the same blank page.” Did I use it verbatim? No. But it shattered my creative block and sent me scribbling.
The Great Equalizer
I’m not a coder. I still count HTML as a second language. Yet with generative AI, I’ve debugged Python scripts, designed logos, and even composed piano melodies for a friend’s podcast intro. These tools don’t demand expertise—they reward curiosity. A college student in Nairobi can now animate short films. A farmer in Iowa can prototype a sustainable packaging idea. That’s not just “cool tech.” That’s democratizing possibility.
My Unexpected Teacher
Here’s the twist: generative AI’s best feature isn’t output—it’s feedback. When I asked DALL-E to visualize “joy as a forgotten dialect,” it spat out abstract shapes and neon hieroglyphics. At first, I scoffed. Then I realized: Oh. Joy isn’t literal. It’s messy, uncontainable, a language we feel more than speak. The AI didn’t give me an answer. It gave me a mirror.
The Catch (Because Nothing’s Perfect)
Yes, I worry about plagiarism, bias, and lazy over-reliance. I fact-check every AI-generated history summary. I edit its prose to sound less… robotic. But used thoughtfully? Generative AI isn’t a cheat code—it’s a collaborator. It’s the friend who asks “What if you tried…?” when you’re too deep in your own head.
So Why Am I Here?
Because generative AI reminds me that creation isn’t sacred or scarce. Ideas can be wild, iterative, imperfect. My job isn’t to gatekeep “art” or “innovation.” It’s to stay human: curate, tweak, and care deeply about what I put into the world. The AI? It’s just the spark.
I’m still the fire.
—
TL;DR: Generative AI won’t steal your job (or your soul). But it might make you rethink what you’re capable of. And honestly? We could all use a little creative turbulence.


A personal perspective.
https://curingcoviddiseases.substack.com/p/an-old-woman-butting-in-on-the-ai