CodePen Blog
Chris’ Corner: AI for me, AI for thee
Our very own Stephen Shaw was on an episode of Web Dev Challenge on CodeTV: Build the Future of AI-Native UX in 4 Hours. I started watching this on my computer, but then moved to my living room couch to put it on the big screen. Because it deserves it! It honestly feels like “real” TV, as good as any episode of a home renovation show or the like. Only obviously better as it’s straight down the niche of web maker nerds like us.
All three teams in the episode were building something that incorporated AI directly for the user. In all three cases, using the app started with the user typing what they wanted into a textbox. That’s the kind of input LLMs thrive on. I’m sure in all three cases it was also augmented with additional prompting and whatnot, invisible to the user, but ultimately, you ask for something in your own words.
The LLMs were interacted with via API, and the teams then dealt with the responses they got back. We didn’t get to see much of how they handled those responses, but you get the sense that 1) they can be a bit slow, so you have to account for that, and 2) they are non-deterministic, so you need to be prepared for unpredictable responses.
The episode was sponsored by Algolia, which provides search functionality at its core. Algolia’s APIs are, in stark contrast to the LLM APIs, 1) very fast and 2) largely deterministic, meaning you essentially know and can control what you get back. I found this style of application development interesting: using two very different types of APIs, leaning into what each is good at. That’s not a new concept, I suppose, but it feels like a fresh new era of specifically this. It’s not AI everywhere all the time for everything! It’s more like: use AI sparingly, because it’s expensive and slow, but extremely good at certain things.
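To make that pattern concrete, here’s a minimal sketch (in Python, with hypothetical endpoints and response shapes; Algolia and the LLM vendors each have real client libraries): fire the fast, deterministic search immediately, and treat the slow, non-deterministic LLM call as an enhancement that is allowed to fail.

import requests

# Hypothetical endpoints, stand-ins for a real search API and a real LLM API.
SEARCH_URL = "https://search.example.com/query"  # fast, deterministic
LLM_URL = "https://llm.example.com/v1/generate"  # slow, non-deterministic

def handle_user_request(user_text: str) -> dict:
    # Fast path: a search API answers in tens of milliseconds, and the same
    # query reliably returns the same results, so show these immediately.
    search_results = requests.post(
        SEARCH_URL, json={"query": user_text}, timeout=2
    ).json()

    # Slow path: the LLM call can take seconds, so give it a generous timeout
    # and be prepared for it to fail or to return an unexpected shape.
    try:
        llm = requests.post(
            LLM_URL,
            json={"prompt": f"Summarize these results for: {user_text}"},
            timeout=30,
        ).json()
        summary = llm.get("text", "")  # the response shape can vary run to run
    except requests.RequestException:
        summary = ""  # degrade gracefully; the search results still render

    return {"results": search_results, "summary": summary}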
I admit I’m using AI more and more these days, but 95% of it is just for coding help. I wouldn’t call it “vibe coding” because I’m very critical of what I get back and tend to work on a codebase where I already essentially know what I’m doing; I just want advice on doing things faster and help with all the rote work. What started as AI helping with line completion has expanded into much more general prompting and “agents” roaming a whole codebase, performing various tasks. I’m not sure when it flipped for me, but this whole agent approach is now the most comfortable way for me to work with AI and code.
I haven’t tried Claude Code yet, mostly because it’s command-line only (right??) and I just don’t live on the command line like that. So I’ve been mostly using Cursor. I tried Windsurf a while back and was impressed by that, but they are going through quite a bit of turmoil lately so I think I’ll stay away from that unless I hear it’s great again or whatever.
The agentic tools that you use outside of your code editor itself kind of weird me out. I used Jules the other day for a decently rote task and it did a fine job, but it was weird to be looking at diffs in a place where I couldn’t manually edit them. It almost forces you to vibe code, asking for changes in text rather than making them yourself. There must be some market for this, as Cursor has them now, too.
It really is the “simple but ughgkghkgh” tasks for me that AI excels at. Just the other day I was working on an update to this very CodePen blog/podcast/docs site, which we have on WordPress. I had switched hosting companies lately, and with that came a loss in how I was doing cache-busting CSS. Basically I needed to edit the header.php file with a cache-busting ?v=xxx string where I linked up the CSS, otherwise shipping updated CSS wouldn’t apply when I changed it. Blech. CodePen deployed sites will not have this problem. So, anyway, I needed a simple build process to do this. I was thinking Gulp, but I asked an AI agent to suggest something. It gave me a variety of decent options, including Gulp. So I picked Gulp and it happily added a build process to handle this. It required maybe 3-4 rounds of discussion to get it perfectly dialed in, but all in all, maybe a 10-minute job. I’d say that was easily a 2-3 hour job if I had to hand-code it all out, and much more if I hadn’t already done exactly this sort of thing many times in my career. I’m definitely starting to think that the more you know what you’re doing, the more value you get out of AI.
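For flavor, here’s the kind of thing that build step has to do. This is not the Gulp setup the agent wrote; it’s a minimal Python sketch of the same idea, with hypothetical theme paths: hash the CSS file and stamp that hash onto the stylesheet URL in header.php.

import hashlib
import re
from pathlib import Path

# Hypothetical theme paths; adjust to wherever the files actually live.
css_file = Path("wp-content/themes/my-theme/style.css")
header_file = Path("wp-content/themes/my-theme/header.php")

# Hash the current CSS so the version string only changes when the file does.
version = hashlib.md5(css_file.read_bytes()).hexdigest()[:8]

# Add or update the ?v=... query string on the stylesheet reference.
html = header_file.read_text()
html = re.sub(r"(style\.css)(\?v=[0-9a-f]+)?", rf"\1?v={version}", html)
header_file.write_text(html)
print(f"Stamped style.css with ?v={version}")

Wire that into whatever runs before the CSS ships, and stale caches stop being your problem.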
While we’re at it, I’ll leave you with some AI-ish bookmarks I’ve had sitting around:
* humanify: “Deobfuscate Javascript code using ChatGPT”
* Derick Ruiz: LLMs.txt Explained (Basically dump your docs into one big .txt file for LLMs to slurp up on purpose. Weird/funny to me, but I get it. Seems like npm modules should start doing this. There’s a sketch of the format after this list.) Ryan Law also has What Is llms.txt, and Should You Care About It?
* Steve Klabnik: I am disappointed in the AI discourse. (If you’re going to argue about something, at least be informed.)
* Video: Transformers.js: State-of-the-art Machine Learning for the web. AI APIs baked into browsers will be a big deal. More privacy, no network round-trip, offline support, etc.
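Since the llms.txt bullet is a bit abstract, here’s roughly what the proposed format looks like per the llmstxt.org proposal: a markdown-flavored text file served at /llms.txt with a title, a one-line summary, and lists of links. This is a sketch for a hypothetical npm module; the project name and URLs are made up.

# my-date-utils

> A tiny utility library for parsing and formatting dates.

## Docs

- [Getting started](https://example.com/docs/start.md): install and basic usage
- [API reference](https://example.com/docs/api.md): every exported function

## Optional

- [Changelog](https://example.com/changelog.md): full release history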
<svg width="400" height="400" viewBox="0 0 400 400" xmlns="http://www.w3.org/2000/svg">
<defs>
<!-- Realistic gradients -->
<linearGradient id="gradRed" x1="0%" y1="0%" x2="100%" y2="100%">
<stop offset="0%" stop-color="#ff4e50"/>
<stop offset="100%" stop-color="#f9d423"/>
</linearGradient>
<linearGradient id="gradBlue" x1="0%" y1="0%" x2="100%" y2="100%">
<stop offset="0%" stop-color="#24c6dc"/>
<stop offset="100%" stop-color="#514a9d"/>
</linearGradient>
<!-- Spin animation -->
<style>
.wheel {
transform-origin: 200px 200px;
animation: spin 5s linear infinite;
}
@keyframes spin {
0% { transform: rotate(0deg); }
100% { transform: rotate(360deg); }
}
text {
font-family: sans-serif;
font-size: 12px;
fill: white;
pointer-events: none;
}
</style>
</defs>
<!-- Outer border -->
<circle cx="200" cy="200" r="195" fill="white" stroke="#222" stroke-width="10"/>
<!-- Rotating wheel group -->
<g class="wheel">
<!-- 8 segments (alternating colors) -->
<g transform="rotate(-22.5 200 200)">
<!-- Base path used with rotations -->
<g transform="rotate(0 200 200)">
<path d="M200,200 L200,20 A180,180 0 0,1 327.4,72.6 Z" fill="url(#gradRed)" />
</g>
<g transform="rotate(45 200 200)">
<path d="M200,200 L200,20 A180,180 0 0,1 327.4,72.6 Z" fill="url(#gradBlue)" />
</g>
<g transform="rotate(90 200 200)">
<path d="M200,200 L200,20 A180,180 0 0,1 327.4,72.6 Z" fill="url(#gradRed)" />
</g>
<g transform="rotate(135 200 200)">
<path d="M200,200 L200,20 A180,180 0 0,1 327.4,72.6 Z" fill="url(#gradBlue)" />
</g>
<g transform="rotate(180 200 200)">
<path d="M200,200 L200,20 A180,180 0 0,1 327.4,72.6 Z" fill="url(#gradRed)" />
</g>
<g transform="rotate(225 200 200)">
<path d="M200,200 L200,20 A180,180 0 0,1 327.4,72.6 Z" fill="url(#gradBlue)" />
</g>
<g transform="rotate(270 200 200)">
<path d="M200,200 L200,20 A180,180 0 0,1 327.4,72.6 Z" fill="url(#gradRed)" />
</g>
<g transform="rotate(315 200 200)">
<path d="M200,200 L200,20 A180,180 0 0,1 327.4,72.6 Z" fill="url(#gradBlue)" />
</g>
</g>
</g>
<!-- Center spinner -->
<circle cx="200" cy="200" r="30" fill="#222" stroke="#fff" stroke-width="4"/>
<circle cx="200" cy="200" r="8" fill="#FFD700"/>
<!-- Pointer -->
<polygon points="195,5 205,5 200,25" fill="#e60000" stroke="#000" stroke-width="1"/>
</svg>
# Install Playwright and download the Chromium browser
!pip install -q playwright
!playwright install chromium

# Capture a full-page screenshot using Playwright's async API.
# Note: top-level `await` works here because notebooks (Jupyter/Colab)
# already run an event loop.
from playwright.async_api import async_playwright
from IPython.display import Image

async def take_full_screenshot(url="https://example.com", output_path="screenshot.png"):
    async with async_playwright() as p:
        browser = await p.chromium.launch(headless=True)
        page = await browser.new_page()
        await page.goto(url)
        await page.screenshot(path=output_path, full_page=True)
        await browser.close()
    return output_path

# Take a screenshot of the specified URL
img_path = await take_full_screenshot("https://bestpage.x10.mx")

# Display the screenshot
Image(img_path)
Forwarded from Universal AI
Quiz: Where was the first famous email scam (the 419 scam) initiated?
* USA
* Nigeria
* Germany
* Russia
(Answer: Nigeria. The scam takes its name from Section 419 of the Nigerian Criminal Code, which covers fraud.)
!pip install -q diffusers transformers accelerate scipy safetensors

import torch
from diffusers import StableDiffusionPipeline

# You can choose a specific Stable Diffusion model here.
# This is a common one, but others are available.
model_id = "runwayml/stable-diffusion-v1-5"

# Use half precision on the GPU; float16 inference is not supported on CPU.
dtype = torch.float16 if torch.cuda.is_available() else torch.float32

# Load the pipeline and move it to the GPU if one is available.
pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=dtype)
if torch.cuda.is_available():
    pipe = pipe.to("cuda")
    print("Pipeline moved to GPU (CUDA).")
else:
    print("Running on CPU (this will be slow).")

# Function to generate an image from a text prompt
def generate_image(prompt):
    return pipe(prompt).images[0]

# Example usage with an ethical prompt:
prompt = "A serene landscape painting of a forest with a clear river."
generated_image = generate_image(prompt)

# Display the generated image (display() is available in notebooks)
display(generated_image)
Hello dear members of @Html_codee! 👋
To make our channel even more useful and interesting, your feedback is very important to us. With your support, we aim to improve the content and quality of everything we share.
📌 Please share your thoughts and suggestions in the comments on the following:
* Which types of posts do you find most helpful?
* What topics would you like to see more often?
* Which content format do you prefer (text, code, video, visuals)?
* What time is best for you to view new posts?
* Any other ideas or suggestions?
💬 Your input helps shape the future of @Html_codee — thank you for being part of our community! 🌟
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>Animated Rainbow Pulse</title>
<style>
body {
background-color: #1a1a2e; /* Dark background for contrast */
display: flex;
justify-content: center;
align-items: center;
min-height: 100vh;
margin: 0;
overflow: hidden;
font-family: 'Arial Black', sans-serif; /* Bold font for better effect */
}
h1 {
font-size: 8em; /* Large text */
font-weight: bold;
text-align: center;
letter-spacing: 5px; /* Spacing for better visual */
text-transform: uppercase;
/* Apply rainbow gradient */
background: linear-gradient(to right,
red, orange, yellow, green, blue, indigo, violet);
background-size: 200% auto; /* Make gradient wider than text for animation */
-webkit-background-clip: text; /* Clip background to text shape */
background-clip: text; /* Standard property, needed by non-WebKit browsers like Firefox */
-webkit-text-fill-color: transparent; /* Make text transparent to show background */
color: transparent; /* Fallback for non-webkit browsers */
/* Combine animations: rainbow-flow and pulse */
animation:
rainbow-flow 8s linear infinite, /* Continuous rainbow color flow */
pulse-glow 2s ease-in-out infinite alternate; /* Gentle pulse effect */
}
/* Keyframes for the rainbow color flow */
@keyframes rainbow-flow {
0% { background-position: 0% center; }
100% { background-position: 200% center; } /* Shift gradient across the text */
}
/* Keyframes for the pulse effect */
@keyframes pulse-glow {
0% {
transform: scale(1); /* Normal size */
text-shadow: 0 0 5px rgba(255, 255, 255, 0.5), /* Subtle white glow */
0 0 10px rgba(255, 255, 255, 0.3);
}
50% {
transform: scale(1.03); /* Slightly larger */
text-shadow: 0 0 15px rgba(255, 255, 255, 0.8), /* Brighter white glow */
0 0 25px rgba(255, 255, 255, 0.6);
}
100% {
transform: scale(1); /* Back to normal size */
text-shadow: 0 0 5px rgba(255, 255, 255, 0.5), /* Subtle white glow */
0 0 10px rgba(255, 255, 255, 0.3);
}
}
/* Responsive adjustments */
@media (max-width: 768px) {
h1 {
font-size: 4em;
letter-spacing: 3px;
}
}
@media (max-width: 480px) {
h1 {
font-size: 2.5em;
letter-spacing: 2px;
}
}
</style>
</head>
<body>
<h1>RAINBOW</h1>
</body>
</html>
🚀 Top Fastest Programming Languages (Execution Speed) 💻
When it comes to raw performance, these languages lead the race:
⚡ 1. C – Lightning-fast, close to hardware
⚡ 2. C++ – Powerful with high performance
⚡ 3. Rust – Memory-safe and blazingly fast
⚡ 4. Go – Lightweight, fast, and concurrent
⚡ 5. Java – Optimized by the JVM over decades
⚡ 6. Swift – Fast and modern for Apple platforms
⚡ 7. Julia – Scientific computing at C-level speed
🐍 Python – Slower in execution, but fast to develop with
💡 Speed in development and speed in execution are not the same!
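One way to feel that execution-speed gap without leaving Python: time a hand-written interpreter loop against the C-implemented built-in doing the same work. A minimal sketch:

import timeit

# The same computation two ways: an interpreted Python loop pays
# per-iteration overhead that the compiled built-in does not.
py_loop = """
total = 0
for i in range(1_000_000):
    total += i
"""
builtin = "sum(range(1_000_000))"

print("Python loop:", min(timeit.repeat(py_loop, number=10)), "s")
print("C built-in :", min(timeit.repeat(builtin, number=10)), "s")

On most machines the built-in wins by several times, which is the whole pitch for the compiled languages at the top of the list.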
CodePen Blog
403: Privacy & Permissions
Chris & Rachel hop on the show to talk about the expanded privacy (access) model in the 2.0 editor (in Private Beta as we speak). Private Pens have always been a big deal, but as private as they are, if someone has the URL, they have the URL, and it doesn’t always feel very private. There are two new levels of privacy in the 2.0 editor: password protected and collaborators only. Passwords are an obvious choice we probably should have done long ago. With a password set, both the Pen in the editor itself and the potentially deployed site are password protected.
Our new permissions model is intertwined in this. Now you can invite others directly to be a fellow Editor or simply a Viewer to an otherwise private Pen. If you set the privacy level to “collaborators only”, that’s the most private a Pen can possibly be.
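To make the levels concrete, here’s a toy model of how those rules could compose. This is not CodePen’s actual implementation, just a Python sketch with made-up names.

from enum import Enum

class Privacy(Enum):
    PUBLIC = "public"
    PRIVATE = "private"              # hidden, but anyone with the URL can view
    PASSWORD = "password"            # URL plus the correct password
    COLLABORATORS = "collaborators"  # invited users only

class Role(Enum):
    EDITOR = "editor"
    VIEWER = "viewer"

def can_view(pen, user, password=None):
    if pen["collaborators"].get(user) is not None:
        return True  # invited Editors and Viewers can always view
    if pen["privacy"] in (Privacy.PUBLIC, Privacy.PRIVATE):
        return True  # "if someone has the URL, they have the URL"
    if pen["privacy"] is Privacy.PASSWORD:
        return password == pen["password"]
    return False  # collaborators only: the most private a Pen can be

def can_edit(pen, user):
    return pen["collaborators"].get(user) is Role.EDITOR

pen = {"privacy": Privacy.COLLABORATORS, "password": None,
       "collaborators": {"rachel": Role.EDITOR}}
print(can_view(pen, "rachel"), can_view(pen, "stranger"))  # True False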
Time Jumps
* 00:07 We’re back – Rach edition!
* 01:46 Permissions and privacy
* 05:35 Building a password feature for pens
* 10:12 Invite people to edit or view a pen
* 13:13 Collaborator level access
* 16:29 Viewer and editor options
* 19:52 Needing to build a dashboard to handle invites
* 27:46 Dealing with edge cases
🌐 How to Host Your Website for Free – Quick Guide
Not sure where to host your site? Choose based on what it's built with:
🔹 HTML + PHP
✅ Use: x10Hosting
– Supports PHP
– Easy file upload
– Add to Google Search
🌐 x10hosting.com
🔹 React / Next.js
✅ Use: Vercel
– Perfect for frontend frameworks
– Git-based auto deploy
– Free custom domain
🌐 vercel.com
🔹 Python (Flask/Django)
✅ Use: Render (a minimal Flask sketch follows the list)
– Great for backend apps
– Free tier available
🌐 render.com
📌 You can submit any of these to Google Search Console and get indexed.
#FreeHosting #x10Hosting #Vercel #Render #WebDev #NextJS #Python #PHP #HTML
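To round out the Render option flagged above, here’s a minimal Flask app of the sort that deploys to a free tier. The filename and route are made up; the one host-specific detail is binding to the PORT environment variable that platforms like Render provide.

# app.py (hypothetical filename)
import os
from flask import Flask

app = Flask(__name__)

@app.route("/")
def home():
    return "Hello from a free hosting tier!"

if __name__ == "__main__":
    # Hosts like Render inject PORT; default to 5000 for local runs.
    app.run(host="0.0.0.0", port=int(os.environ.get("PORT", 5000)))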