CodePen Blog
Chris’ Corner: JavaScript Ecosystem Tools
I love a good exposé on how a front-end team operates. Like what technology they use, why, and how, particularly when there are pain points and journeys through them. Jim Simon of Reddit wrote one a bit ago about their team’s build process. They were using something Rollup-based and getting 2-minute build times; they spent quite a bit of time and effort switching to Vite and are now getting sub-1-second build times. I don’t know if “wow, Vite is fast” is the right read here, though, as they lost type checking entirely. Vite uses esbuild for TypeScript, which just strips types, meaning no build process (locally, in CI, or otherwise) will catch errors. That seems like a massive deal to me, as it opens the door to all contributions having TypeScript errors. I admit I’m fascinated by the approach, though; it’s kinda like treating TypeScript as a local-only linter. Sure, VS Code complains and gives you red squiggles, but nothing else will, so use that information as you will. Very mixed feelings.
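If a team did want to keep esbuild’s speed without giving up checking entirely, the usual move (this is a generic sketch, not a description of Reddit’s actual setup) is to make type checking its own step: run the TypeScript compiler as a pure checker in CI with tsc --noEmit, and optionally wire the third-party vite-plugin-checker into the config so errors still surface during dev and build.

// vite.config.js: a hedged sketch, not Reddit's actual config.
// esbuild only strips types, so checking happens out-of-band:
// here via vite-plugin-checker, and in CI via `npx tsc --noEmit`.
import { defineConfig } from 'vite'
import checker from 'vite-plugin-checker'

export default defineConfig({
  plugins: [
    // Runs the TypeScript checker in a worker during dev,
    // and can fail `vite build` when there are type errors.
    checker({ typescript: true }),
  ],
})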
Vite always seems to be front and center in conversations about the JavaScript ecosystem these days. The tooling section of this year’s JavaScript Rising Stars:
Vite has been the big winner again this year, renewing for the second time its State of JS awards as the most adopted and loved technology. It’s rare to have both high usage and retention, let alone maintain it. We are eagerly waiting to see how the new void(0) company will impact the Vite ecosystem next year!
(Interesting how it’s actually Biome that gained the most stars this year and has large goals about being the toolchain for the web, like Vite)
Vite actually has the bucks now to make a real run at it. It’s always nail-biting and fascinating to see money being thrown around at front-end open source, as a strong business model around all of that is hard to find.
Maybe there is an enterprise story to capture? Somehow I can see that more easily. I would guess that’s where the new venture vlt is seeing potential. npm, now owned by Microsoft, certainly had a story there that investors probably liked to see, so maybe vlt can do it again but better. It’s the “you’ve got their data” thing that adds up to me. Not that I love it, I just get it. Vite might have your stack, but we write checks to infrastructure companies.
That tinge of worry extends to Bun and Deno too. I think they can survive decently on the momentum of developers being excited about the speed and features. I wouldn’t say I’ve got a full grasp on it, but I’ve seen some developers be pretty disillusioned, or at least trepidatious, about Deno and their package registry JSR. But Deno has products! They have enterprise consulting and various hosting. Data and product; I think that is all very smart. Maybe void(0) can find a product play in there. This all reminds me of XState / Stately, which took a bit of funding, does open source, and productizes some of what they do. Their new Store library is getting lots of attention, which is good for the gander.
To be clear, I’m rooting for all of these companies. They are small and only lightly funded companies, just like CodePen, trying to make tools to make web development better. 💜
<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="UTF-8">
  <meta name="viewport" content="width=device-width, initial-scale=1.0">
  <title>Compass App</title>
  <script src="https://registry.npmmirror.com/vue/3.3.11/files/dist/vue.global.js"></script>
  <script src="https://cdn.tailwindcss.com"></script>
  <link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/font-awesome/5.15.3/css/all.min.css">
  <style>
    @import url('https://fonts.googleapis.com/css2?family=Roboto:wght@400;700&display=swap');
    body {
      font-family: 'Roboto', sans-serif;
    }
    .compass {
      width: 300px;
      height: 300px;
      border: 10px solid #4A5568;
      border-radius: 50%;
      position: relative;
      margin: 0 auto;
    }
    .needle {
      width: 10px;
      height: 150px;
      background: #E53E3E;
      position: absolute;
      top: 50%;
      left: 50%;
      transform-origin: bottom center;
      transform: translate(-50%, -100%);
    }
    .center {
      width: 20px;
      height: 20px;
      background: #4A5568;
      border-radius: 50%;
      position: absolute;
      top: 50%;
      left: 50%;
      transform: translate(-50%, -50%);
    }
  </style>
</head>
<body class="bg-gray-100 flex items-center justify-center h-screen">
  <div id="app">
    <div class="text-center mb-8">
      <h1 class="text-4xl font-bold mb-4">Compass App</h1>
      <p class="text-lg text-gray-700">Find your direction with this simple compass app.</p>
    </div>
    <div v-if="hasMagnetometer" class="compass">
      <div class="needle" :style="{ transform: `translate(-50%, -100%) rotate(${heading}deg)` }"></div>
      <div class="center"></div>
    </div>
    <div v-else class="text-center text-red-500">
      <p class="text-lg">Magnetometer not supported on your device.</p>
    </div>
    <div class="text-center mt-8" v-if="hasMagnetometer">
      <p class="text-lg text-gray-700">Current Heading: {{ heading.toFixed(2) }}°</p>
    </div>
  </div>
<script>
  const { createApp, ref, onMounted } = Vue

  createApp({
    setup() {
      const heading = ref(0)
      const hasMagnetometer = ref(false)

      const updateHeading = (event) => {
        // event.alpha can be null if the device reports no orientation data
        if (event.alpha != null) {
          heading.value = event.alpha
        }
      }

      onMounted(() => {
        if (window.DeviceOrientationEvent) {
          window.addEventListener('deviceorientation', updateHeading, true)
          if (window.DeviceOrientationEvent.requestPermission) {
            // iOS 13+ gates orientation data behind an explicit permission prompt.
            // Note: Safari expects this call to come from a user gesture, so
            // requesting it on mount (as here) may be rejected on some devices.
            window.DeviceOrientationEvent.requestPermission()
              .then(permissionState => {
                if (permissionState === 'granted') {
                  hasMagnetometer.value = true
                } else {
                  alert('Permission to access device orientation was denied.')
                }
              })
              .catch(console.error)
          } else {
            hasMagnetometer.value = true
          }
        } else {
          alert('Device orientation not supported on your device.')
        }
      })

      return {
        heading,
        hasMagnetometer
      }
    }
  }).mount('#app')
</script>
</body>
</html>
🚀 Exciting News! 🎉 I just created an awesome animated GIF with Python, showcasing a stunning nebula image! Here's a sneak peek into how it was done:
import numpy as np
import cv2
import imageio
from PIL import Image
# Load and resize the nebula image
image_path = "/path/to/your/nebula_image.png"
nebula_img = Image.open(image_path)
nebula_img = nebula_img.resize((512, 896))
# Convert image to a NumPy array
frame = np.array(nebula_img)
# Create animation frames with a cool pulsating effect 🌟
frames = []
num_frames = 30 # Crafting a 1-second animation at 30 FPS
for i in range(num_frames):
    alpha = 1 + 0.1 * np.sin(2 * np.pi * i / num_frames)
    pulsating_frame = cv2.convertScaleAbs(frame, alpha=alpha, beta=0)
    frames.append(pulsating_frame)
# Save the animation as a looping GIF
output_path = "/path/to/your/nebula_animation.gif"
imageio.mimsave(output_path, frames, fps=30)
print(f"Check out the mesmerizing animation: {output_path}")
✨ Dive into the cosmic beauty and let me know what you think! 💫 #Python #Animation #GIF #CodingMagic #html #new 🌌
Top 30 AI Libraries for Coding
🚀 Want to start building AI-powered applications? Here are the top 30 AI libraries you should know!
🔥 Machine Learning & Deep Learning
1️⃣ TensorFlow – ML & DL framework
2️⃣ PyTorch – Dynamic neural networks
3️⃣ Scikit-learn – Classic ML algorithms
4️⃣ Keras – High-level API for TensorFlow
5️⃣ XGBoost – Gradient boosting models
6️⃣ LightGBM – Fast gradient boosting
7️⃣ FastAI – Deep learning with PyTorch
8️⃣ Hugging Face Transformers – NLP models
9️⃣ OpenCV – Computer vision
🔟 Dlib – Face detection & more
📝 Natural Language Processing (NLP)
1️⃣1️⃣ spaCy – Efficient NLP toolkit
1️⃣2️⃣ NLTK – Classic NLP library
1️⃣3️⃣ TextBlob – Simple NLP API
1️⃣4️⃣ Gensim – Text vectorization
1️⃣5️⃣ Flair – PyTorch-based NLP
📊 Data Visualization
1️⃣6️⃣ Matplotlib – Basic plotting
1️⃣7️⃣ Seaborn – Statistical visualization
1️⃣8️⃣ Plotly – Interactive charts
1️⃣9️⃣ Bokeh – Web-based visualization
2️⃣0️⃣ Altair – Declarative visualization
📈 Data Processing & Analysis
2️⃣1️⃣ Pandas – Data manipulation
2️⃣2️⃣ NumPy – Numerical computing
2️⃣3️⃣ SciPy – Scientific computing
2️⃣4️⃣ Polars – Faster alternative to Pandas
2️⃣5️⃣ Feature-engine – Feature engineering
🚀 AI Performance & Optimization
2️⃣6️⃣ Ray – Scalable AI computing
2️⃣7️⃣ JAX – GPU-accelerated AI
2️⃣8️⃣ DeepSpeed – AI model optimization
2️⃣9️⃣ MLflow – ML project tracking
3️⃣0️⃣ ONNX – Cross-platform AI model compatibility
⚡ Follow @html_codee for more AI & coding tips! 💡
Video to ASCII Dropper
Video transforms into an ASCII animation in this interactive Pen from Jhey Tompkins. Drop in your own video, or try it with the fireworks video built into the Pen.
📂 Common Code File Formats & Their Meanings
Understanding different file formats is essential for developers. Here’s a quick guide to the most commonly used code-related file extensions:
🌐 Web Development
🔹 .html – Web page structure (HyperText Markup Language)
🔹 .css – Styles and design (Cascading Style Sheets)
🔹 .js – JavaScript code for web interactivity
🔹 .php – Server-side scripting (PHP Hypertext Preprocessor)
💻 Programming Languages
🔹 .py – Python script
🔹 .java – Java source code
🔹 .c / .cpp – C and C++ source files
🔹 .cs – C# source code
🔹 .kt – Kotlin file (Android development)
🔹 .swift – Swift code (iOS/macOS development)
🗃 Data & Configuration Files
🔹 .json – Lightweight data format (JavaScript Object Notation)
🔹 .xml – Structured data storage (Extensible Markup Language)
🔹 .csv – Tabular data in plain text (Comma-Separated Values)
🔹 .yaml / .yml – Configuration files (human-readable format)
🔹 .env – Environment variables file
📂 Database Files
🔹 .sql – Database queries and scripts
🔹 .db – Generic database file
🔹 .sqlite – SQLite database file
⚙ System & Executable Files
🔹 .bat – Windows batch script
🔹 .sh – Linux/macOS shell script
🔹 .dll – Dynamic-link library (shared code used by Windows applications)
🔹 .exe – Windows executable file
🔹 .jar – Java application package
🔹 .mrp – Game & app file format used on older mobile devices
💡 Want to learn more about coding? Stay tuned! 🚀
@Html_codee
Which HTML attribute improves accessibility but does **not** change appearance?
Anonymous Quiz
aria-label – 33%
title – 67%
alt – 0%
hidden – 0%
Which of the following HTML elements is **not** valid inside a `<dl>`?
Anonymous Quiz
<dt> – 0%
<dd> – 67%
<li> – 0%
<div> – 33%
What does the <meta charset='UTF-8'> tag do?
Anonymous Quiz
Sets page title – 0%
Defines character encoding – 50%
Adds CSS – 50%
Increases speed – 0%
CodePen Blog
Chris’ Corner: Offlinin’ Aint Easy
I kinda like the idea of the “minimal” service worker. Service Workers can be pretty damn complicated and the power of them honestly makes me a little nervous. They are middlemen between the browser and the network, and I can imagine really dinking that up myself. Not to dissuade you from using them, as they can do useful things no other technology can do.
That’s why I like the “minimal” part. I want to understand what it’s doing extremely clearly! The less code the better. Tantek posted about that recently, with a minimal idea:
You have a service worker (and “offline” HTML page) on your personal site, installed from any page on your site, that all it does is cache the offline page, and on future requests to your site checks to see if the requested page is available, and if so serves it, otherwise it displays your offline page with a “site appears to be unreachable” message that a lot of service workers provide, AND provides an algorithmically constructed link to the page on an archive (e.g. Internet Archive) or static mirror of your site (typically at another domain).
That seems clearly useful. The bit about linking to an archive of the page though seems a smidge off to me. If the reason a user can’t see the page is because they are offline, a page that sends them to the Internet Archive isn’t going to work either. But I like the bit about caching and at least trying to do something. Jeremy Keith did some thinking about this back in 2018 as well:
The logic works like this:
* If there’s a request for an HTML page, fetch it from the network and store a copy in a cache (but if the network request fails, try looking in the cache instead).
* For any other files, look for a copy in the cache first but meanwhile fetch a fresh version from the network to update the cache (and if there’s no existing version in the cache, fetch the file from the network and store a copy of it in the cache).
The implementation is actually just a few lines of code. A variation of it handles Tantek’s idea as well, implementing a custom offline page that could do the thing where it links off to an archive elsewhere.
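Just to make that concrete, here’s roughly what those few lines look like as a service worker. This is a minimal sketch in the spirit of Jeremy’s and Tantek’s posts, not code from either of them; the cache name and the /offline.html path are placeholders:

// sw.js: a minimal sketch, not Jeremy's or Tantek's actual code.
// The cache name and '/offline.html' path are placeholders.
const CACHE = 'pages-v1';

self.addEventListener('install', (event) => {
  // Pre-cache only the offline fallback page.
  event.waitUntil(
    caches.open(CACHE).then((cache) => cache.add('/offline.html'))
  );
});

self.addEventListener('fetch', (event) => {
  const request = event.request;
  if (request.method !== 'GET') return;

  // HTML pages: network first, fall back to the cache, then to the offline page.
  if (request.mode === 'navigate') {
    event.respondWith(
      fetch(request)
        .then((response) => {
          const copy = response.clone();
          caches.open(CACHE).then((cache) => cache.put(request, copy));
          return response;
        })
        .catch(() =>
          caches.match(request).then((cached) => cached || caches.match('/offline.html'))
        )
    );
    return;
  }

  // Everything else: serve from the cache, but refresh the cached copy in the background.
  event.respondWith(
    caches.match(request).then((cached) => {
      const fresh = fetch(request)
        .then((response) => {
          const copy = response.clone();
          caches.open(CACHE).then((cache) => cache.put(request, copy));
          return response;
        })
        .catch(() => cached);
      return cached || fresh;
    })
  );
});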
I’ll leave you with a couple more links. Have you heard the term LoFi? I’m not the biggest fan of the shortening of it because “Lo-fi” is a pretty established musical term not to mention “low fidelity” is useful in all sorts of contexts. But recently in web tech it refers to “Local First”.
I see “local-first” as shifting reads and writes to an embedded database in each client via “sync engines” that facilitate data exchange between clients and servers. Applications like Figma and Linear pioneered this approach, but it’s becoming increasingly easy to do.
From Some notes on Local-First Development, by Kyle Matthews
I dig the idea, honestly, and do see it as a place for technology (and companies that make technology) to step in and really make this style of working easy. Plenty of stuff already works this way. I think of the Notes app on my phone. Those notes are always available. It doesn’t (seem to) care if I’m online or offline. If I’m online, they’ll sync up with the cloud so other devices and backups will have the latest, but if not, so be it. It better as heck work that way! And I’m glad it does, but lots of stuff on the web does not (CodePen doesn’t). But I’d like to build stuff that works that way and have it not be some huge mountain to climb.
That eh, we’ll just sync up later, whenever we have network access bit being super non-trivial is part of the issue. Technology could make easy/dumb choices like “last write wins”, but that tends to be dangerous data-loss territory that users don’t put up with. Instead, data needs to be intelligently merged, and that isn’t easy. Dropbox is a multi-billion-dollar company that deals with this, and they admittedly don’t always have it perfect. One of the major solutions is the concept of CRDTs, which are an impressive idea to say the least, but are complex enough that most of us will gently back away.
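If you want just a taste of the merging idea before backing away: the simplest CRDT is probably the grow-only counter, where each replica only ever increments its own slot and a merge takes the per-replica max, so syncs can happen in any order, any number of times, without losing an increment. A tiny illustrative sketch of my own, not code from any of the linked posts:

// A grow-only counter (G-Counter), about the simplest CRDT there is.
// Illustrative sketch only.
function increment(counter, replicaId) {
  // Each replica bumps only its own slot.
  return { ...counter, [replicaId]: (counter[replicaId] || 0) + 1 };
}

function merge(a, b) {
  // Merging takes the per-replica max, which is commutative, associative,
  // and idempotent, so replicas converge no matter the sync order.
  const merged = { ...a };
  for (const [replica, count] of Object.entries(b)) {
    merged[replica] = Math.max(merged[replica] || 0, count);
  }
  return merged;
}

function value(counter) {
  return Object.values(counter).reduce((sum, n) => sum + n, 0);
}

// Two devices make changes offline, then sync later; nothing is lost.
const phone = increment({}, 'phone');                        // { phone: 1 }
const laptop = increment(increment({}, 'laptop'), 'laptop'); // { laptop: 2 }
console.log(value(merge(phone, laptop))); // 3, same result in either merge order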
So I’ll simply leave you with A Gentle Introduction to CRDTs.