I'm excited to announce that I have joined @huggingface! 🤗 Continuing with Transformers.js, I aim to grow the community by bridging the gap between web development and machine learning. Thank you to everyone who has supported me so far; I can't wait to show you what's next!
Introducing Whisper Web: ML-powered speech recognition directly in your browser!
This comes with the release of Transformers.js v2.2.0, which now supports multilingual transcription and translation for over 100 different languages! 🤯
Check it out:
Meta's Segment Anything Model (SAM) can now run in your browser w/ WebGPU (+ fp16), meaning up to 8x faster image encoding (10s → 1.25s)! 🤯⚡️
Video is not sped up! Everything runs 100% locally thanks to 🤗 Transformers.js and onnxruntime-web!
Demo:
We just updated our in-browser Background Removal demo to use WebGPU and it's now ~50x faster! 🤯 ~9 seconds down to 180ms! ⚡️
Powered by @bria_ai_'s RMBG-v1.4 model and 🤗 Transformers.js!
... and yes, the video is in real time! 🤯
New @karpathy video just dropped!
After watching, if you want to learn more about how different models (e.g., GPT-4, Llama, T5, BERT) tokenize text, check out "The Tokenizer Playground": a web app I built a few months ago with 🤗 Transformers.js!
New (2h13m 😅) lecture: "Let's build the GPT Tokenizer"
Tokenizers are a completely separate stage of the LLM pipeline: they have their own training set, training algorithm (Byte Pair Encoding), and after training implement two functions: encode() from strings to tokens, and…
Introducing Distil-Whisper Web: 49% smaller, 4.2x faster Whisper directly in your browser!
Here is a side-by-side comparison with OpenAI's original version! 🤯
WebGPU is the future! 🔥 Transformers.js can now perform real-time background removal, powered by MODNet! ⚡️
Development for Transformers.js v3 (which adds full WebGPU support) is well underway, and we're excited to continue sharing updates and demos!
Try it out yourself!
I know it just released, but I don't see many people talking about the Phi-3 tokenizer! 👀 Here's the full list of added special tokens... what do you notice? 🤯
<|assistant|>
<|step|>
<|function_output|>
<|tag|>
<|function_call|>
<|system|>
<|end|>
<|raw|>
<|continue|>
<|user|>…
YOLOv9 just released, and now it's compatible with 🤗 Transformers.js!
That's right... near real-time object detection running locally in your browser: no server required! 🤯 Try it out yourself!
Demo:
When do you *really* need to use a vector database? 🤔 To try to answer that question, I recreated my semantic image search application to run 100% in-browser with Transformers.js (no server).
After loading the model and database, it only takes ~50ms to compute text embeddings and…
Transformers.js v2.0 is finally here! 🔥
Run @huggingface transformers directly in your browser, with no need for a server!
Some of the new features include:
🛠️ Complete ES6 rewrite
📝 Documentation + examples
🤗 Improved Hugging Face Hub integration
Here's a side-by-side comparison of the GPT-4, Gemma, and Llama tokenizers, tested on "The Great Gatsby" (270k characters).
As @karpathy points out, the Gemma and Llama tokenizers are very similar, with the main difference being vocabulary size. One interesting thing to see is…
Seeing as I published my Tokenizer video yesterday, I thought it could be fun to take a deep dive into the Gemma tokenizer.
First, the Gemma technical report [pdf] says: "We use a subset of the SentencePiece tokenizer (Kudo and Richardson, 2018) of…
Today we released 🤗 Transformers.js v2.14, which adds support for SAM (Segment Anything Model).
This means you can now generate high-quality segmentation masks for objects in a scene, directly in your browser! 🤯
Demo (+ source code):
Grok-1 is finally out! But while everyone was focused on the weights, I decided to take a look at the tokenizer. I also added it to the Tokenizer Playground!
Structurally, it looks quite similar to the Llama 2 tokenizer (BPE w/ byte-fallback), with a vocabulary size of 2¹⁷ = …
WOW! 🤯 An in-browser version of ChatGPT (or HF Chat), built with 🤗 Transformers.js!
Yes, that's right, everything runs 100% locally in your browser, meaning no need for a server! Check it out!
Just released #BlindChat: an open-source & privacy-first ChatGPT alternative! BlindChat combines @huggingface transformers.js (from the great @xenovacom) with #ChatUI for a private and fully in-browser experience.
Try it on HF:
Distil-Whisper small is finally here! 🔥 Over 10x smaller, 5x faster, and within 3% WER of large-v2. 🤯
Since it's only 166M params, it can even run locally in your browser with 🤗 Transformers.js!
Check it out!
Real-time object detection w/ 🤗 Transformers.js, running YOLOv9 directly in your browser! 🤯
This demo shows why on-device ML is so important:
1. Privacy - local inference means no user data is sent to the cloud
2. No server latency - empowers developers to build real-time…
Due to popular demand, we added microphone support to Whisper Web! 🎙️ This means you can now record and transcribe audio directly in your browser: no installation required.
Demo:
Source code:
We just released 🤗 Transformers.js v2.7.0, which adds support for 🗣️ text-to-speech w/ SpeechT5. This means you can now synthesize human speech directly in your browser... no server required! 🔥
Check out the demo!
Introducing Remove Background Web: in-browser background removal, powered by @bria_ai_'s new RMBG-v1.4 model and 🤗 Transformers.js!
That's right, everything runs 100% locally, meaning none of your images are uploaded to a server! 🤯
Try it out:
Transformers.js v2.9.0 is now out! New features:
🎯 Zero-shot object detection w/ OwlViT
🕵️‍♀️ Depth estimation w/ DPT and GLPN
📄 Optical document understanding w/ Nougat
... and you can get started in just a few lines of code! 🤯
Calling all JS developers! We just released 2 example Next.js applications which show how to use Transformers.js for client-side (in-browser) or server-side (Node.js) inference. 🤗
Building full-stack AI applications has never been this easy!
Tutorial: …
Woah! 🤯 A new 20M-parameter embeddings model that gives similar performance to OpenAI's text-embedding-ada-002, but is much smaller and faster! 🔥
I don't understand why >95% of developers still use closed-source embeddings models...
+ it's compatible with 🤗 Transformers.js!
A new embeddings model, gte-tiny, has been published! Distilled from gte-small, it offers slightly worse performance with half the layers (or, alternatively, the same size but better performance compared to all-MiniLM-L6-v2). ONNX models are also available.
Check it out! (link below)
Depth Anything is now available in 🤗 Transformers.js!
At just 25M parameters, the small version of the model runs great locally. Here's a demo I created which performs monocular depth estimation directly in your browser (no server needed)! 🤯
Demo:
TinyLlama is finally here: a 1.1B Llama model trained on 3 trillion tokens! 🤯 It's also compatible with 🤗 Transformers.js (see code below)!
What a way to end the year! 🥳
Using 🤗 Transformers.js, you can now run CLIP directly in your browser at over 20fps w/ WebGPU (@ fp16) for real-time zero-shot image classification! 🤯
As always, everything runs 100% locally, meaning no calls made to an API! 🔥 Try it out!
Demo:
Introducing Chat with YouTube, an AI-powered browser extension that lets you chat with YouTube videos!
This project shows how easy it is to build conversational browser extensions using 🤗 Inference Endpoints and @Vercel's AI SDK.
+ it's open source!
Claude 3 just released and although the weights aren't open, the tokenizer is! 🔥
If you want to calculate how many tokens you're sending to the API, check out The Tokenizer Playground, which we recently updated to include the Claude 3 tokenizer!
Introducing MusicGen Web: AI-powered music generation directly in your browser, built with 🤗 Transformers.js! 🎵
Everything runs 100% locally, meaning no calls to an API! 🤯 Served as a static website... this costs $0 to host and run! 🔥
Try it out yourself!
We just released 🤗 Transformers.js v2.8.0, which adds a ton of new features, including:
🖼️ Super-resolution and image restoration w/ Swin2SR
✍️ Optical character recognition w/ TrOCR
💬 Text generation w/ Mistral and Falcon (<1B params)
More details in 🧵
Introducing Transformers.js: run @HuggingFace transformers directly in your browser! We currently support BERT, DistilBERT, T5, and GPT-2 models, for a variety of tasks such as translation, text generation, and sentiment analysis.
… and it's open-source!
We just released Transformers.js v2.4.0, which adds support for generating word-level timestamps w/ Whisper! 🤯
That's right, you can now perfectly caption videos directly in your browser. 🔥 I can't wait to see what you build with this!
Get started in just a few lines of code:
⚡️ Now with WebGPU support! ⚡️ Run depth estimation w/ Depth Anything in under 200ms, thanks to Transformers.js and WebGPU!
Try it out yourself!
Two annoying things about OpenAI's tokenizer playground: (1) it's capped at 50k characters, and (2) it doesn't support GPT-4 or GPT-3.5...
So, I built my own version w/ Transformers.js! It can tokenize the entire "Great Gatsby" (269k chars) in 200ms!
New features in 🤗 Transformers.js v2.16.1:
🔥 New models: APISR for anime super-resolution and EfficientNet for image classification
🖼️ New pipeline: image feature extraction
💬 Improved chat templating support: C4AI Command-R tool and RAG prompt generation
See 🧵 for more info
A new open-source embeddings model with 8K context length that matches the performance of text-embedding-ada-002! 🤯 This is a game changer! 🔥
And now it's compatible with 🤗 Transformers.js, meaning you can generate embeddings in your browser, Node.js, or even Deno!
Introducing jina-embeddings-v2, the world's first open-source model boasting an 8K context length. Matching the prowess of OpenAI's proprietary models, now accessible on @huggingface, signaling a significant milestone in the landscape of text embeddings.
The competition in AI music generation is heating up, with Suno and Udio leading the way. Unfortunately, neither are open source... 😢
Luckily, MusicGen is! The quality is amazing and you can even run it locally in your browser with Transformers.js! 🎵 For example:
prompt: …
Yes, that's right... the new Distil-Whisper models from @huggingface are fully compatible with Transformers.js! 🤗
This means you can generate high-quality transcripts directly in JavaScript: in-browser, Node, or even Deno! 🤯
Get started in just 3 lines of code:
Introducing the 🤗 Transformers.js WebGPU Embedding Benchmark! ⚡️
How much does WebGPU speed up ML models running locally in your browser? Try it out and share your results!
AI code completion running 100% locally inside your browser, thanks to @BigCodeProject's StarCoder models and 🤗 Transformers.js!
We also got their new 1B model running at ~20 tokens per second in Node.js (CPU).
Check out the demo!
Meta's Llama 3 is here, with a brand new tokenizer! 🦙 I've added it to the Tokenizer Playground, so you can experiment with it in your browser:
For those interested, here are the key differences over Llama 2:
1. 4x larger vocabulary (32K -> 128K). This…
🤗 Transformers.js v2.13 - Holiday update! ❄️ In this version, we added:
1. SegFormer for semantic segmentation and image classification.
2. VITS for multilingual text-to-speech (>1000 languages).
3. CLIPSeg for zero-shot image segmentation.
4. Table Transformer for table…
This is an absolute game changer! 🤯
@threejs but for Gaussian splatting! 🔥
I can't wait to see what the web-dev community builds with this! 🤗 cc @mrdoob
Transformers.js just hit 1 million total requests on @jsDelivr, with 52% of them coming in the past 30 days alone! 🤯
We have a ton of exciting updates coming soon, so stay tuned! I'm excited to show you what's next...
Image-to-LaTeX in 3 lines of JavaScript code, with 🤗 Transformers.js!
This is made possible thanks to @vikparuchuri's amazing texify2 model, which we converted to ONNX so it can run in the browser! 🔥
💡 Project idea: browser extension to convert PDFs/screenshots to LaTeX!
To showcase the power of in-browser machine learning for real-time data visualization, I built a semantic music search app, powered by 🤗 Transformers.js and Deepscatter.
Users can search over 50k songs with natural language, all running client-side (no server)! 🤯
Nomic Embed v1.5 is out, the first open model with variable-sized Matryoshka embeddings and 8192 context! 🤯
It's also compatible with 🤗 Transformers.js, meaning you can perform adaptive retrieval directly in your browser!
Demo showing how dimensionality affects performance:
Yes, you *heard* that right... Transformers.js now supports automatic speech recognition w/ Whisper!
Everything runs entirely inside your browser. No need to make API calls to a server! 🤯
#WebML
... and yes, it's open source:
Transformers.js just hit 3000 stars on GitHub! 🤯 The #WebML community is growing so fast, and I'm proud to be a part of it! 🤗
If you ever plan on adding in-browser machine-learning functionality to your website or web app, check out the project:
We just released Transformers.js v2.6.0! New features:
- 14 new architectures: BLOOM, MPT, BeiT, CamemBERT, CodeLlama, GPT-J, mBART, ResNet, WavLM, and more!
- Over 150 newly-converted models on the Hub!
- Huge model size reductions (up to -40%)!
Transformers.js v2.6.2 now supports document question answering, meaning you can easily extract information from images... directly in your browser (no server needed)! 🤯
We also added new models like Donut, LongT5, and Blenderbot! 🥳 I can't wait to see what you build! 🤗
We just added some new features to the 🤗 Transformers.js WebGPU Embedding Benchmark:
- fp16 and int8 support
- Ability to change models
- Lazy model loading
- Options to select which tests to run
On my device, I got >100x speedup with fp16 on WebGPU! ⚡️
Transformers.js v2.16 is now out! Here are some of the new features:
- StableLM text-generation models
- Speaker verification and diarization models
- Improved chat templating operation coverage
- New example applications and demos
Release notes:
Generate embeddings directly in your browser (or Node.js) with the latest version of Transformers.js! 🤯
We can't wait to see what you make with it! The possibilities are endless: semantic search, sentence similarity, clustering, etc... What else?
And now, SigLIP is available in 🤗 Transformers.js!
To test how it fares in practice, I adapted my semantic image search demo to use SigLIP instead of CLIP, and it works great! Everything runs 100% locally in your browser (no server needed)! 🔥
SigLIP by @Google is now available in 🤗 Transformers! It improves upon @OpenAI's CLIP with a simple sigmoid loss. SOTA for linking images with text and vice versa.
Demo notebook:
Original meme credits: @giffmana
Nomic Embed is out: a new 8K text embedding model by @nomic_ai!
It's also compatible with 🤗 Transformers.js, meaning you can generate embeddings directly in your browser (no server required)!
Introducing Nomic Embed - the first fully open long-context text embedder to beat OpenAI
- Open source, open weights, open data
- Beats OpenAI text-embedding-3-small and Ada on short- and long-context benchmarks
- Day 1 integrations with @langchain, @llama-index, @MongoDB
Transformers.js v2.5.2 now supports audio classification w/ MMS and wav2vec2, meaning you can, for example, perform language identification for over 4000 languages! 🤯
Get started in just 3 lines of code!
Full release notes:
Transformers.js v2.17 is out! New features:
🔢 Binary embeddings: 32x storage savings and significantly faster retrieval (with up to ~95% of the original performance)!
💬 Improved conversational support: pass chat messages directly to the `text-generation` pipeline.
Did you know that HuggingChat uses Transformers.js for RAG/web search? 🤯
It's amazing to see how far the library has come, and I'm so grateful to everyone in the community for helping make it what it is today! 🤗
HuggingChat is also open source!
Introducing Doodle Dash, an ML-powered web game that runs completely in your browser, thanks to Transformers.js! 🤯
You have 60 seconds to draw as many words as you can, while a neural network tries to guess what you're drawing in real time!
Play here:
🤗 Transformers.js just hit 5K stars on GitHub!
Thank you to everyone in the community for your support and contributions... this is why open source is the best! 🔥
PS: Stay tuned for some exciting updates coming soon!
Snowflake just released Arctic Embed, a collection of open-source text embedding models optimized for retrieval accuracy and efficiency! ❄️
- Apache 2.0 license
- Great for in-browser use w/ 🤗 Transformers.js (22 → 335M params)
- WebGPU-accelerated (>120x faster than WASM)
Did anyone else notice the `<start_of_image>` token in Gemma's vocabulary? 👀 Are we going to see some VLM variants soon?
I also added Gemma to "The Tokenizer Playground", which you can check out if you want to learn more about how the model performs tokenization!
New blog post: An Introduction to Matryoshka Embedding Models 🪆
Learn how these models are able to produce embeddings of various dimensions, how they can speed up tasks like retrieval, and how you can train your own!
🚨 Hugging Chat Assistants are out! 🚨
Just like OpenAI's GPTs, you can now create your own personal assistant in Hugging Chat! 🤯
To test it out, I built a "Prisoner Interrogation" game, where you must try to extract a secret password from a prisoner. Can you do it? 🤔
Qwen1.5 is out: a collection of powerful LLMs with sizes ranging from 0.5B to 72B parameters.
Even at 8-bit quantization, the smallest one (0.5B) is surprisingly good for its size! Here's a demo I made with Transformers.js (v2.15), running 100% locally in the browser w/ WASM! 🤯
Happy to announce the release of Qwen1.5! This time, we directly open-source new models of 6 sizes: 0.5B, 1.8B, 4B, 7B, 14B, and 72B (including base, chat, AWQ, GPTQ, GGUF)! From small to huge!
Blog:
GitHub:
HF: …
Here's a sneak peek of my "Chat with YouTube" browser extension, made with @Vercel's AI SDK! 🔥 It uses Llama-v2 (7B) deployed with @HuggingFace inference endpoints.
Source code and tutorial coming soon! 🤗
Introducing the Jinja Playground: design LLM chat templates directly in your browser with instant feedback.
Built with `@huggingface/jinja`, a minimalistic JavaScript implementation of the Jinja templating engine, specifically designed for parsing + rendering chat templates.
Yesterday, @MoritzLaurer released some new tiny zero-shot classifiers, so to put them to the test, I built a simple web application that sorts customer product reviews into classes chosen at runtime.
Everything runs 100% locally in your browser, thanks to 🤗 Transformers.js!
New 0.02B, 25 MB tiny zero-shot classifiers for edge-device use cases on @huggingface! The xtremedistil ONNX quantized version is only 13 MB and very fast on CPUs.
Without quantization, it has a throughput of ~4000 full sentences (! not just tokens) per second on an A10G with…
Say goodbye to silent performance issues when prompting LLMs! Today we released 🤗 Transformers.js v2.12, which adds support for chat templating! 💬
This means you can generate LLM inputs for almost any model on the @huggingface Hub, directly in your browser w/ JavaScript! 🤯
Local background removal Figma plugin, built with 🤗 Transformers.js and BRIA AI's RMBG-v1.4 model!
This shows what an amazing opportunity it is for JavaScript developers to build powerful AI applications, without worrying about API/server costs!
Great work @enzostvs! 🔥
Today we added code completion to Transformers.js! 🤯
It's like GitHub Copilot, but it runs directly in your browser (i.e., no calls to a server)!
#WebML
We can't wait to see what people make with it!
... oh, and did we mention it's open source?
🤗 Transformers.js v2.11 is one of our biggest releases yet and includes 8 exciting new models! 🧵
1. ViTMatte for image matting: separate images into foreground and background, directly in your browser! This is going to make for some cool image-editing web applications! 🔥
We just released 🤗 Transformers.js v2.10, which adds support for:
🎵 Zero-shot audio classification w/ CLAP
🎙️ Audio classification w/ Audio Spectrogram Transformer
🖼️ Image classification w/ ConvNeXT
Get started with just a few lines of code! 🤯🥳
I just tried installing the latest version of 🤗 Transformers.js with Bun 1.0:
npm: 10.7 seconds
Bun: 0.404 seconds (26.5x faster) 🤯
@bunjavascript @jarredsumner This is amazing! Great job!
Mind. Blown. Web browsers can now generate music, and it's 🔥 No servers needed, it all happens right in your browser.
⬇️ Check out the link below and get ready to jam!
🚨 Ratchet reaches alpha! 🚨
With today's release of Distil-Whisper Large V3 by @sanchitgandhi99, Ratchet officially enters alpha.
Check out this demo running large-v3 (!!) in the browser
With all the hype around multi-modal models (like GPT-4), we decided to add support for CLIP to Transformers.js 🔥
CLIP can be used for image-text similarity and zero-shot image classification... and now it can run directly in your browser!
Source code:
You asked for it, so here it is... Transformers.js now supports *timestamped* speech-to-text with Whisper! 🤯
We can't wait to see what people will create with it!
(see below for example usage and outputs)
Source code:
Today we added object detection to Transformers.js! 🤯
This means that you can detect the location and type of objects in an image, directly in your browser (no calls to a server)!
... and did we mention it's open source?
Bring images to life with 🤗 Transformers.js and Depth Anything! Everything runs 100% locally in your browser (no server required)! 🤯
Great work on the demo @conzept__! 🔥
Added "image depth estimation" to the image viewer (using @xenovacom's #transformersjs project).
This semi-3D perspective can add new insights and flavour to many of the images.
Painting:
Landscape:
Have you ever wanted to build your own semantic image search application in @nextjs? Well now you can, thanks to 🤗 Transformers.js and ⚡️ @supabase!
This comes with the release of v2.5.0, which adds support for computing embeddings with CLIP!
Demo:
With all the hype around SAM (Segment Anything Model) from @MetaAI, we decided to add image segmentation to Transformers.js!
Unlike their demo, this *actually* runs in your browser, meaning no calls to a server!
See how you can use it in your projects today:
You can now transcribe audio clips longer than 30 seconds using @OpenAI's Whisper model, directly in your browser (no server)!
Do this by specifying a chunk length and stride length when calling the pipeline function (see example).
Check it out!
Introducing Static Templates, now live on the 🤗 Hub!
Create and deploy your own website in just two clicks! Perfect for showcasing machine learning projects, demos, and more...
Get started today!
Today we're releasing the Segment Anything Model (SAM): a step toward the first foundation model for image segmentation.
SAM is capable of one-click segmentation of any object from any photo or video + zero-shot transfer to other segmentation tasks ⚡️
Soon you'll be able to run @huggingface Transformers in browser extensions! 🤯 The models run locally inside your browser - no need for a server!
We can't wait to see what people will develop when Transformers.js v2.0.0 releases!
#WebML
Did you know you can build powerful AI applications directly in JavaScript?
Well, to show how easy it is, we published an interactive video tutorial, in partnership with @Scrimba, where we build an object detection web app using Transformers.js!
Transformers.js can now do zero-shot classification! 🥳 This means you can classify text according to classes specified at runtime (and without finetuning).
What @huggingface tasks should we add next? Leave them in the comments below!
Build powerful AI chat applications in just a few lines of JavaScript code, with the @huggingface Inference API and Transformers.js! 🤗 Here's a demo showing how to run the latest @MistralAI 7B Instruct model for free in vanilla JS!
Try it out yourself!