Drop images or click to upload
JPG, PNG, WebP • Multiple files supported
Quick samples:
Extract features from 2+ images to see similarity scores
Click an image to visualize its neural embedding
1. Capture → Each video frame is grabbed at native resolution
2. Preprocess → Center-cropped to 224×224, converted to RGB
3. Inference → MobileNet-V3 extracts a 512-dim embedding in ~5 ms
4. Compare → Cosine similarity vs. the reference image (0 to 1)
💡 Tip: Set a reference image, then move objects in front of the camera to see real-time similarity changes.
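The Compare step above can be sketched in plain Python. This is an illustrative standalone function, not the demo's actual in-browser code; the sample vectors are made up:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors, in [-1, 1]."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# For non-negative embeddings (e.g. after ReLU) the score stays in [0, 1],
# matching the 0-to-1 range shown in the demo.
reference = [0.2, 0.8, 0.1]   # embedding of the reference image (illustrative)
frame = [0.25, 0.75, 0.05]    # embedding of the current video frame (illustrative)
score = cosine_similarity(reference, frame)
```

Identical directions score 1.0, orthogonal embeddings score 0.0; similar objects in front of the camera push the score toward 1.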
Start pose detection to see keypoints
Use Cases
- Fitness tracking & exercise form
- Dance/movement analysis
- Sign language recognition
- Sports biomechanics
- VR/AR body tracking
- Gesture-based UI control
Live pose embedding visualization
1. Add labeled examples
2. Test on new images
3. Correct mistakes to improve
Test Classification:
Register classes with just a few examples, then classify new images instantly.
Add examples one by one and watch the prototype centroids evolve.
The model learns from corrections. Wrong prediction? Tell it the right answer!
View all stored embeddings and their class assignments.
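The few-shot flow above (register classes from a few examples, classify by prototype centroids, learn from corrections) can be sketched as a nearest-centroid classifier. The class name and methods are illustrative, not the demo's actual API:

```python
import math
from collections import defaultdict

class PrototypeClassifier:
    """Few-shot classifier: each class is represented by the centroid
    (prototype) of its stored example embeddings."""

    def __init__(self):
        self.examples = defaultdict(list)  # label -> list of embedding vectors

    def add_example(self, label, embedding):
        self.examples[label].append(embedding)

    def centroid(self, label):
        # Per-dimension mean of the class's examples: the prototype.
        vecs = self.examples[label]
        return [sum(dim) / len(vecs) for dim in zip(*vecs)]

    def predict(self, embedding):
        # Nearest-centroid rule: pick the class whose prototype is closest.
        return min(self.examples,
                   key=lambda lbl: math.dist(embedding, self.centroid(lbl)))

    def correct(self, embedding, true_label):
        # A correction is just another labeled example: the true class's
        # centroid shifts toward the misclassified embedding.
        self.add_example(true_label, embedding)
```

Adding an example moves its class centroid, which is why the prototypes visibly evolve as examples accumulate.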
Interactive Demos
Try these working demos right in your browser
Click a query image, then see similarity scores for all others.
Drop image
Drop image
Drop multiple images
Process up to 20 images
Upload an image to explore its 512-dim neural embedding
Upload images → outliers will be highlighted in red
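One simple way to flag outliers like this is to measure each embedding's distance from the mean embedding and highlight those unusually far away. The demo's actual method isn't shown; this z-score-style sketch is one plausible approach:

```python
import math

def find_outliers(embeddings, z_threshold=2.0):
    """Return indices of embeddings whose distance to the mean embedding
    exceeds mean_distance + z_threshold * std of all distances."""
    n = len(embeddings)
    # Mean embedding (per-dimension average).
    mean = [sum(dim) / n for dim in zip(*embeddings)]
    dists = [math.dist(e, mean) for e in embeddings]
    mu = sum(dists) / n
    sigma = math.sqrt(sum((d - mu) ** 2 for d in dists) / n)
    return [i for i, d in enumerate(dists) if d > mu + z_threshold * sigma]
```

A tighter `z_threshold` flags more images; with a handful of near-duplicates plus one unrelated image, the unrelated one is the natural outlier.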
Code Examples
Copy-paste code snippets for common use cases