GitXplorer

face-landmarks-gradio

public · 5 stars · 0 forks · 0 issues

Commits

List of commits on branch main.
fc4e3acb7398b3bbde723b829cb7bb0b900757a3 (Unverified)
round linejoin
radames committed 2 years ago

f409352debac2ef30abbdd7db5cc6050ca83b09c (Unverified)
allow more faces
radames committed 2 years ago

734ed9157e656d44899a0dc1948427915477a8bd (Unverified)
refactor drawing modes, add 3 extra mode
radames committed 2 years ago

52f093fd23af0215ecc584ccf5364d73fd775dd1 (Verified)
Update README.md
radames committed 2 years ago

26a09ad61b515db18fb1f7e480f13e092eaa76d1 (Verified)
Update README.md
radames committed 2 years ago

f38f5f88fccc87ad055d930548bb1b8d452e4448 (Unverified)
add points mode
radames committed 2 years ago

README

The README file for this repository.

Face Landmark Detection Gradio Custom Component

This is a custom Svelte component for Gradio that uses MediaPipe face landmark detection to detect face landmarks in an image. Given a face position, it creates a conditioning image that is used alongside the input prompt to generate an image. The base model is the Uncanny Faces Model, developed as a tutorial on how to train your own ControlNet model.
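
As a rough illustration of the idea (not the component's actual code, which runs MediaPipe in the browser and offers several drawing modes), the same landmarks can be computed and drawn onto a blank canvas with MediaPipe's Python Face Mesh API. The helper below is only a sketch; its name and drawing style are illustrative.

# Rough sketch only: compute face landmarks with MediaPipe's Python Face Mesh API
# and draw them on a black canvas, approximating a ControlNet conditioning image.
import cv2
import mediapipe as mp
import numpy as np

mp_face_mesh = mp.solutions.face_mesh
mp_drawing = mp.solutions.drawing_utils

def landmarks_conditioning_image(image_path, max_faces=4):
  image = cv2.imread(image_path)                 # input photo (BGR)
  canvas = np.zeros_like(image)                  # black canvas for the conditioning image
  with mp_face_mesh.FaceMesh(static_image_mode=True, max_num_faces=max_faces) as face_mesh:
    results = face_mesh.process(cv2.cvtColor(image, cv2.COLOR_BGR2RGB))
    for face_landmarks in results.multi_face_landmarks or []:
      # draw the face mesh connections onto the blank canvas
      mp_drawing.draw_landmarks(canvas, face_landmarks, mp_face_mesh.FACEMESH_TESSELATION)
  return canvas                                  # used alongside the prompt as conditioning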

How to Test

npm run dev

How to Build

npm run build

After building, your custom component will be in the dist folder. The single index.js file can then be used as a custom component in Gradio; read more about how to use it in your Gradio app here.

How to Use in Gradio

Note that in the code below we're using Gradio's file server to serve index.js, located at the root level of your Gradio app next to app.py. This is done with the script source notation script.src = "file=index.js". You can also use a CDN or any other way to serve the index.js file, as long as it is served with content-type: application/javascript.

Live demo: https://huggingface.co/spaces/radames/face-landmarks-gradio
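
If you host index.js on a CDN or another static server instead, only the script source in the loader changes. A minimal variant of the loader, with a placeholder URL:

# Same loader as in the app below, but pointing at an externally hosted copy of
# index.js. The URL is a placeholder; replace it with wherever you host the built file.
load_js_cdn = """
async () => {
  const script = document.createElement('script');
  script.type = "module";
  script.src = "https://example.com/path/to/index.js";
  document.head.appendChild(script);
}
"""

The full example below uses the file= approach: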

import gradio as gr
import base64
from io import BytesIO
from PIL import Image

# Custom element rendered by the built index.js at the root of the Gradio app
canvas_html = "<face-canvas id='canvas-root' style='display:flex;max-width: 500px;margin: 0 auto;'></face-canvas>"

# Injects the component script once the Blocks app has loaded in the browser
load_js = """
async () => {
  const script = document.createElement('script');
  script.type = "module";
  script.src = "file=index.js";
  document.head.appendChild(script);
}
"""

# Reads the payload the <face-canvas> element stores on itself (_data)
# and passes it to the Python function as the input value
get_js_image = """
async (canvasData) => {
  const canvasEl = document.getElementById("canvas-root");
  const data = canvasEl ? canvasEl._data : null;
  return data;
}
"""

def predict(canvas_data):
  # canvas_data['image'] is a base64-encoded data URL produced by the component
  base64_img = canvas_data['image']
  image_data = base64.b64decode(base64_img.split(',')[1])
  image = Image.open(BytesIO(image_data))
  return image

blocks = gr.Blocks()
with blocks:
  # Hidden JSON component that receives the canvas payload from the browser
  canvas_data = gr.JSON(value={}, visible=False)
  with gr.Row():
    with gr.Column(visible=True) as box_canvas:
        canvas = gr.HTML(canvas_html, elem_id="canvas_html")
    with gr.Column(visible=True) as box_image:
        image_out = gr.Image()

  btn = gr.Button("Run")
  # _js runs in the browser first; its return value is passed to predict as the input
  btn.click(fn=predict, inputs=[canvas_data], outputs=[image_out], _js=get_js_image)
  blocks.load(None, None, None, _js=load_js)

blocks.launch(debug=True, inline=True)
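
If the component doesn't show up, it's worth checking that index.js is actually reachable and served with a JavaScript content type. A quick sketch of such a check, assuming the app is running locally on Gradio's default port and that the file route mirrors the script.src = "file=index.js" used above:

# Sanity check: fetch index.js the same way the browser would and inspect the
# content type. Assumes the app runs locally on Gradio's default port 7860.
import requests

resp = requests.get("http://127.0.0.1:7860/file=index.js")
print(resp.status_code, resp.headers.get("content-type"))
# Expect 200 and something like "application/javascript" or "text/javascript".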