Directing the Gaze! How to Redraw Faces Cleanly with Detailer


- Detect objects with Florence-2
- Extract contours with SAM2
- Trace boundaries accurately with Alpha Matte
Introduction
Hello, this is Easygoing.
Today, I’ll introduce how to use Detailer, an essential extension for image generation.
Detailer: Partial High-Resolution Enhancement
In my previous post, I discussed AI upscalers and Hires.fix for high-resolution image enhancement.
While Hires.fix enhances the entire illustration in high resolution, Detailer focuses on redrawing specific parts of an illustration.
flowchart LR
A1(Original Artwork)
subgraph High-Resolution Enhancement
B1(Latent Upscale)
B2(AI Upscaler)
end
subgraph Redrawing
C1("Hires.fix<br>(Redraws Entire Image)")
C2("Detailer<br>(Redraws Specific Parts)")
end
D1(Final Output)
A1-.->B1
A1==>B2
B1-.->C1
B2-.->C1
B2==>C2
C1-.->C2
B2-.->D1
C1-.->D1
C2==>D1
The bold arrows indicate today's workflow.
Where Does Your Gaze Go?
When you look at the following illustration, where does your gaze naturally fall?

Most people first notice the character.
Next, their eyes move to the face.
Finally, they focus on the character’s eyes, sensing presence and vitality.
We don’t try to take in every detail of an illustration but focus on key elements to understand it.
AI Tends to Be Too Perfect
AI can generate highly detailed illustrations.
However, overly perfect illustrations can have too much information, making it hard to know where to focus.
Today, we’ll explore how to use high-resolution redrawing to focus on the most important parts of an illustration.
Workflow
Here’s the workflow we’ll use:
flowchart LR
A1(Original Artwork)
subgraph Florence-2
A2(Object Detection)
end
subgraph SAM2
A3(Contour Detection)
end
subgraph Alpha Matte
A4(Boundary Adjustment)
end
subgraph Detailer
A5(Redrawing)
end
A1-->A2
A2-->A3
A3-->A4
A4-->A5

Node Explanation
- Florence-2: AI for image recognition
- SAM2 (Segment Anything 2): AI for contour detection
- Alpha Matte: Node for contour adjustment
- Detailer (SEGS): Node for redrawing
Models Used
- Base model: noob_v_pencil-XL-v2.0.1
- AI upscaler: RealESRGAN_x4Plus
- Florence-2: Florence-2-large-PromptGen v2.0 (auto-download)
- SAM2: sam2.1-hiera-large (auto-download)
Starting with AI Upscaler
First, we upscale the original illustration using an AI upscaler.


The AI upscaler produces a high-resolution, clear illustration, but it causes the facial features and eyes to lose detail.
We’ll fix this using Detailer.
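For reference, the upscaling step above can be sketched outside ComfyUI with the official Real-ESRGAN package. The file paths and output scale below are illustrative assumptions, not settings taken from the workflow.

```python
# Rough sketch: upscale the illustration with RealESRGAN_x4Plus.
# Paths and the outscale value are placeholders for illustration.
import cv2
from basicsr.archs.rrdbnet_arch import RRDBNet
from realesrgan import RealESRGANer

# RealESRGAN_x4Plus uses the standard RRDBNet architecture trained at 4x.
model = RRDBNet(num_in_ch=3, num_out_ch=3, num_feat=64,
                num_block=23, num_grow_ch=32, scale=4)
upsampler = RealESRGANer(scale=4, model_path="RealESRGAN_x4plus.pth", model=model)

img = cv2.imread("original.png", cv2.IMREAD_COLOR)
output, _ = upsampler.enhance(img, outscale=2)  # enlarge the whole illustration 2x
cv2.imwrite("upscaled.png", output)
```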
Recognizing Objects with Florence-2!
First, we use the Florence-2 AI model to recognize objects in the illustration.
Florence-2, developed by Microsoft, is an AI model that understands both images and text.
Here, we instruct Florence-2 to detect a human face.


Zooming in reveals the character’s face correctly identified within a red rectangle.
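The ComfyUI Florence-2 nodes wrap the Hugging Face transformers API, so the same detection step can be sketched as a standalone script. The model id, file names, and prompt below are illustrative assumptions; the PromptGen v2.0 variant used in the workflow is a fine-tune loaded the same way.

```python
# Minimal sketch: ask Florence-2 where the face is and get rectangular bboxes back.
import torch
from PIL import Image
from transformers import AutoModelForCausalLM, AutoProcessor

model_id = "microsoft/Florence-2-large"  # illustrative; the article uses a PromptGen fine-tune
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True).eval()
processor = AutoProcessor.from_pretrained(model_id, trust_remote_code=True)

image = Image.open("upscaled.png").convert("RGB")

# Phrase grounding: find the region matching the text "face".
task = "<CAPTION_TO_PHRASE_GROUNDING>"
inputs = processor(text=task + "face", images=image, return_tensors="pt")

with torch.no_grad():
    generated_ids = model.generate(
        input_ids=inputs["input_ids"],
        pixel_values=inputs["pixel_values"],
        max_new_tokens=256,
    )

raw = processor.batch_decode(generated_ids, skip_special_tokens=False)[0]
result = processor.post_process_generation(raw, task=task, image_size=image.size)

# e.g. {'<CAPTION_TO_PHRASE_GROUNDING>': {'bboxes': [[x1, y1, x2, y2]], 'labels': ['face']}}
print(result[task])
```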
Florence-2 Recognizes in Rectangles
Florence-2 is trained to output rectangular bounding boxes, so it recognizes objects as rectangles.
However, characters often have complex curved contours.
To address this, we use another AI model for contour detection.
SAM2 Detects Contours
For contour detection, we use SAM2 (Segment Anything Model 2).
Developed by Meta, SAM2 is an open-weight model available to everyone.
How to Use the SAM2 Model

First, we export the location data detected by Florence-2 as coordinate data.
The Florence2 Coordinate node outputs the detected region as bboxes and the center coordinates as center_coordinates.
The center coordinates typically fall inside the target object, so connecting them to the Sam2Segmentation node’s coordinates_pos input allows SAM2 to recognize the object and extract its contour.
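As a standalone sketch with Meta's sam2 package, the idea looks like this: the center of the Florence-2 bbox becomes a positive point prompt for SAM2. The checkpoint id and the example coordinates are assumptions, not the ComfyUI node's internals.

```python
# Sketch: segment the object under the bbox center with SAM2.
import numpy as np
import torch
from PIL import Image
from sam2.sam2_image_predictor import SAM2ImagePredictor

predictor = SAM2ImagePredictor.from_pretrained("facebook/sam2.1-hiera-large")

image = np.array(Image.open("upscaled.png").convert("RGB"))

# Example face bbox from Florence-2 (x1, y1, x2, y2); values are placeholders.
x1, y1, x2, y2 = 812, 240, 1180, 655
center = np.array([[(x1 + x2) / 2, (y1 + y2) / 2]])

with torch.inference_mode():
    predictor.set_image(image)
    masks, scores, _ = predictor.predict(
        point_coords=center,
        point_labels=np.array([1]),   # 1 = positive (foreground) point
        multimask_output=False,
    )

# The result is a hard on/off mask with sharply defined contours.
binary_mask = (masks[0] > 0).astype(np.uint8) * 255
Image.fromarray(binary_mask).save("sam2_mask.png")
```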
SAM2 Contours Are Sharp
SAM2 detects regions in a binary on/off manner, resulting in sharply defined, filled contours.


However, areas like the character’s hair have complex boundaries with varying shades.
Adjusting Boundaries with Alpha Matte
To detect boundaries more accurately, we use the Alpha Matte node from ComfyUI-Image-Filters.
The Alpha Matte node compares the selected region with the original image to determine boundaries.

- Alpha Matte node inputs:
  - images: Original image
  - alpha_trimap: Region detected by SAM2


Using the Alpha Matte node, we accurately captured the boundaries, especially around the hair.
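The alpha_trimap input hints at what happens inside: the hard SAM2 mask is expanded into a trimap with known foreground, known background, and an unknown band around the contour, and a matting step estimates a soft alpha inside that band from the original image. Below is a conceptual sketch of the trimap part only; the band width is an arbitrary choice, and this is not the node's actual implementation.

```python
# Conceptual sketch: turn the hard SAM2 mask into a trimap for alpha matting.
import cv2
import numpy as np

mask = cv2.imread("sam2_mask.png", cv2.IMREAD_GRAYSCALE)  # 0/255 binary mask from SAM2

kernel = np.ones((15, 15), np.uint8)       # width of the uncertain band (tuning choice)
sure_fg = cv2.erode(mask, kernel)          # definitely the character
sure_bg = cv2.dilate(mask, kernel)         # outside this is definitely background

trimap = np.full_like(mask, 128)           # 128 = unknown band (hair tips, soft edges)
trimap[sure_fg == 255] = 255               # 255 = known foreground
trimap[sure_bg == 0] = 0                   # 0   = known background

cv2.imwrite("trimap.png", trimap)
# A matting step then compares the unknown band with the original image
# and assigns each pixel a soft alpha, which is what captures fine hair strands.
```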
Redrawing with Detailer
Now, let’s redraw the detected region using Detailer.
Mask to SEGS Node
First, we pass the detected region through the Mask to SEGS node to define the redraw area.
The Mask to SEGS node has the following settings:
- combined (usually False)
  - Merges all regions into one
- crop_factor
  - Expands the redraw area for extra margin
  - Example: crop_factor = 3.0 redraws an area three times the size of the region (see the sketch below)
- drop_size
  - Excludes regions with a long side smaller than drop_size from redrawing
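To make crop_factor and drop_size concrete, here is a small sketch of the box math described above. The function names and example values are mine, not the Impact Pack source code.

```python
# Sketch: expand a detected box by crop_factor and filter tiny regions by drop_size.
def expand_crop(x1, y1, x2, y2, crop_factor, image_w, image_h):
    """Grow the detection box by crop_factor around its center, clamped to the image."""
    w, h = x2 - x1, y2 - y1
    cx, cy = x1 + w / 2, y1 + h / 2
    new_w, new_h = w * crop_factor, h * crop_factor
    return (
        max(0, int(cx - new_w / 2)), max(0, int(cy - new_h / 2)),
        min(image_w, int(cx + new_w / 2)), min(image_h, int(cy + new_h / 2)),
    )

def keep_region(x1, y1, x2, y2, drop_size):
    """Regions whose long side is smaller than drop_size are skipped."""
    return max(x2 - x1, y2 - y1) >= drop_size

# Example: a 400x500 face box with crop_factor = 3.0 grows to roughly 1200x1500,
# so the redraw sees enough surrounding context to blend naturally.
print(expand_crop(800, 900, 1200, 1400, 3.0, 2048, 3072))   # (400, 400, 1600, 1900)
print(keep_region(800, 900, 1200, 1400, drop_size=100))     # True: large enough to redraw
```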
This information is then passed to the Detailer (SEGS) node.

Detailer (SEGS) Node
The Detailer (SEGS) node has the following settings:
- guide_size
  - Enlarges the region to this size for drawing
- max_size
  - Shrinks regions larger than this size for processing
- cycle
  - Number of redraw iterations
- noise_mask_feather (0–100)
  - Amount of blur at the redraw boundary
  - Smooths boundaries to reduce redraw inconsistencies
Setting guide_size to the model’s recommended resolution ensures accurate redrawing of small objects.
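To make guide_size and max_size concrete, here is a rough sketch of how they pick the working resolution for the redraw crop. The function and the example crop size are illustrative, not the Detailer node's actual code.

```python
# Sketch: choose the resolution at which the cropped region is redrawn.
def working_size(crop_w, crop_h, guide_size=1024, max_size=1024):
    long_side = max(crop_w, crop_h)
    if long_side < guide_size:
        scale = guide_size / long_side   # enlarge small crops up to guide_size
    elif long_side > max_size:
        scale = max_size / long_side     # shrink oversized crops down to max_size
    else:
        scale = 1.0                      # already in a comfortable range
    return round(crop_w * scale), round(crop_h * scale)

# A 480x600 face crop is enlarged to about 819x1024 before redrawing,
# so the sampler works near the model's recommended resolution.
print(working_size(480, 600))
```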
For reference, my settings are:
- combined: False
- crop_factor: 1.2
- drop_size: 100
- guide_size: 1024
- max_size: 1024
- cycle: 1
- noise_mask_feather: 20
The Actual Illustration!
Here’s the illustration redrawn using Detailer:


Color control for V-pred models remains a future challenge.
With Detailer, the character stands out, and the facial expression and eyes are beautifully refined.
Comparison with Hires.fix
Let’s compare the generated illustration with one processed using Hires.fix with Tile.


| | Detailer | Hires.fix |
|---|---|---|
| Upscaling | AI Upscaler | AI Upscaler or Latent Upscale |
| Detail Emphasis | Emphasizes Characters | Includes Background Details |
| Clarity | High ✅ | Slightly Lower |
| Uniformity | Slightly Lower | High ✅ |
Detailer simplifies the background, making the character stand out.


Detailer’s composition centers the redraw target, resulting in clearer illustrations with less AI confusion.
However, overall detail and harmony are stronger with Hires.fix.
Both have pros and cons, so choose based on the illustration’s purpose.
ADetailer Offers Some of These Features
Stable Diffusion webUI (Forge, reForge, A1111) includes some of these features under After Detailer.
| Platform | ComfyUI | Stable Diffusion webUI |
|---|---|---|
| Name | Detailer | After Detailer |
| Detection Model | Florence-2, YOLO | YOLO |
| Contour Model | SAM2, SAM | SAM |
| Boundary Adjustment | ✅ | ❌ |
After Detailer is limited to the older YOLO detection models, which require a separate model for each part to be detected.
It also lacks the boundary adjustment available in ComfyUI, so texture differences at the redraw boundary are more noticeable.
For detailed image processing, ComfyUI offers greater flexibility.
Custom Nodes Used
Here are the custom nodes used and their search screens in ComfyUI Manager:
ComfyUI-Florence2

ComfyUI-segment-anything-2

ComfyUI-Image-Filters

Summary: Try Detailer!
- Detect objects with Florence-2
- Extract contours with SAM2
- Trace boundaries accurately with Alpha Matte
AI illustrations become stunningly clear with high resolution, but their perfection can be overwhelming.
Detailer allows you to focus on key parts while simplifying others.

Detailer enables unexpectedly clear illustrations.
The pursuit of higher-quality, natural AI illustrations continues.
Thank you for reading!
Model Introduction
noob_v_pencil-XL-v2.0.1 (Released 2025.4.7)
The model used is noob_v_pencil-XL-v2.0.1, released on 2025.4.7.
This model is a merge that incorporates the vibrant NoobAI-XL v-pred, and it accepts natural-language prompts without quality tags or negative prompts.
Try the user-friendly v-pred model today!
Reference: Sampler Comparison for noob_v_pencil-XL-v2.0.1
This is a list of sampler comparison images for noob_v_pencil-XL-v2.0.1.
The v-pred model requires precise sampler adjustments, but Euler, Heun, and Heunpp2 seem to perform well.