*Note: this article was written for v1 of DDG. Most of the information is still accurate, but the picture above is no longer representative for style weights over 60–70%. Those results should be much cleaner and brighter now.
The DDG says it "makes extracted patterns from the style image bigger/smaller" when they are translated into your source image. The dog on top has a 160% Style Scale; the dog on the bottom has a 40% Style Scale.
If you leave Style Scale at 100%, the leopard spots will stay roughly the same size as in the style image, especially with Enhance dialed up, because the program looks for similarly sized regions to match and attempts to conform to the source image somewhat.
It's important to note that the relative sizes of the source and style images directly determine the default pattern size.
This is a little harder to wrap one's head around than Style Scale, but the DDG says: "Default is 50%. Less means more like the original image and more means more like the style image." The dog on top has a 90% Style Weight and looks more like the style image; the dog on the bottom has a 10% Style Weight and looks more like the source image.
This is an even more abstract concept. The DDG says “How much strength to apply when enhancing the image. Higher value will produce higher quality output but will diminish some of the original texture and structure.” Basically, if I want the output to have more of the style image’s original texture/shape I keep enhance low. If I want the output to have more of the source image’s texture/shape I keep enhance high.
This setting functions somewhat like a sub-setting of Style Weight: a higher Iteration Boost results in more of the style being applied to the source image – more details, fewer "blank" areas, and typically a "busier" look. This effectively tops out at the x1.5 setting, though some dreams can take advantage of the full x2. The right setting depends on your dream: a good rule of thumb is to use lower settings when you want to preserve a solid-color background or avoid "jpeg artifacts" (squiggly lines where they shouldn't be, more or less). However, if you're applying a complicated fractal to a busy forest scene, for instance, consider a higher Iteration Boost.
For technical information on the resolution of digital images, see the Wikipedia article on Image Resolution. Why do dreams made in high resolution (0.95 MP) and in low resolution (0.36 MP) look different, even with the same source and style images and otherwise identical settings? Jost Tétan explains: "The reason for that is that the same picture in low resolution and high resolution is not exactly the same. The number of pixels differs. Example: You have a picture with 100×100=10,000 pixels. When you downsize the image to 50×50, there are only 2,500 pixels. Out of 4 pixels in the larger image, the program calculates an average color and brightness into only 1 pixel. That means the image has fewer details that are affected by the style. That makes it blurrier."
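The averaging Jost describes is easy to see in miniature. The sketch below (a hypothetical example, not DDG's actual code) downsizes a tiny grayscale checkerboard by averaging each 2×2 block of pixels into one pixel – four pixels become one, exactly as in the quote – and the fine detail vanishes:

```python
import numpy as np

# A hypothetical 4x4 grayscale "image" with fine detail: a checkerboard
# of pure black (0) and pure white (255) pixels.
img = np.array([
    [0, 255, 0, 255],
    [255, 0, 255, 0],
    [0, 255, 0, 255],
    [255, 0, 255, 0],
], dtype=float)

# Downsize 2x by averaging each 2x2 block into a single pixel,
# as described in the quote: 4 pixels become 1.
h, w = img.shape
small = img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

print(small)
# Every 2x2 block averages to the same flat gray (127.5), so the
# checkerboard detail is gone -- there is less structure left for
# the style to latch onto, which is why low-res dreams look blurrier.
```

The same thing happens, at a much larger scale, when DDG renders a 0.36 MP dream instead of a 0.95 MP one.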
Andrew Irving: My no-brain setting is similar, except that I use a Style Weight of 60% and a Style Scale of 60% to get more details in the result and to avoid "washed-out" and "blown-up" results.
Dave Smith: I sometimes adjust the style scale to 40 or 60% to increase details and sharpness. Sometimes it will take a few goes before I am happy with the result.
Ben Beekman: If I’m working with a large collage, I’ll do the same as Andrew Irving but with Enhance set to Extra High. If I’m working with a single style image, my no-brain setting is identical.
I disagree with the notion that a "higher (Enhance) value will produce a higher quality output," as I believe the examples I provided show. That may be the case if you're using an HD style image. But if the style image is smaller than your source image, a high Enhance setting will make the output blurrier, as can be seen in the example links above. The dog with Enhance set to Extra-High looks much blurrier than the dog with Enhance set to None, because the leopard-print style image is smaller than the source image of the dog. I find that every style image has an Enhance setting that works best for it, regardless of the source image. I have to experiment with each style image to find the perfect Enhance setting, and once I do, I stick with it.
Ben Beekman: Here are some examples that may clarify what Daniel Prust is saying. An example of a style image I might set Enhance to Low or Off for is a spider web, because its structure and interlocking geometry are important to preserve in the final output (otherwise it's no longer identifiable as a spider web). An example of a style image I might set Enhance to Extra-High for is a large collage encompassing several styles and subjects, because I want the program to disregard more of the style's structure and just treat the style image as a "visual library" to borrow from in a less structured way.