The DDG says it “makes extracted patterns from the style image bigger/smaller…” when translated onto your source image. The dog on top has a 160% Style Scale; the dog on bottom has a 40% Style Scale.
If you leave “Style Scale” at 100%, the leopard spots will stay roughly the same size, especially with Enhance dialed up, because the algorithm will look for similarly sized regions to match and attempt to conform to the source image somewhat.
It’s important to note that the relative sizes of the source and style images also affect how big the patterns come out at the default setting.
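DDG doesn’t publish its internals, but a reasonable mental model of Style Scale is that it resizes the style image before patterns are extracted from it, so each spot covers a larger or smaller region of the output. A minimal sketch of that idea (the `scale_pattern` helper and the tiny checkerboard “spots” are hypothetical, purely for illustration):

```python
import numpy as np

def scale_pattern(pattern: np.ndarray, factor: int) -> np.ndarray:
    """Nearest-neighbour upscale: each pixel becomes a factor x factor block."""
    return np.kron(pattern, np.ones((factor, factor), dtype=pattern.dtype))

# A tiny stand-in for a style texture: 1 = spot, 0 = background.
spots = np.array([[1, 0],
                  [0, 1]])

# Something like a 160% Style Scale makes each spot cover more of the
# output; an integer factor of 2 stands in for "bigger" here.
bigger = scale_pattern(spots, 2)
print(bigger.shape)  # (4, 4) -- same pattern, each spot now a 2x2 block
```

The same pattern at a lower scale would simply occupy fewer pixels per spot, which is why a 40% setting produces finer, denser texture.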
This is a little harder to wrap one’s head around than Style Scale, but the DDG says “Default is 50%. Less means more like the original image and more means more like the style image.” The dog on top has a 90% Style Weight and looks more like the style image; the dog on bottom has a 10% Style Weight and looks more like the source image.
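DDG doesn’t document how this slider is implemented, but in the classic neural style transfer formulation a weight like this maps naturally onto the trade-off between a content term and a style term in the objective. A toy sketch of that blend, with made-up loss values (the function name and numbers are purely illustrative, not DDG’s actual code):

```python
def total_loss(content_loss: float, style_loss: float, style_weight: float) -> float:
    """Blend two objectives; style_weight in [0, 1] plays the role of the slider."""
    return (1.0 - style_weight) * content_loss + style_weight * style_loss

# Hypothetical loss values, for illustration only.
content, style = 2.0, 8.0

# At 10% the content term dominates, so the output stays close to the
# source image; at 90% the style term dominates.
print(total_loss(content, style, 0.1))
print(total_loss(content, style, 0.9))
```

Under this reading, the 10% dog and the 90% dog are the same optimization pulled toward opposite ends of that blend.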
This is an even more abstract concept. It seems conceptually similar to Style Weight in a way, but less extreme. The DDG says “How much strength to apply when enhancing the image. Higher value will produce higher quality output but will diminish some of the original texture and structure.” Basically, if I want the output to have more of the style image’s original texture/shape, I keep Enhance low. If I want the output to have more of the source image’s texture/shape, I keep Enhance high.
Enhance set at “None”:
Enhance set at “Extra-High”:
I disagree with the notion that a “higher (enhance) value will produce a higher quality output”, as I believe the examples above show. That may be the case if your style image is HD, but if the style image is smaller than your source image, a high Enhance setting will make the output blurrier. The dog with Enhance at ‘Extra-High’ looks much blurrier than the dog with Enhance at ‘None’, because the leopard-print style image is smaller than the source image of the dog. I find that every style image has an Enhance setting that works best for it, regardless of source image; I experiment with each style image to find the perfect setting, and once I do, I stick with it.
Ben Beekman: Here are some examples that may clarify what Daniel Prust is saying. An example of a style image that I might set Enhance to Low or Off for is a spider web, because the structure and interlocking geometry are important to preserve in the final output (otherwise it’s no longer identifiable as a spider web). An example of a style image that I might set Enhance to Extra-High for is a large collage encompassing several styles and subjects, because I want it to disregard more of the style structure and just treat the style image as a “visual library” to borrow from in a less structured way.
This one is kind of a no-brainer in my opinion. The DDG says: “The more depth you set the better quality you will get but it will take more time to generate the image.” So unless you’re in a hurry, set Depth to “Deep” (it should be the default option IMO). I just ran a test, and setting Depth to “Shallow” generated the dream about a minute quicker than setting it to “Deep” did. The quality difference was minimal as well, but personally I want every ounce of definition I can get.
I’ve noticed some differences with this setting: colors and patterns will sometimes appear in places on Deep/High where they weren’t there on a lower setting. It’s not that they were simply rendered in higher definition so you could make out finer details; I think it warps the image more the higher you set it. On the other hand, sometimes it does almost nothing. In most cases there will be a very slight color difference between the Shallow and Deep settings.
For technical information on the resolution of digital images, see the Wikipedia article on Image Resolution. Why do dreams made in high resolution (0.95 MP) look different from those made in low resolution (0.36 MP) from the same source and style images with otherwise identical settings? Jost Tétan explains: “The reason for that is that the same picture in low resolution and high resolution is not exactly the same. The number of pixels differs. Example: You have a picture with 100×100=10,000 pixels. When you downsize the image to 50×50, there are only 2,500 pixels. Out of 4 pixels in the larger image, the program calculates an average color and brightness into only 1 pixel. That means the image has fewer details that are affected by the style. That makes it blurrier.”
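Jost’s “4 pixels averaged into 1” is exactly 2×2 average pooling. A small sketch of that operation (the pixel values here are arbitrary, just to show the arithmetic):

```python
import numpy as np

def downsample_2x(img: np.ndarray) -> np.ndarray:
    """Average each 2x2 block into a single pixel (the averaging Jost describes)."""
    h, w = img.shape
    return img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

img = np.arange(16, dtype=float).reshape(4, 4)  # a 4x4 "image": 16 pixels
small = downsample_2x(img)                      # now 2x2: only 4 pixels
print(img.size, small.size)  # 16 4
print(small[0, 0])           # average of pixels 0, 1, 4, 5 -> 2.5
```

Each output pixel is a blend of four input pixels, which is precisely the loss of fine detail that makes low-resolution dreams come out softer.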
Daniel Prust: Btw, this is my go-to setting when I don’t feel like overthinking it. I adjust “Resolution” to the highest setting that I am currently allotted.
Andrew Irving: My no-brain setting is similar, except that I use a style weight of 60% and a style scale of 60% to get more detail in the result and to avoid ‘washed-out’ and ‘blown-up’ results.
Dave Smith: I sometimes adjust the style scale to 40 or 60% to increase details and sharpness. Sometimes it will take a few goes before I am happy with the result.
Ben Beekman: If I’m working with a large collage, I’ll do the same as Andrew Irving but with Enhance set to Extra High. If I’m working with a single style image, my no-brain setting is identical.