@@ -207,12 +207,6 @@ To use a textual inversion concepts/embeddings in a text prompt put them in the
```embedding:embedding_filename.pt```
## How to increase generation speed?
On non-Nvidia hardware you can pass the following command line option to disable the upcasting to fp32 in some cross attention operations, which will increase generation speed. Note that this will very likely produce black images on SD2.x models. If you use xformers or pytorch attention, this option does nothing.
```--dont-upcast-attention```
## How to show high-quality previews?
Use ```--preview-method auto``` to enable previews.
parser.add_argument("--dont-upcast-attention", action="store_true", help="Disable upcasting of attention. Can boost speed but increase the chances of black images.")
fp_group = parser.add_mutually_exclusive_group()
fp_group.add_argument("--force-fp32", action="store_true", help="Force fp32 (If this makes your GPU work better please report it).")
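A minimal, self-contained sketch of how these flags fit together with `argparse`. The `--force-fp16` flag shown as the group's counterpart is illustrative of how a mutually exclusive precision group is typically filled out, not taken from the lines above; passing two flags from the same group makes argparse exit with an error.

```python
import argparse

parser = argparse.ArgumentParser()
parser.add_argument("--dont-upcast-attention", action="store_true",
                    help="Disable upcasting of attention. Can boost speed but increase the chances of black images.")

# Precision flags are mutually exclusive: argparse rejects a command line
# that sets more than one flag from this group.
fp_group = parser.add_mutually_exclusive_group()
fp_group.add_argument("--force-fp32", action="store_true",
                      help="Force fp32 (If this makes your GPU work better please report it).")
fp_group.add_argument("--force-fp16", action="store_true",
                      help="Force fp16 (illustrative counterpart flag).")

# Dashes in option names become underscores on the parsed namespace.
args = parser.parse_args(["--force-fp32"])
print(args.force_fp32, args.force_fp16, args.dont_upcast_attention)
```

Passing `["--force-fp32", "--force-fp16"]` instead would trigger argparse's "not allowed with" error and a non-zero exit.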
-context_dim=context_dim if self.disable_self_attn else None, dtype=dtype, device=device, operations=operations)  # is a self-attention if not self.disable_self_attn
+context_dim=context_dim if self.disable_self_attn else None, attn_precision=attn_precision, dtype=dtype, device=device, operations=operations)  # is a self-attention if not self.disable_self_attn
-heads=n_heads, dim_head=d_head, dropout=dropout, dtype=dtype, device=device, operations=operations)  # is self-attn if context is none
+heads=n_heads, dim_head=d_head, dropout=dropout, attn_precision=attn_precision, dtype=dtype, device=device, operations=operations)  # is self-attn if context is none