Use a simple CLIP model implementation instead of the one from
transformers.
This will allow some interesting things that would be too hackish to implement
using the transformers implementation.
More generic CLIP model class that can be used with more types of text
encoders.
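
A minimal sketch of what such a generic wrapper could look like (the class and
method names here are hypothetical, not the actual code in the repo): any
text-encoder backbone that exposes per-layer hidden states can be wrapped
behind one common interface, including the layer selection used by
CLIPSetLastLayer.

```python
import torch
import torch.nn as nn

class GenericTextEncoder(nn.Module):
    """Hypothetical wrapper: any CLIP-like text encoder behind one interface."""

    def __init__(self, backbone: nn.Module, layer_idx: int = -1):
        super().__init__()
        self.backbone = backbone    # assumed to return a tuple of hidden states, one per layer
        self.layer_idx = layer_idx  # which hidden layer to take the text embedding from

    def set_last_layer(self, layer_idx: int) -> None:
        # Same idea as CLIPSetLastLayer: pick an earlier layer's output.
        self.layer_idx = layer_idx

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        hidden_states = self.backbone(tokens)
        return hidden_states[self.layer_idx]
```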
Don't apply the weighting algorithm when the weight is 1.0.
Don't compute an empty token output when it's not needed.
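
Both optimizations are small short-circuits; a simplified illustration follows
(the helpers `weighted_embeddings` and `encode_prompts` are made up for this
sketch and do not reflect the repo's actual weighting algorithm):

```python
import torch

def weighted_embeddings(embeddings: torch.Tensor, weights: torch.Tensor) -> torch.Tensor:
    # Skip the weighting step entirely when every weight is 1.0,
    # since it would effectively be a no-op anyway.
    if torch.all(weights == 1.0):
        return embeddings
    return embeddings * weights.unsqueeze(-1)

def encode_prompts(encode, prompt_tokens, need_empty: bool):
    # Only encode the empty prompt when the unconditional output is actually used.
    cond = encode(prompt_tokens)
    uncond = encode([]) if need_empty else None
    return cond, uncond
```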
Before, I had made the last layer the penultimate layer because some
checkpoints don't include it, but that's not consistent with the other models.
TLDR: for SD2.x models only: CLIPSetLastLayer -1 is now -2.
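
A rough sketch of what this indexing change means in practice (variable names
are illustrative only):

```python
# hidden_states: per-layer outputs of the text encoder, index -1 = final layer.
hidden_states = ["layer0", "layer1", "layer2", "layer3"]

# Old behaviour (SD2.x only): CLIPSetLastLayer -1 silently selected the
# penultimate layer.
old_output = hidden_states[-2]

# New behaviour: -1 really means the last layer on every model, so to keep the
# old SD2.x results you now pass -2.
equivalent = hidden_states[-2]
```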