I looked at the roadmap and couldn’t find anything like this, so I wanted to bring it up. Fairly high on my personal wishlist for Spine would be some kind of system for automating attachment and draw order changes.
The capacity to change sprites and draw order in animation is powerful and lends itself to some very neat flourishes and rig capabilities (e.g. blending between sprites with multiple orientations, sub-units that use frame-by-frame animation, rotating gun barrels or machinery, etc.), but the ergonomics for animating these setups can be a little awkward. They usually require jumping around the tree view and manipulating dissimilar elements, and frequently require different timings on those elements, which crowds the timeline. That in turn makes them more fiddly to change and iterate on. If usage is frequent or complex you end up spending a lot of time babysitting these systems, which I’ve found to be a frequent pain point.
I’d argue this creates a few problems:
- Expensive in terms of animator hours. The hand-authored nature of these techniques often leaves them fragile, prone to error, and scatters keys in inconvenient places.
- Animators will tend to avoid these features because of the added friction, especially under a deadline. The exception is when they consciously want to use the feature in a specific animation; that is, the friction creates a tendency to interact with these setups only intentionally rather than experimentally/casually. In my experience that dynamic leads to worse results (in animation, or any other creative field). The casual/experimental mode of interaction is important because it helps create much better work: it lets you iterate quickly and try things without risk or investment, improving quality and sometimes uncovering approaches you otherwise wouldn’t consider.
- The costs associated with managing these sorts of rig features incentivize simplifying them or omitting them entirely.
Most of the time the actions required to create the illusion you want (draw order changes, attachment visibility, etc) are routine and predictable, and therefore great targets for automation! I've got two suggestions for approaches:
Suggestion #1: Action Constraints
Action constraints would be a new constraint type whose job is to map continuous inputs (position, rotation, etc.) to discrete actions (draw order, attachments, events, etc.). This would map really well to the wire parameter stuff on the roadmap, but could be done without it by using a dropdown to select a property and axis to reference from a given bone.
The constraint might work something like this:
The constraint interface is based on a set of configurable zones. The number and size of the zones are defined by a user-defined set of thresholds. The zones are implicitly ordered, with no gaps between them.
The zone interface lets the user split a zone by adding a new threshold in the middle, delete a zone, and adjust existing thresholds. The system prevents deleting the last two thresholds, which define the outer bounds.
Clicking on a zone reveals a configurable list of actions associated with it. I’m not sure what the best representation here might be; it’s easy to imagine interfaces for attachment changes or firing events, but harder to imagine how draw order might be handled. It might be possible to use animation keys as primitives here, piggybacking on the workflow that makes these items keyable in the first place.
The constraint is state-based. A zone is considered current if the tracked value was inside it on the previous frame. The constraint is inert until the current frame's value leaves the current zone and enters a new one, at which point the new zone's actions are executed.
The constraint would need to be toggleable at runtime, to allow for edge cases where manual control is desired.
Extra: Give every threshold a tunable deadzone. The deadzone would prevent transitioning to a neighbouring zone unless the value moved far enough to clear the threshold PLUS the deadzone. This would be useful for preventing oscillation and adding a little bit of give to the system.
Extra: For reliability's sake it might be helpful to detect if a zone was "skipped" during an update and execute its actions before considering the next neighbour, and so on.
Extra: A toggle or dropdown that changes how the constraint behaves when the value moves outside its defined range. Options might be nothing (outside the range is "no zone", and re-entering the range fires actions), extend (everything past a boundary is considered part of the nearest zone), or loop. The latter would be extremely helpful for things like rotating gun barrels or other cyclical features.
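To make the behaviour above concrete, here’s a rough sketch of how the zone logic might evaluate, written as standalone TypeScript with made-up names (none of this reflects the actual Spine API or runtimes). It covers the state-based transitions, deadzones, skipped zones, and the out-of-range modes:

```typescript
// Hypothetical sketch only: these names are invented for illustration.
type OutOfRangeMode = "none" | "extend" | "loop";

interface Zone {
  // e.g. attachment changes, draw order changes, events to fire
  actions: Array<() => void>;
}

class ActionConstraint {
  private currentZone = -1; // -1 means "no zone" (value outside the range)

  constructor(
    private thresholds: number[], // sorted; thresholds.length === zones.length + 1
    private zones: Zone[],
    private deadzone = 0,
    private mode: OutOfRangeMode = "none",
    public enabled = true // runtime toggle for manual-control edge cases
  ) {}

  update(value: number) {
    if (!this.enabled) return;
    const next = this.zoneFor(value);
    if (next === -1) { this.currentZone = -1; return; } // left the range: "no zone"
    if (next === this.currentZone) return; // inert until the value changes zone

    // Fire any skipped zones in order so fast-moving values stay consistent
    // (wrap-around ordering under "loop" is simplified here).
    if (this.currentZone !== -1) {
      const step = next > this.currentZone ? 1 : -1;
      for (let z = this.currentZone + step; z !== next; z += step) this.fire(z);
    }
    this.fire(next);
    this.currentZone = next;
  }

  private zoneFor(value: number): number {
    const lo = this.thresholds[0];
    const hi = this.thresholds[this.thresholds.length - 1];
    if (value < lo || value > hi) {
      if (this.mode === "none") return -1;
      if (this.mode === "extend") value = Math.min(Math.max(value, lo), hi);
      else {
        const span = hi - lo;
        value = lo + (((value - lo) % span) + span) % span; // loop back into range
      }
    }
    // Deadzone: stay in the current zone until the value clears its boundary
    // threshold PLUS the deadzone.
    if (this.currentZone !== -1) {
      const lower = this.thresholds[this.currentZone] - this.deadzone;
      const upper = this.thresholds[this.currentZone + 1] + this.deadzone;
      if (value >= lower && value <= upper) return this.currentZone;
    }
    for (let i = 0; i < this.zones.length; i++) {
      if (value <= this.thresholds[i + 1]) return i;
    }
    return this.zones.length - 1;
  }

  private fire(zone: number) {
    for (const action of this.zones[zone].actions) action();
  }
}
```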
Laying it out like this it’s clear that it’d be a complex feature. However, for complex rigs that need to have a wide range of capabilities (particularly hero/main character rigs that need to adapt to as many situations as possible and can expect to have large animation sets), adding something like this would substantially improve the animation ergonomics.
This would allow you to, for instance, grab the bone controlling a sprite’s tilt and have it automatically shift to images for other facings when it reaches the edge of its useful range, or have a rotating gun barrel assembly manage its own draw order without any manual keys. And those are just the simple use cases!
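As an example, a hypothetical configuration for that tilt case might look like this, using the sketch above and stand-in helpers rather than real Spine calls:

```typescript
// Stand-ins for illustration only; the real actions would come from whatever
// the zone editor produces (attachment changes, draw order keys, events).
const setAttachment = (slot: string, name: string) => console.log(`${slot} -> ${name}`);
const tiltBone = { rotation: 0 }; // stand-in for the tilt control bone

const facingSwitch = new ActionConstraint(
  [-90, -30, 30, 90], // thresholds: three zones across the bone's useful range
  [
    { actions: [() => setAttachment("head", "head-left")] },
    { actions: [() => setAttachment("head", "head-front")] },
    { actions: [() => setAttachment("head", "head-right")] },
  ],
  5,        // deadzone in degrees, to avoid flicker right at the boundaries
  "extend"  // past the ends, stay on the nearest facing
);

// Evaluated wherever constraints are updated each frame:
facingSwitch.update(tiltBone.rotation);
```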
Suggestion #2: Animation Constraints
It also occurs to me that there’s an alternative approach here, and I can’t tell if it’s a crazier idea than the above or not. If you added some way to layer an animation into a rig as a basic primitive—something that could be keyed, manipulated, and driven independently of animation playback—then I think that might not only serve my use case above but also be a substantially powerful addition in its own right, especially if paired with wire parameter functionality.
Here’s a way that might work:
The constraint is tied to a specific animation in the rig, determined at setup time.
A keyable mix value parameter determines the strength of the applied animation (default 100%; standard range 0-100%, but any value is permitted).
A keyable playback position parameter controls which frame of the animation is displayed. It could be shown as a normalized value (0-1, or 0%-100%), a frame number, or a time value; the UI could either display all of them at once (converting appropriately depending on which text box is typed into) or use a setup-time dropdown to pick one, for a less cluttered UI.
The constraint is evaluated as an additive animation played on top of the existing animation, according to the two values above.
EXTRA: A keyable “playback speed” parameter (default 0), where the constraint will automatically advance playback during animation at the speed it is set to. This’d let you, for example, have a looping animation of an engine rattling that is applied to every animation automatically, and which can have its strength and speed easily modified... all without cluttering your timelines with the frequent keys such an animation requires, or requiring you to manually shepherd the playback keys to create a looping animation.
EXTRA: Allow a specific bone to have its keys masked from playback. This would make it easier to create a wire parameter setup; the bone you expect to hook it up to is animated along with the things it’s meant to change, so you can see how it’s supposed to look live, but its keys are ignored when the animation is used by the constraint.
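A rough evaluation sketch of this constraint, again in standalone TypeScript with invented names (not Spine’s actual additive animation API), just to show how the keyable parameters might interact:

```typescript
// Hypothetical sketch, assuming a generic additive animation interface.
interface AdditiveAnimation {
  duration: number; // seconds
  // Applies this animation's keyed values on top of the current pose,
  // scaled by alpha (0-1), optionally skipping masked bones.
  apply(time: number, alpha: number, maskedBones: Set<string>): void;
}

class AnimationConstraint {
  mix = 1;          // keyable: strength of the applied animation (1 = 100%)
  position = 0;     // keyable: normalized playback position, 0-1
  speed = 0;        // keyable extra: auto-advance rate, in normalized units/second
  maskedBones = new Set<string>(); // extra: bones whose keys are ignored

  constructor(private animation: AdditiveAnimation) {}

  // Called after the main animation has been applied to the skeleton,
  // so the constraint layers additively on top of whatever is already posed.
  update(deltaSeconds: number) {
    if (this.speed !== 0) {
      this.position = (this.position + this.speed * deltaSeconds) % 1;
      if (this.position < 0) this.position += 1; // keep looping playback in range
    }
    if (this.mix === 0) return;
    const time = this.position * this.animation.duration;
    this.animation.apply(time, this.mix, this.maskedBones);
  }
}
```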
I think this ends up simpler, more elegant, and more powerful than my other suggestion... at least on the surface. I suspect the devil is in the details on this one and the underlying implementation might be a lot hairier.
Still, here’s why I think this alternative might work, and might be a powerful feature generally:
For my use case, it provides an alternative interface for managing complex transitions: don’t make the keys yourself, just trigger a reusable sub-animation by keying the constraint. A rotating gun barrel assembly can be a looping animation of just that rotation at constant speed, slot ordering included; by keying it on appropriate curves you could spin it up, spin it down, and have it operate at any speed you please. Additionally, if the structure changed and you needed to rearrange some things (say, adding an extra barrel) then the animation could be updated once and the change propagated everywhere that used it.
Combined with wire parameters, it hits pretty much everything the other suggestion does except for deadzones; the movement of a bone can just be hooked up to an animation playback parameter that defines everything that needs to happen as it moves through whatever transition ranges it requires.
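A hypothetical wire-parameter hookup, reusing the AnimationConstraint sketch above, might map the barrel bone’s rotation onto one loop of a spin animation (barrelSpinAnimation and barrelBone here are stand-ins, not real rig objects):

```typescript
// Stand-ins: barrelSpinAnimation would be an authored looping animation that
// contains the rotation keys and the draw order keys for the barrel assembly.
const barrelSpinAnimation: AdditiveAnimation = { duration: 1, apply() {} };
const barrelBone = { rotation: 0 }; // stand-in for the driving bone

const barrelSpin = new AnimationConstraint(barrelSpinAnimation);

function evaluateConstraints(delta: number) {
  // Map 0-360 degrees of bone rotation onto one full pass of the animation.
  barrelSpin.position = (((barrelBone.rotation % 360) + 360) % 360) / 360;
  barrelSpin.update(delta);
}
```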
Since it’s not just discrete actions but also anything else that can be keyed in animation, it also allows the compression of bone relationships that might be extremely complex to set up using existing IK and transform constraints. You just... animate it how it's supposed to react, and then it does that, albeit in a less programmatic way. For a complex stance shift, a big gear chain, etc, a single animation could describe the process (or even just the most particular parts of the process) instead of a big nest of constraints.
Such a system could be used to describe a lot of commonly reused actions. For example, a head rig could have major emotional states and phonemes for lip-syncing in a set of small animations built this way, combined in animation for powerful facial controls. Though it’d be operating on keyable primitives instead of mesh vertices, such a setup might feel a lot like a 3D facial rig that used blendshapes for facial performance.
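As a sketch of how that might feel in practice (again assuming the AnimationConstraint sketch above, with stand-in animations), the blendshape-like control would just be a set of constraints whose mix values the animator keys:

```typescript
// Stand-ins: each would be a small authored additive animation in the rig
// (an expression, a brow pose, a phoneme, etc.).
const stub = (duration: number): AdditiveAnimation => ({ duration, apply() {} });
const smile = new AnimationConstraint(stub(0.5));
const browRaise = new AnimationConstraint(stub(0.5));
const phonemeOh = new AnimationConstraint(stub(0.25));

// The animator keys only these mix values in the main animation; the facial
// detail lives in the reusable sub-animations.
smile.mix = 0.8;
browRaise.mix = 0.3;
phonemeOh.mix = 0.5;

// Evaluated after the main pose, like any other layered constraint.
for (const layer of [smile, browRaise, phonemeOh]) layer.update(1 / 30);
```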
You could also use this system for a set of one-offs that pull complex parts of a performance out into their own animation blocks. That would keep their keys from cluttering the rest of the animation, let you animate the sub-action independently on its own, and let you easily reposition, scale, and curve its effect using only a single channel on the dopesheet.
That line of thinking might be extended into closely related features. One would be animation layers, which I’ve seen in other animation packages. When I’ve used them they were pretty handy; really useful for non-destructive corrections and secondary animation.
Another extension would be to create a special key type that allows you to insert an animation into another by reference; essentially a temporary version of the animation constraint that doesn't exist outside the animation. Potentially useful since it wouldn't clutter the constraint list, though adding such a system means grappling with nested references and potential cyclical dependencies.
I haven’t the faintest idea how complex this would look under the hood, though at least on some level I imagine it’d be an extension of the already in-place additive animation system. If it's actually feasible, it’d add a lot of capability for reusing work and automating complex parts of the animator workflow… and I imagine it could be put to some truly creative uses I haven't even considered yet!
…Aaaaand I’ve written another essay. I had some suggestions for reworking draw order as well, but since that’s tangentially related at best (and this is long enough already) I’ll save it for another post. Apologies for the wall of text!