- It would be cool to see which files have been added to the drag and drop zone when they're dropped in, so I'd know they were successfully uploaded without pressing the button below to check.
Yes, I can do something about that.
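Something along these lines could do it: show each dropped file's name right inside the zone. This is only a sketch, not the prototype's actual code; the `drop-zone` element id, the `.file-list` markup, and the `formatFileList` helper are all made up for illustration.

```javascript
// Build one line per dropped file: "name (size in KB)".
function formatFileList(files) {
  return Array.from(files)
    .map((f) => `${f.name} (${Math.ceil(f.size / 1024)} KB)`)
    .join("\n");
}

// Browser-only wiring; skipped when run outside a page.
if (typeof document !== "undefined") {
  const zone = document.getElementById("drop-zone");
  // Prevent the browser from opening the file instead of dropping it.
  zone.addEventListener("dragover", (e) => e.preventDefault());
  zone.addEventListener("drop", (e) => {
    e.preventDefault();
    // List the dropped files inside the zone itself.
    zone.querySelector(".file-list").textContent =
      formatFileList(e.dataTransfer.files);
  });
}
```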
- I'm not sure what happened, but when I imported my latest rig, the import wouldn't load all of my assets. It worked when I used an older file though, so it may be an issue with my file and not the prototype (the same thing happened recently when I used that file with Rhubarb as well). But I thought I'd at least mention it (see bad import jpg)
I don't know what caused the bad import either. It would be hard to figure out without the files or an error log.
- Being able to adjust the strength of individual parameters was really helpful here. Before I recorded, my eyelids wouldn't open to their full open position, so I adjusted that a bit.
Yeah, not everyone is comfortable or able to stretch their facial features to the extreme. Even if they could, their face would fatigue eventually.
- I made a rough test of some eye blinks which I show in the video. I experimented with turning alpha to 0 on the lower lids so they would disappear when the eyes opened. It would be cool if there were a way to hide the lids just as they open all the way.
- It would be even cooler if there was a way to use an entire animation as the motion rather than a single key (unless I set mine up incorrectly). For example, having one animation control the entire left eye blink. The first frame would be the eye completely closed, and the last frame would be the eye completely open. Then the frames in between could be refined to allow custom deformation rather than a linear straight shot from one position to the next. This could also help in selecting the best time to adjust draw order. I'm thinking of the Moho rigging process as inspiration here.
The problem is that the live timeline (from face tracking) would conflict with the animation timeline under the animation mix alpha setup I have now. I think it is still possible, but it would require the calculations to be applied to the animation track timeline itself, with the animation mix alpha held constant at one. This would require me to create a separate copy of the application to experiment with. Wait for the next update. :nerd:
I tested with only one eye, but your request to use more than one keyframe in the Spine Vtuber Prototype is possible! I keyframed the halfway mark on the timeline with a green eye and the full way with a red eye. I had set the FPS to 100, so each frame is 1% of the movement range. You can set the FPS to any value, but you have to keyframe within one second. I still need to make the changes for the rest of the animation tracks and update it on itch.io :cooldoge:.
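In case it helps picture the change: with the mix alpha pinned at one, the tracking value just drives the track's playhead along the one-second timeline. A rough sketch of that mapping (my own guess at the shape of it, not the prototype's code; the 0–1 `trackingValue` and `duration` names are assumptions):

```javascript
// Map a normalized tracking value (0 = eye closed, 1 = fully open)
// onto the time of a one-second animation track, instead of using
// it as a mix alpha. Pure math so it can run anywhere.
function trackingToTrackTime(trackingValue, duration) {
  // Clamp so tracker noise can't push the playhead off the timeline.
  const t = Math.min(1, Math.max(0, trackingValue));
  return t * duration;
}

// At 100 FPS over one second, each frame covers 1% of the range,
// so a tracking value of 0.5 lands on frame 50 of the blink.
// With a Spine runtime this would drive something like:
//   trackEntry.alpha = 1;       // mix alpha held constant at one
//   trackEntry.timeScale = 0;   // stop normal playback
//   trackEntry.trackTime = trackingToTrackTime(v, duration);
```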
Spine Vtuber Prototype 1.0.5 update
- Moved the names of uploaded files from the bottom of the page into the Drag and Drop Zone area. This makes it easier to see which files are loaded.
- Changed how the calculations are applied. This allows animators to use multiple keyframes on the timeline for each animation track. Animating within one second is recommended. Previously, each track only allowed one keyframe on the timeline.
https://silverstraw.itch.io/spine-vtuber-prototype
I have also updated the Spine Vtube Test Model to reflect the change in 1.0.5.
https://silverstraw.itch.io/spine-vtube-test-model