
Script Data Upgrade

Discussion in 'Experimental Scripting Previews' started by superpig, Aug 25, 2016.

  1. superpig (Unity Technologies)

    Joined: Jan 16, 2011
    Posts: 4,660
    Hello,

    Announcing a new feature preview for you all: Script Data Upgrade!

    Download Link (special build of 5.4.0f3 with the extra feature in it)
    Documentation

    Here's the overview from the documentation, so you can see what it is:

    There's a ton more info in the link above.

    As usual with experimental builds:
    • Please back up your project before trying this build.
    • Please post any feedback or issues you have to this forum.
    The feature is still very much in the "we're not sure if this is the right way to do this" stage, so give us your feedback - is it useful? Uninteresting? Horrible? Amazing? The more feedback we get, the more confidently (and quickly) we can move forward...
     
    Last edited: Aug 25, 2016
  2. Edy

    Joined: Jun 3, 2010
    Posts: 2,510
    This is really cool! Which version is this feature expected to land in?
     
  3. SweetJV

    Joined: Aug 22, 2016
    Posts: 11
    I just took a quick scan over the docs. Looks fairly intuitive and similar to upgrade schemes I've worked with in the past. For now, I have two critiques to make:

    1. I would highly advise against using a simple int for the version number. For internal team use that might be fine, but for anything released to the public (ex. Asset Store), people tend to use compound version numbers (ex. Unity 5.4.0f3). I imagine it should be trivial to make a Version type that simply combines 4 ints to form a more robust version. That way, we can use equivalent logic internally instead of mentally mapping DataVersion16 to ReleaseVersion 1.3.1; we could just say [SerializeVersion(1,3,1,0)].
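    A compound version attribute along these lines might look like the following. This is only a sketch of the proposal: [SerializeVersion] is the preview's attribute name, but the four-int constructor shown here is the suggestion, not something in the actual build.

    ```csharp
    using System;

    // Hypothetical four-component overload of the preview's attribute.
    [AttributeUsage(AttributeTargets.Class | AttributeTargets.Struct)]
    public class SerializeVersionAttribute : Attribute
    {
        public readonly int Major, Minor, Patch, Build;

        public SerializeVersionAttribute(int major, int minor, int patch, int build)
        {
            Major = major; Minor = minor; Patch = patch; Build = build;
        }
    }

    // Usage mirrors the public release number instead of an opaque counter:
    //   [SerializeVersion(1, 3, 1, 0)]
    //   public class MyComponent : MonoBehaviour { ... }
    ```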

    2. This system would be much easier to take advantage of if we could use it for any class/struct, instead of only MonoBehaviour/ScriptableObject. I understand why it's easier to implement for those types. However, it strikes me that the easiest way, and perhaps even the right way, to offer this functionality to other types is simply via an interface. If we implement the interface, you detect it and call it. Seems like that would be straightforward, but please correct me if I'm wrong.

    Anyway, I'm very happy to see you guys working on this. And I look forward to being able to use it :cool:
     
    rakkarage likes this.
  4. superpig (Unity Technologies)

    That depends on the feedback you guys give us. If we get a lot of "yep this is great please ship it now" then we can land it soon, maybe even 5.6, while if we get a lot of "eh this needs more work" then it'll take longer.

    Hmm, I'll think about that. Under the hood this just plugs into the versioning system we already have for engine components, and that just uses a single int for storage, so we would have to combine the values provided into a single number. It would also make it more difficult to write upgrade code that compares things to versions. But maybe there's a good solution...

    From your POV I think the existing design works fine for non-MonoBehaviour/ScriptableObject types, i.e. you just write a class that implements IScriptDataUpgrader<VanillaCSharpClass>. The reason I didn't already do it is just that it's hard to implement on the engine side. It's on the list as "this would be good," but unless a lot of you think that the system is basically useless without it, I wasn't planning on doing it for the first release.
     
  5. SweetJV

    Ah, I see. Given that restriction, I suspect simply packing 8 bit ints into a 32 bit value, similar to Color32, would work out OK. 0~255 for each version part seems reasonable to me. But yeah, definitely worth giving a bit more thought to.
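    For illustration, a packing scheme along the lines suggested might look like this. It's a sketch only; the PackedVersion helper name and layout are made up here, not part of the preview.

    ```csharp
    // Sketch of packing four 0-255 version components into the engine's
    // single-int version storage, similar in spirit to Color32. Packing by
    // significance means plain integer comparison still orders versions
    // correctly (keep major below 128 so the sign bit stays clear).
    public static class PackedVersion
    {
        public static int Pack(int major, int minor, int patch, int build)
            => (major << 24) | (minor << 16) | (patch << 8) | build;

        public static void Unpack(int v, out int major, out int minor,
                                  out int patch, out int build)
        {
            major = (v >> 24) & 0xFF;
            minor = (v >> 16) & 0xFF;
            patch = (v >> 8) & 0xFF;
            build = v & 0xFF;
        }
    }
    ```

    With a layout like this, upgrade code can still compare against a single number, e.g. version < PackedVersion.Pack(1, 3, 1, 0).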

    Not sure I completely understand. Are you saying this initial release already allows us to do this? Or that this an example of how you would implement it?

    FWIW, I don't think it'd be useless if it were restricted to Mono/Scriptable. But I do think it makes the barrier to entry higher.

    I suspect many of the people that build more complex components and systems tend to try to keep most of their data in "pure" simple classes, and then hook them up together in a Unity-derived Mono/Scriptable class. And in more complex cases (ex. Node Editors, Particle Systems, etc.) it may be common to have your own class hierarchy. And since C# only allows single-inheritance, I could imagine having to introduce something like ScriptableObject into the entire hierarchy would require a bit more work and thinking. Whereas, if you just decide that you want to change your NoiseTextureNode or ParticleVectorField data, it would be much more straight-forward to introduce this update scheme as needed, rather than rework your entire system.

    Honestly, I do think this stuff will be useful either way. But I also think the more complex cases are probably the ones that would benefit the most from this system. So, food for thought, I guess.
     
  6. superpig (Unity Technologies)

    I mean that's how it would be implemented. From your POV, very little would change - we would just lift the restriction that says "we only execute upgraders for MonoBehaviour/ScriptableObject subclasses." (From our POV, of course, there's a lot more that would need to change internally).

    While that's all quite possible, the question is - how are people serializing their data if they're not already using MonoBehaviour/ScriptableObject? If they're using other systems for serialization, then I think it's going to be up to those systems to provide a way to handle data that doesn't "fit."
     
  7. Edy

    To me, this is just great as is, please ship it now.

    A single int as the version number is perfect for me. Additional overloads that accept several version numbers (2 shorts, or 4 bytes) and combine them into a single int may also be useful for others. This could be added later as an improvement.

    MonoBehaviour/ScriptableObject support is a good starting point, but I think that eventually it should work with any class marked as [Serializable]. I guess supporting this would be a lot easier, as [Serializable] types use the same serialization scheme as MonoBehaviour/ScriptableObject (AFAIK). I wouldn't really care about vanilla C# classes or structs.
     
  8. SweetJV

    Oh yeah, I agree. In case there was any confusion, that is what I meant also. I probably should've been more explicit. When I mentioned "pure" classes, I was really just talking about stand-alone classes that are marked with [Serializable].

    I didn't mean to imply this upgrading scheme should work with vanilla classes, or classes that use some alternative serialization scheme (ex. JSON).
     
    Edy likes this.
  9. superpig (Unity Technologies)

    Just to be clear: I believe the current implementation will work correctly with MonoBehaviours/ScriptableObjects that contain custom [Serializable] structs/classes. So you can do stuff like:

    Code (csharp):

        class MyMB : MonoBehaviour
        {
            [Serializable]
            public struct NewStructLayout
            {
                public int newFieldName;
            }
            public NewStructLayout structMember;
        }

        class MyMBUpgrader : IScriptDataUpgrader<MyMB>
        {
            [Serializable]
            public struct OldStructLayout
            {
                public int someField;
            }

            public OldStructLayout structMember;

            public void Upgrade(ref MyMB target, int version)
            {
                target.structMember.newFieldName = structMember.someField;
            }
        }
    As long as you're able to write an upgrader at the same 'level' as a MonoBehaviour/ScriptableObject, you can declare struct and class members inside it and then use them to populate class and struct members on the target object.

    The thing that doesn't work is to have an IScriptDataUpgrader<SomeStruct> that is applied independently of any MonoBehaviours/ScriptableObjects.
     
    mcmorry likes this.
  10. Edy

    Awesome! Any chance the current feature set could land in the 5.5 alphas/betas? I don't see the need to extend it further at this stage, just to ensure the current features work as specified.
     
  11. superpig (Unity Technologies)

    It's too late for 5.5, and I'd like to see a bit more widespread usage and feedback, but so far things are looking good...
     
    Edy likes this.
  12. Can-Baycay

    Joined: Dec 14, 2010
    Posts: 27
    I'm just here to say that I would really appreciate having this functionality in hand. It's not like I would use this every day, but I've felt so much pain in the past figuring out how to copy and paste lots of already-existing values onto a renamed variable or a restructured class. So when it's needed, this feature will kick in like a painkiller.
     
  13. LightStriker

    Joined: Aug 3, 2013
    Posts: 2,717
    Honestly, someone will one day have to take a good look at the whole serialization system - the whole MB/SO limitation - and come up with something better... like most other engines in the world have.

    And at the same time allow weak references to resources, interfaces, polymorphism, in-inspector instance creation, cross-referencing, reflection-based inspecting, etc.
     
  14. Sycobob

    Joined: Feb 1, 2013
    Posts: 78
    I like the concept, but I think this needs some revisions before it will be useful for my usual workflow.

    • Data changes are a global, one time operation and I want to treat them as such.

      I don't want to have to commit and maintain upgrader code for something that should be a one-time operation. This bloats the project, increases compile/reload time, and leaves game data in a mismatched state.

      You mention that the 'big sweep' approach is error prone and 'terrible for VCS situations'. I would argue that the rolling upgrade is also error prone. What if I remove the upgrader because I think all the data has been upgraded, but it turns out I'm wrong? It's not clear why you consider the big sweep terrible for VCS, but I'd also argue that the rolling upgrade is terrible for VCS. It's already super annoying when someone renames a prefab/material/etc. and the next time I update through VCS I get random changes to the serialized data. This upgrade approach exacerbates that problem.

      Big sweep is the solution that is the most desirable and useful to me. I want data upgrades to be a one time operation that I don't pay a rolling cost for throughout project development.


    • It's important to me that this work for arbitrary classes, not just MonoBehaviour/ScriptableObject.

      I like to package data up into structs. I also tend to reuse those structs in multiple places (e.g. I don't just drag an AudioClip into a serialized field. There's a struct with the clip, a volume range, and a pitch range. A huge number of places in the game use this struct.) If I want to change the data in a reused struct, not being able to target the type directly is going to lead to a very bad workflow. Not handling inheritance well is similarly bad for workflow, but I don't use a lot of inheritance, so that's less of an issue for me personally.


    • "All calls to OnAfterDeserialize() will be made before any data upgrades are performed" is questionable

      This sounds like a "gotcha" in the making. When I get that callback I expect to have all my data completely deserialized and correct. Instead, I might get old, invalid data and start working with it and that's bad. How will this work with e.g. the prototypical example of making a serializable dictionary by using serialization callbacks to maintain two lists? Will I end up filling a dictionary with garbage data, then the upgrader running and the dictionary being left with broken data? Will OnAfterDeserialize have to run a second time with the 'real' data?

      A surprisingly large amount of my time is spent dealing with weird corner cases of Unity features (random example: OnDrawGizmos running before data has been deserialized) and I'm very averse to potentially adding more.


    • I'd like to be able to sanitize "lost values" completely

      You mention being able to capture lost values, but this is effectively just another way to rename a serialized field. It's a better workflow if you're doing a rename and a data upgrade at the same time (though that's maybe a questionable thing to do to begin with). However, I'm struggling to recall a use case where I needed to retrieve a serialized value from a field that no longer exists.

      What I have run into is assets having unexpected dependencies because of old, invisible (and unwanted) serialized data. For example, I recently did some project-wide asset processing that required calculating asset dependencies. Materials were pulling in a bunch of Textures they weren't actually using and in fact weren't even in the shipping game. This was because changing the Shader on the Material will still keep all the Texture references for fields that no longer exist on the new Shader. We also had a similar issue with prefabs because property modifications were being stored in the prefab (grrr) and sometimes those were referencing other assets.
      (I never checked, but I sure hope those types of stale references don't cause unused assets to be included in a build, or worse, loaded into memory.)

      I'd very much like to be able to remove all unused data from objects. Though, this needs to be triggerable for the whole project to really be useful. Ideally there would be a project-wide setting to disable the retention of unused fields in the first place.
     
    mcmorry and SweetJV like this.
  15. superpig (Unity Technologies)

    Great feedback @Sycobob, thank you!

    How about if I add some kind of 'reserialize all project data at latest version' menu item? Then you'd write your upgrader, test it out on a couple of objects, and when you're happy, use that menu item to scan through all assets and scenes in your project and force-upgrade them. Then you could delete the upgrader code immediately.

    Doing that means you'd lose the ability to e.g. upgrade objects in a .unitypackage that you exported before the upgrade, or to copy in old revisions of files you pulled from VCS. But I assume you're OK with that.

    As a side effect, this menu item would also upgrade the data for engine components that have changed data format, too.

    The 'big sweep' is bad for VCS because it means you potentially have to commit changes to files all over the project, and other people are likely to be working on those files - so depending on your system you are likely to run into either merge conflicts with those people, or you'll be exclusively locking the entire project to do the upgrade and blocking other people from working.

    Are you saying that you "start working with" your data inside OnAfterDeserialize? What kind of stuff do you do? That method is not allowed to access Unity APIs as it's often running on a background thread, so I'm wondering how much you can really do.

    In the case of the serialisable dictionary: it depends on the upgrade you did. If you, for example, changed your value type from an int to a string, then you'll have lost the values entirely - while keeping the keys, so your OnAfterDeserialize callback needs to be prepared for that kind of inconsistent state. That's how things already work today (in mainline Unity).

    So you'd either decide "forget about keys[i] where i >= values.Length" and have an empty dictionary, or you'd decide "give those keys a default value" and have a dictionary of empty-string values. Then your upgrader would run, and it could go and overwrite those values in the dictionary directly. OnAfterDeserialize wouldn't be called a second time because the upgrader isn't changing the target object via deserialisation.
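    A defensive OnAfterDeserialize of the kind described here might look something like this. It's a sketch of the common two-list serializable dictionary pattern, not code from the preview build; the class and field names are illustrative.

    ```csharp
    using System;
    using System.Collections.Generic;
    using UnityEngine;

    [Serializable]
    public class SerializableStringDict : ISerializationCallbackReceiver
    {
        [SerializeField] List<string> keys = new List<string>();
        [SerializeField] List<string> values = new List<string>();

        public Dictionary<string, string> Dict = new Dictionary<string, string>();

        public void OnBeforeSerialize()
        {
            keys.Clear(); values.Clear();
            foreach (var kv in Dict) { keys.Add(kv.Key); values.Add(kv.Value); }
        }

        public void OnAfterDeserialize()
        {
            Dict.Clear();
            for (int i = 0; i < keys.Count; i++)
            {
                // After a value-type change the values list may be shorter
                // than keys; fall back to a default instead of throwing.
                Dict[keys[i]] = i < values.Count ? values[i] : string.Empty;
            }
        }
    }
    ```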

    What about some of the examples given in the doc - e.g. converting from storing Euler angles as separate variables, to storing a Quaternion?
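    For reference, that doc example could be sketched roughly as below, using the preview's IScriptDataUpgrader pattern. The ProjectileLauncher name and bearing/elevation fields come up elsewhere in this thread as the documentation's example; the aim field name and the exact axis mapping are assumptions made up for illustration.

    ```csharp
    // Hypothetical sketch of the docs' upgrade from two Euler angle fields
    // to a single Quaternion. Field names and axis convention are guesses.
    class ProjectileLauncherUpgrader : IScriptDataUpgrader<ProjectileLauncher>
    {
        // Old serialized fields, captured at upgrade time:
        public float bearing;
        public float elevation;

        public void Upgrade(ref ProjectileLauncher target, int version)
        {
            if (version < 3)
                target.aim = Quaternion.Euler(elevation, bearing, 0f);
        }
    }
    ```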

    There's actually a very limited number of cases where 'unused' data is stored - specifically, cases where we are storing maps of things (e.g. the materials case) or referencing fields by string (e.g. PropertyModifications in prefabs). Basically cases where the actual 'used' values depend on the state of a different object. Keeping this stuff is, in general, by design, as it makes Unity more robust against a range of synchronisation problems. But I can see the use case for stripping away this stuff from time to time when you want to e.g. find unused assets in your project.
     
    SweetJV likes this.
  16. Baste

    Joined: Jan 24, 2013
    Posts: 6,338
    This is my birthday and Christmas at the same F***ing time!

    Seriously, this is a great tool, which I will definitely use! The FormerlySerializedAs + OnBeforeSerialize + old-field-with-a-deprecated-comment solution we've had up to now is annoying, feels unsafe, and leaves a bunch of garbage code around forever.

    I'd like to see a version that works directly with structs. It's not going to be a very big problem - most of the structs I use are generally only serialized in one Scriptable/Mono, but writing the upgrader directly for the struct would make the whole deal much more readable. A solution that requires more boilerplate, but allows me to upgrade a struct directly would be fine.

    Requests:
    1: I'd like to see a way to make this testable. Essentially, I want to be able to easily write a unit test that checks "given the old format with this data, I should get the new format with that data".

    2: Can the API please be exposed? So I could write something like UnityEditor.ScriptUpgrade(myObject)? I can see instances where that'd be useful, and there's no reason not to expose it. A lot of very useful editor things are hidden from the users for no good reason, and the policy seems to be "don't expose unless necessary" (Why do I have to do reflection to clear the console?).

    3: An automatic, project-wide upgrade would be nice. The VCS situation will be bad no matter what, but I prefer having a bunch of scenes where my co-workers can select "use my version" rather than have a scene or two extra show up in every commit for a month.
    That's what happens when you do format changes internally. Since 5.4, every single time we open a scene, look at a prefab, or do anything else, the log is filled with a bunch of instances of the field m_LocalEulerAnglesHint being added. I'd really like an "apply to project" button. Let me deal with the VCS mess if I want!
     
    SweetJV likes this.
  17. Sycobob

    I'd be very happy with that. Note that it would need to be able to update objects in scenes to really solve the problem.

    I don't have a strong opinion on .unitypackages because we don't use them at the moment. I've always considered them to be an 'export' more than an asset, so yes, in theory I'm OK with those not getting upgraded. Though, I would generally expect a data upgrade feature to be able to upgrade any data in the engine and work cleanly with all systems.

    On that note, will this work with AssetBundles? Again, not something we use, but I'm curious for completeness' sake.

    To add context to my statements, here's how I've handled data upgrades that couldn't be done 'in place' in the past.
    1. Add a new field that completely replaces the previous one.
    2. Add code to OnValidate to migrate the old data into the new field.
    3. Lock down all scenes (the only files likely to have conflicts for us).
    4. Let the project compile and reload (to trigger OnValidate).
    5. Open and save each scene (to trigger OnValidate) (automated).
    6. Remove the old fields.
    7. Repeat steps 4-5.
    8. Add calls to the upgrade logic to OnEnable in ScriptableObjects.
    9. Let the project compile and reload (to trigger OnEnable).
    10. Commit everything (including the upgrade code, just in case).
    11. Release the scenes.
    12. Delete the upgrade code.
    The above is certainly a 'worst case' scenario. Often our data is in either ScriptableObjects or prefabs+scenes, not both. That means I can usually skip a couple of those steps, but sometimes the data really is in both places. Going back to the audio example above, while I would default to sticking everything in a ScriptableObject, maybe it still makes sense to be able to configure a sound effect directly on a prefab or prefab instance in some cases (I'm not actually sure. I haven't been able to do a ScriptableObject driven project from day 1 yet.).
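    Step 2 of that list, the OnValidate migration, might look roughly like this. The component and field names are invented around the audio-settings example mentioned earlier; this is a sketch of the workflow, not anyone's actual code.

    ```csharp
    using System;
    using UnityEngine;

    [Serializable]
    public struct AudioClipSettings
    {
        public AudioClip clip;
        public Vector2 volumeRange;
        public Vector2 pitchRange;
    }

    public class Footstep : MonoBehaviour
    {
        public AudioClip clip;                 // old field, deleted in step 6
        public AudioClipSettings clipSettings; // new replacement field

        void OnValidate()
        {
            // One-way migration: only runs while the old field still has data.
            if (clip != null && clipSettings.clip == null)
            {
                clipSettings.clip = clip;
                clip = null;
            }
        }
    }
    ```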

    This new upgrade system will certainly streamline that process in all cases so I'd be happy to have it.


    I see what you mean. The rolling upgrade is a trade-off that solves that problem in exchange for others. I'm sure it's the right trade-off for some and not others. When I make a data change the default assumption is that I need to lock down that data until the upgrade is done. It can be tricky, but at our scale that has been the most efficient way to work (~10 people, same office, same hours). We also keep most of our data in ScriptableObjects so the vast majority of our changes will be able to be localized to a relatively small number of objects (in the future, at least. It took us a while to realize the perils of storing data in prefabs and scenes so we still have a lot of scattered data).

    In addition to the issues I mentioned previously, I've noticed that when there is extra noise in VCS from files changing 'randomly' people tend to make more mistakes. e.g. forgetting to add files they should have added, or adding files they should not have.


    That was speculation on my part and I don't have a concrete use case in mind. You may be right, it's not clear there will be a practical difference if the data is upgraded before or after the callback. It may actually be simpler to let the upgrader only care about the final, intended data format and not the intermediate format. That said, I can imagine it being very annoying to have to do a data upgrade that affects how the OnBeforeSerialize/OnAfterDeserialize will operate. The actual data flow is pretty weird if you think about it:
    1. OnAfterDeserialize_OLD runs on old format data
    2. Data upgrade logic transforms old data into new data
    3. Save
    4. OnBeforeSerialize_OLD runs on new format data
    5. Update OnAfterDeserialize/OnBeforeSerialize to work with new format
    6. (recompile/reload)
    7. OnAfterDeserialize_NEW runs on new format data
    The weirdness here is on step 4, where code for the old format is operating on the new format. You could add the upgrader and update OnAfterDeserialize/OnBeforeSerialize at the same time, but then you have OnAfterDeserialize_NEW running on old format data. If you add the upgrader and update only OnBeforeSerialize then update OnAfterDeserialize in a second step you'd get the 'correct' data flow, but that's not at all intuitive. That workflow is going to be awkward no matter what you do as the end user.

    But again, without an actual use case it's just speculation (it would likely have to be something that cared about the value of the data, like breaking a single list into smaller lists based on thresholds in some data field. Here, changing the units of the data field would change the results of OnBeforeSerialize and cause incorrect data to be serialized).


    That's a valid use case, for sure. I was remarking that I have trouble recalling when I've personally encountered those kinds of cases. But the more I think about it, I've had to do those types of upgrades multiple times; I've just circumvented the need to retrieve the lost field by having an intermediate step where the old field and the new one exist at the same time (the workflow I described above).


    I concur with @Baste's API request. I recognize there are reasons not to expose it (design, QA, and maintenance costs), but I agree with the sentiment that a lot of power is withheld from the end user by keeping so much internal functionality private. I pretty regularly dig through the Unity DLLs to understand or abuse things going on under the hood, and I often see things I really wish I had access to. I also often want more control over how/when the engine decides to do things. I realize that's super nebulous, but I don't have examples prepared. It's not critical, it's just nice to have. Even if it's confined to the 'unsupported' namespaces and bug reports aren't allowed.


    Will instances of classes decorated with [SerializeVersion] be able to retrieve the version number from the serialized data? I would like to be able to assert data is the expected version and that requires access to the actual serialized version number, not just the constant in the attribute. This is to handle cases where the data wasn't upgraded as expected, such as loading old AssetBundles at runtime, .unitypackage's getting missed, upgrader code being removed before everything was upgraded, old assets being pulled from VCS, etc.
     
    Last edited: Aug 29, 2016
    SweetJV likes this.
  18. Sycobob

    Also, thank you for building this preview and providing a place for discussion. I very much like being able to understand and give feedback on features as they are developed. It seems likely that this will lead to better usability on new features out of the gate.
     
    SweetJV likes this.
  19. superpig (Unity Technologies)

    Noted. For now my thinking is still that we could ship version 1 without it, and that would still have value to people, but I definitely get the message that it needs to be added eventually.

    That's an excellent idea. I'll see what I can do - at the very least an example of how to write an EditorTest that would do it.

    The problem is that there is no API. At no point do I ever make a call like "UnityEditor.ScriptUpgrade(obj)" - it is all detected and handled during the deserialization process, taking advantage of state that only exists at that time. So it's not that there is something internal that I can just expose.

    So the question is more: how would an API like you're describing actually work? From the C# point of view, what type is 'myObject' - the old type, or the new type?

    I guess that both 1 and 2 might be solved by the same API, something like:

    T EditorUtility.DeserializeObjectAtVersionIntoCurrentVersionOf<T>(object obj, int version, T target)

    where you pass in a serializable object (of any class, unrelated to T, but that we 'assume' you picked something sensible for) and the version number you want to treat it as having, and the object you want to deserialize into. Then internally we'd serialize the old object, fix up the version number, then deserialize it again via the upgrader pipeline. Pretty easy to see how you'd write tests with that, and hopefully that covers any other weird nebulous use cases. What do you think?
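    A unit test against that proposed API might read something like the following. Everything here is a sketch of the proposal: the API does not exist yet, and OldProjectileLauncherData, the version number, and the aim field are invented for illustration.

    ```csharp
    using System;
    using NUnit.Framework;
    using UnityEditor;
    using UnityEngine;

    // Hypothetical old layout, unrelated to the current ProjectileLauncher type.
    [Serializable]
    class OldProjectileLauncherData
    {
        public float bearing = 90f;
        public float elevation = 45f;
    }

    class UpgraderTests
    {
        [Test]
        public void Version2DataUpgradesToQuaternion()
        {
            var target = new GameObject().AddComponent<ProjectileLauncher>();

            // Treat the plain object as version-2 serialized data and run it
            // through the upgrader pipeline into the current layout.
            target = EditorUtility.DeserializeObjectAtVersionIntoCurrentVersionOf(
                new OldProjectileLauncherData(), 2, target);

            Assert.AreNotEqual(Quaternion.identity, target.aim);
        }
    }
    ```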

    Yep, gotcha. This is easier to do than struct/class upgrading so I'll probably tackle it first.
     
  20. superpig (Unity Technologies)

    Yes, the feature would need to actually open every scene in your project and re-save it to cover everything. This is actually something you could write today - just looping through all your assets and marking them dirty + opening each scene in turn and marking it dirty and saving it - but it feels to me like something that should be built in. (Bear in mind that other staff at Unity might disagree with me, so we'll see what happens).
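    The loop described could be sketched like this against the standard 5.x editor APIs. It's an untested sketch of the idea, to be run on a backed-up project; EditorSceneManager is the 5.3+ scene API.

    ```csharp
    using UnityEditor;
    using UnityEditor.SceneManagement;
    using UnityEngine;

    public static class ForceReserializeAll
    {
        [MenuItem("Tools/Force Reserialize All Assets")]
        static void Run()
        {
            // Non-scene assets: load, mark dirty, save.
            foreach (var path in AssetDatabase.GetAllAssetPaths())
            {
                if (!path.StartsWith("Assets/") || path.EndsWith(".unity"))
                    continue;
                var obj = AssetDatabase.LoadMainAssetAtPath(path);
                if (obj != null) EditorUtility.SetDirty(obj);
            }
            AssetDatabase.SaveAssets();

            // Scenes have to be opened and re-saved individually.
            foreach (var guid in AssetDatabase.FindAssets("t:Scene"))
            {
                var scenePath = AssetDatabase.GUIDToAssetPath(guid);
                var scene = EditorSceneManager.OpenScene(scenePath);
                EditorSceneManager.MarkSceneDirty(scene);
                EditorSceneManager.SaveScene(scene);
            }
        }
    }
    ```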

    Right, but the point is: if you've performed the upgrade as a one-time thing and then deleted your upgrader code, and then import a package that you previously exported... well, the engine cannot upgrade data that you deleted the upgrader for :)

    The intention here really is that you keep these things around 'just in case', off in a namespace somewhere in an Editor folder - I take the point about compile times, but there are ways of solving that (like building DLLs out of your old upgraders) and we also have work in progress to improve that situation more generally anyway. So I don't want to change the feature design to work around a problem that won't actually exist that far in the future...

    In the Editor, it might - I've not tested. At runtime, no, I think people would prefer not to have the performance and code size hits of bringing this mechanism along 'just in case.' (And right now, it is an everybody-or-nobody choice - making it be included only for projects that want to use it is not practical at the moment). Again, there's work we have planned that may make this obsolete as well, in terms of making it very easy to deploy new AssetBundles to users so they can just get the 'new' versions of them directly.

    Yep, this is exactly the kind of scenario I am very familiar with from my own time building games with Unity (you didn't abuse SerializedObject/SerializedProperty to dynamically change the script assigned to the component from inside OnValidate, at least...)

    OK, fair enough. Well, the 'force reserialize everything' API should allow you to deal with it - I'll err on the side of "don't go scanning through every single asset in a project that might be 80GB" by default, but will give you the tools to make that happen if it's appropriate to your situation.

    Hmm... not currently. It's kinda niche but let me look into it and see if there's an easy solution.
     
    SweetJV likes this.
  21. Sycobob

    I certainly don't expect this to work at runtime or to be able to do anything magical once I've deleted the upgrade code. The workflow I'm talking about here is:
    1. Write upgrade code
    2. Run upgrade on entire project (this opens and upgrades all scenes, AssetBundles, and .unitypackages in the Assets folder)
    3. Commit
    4. Delete upgrade code
    The idea here is that data upgrades are a one time operation, that hits EVERYTHING and it Just Works. Otherwise you have to run around patching things up after the upgrade. I don't intend to support loading old bundles or packages. We have complete control of our own data, so we'll upgrade it when necessary and just be done with it. We don't need to maintain support for every version of data that has ever existed in the project.

    I realize support for packages and bundles is more work. But this is the ideal workflow that actually solves our problems robustly and intuitively without adding other costs.


    Compile time is only one part of the issue. There are still non-zero costs associated with the assembly reload, the engine reflecting through the DLL, and a larger codebase. One of the things I'm going to focus on obsessively in the next project is iteration time for both designers and programmers. We've ended up at a point in the current project where a compile+reload+play takes ~20s on fairly fast machines with SSDs and the majority of the code in the plugins assembly (~35s before moving stuff to Plugins). The only way I really know to prevent that in the future is to aggressively control the amount of code and serialized state.


    This sounds perfectly fine to me.
     
  22. laurentlavigne

    laurentlavigne

    Joined:
    Aug 16, 2012
    Posts:
    6,364
    So nice.
    Will we also see improvement in missing component and reference type change?

    -displaying missing components name
    -keeping reference if the new variable type share the same interface
     
  23. Baste

    Baste

    Joined:
    Jan 24, 2013
    Posts:
    6,338
    That sounds great! It's definitely a worthwhile framework without the feature.

    That's pretty much the idea, but the API is a bit strange? I guess it's because I'm not familiar with how Unity's serializer works, but I've always assumed it changes an object's data in-place, rather than writing it to a new object. So I assumed that you could just do something along the lines of:

    void EditorUtility.SerializeAndDeserialize<T>(T obj);

    And then have that object be upgraded. If it's very hard/dumb to deserialize onto the same object, there can be a target parameter, or the input object could be ref.

    The version is also probably not necessary - the serializer/deserializer should already work with any version. Or am I missing something?

    No matter how you create that method (or if you create it at all), there's one piece missing from making this testable: being able to make objects with the old serialization layout. Take the upgrade to version 3 in the documentation - from a bearing/elevation pair to a quaternion. To test that upgrade, I'd need to be able to make a ProjectileLauncher object with a bearing and an elevation serialized, and not a Quaternion.

    So some API where I could define a version 2 object through... I guess a string-value dict of some sort? That would be great. Otherwise testing would be impossible. Or did you have some different idea?

    ...
    By the way, if you implement this to run on serialization, this should theoretically work, right?

    Code (csharp):
    void EditorUtility.SerializeAndDeserialize(object obj) {
        var json = JsonUtility.ToJson(obj);
        JsonUtility.FromJsonOverwrite(json, obj);
    }
    I could also handle the testing by doctoring a ToJson string to create an "old-version" object, or just hard-code a ToJson'd old version into the test. This, of course, assumes that JsonUtility works like I expect it to, by running the serializer and deserializer in the background and then doing a magic step that makes the output JSON instead of YAML.
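    The doctored-string version might look something like this (TestLauncher is a hypothetical [Serializable] class standing in for the old bearing/elevation layout, and the JSON field names would have to match whatever the serializer actually wrote out):

    Code (csharp):
    // Hand-written JSON mimicking the old layout: bearing/elevation as floats.
    string oldJson = "{\"bearing\":90.0,\"elevation\":45.0}";

    // Overwrite a freshly created object's serialized fields with the old data.
    var launcher = new TestLauncher();
    JsonUtility.FromJsonOverwrite( oldJson, launcher );
    // ...then run the upgrade path and assert on the result.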
     
  24. superpig

    superpig

    Drink more water! Unity Technologies

    Joined:
    Jan 16, 2011
    Posts:
    4,660
    Yes, that's the point of providing a different object as input :) The 'input' object would have the old serialisation layout, and the 'output' object would have the new serialisation layout. They could be both the same class if that's appropriate to what you're testing, but they wouldn't need to be. And to keep things flexible, instead of taking the 'old layout' version number from the attribute, let you specify it manually so you can 'fake' the object being older than it really is, and so on.
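    As a sketch of what such a test could look like - none of this API is final, and UpgradeTestUtility, its RunUpgraders method, and the oldVersion parameter are purely illustrative:

    Code (csharp):
    // Hypothetical old-layout input class, mirroring ProjectileLauncher at v2.
    [Serializable]
    public class ProjectileLauncherV2Layout
    {
        public float bearing;
        public float elevation;
    }

    [Test]
    public void Upgrade_FromV2_ConvertsToQuaternion()
    {
        var oldData = new ProjectileLauncherV2Layout { bearing = 90f, elevation = 45f };
        var result = new ProjectileLauncher();

        // Hypothetical API: serialize oldData, then deserialize onto result
        // as though the data had been saved at version 2, running the upgraders.
        UpgradeTestUtility.RunUpgraders( oldData, result, oldVersion: 2 );

        Assert.AreEqual( Quaternion.Euler( 45f, 90f, 0f ), result.launchDirection );
    }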
     
    SweetJV likes this.
  25. Baste

    Baste

    Joined:
    Jan 24, 2013
    Posts:
    6,338
    So I'd make a test class that's Serializable, and fill it with public fields that I'd want to transfer? That's a pretty good solution! That would be more than sufficient for an upgrade test.
     
  26. superpig

    superpig

    Drink more water! Unity Technologies

    Joined:
    Jan 16, 2011
    Posts:
    4,660
    Yep, exactly.
     
  27. Stephan-B

    Stephan-B

    Joined:
    Feb 23, 2011
    Posts:
    2,269
    Been paying attention to this thread but sadly been too busy to give this a try or provide feedback. Having said that, from my point of view as an Asset Store publisher, this type of functionality is greatly needed and I will eagerly await its availability.

    For instance, two days ago I made changes to an enum and had to manually implement a method to convert previously serialized objects from the old enum to the newer format. Having the ability to cleanly handle this is huge for me. I simply can't afford to potentially break thousands of users' projects and scenes, so I have been holding back on these types of API changes.

    Thank you for working on this.
     
    Edy and SweetJV like this.
  28. AlkisFortuneFish

    AlkisFortuneFish

    Joined:
    Apr 26, 2013
    Posts:
    973
    The big issue with that is that it is already a rather serious problem with the engine. If someone changes a script, they need to go and select every prefab using that script in order for the serialization of the object to be updated. Prefabs are routinely missed, and at random times during development updates to prefabs make it into the VCS completely out of context; add scenes to that equation and things get even more annoying.

    Even Unity itself does this routinely: even now we are seeing prefabs getting committed because float serialization changed, even though that happened several versions of Unity ago. The solution I've had to go with in general is to literally search for "t: Prefab", select all, and save the project. This workflow, relying on implicit deserialization and serialization, is not sane - and let's not even start on the same issue with AnimatorControllers and related systems, where even selecting them is not enough and they sometimes have to be manually set dirty.

    There needs to be a way to semi-automatically re-serialize all assets of certain types, other than a full asset re-import, especially now that data upgrading is on the table. What you are proposing is a far bigger VCS nightmare than the possibility of a one-off upgrade.
     
  29. SweetJV

    SweetJV

    Joined:
    Aug 22, 2016
    Posts:
    11
    I think for some cases, like your team's, that makes complete sense. But for others, not so much. In particular, for people who publish tools to the Asset Store, or for central tech teams in larger studios, being able to handle rolling upgrades is a higher priority.

    From all of the talk so far, it sounds like both scenarios will be supported. Which is great! I just wanted to call that out, because I think the usage of this system will differ quite a bit between a single team and a broader user base.
     
  30. SweetJV

    SweetJV

    Joined:
    Aug 22, 2016
    Posts:
    11
    I've used (abused?) this a bit and, to some degree, do "start working with" the data inside OnAfterDeserialize. I think my most common use cases boil down to two scenarios:

    1. Create some data that is implicitly based on the serialized data. For example, I might initialize some lookup table based on the items in a serialized list, or some other metrics based on the serialized data.

    2. Initialize some connections between data. For example, I've used that callback to implement things like the Observer pattern, where the "owner" of some data will register itself with the data it owns, so it can receive fast garbage-free callbacks.

    Off the top of my head, I'm not sure if any of these usage scenarios would fail to work if OnAfterDeserialize was working with stale data vs upgraded data. But, for peace of mind, it would be preferable if that callback (or maybe another?) was available to work with the data after the upgrade mechanics have kicked in.
     
    mcmorry likes this.
  31. superpig

    superpig

    Drink more water! Unity Technologies

    Joined:
    Jan 16, 2011
    Posts:
    4,660
    How about if I make my 'force reserialize assets' API accept an optional list of GUIDs or paths to reserialize? Would that give you the flexibility to re-serialize the assets you want to re-serialize, without having to process the entire project?
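    Something like this, perhaps - the method name, and whether it takes GUIDs or paths, is entirely hypothetical and exactly what's up for discussion:

    Code (csharp):
    // Reserialize just the prefabs, rather than touching the whole project.
    var prefabPaths = AssetDatabase.FindAssets( "t:Prefab" )
        .Select( guid => AssetDatabase.GUIDToAssetPath( guid ) );
    AssetDatabase.ForceReserializeAssets( prefabPaths );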
     
  32. Tenebrous

    Tenebrous

    Volunteer Moderator

    Joined:
    Jul 25, 2011
    Posts:
    102
    "The Script Data Upgrade feature adds the ability to ‘capture’ these lost fields, through declaring them on your IScriptDataUpgrader instance."

    Presumably, if the same variable changed type a couple of times, the only way to handle that would be with multiple classes implementing IScriptDataUpgrader? i.e. one that changed from an int in version 1 to a string in version 2, and one that changed from a string in version 2 to a float in version 3?
     
  33. superpig

    superpig

    Drink more water! Unity Technologies

    Joined:
    Jan 16, 2011
    Posts:
    4,660
    Yep, that's one of the reasons why multiple upgraders are supported. (Though I think it's also possible to declare both fields on the same type with different names, and use 'FormerlySerializedAs' to make them read different names in the serialized data).
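    For anyone who hasn't used it: [FormerlySerializedAs] is the existing attribute that makes a field read a value that was saved under a different name, e.g.:

    Code (csharp):
    using UnityEngine;
    using UnityEngine.Serialization;

    public class Turret : MonoBehaviour
    {
        // Loads data that was previously saved under the field name "angle",
        // even though the field has since been renamed.
        [FormerlySerializedAs( "angle" )]
        public float bearing;
    }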
     
  34. AlkisFortuneFish

    AlkisFortuneFish

    Joined:
    Apr 26, 2013
    Posts:
    973
    That could help, although, thinking about it, it would still require some way of finding every reference to the data in the project, which would result in it getting deserialized anyway.
     
  35. runevision

    runevision

    Joined:
    Nov 28, 2007
    Posts:
    1,892
    Just be aware that if you split the upgrade logic up into multiple upgraders for the same data like discussed here, each upgrader needs to be very aware of the other.

    Since each upgrader can only upgrade to the latest format, it's not really

    "one that changed from an int in version 1 to a string in version 2, and one that changed from a string in version 2 to a float in version 3"

    Instead it's

    "one that changed from an int in version 1 to a float in version 3, and one that changed from a string in version 2 to a float in version 3"

    And if the one that converted from version 1 to 3 ran, then you definitely do not want to also run the one that converts from version 2 to 3 (since it would overwrite the already converted value with a default value), so you need logic specifically handling that. I think having it all in one place using the [FormerlySerializedAs] approach should be much less error prone and mind bending, but that might just be me.
     
  36. Sycobob

    Sycobob

    Joined:
    Feb 1, 2013
    Posts:
    78
    Wouldn't you want an upgrade for each version number change, not for each 'data upgrade path'? E.g. a single v1-v2 upgrader and a single v2-v3 upgrader. As long as their order is guaranteed, they don't have to know about each other. You can split logic within an upgrader based on the affected data if desired, but that should be decided by the user, not the API.

    Re-reading the documentation, the idea of upgrading based on data rather than version number sounds highly problematic. You're introducing situations where some of the data in an object is at one version and some of the data is at another.

    This leads me to an important question: what happens when an upgrader throws an exception? If this isn't handled, you end up with corrupted data in unsupported, mixed-version configurations. The only sane thing for the engine to do here is to run the upgraders, catch any exceptions, and throw away the changes if any are caught.

    If you are upgrading based on data instead of version, this becomes less robust. In the scenario where you upgrade from specific version to specific version, you can execute the upgraders sequentially, saving each transformation if it succeeds and aborting on any exception. Your data is successfully upgraded through all versions where the upgraders worked properly. When you are performing upgrades on data, it becomes all-or-nothing: because you can't guarantee the data is uniformly at the same version unless all upgraders run successfully, you have to drop all changes on the floor as soon as anything fails. Why do the same work multiple times? If it's v3-v4 that fails, v1-v3 should still complete and be saved. Maybe someone gets clever and data upgrades take a while. Maybe there's just a lot of data.

    One can question the value of designing around failure cases, but it seems like a very good idea here. We all know code rarely works correctly the first time, so the probability of exceptions being thrown is fairly high. More importantly, if you're keeping these upgrades around, the validity of the assumptions you made when writing them will change - e.g. if you're talking to some other data containers, they may no longer exist in the project in the future. The API ought to do its best to ensure data is never in an invalid state, and this is a very straightforward step in that direction. Without it, users have to be very careful to revert data changes through VCS when an exception is thrown. This leaves a very large opening for user error. I don't expect non-programmers to reliably remember to do this.

    I'm very strongly in favor of the upgraders explicitly specifying the initial and final version of the data and only allowing a single upgrader per version change. Allowing multiple upgraders handling different portions of the same data upgrade is a feature that by definition exists to put data in an inconsistent state, in addition to being much harder to wrap your head around. Separating upgrade logic based on related subsets of data can trivially be done on the user side of the API and having it baked in does not add a significant amount of value or power to it.

    Also, the current proposal implicitly assumes the upgrader is always upgrading all the way to the latest version. What happens when I bump the version number and forget to update the upgrader? Now the old upgrader runs, succeeds, and the data version is updated even though no upgrading was actually done.

    Regarding how to specify the initial and final versions, the first solution that comes to mind is through an attribute. Something like [DataUpgrader(type, initialVersion, finalVersion)]. This matches the implementation pattern for custom Editors and PropertyDrawers, which is always nice. It also allows the engine to do some easy sanity checking and remove several sources of error. The engine can:
    • Help ensure data stays in a uniform version.
    • Help ensure upgraders are not performing overlapping upgrades (if an upgrade for v1-v2 exists, one for v1-v3 cannot exist and an error can be logged).
    • Ensure a data upgrade does not run on data versions it was not written for.
    • Ensure data upgraders exist for all versions (if v2-v3 exists, but v1-v2 does not an error can be logged).
    Again, this still leaves the power in the users' hands. If you want big upgraders that handle all versions, no problem. If you want to split upgrade logic based on data subsets, no problem.

    Alternatively, the upgrade methods can set the final version through a ref parameter. This would allow upgraders that handle many versions at once to incrementally update the version number as steps complete successfully. If an error occurs later in the process we don't need to lose all the progress.
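    To make the attribute idea concrete, a version-to-version upgrader might be declared like this. Everything here - the DataUpgrader attribute, its parameters, and this Upgrade signature - is hypothetical:

    Code (csharp):
    [DataUpgrader( typeof( ProjectileLauncher ), initialVersion: 1, finalVersion: 2 )]
    public class ProjectileLauncherUpgrader_1_2 : IScriptDataUpgrader<ProjectileLauncher>
    {
        // Captured v1 fields, still in radians.
        public float bearing;
        public float elevation;

        public void Upgrade( ref ProjectileLauncher launcher )
        {
            // v1 stored radians; v2 stores degrees.
            launcher.bearing = bearing * Mathf.Rad2Deg;
            launcher.elevation = elevation * Mathf.Rad2Deg;
        }
    }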
     
  37. superpig

    superpig

    Drink more water! Unity Technologies

    Joined:
    Jan 16, 2011
    Posts:
    4,660
    The idea of "upgrade version by version" is something that we discussed a lot internally, and there's one very simple problem with it: we don't have any of the intermediary type definitions. We have the latest type definition, we sort of have the type definition of the data being loaded (though in the case of YAML, we actually lost some information about e.g. which numeric type was being used)... and that's it.

    The other problem, when you combine that with the transactional approach, is that we are not actually allowed to fail: at the end of the day, there exists an object at v4 that needs to be populated with data, and if it isn't, it will be filled with default values. If one of the upgraders throws an exception and we 'abort the chain' - at best it's as if no upgrader chain is present at all, and at worst we've lost all your data.
     
  38. Baste

    Baste

    Joined:
    Jan 24, 2013
    Posts:
    6,338
    I also feel like a version-to-version upgrader would be better if possible. That's how we've managed upgrades for save games in the past. The solution there has simply been to keep the old class around with all of its data, and do the steps one by one.

    It's inconvenient, sure. But writing the upgrader is a LOT easier when you go version to version. With the upgrader-runs-once scheme, introducing version n requires updating the code to support n-1 versions of the data - so there are n-1 new places where bugs can live. With the version-to-version scheme, you only have to create a single new upgrade path. That's a lot more manageable.


    No matter what solution is picked, the upgrader crashing will always be a problem. You can't get around it. This is another point in favor of having an upgrade-everything option; if we can do a commit, and then run all of the upgrades at once, a failed upgrade can be fixed by simply reverting to an earlier stage. If the upgrader runs as we go, a rare failure will be very hard to pick up on, and might suddenly appear next to a bunch of data we want to keep, making the revert difficult.
     
  39. SweetJV

    SweetJV

    Joined:
    Aug 22, 2016
    Posts:
    11
    What if instead of Unity auto-magically serializing lost fields into matching variables found in the Upgrader, it simply exposed a way to query the serialized version of the data for any variables it holds? With a scheme like that, I would imagine re-creating your lost values example with something like this:

    Code (csharp):
    public class ProjectileLauncherUpgrader : IScriptDataUpgrader<ProjectileLauncher>
    {
        public void Upgrade( SerializedData data, ref ProjectileLauncher launcher, int version )
        {
            if( version <= 2 )
            {
                float bearing = data.GetSerializedVariable<float>( "bearing" );
                float elevation = data.GetSerializedVariable<float>( "elevation" );

                if( version <= 1 )
                {
                    bearing *= Mathf.Rad2Deg;
                    elevation *= Mathf.Rad2Deg;
                }

                launcher.launchDirection = Quaternion.Euler( elevation, bearing, 0 );
            }
        }
    }
    However, this scheme would also make it easy for me to embed the definitions for my type's version history and re-create them, with something like this:

    Code (csharp):
    public class ProjectileLauncherUpgrader : IScriptDataUpgrader<ProjectileLauncher>
    {
        public class ProjectileLauncherV1
        {
            public float bearing;
            public float elevation;
        }

        public class ProjectileLauncherV2
        {
            public float bearing;
            public float elevation;
        }

        public class ProjectileLauncherV3
        {
            public Quaternion launchDirection;
        }

        public void Upgrade( SerializedData data, ref ProjectileLauncher launcher, int version )
        {
            ProjectileLauncherV1 v1 = null;
            ProjectileLauncherV2 v2 = null;
            ProjectileLauncherV3 v3 = null;

            if( version == 1 )
            {
                v1 = new ProjectileLauncherV1();
                v1.bearing = data.GetSerializedVariable<float>( "bearing" );
                v1.elevation = data.GetSerializedVariable<float>( "elevation" );
            }
            else if( version == 2 )
            {
                v2 = new ProjectileLauncherV2();
                v2.bearing = data.GetSerializedVariable<float>( "bearing" );
                v2.elevation = data.GetSerializedVariable<float>( "elevation" );
            }
            else if( version == 3 )
            {
                v3 = new ProjectileLauncherV3();
                v3.launchDirection = data.GetSerializedVariable<Quaternion>( "launchDirection" );
            }

            // Now, for the conversion I could either apply logic similar to the above example and doc examples
            // Or, I could run a chain of functions/functors that upgrades v1 to v2, v2 to v3, etc. like so:
            if( v1 != null )
            {
                Upgrade( ref v1, out v2 );
            }

            if( v2 != null )
            {
                Upgrade( ref v2, out v3 );
            }

            // etc.
        }

        private void Upgrade( ref ProjectileLauncherV1 v1, out ProjectileLauncherV2 v2 )
        {
            v2 = new ProjectileLauncherV2();
            v2.bearing = v1.bearing * Mathf.Rad2Deg;
            v2.elevation = v1.elevation * Mathf.Rad2Deg;
        }

        private void Upgrade( ref ProjectileLauncherV2 v2, out ProjectileLauncherV3 v3 )
        {
            v3 = new ProjectileLauncherV3();
            v3.launchDirection = Quaternion.Euler( v2.elevation, v2.bearing, 0 );
        }

        // etc.
    }
    This is just off the top of my head, so perhaps I'm overlooking some potential problems here, a better way to structure this, a more efficient way of handling things, etc. But, hopefully you get the idea.

    In writing out that second example, a couple of things strike me. First, it's obviously less efficient. In the first example, as well as the examples in the docs, everything is being done in place on two objects - the data class and the upgrader class. With any kind of chaining of versions, you'll obviously have to create new temp objects along the way. So, that's certainly something to consider. However, I can also see a case to be made that debugging and robustness of the process may be more important than upgrade speed. Especially since, in the usual case, everything will be up to date and so there shouldn't be much overhead.

    At any rate, I'm personally wary of making a version-chaining system or even multiple updaters part of the Unity core. The more that goes on inside the black box, the less sure I am what's going on. I'd rather have a single entry point, one upgrader per type, and the tools to be able to then handle the chain of updates however I see fit.
     
  40. benblo

    benblo

    Joined:
    Aug 14, 2007
    Posts:
    476
    That looks awesome! That and nested prefabs (and ok, life-cycle fixes) would make a whole lot of my Unity pains go away!

    It requires the upgrader to have access to the target's fields though (or the target at least needs setters, or you have to write custom ones, but... please): can the Upgrader be a nested class of the target? That way, even if some fields are private, it still sees them:

    Code (CSharp):
    public class Bla : MonoBehaviour
    {
        [SerializeField]
        Quaternion angle;

    #if UNITY_EDITOR
        public class BlaUpgrader : IScriptDataUpgrader<Bla>
        {
            public float angle;

            public void Upgrade( ref Bla target, int version )
            {
                if (version <= 1)
                {
                    target.angle = Quaternion.AngleAxis(angle, Vector3.up);
                }
            }
        }
    #endif
    }
    BTW, why is "target" a ref? can we change its value??


    Also, I think it's been discussed, but what if I want to change the same field (same name) several times? With your solution I'm forced to have 2 upgraders, right?
    For example, "angle" was a float, then a Quaternion, then a custom struct. I like SweetJV's idea above of defining the class several times in the same Upgrader, but instead of using string names, how about an attribute on each version class?

    Code (CSharp):
    public class Bla : MonoBehaviour
    {
        [Serializable]
        public struct Angle
        {
            //...
        }
        public Angle angle;

    #if UNITY_EDITOR
        public class BlaUpgrader : IScriptDataUpgrader<Bla>
        {
            [Version(0)]
            public class Bla_1
            {
                public float angle;
            }
            public Bla_1 v1;

            [Version(1)]
            public class Bla_2
            {
                public Quaternion angle;
            }
            public Bla_2 v2;

            public void Upgrade( ref Bla target, int version )
            {
                switch (version)
                {
                    case 0: // 0 -> 1
                        var quat = Quaternion.AngleAxis(v1.angle, Vector3.up);
                        target.angle = new Angle(quat);
                        break;

                    case 1: // 1 -> 2
                        target.angle = new Angle(v2.angle);
                        break;
                }
            }
        }
    #endif
    }
    With this model I can easily see myself writing some T4 templates to auto-generate some conversion code (at least a skeleton), e.g. I know I want to refactor Bla, I can generate the upgrader with a snapshot of the current layout, change the layout, write the upgrade code, boom, rinse & repeat.


    Less important, but [WhenVersionSmallerOrEqual] just smells! How about [UpgraderVersion(min = 2)]? Who knows if you'll need to add a "max" argument or something later on.
    I dislike Unity's habit of using long-ass attribute names; you end up with several attributes when one with arguments could have done the trick, e.g. why isn't [CanEditMultipleObjects] an argument of [CustomEditor]??


    Regarding the "big sweep" vs "just-in-time" refactoring, I also think the big sweep approach is important. Sure, for Asset Store plugins etc, JIT is super cool, but on the other hand many data upgrade scenarios are "write and forget", and having non-coders deal with "WTF I just touched this and now 10 files I never touched pop up in source control" is nasty.
    In the end I can imagine being able to do both is very powerful: upgrade ALL the data you have at the given time, but keep the upgrade code around in case you missed some (eg no more yelling around the studio "hey guys please push all your changes before lunch because I'm gonna refactor all scenes").


    Tangentially related to the subject: one last tool I oh-so-wish I had on my belt is access to the AssetDB as raw data, a property stream of "dead" data (like an XML doc tree), and/or a YAML parser.
    At the moment, if I want to find all prefabs that have such component with such value, the only way is to actually load the prefabs, which a) is slow and a memory hog, and b) causes the objects to be instantiated --> fires OnValidate and such, which can be intrusive.
    I don't want to load the data, I just want to parse it (live objects vs POD).
    If we take the YAML parser approach (which would leave out binary data), such a tool could possibly work without the project even being open. And it could be a true property stream, e.g. no notion of class or data layout or anything, just "what's actually on the disk right now", not "what will the fields be refactored to if I load it" - it could even give access to the properties of missing scripts etc.
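    In the meantime, the closest approximation is to treat the files as plain text - this requires the project to use text serialization, and scriptGuid here stands for the GUID taken from the target script's .meta file:

    Code (csharp):
    using System.IO;
    using System.Linq;

    // Find every prefab whose YAML references a given MonoScript, without
    // loading (or instantiating) anything in the engine.
    var hits = Directory.EnumerateFiles( "Assets", "*.prefab", SearchOption.AllDirectories )
        .Where( path => File.ReadLines( path )
            .Any( line => line.Contains( "guid: " + scriptGuid ) ) )
        .ToList();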


    Keep it up Richard, I'm very hopeful this will turn out to be a great improvement on Unity's data pipeline!
    (I recently used CullingGroups for the first time, which I believe you initiated, and they worked brilliantly! Loving this kind of low-level access.)
     
  41. AlkisFortuneFish

    AlkisFortuneFish

    Joined:
    Apr 26, 2013
    Posts:
    973
    Off topic, but I just use Notepad++ to do a search through all *.prefab files with a regex for that.
     
  42. benblo

    benblo

    Joined:
    Aug 14, 2007
    Posts:
    476
    Don't we all!... not exactly a great solution though, is it?
     
  43. AlkisFortuneFish

    AlkisFortuneFish

    Joined:
    Apr 26, 2013
    Posts:
    973
    Not ideal indeed!
     
  44. Joachim_Ante

    Joachim_Ante

    Unity Technologies

    Joined:
    Mar 16, 2005
    Posts:
    5,203
    Having the ability to press a menu item to upgrade all assets, then remove the upgrader code makes sense for projects which have a simple linear history in version control. We'll add functionality to support this flow easily.

    That said, projects that use branches heavily, and more importantly anything shipped via the Asset Store (Asset Store vendors naturally don't have access to all users' projects), need an approach with "automatic" on-demand upgrading, so that will remain the default.
     
  45. 00christian00

    00christian00

    Joined:
    Jul 22, 2012
    Posts:
    1,035
    What happens to the data that gets overwritten?
    Is it saved somewhere, or is it lost forever?
    If it's not kept, that would require a backup every time the upgrade runs.
    If it is kept, is there any way to prune it?
     
  46. superpig

    superpig

    Drink more water! Unity Technologies

    Joined:
    Jan 16, 2011
    Posts:
    4,660
    Just like today, the old data is lost when the asset/scene is next saved. We're not planning to change this - the idea is just to give you a 'last minute chance' to grab some of the data before it is lost.

    (Of course, we would recommend that you use a version control system on your project, so that if you really need to get the data back then you can go looking in the version control system history - but that's a separate topic).
     
  47. 00christian00

    00christian00

    Joined:
    Jul 22, 2012
    Posts:
    1,035
    Oh, I thought it was running project-wide in one sweep.
     
  48. superpig

    superpig

    Drink more water! Unity Technologies

    Joined:
    Jan 16, 2011
    Posts:
    4,660
    The "project wide sweep" menuitem/API we're talking about would basically be the same as opening+dirtying+saving every single scene and asset in your project. So afterwards the data would be lost - but it would be up to you to decide when (or even whether) to do this. You'd want to test out your upgraders on a bunch of individual files first, get everything backed up or checked into source control, etc.
     
  49. 00christian00

    00christian00

    Joined:
    Jul 22, 2012
    Posts:
    1,035
    I understand, but it seems a scary feature when combined with the Asset Store, because if, for example, I upgrade an asset and it upgrades everything, a lot of things can go bad.
    It may have upgraded everything fine, but it certainly didn't upgrade my own scripts that rely on that asset. The issue may be immediately visible, or worse, it could be so subtle that I only discover it later, when I have already committed several changes, making it difficult to find the root cause.
    It's a powerful tool and I am all for it being available to devs for their own projects, but allowing Asset Store developers to release assets that auto-upgrade means opening a world of new issues.
    In my opinion it should only be allowed with a recap window that lets you verify every change, and even then I have my doubts.
     
  50. superpig

    superpig

    Drink more water! Unity Technologies

    Joined:
    Jan 16, 2011
    Posts:
    4,660
    Ah, I see. This is probably one of those things that we'll just want to try and reject in Asset Store submissions.