System.Numerics

Discussion in 'Experimental Scripting Previews' started by bdovaz, Jan 14, 2017.

  1. bdovaz

    bdovaz

    Joined:
    Dec 10, 2011
    Posts:
    1,051
    Have you evaluated adopting this internally?

    https://github.com/dotnet/corefx/tree/master/src/System.Numerics.Vectors/src/System/Numerics

    In .NET there are equivalent Vector2, Vector3, Vector4, Quaternion and so on already implemented...

    Like Unity's "Mathf" class, .NET has Math and MathF classes with all of that, and if something is missing you can contribute it to the .NET Framework with a pull request:

    https://github.com/dotnet/coreclr/blob/master/src/mscorlib/src/System/Math.cs

    https://github.com/dotnet/coreclr/blob/master/src/mscorlib/src/System/MathF.cs

    When you upgrade to .NET 4.6, are you going to adopt it? That way we don't have a redundant API, and you can clean up your codebase, focus on your Unity-specific API, and leave the rest to Microsoft and the developer community on GitHub.

    The "problem" is how you will deal existing projects compatibility.
     
    Last edited: Jan 14, 2017
    Qbit86 and rakkarage like this.
  2. XaneFeather

    XaneFeather

    Joined:
    Sep 4, 2013
    Posts:
    97
    I've been wondering the same thing and vote for adopting .NET Numerics if possible. No more converting Unity types to .NET types and vice versa when dealing with multithreading would be fantastic.
     
    rakkarage likes this.
  3. bdovaz

    bdovaz

    Joined:
    Dec 10, 2011
    Posts:
    1,051
    I think that it's the smartest choice but it's up to Unity.
     
  4. xoofx

    xoofx

    Unity Technologies

    Joined:
    Nov 5, 2016
    Posts:
    417
    As you know, Unity is based on the Mono runtime (an older version, though there is ongoing work to migrate to the newest one), and until very recently Mono didn't support `System.Numerics.Vectors` (see http://www.mono-project.com/news/2016/12/20/system-numeric-vectors/)

    Also, `System.Numerics.Vectors` has some design issues that make it quite impractical to use. Typically, many of the methods it exposes pass arguments by value instead of by ref. While vectorized methods that are marked as "intrinsics" will be inlined correctly by the JIT (avoiding a costly roundtrip through the stack), there are still many methods in the API that don't provide by-ref overloads and don't have an intrinsic (most of the methods on Matrix4x4, for example).

    Note that it is not only Unity that has concerns with the API; you can check this issue on corefx. Later in the comments you can see that by-value vs by-ref has significant performance implications on some platforms (like Android) and can cost around 10 to 20%... So they have been considering adding by-ref overloads, but it is still in the development pipeline (and only valid for CoreCLR)
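    To make the by-value vs by-ref point concrete, here's a rough sketch (the by-ref wrapper below is hypothetical and not part of System.Numerics; internally it still has to go through the by-value API):

    Code (CSharp):
    using System.Numerics;

    static class Matrix4x4Helpers
    {
        // Hypothetical by-ref style signature; System.Numerics only ships the
        // by-value form. Each Matrix4x4 is 64 bytes, so a non-inlined by-value
        // call copies 128 bytes of arguments, which a by-ref signature avoids
        // at the call site.
        public static void Multiply(ref Matrix4x4 left, ref Matrix4x4 right, out Matrix4x4 result)
        {
            result = left * right; // the operator itself is still the by-value API
        }
    }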

    That being said, once Unity has switched to a newer Mono, it will become possible to make the Unity vector structs SIMD accelerated... though this will also require adding support for the IL2CPP platform as well (used to target iOS, for example)

    In terms of API convergence, ideally, I agree that it would be great to have an interchangeable API for this... but upgrading legacy code is not always easy!
     
    EZaca, fypliawfl, ackro and 4 others like this.
  5. bdovaz

    bdovaz

    Joined:
    Dec 10, 2011
    Posts:
    1,051
    So I see it's not something that's going to be available tomorrow, but once we get .NET 4.6 it will be easier to get there.

    The positive thing is that you are on GitHub actively reading and commenting on issues and talking with Microsoft employees.
     
  6. DamonJager

    DamonJager

    Joined:
    Jul 16, 2015
    Posts:
    56
    Mono's SIMD library works in the editor preview. And you can probably use System.Numerics with the Windows Store builds? I've not tried that, but it would be 64-bit only, and you can't use it in the editor (yet).
     
  7. laurentlavigne

    laurentlavigne

    Joined:
    Aug 16, 2012
    Posts:
    6,327
    Wow, SIMD in managed code. How is it performance-wise?
     
  8. DamonJager

    DamonJager

    Joined:
    Jul 16, 2015
    Posts:
    56
    laurentlavigne likes this.
  9. alexzzzz

    alexzzzz

    Joined:
    Nov 20, 2010
    Posts:
    1,447
    Mono.Simd is a pretty old technology. All the current Unity builds contain Mono.Simd.dll, it's just not referenced by default. Simply drop it into your Assets folder. Works fine.
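    For anyone trying that, a minimal sketch of what the usage looks like once Mono.Simd.dll is in the Assets folder (the class and method names are made up):

    Code (CSharp):
    using Mono.Simd;

    public static class SimdSample
    {
        // Vector4f maps to a 128-bit SSE register on runtimes that accelerate
        // Mono.Simd; elsewhere it falls back to a plain managed implementation.
        public static Vector4f MulAdd(Vector4f a, Vector4f b, Vector4f c)
        {
            return a * b + c;
        }
    }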
     
    PrimalCoder and rakkarage like this.
  10. bdovaz

    bdovaz

    Joined:
    Dec 10, 2011
    Posts:
    1,051
    Any news on this @xoofx? It's been a year.

    I see that the issue was closed and they agreed to add ref/in/out overloads in another issue:

    https://github.com/dotnet/corefx/pull/25388
     
  11. pavelkouril

    pavelkouril

    Joined:
    Jul 22, 2016
    Posts:
    129
    This is why I think actually supporting https://github.com/dotnet/corefx/issues/22940 is a much better choice, and once feasible, updating Unity's math library to use the intrinsics internally would be more beneficial.

    Is it possible that Unity will support this API in the near future (after .NET Standard 2.1 hits, to be more precise)? :)
     
  12. x4000

    x4000

    Joined:
    Mar 17, 2010
    Posts:
    353
    How does System.Numerics, which is available now, compare to Unity.Mathematics? Specifically, if I just want to do SIMD vector and quaternion math with the best possible speed, but I'm not using the ECS/Jobs stuff and thus not the Burst compiler. I can't tell if the Unity version's main benefit is the ability to hit better IL in a Burst environment, or if it's actually SIMD.
     
    Qbit86 likes this.
  13. SLGSimon

    SLGSimon

    Joined:
    Jul 23, 2019
    Posts:
    80
    The problem with using a Unity-specific math library is that you can't share it with any external code. So now, with the new math library, I have to convert vectors between my own vector type and two other Unity types...

    (Unity.Mathematics only has a reference to UnityEngine just for implicit conversions, which is pretty annoying)
     
    Qbit86 and goncalo-vasconcelos like this.
  14. snacktime

    snacktime

    Joined:
    Apr 15, 2013
    Posts:
    3,356
    Just add UnityEngine as a reference as well. You can add pretty much any Unity DLL to an external project, and you can use most of the types like Vector3, etc. You just can't call methods that forward to native implementations in the engine.
     
  15. SLGSimon

    SLGSimon

    Joined:
    Jul 23, 2019
    Posts:
    80
    I can't, and don't want to, ship Unity DLLs with a game server...
     
    ivaylo5ev likes this.
  16. x4000

    x4000

    Joined:
    Mar 17, 2010
    Posts:
    353
    There are no legal restrictions around this. Nonetheless, if you want to simply use SIMD, then I suggest that you import System.Numerics and use Vector4 from that. This is what I do, simply because I don't trust the performance of unity's custom implementation.

    If you wish to do matrix math, that is something you somewhat have to re-implement, but Unity's matrix math is open source and can be adapted to work with Vector4.

    Converting from Vector4 back to Vector3 is not something you would have to do on a server without the Unity engine DLLs, and from a serialization standpoint the clients are just getting a list of floats back anyway. (Since you're not including Unity DLLs in your server, there's already no way for you to use Unity's serialization or Vector3 class anyway.)

    System.Numerics has proven to be very high-performance and reliable in the last year or two I've been using it. Given that these are also structs, you can keep this entirely on the stack and avoid any heap usage if you avoid the constructor and define it instead like this.

    Vector3 v;
    v.x = x;
    v.y = y;
    v.z = z;

    And then use the code from there. If you do this:

    Vector3 v = new Vector3(x, y, z);

    Then you may incur some heap overhead. That's true with any struct, but YMMV. It's something we've had different test results from with differing versions of unity and mono and .net and different machines and OSes over the years.
     
  17. Qbit86

    Qbit86

    Joined:
    Sep 2, 2013
    Posts:
    487
    But System.Numerics has a Matrix4x4 as well, doesn't it?
     
  18. x4000

    x4000

    Joined:
    Mar 17, 2010
    Posts:
    353
    It does, but to my recollection (it's been a few years) it doesn't have the same sort of methods for things like TRS() and other basic static constructors or object methods. So if you want to do Unity-style translations, rotations, and scalings, you have to reimplement those methods.

    Further, there's a certain "handedness" to Matrix4x4 that Unity uses, and IIRC it's backwards from what a lot of the main public algorithms would show. I'm really stretching my memory now. But at any rate, it means that results will be slightly different if you don't use the same underlying way of filling the matrix before doing operations on them.

    Whether the results are substantially different enough to warrant notice, I have no idea. I've not tested that, largely because I haven't had a need to in my own work. And all of my recollections about Matrix4x4 in System.Numerics are from about two years ago, so big grains of salt. I do use the Unity matrices and the System.Numerics Vector4 on a daily basis, though.
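    For reference, a rough sketch of rebuilding a Unity-style TRS() on top of System.Numerics (the helper is made up, and the multiplication order follows the row-vector convention, so treat it as a starting point rather than a drop-in):

    Code (CSharp):
    using System.Numerics;

    static class MatrixHelpers
    {
        // Unity's Matrix4x4.TRS scales, then rotates, then translates. With
        // System.Numerics' row-vector convention (v' = v * M), that composition
        // reads left to right as S * R * T.
        public static Matrix4x4 TRS(Vector3 translation, Quaternion rotation, Vector3 scale)
        {
            return Matrix4x4.CreateScale(scale)
                 * Matrix4x4.CreateFromQuaternion(rotation)
                 * Matrix4x4.CreateTranslation(translation);
        }
    }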
     
  19. alexzzzz

    alexzzzz

    Joined:
    Nov 20, 2010
    Posts:
    1,447
    The math is always the same no matter what. The handedness is the way you interpret the data.
     
  20. x4000

    x4000

    Joined:
    Mar 17, 2010
    Posts:
    353
    That's good to know!
     
  21. ivaylo5ev

    ivaylo5ev

    Joined:
    Sep 2, 2016
    Posts:
    10
    This is a topic I am very interested in. I'd be really happy to have a certain level of compatibility between the System.Numerics vector types and their Unity counterparts. As @xoofx pointed out, direct replacement may not be possible (I hope he meant only for the near future), but the Unity team could definitely implement conversion operators between the Unity vector types and their corresponding System.Numerics.Vectors alternatives. I believe this would add the necessary compatibility layer for most use cases. It would make sense to discourage this conversion in frequently executed code such as the game loop or the GUI loop, so I'd recommend that the conversion operators be explicit.
     
  22. x4000

    x4000

    Joined:
    Mar 17, 2010
    Posts:
    353
    You can also make your own extensions that convert between them however you like; this is what I've done. There's no performance benefit to Unity doing it for you, and given they have their own SIMD implementation via their Mathematics DLL, I can't see any reason why they'd do more with Numerics. But we each can, quite easily!
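    As a rough sketch of what such conversion extensions can look like (the names are made up; assumes both UnityEngine and System.Numerics are referenced):

    Code (CSharp):
    using SN = System.Numerics;

    public static class VectorInterop
    {
        // Explicit, named conversions between the two Vector3 types.
        public static SN.Vector3 ToNumerics(this UnityEngine.Vector3 v)
        {
            return new SN.Vector3(v.x, v.y, v.z);
        }

        public static UnityEngine.Vector3 ToUnity(this SN.Vector3 v)
        {
            return new UnityEngine.Vector3(v.X, v.Y, v.Z);
        }
    }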
     
    ivaylo5ev likes this.
  23. Huszky

    Huszky

    Joined:
    Mar 25, 2018
    Posts:
    109
    x4000 and ivaylo5ev like this.
  24. x4000

    x4000

    Joined:
    Mar 17, 2010
    Posts:
    353
    Just FYI, but code like this is really inefficient with the GC, creating temporary waste:

    Code (CSharp):
    public static SVectorF ToSystemGeneric(this UVector3 vector) => new SVectorF(new[] { vector.x, vector.y, vector.z });
    You have to be careful even with the params keyword if you're looking for a low GC footprint, and this vector creation goes even further by allocating an array that's immediately thrown away.

    The most efficient way to instantiate a struct is actually also to NOT use a constructor. It's a lot better to define it like this:

    Code (CSharp):
    public Vector2 ToVector2()
    {
        Vector2 result;
        result.x = this.X;
        result.y = this.Y;
        return result;
    }
    That's a verbose method, but bear in mind it's only used as a library method. What happens in there is that there is never a constructor call, which means there is never any chance of it going on the heap or being boxed. It also puts one fewer call on the call stack.

    Instead, you declare a struct but never assign it directly (which looks strange), and then you assign every field of the struct on the lines below. This is the closest to a C or C++ style of initialization. It has the absolute minimum possible added overhead on the stack or heap, and the compiler won't let you return from the method if you forget to assign any of the struct's fields, so you don't have to worry about that.

    Anyhow, aside from the issue of avoiding constructors where possible (which is on the extreme end of optimization, but when you're making a middleware library for conversion, that seems like what you want to do, you know?), avoiding temporary arrays is an even bigger one.

    Also, not that I saw you doing it, but calls to things like Vector3.zero should be made with care (i.e. avoided), since that's actually a property that instantiates a new blank vector, via the constructor, every time it is called.

    The base types like Int32.MaxValue can be compared against because they are constants, and string.Empty can be compared against because it is constant and interned. But with structs you have to be wary of built-in properties (they look like constants but are not, because otherwise someone could accidentally change the "constant" contents and redefine Vector3.zero's internals for the duration of a program's run).

    Similarly, comparing a string with "" is bad news, because that literal does get interned but it still causes a pointer to be created and generates some waste. Comparing to string.Empty is fine, but really, just checking whether a string's length is 0 is the best thing.
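    A small sketch of those two habits (caching the property's value once and testing emptiness by length):

    Code (CSharp):
    public static class HotPathHelpers
    {
        // Read the property once and reuse the copy, rather than calling
        // UnityEngine.Vector3.zero inside a hot loop.
        public static readonly UnityEngine.Vector3 CachedZero = UnityEngine.Vector3.zero;

        // Prefer a length check over comparing against "" or string.Empty.
        public static bool IsNullOrEmpty(string s)
        {
            return s == null || s.Length == 0;
        }
    }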

    Source: various articles over the years, and lots and lots of profiling. When you're trying to squeeze every drop of performance out of the CPU, this is how you get there.
     
    ivaylo5ev likes this.
  25. TheZombieKiller

    TheZombieKiller

    Joined:
    Feb 8, 2013
    Posts:
    265
    Using a struct's constructor will not cause it to be boxed or allocated on the heap; that's a common misconception stemming from the fact that classes are always heap-allocated. Boxing will only happen if you yourself do it by casting to object or an interface.

    On modern .NET runtimes the constructor is even optimized away if small enough (and in this particular example actually results in better codegen than setting the fields if you don't specify [SkipLocalsInit]: https://sharplab.io/#gist:40fb206832a926d5b90cafcc4bf387f0).

    Hopefully we can get CoreCLR/RyuJIT in Unity sooner rather than later, so we can benefit from those optimizations.
     
    x4000 likes this.
  26. x4000

    x4000

    Joined:
    Mar 17, 2010
    Posts:
    353
    In theory you are correct, and on original .NET you are correct. But in Unity we're not using .NET at all, we're using the Mono runtime. Things often deviate from the spec, and in the past I have seen that first-hand in profilers, causing issues.

    That said, the upgrade to the 4.5 equivalent runtime has improved a lot of things. It used to be that using foreach caused a MASSIVE amount of heap allocations (to the point of several MB per second of GC churn in a large game with many foreach calls) in older versions of unity mono.

    I suppose in those situations, it depends on where you're compiling your code, too. Where possible, I'm using visual studio itself to MSBuild the projects. Can't use pdbs or mdbs with that for stack traces, but beyond that the MSIL is generated in a way that you can directly control. When you have unity itself doing the compiling, it's much slower and not always easy to tell what it did.

    Assuming that the latest version of mono-shipped-with-unity understands the MSIL generated from something that was optimized away (like your example above), then potentially we're getting that benefit already.

    I have verified in the last few years that the foreach bug is fixed in unity, but I've not put in a lot of time on the struct initialization. If that is indeed working as intended now, then great. Either way, declaring an array that is immediately discarded is still not great.

    I have also not had time to look into the params keyword recently, and I'm not sure if that's been optimized since the last time I did. It did not fare well, in the more distant past. Things like delegates and anonymous methods and generics all perform as expected and always have, which is nice.

    The difference between what .NET advertises and what mono in unity provides can be frustrating at times, so I tend to err on the side of not using a thing until I've verified it myself or someone I trust has done so.
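    As an aside on the foreach point above, one pattern that still allocates on any runtime (separate from the old Unity-Mono issue) is enumerating a List<T> through its interface, which boxes the struct enumerator; a quick sketch:

    Code (CSharp):
    using System.Collections.Generic;

    static class ForeachSample
    {
        // foreach over IEnumerable<int> boxes List<int>'s struct enumerator,
        // allocating on every call.
        static int SumViaInterface(IEnumerable<int> values)
        {
            int total = 0;
            foreach (int v in values) total += v;
            return total;
        }

        // foreach over the concrete List<int> uses the struct enumerator
        // directly, with no allocation.
        static int SumDirect(List<int> values)
        {
            int total = 0;
            foreach (int v in values) total += v;
            return total;
        }
    }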
     
    TheZombieKiller likes this.
  27. TheZombieKiller

    TheZombieKiller

    Joined:
    Feb 8, 2013
    Posts:
    265
    The behaviour of instantiating structs via a constructor is the same across Mono and .NET, so if you're seeing heap allocations when new-ing a struct, then something else must be causing those allocations (perhaps it's being stored in an object field, or the constructor itself does something that allocates; we'd need to see an example of the scenario).

    If new() directly resulted in a struct being allocated on the heap, then the following code would be invalid because it would require a fixed() statement:
    Code (CSharp):
    struct S
    {
        public int Value;
    }

    static unsafe void A(S* ptr)
    {
        ptr->Value++;
    }

    static unsafe void B()
    {
        var s = new S();
        A(&s);
    }
    These are JIT-level optimizations rather than something in the IL, so unfortunately we won't get them without Unity moving to RyuJIT.

    Absolutely agreed! I just wanted to point out the note about structure initialization and didn't mean to derail the point of your post.

    The params keyword still allocates even in the latest .NET, and that will likely never change. Avoiding the allocations by caching the array would cause problems (a function that takes a params array could itself cache that array and modify the arguments of another call). However, the future doesn't look all bad: there's a proposal to allow "params Span<T>" as a replacement, which can avoid these issues in most cases: https://github.com/dotnet/csharplang/issues/1757
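    To illustrate, a small sketch (the method names are made up): the params call path allocates a hidden array, which fixed-arity overloads can avoid for the common cases:

    Code (CSharp):
    static class ParamsSample
    {
        // Sum(1f, 2f, 3f, 4f) resolves here and allocates a float[4] behind the scenes.
        public static float Sum(params float[] values)
        {
            float total = 0f;
            for (int i = 0; i < values.Length; i++)
                total += values[i];
            return total;
        }

        // The compiler prefers an exact fixed-arity overload, so Sum(a, b, c)
        // resolves here and allocates nothing.
        public static float Sum(float a, float b, float c) => a + b + c;
    }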
     
    Last edited: Jan 27, 2021
    Romaleks360 and x4000 like this.
  28. x4000

    x4000

    Joined:
    Mar 17, 2010
    Posts:
    353
    Thanks for the added notes! That does make sense.

    Some background on where I'm coming from and why I'm a bit suspicious (probably more than is warranted these days):

    I worked in .NET from back in the very first releases in 2001, and got really used to it over the years. This was in a business-dev environment, mainly working server-side for a gigantic SaaS program with a database backend and both a web and a .NET client frontend. Over the years I explored most things in some depth, getting into random things clients wanted like the ability to directly hook into scanner drivers and trigger OCR software, which required more COM Interop than I would have liked, etc.

    From about 2002 to 2009, I was making my own game engine, and released my first game on that. It was .NET 3.5 based, using SlimDX as a helpful wrapper around DX9, and the unfortunate thing was that meant this was windows-only and also required between 1 and 3 prerequisites to be installed on the user's system, potentially with multiple reboots. That was absolutely awful.

    So in 2010, I and another programmer I had hired started working on porting my engine to run atop the "bare metal" of unity. It was a notable step back in terms of .NET performance, and we had to do insane amounts of profiling to make the code actually functional and figure out where and why mono was under-performing compared to .NET. For a lot of years, unity never really upgraded the mono runtime much, and we fell into habits of avoiding a lot of things that should have been fine but were buggy in unity-mono.

    The last four years or so have been a whole new ballgame with unity and mono, and I've been really grateful for that. Starting around Unity 5, they really upped their game to an insane degree. Bugs were fixed, new generational GC, new language features, etc. Finally we could use the foreach statement without it essentially (from what I could tell) instantiating and discarding an IEnumerator for each item in the loop.

    Anyhow, even back in 2010, moving to unity was a breath of fresh air because all the prerequisites were gone, and we could immediately start supporting OSX. The performance of certain things took a major hit, partly because some of the DirectX extensions were not present (things for batching sprites that now we do with GPU instancing), and partly because of mono bugs, and partly because I didn't know what I was doing in the new environment as well. But it's been really refreshing to see unity evolve, and there's not another engine I would remotely think of as nearly so robust and functional from a programming standpoint now.
     
    TheZombieKiller likes this.
  29. ivaylo5ev

    ivaylo5ev

    Joined:
    Sep 2, 2016
    Posts:
    10
    Very good points. I am in the process of doing something similar, and I must admit I fell into the mainstream .NET thinking trap. The above and your later post on mono are really useful reminders.
     
  30. ivaylo5ev

    ivaylo5ev

    Joined:
    Sep 2, 2016
    Posts:
    10
    @TheZombieKiller I noticed you mentioned the `params` keyword.

    I recently struggled with it, because I am used to always having a non-null array inside the method even if no values are passed in when calling it, so I have not bothered to do null checks (as I would not do any when coding against a regular .NET platform).

    This did work with Unity when my target platform was .NET based, but failed with a null reference in a recent WebGL build of mine. I suspect calls to a var-args method that do not pass a value for the "var-args" (`params`) parameter get stripped somewhere between the generated IL and the produced WebAssembly code in such a way that no allocation is performed and null is passed instead. Despite ReSharper protesting, I added a null check and the problem was fixed.

    However, I am still unsure whether it was caused by IL2CPP or the WebAssembly conversion; would you happen to know? I am asking because I'd be happy to wrap that null check in the most appropriate platform symbol check, e.g. `#if UNITY_WEBGL`.
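    For reference, the kind of defensive check being described looks like this (the method is made up; per the spec the array should arrive empty rather than null, so this only guards against a misbehaving backend):

    Code (CSharp):
    static class VarArgsSample
    {
        public static int CountArgs(params int[] values)
        {
            // Defensive: some build paths were observed passing null instead of
            // an empty array when no arguments are supplied.
            if (values == null)
                return 0;

            return values.Length;
        }
    }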
     
  31. x4000

    x4000

    Joined:
    Mar 17, 2010
    Posts:
    353
    I think TheZombieKiller had a lot of good points about why things are no longer quite so divergent, though, too. Params and arrays aside, it sounds like most other things are under control now.
     
    ivaylo5ev likes this.
  32. TheZombieKiller

    TheZombieKiller

    Joined:
    Feb 8, 2013
    Posts:
    265
    The specification states that calling a 'params' method without supplying any values for the 'params'-marked argument will pass an empty array, so if you're getting null then you've likely hit a Unity bug. I'm not sure at what stage of the build process it could be occurring, though; I'd recommend reporting it through the bug reporter.
     
    ivaylo5ev and x4000 like this.
  33. x4000

    x4000

    Joined:
    Mar 17, 2010
    Posts:
    353
    It sounds like an IL2CPP bug in his case.
     
    ivaylo5ev and TheZombieKiller like this.
  34. M_R

    M_R

    Joined:
    Apr 15, 2015
    Posts:
    559
    But you can explicitly pass an array into a 'params' argument, and that array can be null:
    Code (CSharp):
    void Foo(params int[] p) {}

    int[] gotcha = null;
    Foo(gotcha);
     
    ivaylo5ev and x4000 like this.
  35. Huszky

    Huszky

    Joined:
    Mar 25, 2018
    Posts:
    109
    Although generally speaking you are right that temporarily creating an array can be bad because of GC overhead, the example you pointed out can't really be done any other way. That method converts a UnityEngine.Vector2 to a System.Numerics.Vector<float>, which is a generic vector type of arbitrary size that only accepts initialization with either one fixed value or an array of values. The same goes for the other methods that return the Vector<int> or Vector<float> types.

    The methods which return the fixed-length vectors all use the appropriate constructors.

    Also, if anyone finds bugs or performance overheads in the provided code, please raise an issue on GitHub so that I can take a look at it faster :)
     
    x4000 likes this.
  36. x4000

    x4000

    Joined:
    Mar 17, 2010
    Posts:
    353
    Ooh, I missed that. Thanks for the note.

    In that case, personally my suggestion would be that you either not provide a direct conversion to that type, or that you name it something like _Expensive or _Inefficient at the end. One of the things with libraries that people pick up and use is that they tend to think of most things as being equally efficient, when really that's not the case.

    For System.Numerics, IIRC Vector4 should basically always be used unless you need more than four dimensions, because 4 is the width of the SIMD operation in the first place. So for the most efficient results (which is mirrored in some of unity's documentation about their own math library), people should be using Vector4 if at all possible, even if they only need 2 or 3 of the actual values in it.
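    As a small sketch of that idea (padding 3-component data into a Vector4 so the arithmetic maps onto a full SIMD lane; the helper is made up):

    Code (CSharp):
    using System.Numerics;

    static class Vector4Sample
    {
        // Squared distance between two positions stored in Vector4s; W is just
        // padding and is zeroed out before the dot product.
        public static float DistanceSquared3(Vector4 a, Vector4 b)
        {
            Vector4 d = a - b;
            d.W = 0f;
            return Vector4.Dot(d, d);
        }
    }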

    It's true that for purposes of creating a conversion, your code is as efficient as it can get while remaining threadsafe. If you wanted to implement a non-threadsafe version that used a fixed static array, that would be an option for more speed when thread safety is not required.

    Overall, I get a little wary of methods in SDKs that have an unexpectedly high execution cost, so that's the nature of my suggestions. We could definitely just say "RTFM" to the end programmers, but I suspect that doesn't always work, as we know.
     
  37. TheZombieKiller

    TheZombieKiller

    Joined:
    Feb 8, 2013
    Posts:
    265
    If you're fine with taking a dependency on System.Buffers, you can use ArrayPool to reduce the allocations. Alternatively, you could consider a [ThreadStatic] field since the array size is constant:
    Code (CSharp):
    [ThreadStatic]
    static float[] s_ToSystemGeneric;

    public static SVectorF ToSystemGeneric(this UVector3 vector)
    {
        if (s_ToSystemGeneric == null)
            s_ToSystemGeneric = new float[SVectorF.Count]; // Vector<float>'s array constructor needs at least Count elements

        s_ToSystemGeneric[0] = vector.x;
        s_ToSystemGeneric[1] = vector.y;
        s_ToSystemGeneric[2] = vector.z;
        return new SVectorF(s_ToSystemGeneric);
    }
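    For comparison, a rough sketch of the ArrayPool variant mentioned above (assumes the System.Buffers package is referenced and reuses the same UVector3/SVectorF aliases):

    Code (CSharp):
    using System;
    using System.Buffers;
    using UVector3 = UnityEngine.Vector3;
    using SVectorF = System.Numerics.Vector<float>;

    public static class PooledConversions
    {
        public static SVectorF ToSystemGenericPooled(this UVector3 vector)
        {
            // Rent a buffer big enough for a full Vector<float>; rented arrays
            // can contain stale data, so clear the lanes we don't overwrite.
            float[] buffer = ArrayPool<float>.Shared.Rent(SVectorF.Count);
            try
            {
                Array.Clear(buffer, 3, buffer.Length - 3);
                buffer[0] = vector.x;
                buffer[1] = vector.y;
                buffer[2] = vector.z;
                return new SVectorF(buffer);
            }
            finally
            {
                ArrayPool<float>.Shared.Return(buffer);
            }
        }
    }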
     
    x4000 likes this.
  38. x4000

    x4000

    Joined:
    Mar 17, 2010
    Posts:
    353
    Huh! I really need to brush up on some of the newer .NET features out there. ThreadStatic looks incredibly useful. For an extensions class like this, that solves the one and only problem, which is thread safety. If someone uses this inside Parallel.ForEach or an older ThreadPool object, then I suppose I'm not entirely sure how many allocations of that array you'd wind up with. But assuming it's not a one-off usage of this function on those threads, it would see lots of repeat use.

    That's a pretty fascinating tip, thanks for sharing that. System.Buffers is also interesting, since it's threadsafe. Normally I use my own pooling classes for things like that, but it's a real pain having to come up with a generalized lock strategy. I have really been enjoying System.Collections.Concurrent for how well that does with a ton of threads. Seems like this is in a similar boat, but using pooling rather than shared access.
     
    TheZombieKiller likes this.
  39. ivaylo5ev

    ivaylo5ev

    Joined:
    Sep 2, 2016
    Posts:
    10
    Beware of `ThreadStatic` -- it has no working equivalent for WebGL/WebAssembly (yet). Unlike most of the stuff in the `System.Threading` namespace, which you are not supposed to use for WebGL-based builds, this attribute will not break your build, but the behaviour of your code will not be what you intended.
     
  40. x4000

    x4000

    Joined:
    Mar 17, 2010
    Posts:
    353
    Just out of curiosity, not trying to offend: is WebGL really used much? PCs, consoles, mobile... those all have visible and thriving ecosystems. For WebGL, I assume it is the replacement for Flash Player for purposes of Flash-like games or game demos.

    In 2010 or so I did a demo for one of my titles in the precursor to WebGL through unity, and I found that basically nobody played it. Everyone just downloaded the actual game on Steam or ignored it entirely. That was back when you (as a player) had to have the unity player browser extension installed, so that was more of a pain to do than WebGL is now.

    I'm curious if there's a new market that I'm not aware of that is appearing somewhere. Kind of a replacement newgrounds situation.

    For non-gaming purposes, I could also see the utility of the WebGL capabilities, so I was assuming that some of it was that.
     
    ivaylo5ev likes this.
  41. ivaylo5ev

    ivaylo5ev

    Joined:
    Sep 2, 2016
    Posts:
    10
    @x4000, in my situation, we plan on using WebGL builds to show off small free demos of the game(s) we are making, so that potential customers can experience the gameplay before making a decision. Therefore, we need at least a portion of the game to function properly, and I've managed to identify a bunch of no-nos along the way. ThreadStatic was one of the hardest to get over, because it was used internally by some .NET libraries we've adopted (System.Collections.Immutable in particular). And as I mentioned above, it did not lead to build errors; the effect of it not functioning properly became evident only at runtime, which is pretty undesirable.

    Another case is when you develop a framework or other package that is meant to be shared with a wider audience (e.g. a tool targeting the Asset Store). You'd be doing yourself a disservice if your customers end up needing WebGL and you fail to deliver properly.

    Other than that, I know people who'd utilize WebGL for non-gaming projects, such as architecture visualizations.

    Nevertheless, I feel I should not leave WebGL behind as an option on the projects I work on, because the WebGL and WebAssembly technologies will become quite potent in the near future. At the stage of my current work I feel more comfortable knowing I can target this platform whenever necessary.
     
    Last edited: Jan 30, 2021
    x4000 likes this.
  42. TheZombieKiller

    TheZombieKiller

    Joined:
    Feb 8, 2013
    Posts:
    265
    On another note, I thought it might be useful to point out that you can still implement non-allocating versions of the conversion (based on how the new .NET 5/.NET Core 3 Span-based constructor overloads work internally):

    Code (CSharp):
    using System.Numerics;
    using Vector3 = UnityEngine.Vector3;

    static unsafe Vector<float> ToVector(Vector3 vector)
    {
        if (Vector<float>.Count == 3)
            return *(Vector<float>*)&vector;

        var values        = stackalloc float[Vector<float>.Count];
        *(Vector3*)values = vector;
        return *(Vector<float>*)values;
    }

    // If you have System.Runtime.CompilerServices.Unsafe
    // and System.Memory installed in your project:

    using System;
    using System.Numerics;
    using System.Runtime.InteropServices;
    using System.Runtime.CompilerServices;
    using Vector3 = UnityEngine.Vector3;

    static Vector<float> ToVector(Vector3 vector)
    {
        if (Vector<float>.Count == 3)
            return Unsafe.ReadUnaligned<Vector<float>>(ref Unsafe.As<Vector3, byte>(ref vector));

        var values = (Span<float>)stackalloc float[Vector<float>.Count];
        ref var b0 = ref Unsafe.As<float, byte>(ref MemoryMarshal.GetReference(values));
        Unsafe.WriteUnaligned(ref b0, vector);
        return Unsafe.ReadUnaligned<Vector<float>>(ref b0);
    }
     
    Last edited: Jan 30, 2021
    Vuh-Hans, Huszky, x4000 and 1 other person like this.
  43. x4000

    x4000

    Joined:
    Mar 17, 2010
    Posts:
    353
    Thank you for the context! I am always interested to hear how others are using things. Aside from what I observe directly with companies I work with, or what Unity discusses in their blog posts or surveys, it's hard to get a picture. Very interesting indeed.
     
    ivaylo5ev likes this.
  44. Huszky

    Huszky

    Joined:
    Mar 25, 2018
    Posts:
    109
    I have tried this:
    Code (CSharp):
    1. public static unsafe SVectorF ToSystemGeneric(this UVector3 vector)
    2. {
    3.     if (SVectorF.Count == 3) return *(SVectorF*)&vector;
    4.  
    5.     var values = stackalloc float[SVectorF.Count];
    6.     *(UVector3*)values = vector;
    7.     return *(SVectorF*)values;
    8. }
    But it won't compile; it says "Cannot declare a pointer to a managed type 'System.Numerics.Vector<float>'" on lines 3 and 7.
    Any idea on that one? I have never dealt with pointers in C# before.
     
  45. TheZombieKiller

    TheZombieKiller

    Joined:
    Feb 8, 2013
    Posts:
    265
    Prior to C# 8, you cannot declare a pointer to a generic type (even when the type otherwise satisfies the unmanaged constraint). If you're using C# 7 or older, you'll need to use the other approach with the Unsafe class.

    Edit: Actually, you could also do this:
    Code (CSharp):
    using System.Runtime.InteropServices;
    using UVector3 = UnityEngine.Vector3;
    using SVectorF = System.Numerics.Vector<float>;

    // ...

    [StructLayout(LayoutKind.Explicit)]
    struct VectorUnion
    {
        [FieldOffset(0)]
        public UVector3 UVector;

        [FieldOffset(0)]
        public SVectorF SVector;
    }

    public static SVectorF ToSystemGeneric(this UVector3 vector)
    {
        return new VectorUnion { UVector = vector }.SVector;
    }
     
    Last edited: Feb 3, 2021
    ivaylo5ev and Huszky like this.
  46. Huszky

    Huszky

    Joined:
    Mar 25, 2018
    Posts:
    109
    Yeah, so it turns out the original implementation did not even work, so I went with the solution using the explicit struct layout. I also added unit tests for the generic variants. I will create a new version with the fixes; I also did a benchmark (converting 100,000 vectors):
    Code (CSharp):
    BenchmarkDotNet=v0.12.1, OS=Windows 10.0.19042
    AMD Ryzen 5 1600X, 1 CPU, 12 logical and 6 physical cores
      [Host]     : .NET Framework 4.8 (4.8.4300.0), X64 RyuJIT
      .NET 4.7.2 : .NET Framework 4.8 (4.8.4300.0), X64 RyuJIT

    Job=.NET 4.7.2  Runtime=.NET 4.7.2

    |  Method |     Mean |     Error |    StdDev |     Gen 0 | Gen 1 | Gen 2 | Allocated |
    |-------- |---------:|----------:|----------:|----------:|------:|------:|----------:|
    |   Alloc | 2.319 ms | 0.0406 ms | 0.0380 ms | 5839.8438 |     - |     - | 5616878 B |
    | NoAlloc | 1.507 ms | 0.0115 ms | 0.0102 ms |         - |     - |     - |         - |
     
    ivaylo5ev and x4000 like this.
  47. x4000

    x4000

    Joined:
    Mar 17, 2010
    Posts:
    353
    Wow, the non-allocating method cut down processing time to around 60% of the original, PLUS avoided over 5MB of transient heap allocations. That's extremely nontrivial. Well done!
     
  48. Huszky

    Huszky

    Joined:
    Mar 25, 2018
    Posts:
    109
    x4000 likes this.