How I build UI in Unity

I recently ran across a question on Reddit about UI woes in Unity. A large percentage of the replies suggested that the OP was not alone in struggling with Unity UI, which came as a bit of a surprise to me.

Ever since Unity introduced the RectTransform I’ve absolutely loved doing UI in Unity. So much so that I’ve built entire (non-game) apps in Unity to avoid having to muck about with Android or iOS UI builders.

I don’t know if I’m just weird, or I’ve found a flow that just works, but I figured I’d share it, and let you be the judge 🙂

There’s a project on GitHub here:

https://github.com/BoaNeo/BoaNeoTools

Canvas & RectTransform

The essential components of UI in Unity are obviously the Canvas and the associated RectTransform that replaces the regular Transform on all children of the Canvas. I’m not going to go into any great detail explaining these, but I feel there are a couple of things I should at least point out:

  1. Canvas scaling is important. Getting the scaling settings correct is key to making your UI automatically adapt to various screen sizes and device types. You need to understand the scaling modes, the purpose of matching to either the width or the height of your reference resolution, as well as the meaning of reference pixels per unit.
  2. The RectTransform allows for both relative and absolute offsetting at the same time (there’s a short code sketch after this list). Keep this in mind:
    • Min/Max anchors define the outer anchors of your rect relative to the parent. “Min X” defines the left edge and “Max X” defines the right edge, relative to the width of the parent. Same for Y. Setting both min and max to the same value defines an explicit anchor point.
    • Left/Top/Right/Bottom are absolute unit offsets from the anchors and appear whenever min != max.
    • X/Y are absolute unit positions of your pivot inside your parent anchor box and appear when min==max.
    • Width/Height are absolute unit sizes of your rect and appear when min==max.
    • Pivot is the relative location of (0,0) inside your rect.
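
To make the anchor rules above concrete, here is a small sketch that docks a panel to the bottom edge of its parent from code. The RectTransform properties are standard Unity API; the method name and the values are made up for illustration:

using UnityEngine;

public static class RectTransformExample
{
  // Stretch a rect across the bottom of its parent, 'height' units tall
  public static void DockToBottom(RectTransform rt, float height)
  {
    rt.anchorMin = new Vector2(0f, 0f);     // left edge follows parent's left; bottom anchored at parent's bottom
    rt.anchorMax = new Vector2(1f, 0f);     // right edge follows parent's right; min Y == max Y => explicit anchor in Y
    rt.pivot = new Vector2(0.5f, 0f);       // (0,0) of the rect sits at its bottom centre
    rt.offsetMin = new Vector2(0f, 0f);     // absolute Left/Bottom offsets from the anchors
    rt.offsetMax = new Vector2(0f, height); // absolute Right/Top offsets - the top ends up 'height' units above the anchor line
  }
}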

Standard Components

The only standard components that I use are (and these cover the vast majority of what I need):

  • Image / RawImage
  • TextMeshPro
  • ScrollView
  • Button
  • Slider
  • InputField

In addition to these, I have a few helpers that I’ve built over the years, most notably:

  • GridBuilder – container for programmatically laying out UI with prefabs.
  • ViewAnimator – general helper for showing and hiding panels at screen edges.

Code & File Structure

I generally try to keep code local to its place in the scene hierarchy. For example, if I have the following scene graph:

  • A
    • B
      • C
      • D
    • E

Also, let’s say I have custom scripts on A, B and E that deal with some UI logic (C and D are just standard components, say an Image and a TextMeshPro node). Then neither A nor E will touch C and D; all access should go through B. Events from C and D go to B, nowhere else. B effectively “encapsulates” C and D. There is no way to rigorously enforce this, it’s more of a frame of mind, but creating prefabs can help. I also generally try to avoid having siblings know about each other (i.e. B and E should talk to A, not to each other).
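
As a minimal sketch of what I mean (all names here are hypothetical), the ‘B’ script might look something like this – it owns its children and exposes a small API and an event, so neither A nor E ever needs to know that C and D exist:

using UnityEngine;
using UnityEngine.Events;
using UnityEngine.UI;
using TMPro;

public class B : MonoBehaviour
{
  [SerializeField] private Image _icon;            // C - only B touches this
  [SerializeField] private TextMeshProUGUI _label; // D - only B touches this

  public UnityEvent onActivated; // A subscribes here, never to C or D directly

  // Siblings and parents go through this API instead of digging into B's children
  public void SetLabel(string text)
  {
    _label.text = text;
  }

  // Wired up as the click handler on a Button child - forwards the event upwards
  public void HandleChildClicked()
  {
    onActivated.Invoke();
  }
}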

Speaking of prefabs, I used to keep prefabs separated from code in the project structure, because behaviours are generally designed to be used by many prefabs. But I found that for UI purposes the code is, first of all, usually very, very simple and, secondly, often tightly bound to one specific prefab. For this reason I now always name and store prefabs and their main code together.

In the above example, if B was a prefab named ‘B’ it would have a ‘B’ script on its root node, and that script would be stored next to the B prefab in the project. I would always (and only) refer to this prefab via the B script – even if it had a bunch of other behaviours on it. From the rest of the project’s perspective, it’s a ‘B’, nothing else. This makes it very easy to find things, and local changes can be made with minimal risk.

Avoid the temptation to re-use a simple script for two different prefabs just because they currently both have a label and fire a click event. If there’s a need for two different button prefabs, I can almost guarantee that you will eventually need different code for them as well. There is usually so little code in these scripts that having several code-identical versions with different names is preferable to the headache of trying to keep track of what else might break if you change a script shared by 15 different prefabs.

GridBuilder

GridBuilder is my go-to component for any UI that can’t be laid out statically and needs to be created from code. I know Unity has its own variation of this, and also a system for pooled objects, but both were introduced long after my own implementations had become an integral part of my UI flow, so I don’t really care.

Whether you should, is entirely up to you (obviously), but here is how it works:

To build a grid, add the GridBuilder component to any RectTransform in your scene and then, between calls to grid.BeginUpdate() and grid.EndUpdate(), add cells using grid.AddCell(). Each cell is given by a prefab that must extend the GridCell base class. There are a few more methods to add spacing or entire rows, but that’s about it.

Internally, the grid uses an object pool to minimise allocations and to make sure a minimal amount of work is done to update the grid. You can essentially build the entire grid in Update() every frame – as long as the structure of the grid remains constant, there is very little overhead.
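
A minimal usage sketch, rebuilding the grid every frame as just described. GridBuilder, BeginUpdate(), EndUpdate(), AddCell() and GridCell are the names from the actual library; ScoreCell and the exact AddCell() signature (here assumed to take a prefab and return the pooled instance) are assumptions for illustration:

using System.Collections.Generic;
using UnityEngine;

public class ScoreCell : GridCell // hypothetical cell prefab script
{
  public TMPro.TextMeshProUGUI label;
  public void SetScore(int score) { label.text = score.ToString(); }
}

public class HighscoreList : MonoBehaviour
{
  public GridBuilder grid;          // GridBuilder sitting on a RectTransform
  public ScoreCell scoreCellPrefab; // refer to the project prefab, not a scene instance

  private List<int> _scores = new List<int>();

  void Update()
  {
    // Rebuilding every frame is fine - the internal pool reuses cells as long
    // as the structure of the grid stays the same
    grid.BeginUpdate();
    foreach (int score in _scores)
    {
      ScoreCell cell = (ScoreCell)grid.AddCell(scoreCellPrefab);
      cell.SetScore(score);
    }
    grid.EndUpdate();
  }
}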

The grid is filled from the top left corner and can be asked to automatically extend the width and/or height to match the added cells. The row height is determined by the tallest cell in the row.

Now, there are a million features I could have added to this, such as merging of cells and different layout orientations (I did have a variation at one point that allowed right-to-left fill, but I never used it, so I eventually removed the code in a refactoring), but over the 10+ years I’ve used this, I really haven’t had the need, and there’s something to be said for keeping things simple.

Do note that the grid builder itself is a GridCell, so you *can* create compound cells if you must.

Because the grid cleans itself before being filled, and because it re-uses cells, I usually keep all the cell prefabs used in any given grid in place at design time for easy editing. Just remember to refer to the actual project prefabs, and not the instances in the scene graph, since those will be removed on launch.

You should generally define your grid cell prefabs with a pivot in the upper left corner and a fixed width and height, though I’m sure there are use-cases where you might want to do something other than that.

Unity and Refactoring

I use IntelliJ’s “Rider” for C# development in Unity and, like all of IntelliJ’s IDEs, it comes with top-of-the-class refactoring support.

Unfortunately, this does not extend to Unity’s serialised object data and so any refactoring of serialised fields on your MonoBehaviours will break objects that are using that behaviour – both prefabs and objects in your scene.

I’m quite sure it’s perfectly possible to fix this in Rider’s Unity plugin (IntelliJ: hint hint :)), but until that happens, this is how I deal with it:

Whenever I need to change a public field (or any field marked with [SerializeField]) on a MonoBehaviour that I know is already used on prefabs or in a scene, I first add the new structure to the MonoBehaviour, then let it implement ISerializationCallbackReceiver. In OnAfterDeserialize, I then add code to copy the data from the old structure onto the new one. When this is done, I switch to Unity, which effectively causes it to reload all prefabs and scenes, applying my transformation code, so that I now have the old data in the new structure and can remove the old structure without losing anything.

For example, let’s say I have made this fantastic behaviour:

public class Projectile : MonoBehaviour
{
  [SerializeField] float angleInRadians;
}

And a level designer calls me up and says: “Radians? WTF dude?”. I guess that means he’d probably prefer to specify this in degrees. I could just change the meaning in the code, but then the name is misleading and existing values will be wrong. So I would like “angleInRadians” to be called “angleInDegrees” – if I just rename it, whatever values are assigned to this in Unity will be lost, so we instead add angleInDegrees as a new field, and let’s make it an integer, just because:

public class Projectile : MonoBehaviour
{
  [SerializeField] float angleInRadians;
  [SerializeField] int angleInDegrees;
}

Then, let it implement ISerializationCallbackReceiver:

public class Projectile : MonoBehaviour, ISerializationCallbackReceiver
{
  [SerializeField] float angleInRadians;
  [SerializeField] int angleInDegrees;

  public void OnBeforeSerialize() { }
  public void OnAfterDeserialize() { }
}

And add code to convert the value:

public class Projectile : MonoBehaviour, ISerializationCallbackReceiver
{
  [SerializeField] float angleInRadians;
  [SerializeField] int angleInDegrees;

  public void OnBeforeSerialize() { }
  public void OnAfterDeserialize()
  {
    angleInDegrees = (int)(180 * angleInRadians / Mathf.PI);
  }
}

When we return to Unity, it will compile and run the code, updating our new structure with the correct value.

Now, this looks all fine and dandy in the editor, but there’s one little catch – the modifications we made went unnoticed by Unity’s change tracking system, so it does not think the objects have changed and thus will not save them unless we explicitly, and manually, cause it to.

Having to select each object and prefab kind of defeats the whole purpose of what we’re trying to do. Ideally we’d want to automatically mark the objects dirty, but we can’t do that during de-serialisation since the APIs for this are off-limits in this context (for obvious reasons), so I have added two small scripts that help me do this:

One is just a static list of objects where I can track all the mutations I make on load:

using System.Collections.Generic;
using UnityEngine;

public class MutatedOnDeserialisation
{
  protected static List<Object> _mutated = new List<Object>();

  public static void Register(Object obj)
  {
    _mutated.Add(obj);
  }
}

The other is an Editor script that extends the one above with a File menu command to traverse the list and mark all the objects as dirty:

using UnityEditor;
using UnityEngine;

public class MutatedOnDeserialisationEditor : MutatedOnDeserialisation
{
  [MenuItem("File/Mark Mutated Objects")]
  public static void MarkMutated()
  {
    foreach (var obj in _mutated)
    {
      if (obj)
      {
        EditorUtility.SetDirty(obj);
        Debug.Log("Marked " + obj);
      }
      else
      {
        Debug.Log("Skipping " + obj);
      }
    }
  }
}

I must admit that I don’t understand why there are null refs in there, but there are – I suspect a combination of temporary objects during load and Unity’s odd boolean operator overload is playing tricks on me, but as far as I can tell it’s OK to just ignore them.

Also of note: Unity’s documentation says to use Undo.RecordObject() instead of SetDirty(), but for whatever reason, that did not work for me.

In any case, with these two scripts in place, we just need one additional line in the de-serialisation code to make it work, namely the call to register our mutated object:

public void OnAfterDeserialize() 
{
  angleInDegrees = (int)(180*angleInRadians/Mathf.PI); 
  MutatedOnDeserialisation.Register(this); 
}

This is obviously (a lot) more work than simply pressing SHIFT+F6, but it’s also a lot less than going through every prefab and object instance in the project and updating it manually, especially in cases that are less trivial than this one, like changing from a primitive type to a list or struct (and there’s no risk of typos or accidentally missed occurrences).

Still, if anyone has a better way of doing this, I would love to hear about it 🙂

 

Drawing Borders in Deep Rift 9

I’ve spent a lot of time over the past couple of years working on a multiplayer strategy game called Deep Rift 9. You can read more about the game here:

Deep Rift 9 Web Site

The game is being developed by Norsedale ApS, and the initial code-base was lifted from my failed Hexadome 2 project, but has since evolved quite a bit in every direction. More importantly, DR9 has a depth and a visual appearance far beyond what I could have hoped to cook up myself, so it’s all for the better.

In any case, since we’re now entering into the treacherous minefield of public (alpha) testing 🙂 I figured it would be a good time to release some of my development notes on the project – these posts will also be available on the DR9 web site.

Starting kind-of backwards, here’s the first one on how we calculate and render the dynamic borders between players’ land areas.

Voronoi Diagrams

I don’t remember when I first heard or read about Voronoi Diagrams, but I do remember these guys: http://www.big-robot.com giving a rather funny presentation of how they used Voronoi diagrams to procedurally generate maps in “Sir, you are being hunted” back at Unite Nordic 2013 (jeez, time flies).

Actually, the talk is on YouTube if you’re interested – I’m in the front row, purple shirt, bald spot 🙂

There’s a good page on WikiPedia if you want the details:

WikiPedia on Voronoi

The TL;DR version: A Voronoi diagram divides the plane around a set of seed points into convex shapes. There is one seed point in each shape, and each shape contains all the points that are closer to that specific seed than they are to any other seed point. The edges of each shape naturally become the borders to the surrounding shapes.
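
In set notation, the cell owned by a seed s is simply V(s) = { p : dist(p, s) ≤ dist(p, t) for every other seed t }, and a border is the locus of points equidistant to the two nearest seeds.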

Sounds perfect.

Implementation

Luckily for me, I did not have to take this all the way from mathematical concept to working algorithm since several talented people have already laid the foundation and generously shared their implementations. Unfortunately, none of them decided to do it in C#, which is what I needed, so I did have to port the code.

The implementation I ended up using is a C++ derivation of Steven Fortune’s original C implementation from AT&T, made by Shane O’Sullivan:

Shane O’Sullivans C++ Voronoi implementation

The original source is available from Steven’s web page, here:

Steven Fortune’s original C implementation

My C# adaptation is below if anyone feels like saving a few hours of tedious pointer-to-reference conversion ;).

Performance

Even though performance is not critical to DR9 – the points that serve as Voronoi “seeds” in DR9 are the cities, and they don’t move, nor are they created or destroyed at a very high frequency, so the Voronoi generator does not need to run at anywhere near full frame-rate speed – I still decided to do some quick performance tests.

I got rid of a bunch of C and C++ legacy annoyances like additional counters and funky size conversions that are not needed in C# and got a small ~20% performance bump for my efforts (probably not because there was anything wrong with the original code, but more likely because I had incurred a penalty when porting, trying to be strict rather than fast).

With this out of the way, and not feeling too eager to mess with the core algorithm, I zoned in on the use of hash tables. Even though hash tables are generally good for performance – at least at scale, something about this particular use-case just didn’t taste right.

Going on nothing but a hunch, I did not really expect to find any gains, so as a quick test, I just dumbed the hash tables down to a size of one, effectively turning them into complicated arrays. Somewhat to my surprise, performance increased by another 15%. I initially assumed that my dataset was simply too small for the hash tables to be useful, and I was just suffering the overhead without the benefit, so I bumped my seed count from 128 to 2048.

This had the completely unexpected effect of being 25% faster than the hashed version. Not only was hashing slower, it also scaled worse:

[Screenshot: performance comparison of the hashed vs. non-hashed versions]

I’m not sure why this is and I have not had time to dig deeper, but there’s a number of things that could be worth investigating:

  1. The nature of the algorithm may not need true random access, or my data set may have been skewed in such a way that it didn’t – if the searches mostly find candidates in close proximity to the current position, a hash table has very little use. I did try to shuffle the data, but that only pushed the results further in the direction of the non-hashed version, if only marginally so.
  2. The hash table implementation may have been broken, or using an inappropriate hashing function. I looked at this briefly and it seemed reasonable, using spatial position for hashing which matches the even distribution of my seeds nicely. I didn’t do any profiling of the actual buckets to see if the fill was uneven, but I doubt it.
  3. The hash table may break optimizations done by either the C# compiler, the run-time, or the CPU (this could be anything from overflowing the pipeline or missing the cache to fouling up the CPU’s branch prediction). Generally, less code, fewer branches, fewer method calls and less messing with shared variables result in better run-time optimization, even if your loops get a bit longer. A long lesson learned from years of Java programming – you can really mess up the hotspot compiler if you try to be clever on its behalf.

Personally, I have my money on #3 🙂

In any case, once I had the Voronoi structure, connecting neighbouring shapes with the same owner was trivial, and so was traversing the surrounding edges to create a mesh for the border.

The following screenshot shows borders in red, and lines connecting neighbours in gray. Seeds are considered to have the same owner if they have the same color.

[Screenshot: borders in red, neighbour connections in gray]

What did turn out to be something of a hassle was rendering it.

Ideally I just wanted it painted as a flat configurable-width line on top of the map, but the map is not flat and it is not “a” map, but rather a bunch of map-patches that are dynamically created and destroyed as the view changes. Because the shape of the Voronoi cells is highly dependent on neighbouring patches (in fact, that’s all it depends on), it was not practical to create Voronoi meshes for each patch – the border mesh had to be created independently of the patches.

Creating a mesh that followed the terrain was not trivial for two reasons. The first is that the actual map patches are not a true representation of the procedural height map. The patches are sub-divided and smoothed to create a nicer terrain, and they also contain objects such as trees and mountains, so the height at any given location was not readily available. The second problem is that the Voronoi diagram extends towards infinity (or 32000, as infinity is defined in DR9 🙂 ), and creating a mesh with that many vertices would never work.

I had to find a way to draw the lines on the map, preferably by storing only the end-points of each border.

My first attempt involved a rather exotic use of the depth buffer and a custom shader. I built a mesh of vertical planes and rendered their intersection with the map patches by measuring the distance in the depth buffer. It worked OK from certain angles, but not at all for borders that were perfectly vertical relative to the camera (which turned out to be a fairly common case).

[Screenshot: depth-buffer border rendering artifacts]

Fiddling with the shader, I was able to make it less horrible, but never really good (notice how lines get thinner as they become vertical):

[Screenshot: improved shader – lines still get thinner as they become vertical]

My second attempt was to create extruded upside-down U shapes instead of vertical planes. I used the actual procedural height data plus some small distance as the height of the inverted U, but limited to 10 points per edge. This meant that most “real” borders would get subdivided into small enough pieces that they kind of followed the terrain, while the outer ones that extended toward “infinity” would not generate ridiculous amounts of vertices.

The problem with this approach was that the border would sometimes extend high above the terrain, and other times penetrate through it, disappearing altogether.

The final fix turned out to be as simple as it comes: I modified the shader to ignore Z-test and reduced the geometry to be only the flat upper face of the inverted U. Because the line kind of follows the terrain, it never gets wildly out of place, and even when it does deviate, nobody can tell because there is no telling where it *should* have been.

This would not have worked if the user was able to tilt or rotate the camera as it would have broken the illusion, but as it happens, they can’t 🙂

[Screenshot: final borders following the terrain]

My C# port is below. The way this works is that you create an instance of VoronoiGenerator supplying any class that you want to associate with the seeds – it does not matter what it is, only that you can somehow convert it into a 2D location.

You then call AddSeed with each of the seeds you want included, and finally Generate to build the actual graph. Generate takes a locationResolver as its last parameter – this is a function that, given a seed of your declared type, must return a 2D position.

When Generate completes, you can get the first edge in the graph with GetFirstEdge and then loop the graph by following edges (edge.next). Each graph edge contains the border end points as well as a reference to the seeds on either side of the border.
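
Putting that together, usage looks roughly like this. ‘City’, its ‘Position2D’ field and the chosen bounds are hypothetical stand-ins; AddSeed, Generate, GetFirstEdge and the GraphEdge fields are from the port below:

using System.Collections.Generic;
using UnityEngine;
using BoaNeoTools.Source.MathNoHash;

public class City
{
  public Vector2 Position2D; // any way to get a 2D location works
}

public static class BorderBuilder
{
  public static void BuildBorders(List<City> cities)
  {
    var generator = new VoronoiGenerator<City>();
    foreach (City city in cities)
      generator.AddSeed(city);

    // Bounds, minimum distance between seeds, and the seed-to-location resolver
    generator.Generate(-32000, 32000, -32000, 32000, 0.01f, c => c.Position2D);

    // Walk the resulting graph - each edge holds one border segment plus the seeds on either side
    for (GraphEdge<City> edge = generator.GetFirstEdge(); edge != null; edge = edge.next)
    {
      Debug.Log("Border (" + edge.x1 + "," + edge.y1 + ") - (" + edge.x2 + "," + edge.y2 + ")");
    }
  }
}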

/*
* The author of this software is Steven Fortune.  Copyright (c) 1994 by AT&T
* Bell Laboratories.
* Permission to use, copy, modify, and distribute this software for any
* purpose without fee is hereby granted, provided that this entire notice
* is included in all copies of any software which is or includes a copy
* or modification of this software and in all copies of the supporting
* documentation for such software.
* THIS SOFTWARE IS BEING PROVIDED "AS IS", WITHOUT ANY EXPRESS OR IMPLIED
* WARRANTY.  IN PARTICULAR, NEITHER THE AUTHORS NOR AT&T MAKE ANY
* REPRESENTATION OR WARRANTY OF ANY KIND CONCERNING THE MERCHANTABILITY
* OF THIS SOFTWARE OR ITS FITNESS FOR ANY PARTICULAR PURPOSE.
*/

/* 
* This code was originally written by Stephan Fortune in C code.  I, Shane O'Sullivan, 
* have since modified it, encapsulating it in a C++ class and, fixing memory leaks and 
* adding accessors to the Voronoi Edges.
* Permission to use, copy, modify, and distribute this software for any
* purpose without fee is hereby granted, provided that this entire notice
* is included in all copies of any software which is or includes a copy
* or modification of this software and in all copies of the supporting
* documentation for such software.
* THIS SOFTWARE IS BEING PROVIDED "AS IS", WITHOUT ANY EXPRESS OR IMPLIED
* WARRANTY.  IN PARTICULAR, NEITHER THE AUTHORS NOR AT&T MAKE ANY
* REPRESENTATION OR WARRANTY OF ANY KIND CONCERNING THE MERCHANTABILITY
* OF THIS SOFTWARE OR ITS FITNESS FOR ANY PARTICULAR PURPOSE.
*/

/**
 * C# adaption for use with Unity by Niels Jørgensen, BoaNeo AB (http://www.boaneo.com)
 *
 * Permission to use, copy, modify, and distribute this software for any
 * purpose without fee is hereby granted, provided that this entire notice
 * is included in all copies of any software which is or includes a copy
 * or modification of this software and in all copies of the supporting
 * documentation for such software.
 * THIS SOFTWARE IS BEING PROVIDED "AS IS", WITHOUT ANY EXPRESS OR IMPLIED
 * WARRANTY.  IN PARTICULAR, NEITHER THE AUTHORS NOR AT&T MAKE ANY
 * REPRESENTATION OR WARRANTY OF ANY KIND CONCERNING THE MERCHANTABILITY
 * OF THIS SOFTWARE OR ITS FITNESS FOR ANY PARTICULAR PURPOSE.
 */

using System;
using System.Collections.Generic;
using UnityEngine;

namespace BoaNeoTools.Source.MathNoHash
{
  class Pool<T> where T : new()
  {
    private List<T> _memory = new List<T>();
    private int     _free;
    public int length { get { return _free; } }

    public void Reset()
    {
      _free = 0;
    }

    public T Allocate()
    {
      if (_free >= _memory.Count)
        _memory.Add(new T());

      T obj = _memory[_free++];
      return obj;
    }

    public void Claim(T curr)
    {
      int idx = _memory.IndexOf(curr);
      if (idx >= 0 && idx < _free)
      {
        _memory.RemoveAt(idx);
        _free--;
        _memory.Add(curr);
      }
    }
  }

  class Point 
  {
    public float x;
    public float y;
  }

  class Seed : Point
  {
    public object source;
    public int seednbr;
    public int refcnt;

    public Seed()    {    }
    public Seed(object seedSource)
    {
      source = seedSource;
    }
  }

  class Edge
  {
    public float a;
    public float b;
    public float c;
    public Seed[] ep = new Seed[2];
    public Seed[] reg = new Seed[2];
    public int edgenbr;
  }

  public class GraphEdge<T>
  {
    public float x1;
    public float y1;
    public float x2;
    public float y2;
    public T s1;
    public T s2;
    public GraphEdge<T> next;
  }


  class Halfedge
  {
    public Halfedge ELleft;
    public Halfedge ELright;
    public Edge ELedge;
    public int ELrefcnt;
    public int ELpm;
    public Seed vertex;
    public float ystar;
    public Halfedge PQnext;
  }

  public class VoronoiGenerator<T>
  {
    Edge DELETED = new Edge(); // Used for ugly comparison

    const int le = 0;
    const int re = 1;

    private List<Seed> _userSeeds = new List<Seed>();
    private int _seedIdx;

    private GraphEdge<T> _allEdges;

    private Pool<Point> _pointPool = new Pool<Point>();
    private Pool<GraphEdge<T>> _graphEdgePool = new Pool<GraphEdge<T>>();
    private Pool<Halfedge> _halfedgePool = new Pool<Halfedge>();
    private Pool<Seed> _internalSeedPool = new Pool<Seed>();
    private Pool<Edge> _edgePool = new Pool<Edge>();

    Halfedge ELleftend;
    Halfedge ELrightend;

    float xmin, xmax, ymin, ymax;

    int nvertices;

    Halfedge PQhash;
    int PQcount;

    float _perimeterXmin, _perimeterXmax, _perimeterYmin, _perimeterYmax;

    float _minDistanceBetweenSeeds;

    public GraphEdge<T> GetFirstEdge()
    {
      return _allEdges;
    }

    public void AddSeed( T seedSource )
    {
      _userSeeds.Add( new Seed(seedSource) );
    }

    public void RemoveSeed( T seedSource )
    {
      for (int i = 0; i < _userSeeds.Count; i++)
      {
        if (_userSeeds[i].source == (object)seedSource)
        {
          _userSeeds.RemoveAt(i);
          return;
        }
      }
    }

    public void Generate(float minX, float maxX, float minY, float maxY, float minDist, Func<T, Vector2> locationResolver)
    {
      if (_userSeeds.Count == 0)
        return;
      
      CleanupEdges();
      int i;

      _minDistanceBetweenSeeds = minDist;

      _graphEdgePool.Reset();
      _internalSeedPool.Reset();
      _pointPool.Reset();

      xmin = float.MaxValue;
      ymin = float.MaxValue;
      xmax = float.MinValue;
      ymax = float.MinValue;

      for (i = 0; i < _userSeeds.Count; i++)
      {
        Vector2 p0 = locationResolver((T)_userSeeds[i].source);
        _userSeeds[i].x = p0.x;
        _userSeeds[i].y = p0.y;
        _userSeeds[i].seednbr = i;
        _userSeeds[i].refcnt = 0;
        if (p0.x < xmin)
          xmin = p0.x;
        if (p0.x > xmax)
          xmax = p0.x;
        if (p0.y < ymin)
          ymin = p0.y;
        if (p0.y > ymax)
          ymax = p0.y;
      }
      _userSeeds.Sort((s1, s2) =>
      {
        if (s1.y < s2.y) return -1;
        if (s1.y > s2.y) return 1;
        if (s1.x < s2.x) return -1;
        if (s1.x > s2.x) return 1;
        return 0;
      });

      _seedIdx = 0;
      _edgePool.Reset();
      nvertices = 0;

      float temp;
      if (minX > maxX)
      {
        temp = minX;
        minX = maxX;
        maxX = temp;
      }
      if (minY > maxY)
      {
        temp = minY;
        minY = maxY;
        maxY = temp;
      }
      _perimeterXmin = minX;
      _perimeterYmin = minY;
      _perimeterXmax = maxX;
      _perimeterYmax = maxY;

      _seedIdx = 0;
      Voronoi();
    }

    private void ELinitialize()
    {
      _halfedgePool.Reset();
      ELleftend = HEcreate(null, 0);
      ELrightend = HEcreate(null, 0);
      ELleftend.ELleft = null;
      ELleftend.ELright = ELrightend;
      ELrightend.ELleft = ELleftend;
      ELrightend.ELright = null;
    }

    Halfedge HEcreate(Edge e, int pm)
    {
      Halfedge answer = _halfedgePool.Allocate();
      answer.ELedge = e;
      answer.ELpm = pm;
      answer.PQnext = null;
      answer.vertex = null;
      answer.ELrefcnt = 0;
      answer.ELleft = null;
      answer.ELright = null;
      answer.ystar = 0;
      return answer;
    }

    void ELinsert(Halfedge lb, Halfedge newHe)
    {
      newHe.ELleft = lb;
      newHe.ELright = lb.ELright;
      lb.ELright.ELleft = newHe;
      lb.ELright = newHe;
    }

    Halfedge ELleftbnd(Point p)
    {
      // No hash table in this version - just search the linear list of
      // halfedges from the right end for the correct one
      Halfedge he = ELrightend;
      do
      {
        he = he.ELleft;
      } while (he != ELleftend && !RightOf(he, p));

      return he;
    }

    // This delete routine can't reclaim node, since pointers from hash table may be present.   
    void ELdelete(Halfedge he)
    {
      he.ELleft.ELright = he.ELright;
      he.ELright.ELleft = he.ELleft;
      he.ELedge = DELETED;
    }

    Seed LeftReg(Halfedge he)
    {
      if (he.ELedge == null)
        return _userSeeds[0];
      return he.ELpm == le ? he.ELedge.reg[le] : he.ELedge.reg[re];
    }

    Seed RightReg(Halfedge he)
    {
      if (he.ELedge == null) //if this halfedge has no edge, return the bottom seed (whatever that is)
        return _userSeeds[0];

      //if the ELpm field is zero, return the seed 0 that this edge bisects, otherwise return seed number 1
      return(he.ELpm == le ? he.ELedge.reg[re] : he.ELedge.reg[le]);
    }

    Edge Bisect(Seed s1, Seed s2)
    {
      Edge newedge = _edgePool.Allocate();

      newedge.edgenbr = 0;
      newedge.a = newedge.b = newedge.c = 0;
      newedge.reg[0] = s1; //store the seeds that this edge is bisecting
      newedge.reg[1] = s2;
      RefBump(s1);
      RefBump(s2);
      newedge.ep[0] = null; //to begin with, there are no endpoints on the bisector - it goes to infinity
      newedge.ep[1] = null;

      float dx = s2.x - s1.x; //get the difference in x dist between the seeds
      float dy = s2.y - s1.y;
      float adx = dx > 0 ? dx : -dx; //make sure that the difference in positive
      float ady = dy > 0 ? dy : -dy;
      newedge.c = (float) (s1.x * dx + s1.y * dy + (dx * dx + dy * dy) * 0.5); //get the slope of the line

      if (adx > ady)
      {
        newedge.a = 1.0f;
        newedge.b = dy / dx;
        newedge.c /= dx; //set formula of line, with x fixed to 1
      }
      else
      {
        newedge.b = 1.0f;
        newedge.a = dx / dy;
        newedge.c /= dy; //set formula of line, with y fixed to 1
      }

      newedge.edgenbr = _edgePool.length - 1;

      return newedge;
    }

    //create a new seed where the HalfEdges el1 and el2 intersect
    Seed Intersect(Halfedge el1, Halfedge el2)
    {
      Edge e1 = el1.ELedge;
      Edge e2 = el2.ELedge;
      if (e1 == null || e2 == null)
        return null;

      //if the two edges bisect the same parent, return null
      if (e1.reg[1] == e2.reg[1])
        return null;

      float d = e1.a * e2.b - e1.b * e2.a;
      if (-1.0e-10 < d && d < 1.0e-10)
        return null;

      float xint = (e1.c * e2.b - e2.c * e1.b) / d;
      float yint = (e2.c * e1.a - e1.c * e2.a) / d;

      Halfedge el;
      Edge e;
      if ((e1.reg[1].y < e2.reg[1].y) ||
          (e1.reg[1].y == e2.reg[1].y &&
           e1.reg[1].x < e2.reg[1].x))
      {
        el = el1;
        e = e1;
      }
      else
      {
        el = el2;
        e = e2;
      }

      bool right_of_seed = xint >= e.reg[1].x;
      if ((right_of_seed && el.ELpm == le) || (!right_of_seed && el.ELpm == re))
        return null;

      //create a new seed at the point of intersection - this is a new vector event waiting to happen
      Seed v = _internalSeedPool.Allocate();
      v.refcnt = 0;
      v.seednbr = 0;
      v.x = xint;
      v.y = yint;
      return v;
    }

    // returns true if p is to the right of halfedge el
    bool RightOf(Halfedge el, Point p)
    {
      Edge e = el.ELedge;
      Seed topSeed = e.reg[1];
      bool right_of_seed = p.x > topSeed.x;
      if (right_of_seed && el.ELpm == le) return true;
      if (!right_of_seed && el.ELpm == re) return false;

      bool above;
      if (e.a == 1.0)
      {
        float dyp = p.y - topSeed.y;
        float dxp = p.x - topSeed.x;
        bool fast = false;
        if ((!right_of_seed & (e.b < 0.0)) | (right_of_seed & (e.b >= 0.0)))
        {
          above = dyp >= e.b * dxp;
          fast = above;
        }
        else
        {
          above = p.x + p.y * e.b > e.c;
          if (e.b < 0.0) above = !above;
          if (!above) fast = true;
        }
        if (!fast)
        {
          var dxs = topSeed.x - (e.reg[0]).x;
          above = e.b * (dxp * dxp - dyp * dyp) <
                  dxs * dyp * (1.0 + 2.0 * dxp / dxs + e.b * e.b);
          if (e.b < 0.0)
            above = !above;
        }
      }
      else // e.b == 1.0
      {
        float yl = e.c - e.a * p.x;
        float t1 = p.y - yl;
        float t2 = p.x - topSeed.x;
        float t3 = yl - topSeed.y;
        above = t1 * t1 > t2 * t2 + t3 * t3;
      }
      return el.ELpm == le ? above : !above;
    }

    void EndPoint(Edge e, int lr, Seed s)
    {
      e.ep[lr] = s;
      RefBump(s);
      if (e.ep[re - lr] == null)
        return;
      ClipLine(e);
      DeRef(e.reg[le]);
      DeRef(e.reg[re]);
      _edgePool.Claim(e);
    }

    float Distance(Point s, Point t)
    {
      float dx = s.x - t.x;
      float dy = s.y - t.y;
      return Mathf.Sqrt(dx * dx + dy * dy);
    }

    void MakeVertex(Seed v)
    {
      v.seednbr = nvertices;
      nvertices += 1;
    }

    void DeRef(Seed v)
    {
      v.refcnt -= 1;
      if (v.refcnt == 0)
        _internalSeedPool.Claim(v);
    }

    void RefBump(Seed v)
    {
      v.refcnt += 1;
    }

    //push the HalfEdge into the ordered linked list of vertices
    void PQinsert(Halfedge he, Seed v, float offset)
    {
      he.vertex = v;
      RefBump(v);
      he.ystar = v.y + offset;
      Halfedge last = PQhash;
      Halfedge next;
      while ((next = last.PQnext) != null &&
             (he.ystar > next.ystar ||
              (he.ystar == next.ystar && v.x > next.vertex.x)))
      {
        last = next;
      }
      he.PQnext = last.PQnext;
      last.PQnext = he;
      PQcount += 1;
    }

    //remove the HalfEdge from the list of vertices 
    void PQdelete(Halfedge he)
    {
      if (he.vertex != null)
      {
        Halfedge last = PQhash;
        while (last.PQnext != he)
          last = last.PQnext;

        last.PQnext = he.PQnext;
        PQcount -= 1;
        DeRef(he.vertex);
        he.vertex = null;
      }
    }

    bool PQempty()
    {
      return PQcount == 0;
    }

    Point PQ_min()
    {
      Point answer = _pointPool.Allocate();

      answer.x = PQhash.PQnext.vertex.x;
      answer.y = PQhash.PQnext.ystar;

      return answer;
    }

    Halfedge PQextractmin()
    {
      Halfedge curr = PQhash.PQnext;
      PQhash.PQnext = curr.PQnext;
      PQcount -= 1;
      return curr;
    }

    void PQinitialize()
    {
      PQcount = 0;
      if (PQhash == null)
      {
        PQhash = new Halfedge();
      }
    }

    void CleanupEdges()
    {
      _allEdges = null;
    }

    void PushGraphEdge(Seed s1, Seed s2, float x1, float y1, float x2, float y2)
    {
      GraphEdge<T> newEdge = _graphEdgePool.Allocate(); // pooled instead of new GraphEdge<T>()
      newEdge.next = _allEdges;
      _allEdges = newEdge;
      newEdge.s1 = (T)s1.source;
      newEdge.s2 = (T)s2.source;
      newEdge.x1 = x1;
      newEdge.y1 = y1;
      newEdge.x2 = x2;
      newEdge.y2 = y2;
    }

    void out_triple(Seed s1, Seed s2, Seed s3)
    {
  //    Debug.Log("OutTriple ()");
    }

    void ClipLine(Edge e)
    {
      if (e.reg[0] == null)
        return;

      float x1 = e.reg[0].x;
      float x2 = e.reg[1].x;
      float y1 = e.reg[0].y;
      float y2 = e.reg[1].y;

      //if the distance between the two points this line was created from is less
      //than the minimum seed distance, then ignore it
      if (Mathf.Sqrt(((x2 - x1) * (x2 - x1)) + ((y2 - y1) * (y2 - y1))) < _minDistanceBetweenSeeds)
        return;

      Seed s1, s2;
      if (e.a == 1.0 && e.b >= 0.0)
      {
        s1 = e.ep[1];
        s2 = e.ep[0];
      }
      else
      {
        s1 = e.ep[0];
        s2 = e.ep[1];
      }

      if (e.a == 1.0)
      {
        y1 = _perimeterYmin;
        if (s1 != null && s1.y > _perimeterYmin)
        {
          y1 = s1.y;
        }
        if (y1 > _perimeterYmax)
        {
          y1 = _perimeterYmax;
        }
        x1 = e.c - e.b * y1;
        y2 = _perimeterYmax;
        if (s2 != null && s2.y < _perimeterYmax)
          y2 = s2.y;
        if (y2 < _perimeterYmin)
          y2 = _perimeterYmin;
        x2 = e.c - e.b * y2;
        if (((x1 > _perimeterXmax) & (x2 > _perimeterXmax)) | ((x1 < _perimeterXmin) & (x2 < _perimeterXmin)))
          return;
        if (x1 > _perimeterXmax)
        {
          x1 = _perimeterXmax;
          y1 = (e.c - x1) / e.b;
        }
        if (x1 < _perimeterXmin)
        {
          x1 = _perimeterXmin;
          y1 = (e.c - x1) / e.b;
        }
        if (x2 > _perimeterXmax)
        {
          x2 = _perimeterXmax;
          y2 = (e.c - x2) / e.b;
        }
        if (x2 < _perimeterXmin)
        {
          x2 = _perimeterXmin;
          y2 = (e.c - x2) / e.b;
        }
      }
      else
      {
        x1 = _perimeterXmin;
        if (s1 != null && s1.x > _perimeterXmin)
          x1 = s1.x;
        if (x1 > _perimeterXmax)
        {
          x1 = _perimeterXmax;
        }
        y1 = e.c - e.a * x1;
        x2 = _perimeterXmax;
        if (s2 != null && s2.x < _perimeterXmax)
          x2 = s2.x;
        if (x2 < _perimeterXmin)
          x2 = _perimeterXmin;
        y2 = e.c - e.a * x2;
        if (((y1 > _perimeterYmax) & (y2 > _perimeterYmax)) | ((y1 < _perimeterYmin) & (y2 < _perimeterYmin)))
          return;
        if (y1 > _perimeterYmax)
        {
          y1 = _perimeterYmax;
          x1 = (e.c - y1) / e.a;
        }
        if (y1 < _perimeterYmin)
        {
          y1 = _perimeterYmin;
          x1 = (e.c - y1) / e.a;
        }
        if (y2 > _perimeterYmax)
        {
          y2 = _perimeterYmax;
          x2 = (e.c - y2) / e.a;
        }
        if (y2 < _perimeterYmin)
        {
          y2 = _perimeterYmin;
          x2 = (e.c - y2) / e.a;
        }
      }

      PushGraphEdge(e.reg[0],e.reg[1], x1, y1, x2, y2);
    }

    private void Voronoi()
    {
      PQinitialize( );
      ELinitialize( );

      NextSeed();
      Seed new_seed = NextSeed();
      
      Point newintstar = null;
      while (true)
      {
        if (!PQempty())
          newintstar = PQ_min();

        //if the lowest seed has a smaller y value than the lowest vector intersection, process the seed otherwise process the vector intersection  
        if (new_seed != null && (PQempty() || new_seed.y < newintstar.y ||
                                 (new_seed.y == newintstar.y && new_seed.x < newintstar.x)))
        {
          /* new seed is smallest - this is a seed event */
          Halfedge lbnd = ELleftbnd(new_seed); //get the first HalfEdge to the LEFT of the new seed
          Halfedge rbnd = lbnd.ELright; //get the first HalfEdge to the RIGHT of the new seed
          Seed bot = RightReg(lbnd); //if this halfedge has no edge, bot = bottom seed (whatever that is)
          Edge e = Bisect(bot, new_seed); //create a new edge that bisects the bottom seed and the new seed
          Halfedge bisector = HEcreate(e, le); //create a new HalfEdge, setting its ELpm field to 0
          ELinsert(lbnd, bisector); //insert this new bisector edge between the left and right vectors in the linked list

          Seed p;
          if ((p = Intersect(lbnd, bisector)) != null) //if the new bisector intersects with the left edge, remove the left edge's vertex and put in the new one
          {
            PQdelete(lbnd);
            PQinsert(lbnd, p, Distance(p, new_seed));
          }

          lbnd = bisector;
          bisector = HEcreate(e, re); //create a new HalfEdge, setting its ELpm field to 1
          ELinsert(lbnd, bisector); //insert the new HE to the right of the original bisector

          if ((p = Intersect(bisector, rbnd)) != null) //if this new bisector intersects with the right edge
          {
            PQinsert(bisector, p, Distance(p, new_seed)); //push the HE into the ordered linked list of vertices
          }

          new_seed = NextSeed();
        }
        else if (!PQempty()) /* intersection is smallest - this is a vector event */
        {
          Halfedge lbnd = PQextractmin(); //pop the HalfEdge with the lowest vector off the ordered list of vectors
          Halfedge llbnd = lbnd.ELleft; //get the HalfEdge to the left of the above HE
          Halfedge rbnd = lbnd.ELright; //get the HalfEdge to the right of the above HE
          Halfedge rrbnd = rbnd.ELright; //get the HalfEdge to the right of the HE to the right of the lowest HE
          Seed bot = LeftReg(lbnd); //get the Seed to the left of the left HE which it bisects
          Seed top = RightReg(rbnd); //get the Seed to the right of the right HE which it bisects

          out_triple(bot, top, RightReg(lbnd)); //output the triple of seeds, stating that a circle goes through them

          Seed v = lbnd.vertex; //get the vertex that caused this event
          MakeVertex(v); //set the vertex number - couldn't do this earlier since we didn't know when it would be processed
          EndPoint(lbnd.ELedge, lbnd.ELpm, v); //set the endpoint of the left HalfEdge to be this vector
          EndPoint(rbnd.ELedge, rbnd.ELpm, v); //set the endpoint of the right HalfEdge to be this vector
          ELdelete(lbnd); //mark the lowest HE for deletion - can't delete yet because there might be pointers to it
          PQdelete(rbnd); //remove all vertex events to do with the right HE
          ELdelete(rbnd); //mark the right HE for deletion
          int pm = le; //set the pm variable to zero

          if (bot.y > top.y) //if the seed to the left of the event is higher than the seed
          {
            //to the right of it, then swap them and set the 'pm' variable to 1
            var temp = bot;
            bot = top;
            top = temp;
            pm = re;
          }
          Edge e = Bisect(bot, top); //create an Edge (or line) that is between the two seeds. This creates
          //the formula of the line, and assigns a line number to it
          Halfedge bisector = HEcreate(e, pm); //create a HE from the Edge 'e', and make it point to that edge with its ELedge field
          ELinsert(llbnd, bisector); //insert the new bisector to the right of the left HE
          EndPoint(e, re - pm, v); //set one endpoint to the new edge to be the vector point 'v'.
          //If the seed to the left of this bisector is higher than the right
          //seed, then this endpoint is put in position 0; otherwise in pos 1
          DeRef(v); //delete the vector 'v'

          //if left HE and the new bisector don't intersect, then delete the left HE, and reinsert it 
          Seed p;
          if ((p = Intersect(llbnd, bisector)) != null)
          {
            PQdelete(llbnd);
            PQinsert(llbnd, p, Distance(p, bot));
          }

          //if right HE and the new bisector don't intersect, then reinsert it 
          if ((p = Intersect(bisector, rrbnd)) != null)
          {
            PQinsert(bisector, p, Distance(p, bot));
          }
        }
        else break;
      }


      for (Halfedge lbnd = ELleftend.ELright; lbnd != ELrightend; lbnd = lbnd.ELright)
      {
        ClipLine(lbnd.ELedge);
      }
    }

    // return a single in-storage seed
    Seed NextSeed()
    {
      Seed s = null;
      if (_seedIdx < _userSeeds.Count)
      {
        s = _userSeeds[_seedIdx];
        _seedIdx += 1;
      }
      return s;
    }
  }
}

License to Code

As a software engineer I don’t often find myself in scary or life threatening situations – at least not as a direct consequence of my career choice.

Not too long ago, however, I was working on a customer project in which some core functions were lifted from a previous version of the same product. I was told to just re-use what was already there – no point in re-inventing the wheel, right?

As it turned out, the old code was not really doing what it should. This was not a revelation per se; pretty much all code has some bug or another if you dig deep enough, but the one I found was sufficiently odd to cause me to dig a bit deeper.

Eventually it led me to this (names have been changed to protect the innocent):

private void calc(int x, int y, String sl, String l)
{
  ArrayList<Obj> list = new ArrayList<Obj>();
  list.add(new Obj(14.29,70,0));
  list.add(new Obj(10.53,95,0));
  list.add(new Obj(8.33,120,0));
  list.add(new Obj(6.90,145,0));
  list.add(new Obj(11.11,70,20));
  list.add(new Obj(8.70,95,20));
  list.add(new Obj(7.14,120,20));
  list.add(new Obj(6.06,145,20));
  list.add(new Obj(10.00,70,30));
  list.add(new Obj(8.00,95,30));
  list.add(new Obj(6.67,120,30));
  list.add(new Obj(5.71,145,30));
  list.add(new Obj(9.09,70,40));
  list.add(new Obj(7.41,95,40));
  list.add(new Obj(6.25,120,40));
  list.add(new Obj(5.41,145,40));
  list.add(new Obj(8.33,70,50));
  list.add(new Obj(6.90,95,50));
  list.add(new Obj(5.88,120,50));
  list.add(new Obj(5.13,145,50));
  list.add(new Obj(7.69,70,60));
  list.add(new Obj(6.45,95,60));
  list.add(new Obj(5.56,120,60));
  list.add(new Obj(4.88,145,60));

  for(Obj obj : list)
  {
    if(obj.getX() == x && obj.getY() == y)
    {
      Double answer = 
        obj.getValue() * Double.parseDouble(sl) * Double.parseDouble(l);
      someFunction(answer);
    }
  }
}

I did not include the source for Obj because it doesn’t really matter, so I’ll spare you the time to review what this does and just spell it out for you (the first constructor argument of each Obj is simply 1000/(x+y), pre-computed and rounded to two decimals):

answer = sl*l*(1000/(x+y))

If someone had presented me with this out of context, I would have assumed it was a joke, but it’s not. It’s actual production code from a real-world application, and it’s just wrong on so many levels that I’m not even going to bother dissecting it.

Instead, I’d like to return to the part about my job not being scary or life threatening.

In this case the application was benign and, although stupid, the code did (almost) what it should (round-off errors aside). But seeing this piece of code, and realising that there are actually people out there who call themselves “programmers” or “engineers” who’d do something like that, it occurred to me how ridiculously dangerous bad programmers are.

It’s not just about wasting time, or being annoying, but who is to say that this person won’t end up working for the bank that looks after my money? Or some company that writes control code for nuclear submarines?

Considering that I’m hardly allowed to install a lightbulb in my own home without a license, it’s downright scary that people are allowed to work on software when they have no clue, let alone a license.

2016 MacBook PRO review

Update 2017/3/14:

I’ve been meaning to append an update to this post for a while, but have been too busy to do so. Today, I was once again reminded about it for reasons I’ll get back to shortly.

So, immediately after the original blog post, I had two major issues:

  1. It had a HW glitch in the keyboard (hubris for being too tough on Dell, I’m sure) – basically, the ‘O’ key only worked one out of three times it was pressed. Apple replaced it overnight, no prob.
  2. Then I was consistently getting a black screen on start-up if I had entered sleep mode with my external monitor attached. Apple tried for a short while to blame my external monitor… That stopped when they learned that it’s an Apple Thunderbolt display. There was another feeble attempt at blaming 3rd party software, so I reset the machine to factory settings and the problem didn’t go away. Finally convinced, they offered to take it back, but I would then have to wait 3 months for a new one because it’s a custom build, so I opted to just have it repaired. One more night in the shop, and everything has been fine since. So: early build, Monday model, lemon, I don’t know, but I guess no matter what you pay, you can never be sure HW won’t have issues. Lesson learned.

In both cases Apple Support has been top notch (save for the external monitor thing, but that was actually kind of funny 🙂 ). When things do go bad, and they do, good customer support is all I really ask for, and Apple’s is second to none. I leave them my number, 30 seconds later they call back and walk me through the various tests and problems until there’s a solution.

HW glitches aside, there are a couple of things that I wrote about originally that I now have a 3+ month perspective on:

Weight and size do matter. I was unsure if the modest reduction would make any difference, but it is noticeably nimbler on an everyday basis. It is also way quieter than the old one, and it does not have the same hotspots around the charging port, for example.

Performance, well, it’s the same story every time you upgrade. It only takes a few days to get used to it, and you then forget what it was like before. The only thing I’ve noticed consistently is that I have far fewer graphics issues in Unity. Before, I would get weird texture glitches every now and then (lack of GPU memory, probably) and had to restart Unity. I haven’t seen that since I got the new laptop.

The USB-C port, much as I predicted, has been a non-issue. Or rather, it has turned out to be more annoying for others because they can’t borrow my charger or my lightning cable, but since that greatly reduces my risk of losing either, I consider it an overall win 🙂

The OLED display however… Well, it’s the reason I finally decided to write this, because it’s such a ridiculous non-feature that I just had to vent. It has no practical purpose whatsoever; it does nothing that I couldn’t do without it. Nothing. But if it was just that, if it was just a party trick, I could try to forget how much it probably added to the BOM and simply ignore it. But that’s exactly the problem: it’s not just useless. It is in fact incredibly annoying on a daily basis.

The bar has no tactile feedback, so even just using it as if it was a row of regular old function keys is a step down compared to a normal keyboard. But it gets worse because of how sensitive the touch is. I usually hover my finger over the F7 and F8 keys when debugging, but now I can’t, because it causes random taps when I don’t want them. And don’t get me started on how many times I’ve been asked if I want to turn Siri on. I do not plan to talk to my laptop. Ever. So you can stop asking, but the damn button is right above the backspace key, so if your finger is just a wee bit off, Siri will think you called. Same story on the other side, where F1 is so easily activated that I get annoying help dialogs all over the place.

Apple, seriously, what were you thinking? Did this “feature” go straight from Ive’s skull directly into production – did nobody actually test it?

If you absolutely had to change the keyboard, how about you fix the goddamn cursor keys. Yes, I would like full size up/down keys also – I’m a grown up, I have grown-up-size fingers.

Yes, I could use an external keyboard, but at this price, I really shouldn’t have to.

Original post follows below:

Me and my laptop

It’s been 4 years since I bought my trusty MacBook PRO retina. That’s easily a year more than I’ve ever had a laptop before, and the first time ever that I was in doubt as to whether it made sense to get a new one.

All the laptops I’ve had before (top-of-the-line Dells and Lenovos, typically) have started getting tired after less than 2 years, making the last year of ownership a real drag. Part of that has been due to Windows bloat and the build-up of crud in the registry and file system in general, so I suspect the MacBook has lived longer not just because of its hardware but also because of OS X, but the hardware is certainly part of it.

(On a side note, the Dells have been notorious for needing hardware replacements; from keyboards losing keys to graphics cards getting bugged colours or weird artefacts, I’ve made extensive use of the next-business-day service. In fact, my wife got a Dell not too long ago and had to replace the HDD after just a few weeks – they might be OK for home computers, but they’re anything but a professional tool, and I’ve had my last Dell for sure.)

Still, 4 years in with the MacBook, I feel like it could easily have done a year or two more, but two things were nudging me towards a new laptop: the 256GB SSD was constantly kissing the 95% mark, forcing frequent cleanups of my temp and trash folders to be able to work (not a bad thing per se 🙂 ), and the graphics card was sometimes struggling a bit with two large displays running Unity, Photoshop and Blender at the same time, causing the fan to be on more than I would have liked.

Enter the 2016 MacBook Pro.

[Photo: the new MacBook Pro]

First of all, let’s get the price out of the way: it’s ridiculous – there is no justification for it, other than Apple charging big bucks because they know they can. For me, as for most people in my situation I would guess, it is, however, also somewhat irrelevant.

It’s my primary work horse – the only tool I own (aside from software licenses), and I buy one every 3 (now 4) years and use it 10 hours every day. Saving $1000 and risking a Dell-esque piece-of-crap hardware experience that needs a new HDD in the middle of the week, costing me a couple of days’ worth of work, just isn’t an option. The fact that it lasts a year or two longer than the competition means I would probably pay even more if needed (don’t tell Apple that, though 😉 ).

So, with that out of the way, what am I getting for all my cash?

It’s obviously lighter and thinner – I only just got it so can’t say if this is anything I’ll really notice, but since I drag it around all the time, less is definitely better. The limited testing I’ve done suggests that it also stays cooler, which was my only real concern with the reduced size. The old one could get seriously hot around the hinge when Unity was pushing polygons at 60FPS so I had a (seemingly unwarranted) suspicion that a smaller form-factor might make it harder to get rid of the heat and thus make the problem worse.

I love the keyboard. It’s a bit click-y-ti-clack where the old one was dead silent, but the feel is just phenomenal. I actually think I type faster and make fewer typos, though that could be a euphoria-induced illusion ;).

The touch pad is even more awesome. I loved the old one, which was leaps beyond anything I’d ever had on any other laptop, yet it pales next to the new one. Everything from the sound it makes to the very subtle, yet rewardingly haptic, feedback when you press it is just best-of-class – nothing I’ve tried on any other laptop (mine or others) is in the same ballpark – hell, they are not even in the same galaxy. It’s so good that I feel a bit bad for throwing $100 at a new Magic Mouse 🙂

The OLED display that replaces the function keys is… Well, it’s a gimmick. A very expensive one, I presume. I really don’t see the point of it, and I seriously doubt I’ll ever use it, but I’m going to try to keep an open mind. Maybe there’s some kind of use for it and I just can’t see it, but this is probably the one thing I would have taken out if I could, just to save a few bucks.

USB-C… I *really* hate that I can no longer use my Thunderbolt display as a charger. This, to me, is the only really annoying “feature” of the new MacBook, but I still think it was the right thing to do. My old MacBook would get quite hot around the magnetic charging port, so I suspect the connection wasn’t all that fantastic, even if it was a great concept, and having a single shared port for everything just seems like the right thing to do. The world probably isn’t ready for it, but it wasn’t ready for tablets either, and that changed quickly enough. Having an adapter for my other legacy peripherals is a minor moan that I suspect will go away faster than I can possibly imagine. So thumbs up for paving the way for a world of fewer odd cables, though I wish I could have bought a mag-charger adapter.

I now have 512 GB of SSD, which should keep me running for the next 4 years (my 256 GB disk has been almost full for 4 years, so it’s not like my needs are skyrocketing, but every new project I start does tend to add a handful of GB, so it’s nice not to have to worry about it).

The new graphics card is fast. The old one was never a problem, but I can quite clearly feel the difference when working in Unity – projects that were just barely managing 60FPS in the editor are now keeping an almost constant 100FPS. I know this isn’t a cardinal point for everyone, but for me, this upgrade alone was worth the price.

It’s pretty much the same CPU I had in the old one, though the memory is faster, so I suspect that a benchmark would tell me that everything runs a tad quicker, but probably not enough to have any real-world impact on my daily use. If I consider a typical day in my life, the amount of time I spend waiting for CPU bottlenecks is probably less than the time I spend getting coffee (Yes, I drink too much coffee, but that can’t be helped). I much prefer a smaller, cooler, less battery hungry machine over one that is 15% faster in those few cases where it actually matters, so CPU was never a concern for me.

Finally, I love that I can have it in space grey 🙂

Ups and Downs of Apple TestFlight

UPDATE #4 (20151109):

Everything was fine for a while – even the built-in “Upload to App Store…” button in XCode worked as it should, but nothing good lasts forever. Recently I’ve been getting the “You’re not authorised to use this service” error, which is of course complete bollocks and not at all related to whatever the problem is.

I know this because using the Export function and uploading the same IPA with the same credentials with Application Loader works most of the time. The only time it doesn’t work is when XCode decides I’m not authorised and refuses to upload, but still feels a need to lock some file that Application Loader also needs access to. Brilliant…

How is this in any way a difficult problem to fix? Take this crazy idea, for example: how about letting the “Upload to App Store…” button do an export and pass the file to the Application Loader?

Yeah I know, crazy idea…

UPDATE #3 (20150902):

Out of necessity I recently re-visited external testing in TestFlight and found that the review process has been greatly simplified, with most (if not all?) of the silly requirements removed and time-to-review down to around a day or so.

Bottom line is that this post is no longer relevant – Apple has stepped up and fixed TestFlight. I could still wonder why it took so long, but I’m just happy it works 🙂

UPDATE #2 (20150604):

That’s a funny coincidence.

Last night all my pre-releases suddenly disappeared from iTunes for a couple of hours, and today – all the builds I’ve done have automatically become active without me having to hit “save” on iTunes Connect.

I’d like to think that Apple actually listens to small developers and that my little rant paid off, but chances are I’m not the only one who complained and it just happened that they fixed it a few days after I finally had enough :).

Anyway: thanks 🙂 Now for the second half of the problem …

UPDATE #1 (20150601):

As I am (once again) waiting for iTunes to update, I was thinking of this post and it occurred to me that this whole sorry mess could be fixed with just 2 simple changes:

1. Provide a way to enter the “what to test” text before uploading so I don’t have to log in to iTunes Connect.
2. Remove the ridiculous review process from the test accounts.

Come on Apple, do something good for your developers for a change?

ORIGINAL POST:

I really don’t have time for this, but seeing as I am waiting for TestFlight anyway, this seems like a good time to vent some frustration.

Last year, Apple acquired TestFlight – the awesome service that let you drop new builds in the hands of your clients in minutes with very little fuss, once it was set up. Which was a bit of a pain (not because of TestFlight, mind you, but because of Apple).

Here is how that worked in my daily life:

  1. Build App
  2. Wait for XCode to complete
  3. TestFlight immediately discovered the new build and opened its wizard
  4. Fill in information, select team, hit publish, done. One single flow with no waiting (Uploading was done in the background)

So… Apple acquired TestFlight, and you would think that they would fix the one issue with TF: the setup. Admittedly, setup is marginally simpler than before, but even this they’ve managed to screw up. Really it’s just stupid in a different way, but I’ll get back to that later.

In any case, that’s not that big a deal. I go through the setup phase once for each client, so I can live with a few bumps on that road. Builds, however, are a different story. Some days I do 5-10 builds for a single client while ironing out odd issues or testing various designs, so the build process is what matters to me.

Here is what that looks like today:

  1. Build
  2. Wait
  3. Open Organizer
  4. Hit Submit To AppStore
  5. Wait
  6. Pick team
  7. Hit Submit (again)
  8. Wait (a very very long time)
  9. Hit Done (Except, we’re not, are we?)
  10. Open iTunes Connect in a browser
  11. Log in
  12. Navigate to the app MyApps->App->Prerelease
  13. Keep hitting “refresh” while you:
  14. WAIT some more for the state to go from Uploaded to Processing to finally appear in the list
  15. Click the build and put in some random crap in the “what to test” field
  16. Save
  17. Hurray, it’s live

You may think I’m making this up, but trust me, this is ridiculous far beyond anything I could have invented.

So let’s return to the registration process. Before, you had a limit of 100 registered devices per developer account, and you had to manually register each of them and make sure you cleared them out yearly so you wouldn’t run out. This was annoying, but I could cope, and though I’ve been in the 60+ registered-devices range, the limit was never critical.

Now you can invite testers by e-mail rather than device ID, which is great. Awesome. The problem: you can have 25 of these.

Twenty-five. It’s so absurd I have no words for it. I’m a small-time developer, and I’m going to break that limit almost immediately. The only way for me to not run into problems is by telling my clients that they have to limit who can test their apps. That’s a great way to start a relationship with a client.

And the fun doesn’t stop there either. These 25 people are “internal testers”, which means they must be created as iTunes Connect users with access to my developer account. Why, Apple? Seriously?

Now, Apple’s “big news” when they revealed this new (broken) TestFlight was that you could have 1000 beta testers for your app – which should have sounded too good to be true. Because it is.

In order to get your app to any of these “external” testers, it must first be reviewed and approved, which means creating icons, assets and descriptions and making sure the app doesn’t tickle Apple in any inappropriate manner. Setting all this up takes hours – time that I have to bill my clients. All for an app that is just a prototype for a very limited audience.

It’s fascinating how a company with unlimited financial resources, massive development teams and complete control over all the involved software (and hardware) components can spend almost a year reducing a perfectly working service to a stinking pile of pooh.

Well done…

SOAK! Released in Denmark and Sweden

Finally…

The first public release of SOAK! is now available in the Danish and Swedish app stores – we decided to do a limited rollout so that we could get a chance to test basic features like App Store and GameCenter integration in the wild, but still on a small scale. The rest of the world follows shortly, and then comes the Android port.

We actually launched the game almost a month ago but immediately found a critical bug that hadn’t shown up in sandbox mode and had to pull it, so, once bitten 🙂

Please do let us know what you think – we have a huge list of things we’re going to add over the coming months, but more on that in due time. For now, we need to start spreading the word…

https://itunes.apple.com/dk/app/soak/id876006257?l=sv&ls=1&mt=8

XCode6 and the mystery of the missing rect

Since “upgrading” to XCode6 I’ve had issues with storyboard-based UIs that used to work but now suddenly start acting really weird – table cells that scale to double the size of the containing table, and buttons that don’t react to input because their parent (non-clipping) views are cut in half.

Everything looks fine and dandy in XCode, but breaks horribly when running the app on both iOS7 and iOS8, on real devices as well as in the simulator.

In two cases I tracked the problem down to missing or incorrectly updated view rects – if you open the storyboard in a text editor (or rename it to .xml and open it in XCode or, even better, AppCode) and locate the view that’s causing problems, there’s a good chance you’ll see a rect with incorrect dimensions, or no rect at all.

The structure should be:

...
<view key="view" contentMode="scaleToFill" id="xMb-eO-e2t">
  <rect key="frame" x="0.0" y="0.0" width="320" height="504"/>
  <autoresizingMask key="autoresizingMask" flexibleMaxX="YES" flexibleMaxY="YES"/>
  <subviews>
...

I.e. a <rect> element right before the <autoresizingMask>.

Correcting or inserting the rect fixes the problems and should be ok as long as you don’t do something silly like, say, edit the storyboard in XCode6’s storyboard editor.
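
If you have more than a couple of storyboards to hand-inspect, a trivial checker can do the squinting for you. Here’s a minimal sketch (my own quick hack, not part of any official tool – the file name and class name are placeholders) that loads a storyboard as plain XML and flags every view that has no frame rect:

using System;
using System.Linq;
using System.Xml.Linq;

class StoryboardRectCheck
{
  static void Main(string[] args)
  {
    // Storyboards are plain XML without namespaces, so we can load them
    // directly. Pass the path as the first argument, e.g. "Main.storyboard".
    XDocument doc = XDocument.Load(args[0]);

    foreach (XElement view in doc.Descendants("view"))
    {
      // A healthy view has a <rect key="frame" .../> child right before
      // its <autoresizingMask>; report any view that doesn't.
      bool hasFrameRect = view.Elements("rect")
        .Any(r => (string)r.Attribute("key") == "frame");

      if (!hasFrameRect)
        Console.WriteLine("Missing frame rect on view " + ((string)view.Attribute("id") ?? "(no id)"));
    }
  }
}

It only catches rects that are missing entirely – rects with wrong dimensions you still have to eyeball – and depending on how the storyboard was saved there may be views that legitimately lack a rect, so treat the output as hints rather than errors.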

But it’s ok. I understand that Apple, with its limited resources, can’t fix every single little insignificant corner-case in XCode when they need to focus their energy on inventing important new and productive programming languages. //sarcasm…

May I just humbly suggest that they’ve got their priorities ass-backwards?

#MadeWithUnity in 7 days

7 Days

7 days after I got up in the middle of the night and – for no particular reason – decided to add my own peculiar contribution to a very long list of color-match puzzle games for mobile devices, I finished and uploaded the release build of Qube Crush 3D.

That’s 7 days (and nights) from initial spark to a completed F2P game with 50 levels of brain bending puzzles uploaded to the App Store.

Just to prove to myself that it wasn’t a fluke or a happy accident, and since I had a week or two to burn while waiting for the AppStore review process to complete, I decided to do another game in the meantime.

This was to be a 2-player arena battle game, and the result is Magic Micro Battle Arena, which I completed in just 5 days – mostly because it has no IAPs, no ads and no highscores, so less integration work and no horrible IAP testing. Ironically, it also got approved faster, because Qube Crush got rejected due to an issue with my AdColony integration.

So that’s two fully functional, playable games completed in two weeks. These are not Gears of War or Doom 8, and their feature lists have been cut to the bone, but anyone with an idea of just how much time the crud of project setup, tool-chain woes, app-store nightmares and 3rd-party integration (not to mention play-testing) usually takes should realize what an amazingly short time-to-market that is.

The rapid development probably means I missed a bug here or there, and I could certainly have polished both of these games for another couple of weeks, or even months – but before I do, I’d like to know that there is at least someone out there who wants to play them ;).

Because my investment is so limited, I can try whatever crazy ideas I come up with, and if one fails, I can move on without bleeding to death.

Unity Rocks!

And that’s really why I’m writing this blog post: because of how this is all possible. Obviously, I *am* a pretty awesome developer (hah! 🙂 ), but truth be told, I owe most of my productivity to the game-dev tools we have at our disposal today, and at the heart of those is (in my case at least) Unity3d.

I can’t say enough good things about Unity – sure, it has its bugs and quirks and annoying issues, but at the end of the day it’s just so insanely productive that I hardly notice.

I run a 4-screen setup with Unity and Blender on a big cinema display, Photoshop on my Cintiq and Mono on my laptop, with either my phone or iPad hooked up for testing – with Unity’s rapid automatic import there is no sense of context switching; it’s as if it’s one big application built up of state-of-the-art parts (minus Mono, which is anything but).

Testing and trying out stuff in Unity is so quick and easy that in two weeks I have not once had to start the debugger. Mind you, if launching the debugger from Mono wasn’t such a nightmare, I’d probably have been able to finish the games even without cancelling my weekends 🙂

So here’s to the guys at Unity: YOU ROCK!

(But please find a replacement for Mono. For those of us who’d rather chop off our left arm than install Windows, Mono is still (if only barely) the preferable option, but I’d really like to see a Unity/JetBrains C# IDE with bullet-proof indexing, a nice coder-friendly editor and proper refactoring that does not break script-to-game-object bindings.)

If you have an iOS device you can get the apps on the AppStore – Qube Crush is free* and Micro Arena is $1 (if you ask nicely, I may have a promo code for you 🙂 ).

https://itunes.apple.com/us/app/magic-micro-battle-arena/id908075215?mt=8

https://itunes.apple.com/us/app/qube-crush-3d/id905380169?mt=8

If you don’t have an iOS device, wait for an Android update, or check out a video of Qube Crush here:

https://everyplay.com/videos/7850308

*) Yes, I know what I’ve previously said about “free apps” – I still think it’s a fundamentally bad idea, but the sad fact of the matter seems to be that as an unknown IP or small game developer you are left with a choice between “free” and, well, nothing at all… So much for not being part of the problem, though.