Customising eWater Source (Part 1): Thinking Beyond Plugins

Posted: 19 November 2013 by Joel Rahman

Plugins seemed like a great idea in 2004.

That’s when the plugin concept was built into the then-fledgling E2 product (which would go on to become eWater Source). In the scheme of things, very little has changed about the way plugins are built since that original 2004 version [1]. Plugins are still created as .NET assemblies, and almost exclusively in C# [2]. In my mind, the most significant change is improved support for data persistence in plugins.

In many respects it is a Good Thing that the plugin mechanism has been so stable and is becoming more popular with Source users.

However, in many cases there are better options than building a plugin: Faster (cheaper!) and more flexible options.

This is the first in a three-part series of articles exploring those options.

In this article, I’ll cover plugins and the Function Manager. Plugins are very powerful, and in some cases a plugin is still the only way to meet a customisation need. However, plugins suffer from a development model better suited to ‘big’ software development than to the small-to-medium-sized initiatives that a customisation project usually involves. Of the alternatives to plugins, the Function Manager [3] is the best understood by the Source community. Many users rely on it to provide custom rules and behaviours in simulations. The Function Manager fills a particular niche, but doesn’t cover many situations where plugins are commonly used: for example, in reporting.

In the second part of this series, I’ll explore some of the options for scripting with Source, including the common approach of writing scripts against the standard-issue Source command line, as well as the less well understood options for scripting the internals of Source using a native .NET language. Many automation problems are well served by writing against the Source command line, and it can be quite simple to automate jobs when the command line is used through a library that handles the low-level details. Scripting the command line is limited for the same reason that it is simple: the command line exposes a deliberately restricted set of operations, such as running a scenario and modifying meta-parameters. Scripting using a native .NET language, on the other hand, allows direct querying and manipulation of any aspect of the modelling system, and is a good option for automating tasks that change the configuration of a Source model, such as bulk manipulation of parameters.

In the third and final article in this series, I’ll introduce Veneer, a new technology, developed by Flow Matters, that provides an alternative customisation option based on standard web technologies. As its name suggests, Veneer is about putting a lightweight layer over an underlying Source model. Veneer itself is a Source plugin, but a plugin that, in many situations, removes the need for writing a new plugin. Rather, the customisation happens in HTML and JavaScript, providing a very rapid turnaround on changes and the ability to draw upon a great many useful libraries from the web world. Veneer also illustrates a further point about customising Source: our options are not set in stone, and it is possible for anyone to create an entirely new customisation platform for Source.

Throughout this series we’ll look at what the various customisation options are best suited to, and at the end we’ll recap with a handy decision tree!

Now, before we go further, let’s review the motivations for customising Source.

Capability and Productivity: What are we customising and why are we doing it?

Users look to customise Source for reasons of increased capability and productivity: increasing the capability of the platform means making the system do something that it can’t do out-of-the-box (for example, something that can’t be achieved by manipulating existing model parameters); improving productivity means simplifying something that is complex or tedious to achieve in the base platform.

When I look at the examples of customisation that I’ve witnessed or been involved with, the following broad categories come to mind:

  • Implementing new model algorithms, such as new rainfall runoff relationships or new demand models,
  • Implementing bespoke system behavioural rules that are needed in a particular river valley, often to model site specific operating rules for a river system,
  • User Interface Customisation to suit a particular group of users, such as adding a user interface tailored to a new model component, or building a decision support layer on top of a base model,
  • Custom Reports and Data Visualisation,
  • Data Handling for importing and exporting new file formats, such as in-house formats,
  • Data Pre-Processing to automate some aspect of configuring a model by using some widely available data, such as calculating sediment generation parameters from spatial data of landscape attributes,
  • Implementing bespoke batch runs, such as running a scenario for multiple input datasets, and
  • Integration with other Systems, such as linking Source to an operational forecasting system.

Plugins are very flexible and could be used for all of these customisation scenarios. Indeed, for a long time, plugins were the only option. Nowadays, the Function Manager is well established as the best way to implement custom system behavioural rules, but for most of the other cases, plugins are still very common. To begin to understand why that’s not always desirable, let’s review the plugin mechanism in Source.

Plugins: The Good With the Bad

Plugins in Source have a lot of power: There isn’t much that they can’t do to the running system.

This is because a plugin is, from the perspective of the running software, no different to a component in the core of the system. A plugin model is implemented in much the same way as a model built by the development team, and the same is true for other types of plugins. At runtime, a plugin becomes a ‘first class citizen’ of the software system.

In a technical sense, a plugin is nothing more than a .NET assembly that contains one or more classes that inherit from (or implement) key classes (or interfaces) in the Source software. Source has many such key classes and interfaces, allowing plugins to slot into a wide variety of locations and usages. I can’t recall the last time I wanted to change some aspect of Source and couldn’t achieve it with a plugin. Sometimes I wouldn’t take the plugin approach, but we’ll come to that.

At a minimum, your plugin class must implement the key interface or the abstract methods of the key class, such as the following (trivial) example of a rainfall runoff plugin:

public class TrivialRunoffModel : RainfallRunoffModel {
    // runTimeStep is left unimplemented by the parent class, RainfallRunoffModel
    public override void runTimeStep() {
        // runoff is a defined output and rainfall a defined input of RainfallRunoffModel
        runoff = 0.8 * rainfall;
    }
}
      

The process of incorporating a plugin into Source is relatively straightforward. In the case of the rainfall runoff model, above, it would go something like:

  1. Compile the C# code to a DLL file
  2. Load the DLL file into Source using the Plugin Manager (following the menus, Tools:Plugins:Plugin Manager)
  3. The next time you are presented with a choice of Rainfall Runoff Models, Source searches all loaded DLLs for classes inheriting from RainfallRunoffModel and, finding TrivialRunoffModel, adds it to the list.
  4. The user then selects TrivialRunoffModel and configures and uses the model as they would any other option.

The TrivialRunoffModel plugin will then participate in any part of the Source system where the built-in models work. In the basic case, the plugin model will get used in the simulation, but the model will also be usable in other tools in Source, such as the runoff calibration tool [4]. For the most part, implementing plugin models for other parts of the system (eg water demand models) requires a similar setup and conveys similar benefits.

It’s also possible to build plugin tools, which slot into the Source graphical user interface and provide some custom front end for either querying or manipulating the Source model. These plugins implement a deceptively simple interface, accepting nothing more than a reference to a Scenario object from the running system. You’d also typically inherit from something like System.Windows.Forms.Form to provide the visual context for the plugin. The following plugin accepts a scenario object and sets the window title to the name of the scenario:

public class ExampleScenarioTool : Form, IRiverSystemPlugin {
    private RiverSystemScenario _scenario;

    public RiverSystemScenario Scenario {
        get { return _scenario; }
        set {
            _scenario = value;
            Text = _scenario.Name;  // window title follows the scenario name
        }
    }
}
      

The simple exchange of the Scenario reference is what gives the plugin so much power.

From the Scenario, the plugin can navigate to, query and manipulate almost anything in the system. This includes changing the model structure, changing parameters and creating or changing functions and data mappings. You can also attach event handling functions to tap into key points in the framework, such as performing custom actions after each simulation run, or before each time step.
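To make that concrete, here’s a rough sketch of the event-handling pattern, building on the ExampleScenarioTool above. The event name used here is hypothetical (invented for illustration, not the actual Source API); the real hooks are best found by browsing the Source assemblies:

public RiverSystemScenario Scenario {
    get { return _scenario; }
    set {
        _scenario = value;
        // "RunCompleted" is a hypothetical event name, used only to show the
        // general pattern of subscribing to a framework event from a plugin.
        _scenario.RunCompleted += (sender, args) => {
            Text = _scenario.Name + " (run finished)";  // custom action after each run
        };
    }
}

The point is simply that, once the plugin holds the Scenario reference, wiring up this kind of behaviour is ordinary .NET event handling.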

So what are the downsides?

For me, there are two categories of issues when it comes to the plugin approach:

  1. Complexity (coupled closely with Stability), and
  2. Development speed.

With the power to access the inner workings of Source comes the issue that those inner workings are complex, and that the internal structures change over time, often breaking existing plugins. However, I’ve found that the biggest problem with plugins is the tediously slow development model of using a compiled language (typically C#) and loading a compiled DLL into Source.

Complexity

In Object-Oriented development, it’s common to talk about public interfaces: the protocol that a class presents to other classes in the system [5]. When we talk about code bases that will be extended by third parties, it is useful to distinguish the public interface from the published interface: the interface that is available to be used by software developed by other teams or organisations. That is, the published interface of Source is the set of classes, methods and properties that are available for third parties to use in plugins. The distinction between public and published interfaces is important, because a public interface can be changed (refactored) without affecting third party code, whereas changes to a published interface have implications beyond the core development team.

In the case of Source, the published interface and the public interface are the same: You can create a plugin that can see and manipulate any part of the system as if it were a core part of the system. Source has an extremely wide published interface.

This wide published interface means there is a lot to learn. By contrast, some of the alternatives to plugins (such as the Function Manager) provide an interface that is narrower (and hence, less powerful), but considerably easier to learn and more tailored to a particular task.

The wide published interface also means that almost any change to a public class interface in Source has the potential to break an existing, third party plugin. It is near impossible for the eWater team to evolve Source without risking breaking existing plugins (particularly plugins they don’t know about!).

Should Source plugin developers be forced to work to a narrower published interface? That would provide a more stable base for building plugins, but at the expense of power.

Personally, I like the fact that Source plugins have so much power, but I look for other approaches in the very many situations where that power is not required. These alternatives also let us avoid the other great issue with plugins: development speed.

Development Speed

Developing plugins is slow.

You might not relate to that statement if you’ve been involved in plugin development and compare it to some other approaches. Yes - building a plugin rainfall runoff model and slotting it into Source is going to be quicker than replicating all the surrounding features of Source in a new environment, such as routing, supply and demand modelling and all the user interface aspects. Yes - compared to starting from scratch, writing a plugin for Source is fast.

But customising Source with plugins is much slower than it should be.

Why? It’s because of the DLLs. The smallest change to a C# plugin requires recompiling the code to a new version of the DLL. That, in itself, takes a short while [6], but given that most plugin testing seems to be done with a live version of Source, there is a major consequence: you need to shut down Source in order to release the file lock on the DLL before you can recompile, then you need to restart Source with the new DLL, and reload or recreate your test data.

With the need to stop and restart Source, the smallest change leads to a development cycle of a few minutes. It should be a small handful of seconds. This difference adds up in a big way and discourages you from making small, incremental changes.

You can mitigate this issue somewhat if you can test your plugin outside of Source itself. Automated test frameworks, such as NUnit, are obvious examples if you can easily replicate the environment your plugin requires. Source also ships with a graphical tool for testing model components (such as the TrivialRunoffModel example): TIME.VisualTIME.exe in the Source installation directory. VisualTIME still needs to be stopped and restarted for recompiling models, but it’s easy to set up and offers a way to quickly test a model outside of the broader river network context. VisualTIME can improve your productivity when developing new model components as Source plugins, and this type of development is one of two areas where I still see plugins as the most appropriate approach to customising Source.
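To illustrate the automated-test route, here’s a minimal NUnit sketch for the TrivialRunoffModel above. It assumes the rainfall and runoff members are publicly readable and settable on the model instance, and that a single time step can be run without further initialisation; neither assumption is guaranteed for more realistic models:

using NUnit.Framework;

[TestFixture]
public class TrivialRunoffModelTests {
    [Test]
    public void RunoffIsEightyPercentOfRainfall() {
        var model = new TrivialRunoffModel();
        model.rainfall = 10.0;                     // drive the model input directly
        model.runTimeStep();                       // execute a single time step
        Assert.AreEqual(8.0, model.runoff, 1e-9);  // expect 80% of rainfall as runoff
    }
}

A test like this runs in seconds from an IDE or build script, with no need to start Source at all.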

The place for plugins

The downsides that I’ve covered here aren’t fatal flaws of the plugin approach: Indeed they’d matter little if there weren’t better alternatives. But better alternatives there are: for most things at least.

The two cases where I would still look first to plugins are:

  1. New model algorithms (at least those that can’t be easily implemented in the Function Manager), and
  2. Tools that need to query or manipulate the model structure and that require a graphical user interface.

My second suggested application of plugins requires two conditions to be met:

  • You have a need to query or manipulate the model structure (as opposed to parameters, inputs or results), and
  • Your users need that functionality delivered in a graphical interface.

I think this is a relatively rare combination, but it’s still valid. As we’ll see in the following articles, there are much better alternatives if only one of the conditions is met:

  • If you need to access the model structure, but don’t need a graphical interface, then a scripting solution can be much easier to develop and test, and
  • If you need a new graphical user interface that is going to access things like parameters, inputs, results and even basic structural information (network, catchments, functions), then the web based approach of Veneer provides much quicker development and, quite likely, a better end result.

Incorporating new model algorithms was the original motivation for plugins in 2004, and in 2013 I think it’s again the main use case for creating plugins. Many algorithms can be implemented through the Function Manager, and if they can be realised that way, then they probably should be, at least in the first instance. Plugins are useful where you want to use the same algorithm in multiple places in a single model, or to share the algorithm across models. By contrast, Functions and the Function Manager are well suited to truly bespoke problems, such as operating rules required for a single reservoir.

Functions

‘Functions’ are the second main customisation option available out-of-the-box with Source, and one that eWater promotes [7].

The Function Manager allows end users to construct simple, Excel-like formulae and attach these formulae to input and parameter variables within Source. By default, functions can be applied to any input that would usually accept a time series, as well as to a range of other parameters that can sensibly be modified at runtime.

The Functions are built in a simple expression language of basic operators and calls to functions (little-f functions: eg sin, cos or if). Individual elements can be either literal constants (eg 5.0), or references to variables. Variables are either other Functions (big-F functions!), values retrieved from the Source model (eg flow at a particular point), or values looked up from some data source (a time series, a temporally recurring pattern or a piecewise-linear function).
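To make that concrete, here’s an illustrative expression (the variable names are invented for this example; in a real model each would be defined in the Function Manager and linked to a model value or a data source):

if($UpstreamFlow > 20, 0.8 * $StorageVolume, 5.0)

Here $UpstreamFlow and $StorageVolume stand in for variables, 20 and 5.0 are literal constants, and if is one of the built-in little-f functions.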

The Function Manager is tailored to the task of imposing small behavioural changes on the system: small in terms of effort, but potentially substantial in terms of model results. The Function Manager offers a fast development cycle as changes are made directly in the Source GUI and the results of a change are visible in the next model run. The syntax of Functions simplifies the task of accessing common data types, such as time series and recurring patterns, including the ability to easily aggregate through time over arbitrary periods (eg sum of the last 7 days).

Perhaps most importantly, the Function Manager makes it easy to reference variables from anywhere in the model. This caters, for example, for a reservoir operating rule that depends on storage levels and flows at multiple points in the river network. This is very difficult to emulate in a plugin model. Plugins are better suited to being slotted in as replacement components where the inputs to and outputs from the component are either known and standard (eg inflow from upstream, outflow to downstream) or are altogether new (eg creating a new time series result).

This ability for a Function to reference any part of the model also points towards the main strength that plugins hold over Functions: Plugin models can be used in multiple locations because Source itself is able to connect the plugin to appropriate inputs and outputs, whereas Functions need to be manually copied to each usage location and then manually connected to all the required inputs and outputs. In a case where a Function would be used many times, such as in each link, this leads to a great deal of duplication and manual work: Each use of the Function would itself be a copy of the Function, and each copy would require copies of each of the input variables.

Functions are an extremely important part of Source usage. Functions give end users (who might not consider themselves to be programmers) the ability to change system behaviour. Functions have a good development model, with fast turnaround on changes. Functions make it easy to implement bespoke rules.

Functions very effectively fill the niche of implementing many algorithmic changes to Source. It’s time to move on to similarly productive approaches for the other customisations we want to create.

Coming Up

Plugins are very powerful, but they won’t always be the simplest, quickest or indeed the best way to implement a customisation for Source. Functions demonstrate a better approach for many algorithmic changes.

In part 2 of this series we’ll explore some of the scripting options for Source. Scripting is great for automating multiple model runs, using the supplied command line version of Source. However, scripting using a .NET language is also a simple and powerful way to automate many of the types of bulk model queries and transformations that would historically lead people to building a plugin.

Scripting solutions won’t suit all end-users however, so in part 3 we’ll explore Veneer: a new technology from Flow Matters that makes it much easier and quicker to build high quality reports, visualisations and decision support front ends for Source.


  1. Actually, the very simple plugin mechanism was built even earlier, in 2002 as part of the TIME framework.

  2. The internal workings haven’t changed much either. Plugin assemblies contain plugin classes, which inherit from key classes in the framework and are possibly marked up with the WorksWith metadata: Much as was the case in the earliest publications of TIME and E2.

  3. Formerly the Expression Editor

  4. Although the result would be less than spectacular, as the TrivialRunoffModel has no parameters to calibrate!

  5. As opposed to the interface presented to close relatives, which tends to vary by language and includes things like the protected and internal keywords in C#.

  6. Not long perhaps, but it can still be enough to break concentration, which kills productivity.

  7. Importantly, I’ll keep the capitalisation on ‘Functions’ to refer to the Source feature of that name, rather than the broader concept of a function in either software development or mathematics!
