Tag Archives: code

Creating a searchable and sortable list of draft-eligible CHL players using Python and AngularJS

Looking at past NHL Entry Drafts, it is evident that a large number of players selected in this event come from the Canadian Hockey League (CHL), namely its three constituent leagues: the Quebec Major Junior Hockey League (QMJHL), the Ontario Hockey League (OHL), and the Western Hockey League (WHL). A quick analysis of the last six drafts shows that more than 46% of the selected players had at that time been playing in one of the three major junior leagues. Even though this percentage used to be higher, as other regions and leagues have become and are becoming more important providers of NHL talent, it is safe to assume that this situation won’t change in the foreseeable future.

For the devoted fan of an NHL team this is one additional reason to follow the CHL more closely, something that – if you’re not able to attend games in person – has become much more feasible with the online dissemination of game scores and player statistics. Yet while it is possible to regularly visit each league’s website to retrieve this information, I have found it unexpectedly hard to keep track of which players have already been drafted by an NHL team (as it is very common for these to return to junior for one or more seasons) or are still too young to be selected in the upcoming draft. As I am not aware of any list that compiles all candidates eligible for the upcoming NHL Entry Draft, I have decided to create such a list myself. The result is already available; below I will outline the work it took to get there.

Technologically the process introduced consists of two individual operations:

  1. On the back end we have to retrieve the data from the aforementioned websites according to a number of criteria and finally create a suitable compilation containing all desired players and accompanying information. In the example presented here we have a Python script implementation providing the collected data in a JSON file.
  2. On the front end we want to provide means for searching and sorting the compiled data presented on a corresponding website of our own. This is done using the AngularJS framework which enhances regular HTML for dynamic content display.

Back end data retrieval

Let’s start by having a look at the back end. The general workflow for the data retrieval is made up of three working steps. First we are going to retrieve a list of all teams playing in each of the concerned leagues. For each team we will then fetch roster information, i.e. all players associated with the given team. By doing so we are going to register basic information about each player, e.g. height, weight and position, but also age and NHL draft status, allowing us to select only draft-eligible individuals. In a last step we will then retrieve up-to-date player statistics, which will finally be represented in the compiled list, itself made available as a JSON file.

The complete process is implemented in a Python script that has been made available in the Portolan GitHub repository. Here we are going to shed light on a few selected aspects of it.

To temporarily hold information I have learned to appreciate named tuples as they have been introduced in Python’s collections module with version 2.6. If you don’t need the flexibility and mutability of real objects but still want to have your data well structured and easily accessible, named tuples should be your first choice. Following are the definitions that have been made to keep information about teams, players and player statistics:
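The original definitions live in the linked repository; a minimal sketch with illustrative field names could look like this:

```python
from collections import namedtuple

# lightweight containers for scraped data; field names are illustrative
Team = namedtuple('Team', ['id', 'name', 'league', 'url'])
Player = namedtuple('Player', ['id', 'first_name', 'last_name', 'team',
                               'position', 'height', 'weight', 'dob',
                               'draft_day_age', 'is_overager'])
Statline = namedtuple('Statline', ['player_id', 'games_played', 'goals',
                                   'assists', 'points', 'plus_minus'])
```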

The main criterion to differentiate between players that are draft-eligible and those that are not is age. The exact rule is laid out in the NHL’s Hockey Operations Guidelines; for the upcoming draft it boils down to the following concrete dates (note that the cutoff date does not correspond with the draft date itself, hence the separate definition):
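A sketch of such definitions, assuming the 2016 draft and using dateutil’s parser for readability (the variable names are my own):

```python
from dateutil.parser import parse

# the draft weekend itself and the cutoff date by which a player has to
# have turned 18 to be eligible (values assuming the 2016 draft)
DRAFT_DATE = parse("Jun 24, 2016").date()
CUTOFF_DATE = parse("Sep 15, 2016").date()
```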

(Obviously, I am a friend of the dateutil module and you should be, too.)

As with a lot of current websites, the ones of the three leagues in question no longer present their data in regular HTML markup but in associated data streams, usually formatted in JSON notation. In our case this holds true for team overviews, roster summaries and even team player statistics. Whilst it is somewhat awkward to find these links in the first place, it is actually quite convenient for scraping as the data is already well-structured and therefore easily accessible. Hence we’re defining lookup dictionaries for each league and dataset type (see the source code for actual values):
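The structure could look like the following sketch; the URLs below are placeholders, the real endpoints are kept in the script on GitHub:

```python
# placeholder endpoints per league and dataset type
TEAM_OVERVIEW_URLS = {
    'QMJHL': 'http://example.com/qmjhl/teams.json',
    'OHL': 'http://example.com/ohl/teams.json',
    'WHL': 'http://example.com/whl/teams.json',
}
ROSTER_URLS = {
    'QMJHL': 'http://example.com/qmjhl/rosters/%d.json',
    'OHL': 'http://example.com/ohl/rosters/%d.json',
    'WHL': 'http://example.com/whl/rosters/%d.json',
}
STATS_URLS = {
    'QMJHL': 'http://example.com/qmjhl/stats/%d.json',
    'OHL': 'http://example.com/ohl/stats/%d.json',
    'WHL': 'http://example.com/whl/stats/%d.json',
}
```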

The retrieval itself is actually quite straightforward and follows the workflow outlined above:
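In condensed form, and reusing the placeholder endpoints and dates from above (the JSON key names used here are likewise assumptions, not the leagues’ actual field names), the retrieval loop could be sketched as follows:

```python
import json

import requests
from dateutil.relativedelta import relativedelta

# latest admissible date of birth: a player has to turn 18 by the cutoff date
LATEST_DOB = CUTOFF_DATE - relativedelta(years=18)

eligible_players = []

for league in ('QMJHL', 'OHL', 'WHL'):
    teams = requests.get(TEAM_OVERVIEW_URLS[league]).json()
    for team in teams:
        roster = requests.get(ROSTER_URLS[league] % team['id']).json()
        stats = requests.get(STATS_URLS[league] % team['id']).json()
        for player in roster:
            # skip players that have already been drafted or are too young;
            # key names are placeholders, birthdates assumed ISO-formatted
            if player['nhl_team'] or player['birthdate'] > LATEST_DOB.isoformat():
                continue
            entry = dict(player)
            # merge the player's current statistics into the output record
            entry.update(stats.get(str(player['id']), {}))
            eligible_players.append(entry)

with open('junior.json', 'w') as out:
    json.dump(eligible_players, out, indent=2)
```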

For the implementation of the functions used, again see the actual source code over at GitHub.

Finally we have a JSON file with all draft-eligible skaters from the major junior leagues looking like this:
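An illustrative (not actual) entry might look like this; the two fields discussed below appear here as draft_day_age and is_overager:

```json
{
    "last_name": "Sample",
    "first_name": "Player",
    "league": "WHL",
    "team": "Sampletown Juniors",
    "position": "D",
    "games_played": 58,
    "goals": 11,
    "assists": 32,
    "points": 43,
    "draft_day_age": 18.21,
    "is_overager": false
}
```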

Please note in particular the fields showing the actual age of the player on draft day and the boolean variable indicating whether the current player is considered an overager, i.e. whether he could already have been drafted in the previous draft.

Front end data display

The collected data can now be displayed in tabular form. Whilst using regular HTML is perfectly viable to achieve this task, the user can easily be enabled to search, filter and sort the data comfortably by utilizing AngularJS, a JavaScript framework that extends traditional HTML to allow for dynamic information display. Angular builds on the model-view-controller architecture – and it’s not my business to introduce here what has been explained much better somewhere else (for example at the Chrome Developer Page). An important feature of Angular is directives, basically additional HTML attributes that extend the behavior of the corresponding tag. These directives can usually be recognized easily as they start with the prefix ‘ng-’. Always striving to create valid HTML, I will further prefix the directives with ‘data-’, as described as best practice in the AngularJS docs. Otherwise being fairly new to Angular, I have based my work on an example presented at scotch.io.

The solution I have come up with consists of three parts:

  1. An HTML page outlining the basic layout of our page (junior.html).
  2. A JavaScript file containing the logic of our solution (junior.js).
  3. The actual data – this is the JSON file that has been produced by the back end (junior.json).

A very basic version of junior.js could look like the following snippet. We just create an application called showSortJuniorApp and define the main controller. Within this controller there is just one function. It reads the JSON data file and saves its contents within the scope.

Now let’s have a look at a basic version of the accompanying junior.html. After importing CSS definitions from Bootstrap and the AngularJS source from Google, we finally include our very own JavaScript file. In the body part (among a few other things) a container element is linked with the app we created above (using the ng-app and ng-controller directives) and a table is defined and populated via the ng-repeat directive. The latter basically provides a loop over the array we loaded in our junior.js and creates a new table row for each element.

Now how to allow for sortable columns? This can be achieved quite easily. First we define a default sort criterion and order in junior.js:

We may then modify the ng-repeat directive in junior.html to make the whole table sort by points in descending order as the default configuration:

To create clickable column headings allowing for varying sort criteria, a corresponding HTML tag and the ng-click directive have to be added to each header cell of the table in junior.html:

Here we set a descending sort order on the games played column. However, to allow for sorting in both directions we can set the variable to take on its negated value; see the example for the draft day age column above. This configuration will change the sort order every time the column heading is clicked.

Finally we would like a search function allowing for the filtering of last names and a checkbox to hide overagers. To do so we first have to add a suitable form to the HTML:

After adding some variables and a short filter function to junior.js…

… we can complete the ng-repeat directive for our tabular data in the following manner:

After a few more modifications to the HTML and the JavaScript code, the final version of our front end data display also includes the ability to switch table contents between basic statistics (as used above), player information (such as height, weight, etc.) and additional information (i.e. special team stats). You may refer to the GitHub repository to review the most recent version of this solution.

Speeding up an ArcPy script using the Python multiprocessing module

ArcPy is ESRI’s interface to scripted geoprocessing capabilities. While I personally am much more inclined to use the open-source tool stack of OGR/GDAL, I am stuck with ESRI, ArcGIS and ArcPy at my current professional position. One of my more recent projects required analyzing a number of point features with regard to nearby line features; in detail, the line feature with the smallest distance to each point feature had to be retrieved.

A corresponding algorithm could look like this:

  1. Select a point feature.
  2. Create a buffer around the feature using a base size.
  3. Collect all line features that are situated within the selected buffer.
  4. If no line features were found, continually increase the buffer size and repeat step 3 until one or more line features have been found.
  5. From the collected line features identify the one with the smallest distance to the selected point feature.

The implementation of this process is straightforward.
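A minimal sketch of such a loop could look like the following; the paths, the base buffer size and the growth factor are illustrative assumptions:

```python
import arcpy

POINT_FC = r"C:\data\example.gdb\points"   # hypothetical feature classes
LINE_FC = r"C:\data\example.gdb\lines"
BASE_BUFFER_SIZE = 50.0                    # map units, illustrative

# cache line geometries once to avoid re-reading them for every point
lines = [(oid, geom) for oid, geom in
         arcpy.da.SearchCursor(LINE_FC, ["OID@", "SHAPE@"])]

nearest_lines = {}

for point_oid, point_geom in arcpy.da.SearchCursor(POINT_FC, ["OID@", "SHAPE@"]):
    buffer_size = BASE_BUFFER_SIZE
    candidates = []
    # grow the buffer (here simply doubled) until at least one line is caught
    while not candidates:
        buffer_geom = point_geom.buffer(buffer_size)
        candidates = [(oid, geom) for oid, geom in lines
                      if not geom.disjoint(buffer_geom)]
        buffer_size *= 2
    # from all candidates keep the one closest to the current point
    nearest_lines[point_oid] = min(
        candidates, key=lambda item: point_geom.distanceTo(item[1]))[0]
```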

The problem with this solution is that, depending on the number of point objects to analyze, it can be painfully inefficient as we process one item at a time. This may not be problematic if the overall count of items is small, but real-world data tends not to be. In my case several thousand objects had to be regarded, leading to a processing time of multiple hours. Whilst optimization is something that should not be done prematurely, I regarded this situation as definitely worth looking into in more detail.

A very common approach to optimize a software solution is to split the complete set of tasks into smaller subsets and to process several of those subsets simultaneously. This approach has become particularly viable with the appearance of multi-core hardware that seems predestined to be used in this manner. From a conceptual viewpoint it is necessary that the individual problems do not interfere with each other. This is clearly the case in the described setup, as the minimum distance between a point and a number of line features is obviously independent of any other point feature.

Regarding the point features at hand we will create subsets by splitting the complete dataset into parts that are not larger than a certain cutoff size. A question posted over here presents a solution that already fits my demands:
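The recipe boils down to a small generator function along these lines:

```python
def chunks(values, size):
    """Yield successive chunks of at most *size* elements from *values*."""
    for i in range(0, len(values), size):
        yield values[i:i + size]
```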

This generic function may now be applied to the actual set of input features:
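Applied to the point feature class this could look as follows (the chunk size is an arbitrary example value):

```python
CHUNK_SIZE = 500   # illustrative cutoff size

# collect all available object IDs of the point layer and split them up
oids = sorted(row[0] for row in arcpy.da.SearchCursor(POINT_FC, ["OID@"]))
oid_chunks = list(chunks(oids, CHUNK_SIZE))
```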

Please note that it is not sufficient to simply retrieve minimum and maximum object IDs from the layer at hand, as previous edits or selections may have created gaps between those values. We actually have to collect all available OIDs instead.

In Python, the multiprocessing module (among others) comprises several options to implement parallel solutions. One is to set up a pool of worker processes that are subsequently equipped with the according tasks. Here we will give a number of workers the task of retrieving the nearest lines for one of the previously created ranges of point objects. To do so we have to collect all the necessary data first, i.e. the sources for point and line layers as well as the range of OIDs:
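Continuing the sketch from above, this amounts to bundling the layer sources and one OID chunk per work item:

```python
# bundle everything one worker needs into a single object per chunk
all_input_data = []
for oid_chunk in oid_chunks:
    all_input_data.append([POINT_FC, LINE_FC, oid_chunk])
```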

As you can see, we have combined all necessary input data for one chunk into a list object before appending it to an overall list containing the input data for all chunks. This is by design, as mapping worker processes to a function only allows for a single argument. That is also why we have to adjust the definition of our worker function:
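A sketch of such a worker function; find_nearest_line is a hypothetical helper standing in for the buffer-and-compare search shown further above:

```python
def retrieve_nearest_lines(input_data):
    """Worker function receiving all of its parameters as a single object."""
    point_src, line_src, oid_chunk = input_data
    where_clause = '"OBJECTID" IN ({0})'.format(
        ", ".join(str(oid) for oid in oid_chunk))
    results = {}
    for point_oid, point_geom in arcpy.da.SearchCursor(
            point_src, ["OID@", "SHAPE@"], where_clause):
        # hypothetical helper encapsulating the nearest-line search
        results[point_oid] = find_nearest_line(point_geom, line_src)
    return results
```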

Finally we create a worker pool and assign each worker a task by mapping it to the function for nearest line retrieval and providing it with the necessary input data:
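A minimal sketch, with the number of worker processes chosen as the CPU count (an assumption on my part):

```python
import multiprocessing

if __name__ == '__main__':
    # no initializer, no initializer arguments, and a maximum of one task
    # per worker process before it is replaced
    pool = multiprocessing.Pool(multiprocessing.cpu_count(), None, None, 1)
    results = pool.map(retrieve_nearest_lines, all_input_data)
    pool.close()
    pool.join()
```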

The Pool constructor allows for several arguments. One may define an initialization function that each worker will carry out after being created; this function may of course be provided with initialization arguments. Both of these arguments are set to None in the case at hand. More interesting is the last argument, which is set to 1. This is the maximum number of tasks that will be processed by a single worker process before it is killed and a new one is spawned. As ArcPy functions are notorious for leaking memory, it is especially necessary to keep this number low or we may end up with exceptions due to low memory.

Implementing multiprocessing for my use case has decreased processing time to a quarter of the previous duration. It is still in the range of hours, but at least considerably less than in the linear version of this program.

We will have a look at how analysis tasks like the one described above may be sped up by applying low-level database queries in a later post on the Portolan Blog.

An NHL hockey database using Python, PostgreSQL and SQLAlchemy (Pt. 1): Teams and Divisions

Being an avid hockey follower and fan of the National Hockey League’s Toronto Maple Leafs for more than twenty years, I have observed the recent surge of approaches to tackle the sport and its peculiarities with analytical methods with great interest. Of course as a foundation of any data analysis there has to be some data first, preferably in a standardized and suitable environment such as a relational database management system. My very personal endeavors to retrieve, structure, store and analyze hockey-related data will be the subject of a larger series of posts that I am starting today. The series title already lays out the scope within which this undertaking will be conducted. I will be looking at the National Hockey League as it provides (with all of its shortcomings) the most detailed data available in this regard. Python as programming language, PostgreSQL as database system and SQLAlchemy as SQL and mapping framework connecting the two will serve as my technology stack. But let’s get it on already.

As the first part of this series I will be looking at NHL teams and divisions. This is fitting for the beginning as the accompanying data is not very complex and quite static as well, i.e. it changes with very low frequency. We may provide a simple database entity, i.e. table, containing basic team data using the following SQL data definition statement:

As you can see, this table design accounts for the fact that from time to time franchises move around or change their names. In case a certain team is the current incarnation of a given franchise, both became_team and end_year contain NULL values, thus marking it as active.

To whom it may concern: Differentiating between the various abbreviation types is necessary due to the NHL using different ways of shortening a team name on game reports (N.J, L.A,…) and in URLs (NJD, LAK,…). Additionally I wasn’t happy with some of their choices for abbreviations, so I created my very own column.

Update (February 2016): With the current update of the NHL’s website the previous system of using abbreviations for team identification was abolished and actual team (and franchise) ids have been introduced. The according changes in the data model are marked above. As far as I can tell, the url_abbr column should no longer be necessary; I will keep it for safety and nostalgic reasons though.

After setting up the table data insertion may be conducted easily by preparing and applying the according SQL insert statements. Both nhl_id and franchise_id values have been retrieved from the NHL’s website.

A table containing NHL divisions may be set up using the following definition:

Here a particular property of PostgreSQL is visible: the ability to store multidimensional arrays of varying length in a database column. In this case the teams column will contain the IDs of the teams that make up a certain division at a certain time. Unfortunately there is currently no support for referential integrity between the elements of an array (here: team IDs in column teams of table nhl_divisions) and the rows of another table (column team_id in table nhl_teams). Although attempts have been made to introduce this feature in more than just one of the more recent releases, it hasn’t become part of the code base yet.

We will deal with inserting data into this table later on.

Subsequently an (updated) ERD of the current database configuration looks like the following:

nhl_teams_nhl_divisions_update

Using SQLAlchemy it is now very easy to map these tables to Python classes. While it is perfectly viable to use SQLAlchemy magic to set up the table structures from Python, I personally like to go the other way around: having the tables already defined in the database system and mapping them to Python by metadata reflection. All that is necessary to do so is for the mapped class to inherit from the SQLAlchemy Base class and to carry the two special attributes __tablename__ and __autoload__.

We may now add a way to search for team objects in the database, i.e. by implementing corresponding class methods to query the database using an abbreviation or team id. Additionally we will add means to make teams comparable and sortable by using Python’s comparison methods.
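Putting the reflection and the lookup and comparison helpers together, a condensed sketch could look like the following. Note that it uses the stock autoload arguments rather than the __autoload__ attribute mentioned above (which depends on the exact base class setup), and that the connection string as well as column names such as abbr and name are assumptions; the authoritative definitions live in the GitHub repository.

```python
from sqlalchemy import create_engine
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import sessionmaker

# placeholder connection string
engine = create_engine('postgresql://user:password@localhost/nhl')
Base = declarative_base()
Session = sessionmaker(bind=engine)


class NHLTeam(Base):
    # table structure is reflected from the database, not declared here
    __tablename__ = 'nhl_teams'
    __table_args__ = {'autoload': True, 'autoload_with': engine}

    @classmethod
    def find(cls, abbr):
        """Find a team by its (custom) abbreviation."""
        session = Session()
        try:
            return session.query(cls).filter(cls.abbr == abbr).first()
        finally:
            session.close()

    @classmethod
    def find_by_id(cls, team_id):
        """Find a team by its team id."""
        session = Session()
        try:
            return session.query(cls).filter(cls.team_id == team_id).first()
        finally:
            session.close()

    def __str__(self):
        return self.name

    # make team objects comparable and sortable by name
    def __eq__(self, other):
        return self.name == other.name

    def __ne__(self, other):
        return not self == other

    def __lt__(self, other):
        return self.name < other.name

    def __gt__(self, other):
        return self.name > other.name
```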

Please note that this use of Session is actually not recommended; a scoped session should be your choice under production circumstances. The simpler approach has been used here for the sake of clarity.

It is now possible to find an NHLTeam object by specifying its abbreviation:

This gives you:
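Combining the call and its output into one hypothetical round trip (the abbreviation and the printed representation depend on the actual data and on how __str__ is defined):

```python
team = NHLTeam.find('TOR')
print(team)
# hypothetical output:
# Toronto Maple Leafs
```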

Similarly we may map the database table containing NHL Division information with an according Python class. As we’re creating division data from Python as well, we’re including a real constructor this time:
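A sketch of such a mapping; the column names (name, season, teams) are again assumptions, with teams mirroring the PostgreSQL array column discussed above:

```python
class NHLDivision(Base):
    __tablename__ = 'nhl_divisions'
    __table_args__ = {'autoload': True, 'autoload_with': engine}

    def __init__(self, name, season, teams):
        self.name = name
        self.season = season
        # store the division members as an array of team ids
        self.teams = [t.team_id for t in teams]
```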

Given definitions and mappings for both team and division objects and having some source data ready, it is now convenient to insert division data using a special helper method:
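Such a helper might look like the sketch below; the semicolon-separated source format is purely an assumption for illustration:

```python
def create_divisions(division_src_file):
    """Read division definitions from a text file and persist them."""
    session = Session()
    try:
        for line in open(division_src_file):
            # assumed line format: season;division name;abbr,abbr,abbr,...
            season, name, team_abbrs = line.strip().split(';')
            teams = [NHLTeam.find(abbr) for abbr in team_abbrs.split(',')]
            session.add(NHLDivision(name, int(season), teams))
        session.commit()
    finally:
        session.close()
```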

Now if we finally want to determine the league configuration for a given season, we can do so by linking division data with the year in question, providing the NHLDivision object definition with the following class method:
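As a fragment to be added inside the NHLDivision class sketched above (method and column names are again assumptions):

```python
    @classmethod
    def get_divisions_and_teams(cls, year):
        """Return a mapping of division names to team objects for the
        season associated with *year*."""
        session = Session()
        try:
            divisions = session.query(cls).filter(cls.season == year).all()
            return {d.name: [NHLTeam.find_by_id(team_id) for team_id in d.teams]
                    for d in divisions}
        finally:
            session.close()
```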

Accordingly this results in:

Class definitions for both NHLTeam and NHLDivision are available via GitHub. In the next installment of this series we will have a look at the table and class definition for NHL player data.

Creating sample points (Pt. 3): Uniform random sampling – Triangulation

The third edition of this series will introduce a more complex sampling method to create points that are spread both randomly and uniformly over the area of a given source polygon.

Previously published parts of this series include:

Uniform random sampling

To distribute points in a polygon in a uniform and random manner we are going to follow an approach laid out within the scope of a discussion amongst MATLAB users. In case the internet forgets this minor thread someday, here’s a reduction of the problem as laid out by merited forum contributor John D’Errico:

“[…] the general approach […] that generates uniform random points inside a general object in n-dimensions.

“Compute the area/volume of each simplex, then assign points to them randomly with probability defined by their area relative to the total area. Within each simplex, […] use a […] method of generating random points […].

“Of course, you need a triangulation of the polygon, but as long as the polygon is a convex one, this is trivial with [D]elaunay.”

To summarize, these are the steps necessary to create uniform random points in a polygon:

  1. Use the polygon’s vertices to create a Delaunay triangulation. As we can’t guarantee that real-world data will only contain convex geometries, this needs to be a generalized form of it, a constrained Delaunay triangulation.
  2. Use the area of each created triangle to create a weighted random selection, i.e. to ensure that larger triangles are more likely to be picked than smaller ones.
  3. Randomly create a point inside the selected triangle.

Now this procedure shall be repeated until a certain stop criterion is fulfilled, something that we will discuss later on. Let’s start with triangulation first.

Delaunay triangulation and constrained Delaunay triangulation

Whilst the original (C++) OGR library contains a method DelaunayTriangulation to retrieve just that for an input geometry, this function is not part of the OGR Python bindings. However, as with most tasks, there is already another library that can do what we want. In this case we refer to poly2tri. Originally provided in Java and C++, there also exists a Python version of it. (There are some peculiar patches necessary to get poly2tri to work under Python, to which I will devote another blog entry.)

Using Shapely and poly2tri it is now possible to initiate a constrained Delaunay triangulation (CDT):

Now two things have to be considered. First, poly2tri brings its very own Point definition that needs to be distinguished from the one Shapely provides by explicitly using the prefix p2t. Second, it must be avoided that the CDT is fed duplicate points – the first and last vertex of a polygon definition are usually identical. We can deal with this constraint by omitting the first vertex:

Now real-world data kicks in again as it may contain holes, or interior rings as they are correctly called. These need to be specified separately as input for the CDT:

Finally, the triangulation can be performed:
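Condensing the individual steps above into one sketch, and assuming that the p2t bindings expose Point, CDT, add_hole and triangulate as in the examples shipped with the library:

```python
import p2t
from shapely.geometry import Polygon

# illustrative polygon with one interior ring (hole)
polygon = Polygon(
    [(0, 0), (10, 0), (10, 10), (0, 10), (0, 0)],
    [[(4, 4), (6, 4), (6, 6), (4, 6), (4, 4)]])

# exterior ring: skip the first vertex as it duplicates the last one
border = [p2t.Point(x, y) for x, y in polygon.exterior.coords[1:]]
cdt = p2t.CDT(border)

# interior rings (holes) are registered separately, again without duplicates
for ring in polygon.interiors:
    cdt.add_hole([p2t.Point(x, y) for x, y in ring.coords[1:]])

# perform the constrained Delaunay triangulation
triangles = cdt.triangulate()
print(len(triangles))
```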

Following is the visual result of the triangulation.

A constrained Delaunay triangulation was applied to this real-world polygon.

Next up in line for creating uniform random sample points are the weighted random selection of triangles and the random generation of points inside a given triangle.

Creating sample points (Pt. 2): Regular grid sampling

In this edition of our series dedicated to polygon sampling techniques we will look into the process of creating regularly gridded sample points with non-uniform intervals.

Previously published parts of this series include:

Regular grid sampling

Using a single point to represent a whole polygon geometry may satisfy only the most basic sampling demands. Another – and certainly more applicable – way to arrange sample points is to create virtual grids of such points that stretch out over the whole area of the polygon to be processed. To do so we need a regular interval between sample points that may be defined identically (i.e. uniformly) in both x- and y-direction. Here we will go the more generic route and implement separately adjustable (i.e. non-uniform) intervals for x and y.

The value range of sample coordinates is of course laid out by the extent of the source polygon. Using Shapely, the extent of a polygon is retrieved via the property bounds:

Given two intervals in x and y it is now easy to create points at regularly gridded positions laid out over the extent of the source polygon. Additionally, it should be ensured that the created points actually lie within the polygon to be sampled.
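A minimal sketch of this step; the polygon and the intervals are illustrative, and plain while-loops stand in for the floatrange helper discussed further below:

```python
from shapely.geometry import Point, Polygon

# illustrative source polygon and sampling intervals (map units)
polygon = Polygon([(0.3, 0.7), (102.1, 4.2), (101.4, 88.3), (2.5, 98.2)])
x_interval, y_interval = 10.0, 5.0

min_x, min_y, max_x, max_y = polygon.bounds

samples = []
y = min_y
while y <= max_y:
    x = min_x
    while x <= max_x:
        candidate = Point(x, y)
        # only keep grid positions that actually lie inside the polygon
        if candidate.within(polygon):
            samples.append(candidate)
        x += x_interval
    y += y_interval
```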

Now real-world spatial data is different yet again as it rarely comes with extents fitting to full meters. It is still possible to have regularly gridded sample points with nice-looking coordinates by creating extent ceilings and floors used as the base for the sampling process. We can actually use our defined intervals to create these values:
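One way of doing this, as a sketch (the function name is my own):

```python
import math

def snapped_bounds(polygon, x_interval, y_interval):
    """Floor/ceil the polygon's extent to full multiples of the intervals."""
    min_x, min_y, max_x, max_y = polygon.bounds
    return (math.floor(min_x / x_interval) * x_interval,
            math.floor(min_y / y_interval) * y_interval,
            math.ceil(max_x / x_interval) * x_interval,
            math.ceil(max_y / y_interval) * y_interval)
```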

Putting it all together

To extend our previously introduced PolygonPointSampler, we can combine all our findings in a new subclass RegularGridSampler. This one will also make use of a separate constructor, as there is the need to define the sampling intervals.
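A condensed sketch of such a subclass, assuming the base class outlined in the introductory post (further below in this archive) and reusing the snapped_bounds helper from above; method and attribute names are assumptions, the actual implementation can be found on GitHub:

```python
from shapely.geometry import Point

class RegularGridSampler(PolygonPointSampler):
    def __init__(self, polygon, x_interval, y_interval=None):
        super(RegularGridSampler, self).__init__(polygon)
        # a single interval yields a uniform grid, two a non-uniform one
        self.x_interval = x_interval
        self.y_interval = y_interval if y_interval is not None else x_interval

    def perform_sampling(self):
        min_x, min_y, max_x, max_y = snapped_bounds(
            self.polygon, self.x_interval, self.y_interval)
        y = min_y
        while y <= max_y:
            x = min_x
            while x <= max_x:
                candidate = Point(x, y)
                if candidate.within(self.polygon):
                    self.samples.append(candidate)
                x += self.x_interval
            y += self.y_interval
```

A call like RegularGridSampler(polygon, 1000.0) would then correspond to the uniform example below, and RegularGridSampler(polygon, 1000.0, 500.0) to the non-uniform one.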

Using our real-world example and a (uniform) sampling interval of 1,000 meters we arrive at the following result:

A real-world polygon with a regular grid of sampling points.

We may also use non-uniform sampling intervals of 1,000 meters in x- and 500 meters in y-direction:

A real-world polygon with a regular grid of sampling points using non-uniform sampling intervals.

A multi-part polygon may also be used to apply the sampler, for example with a uniform sampling interval of 250 m:

A multi-part polygon consisting of four parts sampled by a regular grid with a uniform sampling interval of 250 m.

WTH is a floatrange?

In the meantime you have most likely already realized that we’re using a non-standard range function to iterate over a range of float values. Based on a StackExchange suggestion I have defined a corresponding routine in a utility function:
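A minimal sketch of such a helper (the usual caveats about accumulating floating-point errors apply):

```python
def floatrange(start, stop, step):
    """Yield values from *start* up to (and including) *stop* in float-valued
    increments of *step* -- a stand-in for range() with non-integer steps."""
    current = start
    while current <= stop:
        yield current
        current += step
```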

Creating sample points (Pt. 1): Centroids and representative points

Following the declaration of my PolygonPointSampler’s base class in the previous post, I will now start to implement multiple classes each representing a different method to derive sample points for a given polygon.

Centroid sampling

Let’s start with the simplest polygon-to-point sampling method available: the creation of a centroid point. The centroid represents the geometric center of a polygon. If we’re looking at a triangle it emerges as the intersection of the triangle’s median lines, hence it is sometimes also called the median point. Shapely allows deriving a polygon’s centroid by querying the corresponding attribute:
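For example, with an illustrative L-shaped polygon:

```python
from shapely.geometry import Polygon

# an L-shaped example polygon
polygon = Polygon([(0, 0), (4, 0), (4, 1), (1, 1), (1, 4), (0, 4)])

centroid = polygon.centroid        # a property, not a method call
print(centroid.within(polygon))    # False - the phenomenon described below
```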

Applying this method to some real-world data will often lead to the phenomenon visible below: the centroid comes to lie outside of the original polygon’s interior.

A real-world polygon with a sampling centroid.

Representative point sampling

To deal with this problem, Shapely equips geometry objects with a method called representative_point() that – as the documentation reads – “returns a cheaply computed point that is guaranteed to be within the geometric object”. Judging from the Shapely source code this method goes back to the PointOnSurface function provided by GEOS. I haven’t been able to find out how exactly it is guaranteed that the sample point is an interior point, but from the looks of it, it is most likely that the algorithm described in a fitting thread at gis.stackexchange.com has been applied. In any case, for our Python example the code would look like the following:
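Reusing the L-shaped example polygon from above:

```python
from shapely.geometry import Polygon

polygon = Polygon([(0, 0), (4, 0), (4, 1), (1, 1), (1, 4), (0, 4)])

sample = polygon.representative_point()   # a method call, note the parentheses
print(sample.within(polygon))             # True - guaranteed to be interior
```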

Please note that querying for the representative point of a polygon actually calls a Python function – as indicated by the brackets following the according declaration – while the centroid is a property of the Shapely polygon object. Using real-world data we arrive at the situation displayed below:

A real-world polygon with a sampling representative point.

It is also possible to apply the sampler on multi-part polygons:

A multi-part polygon consisting of four parts each sampled by a representative point.

Putting it all together

Based on the knowledge laid out above, it is now possible to furnish the previously created PolygonPointSampler base object with extensions that create sample points by either using the centroid or representative point method. I have called them CentroidSampler and RepresentativePointSampler, respectively:
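Condensed sketches of the two subclasses, assuming the base class from the introductory post (further below) splits multi-part source geometries into a list self.src and collects results in self.samples:

```python
class CentroidSampler(PolygonPointSampler):
    """Sample each source polygon by its centroid (which may lie outside)."""

    def perform_sampling(self):
        if not self.prepared:
            self.prepare_sampling()
        for src in self.src:
            self.samples.append(src.centroid)


class RepresentativePointSampler(PolygonPointSampler):
    """Sample each source polygon by a representative point guaranteed to
    lie within it."""

    def perform_sampling(self):
        if not self.prepared:
            self.prepare_sampling()
        for src in self.src:
            self.samples.append(src.representative_point())
```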


Creating sample points in a polygon with OGR and Shapely (Introduction)

Creating sample points for a given region of interest is a common task in geospatial analysis. It is therefore only logical that there is already a number of ways available to create such points, including the Create Random Points tool of the ArcToolbox in ArcGIS or the corresponding components of the fTools plugin for QGIS. As I’m trying to follow an explicit under-the-hood philosophy, I will – starting with this very first “real” entry at Portolan – implement my very own sampling methodology for anyone willing to follow.

As it is my (current) programming language of choice I will use Python to accomplish my task. Adding to the fact that I have plenty of working experience with it, Python has the advantage of being very well positioned in the realm of geospatial processing. This is courtesy of a wide range of libraries dealing with corresponding tasks including two that I will use extensively, namely OGR (as part of GDAL) and Shapely. While OGR serves as a well suited toolbox for all things vector – including export and import to external file and/or database formats, basic dataset creation and editing as well as more sophisticated procedures such as generalization – I have found Shapely (basically representing a Pythonic interface to libgeos that is also used by OGR itself) to be a more direct link to the topological operations that are bread and butter for any kind of geographical information system.

As Python explicitly encourages the object-oriented programming paradigm, I will follow that and implement my very own PolygonPointSampler in compliance with that paradigm. Mind you, I’m not a formally trained computer scientist, but a cartographer by trade who somehow turned into a mostly self-taught specialist for geoinformatics. My theoretical ramblings with regard to programming may be a little off from time to time, however all of the things presented here do actually work in practice – which is most important for me. And maybe for the reader as well.

Corresponding to these prerequisites a base class for a PolygonPointSampler could be implemented as set out in the following listing:
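What follows is a condensed sketch of what such a base class could look like; attribute and method names are my assumptions, the authoritative listing being the one maintained in the repository:

```python
from shapely.geometry import Polygon


class PolygonPointSampler(object):

    def __init__(self, polygon=None):
        """Initialize a new sampler for the given Shapely (multi-)polygon."""
        self.polygon = polygon if polygon is not None else Polygon()
        self.samples = list()
        self.prepared = False

    def add_polygon(self, polygon):
        """Merge another polygon into the source geometry via union."""
        self.polygon = self.polygon.union(polygon)
        self.prepared = False

    def prepare_sampling(self):
        """Split multi-part source geometries into single parts."""
        self.src = list()
        if self.polygon.geom_type == 'MultiPolygon':
            self.src.extend(self.polygon.geoms)
        else:
            self.src.append(self.polygon)
        self.prepared = True

    def perform_sampling(self):
        """To be implemented by each concrete sampling subclass."""
        raise NotImplementedError
```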