I am really honored that a lot of people seem to use
my .NET for Apache Spark Docker image, for example to explore how C# and Apache Spark can work together.
Additionally, I have lately been getting a lot of requests asking whether I would be willing to share the code for creating the images.
And finally, after tidying it up a bit (e.g. removing the experimental Windows support), it is now
available on GitHub.
So thanks to everyone who made this image such a success and of course you are very welcome to …
.NET for Apache Spark 0.11.0 is now available, and I have also updated my related Docker images for Linux and Windows on Docker Hub.
If you are interested, check out the
official resources, or one of the following articles.
In part 2, I used htm.core as a single order sequence memory by allowing only one cell per mini-column. In this post I’ll finally have a first look at the high order sequence memory.
Before we do that, however, I want to show you one last single order memory example.
Single Order Sequence Memory Recap
As you might remember from the
last post, these were the settings for our htm.core temporal memory (aka sequence memory):

from htm.bindings.sdr import SDR
from htm.bindings.algorithms import TemporalMemory as TM

columns = 8
inputSDR = SDR( columns )
cellsPerColumn = 1
tm = TM(columnDimensions = (inputSDR.size,),
        cellsPerColumn = cellsPerColumn)
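With cellsPerColumn = 1, each input always activates the same single cell per column, so the memory can only learn first-order transitions. The following pure-Python sketch (an illustration of that limitation, not htm.core code) shows why such a first-order model cannot disambiguate two sequences that share an element:

```python
# Illustration only: a first-order transition table, the kind of model
# a temporal memory restricted to one cell per column is limited to.
from collections import defaultdict

def learn(sequences):
    # Map each element to the set of elements seen directly after it.
    table = defaultdict(set)
    for seq in sequences:
        for prev, nxt in zip(seq, seq[1:]):
            table[prev].add(nxt)
    return table

# Two sequences that share the subsequence "B C".
table = learn(["ABCD", "XBCY"])

# After "C" the model predicts both "D" and "Y" -- with no context about
# what came before "B C", the two sequences cannot be told apart.
print(sorted(table["C"]))  # ['D', 'Y']
```

A high order memory avoids this ambiguity by using multiple cells per column, so "C after A B" and "C after X B" get distinct cell-level representations.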
To allow the htm.core temporal memory to learn sequences effectively, it is important to understand the impact of the different parameters in more detail.
In this part I will introduce:

columnDimensions
cellsPerColumn
maxSegmentsPerCell
maxSynapsesPerSegment
initialPermanence
connectedPermanence
permanenceIncrement
predictedSegmentDecrement

Temporal Memory – Previously on this blog…
Part 1 just covered enough basics of htm.core to get us started, and we actually saw how the single order memory got trained. A cycle of encoded increasing numbers from 0 to 9 was very easy to predict, as there was always just one specific value that could follow the …
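The cycle from the excerpt above is easy precisely because it is deterministic at first order: every value has exactly one successor. A short sketch (plain Python, not htm.core) makes that explicit:

```python
# The repeating cycle 0..9 used to train the single order memory.
cycle = list(range(10))

# First-order transitions: each value is followed by exactly one value.
successors = {cycle[i]: cycle[(i + 1) % len(cycle)] for i in range(len(cycle))}

# Prediction is trivial because every input has a unique successor.
print(successors[3])  # 4
print(successors[9])  # 0 (the cycle wraps around)
```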
A couple of months ago I described how to
transfer data from Apache Spark to PostgreSQL by creating a Spark ForeachWriter in Scala.
This time I will show how this can be done in C#, by creating a ForeachWriter for .NET for Apache Spark.
To create a custom ForeachWriter, one needs to provide an implementation of the IForeachWriter interface, which is supported from version
0.9.0 onward. I am going to use version 0.10.0 in this article, however.
Documentation of the C# interface is provided within the related source code:
The example project I am …
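The IForeachWriter in the post follows the same open/process/close contract as Spark's other language bindings. As a rough analogue, here is a hedged Python sketch of that contract, in the shape PySpark's DataStreamWriter.foreach accepts (the class name and its print-based sink are illustrative, not the post's C# code):

```python
# Sketch of the open/process/close contract behind Spark's ForeachWriter,
# in the shape accepted by PySpark's DataStreamWriter.foreach.
class ConsoleForeachWriter:
    def open(self, partition_id, epoch_id):
        # Called once per partition and epoch; return True to process rows.
        self.rows_seen = 0
        return True

    def process(self, row):
        # Called once per row; a real writer would INSERT into PostgreSQL here.
        self.rows_seen += 1
        print(row)

    def close(self, error):
        # Called at the end of the partition; error is None on success.
        print(f"wrote {self.rows_seen} rows, error={error}")

# With PySpark this would be wired up roughly as:
#   df.writeStream.foreach(ConsoleForeachWriter()).start()
```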