Zero-Touch Importing
What is Zero-Touch?
Zero-Touch Importing refers to a simple point-and-click method for importing C# libraries. Dynamo will read the public methods of a .dll file and convert them to Dynamo nodes. You can use Zero-Touch to develop your own custom nodes and packages, and to import external libraries into the Dynamo environment.
With Zero-Touch, you can import a library that was not necessarily developed for Dynamo and create a suite of new nodes from it. The current Zero-Touch functionality demonstrates the cross-platform mentality of the Dynamo project.
This section demonstrates how to use Zero-Touch to import a third-party library. For information on developing your own Zero-Touch library, refer to the Dynamo wiki page.
Zero-Touch Packages
Zero-Touch packages are a good complement to user-defined custom nodes. A few packages that use C# libraries are listed in the table below. For more detailed information on packages, visit the Packages section in the Appendix.
Case Study - Importing AForge
In this case study, we'll show how to import AForge, an external .dll library. AForge is a robust library that offers a range of functionality, from image processing to artificial intelligence. We'll reference AForge's imaging classes in a few image-processing exercises below.
Let's begin by downloading AForge. On the AForge download page, select [Download Installer] and run the installer once the download has completed.
In Dynamo, create a new file and select File > Import Library...
Next, locate the .dll file.
In the pop-up window, navigate to the Release folder of your AForge installation. This will likely be in a folder similar to this one: C:\Program Files (x86)\AForge.NET\Framework\Release.
AForge.Imaging.dll: We only want to use this one file from the AForge library for this case study. Select this .dll and hit "Open".
Back in Dynamo, you should see an AForge group of nodes added to your Library. We now have access to the AForge imaging library from our visual program!
Exercise 1 - Edge Detection
Download the example file by clicking on the link below.
A full list of example files can be found in the Appendix.
Now that the library is imported, we'll start off simple with this first exercise (01-EdgeDetection.dyn). We'll do some basic image processing on a sample image to show how AForge's image filters work. We'll use the "Watch Image" node to display our results and apply filters in Dynamo similar to those in Photoshop.
To import an image, add a File Path node to the canvas and select "soapbubbles.jpg" from the exercise folder (photo cred: flickr).
The File Path node simply provides a String of the path to the image we've selected. Next, we need to convert it into a usable image file in Dynamo.
Use File.FromPath to convert the file path into a file object in the Dynamo environment.
Connect the File Path node to the File.FromPath node.
To convert this File into an Image, we'll use the Image.ReadFromFile node.
Last, let's see the result! Drop a Watch Image node onto the canvas and connect it to Image.ReadFromFile. We haven't used AForge yet, but we've successfully imported an image into Dynamo.
Under AForge.Imaging.AForge.Imaging.Filters (in the navigation menu), you'll notice that there is a wide array of filters available. We're going to use one of these filters now to desaturate an image based on threshold values.
Drop three number sliders onto the canvas, change their ranges to run from 0 to 1, and set their step values to 0.01.
Add the Grayscale.Grayscale node to the canvas. This is an AForge filter which applies a grayscale filter to an image. Connect the three sliders from step 1 into cr, cg, and cb. Change the top and bottom sliders to have a value of 1 and the middle slider to have a value of 0.
In order to apply the Grayscale filter, we need an action to perform on our image. For this, we use BaseFilter.Apply. Connect the image into the image input and Grayscale.Grayscale into the baseFilter input.
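For context, these two nodes wrap a single AForge call. A hypothetical Python-node equivalent (purely illustrative, assuming the default AForge install path and that IN[0] is the bitmap coming from Image.ReadFromFile) might look like this:

```python
# Illustrative sketch only; the exercise itself uses the imported Zero-Touch nodes.
import clr
clr.AddReferenceToFileAndPath(
    r'C:\Program Files (x86)\AForge.NET\Framework\Release\AForge.Imaging.dll')
from AForge.Imaging.Filters import Grayscale

bitmap = IN[0]                   # the bitmap from Image.ReadFromFile
gray = Grayscale(1.0, 0.0, 1.0)  # cr, cg, cb coefficients, matching the slider values above
OUT = gray.Apply(bitmap)         # returns a new, desaturated bitmap
```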
Plugging the result into a Watch Image node, we get a desaturated image.
We can control how this image is desaturated using the threshold values for red, green, and blue, which are defined by the inputs to the Grayscale.Grayscale node. Notice that the image looks pretty dim - this is because the green value is set to 0 from our slider.
Change the top and bottom sliders to have a value of 0 and the middle slider to have a value of 1. This way we get a more legible desaturated image.
Let's use the desaturated image and apply another filter on top of it. The desaturated image has some contrast, so we're going to test some edge detection.
Add a SobelEdgeDetector.SobelEdgeDetector node to the canvas.
Connect this to a BaseUsingCopyPartialFilter.Apply and connect the desaturated image to the image input of this node.
The Sobel Edge Detector has highlighted the edges in a new image.
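As with the grayscale step, this pair of nodes boils down to one more filter call. A hypothetical continuation of the earlier Python sketch (again illustrative only, taking the grayscale bitmap as IN[0]) could be:

```python
# Illustrative sketch only; assumes the default AForge install path.
import clr
clr.AddReferenceToFileAndPath(
    r'C:\Program Files (x86)\AForge.NET\Framework\Release\AForge.Imaging.dll')
from AForge.Imaging.Filters import SobelEdgeDetector

gray_bitmap = IN[0]   # the 8bpp grayscale bitmap produced by the Grayscale filter
OUT = SobelEdgeDetector().Apply(gray_bitmap)   # new bitmap with the edges highlighted
```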
Zooming in, the edge detector has called out the outlines of the bubbles with pixels. The AForge library has tools to take results like this and create Dynamo geometry. We'll explore that in the next exercise.
Exercise 2 - Rectangle Creation
Now that we're introduced to some basic image processing, let's use an image to drive Dynamo geometry! On an elementary level, in this exercise we're aiming to do a "Live Trace" of an image using AForge and Dynamo. We're going to keep it simple and extract rectangles from a reference image, but there are tools available in AForge for more complex operations. We'll be working with 02-RectangleCreation.dyn from the downloaded exercise files.
With the File Path node, navigate to grid.jpg in the exercise folder.
Connect the remaining series of nodes above to reveal a coarse parametric grid.
In this next step, we want to reference the white squares in the image and convert them to actual Dynamo geometry. AForge has a lot of powerful computer-vision tools, and here we're going to use one of the more important ones in the library, called BlobCounter.
Add a BlobCounter node to the canvas. Next, we need a way to process the image (similar to the BaseFilter.Apply tool in the previous exercise).
Unfortunately, the "Process Image" node is not immediately visible in the Dynamo library. This is because the function may not be exposed in the AForge source code. To fix this, we'll need to find a workaround.
Add a Python node to the canvas and add the following code to the Python node. This code imports the AForge library and then processes the imported image.
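The exact snippet isn't reproduced here, but a minimal sketch that does the same thing (assuming the default AForge install path, with IN[0] being the incoming image) would be along these lines:

```python
# Hypothetical sketch of the work-around, not the exercise's original snippet.
import clr
clr.AddReferenceToFileAndPath(
    r'C:\Program Files (x86)\AForge.NET\Framework\Release\AForge.Imaging.dll')
from AForge.Imaging import BlobCounter

bc = BlobCounter()
bc.ProcessImage(IN[0])   # scan the incoming bitmap for blobs (bright regions)
OUT = bc                 # pass the populated BlobCounter downstream
```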
Connecting the image output to the Python node input, we get an AForge.Imaging.BlobCounter result from the Python node.
The next steps involve a few tricks that call for some familiarity with the AForge Imaging API. It's not necessary to learn all of this for everyday Dynamo work; this is more a demonstration of how flexibly external libraries can be used within the Dynamo environment.
Connect the output of the Python script to BlobCounterBase.GetObjectRectangles. This reads objects in an image, based on a threshold value, and extracts quantified rectangles from the pixel space.
Add another Python node to the canvas, connect it to GetObjectRectangles, and input the code below. This will create an organized list of Dynamo objects.
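That snippet isn't reproduced here either; the idea, sketched below with hypothetical variable names, is to unpack each System.Drawing rectangle into a plain list of numbers:

```python
# Hypothetical sketch: flatten each rectangle into [X, Y, Width, Height].
rectangles = IN[0]   # the rectangles coming from BlobCounterBase.GetObjectRectangles

OUT = [[r.X, r.Y, r.Width, r.Height] for r in rectangles]
```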
Transpose the output of the Python node from the previous step. This creates four lists, representing the X, Y, Width, and Height values of each rectangle.
Using a Code Block, we organize the data into a structure that the Rectangle.ByCornerPoints node can accept.
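The Code Block's contents aren't shown here; purely as an illustration, the same assembly could also be sketched in a Python node (assuming the transposed X, Y, Width, and Height lists arrive as IN[0] through IN[3]):

```python
# Illustrative Python equivalent of the Code Block step, not the exercise's actual code.
import clr
clr.AddReference('ProtoGeometry')
from Autodesk.DesignScript.Geometry import Point, Rectangle

xs, ys, ws, hs = IN[0], IN[1], IN[2], IN[3]

rects = []
for x, y, w, h in zip(xs, ys, ws, hs):
    # Build the four corner points of each rectangle from its origin and size
    p1 = Point.ByCoordinates(x, y)
    p2 = Point.ByCoordinates(x + w, y)
    p3 = Point.ByCoordinates(x + w, y + h)
    p4 = Point.ByCoordinates(x, y + h)
    rects.append(Rectangle.ByCornerPoints(p1, p2, p3, p4))
OUT = rects
```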
We have an array of rectangles representing the white squares in the image. Through programming, we've done something (roughly) similar to a live trace in Illustrator!
We still need some cleanup, however. Zooming in, we can see that we have a bunch of small, unwanted rectangles.
Next, we're going to write some code to get rid of the unwanted rectangles.
Insert a Python node between the GetObjectRectangles node and the other Python node. Its code, shown below, removes all rectangles that fall below a given size.
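Again, the exact code isn't reproduced here; a minimal sketch of the filtering step, with an illustrative size threshold, could look like:

```python
# Hypothetical sketch: discard rectangles smaller than a chosen pixel size.
rectangles = IN[0]   # the rectangles coming from BlobCounterBase.GetObjectRectangles
min_size = 8         # illustrative threshold in pixels; tune it to the image

OUT = [r for r in rectangles if r.Width > min_size and r.Height > min_size]
```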
With the superfluous rectangles gone, just for kicks, let's create a surface from these rectangles and extrude them by a distance based on their areas.
Lastly, change the both_sides input to false and we get an extrusion in one direction. Dip this baby in resin and you've got yourself one super nerdy table.
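If you'd rather script this last step than wire it up with nodes, a hypothetical Python-node sketch (assuming the cleaned-up rectangles arrive as IN[0] and using an arbitrary scale factor) might be:

```python
# Illustrative sketch: patch each rectangle into a surface and thicken it
# by an amount driven by its area.
import clr
clr.AddReference('ProtoGeometry')
from Autodesk.DesignScript.Geometry import Surface

rectangles = IN[0]   # the cleaned-up Dynamo rectangles
scale = 0.01         # illustrative factor relating area to extrusion height

solids = []
for rect in rectangles:
    srf = Surface.ByPatch(rect)
    # both_sides = False extrudes in one direction only
    solids.append(srf.Thicken(srf.Area * scale, False))
OUT = solids
```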
These are basic examples, but the concepts outlined here are transferable to exciting real-world applications. Computer vision can be used for a whole host of processes. To name a few: barcode readers, perspective matching, projection mapping, and augmented reality. For more advanced topics with AForge related to this exercise, have a read through this article.