GPU Accelerated Deep Learning also Wanted for .NET

This entry was posted in the Talk category on December 10, 2016 by dani

We took the chance and did a second Channel 9 recording on our GPU accelerated Machine Learning project in the Microsoft offices at Times Square, New York City. It was a great experience to do the recording with Seth Juarez. Many thanks, Seth! Several deep learning libraries already exist, but none of them targets […]

Read More

Radically Simplified GPU Programming with C#

This entry was posted in the Talk category on December 10, 2016 by dani

We were very happy to do a Channel 9 recording for our new Alea GPU version 3 in the Microsoft offices at Times Square, New York City. It was a great experience to do the recording with Seth Juarez. Many thanks, Seth! GPU computing is all about number crunching and performance. Do you have a […]
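To give a taste of what "radically simplified" looks like, here is a minimal sketch of a GPU parallel-for in C#. The names (Gpu.Default, the For extension from Alea.Parallel, the [GpuManaged] attribute) follow Alea GPU's published samples as best we can reconstruct them; treat the exact signatures as assumptions and check the official documentation for your version.

```csharp
using System;
using Alea;           // assumed: Alea GPU v3 core package
using Alea.Parallel;  // assumed: provides the For extension on Gpu

class Program
{
    // [GpuManaged] lets Alea GPU move the arrays between host and
    // device memory automatically for the code inside the lambda.
    [GpuManaged]
    static void Main()
    {
        const int n = 1 << 20;
        var x = new double[n];
        var y = new double[n];
        var sum = new double[n];
        var rng = new Random(42);
        for (var i = 0; i < n; i++) { x[i] = rng.NextDouble(); y[i] = rng.NextDouble(); }

        // The delegate is compiled to a CUDA kernel and launched on the
        // default GPU; the body runs once per index, in parallel.
        Gpu.Default.For(0, n, i => sum[i] = x[i] + y[i]);

        Console.WriteLine($"sum[0] = {sum[0]}");
    }
}
```

The point of the design is that there is no explicit kernel, grid, or memory-copy code: an ordinary lambda over an index range is enough.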

Read More

A new Deep Learning Stack for .NET

This entry was posted in the Talk category on October 4, 2016 by dani

I gave a talk at GTC Europe 2016 in Amsterdam about our new open source project Alea TK. Alea TK is a library for general purpose numerical computing and Deep Learning based on tensors and tensor expressions, supporting imperative calculations as well as symbolic calculations with auto-differentiation. It is designed from the ground up with CUDA […]
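To illustrate the idea of symbolic calculation with auto-differentiation (this is a concept sketch only, not the Alea TK API; see the project documentation for the real tensor-expression interface), a tiny expression graph in C# might look like this:

```csharp
using System;

// A minimal expression graph over one scalar variable. Each node can
// evaluate itself and produce its symbolic derivative as a new graph.
abstract class Expr
{
    public abstract double Eval(double x);
    public abstract Expr Diff();   // derivative with respect to x
}

class Var : Expr
{
    public override double Eval(double x) => x;
    public override Expr Diff() => new Const(1.0);
}

class Const : Expr
{
    readonly double v;
    public Const(double v) { this.v = v; }
    public override double Eval(double x) => v;
    public override Expr Diff() => new Const(0.0);
}

class Add : Expr
{
    readonly Expr a, b;
    public Add(Expr a, Expr b) { this.a = a; this.b = b; }
    public override double Eval(double x) => a.Eval(x) + b.Eval(x);
    public override Expr Diff() => new Add(a.Diff(), b.Diff());
}

class Mul : Expr
{
    readonly Expr a, b;
    public Mul(Expr a, Expr b) { this.a = a; this.b = b; }
    public override double Eval(double x) => a.Eval(x) * b.Eval(x);
    // product rule: (ab)' = a'b + ab'
    public override Expr Diff() => new Add(new Mul(a.Diff(), b), new Mul(a, b.Diff()));
}

class Demo
{
    static void Main()
    {
        // f(x) = x*x + 3x, so f'(x) = 2x + 3
        Expr x = new Var();
        Expr f = new Add(new Mul(x, x), new Mul(new Const(3.0), x));
        Expr df = f.Diff();
        Console.WriteLine(f.Eval(2.0));   // 10
        Console.WriteLine(df.Eval(2.0));  // 7
    }
}
```

A tensor library generalizes the same principle from scalars to tensor expressions, which is what makes training by gradient descent possible without hand-written derivatives.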

Read More

F# [on GPUs] for Quant Finance

This entry was posted in the Talk category on July 14, 2016 by dani

I gave a talk at the Swiss FinteCH Meetup on July 14th, 2016 about open source technologies in fintech. The Swiss FinteCH Meetup group is a great and growing community interested in technology applied to financial problems. Thanks to Swati for organizing the event. Check out the slides for more information.

Read More

Deficiencies of .NET CLR JIT Compilers

This entry was posted in the Talk category on July 8, 2016 by dani

Another Reason to Use a GPU! I recently gave a talk at an F# meetup hosted by Jet.com about deficiencies of .NET CLR JIT compilers. We know that C# or F# code often does not perform at the level of native C++ because the CLR JIT compiler does not optimize the code well enough. In the worst cases we lose a […]
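One concrete example of the gap: as of 2016 the CLR JIT (RyuJIT) did not auto-vectorize simple loops, something C++ compilers do routinely. The sketch below shows a scalar dot product next to a hand-vectorized version using System.Numerics.Vector&lt;T&gt;, which RyuJIT does lower to SIMD instructions; the exact speedup depends on hardware and is not claimed here.

```csharp
using System.Numerics;

static class DotProduct
{
    // Scalar loop: a C++ compiler would typically auto-vectorize this,
    // but the CLR JIT emits straightforward one-lane-at-a-time code.
    public static float Scalar(float[] a, float[] b)
    {
        float sum = 0f;
        for (int i = 0; i < a.Length; i++)
            sum += a[i] * b[i];
        return sum;
    }

    // Manually vectorized: process Vector<float>.Count lanes per step,
    // then reduce the accumulator and finish the tail scalar-wise.
    public static float Simd(float[] a, float[] b)
    {
        var acc = Vector<float>.Zero;
        int w = Vector<float>.Count;
        int i = 0;
        for (; i <= a.Length - w; i += w)
            acc += new Vector<float>(a, i) * new Vector<float>(b, i);
        float sum = Vector.Dot(acc, Vector<float>.One);  // horizontal sum
        for (; i < a.Length; i++)
            sum += a[i] * b[i];
        return sum;
    }
}
```

Having to vectorize by hand is exactly the kind of missing optimization the talk discusses, and one more reason to hand the number crunching to a GPU.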

Read More