Webinar On-Demand: Introduction to the Apalis TK1 with NVIDIA® Tegra® K1 SoC


Welcome to today's webinar from Toradex about our new Apalis TK1 with the
NVIDIA Tegra K1. My name is Daniel Lang and I work for Toradex out of our
office in Seattle. Later in the webinar, I will be joined by Dominik, a
Senior Software Development Engineer at our headquarters in Switzerland,
who will cover the technical side and how to get the TK1 running. Before we
dive into the presentation, a couple of organizational notes: at the end we
will have a Q&A, so you will be able to ask us questions and we will answer
them. In your webinar panel you should see a chat or question box. You can
type your questions in there while we do the presentation; we may already
answer a few along the way, and the main Q&A will be at the end of the
presentation. The agenda: first a very short overview of Toradex, then an
introduction to the Apalis TK1 and the TK1 SoC. After that you will hear
about possible applications and demos, so we will have some videos and some
things people do with the TK1. Then it gets more technical: how to get
started, and Dominik will show you how to install the latest images, how to
work with JetPack, and so on. At the end we will do a live
question-and-answer session. So, a very short introduction to Toradex.
Toradex was founded in 2003 in Switzerland. Since then, we have spread
across the whole world, as you can see, so you should find somebody in or
very close to your time zone; you can call that office and they can discuss
your issue. Toradex specializes in ARM embedded computer modules, or System
on Modules. Our customers are in a wide range of verticals: industrial
automation, medical, automotive, avionics, test and measurement, digital
signage, and so on; basically everything except typical super-high-volume
consumer products, which we don't do. Our SoCs are from NXP, from NVIDIA,
which we will cover today, and from Marvell. We have in-house support for
Linux, so we do Linux ourselves. We still do Windows Embedded Compact,
though not on the TK1, but on some of our other modules, and for some
selected modules we have now started supporting Windows 10 IoT Core; we
also have a webinar about that if you're interested. As I said, software
and hardware are in-house, so if you have any trouble, and often it is not
clear whether it is a software or hardware issue, you can just get in touch
with us and we will help you. That was Toradex; now let's get started on
the products. We have two main product
families of computer modules. One is called Colibri, a small SODIMM
form-factor module. We released the first one in 2005, so they have been
around for more than 10 years, and there is a wide range of modules. We
also have very low-cost modules starting at $24, including RAM, flash, SoC,
and power management, everything on the module, as well as some very
low-power modules. They have the typical, let's say traditional, embedded
interfaces like SPI, I²C, and so on. All the modules in that family are
pin-compatible, so if you designed a carrier board 10 years ago, you can
still plug in the latest module. Then on the
other hand, we have the higher-performance Apalis modules, and the TK1 is
an Apalis module, so it fits in that group; you can actually see it as the
module on the top. From the form factor, this is an MXM3-based module, and
being higher performance, it also has higher-bandwidth interfaces like
PCIe, Gigabit Ethernet, USB 3.0, and other high-bandwidth interfaces for
display and camera. For the module, you also need a carrier board. Here on
the left side,
you can see our evaluation board on the top. It is really big, but you have
easy access to all the pins, and it is really good for prototyping,
especially if you have hardware you need to connect. Then there is the Iris
in the middle, also from Toradex; I hope the connectors give you a feeling
for the size. It is much smaller, can be used as a single-board computer,
has HDMI, and so on. We also have third-party carrier boards: for our
Colibri family, there is already a big range of third-party companies
providing carrier boards for our modules. For the Apalis family, the first
ones are just starting to appear; here you can see one from a company
called Diamond Systems, which is currently in verification, and I hope it
will appear soon on our website. Most customers actually design their own
carrier board. All our carrier boards are open hardware, so all the
reference designs are available in Altium format or as PDF. You can go to
our website, where you will find a lot of tools and help if you want to
design your own carrier board. Speaking of our website: something Toradex
is really known for is our developer resources and how we help you develop
on our products. There is the developer website, a developer center with
more than 800 articles that we update daily. We have a quite new community
forum, which is very cool; if you know Stack Overflow, it is a little bit
inspired by that. You can of course also email us directly, and as you saw
on the earlier slide, we have support offices around the world that all
have phones, so if you're in trouble, don't hesitate to pick up the phone
and give them a call. And of course there are video tutorials and webinars
like the one you are enjoying at the moment. So now, we will look at the
hardware, first at the module level. The Apalis TK1 comes with 2 GB of RAM
and 8 GB of eMMC flash memory, which means you don't need an SD card or
anything external to run your operating system and your programs from; you
can do that directly from the onboard flash. It also has a wide temperature
range: our early prototypes have a slightly smaller range, but the volume
product supports -24°C to 85°C. A short overview of the interfaces: it
basically supports the interfaces of the Apalis module family. I won't go
through all of them, but here are some highlights. You only need a single
3.3V power supply to get it running. You have USB 3.0, and you have HDMI
and LVDS for displays; it can drive 4K displays. You actually also have
DisplayPort and embedded DisplayPort. Then you have the CSI camera
interface, with which you can connect up to three cameras, and PCIe Gen
2.0. You have two high-performance CAN interfaces, which are actually
realized with a K20 microcontroller. This is a Cortex-M4 and it is freely
programmable; it is connected via SPI, so if you don't want to use it for
CAN and have other low-latency tasks, you could even use it for those.
Compared to our other Apalis modules, we did not connect the parallel
camera interface and the parallel LCD interface, so if you have a parallel
LCD on your current carrier board, please get in touch with us so that we
can look for a solution. Now I will talk a little bit more about the SoC.
So, it
comes with a 4-plus-1 ARM Cortex-A15 CPU. We say 4-plus-1 because normally
there are four cores running at up to 2.2 GHz; however, if you don't fully
load the system, there is an additional core, also a Cortex-A15 but
designed for low power, and the system will automatically migrate your
processes to that single core to save power. You can really see that the
designers of this system were considering power consumption. I think one of
the nicest features of the TK1, and a big difference from more or less all
other SoCs, is the very strong GPU with 192 CUDA cores; we will look at
that more closely a little later. You can connect 4K displays, and it also
has a quite strong video decoder: it can decode 4K H.264, and up to four
Full HD streams simultaneously. You can also do H.265, which will partly
load the GPU, so not everything is done in dedicated hardware, but you can
have a Full HD H.265 stream if that is your codec. You can also encode
video at up to 4K at 24 frames per second, or Full HD at 60 frames per
second. Then, here a little bit
more about the GPU: having 192 CUDA cores means you can really use it for
general-purpose computing. You can use it to render very nice graphics, but
you can also use it for signal processing, for deep learning, and for
general calculation. Another nice thing is the unified memory: the memory
is accessible from the CPU and the GPU directly. On a traditional PC, you
have a graphics card connected over PCIe, so if you want to share data, for
example process a little on the CPU, then on the GPU, and go back and
forth, you always have to copy it over PCIe to one side and then back. With
Tegra, you don't have that; both can access the same memory. Depending on
your algorithm, that can give you a big boost, even if the CPU is a little
slower than your i7, and maybe the GPU too. And, what we have support
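for, I will cover in a second.

First, to make the unified-memory point concrete, here is a small illustration in plain Python (concept only, not GPU code: plain lists stand in for the two memory spaces, and the variable names are made up for the sketch):

```python
# Discrete-GPU style: the "device" works on a copy, so every round trip
# costs a host->device copy and a device->host copy over "PCIe".
host = [0.0, 1.0, 2.0, 3.0]
device = list(host)                  # host -> device copy
device = [2 * x for x in device]     # device computes
host = list(device)                  # device -> host copy back

# Unified-memory style: CPU and GPU share one buffer, so the device's
# in-place writes are immediately visible to the host with no copies.
shared = [0.0, 1.0, 2.0, 3.0]
buffer = shared                      # both sides reference the same memory
for i, x in enumerate(buffer):
    buffer[i] = 2 * x                # "GPU" computes in place
print(host, shared)                  # both end up doubled
```

Depending on how often your algorithm bounces data between CPU and GPU, skipping those copies is exactly where the Tegra gets its boost. And, what we have support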
for frameworks and features: you have the full OpenGL 4.4, so it is not
just a mobile GPU; you get the same full OpenGL 4.4 you know from a PC. You
also have OpenGL ES 3.1, CUDA support, and OpenCV, a very popular
computer-vision framework for which we have Tegra-optimized libraries, so
you can just replace your OpenCV functions with the optimized ones for
Tegra. We also support NVIDIA VisionWorks. Then,
let's talk a little bit about applications. We think the TK1 is
interesting, and I hope you also have a lot of ideas, but here are a few
things people are doing and what we expect. One thing we have already
received quite a few requests for is TK1 clusters: you don't just use one
TK1, you use several together. It was not immediately obvious to me why you
would want to do that, but we talked with people, and it is really the
unified memory, and having so many CPU cores coupled with the GPUs, that
makes this ideal for some algorithms. So people really want a surface full
of these Apalis modules. We already have a third party doing a carrier
board into which you can plug up to four Apalis modules, and then you can
couple those together even more. So, if you are interested in clusters and
want a carrier board with several Apalis modules, and maybe don't want to
do everything by yourself, get in touch with us and we can
introduce you to the third party. Another application we expect is signal
processing, replacing FPGAs and DSPs. GPUs are very good if you can
parallelize your work, and in signal processing you often can; they also
have very good floating-point performance. A GPU is a little less
deterministic than an FPGA, where you really have everything in hardware,
but it is easier to program: I think it is easier to find people who can be
taught CUDA, which is very close to C, than VHDL and all the FPGA tools. It
is also easier to migrate to other GPUs, and there are already
applications, for example in radar. The unified memory is again a big
advantage here. This is a hotly discussed topic, what is better in which
use case, and it is not always the GPU, but it often can be, so we think we
will
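see a lot of that.

To make the parallelization point concrete, here is a deliberately small sketch in plain Python (illustration only: a 3-tap FIR filter written so every output sample is an independent unit of work; on the GPU, each output index would map to one CUDA thread, and real code would use CUDA or an optimized library):

```python
# Many DSP kernels are "the same small computation at every sample",
# which is exactly the shape of work a GPU parallelizes well.
signal = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0]
taps = [0.25, 0.5, 0.25]             # made-up filter coefficients

def out_sample(n):
    # one independent unit of work per output sample
    return sum(taps[k] * signal[n - k] for k in range(len(taps)))

# every call is independent, so all of them could run at the same time
filtered = [out_sample(n) for n in range(2, len(signal))]
print(filtered)                      # [1.0, 2.0, 3.0, 4.0]
```

And as I said, we think we will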
see quite some applications there. Then the next thing: a demo. Let me show
one; it is a user interface done with Qt, so something a little more
traditional. I didn't mention it earlier, but we will show you videos, and
we have had problems in the past where not everybody could see them. If you
can't see the video, just click on the link in the chat window and you will
get the YouTube version of the clip; they are all pretty short, so you can
watch them on YouTube as well. I will start it here and hope it works. You
see the Iris carrier board with the Apalis under the heatsink, and a Qt UI
doing 3D with Qt 5.6 for that demo. You can see it is quite advanced, with
a lot of lighting and a lot of parts, but it all runs very smoothly on the
TK1, so I think even the most advanced user interfaces will be fine. It
also runs at Full HD. The demo was provided by KDAB, a partner company of
Toradex. They can help you with Qt, C++, and OpenGL, so if you want an
application with Qt or a really fancy user interface, they can help you
with it. On this slide you see a little bit of what they do. They are quite
experienced and also do Qt training, if you are looking for training to
work with Qt directly, because it is a very popular framework for UIs. They
also maintain Qt for Windows Embedded Compact, if you have any questions in
that direction. So next, let's talk a little bit about Deep
Learning, and just very briefly what it is; the full topic would take much
too long, so I will keep it very short. Traditionally, if you want to
achieve something like detecting cancer cells in human tissue, you have a
picture of tissue and need to find the cancer cells. You need a lot of
know-how about what indicates cancer, and then you need a computer-vision
specialist to do edge detection, check colors, and so on. It is very
complicated; you have to write a really detailed algorithm. With machine
learning you do it differently. You have a lot of data, maybe from years in
which people manually detected the cancer and marked this is cancer, this
is not, and so on. So you have a lot of pictures, a lot of information, and
you just feed that into what I call here a generic algorithm. There are
different ones, but the concept is more or less the same: the computer
figures out by itself how to detect cancer. That is the whole idea of
Machine Learning. And Deep Learning –
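in a nutshell, learning the rule from examples instead of hand-writing it.

A deliberately tiny sketch of that idea in Python (the data and the one-number threshold "model" are made up; real systems train deep networks on far more examples):

```python
# Toy version of "feed labeled data to a generic algorithm": nobody
# writes the rule "positive if x > 5"; the program recovers it from
# the labeled examples alone.
data = [(1.0, 0), (2.0, 0), (3.0, 0), (4.5, 0),   # marked "not cancer"
        (5.5, 1), (7.0, 1), (8.2, 1), (9.0, 1)]   # marked "cancer"

def errors(t):
    # how many examples a threshold-at-t classifier gets wrong
    return sum(int((x > t) != (y == 1)) for x, y in data)

# generic algorithm: try candidate thresholds, keep the best-fitting one
best_t = min((i / 10 for i in range(100)), key=errors)
print(best_t)   # 4.5 -- a boundary between the two classes
```

Swap the one-number threshold for millions of neural-network weights and the toy search for gradient descent, and you have the same picture at deep-learning scale. And Deep Learning itself –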
it's one part of that. With deep learning there have been some remarkable
recent results, like Google's DeepMind, or NVIDIA's car called BB-8 running
DAVE-2: that was a car that learned driving end-to-end. No one programmed
detecting the street, when to brake, or how to steer; they just taught the
car driving, and it learned everything end-to-end. There are some pretty
funny videos on the internet where you can see that, at the beginning of
learning, it is not so good and makes some mistakes. There is also the
ImageNet competition, where you try to recognize what is in a picture, and
the latest algorithms, deep neural networks running on GPUs, are
outperforming humans. These results really
show that GPUs are ideal for running these deep neural networks. Then, very
briefly, how a typical workflow looks. You have lots of data, say pictures
marked with what they show. You feed that into a big computer; typically
that is cloud computing, and Microsoft, Amazon, and Google all have
services for that where you can really train the network. It needs a lot of
performance. NVIDIA also provides high-performance computers you can put on
your desk, like the DIGITS DevBox, but they are heavy computers that need a
lot of power; they are not something you would take mobile, and they are
also expensive. After you have trained it, you get a trained model, which
doesn't need that much performance anymore but still typically lives in the
cloud: for example, when your cell phone does voice recognition or
something like that, it normally transfers some data to the cloud, where
the trained model recognizes the picture or the voice and sends the result
back. But with the TK1, you can actually take that trained model and put it
on the TK1, so you don't need that connection to the cloud anymore. If you
are a UAV, if you are on the water, if you are a robot, or maybe a medical
device like an ultrasound machine, you may not have a reliable connection,
and now you can do everything on the device, which is pretty cool. I will
show you here a small application
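of that idea in a moment.

First, here is the train-in-the-cloud, run-on-device split in miniature (everything is made up for illustration: the weights, the JSON format, the sigmoid scorer; a real deployment would ship a trained network in its framework's format and run it with the GPU libraries):

```python
# The "cloud" side produces a small file of learned parameters; the
# device only needs that file plus cheap inference code -- no connection.
import json, math

trained = {"weights": [0.8, -0.3], "bias": 0.1}   # pretend training result
blob = json.dumps(trained)                         # what you ship to the TK1

# "device" side: load the model and run inference only
model = json.loads(blob)

def predict(features):
    z = sum(w * f for w, f in zip(model["weights"], features)) + model["bias"]
    return 1 / (1 + math.exp(-z))                  # sigmoid score in (0, 1)

score = predict([1.0, 2.0])
print(round(score, 3))   # 0.574
```

Now to the small application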
which was realized with deep learning. It's a traffic-sign detection demo
from our partner Antmicro. You can watch it on YouTube, or I can show it to
you here. You see the stereo CSI cameras connected to the carrier board;
our module is actually on the bottom, so you can't see it, but it is on the
rover. It's a demo we also had at Embedded World. You can see the camera
actually detect the signs, and the rover reacts: at 05 it drives slowly; at
10 it drives faster; if there are arrows, it steers. That was all done just
by showing it the signs; the neural network was trained and then
transferred to the TK1. The demo was provided by Antmicro, a long-time
partner that has done many hardware and software projects for our
customers. They can do custom carrier boards if you don't want to design
one yourself, and they provide software services such as OpenGL
optimization, machine vision, deep learning, and much more. So, if you
don't have the knowledge in-house but think this would be great for your
application, contact them and they can help you. They also do the Android
support for our modules, so they have Android for the TK1, including
support for higher-end cameras, and I can show you a short video of that
too. Here you can see our evaluation board with the TK1 module and the
camera connected, first over HDMI and then a CSI adapter, and you can see
Android running on the module with the camera; the quality is pretty nice.
Another partner, Sighthound, is also in
this deep learning space, with a focus on face detection. They have
software that can run in the cloud, so they start with that cloud model,
but you can also bring it to the device, and they are quite good at that,
so I will show you a demo from them. At the beginning you will see the
training, so it learns a face from all sides, with sunglasses and so on;
later it can really detect people, knows who is who, and also detects when
somebody does not belong. They also have a tool running on the TK1 demo
where you can log in via web browser, teach it your faces, and so on. If
you are interested in more about any of these partners, just get in touch
with us. The last one is also a partner
called Aerial Guard; we just started working with them. They also have a
solution for the TK1, mostly for drones and mobile robots, for obstacle
avoidance, so you don't fly into things. It can find the best path through
trees, and it also takes advantage of this learning. It has a stereo-vision
camera connected via USB 3.0, and I have a small video about that as well.
Here you see the setup, then the view from the drone, and then the drone
flying between trees: it really autonomously avoids flying into the trees
and finds its way between them, and that is just with a camera, not with
radar or any other expensive sensors. So, that was my part. Now it will get
more technical. I will hand the presentation over to Dominik, and he will
talk about how you can install the latest Toradex BSPs, the different
variants we have at the moment and what they are for, and how you really
get started once you have one of our modules. So, I will now hand over to
Dominik.

Hello everyone! My name is Dominik. I am going to show how to
prepare and flash two different images on the Apalis TK1. We will start
with our own Toradex BSP. There are basically two ways of getting it: you
can build it yourself, and we provide all the sources and instructions on
our developer website; or you can download BSP packages directly from our
servers. Our BSP is based on the Ångström distribution, built with Yocto.
The BSP archive that you can get from our website contains everything you
need to get the board started: U-Boot, the Linux kernel, the device tree,
the root file system, and all of the flashing scripts. When you get the
module, it is already flashed with the BSP, so you can start working right
away; you don't need to do this. You do it when you want to update to the
latest BSP we have released, or when you want to recover the board for some
reason. Currently, our BSP for the Apalis TK1 supports hardware video
decoding and encoding as well as graphics acceleration. Okay, so I am going
to start by downloading the BSP and unpacking it. The BSP right now is, as
you can see, around 150 MB, and that includes the entire image, the kernel,
and so on. After it is unpacked, you can create the update files. Possible
update paths for the Apalis TK1 are a TFTP network update, or copying the
update files to an SD card. This is consistent with all our other devices,
and you can actually create a single SD card that is able to update all of
our devices, with different versions of the software for different devices.
Okay, so now we have created the image, we have set up either TFTP or the
SD card, and we can move to the
module. We will stay on the serial console for a moment: this is the serial
console of our board during boot. If you want to update the board, you need
to interrupt autoboot, which drops you into the U-Boot shell, and then you
basically just issue a run setupdate command. This command automatically
detects which way you are trying to update the device: MMC, TFTP, or a USB
device. As you can see, it detected that I have a TFTP server and started
downloading the flashing scripts from it. After the flashing scripts are
downloaded, you can update everything on the board by running run update,
or you can update just the U-Boot, kernel, device tree, or file system
individually by running run update with the corresponding argument. You can
find more information on our developer website. So, I am updating the
entire module with run update. This automatically starts downloading and
flashing the image; the board will restart itself after it is done, and you
should boot right into the newly updated system. Okay! Now we will move to
installing NVIDIA JetPack. This is an Ubuntu-based root file
system that NVIDIA provides with all of the CUDA, VisionWorks, and
deep-learning stuff, including multiple demos and samples. We are looking
into a way to integrate all of the NVIDIA binaries into our BSP, but we are
not there yet. You will also notice that NVIDIA JetPack is much bigger than
our BSP, so if you are using a lot of storage space, that is something to
consider. To install NVIDIA JetPack, you do it on an Ubuntu host; you can
do it in a virtual machine, and that is actually what I am using for this
demo. I already downloaded JetPack, which is free from NVIDIA. You start by
running it; it will uncompress itself and present a dialog, and you go to
the next screen, where you need to verify the directory that you want to
use. It will download a lot of data, I think a little less than 20
gigabytes, so you need to have enough space on your host. It will then
prepare and give you options for what you want to install. Here you can see
all the options; I recommend starting with the standard install, which
includes most of the VisionWorks, CUDA, and sample packages, as well as the
Ubuntu image. So, basically I just click Next, accept the terms and
conditions, and it starts downloading and installing. I am going to
fast-forward; that is a lot of files to download, so it will take some time
for you. Okay, once the installation is complete, we proceed to the next
step: getting the files onto the device. Unfortunately, since the JetPack
installer is designed for the Jetson board from NVIDIA and our boards are a
bit different, we need to generate our
own images and flash them with the Toradex update scripts. So, I am going
to do that right now. You need to open a new console. I am going to pack
the root file system created by the JetPack installer so I can use it later
with our BSP. After that, you need our BSP; you can download it as before,
which is what I am going to do, and unpack it. After it is done unpacking,
we need to replace the existing root file system provided with the BSP with
the one the JetPack installer created. So, I am going to rename the
existing root file system and then unpack our JetPack root file system in
its place. Note that the size of the image has increased, from around 150
megabytes for our image to over 2 gigabytes for the NVIDIA image. We also
need to include the etc/issue file, because our update scripts use it to
determine which module the image is made for; so I am just setting Apalis
TK1 as the module, and now we can generate the update files. From now on,
the procedure is the same. As for the BSP,
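the flow is identical.

The root-file-system swap just described can be sketched as follows; this is a stand-in using placeholder files in a temporary directory (the real BSP layout, tarball names, and /etc/issue contents all differ):

```python
# Mimic the swap: set the stock BSP rootfs aside, unpack the
# JetPack-generated rootfs in its place, and keep an etc/issue file so
# the update scripts can identify the target module. Placeholder paths.
import os, shutil, tarfile, tempfile

work = tempfile.mkdtemp()
bsp = os.path.join(work, "bsp")

# stand-in for the unpacked Toradex BSP with its stock rootfs
os.makedirs(os.path.join(bsp, "rootfs", "etc"))
with open(os.path.join(bsp, "rootfs", "etc", "issue"), "w") as f:
    f.write("Apalis-TK1\n")

# stand-in for the rootfs tarball packed from the JetPack install
src = os.path.join(work, "jetpack-rootfs")
os.makedirs(os.path.join(src, "etc"))
with tarfile.open(os.path.join(work, "rootfs.tar"), "w") as t:
    t.add(src, arcname=".")

# the actual swap: rename the stock rootfs, unpack the new one in place
shutil.move(os.path.join(bsp, "rootfs"), os.path.join(bsp, "rootfs.orig"))
os.makedirs(os.path.join(bsp, "rootfs"))
with tarfile.open(os.path.join(work, "rootfs.tar")) as t:
    t.extractall(os.path.join(bsp, "rootfs"))

# restore the module identifier the update scripts rely on
shutil.copy(os.path.join(bsp, "rootfs.orig", "etc", "issue"),
            os.path.join(bsp, "rootfs", "etc", "issue"))
issue = open(os.path.join(bsp, "rootfs", "etc", "issue")).read().strip()
print(issue)   # Apalis-TK1
```

As with the BSP,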
you create the update files and use TFTP or an SD card; once you have
selected how you want to get them to the module, you run setupdate, and the
module once again recognizes that I am using TFTP and starts downloading
the update scripts. Once again, I am doing run update to update the whole
board. This time it will take a lot more time than with our BSP image, but
it will finish, and the SoC will once again reboot and start booting our
image. Yeah, it has reset and it is starting. After the first boot, I would
recommend resizing the file system to occupy the entire eMMC; this happens
automatically on our BSPs. Once you have done that, you can reboot the
board and we can start on the next part of the TK1 JetPack installation. We
need to go back to the JetPack
installer and click Next; it will start creating an environment for the
install, like setting up the network on the host for DHCP and creating
images for the update. After the installer is ready to flash images, it
will ask you to put the device into USB recovery mode. You must not do
that: those instructions apply only to the Jetson TK1 board, and we have
already flashed our device using our BSP scripts. So we basically press
Enter here and let our device boot into Ubuntu. The JetPack installer will
recognize that it is unable to connect; after trying a few times, we can
continue without any problems. As you can see, it says that it failed to
flash the device, and pressing Enter to try again will fail again; it will
then try to find our JetPack device, and since it cannot do that
automatically, you need to choose the manual option and enter the IP
address. After you have selected that, you get a prompt where you can input
the device address. I am checking it on the board right now; that is our
IP, and we know the credentials for Ubuntu: user name ubuntu and password
ubuntu. So, I am entering it here. After a while it connects to the board
over SSH and starts copying files and running updates to get all of the
necessary files onto the board. After installing all the packages you can
see in the list, it starts cross-compiling the GameWorks and CUDA samples,
which does take a lot of time, so be prepared for that. After the install
is complete, this is the Ubuntu user's home directory: you can see we have
the GameWorks OpenGL samples, the NVIDIA CUDA samples, and the VisionWorks
samples, and right now I am going to show the VisionWorks samples. You can
also see this on YouTube. Okay, this is the screen of our evaluation board
with the TK1 on it. That's the VisionWorks demo
showing car recognition using a single camera. We can switch between
different views, for example a close-up view for recognizing objects. You
can turn on fences, where it tries to recognize the boundaries you are not
allowed to cross because you would bump into other cars; you can see this
here. This demo is running on just two cores, and not fully utilizing them;
it is also using the video encoder and a few CUDA cores, so it is actually
not very taxing, and we have a lot of free CPU resources for much more
elaborate demo applications. The other demo that I would like to show is
the Apalis GPU demo; you can also find it on YouTube. It runs natively on
the TK1, as you can see, with all the translucency, reflections, and
different surfaces. It is very fluid, it runs at 240 frames per second, and
as you can see the UI is very responsive. Okay, and the next demo I have is
a camera demo: we have a Basler camera from one of our partners. It is a
USB 3.0 Full HD camera capable of 112 frames per second. So you can see
here, this camera is connected natively to the TK1, and it is a Full HD
image at 112 frames per second, which is effectively over 250 MB/s of
bandwidth through USB 3.0, up to the screen with the video recorder. And
that still leaves us over two unused ARM cores and a lot of CUDA to do
video recognition and other computer-vision stuff. The final setup for
today is
our full open-source demo; we will go through the setup. If you want to,
you can run the latest mainline kernel and the Nouveau open-source driver
on the TK1. It will give you hardware-accelerated graphics as well as a
hardware-accelerated desktop, whether Weston or Glamor. The demo is based
on the Arch Linux distribution. You can find all of the information on
NVIDIA's help page, and there is an article coming on our developer website
on how to get the mainline Linux kernel plus Nouveau running on the Apalis
TK1. Okay, so with the prerequisites installed, I am downloading NVIDIA's
repository bundle for the Tegra Nouveau root file system. Since the device
tree for the current Apalis TK1 module is already mainlined, you can
actually run this out of the box. Now, we need to export the cross-compile
environment variables for the scripts to work, and then we start
downloading the actual root file system. Next we run the download-gcc
script to fetch the appropriate cross-compilers that are required. After
the toolchain has downloaded successfully, you can prepare the file system:
it will be updated and all of the required packages will be installed.
After that, we need to modify the kernel build scripts, because our BSPs
use zImages; that is why you need to change the image type, and also
include the LOADADDR, which for our module is 0x80008000. We can now build
the kernel. As I said, the Apalis TK1 support is
already in my main kernel. There is our device reader and
so the UMI that is generated here and interested look in the image is all that in it. After the Linux the kernel is built successfully. You can move to the nouveau driver building. And then we need some extra packages like pthread DRM and
I’m also going to build kmscube to verify that 3D graphics is working correctly, and install Weston. There is also a set of scripts for building and running an X server if you really want to. Basically, this creates a root file system plus the kernel and device tree images that you can use again with our BSP for flashing. So, once Weston is built, we go to the out
directory and once again create a tar archive of our root file system. Then we download the BSP, unpack it, and basically replace its root file system with the Arch file system that we generated with the Nouveau installer. Now, there are a couple of extra steps compared to the JetPack case: we need to update the kernel image and the device tree, because we are not going to use the ones we supply with our BSP, but the mainline ones generated by the installer. So here I am creating some files and links that are required by our update script to generate a proper image for flashing. After this is ready, you can go ahead and add an /etc/issue file so that others can recognize our work, and once that is done we can generate the update images.
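As an illustrative sketch of this packaging step (the staging paths and the banner text are made up for the example; the installer's real output layout differs), it looks roughly like this:

```shell
set -e
# Hypothetical staging layout for the generated root filesystem.
mkdir -p demo/rootfs/etc demo/out
# Add an /etc/issue banner so the booted system is recognizable.
echo "Arch Linux on Apalis TK1 (mainline kernel + Nouveau)" > demo/rootfs/etc/issue
# Pack the root filesystem, preserving permissions, for the BSP update script.
tar -C demo/rootfs -cpf demo/out/rootfs.tar .
tar -tf demo/out/rootfs.tar
```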
The images can be deployed using the SD card or network TFTP methods: just go to the board, run the update, and after the reboot you will have a fully open source, Arch-based Linux running on our board with GPU acceleration, a smooth UI, and low CPU load for that UI. So yeah, that's it for this part. I showed you how to prepare and install three different images on our board: Ubuntu with JetPack, a fully open source Arch-based image with the mainline kernel, and our own BSP. We can now move to the question and answer section. I see here one question about the availability
and the price of the module. So, starting today, you can actually order the TK1 in our webshop. We expect to start shipping in about two weeks here in the US, and the price for single quantities is 219 US Dollars; at a volume of 1K per year it is 175 USD. You can actually see the prices for all the volume steps on our website, so we are quite transparent. Then I think we also have a question about the DSI display
interface. Maybe Dominik, you can answer that. Yeah, so we actually have HDMI, eDP, DSI up to quad channel, and single-lane LVDS available on the Tegra module, but they may not all be available simultaneously; you need to check the pin multiplexing options. Then there were other questions about heat, and whether you need a fan. We expect that most applications
that go into volume will be fanless; however, the module generates quite some heat if you fully push it, and if you buy a module we definitely recommend getting our heat sink. If you really plan to push it, and especially during evaluation when you don't want to worry about temperature, you can also add a fan; all our carrier boards have a three-pin motherboard connector for a fan. The module is new, so we expect some optimizations in power consumption, and in your final design there are also ways, for example, to connect the module's heat spreader directly to the case of the device to get rid of the heat. We will also update our Developer website with much more information about thermals, the use cases we have already tested, and tips for that. Okay, I see another question. The
question is, “Can the GPU be split between multiple applications?” With our BSPs you can run multiple OpenGL or CUDA applications. There will be some overhead, but it will be quite minimal. Okay! There is another question about whether we
plan to release Windows Embedded Compact for the TK1. No, we really don't at the moment, so this will actually be the first Toradex module that does not support Windows Embedded Compact. I mean, it's not impossible to get it running, of course. The problem is really that the very nice thing about the TK1 is the GPU, and Windows CE is really not ideal if you want to do GPU computing and things like that; you don't really have CUDA there. What we are actually discussing is whether to bring Windows 10 IoT Core to that module. So, if you have
any feedback or you feel you have a use case where you would like to
use Windows 10 IoT Core on a TK1, please get back to us. The image I showed with Arch, so the mainline kernel plus the Nouveau driver, can be used with Ubuntu as well, but that is an unofficial open source driver. Right now NVIDIA is not supporting Ubuntu 16.04, and we are based on NVIDIA's JetPack. Just to make that clear, we had a question, “Will you have support for Ubuntu 16.04 with OpenGL 4.0 support?” There was also a question, “Is there a specific setup required to make use of the unified GPU/CPU memory to transfer data between the two?” I would recommend using the JetPack, because it gives you the whole toolchain, CUDA and the computing libraries; but if you want pretty much plain Linux, you can use the Nouveau driver on our BSP. To get CUDA and all of the OpenCV for Tegra, you do need NVIDIA's system and all of their binaries. Okay! Thanks a lot for joining. We will follow up with an email where you can find the recording of the webinar, so you can rewatch parts of it, especially Dominik's section, if you couldn't follow along, and of course you will also find this information on how to install these different BSPs on our Developer website.
