Trying out Logitech’s Steering Wheel SDK

I know that many of you have a Logitech steering wheel (I have one too). That’s why I tried out the Logitech Steering Wheel SDK, calling the DLL directly from Python via the ctypes library. Unfortunately it does not work very well yet: the wheel is recognized, but I can’t read any state from it, like the current steering angle, or apply force feedback effects to it.
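
For the curious, the ctypes part boils down to something like this. It’s only a sketch: the DLL name and the exported functions (LogiSteeringInitialize, LogiUpdate, LogiGetStateENGINES) are what I take from the SDK documentation, so treat them as assumptions rather than working code.

```python
import ctypes

# The wrapper DLL ships with the SDK; file name and path are assumptions
# and depend on your installation.
sdk = ctypes.CDLL("LogitechSteeringWheelEnginesWrapper.dll")

# Initialize the SDK (the flag tells it to ignore XInput controllers).
if not sdk.LogiSteeringInitialize(ctypes.c_bool(True)):
    raise RuntimeError("Logitech SDK initialization failed")

# The SDK wants LogiUpdate() once per frame; reading states and playing
# force feedback effects only works between these calls.
while sdk.LogiUpdate():
    # Should return a pointer to a DIJOYSTATE2-like struct containing the
    # steering axis -- this is the call that gives me no usable data yet.
    state = sdk.LogiGetStateENGINES(0)  # device index 0
```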

My goal is to transfer the target steering angle directly to the wheel. I will definitely continue trying.

Call python functions via WebSocket

It’s holiday season and I have some spare time to continue Autopilot development.

While implementing the chain element for the vJoy controller device I noticed that I need some sort of back channel to push data from the Python modules to the Web UI. One use case is the configuration of the viewport: currently there are four input fields and you have to guess the right coordinates. With the so-called web functions I can request the screen image and draw a rectangle to visualize the configured viewport as you change the values.
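
The rough shape of such a web function could look like this. A minimal sketch assuming the websockets package; the function name get_screen_image and the message format are made up for illustration:

```python
import asyncio
import base64
import json

import websockets

# Registry of "web functions" the UI may call by name.
WEB_FUNCTIONS = {}

def web_function(func):
    """Decorator that exposes a Python function to the Web UI."""
    WEB_FUNCTIONS[func.__name__] = func
    return func

@web_function
def get_screen_image():
    # Hypothetical helper: return the latest capture as a base64 string.
    with open("last_capture.jpg", "rb") as f:
        return base64.b64encode(f.read()).decode("ascii")

async def handler(websocket):
    # The UI sends JSON like {"call": "get_screen_image", "args": []}.
    async for message in websocket:
        request = json.loads(message)
        func = WEB_FUNCTIONS[request["call"]]
        result = func(*request.get("args", []))
        await websocket.send(json.dumps({"result": result}))

async def main():
    async with websockets.serve(handler, "localhost", 8765):
        await asyncio.Future()  # run forever

asyncio.run(main())
```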

We’ll see how useful (or broken by design) they turn out to be. Stay tuned.

What the Autopilot sees

Good news! I fiddled around and got the data from the processing chain to the browser. For testing purposes I pass the captured screen and the region of interest to the websocket. Both images are encoded as base64 strings. The browser receives these strings and updates the src attributes of the <img /> tags.

captured screen (top) and region of interest (bottom)
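
On the Python side the encoding boils down to a few lines. A sketch assuming OpenCV frames; the JPEG quality value is an arbitrary choice:

```python
import base64

import cv2

def frame_to_base64(frame, quality=70):
    """Encode a BGR frame as a base64 JPEG string for the <img /> tag."""
    ok, buffer = cv2.imencode(".jpg", frame, [cv2.IMWRITE_JPEG_QUALITY, quality])
    if not ok:
        raise ValueError("JPEG encoding failed")
    return base64.b64encode(buffer).decode("ascii")

# The browser side then simply does:
#   img.src = "data:image/jpeg;base64," + receivedString;
```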

Because the data is sent to the whole websocket connection pool you can open a second browser window and see the same two images simultaneously. Pretty cool.

I still have to do some performance testing. The data transferred to the browser is quite big. But it works for now.

Connection pool

Currently I’m figuring out how to send data from the processing chain to the UI (the browser). The problem is that the processing chain runs in a separate thread. I need some kind of message queue.
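
My current plan is a plain thread-safe queue between the chain thread and the websocket loop. A sketch only, with a placeholder instead of the real chain:

```python
import queue
import threading
import time

# The processing chain (producer thread) puts its results in here; the
# websocket side (consumer) drains the queue from its own loop.
ui_queue = queue.Queue(maxsize=10)

def run_chain_once():
    # Placeholder for one pass through the real processing chain.
    time.sleep(1 / 30)
    return {"steering_angle": 0.0}

def chain_worker():
    while True:
        result = run_chain_once()
        try:
            ui_queue.put_nowait(result)
        except queue.Full:
            pass  # the UI is slower than the chain: drop this frame

threading.Thread(target=chain_worker, daemon=True).start()
# The websocket loop pulls with ui_queue.get() and publishes to the pool.
```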

In the meantime I’ve implemented a connection pool. A connection pool is essentially a list that holds every open websocket connection. With it I can publish information to all browsers that are viewing the UI. This is important because you could open the UI from different devices (e.g. use your tablet as a dashboard and the PC’s browser for configuration). Every instance should see the same information at the same time.
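
In code the pool is not much more than a set plus a publish function. Another sketch based on the websockets package; the details are still in flux:

```python
import json

import websockets

pool = set()  # every open websocket connection ends up in here

async def handler(websocket):
    pool.add(websocket)
    try:
        await websocket.wait_closed()
    finally:
        pool.discard(websocket)

async def publish(payload):
    """Send the same message to every connected browser."""
    message = json.dumps(payload)
    for connection in list(pool):
        try:
            await connection.send(message)
        except websockets.ConnectionClosed:
            pool.discard(connection)
```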

The next step – besides the thread issue above – is to send the current captured image to the browser.

Calibration

Oh hi! How was your last week? I mean since the last post? Remember? One post every week?

I have to admit that I’ve lost focus. I started another project, but now I want to get back to working on the autopilot. Especially now that the days are getting shorter and colder. That means more time for ETS2.

The first idea that came to my mind was the calibration process. Unlike in the autopilot systems of regular cars, the “camera” here is not centered. I’m sure you want to use the autopilot with your favourite viewing angle. Therefore I will implement a routine that figures out the offset to the real vehicle center.

The computer detects the lane and figures out its center while you’re driving normally. With every frame the orientation point (let’s call it the “virtual center”) is shifted towards your actual viewing position. Once this process is done, the autopilot uses the virtual center as its reference and steers a bit more to the left or right accordingly.
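
In other words, the virtual center is a running average that slowly converges towards your camera offset. Roughly like this, with a made-up smoothing factor:

```python
def update_virtual_center(virtual_center, detected_center, alpha=0.02):
    """Shift the virtual center a small step towards the detected lane center.

    alpha controls the convergence speed; 0.02 means roughly 2% per frame.
    """
    return virtual_center + alpha * (detected_center - virtual_center)

# During calibration, call this once per frame while driving normally:
virtual_center = 0.5  # start at the image center (normalized x coordinate)
for detected in (0.61, 0.60, 0.62):  # detected lane centers, frame by frame
    virtual_center = update_virtual_center(virtual_center, detected)
```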

Processing Chain

I designed the new program to reuse some parts of the image processing. My goal is to make the autopilot platform independent, so you can run it on Windows, Linux or Mac. Therefore I came up with the idea of a processing chain. It looks as follows:

  • Capture image
  • Do some color conversion
  • Select the region of interest
  • Detect lanes and steering angle
  • Write steering angle to virtual controller

Some chain elements are usable on all platforms, like the color conversion, the lane detection, etc. But the image capturing libraries are not available on every platform. With this architecture I only have to swap out the chain element for image capturing.

You can also change the lane detection method and tweak it a bit without touching the rest of the code. The result of one chain element is passed as input to the next chain member.
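
To make that more concrete, here is a stripped-down sketch of the chain idea. The element classes are dummies for illustration; the real capture element would wrap a platform-specific library:

```python
class ChainElement:
    """Every element consumes the output of its predecessor."""

    def process(self, data):
        raise NotImplementedError

class DummyCapture(ChainElement):
    """Stand-in for the platform-specific capture element.

    On Windows this would wrap a Windows capture library, on Linux an
    X11-based one -- only this class has to be swapped per platform.
    """

    def process(self, data):
        return [[0, 0, 0, 0], [0, 255, 0, 0], [0, 0, 0, 0]]  # fake frame

class RegionOfInterest(ChainElement):
    def __init__(self, top, bottom):
        self.top, self.bottom = top, bottom

    def process(self, image):
        return image[self.top:self.bottom]

def run_chain(elements):
    data = None
    for element in elements:
        data = element.process(data)
    return data

chain = [DummyCapture(), RegionOfInterest(1, 2)]
print(run_chain(chain))  # -> [[0, 255, 0, 0]]
```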

Currently I am working on the first chain elements: image capturing and image pre-processing.

Settings Frontend (WIP)

I added Materialize CSS to the project and created a base template. At the moment it only loads the static files and adds a top menu.

Settings page

The more exciting part is the settings page itself. The screenshot above shows the dynamically rendered settings page: all input fields and headings are created in Python code, and the result is then passed to the template.
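
Roughly, the Python side builds a list of field descriptions and the template just loops over them. A simplified sketch assuming a Jinja2-style template; the real widget classes look different:

```python
from jinja2 import Template

# Each settings entry is described by a widget dict like these.
fields = [
    {"label": "Viewport X", "name": "viewport_x", "value": 0},
    {"label": "Viewport Y", "name": "viewport_y", "value": 0},
]

template = Template("""
{% for field in fields %}
<div class="input-field">
  <input id="{{ field.name }}" type="number" value="{{ field.value }}">
  <label for="{{ field.name }}">{{ field.label }}</label>
</div>
{% endfor %}
""")

print(template.render(fields=fields))
```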

The input fields are widgets attached to settings entries. I will explain the settingstree module in one of the next posts. There’s some crazy stuff going on in there.

For now the input fields have no functionality. The next step is to make them savable.

Finished settings backend

Did I say “update once per week”? I mean: what’s the difference between one week and two? 😀

I finally implemented the settingstree module and also a way to allow module-specific settings. I will draw you a nice diagram of how it works once I’ve tested everything.

The settings frontend is my next goal. I’m a bit curious how it will turn out.

Short Update

I want to give you some updates on the development progress at least once per week. This is such an update.

Currently I’m working on the settings module. This part isn’t as trivial as I thought it would be. After some brainstorming I decided to implement a crazy tree structure for the app settings. Every module then handles its own settings, which simplifies future development. It doesn’t make sense to store all settings constants in one global settings module, because every time you add a new app module or change an existing one, you would have to edit that settings module too. This is not good. That’s why I’m decoupling the individual app settings from the main settings module.
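
To give you an idea of that tree structure, here is a minimal sketch (the real settingstree module is more involved):

```python
class SettingsNode:
    """One node in the settings tree: a value or a subtree of children."""

    def __init__(self, key, value=None):
        self.key = key
        self.value = value
        self.children = {}

    def add_child(self, node):
        self.children[node.key] = node
        return node

# The root belongs to the app; every module attaches its own subtree, so
# adding or changing a module never touches a global settings file.
root = SettingsNode("autopilot")
capture = root.add_child(SettingsNode("capture"))
capture.add_child(SettingsNode("viewport_x", 0))
capture.add_child(SettingsNode("viewport_y", 0))
```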