

I bought an old IBM Thinkpad 240 (from the year 1999). The battery was almost dead and lasted only 3 seconds.
I am planning on replacing the cells, and want to document everything here.
FRU: 02K6606
Battery Controller: bq2040
Battery EEPROM: 24C01
The bq**** controllers and 24C01 EEPROM are a commonly seen combination in older IBM Thinkpads; an X30 battery I disassembled had a similar controller (bq9011DBT) and a 24C01 EEPROM as well.
The battery cells (3x, wired in series) are Li-ion, rectangular type, labeled with DG2PB6.
Battery Cell dimensions:
From the look and dimensions, it seems that UF103450P is a suitable replacement type.
If you are doing this yourself: under no circumstances(!!) disconnect the battery control PCB from the cells. Do not allow the controller to lose power, otherwise it will lock you out, even if you install new cells. The pack will then only discharge once and will not charge again without reprogramming the EEPROM.
I intend to do a “hot swap”: solder the new cells on while the old ones are still connected, and remove the old cells afterwards.
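As a first sanity check after the swap, it should be possible to talk to the bq2040 over SMBus, since it implements the standard Smart Battery (SBS) command set. Below is a minimal, untested sketch (Arduino-style C++) of how such a read-out could look; the standard smart-battery address 0x0B and the usual SBS command codes are assumptions on my part, and error handling is omitted.

```cpp
#include <Wire.h>

// Standard Smart Battery (SBS) 7-bit address; the bq2040 normally responds here (assumption).
constexpr uint8_t kBatteryAddr = 0x0B;

// SBS command codes from the Smart Battery Data Specification.
constexpr uint8_t kCmdVoltage            = 0x09;  // mV
constexpr uint8_t kCmdRelativeSoc        = 0x0D;  // percent
constexpr uint8_t kCmdFullChargeCapacity = 0x10;  // mAh

// SMBus "Read Word": write the command byte, repeated start, then read two bytes (LSB first).
uint16_t readWord(uint8_t cmd) {
  Wire.beginTransmission(kBatteryAddr);
  Wire.write(cmd);
  Wire.endTransmission(false);          // repeated start, no stop condition
  Wire.requestFrom(kBatteryAddr, (uint8_t)2);
  uint16_t lo = Wire.read();
  uint16_t hi = Wire.read();
  return (hi << 8) | lo;
}

void setup() {
  Serial.begin(115200);
  Wire.begin();
  Serial.print("Voltage [mV]: ");           Serial.println(readWord(kCmdVoltage));
  Serial.print("Relative SoC [%]: ");       Serial.println(readWord(kCmdRelativeSoc));
  Serial.print("Full charge cap. [mAh]: "); Serial.println(readWord(kCmdFullChargeCapacity));
}

void loop() {}
```

If the controller still answers with plausible voltage and capacity values after the hot swap, the lock-out described above has most likely been avoided.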
I would like to make the Nissan Sunny Rz1 Digital Cluster Adapter open-source and allow DIYers to build the project themselves.
To build the adapter yourself, you will need to:
First, you need to have the PCB manufactured. You can download the schematics and the layout file below.
It is probably best to order it from a company that also does SMD assembly, to reduce the effort.
The firmware that you need to flash can be found here: https://github.com/Danaozhong/Nissan-Sunny-B12-Rz1-Digital-Cluster-Conversion-ECU-Firmware
You will need to clone the repository, build the firmware, and flash it to the STM32 on the PCB.
After assembly and flashing, you should be able to see a command prompt via the UART.
To build the adapter harness, you will need the following parts:
The instructions on how to build the adapter harness can be found in the Excel document below.
I was lucky enough to get hold of an original press information kit for the Honda Insight from the year 2000. Here I would like to make the scans and the contents of the CD available to all enthusiasts.
I am pleased to announce that for the Nissan Rz1 digital cluster conversion kit project, all development and testing steps are complete. The product will be launched in April 2021.
To put it simply, this kit allows you to install the JDM digital cluster in your US/European Nissan Rz1 without cutting any cables or complex rewiring. No hard-to-find special speed sensor or fuel sensor is required.
For DIY enthusiasts, all technical documents, PCB layouts, part lists and 3D printing components will be made available soon. The firmware of the conversion ECU is already open source.
It will contain the following parts:
As you can see from the pictures, it is a complete plug & play solution, which can be installed without having to cut connectors or rewire your vehicle's cable loom. The adapter can be fully removed and the factory cluster reinstalled.
Basically, the only wiring step necessary is to connect battery supply, ground, and accessories.
The kit will only work on specific left-hand-drive Nissan Rz1 models.
After driving my Nissan Sunny B12 Coupés for a while, I felt I needed something more environmentally friendly, and a bit safer. Through my work, I had the opportunity to drive the latest-model Toyota Prius as a test car. Most car enthusiasts would consider it boring. I, however, loved the technical details this car had to offer. Driven properly, you could easily reduce your fuel consumption to 3.3 l/100 km. An impressive value for a sedan!
Hence, I decided I needed a similarly fuel-efficient vehicle myself. My search started right at spritmonitor.de, a German platform where users document their fuel consumption records. The website allows filtering the records by the most fuel-efficient vehicles, and the first entry in the list surprised me. It was not a Toyota Prius, as I would have thought. It was a car that had so far completely evaded my attention: a Honda Insight.
The record showed a fuel consumption of less than 3 l/100 km, a value I had considered almost unobtainable without a plug-in capability. Even stranger, it was a car built in 2000! How can such an old car achieve such numbers? Read on.
Doing my research, I learned that in the early 2000s, under pressure from looming US environmental legislation, Honda built a car with the goal of reducing emissions as much as possible. That car was not designed by the marketing department. It was a product made by engineers, without compromises or cutting corners to reduce costs. Just looking at it in detail, you can feel how the hard-working Japanese engineers of the late '90s did everything possible to build a revolutionary car. Below is a list of features that the little Insight has to offer:
You really have to give Honda credit for investing so much R&D effort into building this car. I mean, how was this even possible back in the late '90s? People were still struggling with Windows 95 bluescreens every day, and, unlike today, electric drivetrains were neither famous nor popular.
Long story short, these technical details fully convinced me, and I had to get one of these fascinating two-seaters. Unfortunately, Honda did not sell many of them in the old country, especially not in Germany, where back in the early 2000s clean diesel engines were regarded as the future of the automotive industry. Only about 100 of these little cars made their way to customers here.
Now, I have to say that an ecological lifestyle is very important to me. I value it much more when someone fixes something that breaks, instead of just buying new. I apply the same principle to cars, and tend to buy the kind of cars that other people would simply scrap, and then invest tons of effort to bring them back to life.
That’s what happened here as well. It looked bad.
Very bad.
No one in their right mind would have brought that car back to life.
My apologies for the pictures. As a student, I only had a very basic smartphone with a pretty bad camera.
How did I bring that car back on the road? You’ll read this in part two of this article (coming up)!
keywords: 3D geometry; triangle selection; vertex selection;
When developing 3D software, you might want to allow the user to select specific polygons or vertices of your 3D model. Different parts of the geometry, such as corner points (vertices), triangles, or surface normals, must be selectable just by clicking on them. The complicated part is not only calculating which geometry entity is currently under the mouse cursor, but also doing it fast enough. It must be possible to find the selected object in real time to allow hovering: if the mouse is moved over the object, the software should give visual feedback to the user by highlighting the currently hovered entity.
With the method presented here, you can implement a selection algorithm for the following 3D geometry entities:
One possibility to solve this is to calculate the solution manually using mathematics: take the current mouse cursor position, unproject it into 3D space using the inverse of the projection matrix, and calculate the first intersection with a geometric entity. However, this method is computationally intensive, slow, and cumbersome.
Luckily, it is not necessary to reinvent the wheel, as existing technology can be used for this purpose. OpenGL supports rendering into back buffers, which means the results are not visible to the user (off-screen rendering). To solve the selection problem, the current view of the object is rendered into a back buffer using a special color-coding method. In this selection rendering mode, lighting and geometry shading are disabled, and the colors of the objects are defined by the programmer: selectable objects are drawn in unique colors, while the background and non-selectable geometry are rendered in black. To find the geometric entity currently under the cursor, a small part of the back buffer around the mouse cursor position is copied, and the closest non-black color value is searched. The correspondence between the color code and the information about the selected element is kept in a lookup table. With this method, a hardware-accelerated selection mechanism can be implemented easily. The picture above shows an image of this off-screen selection rendering.

This method is sufficient for most geometric entities; however, some minor problems must be tackled. If a vertex (a corner point of a triangle) is to be selected, it must be ensured that all vertices which are not visible due to occlusion, such as points on the back side of the object, are hidden. This is done by not only rendering the vertices with their unique color values, but also rendering the whole geometry into the back buffer in black. The depth test of the video card's depth buffer will then automatically discard hidden points, making their selection impossible.
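A rough sketch of such a picking pass is shown below (plain OpenGL and C++). The functions drawGeometryFlat(), drawEntityFlat() and selectableEntities() are hypothetical placeholders for the application's own draw code. Every selectable entity gets a 24-bit ID encoded into its RGB color, and a small window around the cursor is read back and searched for the closest non-black pixel.

```cpp
// Sketch of color-coded picking. Assumes an off-screen framebuffer is bound
// and a flat, unlit shader is active.
#include <GL/gl.h>
#include <climits>
#include <cstdint>
#include <vector>

struct Entity { uint32_t id; /* plus mesh data */ };
const std::vector<Entity>& selectableEntities();                  // provided by the application (hypothetical)
void drawGeometryFlat(float r, float g, float b);                 // draws the whole model unlit (hypothetical)
void drawEntityFlat(const Entity& e, float r, float g, float b);  // draws one entity unlit (hypothetical)

// Encode a 24-bit entity ID into an RGB color (ID 0 is reserved for "nothing").
void idToColor(uint32_t id, float rgb[3]) {
    rgb[0] = ((id >> 16) & 0xFF) / 255.0f;
    rgb[1] = ((id >>  8) & 0xFF) / 255.0f;
    rgb[2] = ( id        & 0xFF) / 255.0f;
}

uint32_t colorToId(const unsigned char rgb[3]) {
    return (uint32_t(rgb[0]) << 16) | (uint32_t(rgb[1]) << 8) | uint32_t(rgb[2]);
}

// Returns the ID of the entity closest to the cursor, or 0 if only background was hit.
uint32_t pick(int mouseX, int mouseY, int viewportHeight) {
    glClearColor(0.0f, 0.0f, 0.0f, 1.0f);               // black = "not selectable"
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    // Draw the full geometry in black first, so the depth test hides
    // entities on the back side of the object.
    drawGeometryFlat(0.0f, 0.0f, 0.0f);

    // Draw every selectable entity with its unique ID color.
    for (const auto& entity : selectableEntities()) {
        float rgb[3];
        idToColor(entity.id, rgb);
        drawEntityFlat(entity, rgb[0], rgb[1], rgb[2]);
    }

    // Read back a small square around the cursor (OpenGL's origin is bottom-left;
    // clamping at the viewport edges is omitted for brevity).
    constexpr int kRadius = 3;
    constexpr int kSize   = 2 * kRadius + 1;
    unsigned char pixels[kSize * kSize * 3] = {0};
    glPixelStorei(GL_PACK_ALIGNMENT, 1);                // rows are not 4-byte aligned
    glReadPixels(mouseX - kRadius, viewportHeight - mouseY - 1 - kRadius,
                 kSize, kSize, GL_RGB, GL_UNSIGNED_BYTE, pixels);

    // Find the non-black pixel closest to the cursor and decode its ID.
    uint32_t bestId   = 0;
    int      bestDist = INT_MAX;
    for (int y = 0; y < kSize; ++y) {
        for (int x = 0; x < kSize; ++x) {
            uint32_t id = colorToId(&pixels[(y * kSize + x) * 3]);
            int dx = x - kRadius, dy = y - kRadius;
            if (id != 0 && dx * dx + dy * dy < bestDist) {
                bestDist = dx * dx + dy * dy;
                bestId   = id;
            }
        }
    }
    return bestId;
}
```

The returned ID is then looked up in the table mentioned above to find out which vertex, triangle, or normal it belongs to.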
Another problem is finding the exact surface point on the geometry. In this selection mode, it is not the vertices of the triangle that are to be selected; instead, the exact coordinates currently hovered by the mouse need to be known. While this case is less relevant for geometries with a dense triangle mesh, it becomes more important if the mesh is very sparse and consists of only a few triangles. A large cube is an example of such a geometry: due to the planarity of its surfaces, only very few triangles are necessary to represent it. To calculate the exact coordinates of the surface point currently under the mouse, the color-keying method cannot be used. Here, the solution is to render the geometry into the back buffer and then read the value of the video card's depth buffer (z-buffer) at the current mouse cursor position. This depth value can then be used to unproject the current surface point, by inverse-multiplying the projection matrix and the extrinsic transformation, to obtain the coordinates of this point in the CAD model frame.
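A sketch of this depth-based approach, using glm for the matrix math, could look as follows; the view/projection matrices and the viewport are assumed to come from the application's camera setup.

```cpp
#include <GL/gl.h>
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>   // glm::unProject

// Returns the 3D surface point under the cursor in model coordinates.
// 'view' and 'projection' must be the matrices used to render the geometry,
// 'viewport' is (x, y, width, height). Returns false if the background was hit.
bool surfacePointUnderCursor(int mouseX, int mouseY,
                             const glm::mat4& view,
                             const glm::mat4& projection,
                             const glm::ivec4& viewport,
                             glm::vec3& result)
{
    // Read the single depth value under the cursor from the depth buffer.
    // OpenGL's window origin is bottom-left, so the y coordinate is flipped.
    float depth = 1.0f;
    const int winX = mouseX;
    const int winY = viewport[3] - mouseY - 1;
    glReadPixels(winX, winY, 1, 1, GL_DEPTH_COMPONENT, GL_FLOAT, &depth);

    // A depth of 1.0 corresponds to the far plane, i.e. nothing was hit there.
    if (depth >= 1.0f) {
        return false;
    }

    // Unproject window coordinates (x, y, depth) back into model space by
    // inverting the projection and the extrinsic (view) transformation.
    const glm::vec3 window(float(winX), float(winY), depth);
    result = glm::unProject(window, view, projection, glm::vec4(viewport));
    return true;
}
```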
A while ago I hooked a CAN logger up to a friend's CR-Z and reverse-engineered it a bit. Hope it helps with your projects 😉 The signals I found are listed in the two tables below; a small decoding sketch follows them.
Vehicle CAN
| Message ID | Bit | Length (bits) | Description |
|---|---|---|---|
| 0x13F | 8 | 16 | Throttle position (raw) |
| 0x13F | 40 | 32 | Throttle position counter |
| 0x164 | 0 | 8 | Headlight switch |
| 0x164 | 48 | 16 | Vehicle speed |
| 0x17C | 24 | 16 | Engine RPM |
| 0x191 | 1 | 1 | Transmission reverse switch |
| 0x191 | 2 | 1 | Transmission neutral switch |
| 0x374 | 32 | 8 | Trunk open |
| 0x136 | 8 | 8 | VTEC switch |
| 0x136 | 0 | 8 | Fuel cutoff |
IMA CAN
| Message ID | Bit | Length (bits) | Description |
|---|---|---|---|
| 0x111 | 8 | 16 | IMA motor RPM |
| 0x169 | 8 | 16 | IMA motor current (mA) |
| 0x231 | 24 | 16 | IMA state of charge (SOC) |
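To turn these tables into something usable, the raw signals have to be cut out of the 8-byte CAN payload. The sketch below assumes MSB-first bit numbering and big-endian multi-byte signals; double-check this against your own recordings, and note that scaling factors are not included.

```cpp
#include <cstdint>
#include <cstdio>

// Extract a big-endian unsigned signal from an 8-byte CAN payload.
// 'startBit' is the bit offset from the start of the frame (MSB first),
// 'length' is the signal length in bits, as listed in the tables above.
uint32_t extractSignal(const uint8_t data[8], unsigned startBit, unsigned length) {
    uint32_t value = 0;
    for (unsigned i = 0; i < length; ++i) {
        unsigned bit  = startBit + i;
        unsigned byte = bit / 8;
        unsigned mask = 0x80u >> (bit % 8);        // MSB-first within each byte
        value = (value << 1) | ((data[byte] & mask) ? 1u : 0u);
    }
    return value;
}

int main() {
    // Example: a made-up frame with ID 0x164 -- vehicle speed sits at bit 48,
    // length 16, according to the table above. Scaling is not covered here.
    const uint8_t frame0x164[8] = {0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x50};
    std::printf("raw vehicle speed: %u\n", extractSignal(frame0x164, 48, 16));

    // Single-bit flags such as the reverse switch (ID 0x191, bit 1, length 1)
    // can be read the same way.
    const uint8_t frame0x191[8] = {0x40, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00};
    std::printf("reverse switch: %u\n", extractSignal(frame0x191, 1, 1));
    return 0;
}
```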
Over the last two days, I ran into a very interesting problem that I thought was worth sharing. I am developing a project on an STM32 microcontroller. My toolchain uses Eclipse, which uses GDB, which in turn controls OpenOCD, which does all the low-level work so that I can flash and debug. That worked very well for a long time. Until yesterday.
What did I do? I added a new class to my project, which at some point would be dynamically allocated. I kept all the method bodies empty, so nothing should have happened. But after flashing the code to the target, the debugger crashed with the message:
Can not find free FPB Comparator!
can’t add breakpoint: resource not available
This error message was already familiar to me – it usually happens when you have too many breakpoints listed in Eclipse: when you start the debug session, it automatically adds all of them, and if you have listed more breakpoints than the MCU supports (six), this message appears. But here, my breakpoint list was completely empty.
Strange! I had no idea what was going on, so my first trial-and-error approach was to reduce the flash and RAM footprint of my application. But even after implementing some optimizations here and there, the problem remained. The funny thing was that if I commented out that class, debugging worked again! But once my class was back, even with empty function bodies, the debugger stopped working!
So I decided to have a closer look. Using the telnet interface of OpenOCD, I was able to halt and continue the MCU, so debugging in general was technically still working. Reading from and writing to memory also worked.
After reading a lot about the FPB, I understood a bit better how this works: the Flash Patch and Breakpoint (FPB) unit is a set of registers in the ARM MCU, which consists of several comparators – one for each hardware breakpoint. Each comparator basically stores one program address (FP_COMPx). If the currently executed address matches what is written in this register, execution is stopped and you can debug.
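For illustration, the FPB register block can also be described directly in code. The sketch below is based on the ARMv7-M register layout (FP_CTRL, FP_REMAP, then the comparators) and simply dumps which comparators are in use; printf is assumed to be retargeted to a UART, and none of this is required for the telnet approach described next.

```cpp
#include <cstdint>
#include <cstdio>

// Flash Patch and Breakpoint (FPB) unit, memory-mapped on Cortex-M3/M4:
// 0xE0002000: FP_CTRL, 0xE0002004: FP_REMAP, 0xE0002008...: FP_COMPx
struct FpbRegisters {
    volatile uint32_t FP_CTRL;
    volatile uint32_t FP_REMAP;
    volatile uint32_t FP_COMP[8];
};

static auto* const FPB = reinterpret_cast<FpbRegisters*>(0xE0002000u);

// Print how many code (instruction) comparators exist and which ones are in use.
void dumpFpb() {
    const uint32_t ctrl = FPB->FP_CTRL;
    const uint32_t numCode = ((ctrl >> 4) & 0x0Fu)   // NUM_CODE[3:0], FP_CTRL bits [7:4]
                           | ((ctrl >> 8) & 0x70u);  // NUM_CODE[6:4], FP_CTRL bits [14:12]
    std::printf("FP_CTRL = 0x%08lx, %lu code comparators\n",
                (unsigned long)ctrl, (unsigned long)numCode);
    for (uint32_t i = 0; i < numCode && i < 8; ++i) {
        const uint32_t comp = FPB->FP_COMP[i];
        std::printf("FP_COMP%lu = 0x%08lx (%s)\n",
                    (unsigned long)i, (unsigned long)comp,
                    (comp & 1u) ? "enabled" : "free");   // bit 0 is the enable flag
    }
}
```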
So I decided to have a look at the hardware registers themselves. Since I couldn't use Eclipse for debugging anymore, I had to use telnet. If you have an OpenOCD server running, it listens on port 4444 for a telnet connection. Via this connection, I was able to read out the FPB registers, located at address 0xE0002000:
mdw 0xe0002000 20
0xe0002000: 00000261 20000000 48001239 4800164d 480019b1 480021a1 48005d0d 48006851
0xe0002020: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
0xe0002040: 00000000 00000000 00000000 00000000
Starting from the third register (0x48001239), there is a list of six registers filled with addresses. That explains the error we see from OpenOCD; the question now is: who is writing to these registers?
A lot of further research revealed that it is possible to run OpenOCD in debug mode by passing the parameter “-d3” to it. This even works in Eclipse:
With this additional debug output, I could actually see what secretly happens after flashing. I saw about six blocks like this one:
Debug: 17205 11732 gdb_server.c:3153 gdb_input_inner(): received packet: ‘m8001238,2’
Debug: 17206 11732 gdb_server.c:1438 gdb_read_memory_packet(): addr: 0x0000000008001238, len: 0x00000002
Debug: 17207 11732 target.c:2210 target_read_buffer(): reading buffer of 2 byte at 0x08001238
Debug: 17208 11732 hla_target.c:777 adapter_read_memory(): adapter_read_memory 0x08001238 2 1
This is the part where the breakpoints were set, and I could see that they were explicitly requested by someone! But why? And why on earth would Eclipse set so many breakpoints? I decided to check the addresses in the linker file.
The locations usually looked something like this:
All of these blocks contained a function whose name included the string “main”, and it was the same with my recently added C++ class, which also had a member function named “main”. That was when I understood: Eclipse automatically sets a breakpoint at main() after startup, and for some reason it also sets this breakpoint on member functions named main. Just because I added one more class with a “main” function, the number of available hardware breakpoints was exceeded, and the debugger wouldn't work anymore.
You could debate whether it is good style to have functions named “main” in your code. For me, it was OK, because they are not only class members but also live in their own namespace, so their scope should be restricted. Turns out this was not always the case.
So if you encounter this problem, make sure to reduce the number of functions named “main” in your system!