The last two days, I had a very interesting problem that I thought was worth sharing. I am developing a project on an STM32 microcontroller. My toolchain uses Eclipse, which uses GDB, which in turn controls OpenOCD, which does all the low-level work so that I can flash and debug. That worked very well for a long time. Until yesterday.
What did I do? I added a new class to my project, which at some point would be dynamically allocated. I left all the method bodies empty, so nothing should happen. But after flashing the code to the target, the debugger crashed with the message:
Can not find free FPB Comparator! can’t add breakpoint: resource not available
This error message was already familiar to me – it usually appears when you have too many breakpoints listed in Eclipse: when you start a debug session, Eclipse automatically sets all of them, and if you have listed more breakpoints than the MCU supports (six), this message appears. But here, my breakpoint list was completely empty.
Strange! I had no idea what was going on, so my first trial-and-error approach was to reduce the flash and RAM footprint of my app, but even after implementing some optimizations here and there, the problem remained. The funny thing was that if I commented out that class, debugging worked again! But once my class was back, even with empty function bodies, the debugger stopped working!
So I decided to have a closer look. Using the telnet interface of OpenOCD, I was able to halt and continue the MCU, so debugging in general was technically still working. Reading and writing from and to memory was also working.
After reading a lot about the FPB, I understood a bit better how this works: the Flash Patch and Breakpoint (FPB) unit is a set of registers in the ARM core, which consists of several comparators – one for each breakpoint. Each comparator basically stores one program address (FP_COMPx). If the currently executed address matches what is written in this register, execution is halted and you can debug.
So I decided to have a look at the hardware registers themselves. Since I couldn’t use Eclipse for debugging anymore, I had to use telnet. If you have an OpenOCD server running, it listens on port 4444 for telnet connections. Via this connection, I was able to read out the memory of the FPB registers, located at address 0xE0002000:
Starting from the third register (0x48001239), there is a list of six registers filled with addresses. That explains the error we see from OpenOCD; the question now is: who is writing to these registers?
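As a side note, these values can be decoded by hand. On Cortex-M3/M4 parts (FPB revision 1), each FP_COMPx word packs the breakpoint information as follows: bit 0 is the enable flag, bits [28:2] hold the compared address, and bits [31:30] select which halfword triggers the breakpoint. A little decoding sketch of mine (not part of OpenOCD):

```python
# Decode a Cortex-M3/M4 FP_COMPx register value (FPB v1 layout:
# bit 0 = ENABLE, bits [28:2] = comparator address, bits [31:30] = REPLACE).
REPLACE = {0b00: "remap", 0b01: "bkpt on lower halfword",
           0b10: "bkpt on upper halfword", 0b11: "bkpt on both halfwords"}

def decode_fp_comp(value):
    enabled = bool(value & 1)
    address = value & 0x1FFFFFFC   # bits [28:2], word-aligned
    mode = REPLACE[(value >> 30) & 0b11]
    return enabled, address, mode

en, addr, mode = decode_fp_comp(0x48001239)
print(en, hex(addr), mode)  # True 0x8001238 bkpt on lower halfword
```

So the value 0x48001239 from the dump is simply an enabled breakpoint on the instruction at 0x08001238 – right in the middle of flash.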
A lot of further research revealed that it is possible to run OpenOCD in debug mode by passing the parameter “-d3” to it. This even works in Eclipse:
With this additional debug output, I could actually see what secretly happens after flashing. I saw about six of these blocks:
This is the part where the breakpoints were set, and I could see that the breakpoints were explicitly requested by someone! But why? And why on earth would Eclipse set so many breakpoints? I decided to look up the addresses in the linker file.
The locations usually looked somewhat like this:
All of these blocks contained a function whose name included the string “main” – and the same was true for my recently added C++ class, which also had a member function named “main”. That was when I understood that Eclipse automatically sets a breakpoint at main() after startup! And for some reason, it also sets this breakpoint in classes that have a member function named main. Just because I added one more class with a “main” function, the number of available breakpoints was exceeded, and the debugger wouldn’t work anymore.
You could debate whether it is good style to have functions named “main” in your code. For me it was OK, because they are not only class members but also live in their own namespaces, so the name should be properly scoped. Turns out this was not always the case.
So if you encounter this problem, make sure to reduce the number of functions named “main” in your system!
Dear friends, today I would like to share some exciting news with you and make it official – I am building a converter that lets you fit a digital cluster into your old-school ’80s vintage car!
In the last few months, I published details on how the digital cluster in a Nissan Sunny B12 Rz-1 works, and how it is integrated into the vehicle electronics. Knowing full well that, even with this information available, most owners of these old cars would not be able to do the job themselves, I decided to start building an adapter.
So what’s the idea?
Having solved all the big question marks around the digital cluster was not sufficient. It is a long way from knowing what the speed sensor signal looks like (0–5 V PWM), or how the fuel sensor of the digital cluster works, to actually having a digital cluster installed in your car. That’s why I decided to build you guys a simple, plug-and-play conversion. The idea is that with my kit, you just install a different speed sensor, attach my harness to the one already installed in the car, and connect it to your digital cluster. No hassle, no removal of the dashboard, no cutting of wires, no soldering.
The heart of the conversion is a little custom-made ECU based on an STM32 microcontroller. This chip does the math and adapts the signals we provide to it into a format that the original Sunny Rz1 digital cluster understands. It basically works as a translator between the electrical signals of your car and the digital cluster.
That solution means you no longer have to hunt for a Nissan Rz1 JDM digital speed sensor or acquire a fuel level sensor from a model with a digital dash – if you own these parts, consider yourself lucky, as they are pretty much unobtainium.
In more detail, the ECU converts the vehicle speed signal from a sensor that fits mechanically onto the Rz1 transmission (see the black round thingy in the picture). It gives out 4 pulses per rotation – but the digital cluster needs 24 pulses per rotation! To solve this, the ECU constantly monitors the frequency of the vehicle speed sensor, multiplies this frequency by 6, and feeds the resulting signal to the digital cluster.
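The math behind this is straightforward. The following sketch is my own illustration (the names and the example numbers are mine, not from the actual firmware); the ratio of 24 output pulses to 4 input pulses per revolution fixes the frequency multiplier, and the output period follows directly from the measured input period:

```python
# Sketch of the speed-signal conversion: the sensor delivers 4 pulses per
# revolution, the cluster expects 24, so the ECU scales the measured
# input frequency by their ratio.
PULSES_IN_PER_REV = 4
PULSES_OUT_PER_REV = 24
MULT = PULSES_OUT_PER_REV // PULSES_IN_PER_REV  # = 6

def output_period_us(input_period_us):
    """Period of the pulse train generated for the cluster, in microseconds."""
    return input_period_us / MULT

print(MULT, output_period_us(6000.0))  # 6 1000.0
```

On the real hardware this boils down to one input-capture timer measuring the sensor period and one output-compare timer reloaded with the divided value.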
For the fuel signal, the ECU does a similar job: an ADC (analog-to-digital converter) measures the signal of the fuel sender in your car, and a lookup table converts it to a fuel level. Say, for example, the analog fuel sensor measures 50 Ω; then the ECU knows that this corresponds to a fuel level of 30%. Using the lookup table of the digital cluster, it then translates this fuel level to a voltage: say 30% corresponds to a fuel signal of 2.1 V, then the ECU provides a 2.1 V signal to the cluster, which then displays the correct fuel quantity of 30%.
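The two-stage translation can be sketched as a pair of interpolated lookup tables. This is a hedged illustration of mine, not the actual firmware: apart from the 50 Ω → 30% → 2.1 V example from the text, all table values below are made up.

```python
# Two-stage fuel-level translation: sender resistance -> fuel level -> cluster
# voltage, each stage a piecewise-linear lookup table.
import bisect

def interp(table, x):
    """Piecewise-linear interpolation in a sorted (x, y) table, clamped at the ends."""
    xs = [p[0] for p in table]
    i = bisect.bisect_left(xs, x)
    if i == 0:
        return table[0][1]
    if i == len(table):
        return table[-1][1]
    (x0, y0), (x1, y1) = table[i - 1], table[i]
    return y0 + (y1 - y0) * (x - x0) / (x1 - x0)

# Analog sender: resistance [Ohm] -> fuel level [%]  (values illustrative)
SENDER = [(10, 100.0), (50, 30.0), (80, 0.0)]
# Digital cluster: fuel level [%] -> input voltage [V]  (values illustrative)
CLUSTER = [(0.0, 0.5), (30.0, 2.1), (100.0, 4.5)]

def cluster_voltage(resistance_ohm):
    level = interp(SENDER, resistance_ohm)
    return interp(CLUSTER, level)

print(cluster_voltage(50))  # 2.1 – matches the 50 Ohm -> 30% example
```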
Some geeky technical details — the STM32 has 256 KB of flash and 64 KB of RAM. The application runs on FreeRTOS. Four different tasks manage the signal conversion, and a serial UART interface lets you print debug information or change settings. The conversion factors and lookup tables are stored in data flash.
But having an ECU to convert the signals is not enough, as it would still mean spending hours and hours removing your vehicle’s dashboard, cluster, and harness, cutting wires, soldering, measuring… it can easily take several weeks to accomplish this. I asked myself: how can I make the installation as easy as possible, so that basically everyone can do it?
The idea that popped into my mind was to build an adapter wire harness, which plugs into the OEM harness and changes the pinout. On the other side of the adapter harness, you can directly connect your cluster. The biggest obstacle here is that the cluster connectors in the Rz1 are 30 years old, and there’s no place on earth where you can buy a matching female connector. I searched for days. You just can’t.
So that basically means that if you want to install a digital cluster, you have to cut off the original connectors, and solder everything together?
After giving up the search for a suitable connector that plugs into the OEM harness, I decided to build my own. Above is the CAD drawing of it, and if you scroll all the way up to the picture, you can see the 3D-printed cases. Add two more custom-made PCBs and a strain relief, and voilà – your custom-made electrical connector is complete!
The custom-made PCB serves as the electrical connection. The cables are soldered to the PCB and then slid into the 3D-printed case, where they are secured. The connector plugs directly into the Rz1 harness, without any cutting required! Compare the gray plug cases with the electrical connector that came from a junkyard Sunny B12 – they line up perfectly.
The adapter is now in the late stages of development. The custom plugs are made, the PCB layout for the ECU is done, and the software is about 80% complete; next up is building a prototype and a test installation in a Nissan Rz1. After extensive testing, I hope I can offer this adapter for sale within one or two months.
What will it cost?
Considering the effort it took to write the code and design the PCBs, the costs for custom PCBs and 3D printing, all the research on the Rz1 digital wire harness, measuring out a complete Rz1 harness from Russia, and trying out a lot of different approaches, development of this adapter has cost me a lot. The fact that the number of potential buyers will certainly be fewer than ten does not help either. I am targeting a sales price of roughly 350 € per complete converter, including speed sensor, adapter harness, and custom-made ECU. If you are on a tight budget: all the schematics and drawings will be published once the project is completed, so you can also build everything yourself.
That being said, thank you for reading and for your interest! Have fun keeping your old cars on the road, and make sure to come back regularly for updates on this project!
By the way, for all the software guys and girls among you: feel free to check out my GitHub, where you will find the complete source code for the adapter. I will probably do another blog post that focuses only on the software parts, so stay tuned!
Today I finally managed to measure the signals on the digital speed sensor. I don’t own any of the Rz-1 digital cluster parts myself, but luckily, a friend lent me his sensor and allowed me to reverse-engineer it.
The speed sensor has a connector with three pins: black (right), yellow (middle), red (left).
Based on what I measured, black is GND, red is the +5 V supply, and yellow is the speedometer signal. Judging from my scope captures, we see 24 pulses per revolution of the speedo cable:
When I rotate the pin of the speed sensor at about one revolution per two seconds, the cluster shows me – very roughly – 13 km/h.
When connecting only +5 V and GND but leaving the signal pin open, the signal looks identical. When rotating the pin of the sensor, the 5 V pulses on the signal line are still visible.
Therefore, it seems we can simply feed a 0–5 V PWM signal to the cluster, and it should display the speed.
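Putting the numbers together (a back-of-the-envelope sketch of mine): 24 pulses per revolution at roughly one revolution per two seconds gives a 12 Hz pulse train, and the cluster read roughly 13 km/h at that rate. Assuming the displayed speed is proportional to the pulse frequency:

```python
# Rough speed-vs-frequency calibration from the measurement above
# (assumption: displayed speed is proportional to pulse frequency).
PULSES_PER_REV = 24
REV_PERIOD_S = 2.0                        # ~1 revolution per 2 seconds
FREQ_HZ = PULSES_PER_REV / REV_PERIOD_S   # = 12 Hz
KMH_PER_HZ = 13.0 / FREQ_HZ               # cluster showed ~13 km/h at 12 Hz

def displayed_speed_kmh(pulse_freq_hz):
    return KMH_PER_HZ * pulse_freq_hz

print(round(displayed_speed_kmh(12.0)))  # 13
```

Take the calibration constant with a grain of salt – it rests on a single, very rough reading.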
This manual describes how to use an ESP32 as an extension to the existing OBDIIC&C device, available for the Honda Insight ZE1. This device will not add any new features, but will allow you to visualize the OBDII data on your smartphone.
If you are looking for a tutorial on how to read data from an OBDII connector, this is the wrong place.
First, you’ll need an ESP32. Connect the ESP32 via a Micro USB cable to your PC. Install the drivers, if necessary. The ESP32 should be recognized as a COM port:
Download the ESP32 Flash Download Tool from the website:
A window will pop up. Configure it as follows: SPI flashing, 32 Mbit flash size, 40 MHz SPI speed.
Add the binary to the list of files to flash and set a start address of 0x0. Also make sure to choose the correct COM port. After setting everything up correctly, press “START”.
If everything works, the download should take about five minutes.
After resetting the target, you should see some debug output on the serial interface, looking like this:
How to Connect the ESP32 to the OBDIIC&C
To make sure the ESP32 can talk with the OBDIIC&C, we need to connect three wires:
However, the logic levels of the two MCUs are different: the PIC uses 5 V TTL logic for the UART, while the ESP32 uses 3.3 V. Additionally, the logic levels are inverted.
Hence, you need two transistors to adapt the signal levels, and invert the logic levels at the same time. Have a look at the example schematics below to understand how to wire things up:
Note that the pins of the ESP32 used for UART communication are D22 and D23, not the pins labeled “RXD” and “TXD” next to them (that is a different UART channel). The ESP32 supports routing the second UART to almost any pin of the device. The first UART peripheral is available via USB or the RXD and TXD pins and is used for flashing and debugging. The second UART, on pins 22 and 23, is used to communicate with the OBDIIC&C.
Please double-check the pinout of the “headphone” plug on the OBDIIC&C, and make sure to measure which pin is GND first.
3.3 V is directly available on the ESP32; the 5 V you’ll need to take from the OBDIIC&C.
Below, you will find the download link to the ESP32 software:
A few years ago, I modified a LHD Nissan Sunny B12 Coupé Rz1, fitted a RHD JDM digital cluster, and uploaded a short video of it on YouTube.
I received a lot of questions on this modification, and decided to publish the knowledge I gained here. Most importantly, here is the pinout.
The connectors are drawn on the left side. Basically, you see five connectors here. Connectors “A”, “B”, and “C” under the header “Digitaltachostecker” (digital cluster connectors) describe the connectors of the digital cluster. Connector “A” is the large one that powers the telltales and warning lights; connector “B” is the hard-to-find black one that carries all the actual digital signals. The three-pin connector “C” is not found on the cluster, but is the connector of the tachometer sensor in the engine bay (you need that part too).
The connectors labeled “EU Stecker” (EU connectors) refer to the ones you will find in LHD Rz1s that came with analog clusters. You should find them in your car if you remove the cluster.
In case you have a low-end spec B12 that came without a rpm gauge (some U.S. Sentras maybe?), your connector layout is probably different again.
The drawings show the connectors (sockets) on the cluster, as seen when looking at the backside of the cluster. They do not show the connectors on the harness!
The first problem you’ll face is that you need to get the small connector “B”, which, if it is not attached to your digital cluster, is pretty much unobtainium. You could try to find the original part number or the part number of the supplier who made this connector, or, in the worst case, replace the connector with a different type of similar size / pin count and just solder everything together. The white, larger connector “A” is physically identical to connector “A” of your LHD harness, but with a different pinout.
You got your hands on some plugs? Good, then you can solder everything together, and your digital cluster should partly come to life: RPM, turn indicators, telltales (except exhaust temperature), temperature gauge, and fuel level should work.
Vehicle Speed Sensor
The next difficult task is to get a speed sensor. The analog clusters use a mechanical speedo cable that connects the gearbox to the speedo pointer – that is 100% mechanical old-schoolness!
For the digital cluster, NISSAN chose to go halfway – they still use a mechanical cable, about 50 cm long, that connects the gearbox to an electrical sensor. That sensor converts the motion into PWM signals, which are fed electrically to the cluster. For the conversion, you will need both the cable and the sensor; if you don’t have these parts, the installation will be a bit more difficult. If you have them, they fit plug-and-play to any gearbox. The digital sensor is connected with two cables, C2 and C3, to the main plug of the cluster, so you will need to add two wires from your engine bay to your cluster.
If I remember correctly, the speedo signal is triggered twice per revolution. If you don’t have a sensor, you might want to use a function generator and try to get your cluster to display something. If you are struggling here, feel free to leave a comment, and I will look up the technical details.
From the youtube comments on my channel, I understood that there seem to be different types of sensors, depending on which engine / gearbox your donor car had. It is likely that non-matching gearboxes will provide false readings on the speedo! If this is the case, then you would need to invest more engineering into the problem, e.g. use a small microcontroller to correct the signal.
I once tried to fit a digital sensor from the B13/N14 series into an E16i gearbox, but it wouldn’t fit plug-and-play. The main problem was that while the diameter was the same, the axis of the sensor was slightly offset. With some fiddling you might be able to mount them, though, and these sensors might fit the GA16i gearbox. If you tried going this way, please leave a comment and share your experiences! It would definitely look cleaner than the OEM B12 parts.
You will have to figure out a way to mount the speedo sensor in your engine bay, as the LHD and RHD firewalls are different. I suggest mounting the sensor to a little metal plate, and then adapting that plate to the firewall.
Incorrect mounting with too much tension on the cable might lead to the speedo not working at all, or to faster wear-out.
Fuel Level Sensor
A problem which is not yet solved is the fuel gauge. The fuel senders used in the JDM digital cluster cars have a different characteristic.
The analog clusters use an 8 V voltage regulator, which is connected in series with the fuel gauge and the fuel sender. The resistance of the LHD fuel sender is as follows:
(table: resistance [Ω] vs. displayed fuel level)
The digital cluster characteristic is quite different. I used a JDM cluster and measured the fuel input. The fuel sender is supplied with 5 V, and depending on the resistance, the following number of bars is displayed:
(table: resistance [Ω] vs. number of bars)
With this information, you should be able to build your own adapter for the fuel sensor.
This project started in 2010 as a university project with the goal of developing a rotation clock. The main objectives of the project were to develop an inductive power supply using a MOSFET H-bridge and to lay out a PCB. The PCB contains an 8051 microcontroller, ten OSRAM RGB LEDs, and an optical reflex coupler. The board is mounted on a motor, which spins it at 40 rotations per second. Based on the input of the reflex coupler, the microcontroller calculates the current position and switches the LEDs on or off so that the user sees a stable image.
The PCB features a power supply that supports AC/DC inputs ranging from 7 to 20 V, while delivering a stable 5 V DC supply for the 8051 microcontroller soldered onto the PCB. The board is powered via induction: a MOSFET H-bridge drives the primary coil, while the secondary coil is mounted on the rotating part. Programming started in assembler (a university requirement), but will be finished in C. The project is not yet completed – the PCB works as expected and the inductive power supply is working, but the current mechanical setup is not fast enough to create a stable image. Development is still ongoing (the only problem is time 🙂).
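The position calculation from the reflex coupler can be sketched as follows. This is my own illustration, not the 8051 firmware: the coupler gives one sync pulse per revolution, and from the measured revolution period the MCU can work out which of N angular display columns is currently under the LEDs (the column count is an assumption of mine).

```python
# POV position calculation: one sync pulse per revolution, divide the
# revolution into COLUMNS angular slots and find the current slot.
COLUMNS = 120  # assumed angular resolution of the persistence-of-vision image

def current_column(rev_period_us, time_since_sync_us):
    """Index of the display column under the LEDs right now."""
    phase = (time_since_sync_us % rev_period_us) / rev_period_us  # 0..1
    return int(phase * COLUMNS) % COLUMNS

# At 40 rotations per second, one revolution takes 25,000 microseconds.
print(current_column(25000, 12500))  # 60 – halfway around the circle
```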
Keywords: 3D circle reconstruction, traffic sign reconstruction, ellipse fitting
I came up with the idea of publishing this piece of software while working on my Master’s Thesis. During my time at SIMTech, I was working on robotics, 3D reconstruction of workpieces, and path teaching using Augmented Reality (AR).
Part of our project was the task of reconstructing a 3D circle based on its projection onto several camera images. Assuming I have several 2D ellipses representing the projections of this 3D circle taken from different positions, I want to find the 3D circle that matches the projections as well as possible. Several papers are available on this problem, but they either require additional knowledge, such as the circle’s base plane, or are very academic, requiring a lot of time to understand. The method I present on this page is based on the following paper:
Soheilian, B.; Brédif, M., “Multi-view 3D Circular Target Reconstruction with Uncertainty Analysis”, ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences, Vol. 2, Issue 3, pp. 143–148, 2014
In my opinion, it is a very good paper and their approach is very advanced, but due to the amount of information in it, it can be a bit difficult to understand for working engineers or programmers. That is why I decided to publish my implementation and try to explain the approach proposed by Soheilian and Brédif in a bit more detail. The major difference between my implementation and theirs is that their approach also optimizes the extrinsic camera parameters and the 2D ellipse data to provide the best results, while my method assumes that the camera parameters (extrinsic and intrinsic) are known and perfect.
The projection of a 3D circle onto a camera image is an ellipse, so these projections form the basis for the reconstruction approach. In this document, I will not focus on how to detect ellipses in a camera image or how to find the best-fitting ellipse, as this part of the algorithm is rather simple. From my experience, a simple Canny edge detector with subsequent contour detection, followed by ellipse detection and filtering, usually works quite well. The algorithm I used to find the best-fitting ellipse in a camera image is presented in the following paper; it is basically an improved, numerically stable version of the method by Fitzgibbon et al.:
Halíř, R.; Flusser, J., “Numerically Stable Direct Least Squares Fitting of Ellipses”, Institute of Information Theory and Automation, Prague, 1998.
I assume that you already detected and found the best fitting ellipses that belong to the same 3D circle on different images.
What Do I Need to Know?
In the following text, I assume that you are already familiar with the following mathematical concepts. If you’re not, you may consider having a short look on the internet to understand them, or just keep reading. Some things are not as complicated as they first look – after all, it’s no rocket science 😉
The camera pinhole projection model. I assume that you know what the intrinsic and extrinsic parameters are and what their matrices look like.
Matrix operations, multiplications, inverse, etc.
2D Ellipse representation. You need to know how to express an ellipse as the solution of a conic equation.
Quadrics and their duals (you don’t need a deep understanding of them – I don’t have one either. It is enough to know what a quadric is and that a quadric has a dual).
Eigenvectors and Eigenvalues.
So let’s get started!
Algorithm Implementation Details
Unprojecting a circle from several views is a highly nontrivial task and requires a skillful approach to mathematically express the relationship between the projections and the original 3D circle with as few unknowns as possible, while keeping the approach unconstrained. For this document, a simplified version of the algorithm published by Soheilian and Brédif has been used.
This algorithm encodes all information about the circle in six unknowns: the circle center C and the circle’s normal vector N. With C and N, the orientation and position of the plane supporting the circle are fully described. The radius of the circle is encoded in the normal vector by defining the normal vector’s length to be equal to the radius. The proposed parameterization is the minimal parameterization of 3D circles and is both unambiguous and unconstrained, so no additional conditions need to be set up.
The idea of the algorithm is to describe both the ellipse and the 3D circle as quadrics. To reduce the calculation effort, both quadrics are transformed into their duals. Taking the conic equation of a 2D ellipse,

a*x^2 + b*x*y + c*y^2 + d*x + e*y + f = 0,

this can be rewritten in matrix form as p^T * E * p = 0 with p = (x, y, 1)^T and

E = [[a, b/2, d/2], [b/2, c, e/2], [d/2, e/2, f]].

E is a 3×3 symmetric matrix and represents the ellipse as a point quadric. With this point-based conic, the dual conic E* of this quadric, based on all tangent lines of E, can be calculated. As a conic equation can be scaled without changing its validity, the determinant can be ignored and the adjugate matrix of E can be used to describe the dual conic of E. By additionally imposing the ellipse constraint and scaling the conic equation appropriately, the dual quadric E* can be calculated directly from the ellipse parameters.
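The point-conic/dual-conic step can be illustrated numerically. This is my own pure-Python sketch (not the paper’s code): the dual of a point conic E is, up to scale, its adjugate, and every tangent line l of the conic satisfies l E* l^T = 0.

```python
# Dual conic via the adjugate of a 3x3 point-conic matrix.
def adjugate3(m):
    """Adjugate (transposed cofactor matrix) of a 3x3 matrix."""
    (a, b, c), (d, e, f), (g, h, i) = m
    return [
        [e * i - f * h, c * h - b * i, b * f - c * e],
        [f * g - d * i, a * i - c * g, c * d - a * f],
        [d * h - e * g, b * g - a * h, a * e - b * d],
    ]

# Unit circle at the origin as a point conic: x^2 + y^2 - 1 = 0
E = [[1, 0, 0], [0, 1, 0], [0, 0, -1]]
E_dual = adjugate3(E)

# The tangent line x = 1, written homogeneously as l = (1, 0, -1),
# must satisfy l * E_dual * l^T = 0.
l = (1, 0, -1)
val = sum(l[r] * E_dual[r][c] * l[c] for r in range(3) for c in range(3))
print(val)  # 0
```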
To describe a circle in 3D space, Soheilian and Brédif propose to start with a quadric representation of a sphere and flatten it over a parameter t. Their approach starts with the equation of an origin-centered sphere of radius r, x^2 + y^2 + z^2 = r^2, rewritten as a quadric. In the next step, the sphere is flattened by scaling the z coordinate with t: at t = 0, the quadric represents the original sphere; at t = 1, it degenerates into a disc of radius r at z = 0. The dual quadric of this 3D circle is obtained as follows:
Using homogeneous coordinates, this circle can be translated and rotated using a transformation matrix containing the 3×3 rotation matrix R and the translation vector C. As defined previously, the radius information of the 3D circle is encoded in the length of the normal vector, thus |N| = r.

Transforming the dual quadric results in a quadric that fully describes a circle in 3D space. In the following formulas, vectors are treated as single-column matrices to omit the vector arrow. Further simplification yields:

With I3 being the 3×3 identity matrix. Using the notation [N]x for the skew-symmetric matrix encoding the cross product with N, Q* can be rewritten as follows:
An ellipse is the projection of a 3D circle using a projection matrix P. Similarly, a dual quadric Q* is imaged as a dual conic E*:

The projection matrix P is the product of the camera’s intrinsic and extrinsic parameters, with K being the matrix of intrinsic parameters, R the 3×3 rotation matrix of the extrinsic parameters, and S the corresponding translation vector. If a 4×4 matrix of extrinsic parameters containing both the rotation and a translation T is given, S can be calculated by decomposing the transformation matrix into rotation and translation. Now, S is easily obtained by reversing the rotation of T:
With both the extrinsic and intrinsic parameters known, the relationship between the 3D circle and the imaged ellipse can be written as follows, defining M = KR:

To calculate the individual elements of the dual conic, the above equation can be rewritten element-wise in terms of the rows of M: i and j denote the row and column of the matrix, respectively; the dot represents the dot product, while the × represents the cross product.
To compare the calculated projection of this circle with the ellipse parameters estimated from the image contours, E* must match E*obs up to a scalar factor. As a conic equation can be scaled without affecting the described ellipse, an additional unknown homogeneous scale has to be calculated. The above equation yields six equations, one of which has to be used to calculate the unknown scale factor, leaving five equations. As the model of the 3D circle has six unknown parameters, at least two ellipse projections of the same circle must be known to solve this set of equations.
By scaling the parameters of the observed ellipse so that E*obs,33 = 1, the cost vector for the least-squares optimization can be written down. Here, vec() denotes a vector containing the five distinct elements of E* or E*obs, respectively. A possible arrangement is:
For each ellipse, five equations can be set up, and the system of equations can be solved iteratively using the Gauss-Newton algorithm. Given N projections of the circle, the Jacobian is a 5N×6 matrix.
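To illustrate the Gauss-Newton machinery without the full six-unknown circle problem, here is a far simpler toy example of mine: fitting the center (cx, cy) of a circle with known radius r to 2D points. The residual for each point is |p - c| - r, and the normal equations J^T J Δ = J^T f are solved with an explicit 2×2 inverse.

```python
# Minimal Gauss-Newton example: estimate a circle center from points lying
# on the circle, given a known radius.
import math

def gauss_newton_center(points, r, c0, iters=20):
    cx, cy = c0
    for _ in range(iters):
        JTJ = [[0.0, 0.0], [0.0, 0.0]]
        JTf = [0.0, 0.0]
        for (px, py) in points:
            dx, dy = cx - px, cy - py
            d = math.hypot(dx, dy)
            f = d - r                 # residual
            jx, jy = dx / d, dy / d   # d(residual)/d(cx), d(residual)/d(cy)
            JTJ[0][0] += jx * jx; JTJ[0][1] += jx * jy
            JTJ[1][0] += jy * jx; JTJ[1][1] += jy * jy
            JTf[0] += jx * f; JTf[1] += jy * f
        # Solve JTJ * delta = JTf via the explicit 2x2 inverse, step c -= delta
        det = JTJ[0][0] * JTJ[1][1] - JTJ[0][1] * JTJ[1][0]
        cx -= ( JTJ[1][1] * JTf[0] - JTJ[0][1] * JTf[1]) / det
        cy -= (-JTJ[1][0] * JTf[0] + JTJ[0][0] * JTf[1]) / det
    return cx, cy

# Four points on the unit circle around (3, 4); converges to (3.0, 4.0)
cx, cy = gauss_newton_center([(4, 4), (2, 4), (3, 5), (3, 3)], 1.0, (2.5, 3.5))
print(round(cx, 6), round(cy, 6))
```

The real problem works the same way, just with the 5N×6 Jacobian and six unknowns, so the normal equations are solved with a general 6×6 linear solver instead.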
Introducing the 3×3 matrices of partial derivatives, F contains the following derivatives:

Keeping the order used for vec() defined above, the first five rows of the Jacobian matrix can be set up as follows; each image of the 3D circle contributes five rows to the Jacobian matrix.
The solution vector can then be calculated in several iteration steps using the well-known Gauss-Newton algorithm. In the end, the algorithm returns the position, orientation, and radius of the 3D circle, given a set of ellipses and the corresponding camera parameters (extrinsic and intrinsic). At the moment, the implementation assumes that the intrinsic parameters of the camera do not change; however, simple adaptions can be made to make it suitable for stereo setups.
The algorithm was successfully tested using a MATLAB throw-away
implementation before it was integrated into the C++ software project.
You can download the MATLAB script for this algorithm here:
MBDAK 3 is one of my software projects, written in the years 2007 and 2008. It is a 3D engine designed for flight simulator games, written in Delphi 7. In this project, I focused on creating an engine of maximum compatibility and flexibility. It was the first time I came into contact with dynamically linked libraries, which I used to create a flexible interface to the 3D and sound APIs. Both the graphics and the audio subsystem are encapsulated in separate DLL files. Different implementations of these DLLs allow the use of different graphics libraries: implementations were done for the Direct3D 8 and OpenGL APIs, so it is possible to change the 3D output API by simply exchanging the DLLs. This is no problem, as both provide the same interface. The sound subsystem uses the FMOD audio library, which is also wrapped in an encapsulated dynamic library file.
The levels are stored in text files. Each level consists of a set of building blocks, the so-called level entities (i.e. skyscrapers, cranes, streets, and other level elements). Each entity provides a file containing the 3D data for rendering, a collision model file (mostly a simplified version of the 3D data), and several attributes (destroyable, material properties, additional data for light and particle effects). Each entity can then be placed multiple times at any position in the 3D scene. Other data configured in the level file includes the type of aircraft the player is flying, its weapon arsenal, and other important attributes (starting position, position light data, etc.). This was done to provide maximum flexibility in the level design.
Another important goal during the development of this engine was to make the simulation as realistic as possible and to become familiar with the latest rendering technologies of that time. The engine therefore uses mipmapping texture filters, fog, and dynamic lighting, and, most importantly, is able to perform dynamic real-time shadow calculation with the help of the stencil buffer using the Carmack’s Reverse method. However, interesting things such as bump mapping or shaders were not used. The reasons for this were, first, that compatibility with non-T&L-capable video cards was desired (I am a huge fan of old 3dfx video cards, and the game was developed on a Voodoo5), and second, the lack of time – I still had to go to school back then.
I would like to thank Mr. Plöchinger, also known as DungeonKeeper1 on the German VoodooAlert board, for providing the MBDAK 3 soundtrack! The 3D models were partially created by myself using Milkshape 3D, while some others were extracted from older computer games. For this reason, I unfortunately cannot publish this game.
Abilities of the 3D Engine
Selectable 3D API: Direct3D 8, OpenGL, and 3dfx Glide are supported
Support of real-time dynamic stencil buffer shadows
MipMapping texture filters
Dynamic environment loading system
Collision detection using separate collision models, based on a combination of octree data organisation and sphere / triangle collision detection for best performance
A simple particle system supporting alpha blending and z ordering
Shown below are several screenshots of the current development stage. Unfortunately, I cancelled the project after two years of development due to other priorities (studying). Besides that, creating 3D simulations in Delphi was not the smartest choice, and the quality of the source code was miserable, as only structures and no classes were used.
MBDAK 3 with highest details: real-time shadow calculation, fog, particles, and highest texture quality
MBDAK 3 menu intro. Graphics were hand-drawn, scanned and colorized… with Microsoft Paint.
MBDAK 3 menu. Graphics were done with Adobe Fireworks. The little preview window in the top-left corner is inspired by Nintendo’s Starwing. It contains a little starfield simulator.