Delaunator creates badly formed models on some data
I've been testing delaunator-cpp and found that it produces badly formed triangulations on certain data sets (example attached). Specifically, it fails on larger data sets with millimetre resolution that have closely spaced points along feature lines with large gaps in between, for example points along the walls on the inside of a building. This can be mitigated to a large extent, but not entirely, by adding very small random offsets to the vertex positions, which leads me to suspect it could be related to the issue raised here: https://github.com/abellgithub/delaunator-cpp/issues/27 It doesn't seem to happen on the smaller samples I've tried, so apologies for the large test case. I'll try to hunt down a much simpler data set that illustrates the problem better for testing purposes.
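For reference, the jitter mitigation is along these lines, a minimal sketch assuming the flat interleaved x,y coordinate vector that delaunator-cpp consumes; `jitterPoints` and the 1e-6 magnitude are my own choices for illustration:

```cpp
#include <random>
#include <vector>

// Add a tiny uniform random offset to each coordinate to break up
// near-degenerate configurations before triangulation. This reduces,
// but does not eliminate, the malformed output described above.
void jitterPoints(std::vector<double>& coords, double magnitude = 1e-6)
{
    std::mt19937 rng(42);  // fixed seed for reproducible runs
    std::uniform_real_distribution<double> offset(-magnitude, magnitude);
    for (double& c : coords)
        c += offset(rng);
}
```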
Update: it only appears to happen where there is a mix of negative and positive coordinates. Shifting the points prior to triangulation seems to correct the issue. Further testing is ongoing.
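A minimal sketch of the shifting workaround, again assuming the flat interleaved coordinate layout; `shiftToOrigin` is a hypothetical helper name, not part of the library:

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Translate all points so the bounding-box minimum sits at the origin,
// eliminating the mix of negative and positive coordinates.
void shiftToOrigin(std::vector<double>& coords)
{
    double minX = coords[0], minY = coords[1];
    for (std::size_t i = 0; i < coords.size(); i += 2)
    {
        minX = std::min(minX, coords[i]);
        minY = std::min(minY, coords[i + 1]);
    }
    for (std::size_t i = 0; i < coords.size(); i += 2)
    {
        coords[i] -= minX;
        coords[i + 1] -= minY;
    }
}

// Usage: shiftToOrigin(coords); then triangulate as usual, e.g.
// delaunator::Delaunator d(coords);
```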
If you could provide a small test case that would be great.
Hi Andrew, the attached dataset reduces the couple of million points in the original sample down to 1163 points. I'm still trying to reduce this further, but removing many more points also seems to remove the problem.
Adding a shift to the origin to remove negative coordinates fixes the issue, as shown below on the same dataset.
Not much smaller, but here is another test case at 1090 points that might be interesting, as it suggests an issue with the convex hull.
Zooming in a bit shows another point that incorrectly appears to be on the hull.
I haven't studied the delaunator algorithm or code as yet, so I can't really speculate about what's going on beyond this.
A bit more testing here shows that this issue still occurs on datasets with no negative coordinates, but much less frequently. I'll continue with some more tests.
Looking at the convex hull of the failed triangulation, we can see that it is not actually convex. This could be a good mechanism for automatically detecting the point at which delaunator is failing, i.e. create a test for hull convexity and apply it as the hull is being built to find the point of failure. A sketch of such a check is below.
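A minimal sketch of what I have in mind, assuming the hull is available as an ordered counter-clockwise list of indices into the flat coordinate vector; `isConvexHull` is a hypothetical name, not an existing delaunator function:

```cpp
#include <cstddef>
#include <vector>

// Returns false if any consecutive triple of hull points turns
// clockwise, i.e. the polygon is not convex there. hull holds point
// indices in counter-clockwise order; coords is the flat x,y vector.
bool isConvexHull(const std::vector<std::size_t>& hull,
                  const std::vector<double>& coords)
{
    const std::size_t n = hull.size();
    if (n < 3)
        return true;
    for (std::size_t i = 0; i < n; i++)
    {
        std::size_t a = hull[i];
        std::size_t b = hull[(i + 1) % n];
        std::size_t c = hull[(i + 2) % n];
        // Cross product of (b - a) and (c - b); a negative value
        // means a clockwise turn at b.
        double cross =
            (coords[2 * b] - coords[2 * a]) * (coords[2 * c + 1] - coords[2 * b + 1]) -
            (coords[2 * b + 1] - coords[2 * a + 1]) * (coords[2 * c] - coords[2 * b]);
        if (cross < 0)
            return false;
    }
    return true;
}
```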
I haven't dug into the code in any detail, but a first look shows several different arbitrary tests for point equality/similarity, i.e.
constexpr double EPSILON = std::numeric_limits<double>::epsilon();
inline bool check_pts_equal(double x1, double y1, double x2, double y2) {
return std::fabs(x1 - x2) <= EPSILON &&
std::fabs(y1 - y2) <= EPSILON;
}
and
if (Point::equal(m_points[i], m_points[e], span) ||
Point::equal(m_points[i], m_points[q], span))
{
e = INVALID_INDEX;
break;
}
and
static bool equal(const Point& p1, const Point& p2, double span)
{
double dist = dist2(p1, p2) / span;
// ABELL - This number should be examined to figure how how
// it correlates with the breakdown of calculating determinants.
return dist < 1e-20;
}
and
inline bool counterclockwise(const Point& p0, const Point& p1, const Point& p2)
{
Point v0 = Point::vector(p0, p1);
Point v1 = Point::vector(p0, p2);
double det = Point::determinant(v0, v1);
double dist = v0.magnitude2() + v1.magnitude2();
double dist2 = Point::dist2(v0, v1);
if (det == 0)
return false;
double reldet = std::abs(dist / det);
if (reldet > 1e14)
return false;
return det > 0;
}
I'm wondering whether it would make sense to have a defined value for point resolution on the plane, e.g. defaulting to 1 micron, and to work to that constant for equality comparisons. Just a thought.
I've implemented the above, which seems to correct the problem on all test sets so far. The changes are as follows:
delaunator.hpp
namespace delaunator {
const double PointResolution = 0.00001;
const double PointResolution2 = PointResolution * PointResolution;
...
static bool equal(const Point& p1, const Point& p2, double span)
{
double dist = dist2(p1, p2) / span;
return dist < PointResolution2;
}
delaunator.cpp
enum { LeftSide,   // Counter clockwise
       RightSide,  // Clockwise
       Colinear };

inline int CheckSide(double x1, double y1, double x2, double y2, double px, double py) {
    // o is twice the signed area of the triangle (x1,y1), (x2,y2),
    // (px,py); values within PointResolution2 of zero count as colinear.
    double o = ((x2 - x1) * (py - y1)) - ((y2 - y1) * (px - x1));
    if (o > PointResolution2)
        return (LeftSide);
    if (o < -PointResolution2)
        return (RightSide);
    return (Colinear);
}
inline bool clockwise(const Point& p0, const Point& p1, const Point& p2)
{
return (CheckSide(p0.x(), p0.y(), p1.x(), p1.y(), p2.x(), p2.y()) == RightSide);
}
inline bool clockwise(double px, double py, double qx, double qy,
double rx, double ry)
{
return (CheckSide(px, py, qx, qy, rx, ry) == RightSide);
}
inline bool counterclockwise(const Point& p0, const Point& p1, const Point& p2)
{
return (CheckSide(p0.x(), p0.y(), p1.x(), p1.y(), p2.x(), p2.y()) == LeftSide);
}
inline bool counterclockwise(double px, double py, double qx, double qy,
double rx, double ry)
{
return (CheckSide(px, py, qx, qy, rx, ry) == LeftSide);
}
...
inline bool check_pts_equal(double x1, double y1, double x2, double y2) {
return std::fabs(x1 - x2) <= PointResolution &&
std::fabs(y1 - y2) <= PointResolution;
}
I'm not convinced these constants are entirely right for the above comparisons, as coarsening the point resolution to 1 mm seemed to strip far more points out of the triangulation than I would have expected. That said, with the 10 micron value above, the results are looking very good for my use cases and test data.
Note that the smaller test data set above is not useful for testing robustness, as many small changes will make it triangulate correctly while the bug still exists on larger datasets. The original larger dataset is better suited to this purpose, though it could be worth writing a small application that repeatedly generates and checks large test cases with various geometric properties to properly demonstrate robustness; a sketch of one is below. I'll carry out some more tests here on different datasets to see whether the above fix breaks anything else.
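As a starting point, a rough sketch of such a generator, imitating the failure geometry (dense points along segments with big gaps between them); every name here is mine, and the hull convexity check sketched earlier in the thread is assumed:

```cpp
#include <delaunator.hpp>
#include <cstdio>
#include <random>
#include <vector>

// Sample points densely along a few random "wall" segments, mimicking
// a horizontal slice through a scanned building interior.
std::vector<double> makeWallCase(std::mt19937& rng, int walls, int perWall)
{
    std::uniform_real_distribution<double> pos(-50.0, 50.0);
    std::vector<double> coords;
    for (int w = 0; w < walls; w++)
    {
        double x0 = pos(rng), y0 = pos(rng);
        double x1 = pos(rng), y1 = pos(rng);
        for (int i = 0; i < perWall; i++)
        {
            double t = double(i) / (perWall - 1);
            coords.push_back(x0 + t * (x1 - x0));
            coords.push_back(y0 + t * (y1 - y0));
        }
    }
    return coords;
}

int main()
{
    std::mt19937 rng(12345);
    for (int trial = 0; trial < 1000; trial++)
    {
        std::vector<double> coords = makeWallCase(rng, 20, 200);
        delaunator::Delaunator d(coords);
        // Walk the hull (e.g. via d.hull_start / d.hull_next) into an
        // index list and run the isConvexHull check sketched above:
        // if (!isConvexHull(hullIndices(d), coords))
        //     std::printf("non-convex hull at trial %d\n", trial);
    }
    return 0;
}
```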
It's hard to tell what you have done without a diff. Also, your point set doesn't make sense with the code that you've posted. The coordinates only have 5 decimal places of precision, which is less than the equality check change that I think you're proposing.
Please provide a complete program with a minimal test case in order to verify what's going on.
It's also important to know what system you're working on.
Modified source attached. I'm working in Visual C++ 2022 (version 17.13.7), targeting 64-bit Windows 10/11, building with the ISO C++17 standard and the precise floating point model. Testing was carried out with the optimizer turned off and turned on; both yield the same results. I also tested using the strict floating point model, which again yields the same results.
For the small degenerate test case posted, the previous counterclockwise function was returning true for the coordinates below, whereas my updated one returns false. I'm not sure where you got your algorithm for the counter-clockwise test; the one I'm using is described here, https://www.geeksforgeeks.org/dsa/orientation-3-ordered-points/ and is slightly more computationally efficient.
For this specific case your code lists these points as counterclockwise, and the unmodified orientation check also lists the points as counterclockwise, whereas the update with the resolution check lists the points as colinear. I've tested the larger test case with the unmodified orientation check and it fails, whereas with the resolution check in place it triangulates correctly. The distance from the point rx,ry to the line px,py-qx,qy for the coordinates below is 1.9539925233402755e-13 when checked.
| Name | Value | Type |
|---|---|---|
| px | -15.406501428620366 | double |
| py | 30.480879767016340 | double |
| qx | -15.402501428620367 | double |
| qy | 30.464879767016342 | double |
| rx | -15.403501428620366 | double |
| ry | 30.468879767016343 | double |
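For anyone wanting to reproduce that figure, a self-contained sketch of the distance check (my own verification snippet, not delaunator code):

```cpp
#include <cmath>
#include <cstdio>

int main()
{
    double px = -15.406501428620366, py = 30.480879767016340;
    double qx = -15.402501428620367, qy = 30.464879767016342;
    double rx = -15.403501428620366, ry = 30.468879767016343;

    // The cross product of (q - p) and (r - p) is twice the signed
    // triangle area, so |cross| / |q - p| is the perpendicular
    // distance from r to the line through p and q.
    double cross = (qx - px) * (ry - py) - (qy - py) * (rx - px);
    double len = std::hypot(qx - px, qy - py);
    std::printf("distance = %.17g\n", std::fabs(cross) / len);
    return 0;
}
```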
If you want to make a PR, that's fine, but throwing out points as "duplicate" when their coordinates are within ".00001" units isn't sufficient for many use cases. You'll also have to provide a small test case.
I expect your issue has to do with the constants that are in place (which I think you've found), but without understanding better why they aren't sufficient for your use case, they can't be changed. Whether something is deemed counterclockwise or not when the points are essentially collinear doesn't really matter, so long as the computation is consistent (perhaps it's not consistent and this is the problem, but without a small test case it's hard to know).
The orientation check that you posted above is essentially the same as what currently exists, but it doesn't account for scale (it assumes the scale is always small). Consider that points with very large coordinates may exist that have the same properties.
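A hypothetical example of the scale point (mine, not from the thread): the cross product these orientation tests compute scales with the segment length, so a fixed cutoff like PointResolution2 corresponds to a different geometric tolerance at every scale:

```cpp
#include <cstdio>
#include <initializer_list>

// Same cross product as the orientation tests above: equal to
// |q - p| times the perpendicular distance from r to line pq.
double orient(double px, double py, double qx, double qy,
              double rx, double ry)
{
    return (qx - px) * (ry - py) - (qy - py) * (rx - px);
}

int main()
{
    // r sits exactly 1e-7 above the line y = 0 in every case, yet the
    // cross product grows with segment length: against a fixed
    // threshold of 1e-10, the short segment reads as colinear while
    // the long ones read as a genuine turn.
    for (double len : {0.0001, 1.0, 1000.0})
    {
        double o = orient(0.0, 0.0, len, 0.0, len / 2.0, 1e-7);
        std::printf("length %g: cross = %g\n", len, o);
    }
    return 0;
}
```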
Of course, if your modified code suits your needs, go with it.
Thanks for the feedback. I'll be sticking with the modified code myself for now, as it seems to produce stable results for the type of data I'm dealing with. Could I ask you to check whether the test data already provided in delaunator-bug.zip and delaunator-bug2.txt above creates correct results with the existing codebase on your own system, and to let me know what system you're working on?
Also, within your own code below, what is the function of the (reldet > 1e14) comparison, and why 1e14?
inline bool counterclockwise(const Point& p0, const Point& p1, const Point& p2)
{
Point v0 = Point::vector(p0, p1);
Point v1 = Point::vector(p0, p2);
double det = Point::determinant(v0, v1);
if (det == 0)
return false;
double dist = v0.magnitude2() + v1.magnitude2();
double reldet = std::abs(dist / det);
if (reldet > 1e14)
return false;
return det > 0;
}
For reference, the test data is based on a 100 mm slice through a statically scanned point cloud of an office building, and the problem occurs regularly with similar extractions from a number of different datasets that I've checked.