A maddening walk-through of some basic concepts of Computational Geometry. Quite a lot of the material is about point clouds, 3D shape recognition and Machine Learning pre-processing. This is learning by doing, errors included. Everything will happen in slow motion.
Figures: a hexagonal lattice rotated; matching pairs of two regular lattices circled.
Wednesday, July 31, 2024
Lattice lambada in higher dimensions
Thursday, June 20, 2024
Tangentially, yours!
A problem recently popped up on the mental horizon: how close does a predicted trajectory of a ship come to the real trajectory? For a moment, let us consider only the unit sphere and unit vectors, and happily identify points $p$ of a unit sphere centered at the origin $O$ with vectors $p-O$... Also, we identify a point $p= (\theta,\phi)$, expressed by its latitude $\theta$ and longitude $\phi$, with its 3D map $p\rightarrow q$ image $q= (\cos\theta\cos\phi,\, \cos\theta\sin\phi,\, \sin\theta)$. So, reader, be aware. (And if the notation gets you baffled, check the notation page...)
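As a quick sanity check, here is the map above as a minimal NumPy sketch; note that $\theta$ is the latitude here, matching the $\sin\theta$ in the $z$-component (the function name is mine):

```python
import numpy as np

def sphere_point(theta, phi):
    """Map latitude theta and longitude phi (radians) to a point on the
    unit sphere: q = (cos(theta)cos(phi), cos(theta)sin(phi), sin(theta))."""
    return np.array([np.cos(theta) * np.cos(phi),
                     np.cos(theta) * np.sin(phi),
                     np.sin(theta)])
```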
Let us assume that an arc $[p_0,b]$ is part of the predicted trajectory and $p_1$ is the ground truth (pun intended). Here $p_0$ can be the last reference point and $b= \hat{p}_1$ is the prediction for $p_1$. The problem can be stated as finding the arc distance of the point $p_1$ from the great circle $C$ defined by the arc $[p_0,b]\subset C$. This great circle $C$ can be identified with a unit vector $c$, the normal of the plane of $C$.
OK, $c\perp b$ and $c\perp p_0$, that is: $c= \pm(p_0\times b)^0$, and we can find a vector $a\in [p_0,b]\subseteq C$ which is the projection of $p_1$ onto the plane of $C$, scaled so that $\|a\|=1$: \[a= (p_1 - (p_1\cdot c)\, c)^0.\;\; (1)\] Now, the arc lengths shown in Fig. 1 are: \[\alpha_\perp= |[a,p_1]|= |\pi/2-\cos^{-1}(c\cdot p_1)|\;\; (2) \\ \alpha_\parallel= |[a,p_0]|= \cos^{-1}(a\cdot p_0).\;\; (3)\] (The absolute value in Eq. (2) absorbs the sign ambiguity of $c$.)
Eq. (2) is useful for estimating the relative rendezvous error $e_p=|[a,p_1]|/|[p_0,p_1]|$, and Eq. (3) leads to the relative time error $e_t= |[p_0,a]|/|[p_0,b]|-1$. The estimate of $e_t$ assumes that time information was not part of the training of the predictor, which is not always the case.
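Eqs. (1)-(3) and the two error measures can be sketched in NumPy as follows (all inputs are unit vectors; the function names are mine, and the absolute value guards against the sign ambiguity of $c$):

```python
import numpy as np

def unit(v):
    # normalization, the (.)^0 operation in the text
    return v / np.linalg.norm(v)

def arc(u, v):
    # arc length between unit vectors; clip guards against rounding
    return np.arccos(np.clip(np.dot(u, v), -1.0, 1.0))

def trajectory_errors(p0, b, p1):
    """p0: last reference point, b: predicted point, p1: ground truth.
    Returns (alpha_perp, alpha_par, e_p, e_t)."""
    c = unit(np.cross(p0, b))                   # normal of the great circle C
    a = unit(p1 - np.dot(p1, c) * c)            # Eq. (1): projection onto C
    alpha_perp = abs(np.pi / 2 - arc(c, p1))    # Eq. (2)
    alpha_par = arc(a, p0)                      # Eq. (3)
    e_p = alpha_perp / arc(p0, p1)              # relative rendezvous error
    e_t = alpha_par / arc(p0, b) - 1.0          # relative time error
    return alpha_perp, alpha_par, e_p, e_t
```

For example, with $p_0$ on the equator, $b$ at longitude $0.2$ on the equator and $p_1$ at latitude $0.01$, longitude $0.1$, this gives $\alpha_\perp=0.01$, $\alpha_\parallel=0.1$ and $e_t=-0.5$.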
Now, the question arises... Should we use spherical geometry here at all? Would the computations of $e_p$ and $e_t$ become faster, or would the numerical accuracy be better, or both? The latter is important, since it is easy to run into numerical instability at distances below 20 km. As will be revealed soon (after the Midsummer Feast, I hope), the answer is a surprisingly close call. In general, a lot of trigonometric trickery can be substituted by vector formulas (using either geometric algebra or vector algebra) without much loss in computational efficiency, when operations happen in modern processing environments and intermediate results have enough memory to dwell in...
But while you are waiting, here is the planar shortcut, where all points are projected to the plane defined by the points $p_0$, $p_1$ and $b$: $l_\perp= \sqrt{1-\cos^2\beta}\,\|p_1-p_0\|$ and $l_\parallel= \cos\beta\,\|p_1-p_0\|$, where $\cos\beta= (p_1-p_0)^0\cdot (b-p_0)^0$.
It yields quite good results as long as the distances stay under 200 km ($|[u,v]|/\|u-v\| - 1 < 32\times10^{-3}$).
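The planar decomposition above is a one-liner in vector form; here is a small sketch (the function name is mine, and the points are plain 3D vectors, not necessarily on the sphere):

```python
import numpy as np

def planar_errors(p0, b, p1):
    """Decompose p1 - p0 into components along (l_par) and across (l_perp)
    the predicted direction b - p0, all within the plane of p0, p1, b."""
    u = (b - p0) / np.linalg.norm(b - p0)     # unit vector (b - p0)^0
    d = p1 - p0
    cos_beta = np.dot(d, u) / np.linalg.norm(d)
    l_par = cos_beta * np.linalg.norm(d)
    l_perp = np.sqrt(max(1.0 - cos_beta**2, 0.0)) * np.linalg.norm(d)
    return l_perp, l_par
```

The `max(..., 0.0)` keeps rounding noise from making the square root complain when $p_1$ lies exactly on the predicted direction.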
Monday, April 8, 2024
Eliminating close gaps between points
One posting (When do point sets intersect? (part 1)) touched on Poisson disk sampling (PDS). A grid-based approximation was used there because... it served the particular case considered there. There are many good algorithms; e.g. Bridson's algorithm (2007) runs in $\mathcal{O}(d\,|A|)$, where $A\subset\mathbb{R}^d$ is a point set dense enough that the Poisson disk radius $r$ is smaller than the mean distance $l_0$ between points: $r< l_0$.
PDS is sometimes needed in manifold operations (e.g. for creating a well-behaving kernel, or a balanced sampling), and quite fast algorithms exist for metric spaces. The defining property of PDS is that natural neighbor (NN) samples tend to be at distance $r$. We are interested in a case where the PDS distance $r < l_0$ is used to remove close points (red circles in Fig. 1) so that the distance histogram becomes truncated from below (e.g. to apply algorithms which are numerically sensitive to small gaps). We also want the samples not to get attracted by local voids; therefore we fill in some temporary bogus samples (green in Fig. 1).
Fig. 1. The initial set (black dots) is modified by removing close points (red circles) and adding bogus points (green) to free spaces.

The algorithm eliminates some points from the dense areas of the PC $A$ and adds the same number of points to a PC $B$. The algorithm focuses on a set $C_i=A_i\cup B_i$ on each iteration $i$ over steps 1-3. Initially $i:= 0$, $A_0:= A$ and $B_0:= \{\}$:
1. Iteration $i$: Produce a Delaunay triangulation $(C_i,T_i)$.
2. Remove some points $p\in C_i$ which are very close to another point.
3. Add the same number of points to the centers $c_t$ of the largest triangles $t\in T_i$.
4. Go to 1 as long as changes happen in steps 2 and 3.
5. Output $A_n$ or $C_n$, depending on the application case.
- Step 1: The triangulation $(C_i,T_i)$ can be approximate, especially when the dimensionality $d>3$. Then a neighbor set $NN(p)$ (in the 2D case, a triangle) can have a variable size, i.e. the property $|NN(p)| \equiv d+1$ characteristic of Delaunay simplices need not hold.
- Step 2: The number $k$ of points to be removed can be controlled. For example: remove $p$ or $q$ if $\|p-q\| \le l_{min}$, and increase $l_{min}$ slowly until it reaches the target value $r$.
- Step 3: The size of a triangle $t$ is its measure (area/volume/hypervolume...) or an approximation of it. Addition may not be possible on some iterations, but it may resume later. If there is an initial set with a much higher density, the added points can be substituted by their nearest matches there; otherwise the added points $B$ are excluded from the final result.
- Step 4 can include a control that lets the current $l_{min}$ approach the Poisson disk radius $r$.
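The steps above can be sketched in 2D with SciPy. This is only a sketch under my own assumptions: the initial $l_{min}=r/4$, the growth factor $1.3$, the greedy one-per-pair removal, and the function name are all my choices, not from the algorithm description.

```python
import numpy as np
from scipy.spatial import Delaunay, cKDTree

def thin_and_fill(A, r, max_iter=20):
    """2D sketch of steps 1-5: remove points with a neighbor closer than
    l_min (grown toward r) and add centroids of the largest triangles.
    Returns (A_n, B_n), the kept original points and the bogus points."""
    C = np.asarray(A, dtype=float)
    bogus = np.zeros(len(C), dtype=bool)        # marks added (green) points
    l_min = r / 4.0
    for _ in range(max_iter):
        # step 2: pick one point of each pair closer than l_min for removal
        drop = set()
        for i, j in sorted(cKDTree(C).query_pairs(l_min)):
            if i not in drop and j not in drop:
                drop.add(j)
        if not drop and l_min >= r:             # step 4: nothing changes
            break
        # steps 1 and 3: centroids of the largest Delaunay triangles
        tri = Delaunay(C)
        s = C[tri.simplices]
        areas = 0.5 * np.abs(
            (s[:, 1, 0] - s[:, 0, 0]) * (s[:, 2, 1] - s[:, 0, 1])
            - (s[:, 1, 1] - s[:, 0, 1]) * (s[:, 2, 0] - s[:, 0, 0]))
        new_pts = s[np.argsort(areas)[::-1][:len(drop)]].mean(axis=1)
        keep = np.array([k for k in range(len(C)) if k not in drop])
        C = np.vstack([C[keep], new_pts])
        bogus = np.concatenate([bogus[keep], np.ones(len(new_pts), bool)])
        l_min = min(r, 1.3 * l_min)
    return C[~bogus], C[bogus]                  # step 5: A_n and B_n
```

On a regular grid with one near-duplicate point and $r$ well below the grid spacing, the loop removes the duplicate, adds one centroid as a bogus point, and then converges once $l_{min}$ reaches $r$.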