In our previous exploration of the subject, we found a nice $\mathcal{O}(|A|+|B|)$ way of checking how much two point sets (2D or 3D) intersect. We found that a simple rounding trick delivered most of the goodies needed. But the corresponding $\mathcal{O}(n\log n)$, $n=|A\cup B|$, method lacked a similar scale sensitivity analysis. How to add that? One answer is a modified Laplacian interpolation of the local incidence ratios $\lambda'(p)$, shown in the picture below right. The original ratios are here. The color bar shows values $0\le \lambda'(p)\le 1$ (blue: a lonely stranger in a foreign land; yellow: a citizen among citizens).
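The rounding trick can be sketched in a few lines. This is a minimal reconstruction, not the exact code from part 1: the helper names, the grid size `eps`, and the cell-membership overlap measure are my assumptions.

```python
# Sketch of the O(|A|+|B|) rounding check (reconstruction): snap points
# to a grid of cell size eps; a point of A counts as "inside" B if some
# point of B lands in the same grid cell. Note: Python's round() uses
# banker's rounding, so points exactly on cell boundaries may snap oddly.

def snap(points, eps):
    """Round each point to its integer grid-cell index."""
    return {tuple(round(c / eps) for c in p) for p in points}

def overlap_ratio(A, B, eps):
    """Fraction of points of A whose grid cell also contains a point of B."""
    cells_B = snap(B, eps)
    hits = sum(1 for p in A if tuple(round(c / eps) for c in p) in cells_B)
    return hits / len(A)

A = [(0.1, 0.2), (5.0, 5.0), (1.1, 0.9)]
B = [(0.0, 0.0), (1.0, 1.0)]
print(overlap_ratio(A, B, eps=1.0))   # prints 0.6666666666666666
```

Both building the cell set for $B$ and probing it with the points of $A$ are single passes, hence the linear total cost.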
A maddening walk-through of some basic concepts of Computational Geometry. Quite a lot of the stuff is going to be about point clouds, 3D shape recognition and Machine Learning pre-processing. This is learning by doing, errors included. Everything will happen in slow motion.
Hexagonal lattice rotated
Matching pairs of two regular lattices circled
Wednesday, January 6, 2021
When do point sets intersect (part 2)
The adjacency matrix $\mathcal{A}$ has elements $a_{ij}$ such that $a_{ij}= 1/|N(p_i)|$ if and only if $p_j\in N(p_i)$, and $a_{ii}\equiv 0$. One can modify this a bit, e.g. by using weighted inverse distances $a_{ij}= w_{ij}/\sum_{p_j\in N(p_i)} w_{ij}$, where $w_{ij}= 1/\max(\delta_0,\|p_i-p_j\|)$ and $\delta_0= 0.01$ m is the bumper margin for too-close point pairs. Now the original incidence values can be modified by: \[ \bar{\lambda}':= \mathcal{A}^u \bar{\lambda}' \]
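Building such a row-normalized, inverse-distance-weighted adjacency matrix might look like this. A sketch under assumptions: the neighborhoods $N(p_i)$ are taken as given (e.g. from a Delaunay triangulation), and the function name and data layout are mine.

```python
import numpy as np

def adjacency(points, neighbors, delta0=0.01):
    """Row-stochastic adjacency matrix with inverse-distance weights.

    points    : (n, d) array of coordinates
    neighbors : list of neighbor index lists, one N(p_i) per point
    delta0    : bumper margin (m) for too-close point pairs
    """
    n = len(points)
    A = np.zeros((n, n))
    for i, Ni in enumerate(neighbors):
        w = np.array([1.0 / max(delta0, np.linalg.norm(points[i] - points[j]))
                      for j in Ni])
        A[i, Ni] = w / w.sum()        # each row sums to 1; a_ii stays 0
    return A

# Three collinear points; hypothetical chain neighborhoods
pts = np.array([[0.0, 0.0], [1.0, 0.0], [3.0, 0.0]])
nbrs = [[1], [0, 2], [1]]
A = adjacency(pts, nbrs)
print(A[1])   # the closer neighbor p_0 gets the larger weight (2/3 vs 1/3)
```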
where $\bar{\lambda}'=\{\lambda'(p)\}_{p\in A\cup B}$ is an assembly vector. One can choose a power $u=0.23$, yielding a similar value $\lambda= 0.86$ as the rounding method. As long as the weights on each individual row $a_i$ of $\mathcal{A}$ sum to 1, $\sum_{j=1}^{n}a_{ij}=1$ with $n=|A\cup B|$, the resulting smoothing $f:=\mathcal{A}^uf$ has a unit property, which means that a constant function $f(p)$ stays constant. This can be extended to full interpolation, but that topic may come up later, if ever.
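The unit property is easy to check numerically. A toy sketch with a hypothetical row-stochastic matrix, using an integer power for simplicity:

```python
import numpy as np

# Toy row-stochastic adjacency (made-up values; every row sums to 1,
# diagonal is zero). Applying it to a constant field changes nothing.
A = np.array([[0.0, 0.7, 0.3],
              [0.5, 0.0, 0.5],
              [0.4, 0.6, 0.0]])

f = np.ones(3)              # constant incidence field f(p) = 1
for _ in range(5):          # integer power u = 5 for simplicity
    f = A @ f
print(f)                    # prints [1. 1. 1.] (up to float rounding)
```

For non-integer powers like $u=0.23$, one option is `scipy.linalg.fractional_matrix_power(A, u)`, which computes the matrix power via the eigendecomposition.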
Interestingly, the scale associated with $u=0.23$ is $\epsilon= 5.4$ m, which is within the rounding method's stability area $2.2\le \epsilon \le 8$ m. There are several ways to estimate this associated scale; the handiest one uses an assumption of uniform distribution and the mean distance $l_0= 3.5$ m of the example PCs $A$ and $B$, but a more general method searches for a stability area in the same way as was done with the rounding method. This is an interesting research question: how to choose $u$ so that the scale factor becomes controlled in a general PC (with relatively uniform spatial distribution)? And: are there meaningful interpolants $f'= \sum_{j=1}^m \mathcal{A}^{u_j}f$ with a relatively short length $m$ (say, 4) so that the scale is well controlled?
But above there are the original pointwise incidences (left) and the result of adjacency matrix smoothing (right). If we used the mere $1/|N(p_i)|$ weights and $u=1$, this would be graph Laplacian interpolation.
The matrix power $u=2$ is special, because it keeps a sparse matrix $\mathcal{A}$ sparse. Below are the non-zero elements of the initial matrix and of the power $\mathcal{A}^2$.
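One can count the non-zeros directly on a toy graph. A sketch, assuming a simple chain (path-graph) neighborhood, which is my choice, not the example PC of the post: $\mathcal{A}^2$ links two-hop neighbours, so it is denser than $\mathcal{A}$ but still sparse.

```python
import numpy as np

# Path graph on 8 nodes: A connects chain neighbours, A^2 connects
# two-hop neighbours (plus the diagonal, via go-and-return walks).
n = 8
A = np.zeros((n, n))
for i in range(n - 1):
    A[i, i + 1] = A[i + 1, i] = 1.0
A /= A.sum(axis=1, keepdims=True)    # row-normalize; pattern unchanged

A2 = A @ A
print(np.count_nonzero(A), np.count_nonzero(A2))   # prints 14 20
```

Since all entries are non-negative, no cancellation occurs: the pattern of $\mathcal{A}^2$ is exactly the two-step reachability of the graph.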
An interesting detail with both the inverse-length smoothing and the Laplacian is that on each iteration the original values disappear, and the mean of the surrounding values substitutes for them. Since the mean natural neighborhood size is $\bar{n}= \text{mean}_{p\in P}|N(p)| \approx 6$ (by Leonhard Euler, btw!), approximately 1/7th of the information disappears on each round, while the node values (in this case the $\lambda'(p)$'s) get smoother by substitution and the overall value field progresses towards a harmonic function. And harmonic functions have the minimum local information density of all smooth functions. So, adjacency power interpolation is a fancy way of regularization, where the information content gets destroyed according to the power constant $u$ chosen.
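The progressive loss of local detail is visible in a one-dimensional sketch. The chain graph and the random field below are my toy setup, not the post's point clouds:

```python
import numpy as np

# Each application of A replaces a node value by the mean of its
# neighbours' values, so a noisy field flattens out and the spread
# (standard deviation) shrinks as the power u grows.
rng = np.random.default_rng(0)
n = 20
A = np.zeros((n, n))
for i in range(n):
    Ni = [j for j in (i - 1, i + 1) if 0 <= j < n]   # chain neighbours
    A[i, Ni] = 1.0 / len(Ni)

lam = rng.random(n)                          # noisy lambda'(p) values
smoothed = np.linalg.matrix_power(A, 25) @ lam
print(lam.std(), smoothed.std())             # the spread shrinks a lot
```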
More about geometric information perhaps in the next year 2022!