A point cloud is a set of points $P=\{p_i\}_i\subset\mathbb{R}^d$, and its k-Nearest-Neighbors graph $G=(V,E)$ connects each point $u$ to its $k$ geometrically nearest neighbors $N_k(u)$ with edges: $$ E= \{(u,v) \mid u\in V,\, v \in N_k(u) \}. $$ It is wise to pre-compute all the neighborhoods before any other computations: typically so many later phases require the same preliminary step that the cost of storing the result becomes negligible. Below is a list of ways to implement this, which doubles as a task list for the coming months/years(?). I'm going to implement most of them in Python.
- Brute Force $O(n^2 \log{k})$
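The brute-force variant is simple enough to sketch right away (a hedged sketch; `knn_graph` and the point format are my own choices, not anything fixed yet): each point scans all others while a size-$k$ max-heap keeps the best candidates, which is where the $O(n^2\log{k})$ comes from.

```python
import heapq
import math

def knn_graph(points, k):
    """Return {i: sorted indices of the k nearest points to points[i]}."""
    edges = {}
    for i, p in enumerate(points):
        heap = []  # max-heap via negated distances, capped at size k
        for j, q in enumerate(points):
            if i == j:
                continue
            d = math.dist(p, q)
            if len(heap) < k:
                heapq.heappush(heap, (-d, j))
            elif -heap[0][0] > d:  # current k-th best is worse: replace it
                heapq.heapreplace(heap, (-d, j))
        edges[i] = sorted(j for _, j in heap)
    return edges
```

For instance, `knn_graph([(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (5.0, 5.0)], 2)` links each point to its two closest companions.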
- hyper-plus approach
- an intuitive approach one student outlined once... may be presented or omitted...
- Brute Force $N_k$ converted to guess-and-test $N_r$
- $maxRadius$ propagated to all $k$ neighbours
- get $N_k$ for one random point $p$ somehow
- each point $p_j \in p.N_k$ uses $maxRad + \|p_j.x - p.x\|$ as its initial guess and keeps propagating the guess to its own $p_j.N_k$ points: $p_j.initialRad = \min(p_j.initialRad,\, maxRad + \|p_j.x - p.x\|)$
- points traversed in the ascending order of $initialRad$
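A hedged sketch of the guess-and-test idea (all helper names are mine, and the propagation is done for one hop only). By the triangle inequality, $maxRad + \|p_j.x - p.x\|$ is a valid upper bound on $p_j$'s $k$-th neighbor distance, so a radius query with that guess already contains the true $k$ nearest; if a guess ever comes up short, it is grown and retested.

```python
import math

def knn_guess_and_test(points, k, seed=0):
    n = len(points)
    dist = lambda i, j: math.dist(points[i], points[j])

    def n_r(i, r):
        """Brute-force radius query; a spatial index would replace this."""
        return [j for j in range(n) if j != i and dist(i, j) <= r]

    # Brute-force N_k for one seed point; maxRad = distance to its k-th neighbor.
    by_dist = sorted((j for j in range(n) if j != seed),
                     key=lambda j: dist(seed, j))
    max_rad = dist(seed, by_dist[k - 1])

    # Propagate initial radius guesses outward from the seed (one hop here).
    guess = [float("inf")] * n
    guess[seed] = max_rad
    for j in range(n):
        if j != seed:
            guess[j] = max_rad + dist(seed, j)

    # Traverse points in ascending order of their initial radius guess.
    result = {}
    for i in sorted(range(n), key=lambda i: guess[i]):
        r = guess[i]
        hits = n_r(i, r)
        while len(hits) < k:  # guess too small: grow and retest
            r *= 2.0
            hits = n_r(i, r)
        result[i] = sorted(sorted(hits, key=lambda j: dist(i, j))[:k])
    return result
```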
- Brute Force with local sort which gets reused by neighboring points
- some easy partial hash solution...
- kd-tree $\Theta(n^{3/2}+k)$
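A from-scratch sketch of the kd-tree item (node layout and helper names are my own choices, not a settled design): build by median split on a cycling axis, then answer each $N_k$ query with a depth-first search that prunes any subtree whose splitting plane lies farther than the current $k$-th best distance.

```python
import heapq
import math

def build_kdtree(points, idxs=None, depth=0):
    """Node = (point index, split axis, left child, right child)."""
    if idxs is None:
        idxs = list(range(len(points)))
    if not idxs:
        return None
    axis = depth % len(points[0])
    idxs.sort(key=lambda i: points[i][axis])
    mid = len(idxs) // 2
    return (idxs[mid], axis,
            build_kdtree(points, idxs[:mid], depth + 1),
            build_kdtree(points, idxs[mid + 1:], depth + 1))

def kdtree_knn(points, tree, q_idx, k):
    """k nearest neighbors of points[q_idx], excluding the point itself."""
    q = points[q_idx]
    heap = []  # max-heap of (-dist, index), size <= k

    def visit(node):
        if node is None:
            return
        i, axis, left, right = node
        if i != q_idx:
            d = math.dist(q, points[i])
            if len(heap) < k:
                heapq.heappush(heap, (-d, i))
            elif -heap[0][0] > d:
                heapq.heapreplace(heap, (-d, i))
        diff = q[axis] - points[i][axis]
        near, far = (left, right) if diff <= 0 else (right, left)
        visit(near)
        # visit the far side only if the splitting plane is closer
        # than the current k-th best distance (or the heap is not full)
        if len(heap) < k or abs(diff) < -heap[0][0]:
            visit(far)

    visit(tree)
    return sorted(i for _, i in heap)
```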
- multi-quadtree with revised indexing
- generalized quadtree, indexing helps with neighborhood calculations, deepening, thinning and border point detection
- layered d-range-tree
- fractional cascading $O(kn\log{n})$
- randomized approach
- a random sample that picks every point with probability $p=1/k$
- with good probability, exactly one of the $k$ nearest neighbors lands in the sample
- compute the nearest-neighbor in the sample
- repeat this $O(k \log{n})$ times
- with high probability the $k$ nearest points in the $O(k \log{n})$ points computed are the $k$ nearest neighbors to your query
- if you do $O(k\log{1/\delta})$ samples, then each query succeeds with probability $p_1=1-\delta$
- thus, finding the $k$ nearest neighbors for all $n$ points amounts to doing $O(kn\log{n})$ nearest-neighbor queries on samples.
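The sampling scheme above can be sketched like this (a probabilistic sketch; the function name is mine, and `rounds` stands in for the $O(k\log{1/\delta})$ repetition count — the inner nearest-neighbor search is brute force here, where a fast structure would go):

```python
import math
import random

def knn_by_sampling(points, q_idx, k, rounds):
    """Whp-correct k-NN of points[q_idx] via repeated 1/k-sampling."""
    n = len(points)
    candidates = set()
    for _ in range(rounds):
        # sample every point (except the query) with probability 1/k
        sample = [j for j in range(n)
                  if j != q_idx and random.random() < 1 / k]
        if not sample:
            continue
        # record the query's nearest neighbor inside the sample
        candidates.add(min(sample,
                           key=lambda j: math.dist(points[q_idx], points[j])))
    # the k nearest among all recorded candidates are, whp, the true k nearest
    best = sorted(candidates,
                  key=lambda j: math.dist(points[q_idx], points[j]))[:k]
    return sorted(best)
```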
- radix sort
- A cheap approximate solution using a "locality-sensitive hash" would be to convert each point to its bit-interleaved form, $(xxx,yyy,zzz) \rightarrow xyzxyzxyz$, then radix-sort the keys as preprocessing.
- pick your query point and take $k$ points in both directions along the sorted order to get a size-$2k$ candidate set; then keep the $k$ nearest to your point
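A sketch of the interleaving trick (helper names are mine; coordinates are assumed to be small non-negative integers so the bits can be interleaved directly):

```python
import math

def morton3(x, y, z, bits=10):
    """Interleave the low `bits` bits: (xxx, yyy, zzz) -> xyzxyzxyz."""
    key = 0
    for b in range(bits - 1, -1, -1):
        key = ((key << 3)
               | (((x >> b) & 1) << 2)
               | (((y >> b) & 1) << 1)
               | ((z >> b) & 1))
    return key

def approx_knn(points, q_idx, k, bits=10):
    """Approximate k-NN from a 2k-wide window around the query in key order."""
    order = sorted(range(len(points)),
                   key=lambda i: morton3(*points[i], bits))
    pos = order.index(q_idx)
    window = [i for i in order[max(0, pos - k):pos + k + 1] if i != q_idx]
    return sorted(sorted(window,
                         key=lambda i: math.dist(points[q_idx], points[i]))[:k])
```

In a real implementation the sort would be a radix sort over the fixed-width keys, and the sorted order would be computed once for all queries.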
- Morton Z-ordering
- all points within radius $r$ are within limit $\delta_N$ in the Z ordering!
- see the paper
- fair-split tree
- see the paper