Author's Accepted Manuscript

Reconstruction of Polygonal Prisms from Point-Clouds of Engineering Facilities

JOURNAL OF COMPUTATIONAL DESIGN AND ENGINEERING

Akisato Chida, Hiroshi Masuda

www.elsevier.com/locate/jcde

PII: S2288-4300(16)30005-7

DOI: http://dx.doi.org/10.1016/j.jcde.2016.05.003

Reference: JCDE57

To appear in: Journal of Computational Design and Engineering

Received date: 8 January 2016

Accepted date: 23 May 2016

Cite this article as: Akisato Chida and Hiroshi Masuda, Reconstruction of Polygonal Prisms from Point-Clouds of Engineering Facilities, Journal of Computational Design and Engineering

http://dx.doi.org/10.1016/j.jcde.2016.05.003

This is a PDF file of an unedited manuscript that has been accepted for publication. As a service to our customers we are providing this early version of the manuscript. The manuscript will undergo copyediting, typesetting, and review of the resulting galley proof before it is published in its final citable form. Please note that during the production process errors may be discovered which could affect the content, and all legal disclaimers that apply to the journal pertain.

Reconstruction of Polygonal Prisms from Point-Clouds of Engineering Facilities

Akisato Chida^a and Hiroshi Masuda^a,*

^a Department of Information Science and Engineering, The University of Electro-Communications, Tokyo, Japan

Abstract

The advent of high-performance terrestrial laser scanners has made it possible to capture dense point-clouds of engineering facilities. 3D shape acquisition from engineering facilities is useful for supporting maintenance and repair tasks. In this paper, we discuss methods to reconstruct box shapes and polygonal prisms from large-scale point-clouds. Since many faces may be partly occluded by other objects in engineering plants, we estimate possible box shapes and polygonal prisms and verify their compatibility with measured point-clouds. We evaluate our method using actual point-clouds of engineering plants.

Key words: geometric modeling, point processing, shape reconstruction, terrestrial laser scanner

1. Introduction

3D shape acquisition from engineering facilities is very important for maintenance and repair tasks. It is well known that model-based planning can substantially reduce the rework of maintenance and repair tasks for engineering facilities. Many CAD systems provide capabilities for task simulation based on 3D solid models. However, reliable 3D models of old engineering facilities rarely exist, because such facilities were typically built one or more decades ago from 2D drawings and have been repeatedly renovated over their long lifecycles. In most cases, 3D models have to be newly created from the existing facilities.

The advent of high-performance terrestrial laser scanners has made it possible to capture dense point-clouds of engineering facilities. State-of-the-art phase-based laser scanners can capture about one million points per second. By using such laser scanners, large engineering facilities can be represented with a huge number of discrete points. When a facility is measured at intervals of 6.3 mm at a distance of 10 m, the number of points is about fifty million per scan. An engineering facility is typically measured from dozens of positions to reduce occlusion, so the total number of points reaches hundreds of millions. Large-scale point-clouds represent faithful as-is shapes, but their data sizes are too large to handle on common PCs.

It is often necessary to convert large-scale point-clouds into more concise surface models. So far, several researchers have proposed methods for reconstructing pipe structures from point-clouds captured by terrestrial laser scanners. Rabbani et al. [1] and Belton et al. [2] detected planar and cylindrical surfaces using principal component analysis. Vosselman et al. extracted planes and cylinders using the 3D Hough transform [3]. Lee et al. detected cylindrical pipes using the Voronoi diagram [4]. Kawashima et al. estimated normal vectors from a point-cloud and detected cylindrical parts based on the normal vectors [5]. For robustly extracting planes and cylinders, Masuda et al. projected a point-cloud onto image space and detected surfaces by applying region growing in the 2D image space [6, 7]. Mizoguchi et al. introduced the Manhattan-world assumption and robustly detected pipes placed vertically or horizontally in manufacturing plants [8].

However, these methods extracted only primitive surfaces and did not discuss how to reconstruct volumetric shapes. Since pipes consist of cylindrical surfaces, pipe structures can be reconstructed simply by estimating the radii and lengths of cylinders. Other shapes, such as boxes and polygonal prisms, have to be reconstructed by combining surfaces so that volumetric shapes are created. In addition, while planar surfaces can be extracted using existing methods, estimating the boundaries of planar faces is not trivial, because many of them are partly occluded by other objects.

In engineering facilities, we can observe that, apart from pipe structures, the main objects are mostly box shapes or polygonal prisms. In this paper, we discuss methods to extract box shapes and polygonal prisms from large-scale point-clouds.

* Corresponding author.

E-mail address: h.masuda@uec.ac.jp (H. Masuda)

2. Overview

In engineering facilities, most objects consist of planes and cylinders. Conical and toroidal surfaces also appear, but they are mostly used to connect cylindrical pipes, and their sizes and positions can be estimated from the cylinders [4]. Other common objects in engineering facilities are cuboids and polygonal prisms. In this paper, we refer to cuboids as box shapes.

Fig. 1 illustrates our shape reconstruction method. An engineering facility is represented with point-clouds. In our system, multiple scans are processed separately. We suppose that the coordinates of points in each scan are represented in the scanner-centered coordinate system. Then the points can be ordered on a 2D image (Fig. 1(b)). In this paper, we refer to points ordered in this lattice manner as the 2D map. We mainly process a point-cloud on the 2D map.

Then planes are extracted from the point-cloud. In Fig. 1(c), planar regions are shown in different colors on the 2D map. Our plane detection method is based on our previous work [7]. Next, we estimate box shapes using the relationships among planar regions. Since points are discrete and noisy, the boundary of each planar region cannot be obtained precisely. In addition, faces are often partly occluded by other objects. Therefore, we extract possible pairs of planar regions to reconstruct box shapes (Fig. 1(d)). Polygonal prisms are also estimated using planar regions (Fig. 1(e)). Estimated objects are then confirmed using a visibility check (Fig. 1(f)). If estimated objects are inconsistent with the point-cloud, they are discarded.

Fig. 1. Process of shape reconstruction. (a) Point-cloud; (b) points on a 2D lattice; (c) extraction of planar regions; (d) estimation of box shapes; (e) estimation of polygonal prisms; and (f) confirmation using visibility check.

3. Extraction of Planes and Cylinders

3.1 Generation of 2D map

Terrestrial laser scanners emit laser pulses and measure the round-trip travel time of the pulses reflected from objects. Fig. 2(a) shows a typical mechanism of a terrestrial laser scanner. The laser scanner continuously emits laser pulses from the light source. The directions of the laser pulses are moved vertically by the spinning mirror and horizontally by the rotating body of the laser scanner. The laser scanner stores the directions and the round-trip travel times, which are converted into 3D coordinates in the post-processing phase.

According to this mechanism, points in a point-cloud file are regularly ordered. The directions of laser pulses can be represented using the azimuth angle θ and the zenith angle φ. When the sampling intervals of each angle are constant, points are regularly ordered in the angle space. Fig. 2(b) shows the 2D map that is generated from regularly ordered points. The brightness of each pixel in the image shows the strength of the reflected laser pulse, which is typically output along with the coordinates by the laser scanner. One popular format for point-clouds is the PTX format, in which points are ordered in a 2D lattice manner. Since each point can be mapped onto (I, J) in the 2D map, neighbor points can be obtained quickly.
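The (I, J) mapping described above can be sketched as follows. This is a minimal illustration under the assumption of constant angular steps; the function name and argument layout are our own, not part of the paper or any scanner SDK.

```python
import numpy as np

def to_map_index(xyz, d_theta, d_phi):
    """Map a scanner-centered point to its (I, J) cell on the 2D map.

    Assumes constant angular sampling steps d_theta (azimuth) and
    d_phi (zenith).  Illustrative sketch only.
    """
    x, y, z = xyz
    rho = np.sqrt(x * x + y * y + z * z)   # range from the scanner
    theta = np.arctan2(y, x)               # azimuth angle
    phi = np.arccos(z / rho)               # zenith angle
    i = int(round(theta / d_theta))        # column on the 2D map
    j = int(round(phi / d_phi))            # row on the 2D map
    return i, j, rho
```

Once every point carries its (I, J) index, the neighbors of a point are reached by constant-time index offsets instead of a spatial search.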

Fig. 2. (a) Terrestrial laser scanner; and (b) point-cloud on angle space.

3.2 Surface detection

The RANSAC method is often used to detect planar and cylindrical regions in a point-cloud. To extract planes using the RANSAC method, three points are randomly selected and a plane equation is calculated. Then the number of points on the plane is counted. This process is iterated many times and the system maintains the plane equation with the maximum number of points.
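The sampling loop described above can be sketched as a minimal RANSAC plane fit. This is our own illustrative code, not the authors' implementation; thresholds and iteration counts are arbitrary defaults.

```python
import numpy as np

def ransac_plane(points, n_iter=200, tol=0.01, rng=None):
    """Minimal RANSAC plane fit: sample three points, form a plane,
    count the points near it, and keep the best hypothesis."""
    rng = np.random.default_rng(rng)
    best_inliers, best_plane = 0, None
    for _ in range(n_iter):
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(p1 - p0, p2 - p0)          # plane normal
        norm = np.linalg.norm(n)
        if norm < 1e-12:                        # degenerate (collinear) sample
            continue
        n /= norm
        d = np.abs((points - p0) @ n)           # point-to-plane distances
        count = int((d < tol).sum())
        if count > best_inliers:
            best_inliers, best_plane = count, (n, p0)
    return best_plane, best_inliers
```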

It is well known that the RANSAC method is prohibitively time-consuming when it is applied to a large-scale point-cloud. Schnabel et al. extracted primitive surfaces from a relatively small point-cloud [9], but their method is very time-consuming when it is applied to tens of millions of points.

To extract surfaces in a practical time, we convert a point-cloud into a 2D lattice of points using angle space, as shown in Fig. 2 (b), and subdivide the 2D image into small regions. In our method, we subdivide a point-cloud into continuous regions using the criteria for connectivity. In our previous work, we subdivided points on angle space [10]. In this paper, we use a similar technique, but we simplify the criteria for connectivity to improve robustness.

We suppose that points are measured at an equal angle interval Δφ [radian] in the azimuth and zenith angles, each of which is defined in Fig. 2. We also suppose that points are densely sampled, and therefore the value Δφ is very small. Then, when two adjacent points p1 and p2 lie on a continuous surface, their distance can be estimated as:

|p1 − p2| ≈ ρ Δφ / cos α,  (1)

where α is the angle between the laser beam and the normal vector of the surface on which the two points lie, and ρ = |p1| is the distance from the origin of the scanner-centered coordinate system.

To obtain angle a, the normal vector has to be estimated at each point. Normal vectors can be estimated using the principal component analysis, but they are noisy near the boundaries of surfaces. Therefore, we simplify the criteria to robustly estimate the connectivity of points.

It is well known that measured points become very noisy when the angle α is close to 90 degrees, because only a small fraction of each laser pulse is reflected from such inclined surfaces. Since noisy points are normally eliminated in pre-processing, we suppose that the angle α is less than 70 degrees. Then we can regard two points as lying on a continuous surface when they satisfy:

|p1 − p2| ≤ ρ Δφ / cos 70°.  (2)

We detect each continuous surface using the region-growing method. We select an arbitrary point as a seed and extract a continuous region according to criterion (2). We repeat this process until all points are segmented into continuous regions. Fig. 3 shows an example of segmented regions. In this example, 1,542 continuous regions were extracted. We discarded small regions that consist of fewer than 300 points.
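The region-growing step above can be sketched on a 2D range map. As a simplification, the sketch compares the ranges of 4-connected pixels against the adaptive threshold ρ Δφ / cos 70°; this range-difference test is our stand-in for the point-distance test, and all names are illustrative.

```python
import numpy as np
from collections import deque

def grow_regions(rho, d_phi, min_pts=300):
    """Segment a 2D range map into continuous regions by region growing.

    rho: 2D array of measured ranges (0 = no return).  Two adjacent
    pixels are joined when their range difference is below
    rho * d_phi / cos(70 deg), mimicking criterion (2).
    """
    h, w = rho.shape
    labels = -np.ones((h, w), dtype=int)     # -1: unvisited, -2: discarded
    k = d_phi / np.cos(np.radians(70.0))     # adaptive threshold factor
    n_regions = 0
    for sj in range(h):
        for si in range(w):
            if labels[sj, si] != -1 or rho[sj, si] == 0:
                continue
            queue, region = deque([(sj, si)]), [(sj, si)]
            labels[sj, si] = n_regions
            while queue:
                j, i = queue.popleft()
                for dj, di in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    nj, ni = j + dj, i + di
                    if (0 <= nj < h and 0 <= ni < w
                            and labels[nj, ni] == -1 and rho[nj, ni] > 0
                            and abs(rho[nj, ni] - rho[j, i]) < k * rho[j, i]):
                        labels[nj, ni] = n_regions
                        queue.append((nj, ni))
                        region.append((nj, ni))
            if len(region) < min_pts:        # discard small regions
                for j, i in region:
                    labels[j, i] = -2
            else:
                n_regions += 1
    return labels, n_regions
```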

Fig. 3. Segmentation to continuous regions.

3.3 Detection of cylinders and planes

Planes are detected in each continuous region using the RANSAC method. We denote the total number of points in a point-cloud by N and the number of points on a certain surface by n. The calculation time of the RANSAC method is determined by the ratio n/N. When the ratio is very small, the number of iterations required by the RANSAC method becomes prohibitively large.
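The dependence on n/N can be made concrete with the standard RANSAC iteration estimate: to draw at least one all-inlier sample of size s with probability P, one needs about log(1 − P) / log(1 − (n/N)^s) iterations. This formula is the textbook estimate, not one from the paper:

```python
import math

def ransac_iterations(inlier_ratio, n_sample=3, success=0.99):
    """Iterations needed so that at least one all-inlier sample is
    drawn with the given success probability (standard RANSAC bound)."""
    return math.ceil(math.log(1.0 - success)
                     / math.log(1.0 - inlier_ratio ** n_sample))

print(ransac_iterations(0.50))   # a large wall: 35 iterations
print(ransac_iterations(0.01))   # a tiny part: roughly 4.6 million
```

This is exactly why large floors and walls are cheap to find, while a small surface buried in the full point-cloud is prohibitively expensive.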

Fortunately, large continuous regions in manufacturing facilities include large planar floors or walls. Since floor and wall planes are very large, they can be detected in a small number of iterations. When floors and walls are eliminated, each continuous region can be segmented into small continuous regions. In our method, continuous regions are recursively segmented each time when a surface is extracted and it is removed from the continuous region.

In our method, a planar region and a cylindrical region are simultaneously extracted from a continuous region, and the larger region is selected. This prevents a cylindrical surface from being subdivided into many strip-shaped planes. Surface detection is repeated until no region with more than m points can be extracted. In this paper, we set m to 300.

Cylindrical surfaces can be extracted using the RANSAC method proposed by Schnabel et al. [9]. In this method, the normal vector is estimated at each point, and two points are randomly selected along with their normal vectors. The direction v of the center axis is calculated as the cross product of the two normal vectors, and the center and the radius of the cylinder are estimated on the plane whose normal vector is v. Then the number of points on the cylinder is counted. This process is iterated many times, and the cylinder with the maximum number of points is selected.
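The two-point sampling step can be sketched as follows: the axis is the cross product of the two normals, and the center and radius follow by intersecting the two normal lines on the plane perpendicular to the axis. This is our own sketch of the sampling step described in [9], with illustrative names, and it omits the inlier-counting loop.

```python
import numpy as np

def cylinder_from_two(p1, n1, p2, n2):
    """Cylinder hypothesis from two points and their (unit) normals."""
    v = np.cross(n1, n2)
    v = v / np.linalg.norm(v)                  # axis direction
    u = np.cross(v, n1)
    u = u / np.linalg.norm(u)                  # basis of the plane with normal v
    w = np.cross(v, u)
    # Project points and normals onto the plane perpendicular to v.
    P = np.array([[p.dot(u), p.dot(w)] for p in (p1, p2)])
    M = np.array([[n.dot(u), n.dot(w)] for n in (n1, n2)])
    M = M / np.linalg.norm(M, axis=1, keepdims=True)
    # Intersect the two normal lines: P[0] + t0*M[0] = P[1] + t1*M[1].
    A = np.column_stack([M[0], -M[1]])
    t = np.linalg.solve(A, P[1] - P[0])
    center2d = P[0] + t[0] * M[0]              # axis point in (u, w) coordinates
    radius = abs(t[0])                         # distance from the axis to p1
    return v, center2d, radius
```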

Fig. 4(a) shows planar surfaces and Fig. 4(b) shows cylindrical surfaces. Fig. 5 shows planar and cylindrical surfaces extracted from an engineering facility.

Fig. 4. Extracted surfaces. (a) Planar regions; and (b) cylindrical regions.

Fig. 5. Planes and cylinders extracted from a point-cloud.

4. Reconstruction of Box Shapes and Polygonal Prisms

4.1 Reconstruction of box shapes

While 3D models of pipes can be reconstructed simply by extending cylindrical surfaces, the reconstruction of objects from planar faces is more complicated, because many faces are partly occluded, and therefore the boundary edges of each face have to be estimated. Since many objects in engineering facilities are box shapes or polygonal prisms, we estimate object shapes using perpendicular planar faces. In our method, we verify estimated shapes by investigating their consistency with the measured points.

As shown in Fig. 6, we can consider three cases for the combinations of planar regions to reconstruct box shapes. In Fig. 6 (a), three perpendicular faces are detected, and in Fig. 6 (b), two faces of a box are detected. In both cases, some faces may be separated by chamfers, or a single face may be divided into multiple regions. Fig. 6 (c) shows a special case, in which some faces are nearly perpendicular to laser pulses. In this paper, we reconstruct box shapes only when two or three perpendicular faces are extracted, because the depth of the box in Fig. 6 (c) cannot be determined.

To robustly extract box shapes, we first enumerate possible combinations of planes and then verify the consistency of each box shape. In our method, neighbor planes are detected on the 2D image. To detect neighbor planes, we select a planar region and enlarge the selected region by r pixels from each boundary point p. We denote the angle interval as Δφ and the normal of the plane as n. L is a constant threshold value, which indicates the maximum length from the boundary; in this paper we set L to 6 cm. Then the search range r can be determined as:

r = round( L cos α / (|p| Δφ) ),  (3)

where |p| is the distance from the scanner to the boundary point p and α is the angle between the laser beam and the normal n.

We expand each region within r pixels from its boundary points. When coplanar planar regions are detected in the expanded pixels, they are merged with the seed region. When perpendicular faces are detected, they are regarded as pair regions. When pairs of perpendicular planes are detected, the intersection lines are calculated, as shown in Fig. 7. When three planar regions are obtained, three intersection lines are detected. The sizes of the rectangles are determined so that the rectangles cover the planar regions. When only two planar regions are obtained, a single intersection line is calculated. Then the two rectangles are determined so that the regions are covered, and finally a box shape is obtained, as shown in Fig. 7(b).
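The shared edge of two perpendicular faces is the intersection line of their planes, which can be sketched as follows. The planes are written as n·x = d; the code is an illustrative helper, not the paper's implementation.

```python
import numpy as np

def plane_intersection(n1, d1, n2, d2):
    """Intersection line of two planes n.x = d; returns (point, direction)."""
    direction = np.cross(n1, n2)
    direction = direction / np.linalg.norm(direction)
    # Any point on both planes: solve the underdetermined 2x3 system
    # in the least-squares (minimum-norm) sense.
    A = np.vstack([n1, n2])
    point, *_ = np.linalg.lstsq(A, np.array([d1, d2]), rcond=None)
    return point, direction
```

Clipping this infinite line to the extents of the two planar regions then yields the boundary edge of the box face.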

Fig. 6. Combinations of extracted planes. (a) Three planes; (b) two planes; and (c) a single plane.

Fig. 7. Calculation of boundary edges. (a) Calculation from three faces; and (b) calculation from two faces.

4.2 Visibility check of estimated shapes

When two or three perpendicular planar regions are detected on the 2D image, box shapes can be reconstructed, as shown in Fig. 7. However, false boxes may be generated from inadequate pairs of planar regions. To avoid false boxes, we apply a visibility check to investigate the consistency with the point-cloud.

As shown in Fig. 8(a), if a rectangular face is partly occluded, other objects must exist between the occluded face and the laser scanner. Then points must exist in front of the plane. If a measured point is located behind the plane, as shown in Fig. 8(b), the rectangle is inconsistent, and the object is not a cuboid.

The visibility check is applied to all points {p_i} inside the rectangle region. As shown in Fig. 8(b), when the distance of p_i is larger than the distance of the intersection point on the plane, the rectangular face is regarded as inconsistent. When all rectangular faces are consistent, the object is regarded as a box shape. Otherwise, the estimated box shape is rejected.
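The per-face test can be sketched as follows. For brevity the sketch checks every returned pixel against the face's plane and omits the test that the ray actually pierces the rectangle; `rays` (unit view direction per pixel) and the tolerance are our own illustrative assumptions.

```python
import numpy as np

def face_is_consistent(rho_map, plane_n, plane_d, rays, eps=0.02):
    """Visibility check for one face (plane n.x = d).

    For each pixel, the measured range must not exceed the range of the
    ray/plane intersection: a point behind the face contradicts the
    hypothesized box.  Illustrative sketch only.
    """
    for (j, i), r in np.ndenumerate(rho_map):
        if r == 0:
            continue                       # no return on this pixel
        v = rays[j, i]
        denom = v.dot(plane_n)
        if abs(denom) < 1e-9:
            continue                       # ray parallel to the face
        t = plane_d / denom                # range to the plane along the ray
        if t > 0 and r > t + eps:          # measured point behind the face
            return False
    return True
```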

Fig. 8. Visibility check. (a) Occluded by the obstacle; and (b) an inconsistent rectangle.

4.3 Reconstruction of polygonal prisms

When a box shape is rejected by the visibility check, polygons are searched for on the planar regions. Fig. 9(a) shows a rejected box shape, and Fig. 9(b) shows one plane of the box. This face is inconsistent as a rectangular face. Then the boundary points of the planar region are extracted, and straight lines are searched for using the RANSAC method. In Fig. 9(b), the detected straight lines are shown in blue. A polygonal face is reconstructed by calculating the intersection points between the detected straight lines and the edges of the rectangle, as shown in Fig. 9.
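The line search on the boundary points can be sketched as a 2D RANSAC fit (our own illustrative code, with arbitrary defaults): sample two boundary points, form a line, and keep the line with the most inliers; repeating this after removing inliers yields the set of boundary lines.

```python
import numpy as np

def ransac_line_2d(pts, n_iter=100, tol=0.02, rng=None):
    """Fit one dominant line to 2D boundary points with RANSAC."""
    rng = np.random.default_rng(rng)
    best_count, best_line = 0, None
    for _ in range(n_iter):
        a, b = pts[rng.choice(len(pts), 2, replace=False)]
        d = b - a
        norm = np.linalg.norm(d)
        if norm < 1e-12:
            continue                               # degenerate sample
        n = np.array([-d[1], d[0]]) / norm         # unit normal of the line
        dist = np.abs((pts - a) @ n)               # point-to-line distances
        count = int((dist < tol).sum())
        if count > best_count:
            best_count, best_line = count, (a, d / norm)
    return best_line, best_count
```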

When a polygonal face is generated, a polygonal prism is created by sweeping the polygonal face. Then the visibility check is applied to the polygonal prism. When the visibility check fails, the polygonal prism is rejected.

Fig. 9. Generation of polygonal prisms. (a) An inconsistent box shape; (b) generation of a polygon; and (c) a created polygonal prism.

5. Experimental Results

In the first experiment, we placed 14 boxes on the floor and measured a point-cloud using a terrestrial laser scanner. We used the HDS7000, a phase-based laser scanner developed by Leica Geosystems. The resolution was 12.6 mm at a distance of 10 m. Then we extracted box shapes using our method.

Fig. 10(a) shows a reflectance image of the point-cloud, and Fig. 10(b) shows the detected boxes. Thirteen boxes were extracted using our method. The one box shown in a red circle could not be detected, because only a single planar face was extracted from it. Box shapes were also extracted from a sofa and a power board. The fabric cushion shown in yellow could not be detected, because its side face was not recognized as a plane.

Table 1 shows the precision and recall. In this experiment, all detected boxes were correct, but one box was not detected. The calculation time was 10.6 seconds on a PC with a 3.4 GHz Intel Core i7-2600 CPU and 16 GB RAM.

In the second experiment, we measured point-clouds of a boiler room. The resolution was 6.3 mm at a distance of 10 m. This room contains many box shapes; we could recognize 91 box shapes on the screen. In this example, 5,580 planar faces were detected, and 68 boxes were correctly detected by combining planar faces. The detected boxes are shown in yellow in Fig. 11. Green boxes show inconsistent shapes that failed the visibility check. Table 2 shows the precision and recall, both of which were more than 80%. The calculation time was 82.5 seconds. In this example, some wrong boxes were generated. In our method, cylindrical surfaces are detected to avoid strip-shaped planes. However, since other types of surfaces, such as tori, were not eliminated, some of them were detected as planes.

In the third experiment, we applied our method to a point-cloud captured in the machine room of our university. We displayed the point-cloud on the screen and found 115 boxes and one polygonal prism in it. In this experiment, we verified whether the polygonal prism could be distinguished from the many box shapes. The result is shown in Fig. 12, in which the polygonal prism is shown in red. Our method could automatically detect 107 boxes (93.8%) and the polygonal prism successfully. Inconsistent shapes are shown in green.

Fig. 10. Reconstruction of box shapes. (a) Points with colors; and (b) extracted box shapes.

Table 1. Detected shapes from a room in which boxes were placed.

Number of points: 10.6 million
Detected planes: 1,070
Detected boxes / total number: 13 / 14
Precision: 100%
Recall: 92.9%
CPU time: 10.6 sec

Fig. 11. Reconstructed box shapes from a boiler room.

Table 2. Detected shapes from a boiler room.

Number of points: 40.7 million
Detected planes: 5,580
Detected box shapes: 91
Precision: 80.2% (73/91)
Recall: 84.0% (68/81)
CPU time: 82.5 sec

Fig. 12. Detection of box shapes and a polygonal prism.

6. Conclusion

In this paper, we proposed a method for reconstructing box shapes and polygonal prisms from large-scale point-clouds. In our method, planar regions are detected and combined so that box shapes or polygonal prisms are constructed. Since many faces are partly occluded, we estimate rectangular or polygonal faces so that the measured points are covered. To avoid generating false shapes, the system verifies estimated shapes using the visibility check. We evaluated our method using three examples, and in our experiments it achieved good precision and recall rates.

In future work, we would like to reconstruct more complex shapes that can be observed in manufacturing facilities. We would like to investigate flexible templates for typical shapes. In engineering plants for fluid materials, many parts consist of bodies of revolution. It would be convenient to decompose point-clouds into swept shapes, revolution shapes, and so on.

References

[1] Rabbani T, Heuvel F. 3D industrial reconstruction by fitting CSG models to a combination of images and point clouds. The International Archives of Photogrammetry, Remote Sensing and Spatial Information Sciences. 2004; 35(B5).

[2] Belton D, Lichti D. Classification and segmentation of terrestrial laser scanner point clouds using local variance information. The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences. 2006; 36(5): 44-49.

[3] Vosselman G, Gorte BGH, Sithole G, Rabbani T. Recognizing structure in laser scanner point clouds. International Archives of Photogrammetry, Remote Sensing and Spatial Information Sciences. 2004; 46(8): 33-38.

[4] Lee J, Kim C, Son H, Kim C. Skeleton-based 3D reconstruction of as-built pipelines from laser-scanned data. In: ASCE International Conference on Computing in Civil Engineering; 2012 June 17-20; Clearwater Beach, Florida; 245-252.

[5] Kawashima K, Kanai S, Date H. As-built modeling of piping system from terrestrial laser scanned point clouds using normal-based region-growing. In: Asian Conference on Design and Digital Engineering; 2013; 12-14.

[6] Masuda H, Tanaka I. Extraction of surface primitives from noisy large-scale point-clouds. Computer-Aided Design and Applications. 2009; 6(3): 387-398.

[7] Masuda H, Tanaka I. As-built 3D modeling of large facilities based on interactive feature editing. Computer-Aided Design and Applications. 2010; 7(3): 349-360.

[8] Mizoguchi T, Kuma T, Kobayashi Y, Shirai K. Manhattan-world assumption for as-built modeling of industrial plant. Key Engineering Materials. 2012; 523: 350-355.

[9] Schnabel R, Wahl R, Klein R. Efficient RANSAC for point-cloud shape detection. Computer Graphics Forum. 2007; 26(2): 214-226.

[10] Masuda H, Niwa T, Tanaka I, Matsuoka R. Reconstruction of polygonal faces from large-scale point clouds of engineering plants. Computer-Aided Design and Applications. 13(4): 511-518.

Highlights

This paper proposes a point-based method for reconstructing boxes and polygonal prisms in engineering plants.

Many faces may be partly occluded by other objects in engineering plants, so possible shapes are estimated and then verified using their compatibility with the measured point-clouds.

In our experiments, our method achieved high precision and recall rates.