Point Cloud-to-BIM: automatic pipeline
This demo illustrates the full Point Cloud to BIM pipeline on a model that allows near-fully automatic recognition.
Loading and setup
We begin by running the ML server using Docker — this is required for semantic segmentation. The ML component runs as a separate Docker + Python service, independent from the C++ API. Next, we launch OdaScan2BimApp, select Point Cloud to BIM mode, and load a point cloud.
Stage 1: Semantic segmentation (optional, ML-based)
We initiate semantic segmentation from the dedicated palette. The model used is Point Transformer V2 Extended — one of the two models we currently focus on, the other being PointMLP. Both were selected based on our own experiments and were trained on S3DIS (the Stanford 3D Indoor Scene Dataset) as well as on our own annotated point clouds.
Semantic segmentation labels each point in the cloud as wall, floor, ceiling, or other. This is an optional step, but it significantly improves downstream accuracy — without it, the pipeline extracts planar regions from everything in the cloud, including tables, chairs, beds, and radiators. These non-structural objects produce noise that gets misinterpreted as false walls and floors during BIM recognition. Segmentation filters this noise early.
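The filtering idea can be sketched in a few lines of NumPy. The label encoding and function name below are purely illustrative assumptions, not the SDK's actual classes or API:

```python
import numpy as np

# Hypothetical label encoding (not the SDK's actual enum):
# 0 = wall, 1 = floor, 2 = ceiling, 3 = other (furniture, clutter).
STRUCTURAL_LABELS = (0, 1, 2)

def filter_structural(points: np.ndarray, labels: np.ndarray) -> np.ndarray:
    """Keep only structurally labeled points before planar-region extraction."""
    mask = np.isin(labels, STRUCTURAL_LABELS)
    return points[mask]

# Toy cloud: three structural points and two clutter ("other") points.
pts = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0],
                [5.0, 5.0, 1.0], [5.0, 5.0, 1.2]])
lbl = np.array([0, 1, 2, 3, 3])
print(filter_structural(pts, lbl).shape)  # (3, 3): clutter points removed
```

Dropping the "other" class up front is what keeps tables, chairs, and radiators from ever reaching the plane-detection stage.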
After completion, the application allows interaction with the segmentation results: classes can be selectively hidden or displayed, original point cloud colors can be restored, and different visualization modes can be combined.
Stage 2: Planar region calculation
This stage extracts planar surfaces from the semantically segmented point cloud. It uses supervoxel clustering and region growing for an initial oversegmentation (oversegmentation is deliberate, since undersegmentation could merge distinct surfaces into one region), then iterative RANSAC plane detection to find planar surfaces, followed by surface refinement using region growing.
The initial steps — supervoxel clustering, region growing, and RANSAC — use the third-party Point Cloud Library (PCL). Everything after that is ODA's own code: surface refinement and extension, and boundary recognition. Boundaries are determined by projecting 3D points onto 2D images (top-down and side-view projections), analyzing the 2D regions, and mapping them back into 3D.
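The iterative RANSAC step can be illustrated with a minimal NumPy sketch that repeatedly fits the best plane to the remaining points, peels off its inliers, and repeats until too few points are left. This is a simplification of what PCL provides; the tolerances and iteration counts are made-up assumptions:

```python
import numpy as np

def fit_plane(p3: np.ndarray):
    """Plane through 3 points: unit normal n and offset d with n·x + d = 0."""
    n = np.cross(p3[1] - p3[0], p3[2] - p3[0])
    norm = np.linalg.norm(n)
    if norm < 1e-12:          # degenerate (collinear) sample
        return None
    n = n / norm
    return n, -n.dot(p3[0])

def ransac_planes(points, dist_tol=0.02, min_inliers=50, iters=200, seed=0):
    """Iteratively detect planes: fit one plane, remove its inliers, repeat."""
    rng = np.random.default_rng(seed)
    remaining = points.copy()
    planes = []
    while len(remaining) >= min_inliers:
        best_mask, best_plane = None, None
        for _ in range(iters):
            sample = remaining[rng.choice(len(remaining), 3, replace=False)]
            plane = fit_plane(sample)
            if plane is None:
                continue
            n, d = plane
            mask = np.abs(remaining @ n + d) < dist_tol
            if best_mask is None or mask.sum() > best_mask.sum():
                best_mask, best_plane = mask, plane
        if best_mask is None or best_mask.sum() < min_inliers:
            break                       # no plane with enough support remains
        planes.append((best_plane, remaining[best_mask]))
        remaining = remaining[~best_mask]
    return planes
```

On a synthetic cloud containing a noisy floor (z ≈ 0) and a noisy wall (x ≈ 0), this returns two planes, each carrying its inlier points.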
The application supports combining different visualization modes: only planar regions, only point clouds, or a combination.
Stage 3: BIM object recognition
On this test model, recognition is near-fully automatic, requiring only minor parameter adjustments. The recognition follows a fixed sequence:
First, floor recognition — horizontal regions are identified as IfcSlabs. Nearby and coplanar slabs are merged, fragmented regions are connected, and small segments are filtered out.
Then wall recognition — the same procedure applied to vertical regions. Walls are identified, merged, connected, and filtered.
Next, floor-wall linking — spatial relationships between walls and floors are established, and geometric adjustments are applied so they align precisely.
Then roof recognition for sloped regions, if present.
Finally, opening recognition — voids in wall geometry are identified as windows and doors. This is the second place where ML segmentation helps: apparent openings caused by missing wall points behind furniture (beds, bookcases, radiators) can be identified as false positives and filtered. Without ML segmentation, these false openings are not removed automatically.
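One way such a filter can work (sketched here with hypothetical names and thresholds, not the SDK's actual logic) is to check whether the region in front of an apparent void is occupied by clutter-labeled points: if furniture occluded the scanner there, the "opening" is just missing data.

```python
import numpy as np

OTHER = 3  # hypothetical label id for the "other" (furniture/clutter) class

def is_false_opening(box_min, box_max, points, labels, min_clutter_points=20):
    """Flag an opening candidate as a false positive when the volume in front
    of the apparent void contains many clutter-labeled points: the wall points
    are missing only because furniture blocked the scanner's line of sight."""
    inside = np.all((points >= box_min) & (points <= box_max), axis=1)
    return int((labels[inside] == OTHER).sum()) >= min_clutter_points

# A candidate box packed with furniture points is rejected as a false opening.
rng = np.random.default_rng(0)
clutter = rng.uniform(0, 1, size=(30, 3))
print(is_false_opening(np.zeros(3), np.ones(3), clutter, np.full(30, OTHER)))
```

A genuine door or window leaves the same volume empty (or filled with wall-labeled points on the far side), so the check passes it through.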
Output
The result is exported as an IFC file. It can be saved and opened in any IFC-compatible viewer, such as ODA's OpenIFCViewer, for validation and quality assessment.
Point Cloud-to-BIM: manual corrections
This demo shows a real-world scenario where the automatic pipeline requires significant manual intervention. It illustrates the correction tools available in OdaScan2BimApp and the types of problems that arise in practice.
Starting from saved intermediate results
To save time, we begin with precomputed and saved results from semantic segmentation and planar region calculation. The application allows saving intermediate pipeline states and resuming from them later — essential for workflows where the first two stages (segmentation and region calculation) are time-consuming.
The challenge: closed doors
The main difficulty with this particular point cloud is that the scan was performed with almost all doors closed. Closed doors are indistinguishable from walls in point cloud data, which means the pipeline cannot detect openings where doors should be. This is one of the most common real-world problems in scan-to-BIM workflows.
Automatic steps
The initial recognition steps are still automatic: slab recognition, wall identification, and wall merging. These work reasonably well even on challenging scans.
Manual correction workflow
After automatic connection, we review the results and begin manual fixes. The application provides several key tools:
Point cloud overlay on BIM model — the original point cloud can be overlaid on the recognized BIM geometry. This makes it easy to spot where the recognition went wrong by visually comparing the raw scan data against the generated model.
Undoing individual operations — in the first case, a door region was incorrectly connected to a wall region, producing an invalid wall. We undo just that specific connection (not the entire operation history), delete the door region, and manually connect the correct regions.
Wall resizing — several walls need adjustment: some shorter, some longer, one wider. High precision is not required at this stage, because a later automatic procedure (floor-wall geometric adjustment) handles exact alignment.
Manual filtering — after automatic noise wall filtering, some false walls remain. We delete them manually.
Manual floor-wall linking — the automatic linking cannot process walls that don't reach the floor. These require a manual linking step.
Opening recognition challenges
After geometric adjustments, we proceed to opening recognition. Due to the closed-door problem, automatic false-opening removal performs poorly on this scan. We overlay the original point cloud again and manually remove incorrect openings.
Some door openings are not detected at all — because the scan captured the doors as solid surfaces, they look like walls. This is a fundamental limitation: if the physical door was closed during scanning, no algorithm can infer that an opening exists behind it.
Result
Despite the manual work required, the final BIM model accurately represents the building structure. The manual correction tools — especially undo of individual operations, point cloud overlay, and element resizing — make the process manageable even for difficult scans.
Mesh to B-Rep
This demo shows the Mesh to B-Rep conversion pipeline on two models of different complexity. This is an independent direction in the SDK, not connected to the Point Cloud to BIM pipeline.
What Mesh to B-Rep does
The goal is to convert a polygonal mesh (triangulated surface) into a precise boundary representation (B-Rep) — the standard geometry format used by CAD and BIM applications. B-Rep models have exact mathematical surfaces, clean edges, and proper topology, making them suitable for editing, measurement, and downstream engineering workflows.
The conversion process
The pipeline works in two stages. First, the mesh is segmented: curvature analysis and sharp edge detection divide it into regions, each region is classified as a canonical surface (plane, cylinder, sphere, or cone), and segments are extended to cover neighboring consistent areas.
Second, B-Rep construction: each segment is converted to its analytical surface representation, intersection curves between adjacent surfaces define edges, surfaces are trimmed and assembled into a complete B-Rep solid body. The B-Rep engine used is the ODA Lightweight B-Rep Modeler.
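The classification step can be illustrated with a toy version of its first decision: is a segment planar? The sketch below fits a least-squares plane via SVD and thresholds the RMS residual. The tolerance and the two-way classification are illustrative assumptions; the real cascade also tests cylinders, spheres, and cones:

```python
import numpy as np

def plane_fit_residual(vertices: np.ndarray) -> float:
    """Least-squares plane fit via SVD; returns the RMS distance of the
    segment's vertices to the best-fit plane."""
    centered = vertices - vertices.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    normal = vt[-1]  # direction of least variance = plane normal
    return float(np.sqrt(np.mean((centered @ normal) ** 2)))

def classify_segment(vertices: np.ndarray, plane_tol: float = 1e-3) -> str:
    """Toy classifier: 'plane' if the fit residual is tiny, else 'non-planar'
    (a real pipeline would go on to test cylinder, sphere, and cone fits)."""
    return "plane" if plane_fit_residual(vertices) < plane_tol else "non-planar"
```

A flat vertex grid classifies as a plane, while vertices sampled from a quarter-cylinder patch produce a large residual and fall through to the curved-surface tests.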
Model 1: Simple geometry with non-planar features
The first model consists mainly of flat surfaces, making the planar regions easy to identify. However, it includes cylindrical and conical holes, which exercise the canonical surface recognition capability. The system identifies these as cylinders and cones, creates the corresponding analytical surfaces, and builds valid B-Rep geometry, including the intersection curves between the holes and the surrounding planar faces.
Model 2: Complex curved geometry
The second model has fewer triangles but is more challenging — it contains a significantly larger number of cylindrical segments. Identifying cylindrical surfaces is harder when the radius of curvature is large, because such surfaces appear nearly flat and can be confused with planes. This model demonstrates how the system handles this ambiguity.
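The ambiguity is easy to quantify. Over a patch of chord width w, a cylinder of radius R deviates from a flat plane by at most the sagitta s = R − sqrt(R² − (w/2)²) ≈ w²/(8R); once s drops below the mesh's tessellation noise, cylinder and plane are effectively indistinguishable. A quick numeric check (the radius and patch size are example values, not taken from the demo model):

```python
import math

def sagitta(radius: float, chord: float) -> float:
    """Maximum deviation of a circular arc from its chord (exact formula)."""
    return radius - math.sqrt(radius**2 - (chord / 2) ** 2)

# A 0.2 m wide patch on a 10 m radius cylinder deviates from flat by only
# about half a millimeter -- easily lost in tessellation noise.
print(round(sagitta(10.0, 0.2) * 1000, 2), "mm")  # 0.5 mm
```

This is why large-radius cylindrical segments are the hard case: the geometric signal separating them from planes shrinks quadratically with patch size and inversely with radius.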
The non-canonical segment problem
Segments that cannot be recognized as planes, cylinders, spheres, or cones are the main challenge. Currently, the SDK uses a brute-force fallback: each triangle in the unrecognized segment becomes an individual planar face in the B-Rep output. This guarantees a valid model but increases face count. An experimental spline-based surface fitting approach shows promising results for limited cases, but robust reconstruction for arbitrary segments remains an open research problem.
DWG export
The results can be exported to DWG format, which is useful for verification in standard CAD programs and for sharing with teams that use DWG-based workflows. The ability to check B-Rep quality in familiar tools is important for validating the conversion results.