Open Design Alliance Blog Reading and Rendering U3D Files <div><p>ODA products read and render Universal 3D (U3D) files by converting them to .prc files. Reading .u3d files is implemented using the Universal 3D Sample Software library from Intel®. All types of mesh objects, including RHAdobeMesh, PointSet and LineSet objects, as well as materials and textures, can be imported. Import of views and lighting objects is not supported.</p> <p>The OdU3D2PrcImport module is used to convert U3D format files to PRC:</p> <pre class="hljs"> <code class="cpp">… //Load the U3D import module: OdU3D2PrcImportModulePtr pModule = ::odrxDynamicLinker()-&gt;loadModule(OdU3D2PrcImportModuleName); //Create the U3D importer object and check whether it was successfully created: OdU3D2PrcImportPtr importer = pModule-&gt;create(); if(importer.isNull()) odPrintConsoleString(L"U3D Importer object creation failed!\n"); //Set import parameters using the properties() method of the importer object: importer-&gt;properties()-&gt;putAt(L"Database", pFile); importer-&gt;properties()-&gt;putAt(L"U3DPath", OdRxVariantValue(u3dName)); //Run the import() method of the importer object and analyze the returned value: if(OdResult::eOk != importer-&gt;import()) odPrintConsoleString(L"import failed!\n"); …</code></pre> <p> </p> <img alt="OdU3D2PrcImport module" data-entity-type="file" data-entity-uuid="b5a28c57-a543-42a5-84e7-e5d29d2e195b" height="704" src="/files/inline-images/PRC%20U3D%20Example1.jpg" width="1165" class="align-center" /><p> </p> <p>The OdU3D2PrcImport module supports a set of parameters. These parameters are stored in a dictionary object that can be retrieved for reading and modification by calling the properties() method of the importer object. 
Each parameter is accessible by its name, represented as a string.</p> <p>To set a new parameter value, use the putAt() method of the dictionary object, which takes two arguments: the parameter name and the new value.</p> <p>The full list of import parameters with their access names is described in the following table.</p> <p> </p> <table class="table table-striped"><thead><tr><th align="center">Import Parameter</th> <th align="center">Parameter Name</th> <th align="center">Description</th> </tr></thead><tbody><tr><td align="left">Database for Import</td> <td align="left">Database</td> <td align="left">An OdPrcFilePtr object where the contents of a .u3d file should be imported.</td> </tr><tr><td align="left">U3D File Path</td> <td align="left">U3DPath</td> <td align="left">A full path to the input .u3d file.</td> </tr><tr><td align="left">Transform Matrix</td> <td align="left">TransformMatrix</td> <td align="left">A transformation matrix that will be applied to all elements of the imported .u3d file.</td> </tr></tbody></table><p> </p> <p>The Database and U3DPath parameters are mandatory.</p> <p>The OdU3D2PrcImport module also supports importing several .u3d files into one .prc file. The picture below shows the result of importing two U3D models into one .prc file.</p> <p> </p> <img alt="several .u3d files into one .prc file" data-entity-type="file" data-entity-uuid="0b538c3a-a06e-4d7a-9901-6ff1884b86fd" height="684" src="/files/inline-images/PRC%20U3D%20Example2_0.jpg" width="1131" class="align-center" /><h4> </h4> <h4>U3D support in ODA Visualize</h4> <p> </p> <p>An example of U3D support in ODA Visualize is implemented in the Prc2Visualize module. 
You can see an example in the picture below.</p> <p> </p> <img alt="U3D support in ODA Visualize" data-entity-type="file" data-entity-uuid="816a3390-8eaf-4e02-bd8b-6179d178151d" height="663" src="/files/inline-images/PRC%20U3D%20Example3.jpg" width="1098" class="align-center" /><p> </p> </div> <span><span lang="" about="/user/4955" typeof="schema:Person" property="schema:name" datatype="">Sergey Sorvenkov</span></span> <span>Thu, 10/03/2019 - 13:15</span> Creating Markups with Web SDK <div><p>Markups allow you to make temporary redlines and remarks directly in a document. Here are some examples of drawings with markups created using Web SDK:</p> <img alt="Web SDK" data-entity-type="file" data-entity-uuid="6cfd60d1-cdd8-4743-b16c-50099a06a131" src="/files/inline-images/Web%20SDK1.jpg" class="align-center" /><img alt="Web SDK" data-entity-type="file" data-entity-uuid="cd69aa99-264d-4bbb-9aaa-3fd83658468a" src="/files/inline-images/Web%20SDK2.png" class="align-center" /><img alt="Web SDK" data-entity-type="file" data-entity-uuid="ee6a817c-c79a-494a-946f-5291af0ce244" src="/files/inline-images/Web%20SDK3.jpg" class="align-center" /><p> </p> <p>You can use the Markups toolbar in ODA Web Viewer to create different types of markups. For details about running ODA Web Viewer, see the <a href=";frmfile=tcloud_running_sample_app.html">online documentation</a> (login required). 
You can create these types of markups in ODA Web Viewer:</p> <ul><li style="font-style: italic">Rectangle markup</li> <li style="font-style: italic">Circle markup</li> <li style="font-style: italic">Freehand markup</li> <li style="font-style: italic">Text markup</li> </ul><img alt="picture4" data-entity-type="file" data-entity-uuid="bc2a9d9f-21b4-4fc8-8c73-9a78dcd4b623" src="/files/inline-images/picture4.jpg" class="align-center" /><img alt="picture5" data-entity-type="file" data-entity-uuid="c9f6560e-65d2-4922-a04d-a0229d1604a4" src="/files/inline-images/picture5.jpg" class="align-center" /><p>Then use the two tools for saving and loading markup views:</p> <ul><li style="font-style: italic">Save markup — Saves the current view with markups that you’ve drawn.</li> <li style="font-style: italic">Load markup — Loads a view that contains markups.</li> </ul><p>To save markups that you’ve drawn:</p> <ol><li style="font-style: italic">Click Save markup.<img alt="save markup" data-entity-type="file" data-entity-uuid="d8c566a1-5155-437e-85e2-3ff39e023d9b" src="/files/inline-images/save_markup.jpg" class="align-center" /><p> </p> </li> <li style="font-style: italic">Enter the name of the saved markup view, then click Save.</li> </ol><p>To load saved views that contain markups:</p> <ol><li style="font-style: italic">Click Load markup.<img alt="load markup" data-entity-type="file" data-entity-uuid="1a08b010-8d97-410d-9961-cb498406418c" src="/files/inline-images/load_markup.jpg" class="align-center" /><p> </p> </li> <li style="font-style: italic">Select the view to load, then click Load.</li> </ol><h4>How it works in the source code</h4> <pre class="hljs"> <code class="javascript">// Create a markup model: const model = this.m_viewer.getMarkupModel(); // returns the markup model if one exists, otherwise creates it // Save a markup model: const viewer = this.webModule.getViewer(); const markupCtrl = viewer.getMarkupController(); // get a controller object to interact with markups 
// save the markup object on the client; the markup name must be passed to the function markupCtrl.sendSaveRequest(markupName); // send a request and save the markup on the backend; the markup name must be passed to the function // Get the list of markups: const viewer = this.webModule.getViewer(); const markupCtrl = viewer.getMarkupController(); const saved = markupCtrl.getSaved(); // returns the list of markups that have been saved // Load a markup model: const viewer = this.webModule.getViewer(); const markupCtrl = viewer.getMarkupController(); markupCtrl.load(markupName); // the controller loads the desired markup by the name it was previously saved under // Clear markups: const viewer = this.webModule.getViewer(); const markupCtrl = viewer.getMarkupController(); markupCtrl.clear(); // hides all drawn markups, even those that have been saved </code></pre> </div> <span><span lang="" about="/user/5506" typeof="schema:Person" property="schema:name" datatype="">Alexey Soloshenko</span></span> <span>Thu, 07/11/2019 - 13:46</span> Getting Started with IFC SDK <div><p>Recently added to the BIM Suite is IFC SDK, with support for reading and rendering IFC 2x3 files, navigating loaded IFC model entities, and viewing read-only entity attributes. This article describes how to get started using the SDK.</p> <p>The initial IFC SDK release is based on the currently available ODA SDKs version 19.12.</p> <h4>Initializing ODA IFC SDK</h4> <p>To initialize IFC SDK modules, call the OdResult odIfcInitialize(bool bInitIfcGeomResource /*= true*/) function, which is declared in the IfcCore.h header file.</p> <p>Rendering geometry entities isn’t always needed (for example, walking through the IfcModel might be the only required task), so the bInitIfcGeomResource parameter is provided to skip geometry module initialization. If the function is called with the default parameter value, both the IfcCore.dll and IfcGeom.dll modules are loaded. 
IfcCore initialization also loads Spf.tx, which contains the routines and base classes needed for low-level, object-model-independent work with STEP physical files.</p> <h4>Reading .ifc Files</h4> <p>To read .ifc file content, call OdIfcHostAppServices::readFile(). The method returns a pointer to an instance of the OdIfcFile class, which contains the database of loaded IFC entities (an IFC model). The OdIfcFile class is declared in the “IfcFile.h” header inside the OdIfc namespace.</p> <p>The header section container of the .ifc file can be accessed by calling the OdSpf::OdSpfHeaderSectionPtr OdIfcFile::getHeaderSection() method. The header section contains information fields about the model in the .ifc file, about authors/owners of the model, and so on. Information about the file data section schema is also kept in the header section:</p> <pre class="hljs"> <code class="cpp">OdIfc::OdIfcFilePtr pIfcFile = svcs.readFile(pBuffer); OdDAI::OdSpfHeaderSectionPtr pHeader = pIfcFile-&gt;getHeaderSection(); OdDAI::OdSpfFileSchemaPtr pSchema = pHeader-&gt;getEntityByType(headerFileSchema); OdString dataSectionSchema = pSchema-&gt;at(0); </code></pre> <p>To get the main entity container of OdIfcFile, call its OdIfcModelPtr OdIfcFile::getModel() method. Currently ODA IFC SDK supports the Ifc2x3 schema only, so getModel() returns NULL when attempting to load an unsupported file.</p> <pre class="hljs"> <code class="cpp">OdIfcModelPtr pModel = pIfcFile-&gt;getModel();</code></pre> <p>A set of IfcGeometricalContexts to generate geometry should be determined first. 
Let’s iterate over entity instances derived from the Ifc2x3 Express schema entity type called IfcGeometricRepresentationContext in the loaded OdIfcModel:</p> <pre class="hljs"> <code class="cpp"> OdDAIObjectIds ids = pModel-&gt;getEntityExtent("IfcGeometricRepresentationContext"); OdIfcHandles selContexts; for (OdDAIObjectIds::iterator it = ids.begin(), end = ids.end(); it != end; ++it){ OdIfc::OdIfcEntityPtr pInst = it-&gt;openObject(); OdAnsiString strContextType; if (pInst-&gt;getAttr("ContextType") &gt;&gt; strContextType) { </code></pre> <p>If the value of the attribute named “ContextType” was extracted from an entity instance and successfully assigned to strContextType (the types match), continue. We need to find all contexts and subcontexts of the desired types: “Model” for contexts and “Body” for subcontexts that contain 3D geometrical representations for elements of the model.</p> <pre class="hljs"> <code class="cpp"> if (strContextType == "Model") { selContexts.append(*it); } OdDAIObjectIds hasSubContexts; if (pInst-&gt;getAttr("HasSubContexts") &gt;&gt; hasSubContexts) { for (OdDAIObjectIds::iterator itSub = hasSubContexts.begin(), endSub = hasSubContexts.end(); itSub != endSub; ++itSub) { OdIfc::OdIfcEntityPtr pSubCtx = itSub-&gt;openObject(); OdAnsiString strContextIdentifier; if (pSubCtx-&gt;getAttr("ContextIdentifier") &gt;&gt; strContextIdentifier &amp;&amp; strContextIdentifier == "Body") { selContexts.append(*itSub); } } } } </code></pre> <h4>Building and Rendering Geometry Entities</h4> <p>Fill the array of IfcGeometricRepresentationContext/SubContext handles to build geometry. 
“Model”/”Body” contexts/subcontexts are enough to get 3D entities; those names are predefined by the IFC specification, and we don’t need the other types of GeometricContexts/SubContexts (they may hold schematic representations of the model’s drawable items, for example).</p> <pre class="hljs"> <code class="cpp">pModel-&gt;setContextSelection(selContexts);</code></pre> <p>Build geometry entities for rendering. This method composes OdGiDrawable entities for the loaded IFC model:</p> <pre class="hljs"> <code class="cpp">pModel-&gt;composeEntities();</code></pre> <p>If composeEntities() returns eOk, the geometry is successfully generated and can be rendered using GsDevice; the root entity for drawing is OdIfcProject:</p> <pre class="hljs"> <code class="cpp">OdIfcObjectId idProject = pIfcFile-&gt;getProjectId(); OdGiContextForIfcDatabasePtr pIfcContext = OdGiContextForIfcDatabase::createObject(); pIfcContext-&gt;setDatabase(pIfcFile); pIfcContext-&gt;enableGsModel(true); OdGsDevicePtr pDevice = …; OdGsDevicePtr pIfcDevice = OdIfcGsManager::setupActiveLayoutViews(idProject, pDevice.get(), pIfcContext); OdGsView* pView = pIfcDevice-&gt;viewAt(0); pView-&gt;setMode(...); </code></pre> <p>Process the rendering:</p> <pre class="hljs"> <code class="cpp">pIfcDevice-&gt;update();</code></pre> <p>Deinitialize the modules by calling OdResult odIfcUninitialize().</p> <h4>More Information</h4> <p>Details and live demo: <a href="">IFC SDK</a><br /> Download a trial: Coming Soon</p> <p>Extended examples of working with ODA BimIfc SDK can be found in Visualize/Examples/Ifc2Visualize, an example of geometry rendering using ODA Visualize SDK, and in Ifc/Exports/Ifc2Dwg, a sample that exports the geometry from a loaded IFC file into a .dwg drawing as a set of polyface meshes. 
Two additional simple example applications are also provided: ExIfcReadFile, which simply reads the desired Ifc2x3 file, and ExIfcDump, which collects entity statistics by walking along the directory structure and extracting information from loaded IFC models.</p> <p>And if you’re already using or evaluating IFC SDK, examples are located in the Visualize/Examples installation folder:</p> <ul><li style="font-style: italic">Ifc2Visualize — Shows geometry rendering from .ifc files using ODA Visualize SDK.</li> <li style="font-style: italic">Ifc/Exports/Ifc2Dwg — Exports geometry from a loaded .ifc file into a .dwg drawing as a set of polyface meshes.</li> <li style="font-style: italic">ExIfcReadFile — Reads an Ifc2x3 file.</li> <li style="font-style: italic">ExIfcDump — Collects entity statistics by walking along the directory structure and extracting information from loaded IFC models.</li> <li style="font-style: italic">ExIfcModelFiller — Creates a valid IFC model with geometrical entities.</li> <li style="font-style: italic">ExIfcWriteFile — Loads a file and resaves it into a different file.</li> </ul></div> <span><span lang="" about="/user/6459" typeof="schema:Person" property="schema:name" datatype="">Igor Egorychev</span></span> <span>Thu, 06/06/2019 - 08:22</span> Organization of Visualization Modules <div><h4>Introduction</h4> <p>Starting with intermediate version 20.2, ODA SDKs contain reorganized visualization modules.</p> <p>The new visualization module structure improves the flexibility of ODA SDKs and opens up more possibilities during application development. For example, you can build an application that doesn’t contain a database but requires rendering, and in this case you can invoke the rendering modules directly. Additionally, you can now branch the rendering pipeline to output graphics to different rendering modules simultaneously. 
The classic use case, where an application invokes vectorization modules to draw existing databases, doesn’t change and works as in previous ODA SDK versions, but the new module structure gives additional possibilities for new applications.</p> <p>This article describes the set of visualization modules and libraries, how to correctly link applications with them, and how to load these modules.</p> <h4>Visualization Modules Map</h4> <p> </p> <img alt="image2" data-entity-type="file" data-entity-uuid="2b8961a4-07fd-4ad9-8ea6-d5c4d836482b" src="/files/inline-images/123_0.jpg" class="align-center" /><p> </p> <ul><li style="font-style: italic">Gray rectangles — Required static libraries.</li> <li style="font-style: italic">Blue rectangles — Main modules required for visualization applications.</li> <li style="font-style: italic">Green rectangles — Optional modules required for exporting rendering streams to XML format.</li> <li style="font-style: italic">Rounded rectangles — Different final applications.</li> </ul><p>TrRenderBase.lib is required for building rendering modules only. Export modules (like XML export in this diagram) don’t require this library for their functionality.</p> <h4>Changes to ODA SDKs Beginning with Version 20.2</h4> <ul><li style="font-style: italic">The TrGL2.lib static library becomes the TrGL2.txr rendering module.</li> <li style="font-style: italic">TrRenderBase.lib is a new static library, invoked by the OpenGL ES 2.0 renderer and vectorizer.</li> <li style="font-style: italic">TrXml.txr is a new rendering module, required for exporting rendering streams into XML file format.</li> <li style="font-style: italic">The XmlGLES2.txv vectorization module becomes the TrXmlVec.tx extension module.</li> </ul><p>If your final application invokes only OpenGL ES 2.0 rendering, take into account only the first two items. 
If your final application invokes XML export functionality, take into account the last two items.</p> <p>Note: Don’t forget that in static library configurations, all dynamic libraries (.tx, .txv, .txr) become static libraries that should be linked with the final application. Because the WinGLES2 vectorization module now internally invokes the TrGL2 rendering module, in a static library configuration the application should additionally register the new rendering module in the static modules map (this change is described later in this article in “Static Library Configurations”).</p> <h4>Visualization Modules Description</h4> <table class="table table-striped"><thead><tr><th align="center">Module name</th> <th align="center">Module type</th> <th align="center">Description</th> </tr></thead><tbody><tr><td align="left">TrBase.lib</td> <td align="left">Static Library</td> <td align="left">Library that contains general functions and classes that are used in all visualization-related libraries and modules.</td> </tr><tr><td align="left">TrRenderBase.lib</td> <td align="left">Static Library</td> <td align="left">Library that contains general functions and classes for building rendering modules.</td> </tr><tr><td align="left">TrVec.lib</td> <td align="left">Static Library</td> <td align="left">Basic library that contains all required functionality for building vectorization modules.</td> </tr><tr><td align="left">TrGL2.txr</td> <td align="left">Rendering Module</td> <td align="left">OpenGL ES 2.0 renderer.</td> </tr><tr><td align="left">WinGLES2.txv</td> <td align="left">Vectorization Module</td> <td align="left">OpenGL ES 2.0 vectorization module (invokes TrGL2.txr for rendering in Gs-based applications).</td> </tr><tr><td align="left">TrXml.txr</td> <td align="left">Rendering Module</td> <td align="left">XML export module.</td> </tr><tr><td align="left">TrXmlVec.tx</td> <td align="left">Extension</td> <td align="left">Vectorization module that invokes TrXml.txr rendering 
module for exporting vectorized data to XML format.</td> </tr><tr><td align="left">TrXmlIO.lib</td> <td align="left">Static Library</td> <td align="left">Library for applications that open and process vectorized data from XML format, for example, to load XML data and render it using another rendering module.</td> </tr></tbody></table><h4>Dynamic Library Configurations</h4> <h5>Rendering Modules</h5> <p>New rendering modules are defined in the OdModuleNames.h header file, which is available for all ODA SDK based applications:</p> <pre class="hljs"> <code class="cpp">#define OdTrGL2ModuleName L"TrGL2.txr" #define OdTrXmlModuleName L"TrXml.txr" </code></pre> <p>Rendering modules can be loaded using the following call (which uses the dynamic module linker):</p> <pre class="hljs"> <code class="cpp">OdTrRndRenderModulePtr pRenderModule = ::odrxDynamicLinker()-&gt;loadModule(OdTrGL2ModuleName); </code></pre> <h5>Vectorization Modules</h5> <p>The same definitions are available for vectorization modules:</p> <pre class="hljs"> <code class="cpp">#define OdWinGLES2ModuleName L"WinGLES2.txv" #define OdTrXmlVecModuleName L"TrXmlVec.tx" </code></pre> <p>These modules can be loaded just like any other vectorization modules (there are no changes from previous ODA SDK versions):</p> <pre class="hljs"> <code class="cpp">OdGsModulePtr pGs = ::odrxDynamicLinker()-&gt;loadModule(OdWinGLES2ModuleName); </code></pre> <h4>Static Library Configurations</h4> <h5>Linking Libraries</h5> <p>To link with the TrGL2 renderer, add the following set of libraries to the related project’s CMakeLists.txt file:</p> <pre class="hljs"> <code class="cpp">${TR_TXR_GL2_LIB} ${TR_RENDER_LIB} ${TR_BASE_LIB}</code></pre> <p>Add the following set of libraries to the linker for linking with the WinGLES2 vectorization module:</p> <pre class="hljs"> <code class="cpp">${TD_TXV_GLES2_LIB} ${TR_TXR_GL2_LIB} ${TR_RENDER_LIB} ${TR_VEC_LIB} ${TR_BASE_LIB}</code></pre> <p>A special CMake definition is available to avoid adding 
this long list of libraries. To link all required libraries for WinGLES2 vectorization, you can use the ${TD_TXV_GLES2_LIBS} CMake definition only:</p> <pre class="hljs"> <code class="cpp">set(TD_TXV_GLES2_LIBS ${TD_TXV_GLES2_LIB} ${TR_TXR_GL2_LIB} ${TR_RENDER_LIB} ${TR_VEC_LIB} ${TR_BASE_LIB})</code></pre> <p>There is one more definition that was kept for older projects; the definition links with the WinGLES2 vectorization module that invokes the obsolete TrGL2.lib static library:</p> <pre class="hljs"> <code class="cpp"># for compatibility with old CMakeLists set(TR_GL2_LIB ${TR_TXR_GL2_LIB} ${TR_RENDER_LIB})</code></pre> <p>So old projects, if they don’t invoke non-standard features in CMake, don’t currently require special changes in the related CMakeLists.txt file. But it is better to avoid using the obsolete ${TR_GL2_LIB} definition, since it might be removed in the future.</p> <p>For libraries that require linking with the TrXml.txr rendering module (XML export):</p> <pre class="hljs"> <code class="cpp">${TR_TXR_XML_LIB} ${TR_BASE_LIB}</code></pre> <p>For linking with TrXmlVec.tx, your application requires the following libraries in the linked libraries list:</p> <pre class="hljs"> <code class="cpp">${TR_XML_VEC_LIB} ${TR_TXR_XML_LIB} ${TR_VEC_LIB} ${TR_BASE_LIB}</code></pre> <h5>Static Modules Map</h5> <p>New rendering modules should also be registered in the static modules map if the application invokes the WinGLES2 vectorization module:</p> <pre class="hljs"> <code class="cpp">ODRX_DECLARE_STATIC_MODULE_ENTRY_POINT(OdTrGL2RenderModule); ODRX_DECLARE_STATIC_MODULE_ENTRY_POINT(GLES2Module); ODRX_BEGIN_STATIC_MODULE_MAP() ODRX_DEFINE_STATIC_APPLICATION(OdTrGL2ModuleName, OdTrGL2RenderModule) ODRX_DEFINE_STATIC_APPMODULE(OdWinGLES2ModuleName, GLES2Module) ODRX_END_STATIC_MODULE_MAP() </code></pre> <p>The WinGLES2.txv vectorization module entry point is GLES2Module, as with previous versions. 
For TrGL2.txr, the rendering module entry point is OdTrGL2RenderModule.</p> <p>It is the same with XML export applications:</p> <pre class="hljs"> <code class="cpp">ODRX_DECLARE_STATIC_MODULE_ENTRY_POINT(OdTrXmlRenderModule); ODRX_DECLARE_STATIC_MODULE_ENTRY_POINT(TrXmlModule); ODRX_BEGIN_STATIC_MODULE_MAP() ODRX_DEFINE_STATIC_APPLICATION(OdTrXmlModuleName, OdTrXmlRenderModule) ODRX_DEFINE_STATIC_APPLICATION(OdTrXmlVecModuleName, TrXmlModule) ODRX_END_STATIC_MODULE_MAP() </code></pre> <p>The TrXmlVec.tx (previously XmlGLES2.txv) vectorization module entry point is TrXmlModule, as with previous versions. For TrXml.txr, the rendering module entry point is OdTrXmlRenderModule.</p> </div> <span><span lang="" about="/user/1811" typeof="schema:Person" property="schema:name" datatype="">Andrew Markovich</span></span> <span>Thu, 05/30/2019 - 14:50</span> Tessellation and Surface Tolerance <div><p>Tessellation controls the number of triangles used to represent 3D entities. It is driven by the surface tolerance parameter, which is the maximum allowed distance between the source analytical surface and the triangle mesh. The tighter the tolerance, the more triangles you get.</p> <p>Rendering of different files (such as .dwg, .dgn, .rvt) with ODA SDKs requires providing surface tolerance information to BrepRenderer. The same is true for other operations based on vectorization, for example exports (STL, Collada) and conversions between different formats. BrepRenderer is a library that performs B-Rep geometry data tessellation computation, caching and drawing to an OdGiCommonDraw object. 
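</p> <p>To make the tolerance/triangle trade-off concrete, here is a small, SDK-independent sketch (not ODA code; the function name and the closed-form bound are illustrative only). For a circle of radius r approximated by n equal chords, the maximum deviation from the true arc is the sagitta r(1 − cos(π/n)); requiring the sagitta to stay within the tolerance gives the segment count:</p>

```cpp
#include <cmath>

// How many equal chords are needed to approximate a full circle of radius r
// so that the sagitta (the maximum distance between the analytical arc and
// its chord) stays within the surface tolerance tol.
// Generic illustration of the tolerance/triangle-count trade-off,
// not the actual BrepRenderer algorithm.
int segmentsForTolerance(double r, double tol)
{
    const double pi = 3.14159265358979323846;
    if (tol >= r)
        return 3;  // tolerance looser than the radius: 3 chords are enough
    // Sagitta of one chord spanning angle 2*pi/n: s = r * (1 - cos(pi/n)).
    // Requiring s <= tol gives n >= pi / acos(1 - tol/r).
    int n = static_cast<int>(std::ceil(pi / std::acos(1.0 - tol / r)));
    return n < 3 ? 3 : n;
}

// Example: a circle of radius 100 needs 23 segments at tolerance 1.0,
// but 223 segments at tolerance 0.01 -- a tighter tolerance means more triangles.
```

<p>Since acos(1 − tol/r) ≈ √(2·tol/r) for small tolerances, the segment count grows roughly as 1/√tol: halving the tolerance multiplies the chord count by about √2.</p> <p>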
Surface tolerance is one of BrepRenderer’s main input parameters.</p> <p>The deviation must be provided by an inheritor of the OdGiCommonDraw class in a virtual function:</p> <pre class="hljs"> <code class="cpp">virtual double deviation( const OdGiDeviationType deviationType, const OdGePoint3d&amp; pointOnCurve) const; </code></pre> <p>For tessellation, deviationType must be kOdGiMaxDevForFacet, and pointOnCurve must contain a point on the rendered object if perspective mode is turned on.</p> <p>The current implementation of the OdGsView class calculates the deviation for the current camera position based on the physical pixel size. This means that the closer you zoom in to the 3D object, the more detailed (but more triangle-heavy) the calculated tessellation is. Zooming out from the object makes the tessellation rougher. You can see this effect in the OdaMfcApp sample application. For example, open any 3D solid and vectorize it in flat shaded mode (triangles are easier to see in flat mode). Then try zooming in and out and click Regen. You will see that the number of triangles changes.</p> <p> </p> <img alt="image1" data-entity-type="file" data-entity-uuid="0c9cee53-ab37-4b6e-bf1d-fb7160687246" src="/files/inline-images/1_4.jpg" class="align-center" /><p> </p> <p>This is an example of renderings with different surface tolerances. The upper number is the tolerance, and the lower number is the number of triangles.</p> <p>If a caller wants to control the surface tolerance, they should implement their own “deviation” function in their own view class. An example is the STLModule::exportSTL function. 
It gets the tolerance as an input parameter and stores it in an export view class:</p> <pre class="hljs"> <code class="cpp">class StubVectorizeView : public OdGsBaseVectorizeViewDef, public OdGiGeometrySimplifier { double m_dDeviation; … </code></pre> <p>And then it returns it as:</p> <pre class="hljs"> <code class="cpp"> virtual double deviation(const OdGiDeviationType deviationType, const OdGePoint3d&amp; pointOnCurve) const { return m_dDeviation; } </code></pre> </div> <span><span lang="" about="/user/2016" typeof="schema:Person" property="schema:name" datatype="">Vladimir Savelyev</span></span> <span>Thu, 05/23/2019 - 09:08</span> Working with Meshes in Drawings <div><p>A mesh (OdDbSubDMesh) is a set of vertices that form a surface. A mesh includes features such as color and normals.</p> <p>With color you can set the color of individual elements of a mesh (face, vertex).</p> <ul><li style="font-style: italic">OdResult setVertexNormalArray(OdGeVector3dArray&amp; arrNorm) method — Provides an array of vertex normals.</li> <li style="font-style: italic">OdResult getNormalArray(OdGeVector3dArray&amp; normalArray) const method — Returns (calculates) a Level 0 (base) vertex normal array.</li> <li style="font-style: italic">OdResult getSubDividedNormalArray(OdGeVector3dArray&amp; normalArray) const — Returns (calculates) a smoothed mesh vertex normal array.</li> </ul><p>For normals, the methods getNormalArray(…) and getSubDividedNormalArray(…) calculate the normal for all vertices. Alternatively, you can provide an array of manually calculated vertex normals using setVertexNormalArray(…). But the normals should map one to one to the control points of a level-zero mesh: the length of the vertex normal array must match the length of the control point array.</p> <p>The method getNormal of the class OdDbSubDMesh allows you to calculate normals on the mesh surface. 
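</p> <p>The idea behind a calculated vertex normal array can be shown with a small, SDK-independent sketch: each vertex normal is the normalized sum of the normals of the faces sharing that vertex, with the cross product’s length giving larger faces more weight. Vec3 and vertexNormals are hypothetical illustration names, not ODA types:</p>

```cpp
#include <array>
#include <cmath>
#include <vector>

struct Vec3 { double x = 0, y = 0, z = 0; };

static Vec3 sub(const Vec3& a, const Vec3& b) {
    return {a.x - b.x, a.y - b.y, a.z - b.z};
}
static Vec3 cross(const Vec3& a, const Vec3& b) {
    return {a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x};
}

// One normal per vertex, averaged over all triangles that use the vertex.
std::vector<Vec3> vertexNormals(const std::vector<Vec3>& verts,
                                const std::vector<std::array<int, 3>>& tris)
{
    std::vector<Vec3> normals(verts.size());
    for (const auto& t : tris) {
        // Face normal; its length equals twice the triangle area (area weighting).
        Vec3 fn = cross(sub(verts[t[1]], verts[t[0]]), sub(verts[t[2]], verts[t[0]]));
        for (int i : t) {
            normals[i].x += fn.x; normals[i].y += fn.y; normals[i].z += fn.z;
        }
    }
    for (auto& n : normals) {
        double len = std::sqrt(n.x * n.x + n.y * n.y + n.z * n.z);
        if (len > 0.0) { n.x /= len; n.y /= len; n.z /= len; }
    }
    return normals;
}
```

<p>Sharing such averaged normals between adjacent faces is what makes a faceted mesh render as a smooth surface in shaded mode.</p> <p>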
Here is example source code:</p> <pre class="hljs"> <code class="cpp">OdDbSubDMeshPtr pMesh = OdDbSubDMesh::createObject(); ... pMesh-&gt;setCylinder(90., 60., 90., 3, 3, 3, 4); OdGeVector3dArray normals; pMesh-&gt;getSubDividedNormalArray(normals); pMesh-&gt;setVertexNormalArray(normals); ... </code></pre> <p>The result of executing this source code is shown below. The mesh (shaded mode) on the left doesn’t have normals; the mesh on the right has normals and, as a result, we have a smooth surface.</p> <p> </p> <img alt="image1" data-entity-type="file" data-entity-uuid="d2be7afb-c038-408b-b1f9-95557bd00643" src="/files/inline-images/1a.jpg" class="align-center" /><p> </p> <p>The mesh (OdDbSubDMesh) has five levels of smoothness, from 0 to 4. You can redefine (reassign) the level of smoothness. When a level of smoothing is applied to a surface and the subdRefine() method is invoked, all surface properties are recalculated (vertices, edges, faces), and the surface’s new smoothing level (for example, 4) becomes the new base level, accepted as zero.</p> <p>Example source code:</p> <pre class="hljs"> <code class="cpp">… for (int levSmooth = 0; levSmooth &lt;= 4; levSmooth++) { OdDbSubDMeshPtr pMesh = OdDbSubDMesh::createObject(); pMesh-&gt;setCylinder(90., 60., 90., 3, 3, 3, levSmooth); res = pMesh-&gt;subdRefine(); } … </code></pre> <p>When the created mesh is opened and rendered, the result is shown below with level smoothing from 0 to 4.</p> <p>Wireframe mode:</p> <p> </p> <img alt="image2" data-entity-type="file" data-entity-uuid="54fe1138-b652-4c30-a7f0-656d46879cf1" src="/files/inline-images/2a.jpg" class="align-center" /><p> </p> <p>Realistic mode:</p> <p> </p> <img alt="image3" data-entity-type="file" data-entity-uuid="13fbbbcb-1292-4d59-b5cb-41fe2362cdc2" src="/files/inline-images/3a.jpg" class="align-center" /><p> </p> <p>The mesh can be converted to a surface. 
These methods allow you to convert a specified entity to a surface:</p> <pre class="hljs"> <code class="cpp">OdResult convertToSurface(bool bConvertAsSmooth, const OdDbSubentId&amp; id, OdDbSurfacePtr&amp; pSurface) const; // bConvertAsSmooth — not supported yet. // id — the specified entity that will be converted to a surface. // pSurface — the resulting surface. OdResult convertToSurface(bool bConvertAsSmooth, bool optimize, OdDbSurfacePtr&amp; pSurface) const; // bConvertAsSmooth, optimize — not supported yet. // pSurface — the resulting surface. </code></pre> <p>Also, a mesh can be converted to a 3D solid. The API method of OdDbSubDMesh is shown below:</p> <pre class="hljs"> <code class="cpp">OdResult convertToSolid(bool bConvertAsSmooth, bool optimize, OdDb3dSolidPtr&amp; pSolid) const; // bConvertAsSmooth, optimize — not supported yet. // pSolid — the resulting solid. </code></pre> <p>The OdDbSubDMesh API allows you to create standard primitives (box, sphere, cylinder, pyramid, wedge, cone, torus). 
Here are code examples and results of drawing these operations:</p> <pre class="hljs"> <code class="cpp">… OdDbSubDMeshPtr pMesh = OdDbSubDMesh::createObject(); pMesh-&gt;setBox(30, 30, 30, 3, 3, 3, 0); OdDbSubDMeshPtr pMesh1 = OdDbSubDMesh::createObject(); pMesh1-&gt;setSphere(30., 10, 10, 4); OdDbSubDMeshPtr pMesh2 = OdDbSubDMesh::createObject(); pMesh2-&gt;setCone(30., 30., 30., 3, 3, 3, .0, 4); OdDbSubDMeshPtr pMesh3 = OdDbSubDMesh::createObject(); pMesh3-&gt;setPyramid(90., 60., 3, 3, 3, 3, .0, 0); OdDbSubDMeshPtr pMesh4 = OdDbSubDMesh::createObject(); pMesh4-&gt;setTorus(90., 3, 3, .2, 0, 4); OdDbSubDMeshPtr pMesh5 = OdDbSubDMesh::createObject(); pMesh5-&gt;setWedge(100., 100., 100., 2, 2, 2, 2, 2, 0); …</code></pre> <p>Shaded mode:</p> <p> </p> <img alt="image5" data-entity-type="file" data-entity-uuid="abf8a7ee-ed41-4195-bf98-5a79e24d03c5" src="/files/inline-images/4a.jpg" class="align-center" /><p> </p> <p>Wireframe mode:</p> <p> </p> <img alt="image5" data-entity-type="file" data-entity-uuid="a98b9f25-aa95-452d-b0e8-cccb1198198d" src="/files/inline-images/5a.jpg" class="align-center" /><p> </p> <p>When the smoothing level increases, the count of vertices and faces increases too. Below is a code example and the result of rendering.</p> <pre class="hljs"> <code class="cpp">… // database preparation
for (int levSmooth = 0; levSmooth &lt;= 4; levSmooth++) { OdDbSubDMeshPtr pNew = OdDbSubDMesh::createObject(); OdGePoint3dArray vertexArray; OdInt32Array faceArray; vertexArray.append(OdGePoint3d(0, 0, 0)); vertexArray.append(OdGePoint3d(1, 0, 0)); vertexArray.append(OdGePoint3d(1, 1, 0)); vertexArray.append(OdGePoint3d(2, 2, 0)); faceArray.append(3); faceArray.append(0); faceArray.append(1); faceArray.append(2); pNew-&gt;setSubDMesh(vertexArray, faceArray, levSmooth); // eOk
pNew-&gt;subdRefine(); ... 
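// Illustrative addition (not in the original sample): read back the
// recalculated counts after refinement. numOfVertices()/numOfFaces() are
// assumed here as the ODA counterparts of the corresponding AcDbSubDMesh methods.
OdInt32 nVertices = 0, nFaces = 0;
pNew-&gt;numOfVertices(nVertices);
pNew-&gt;numOfFaces(nFaces);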
// save the database
} </code></pre> <p>Note: When subdRefine() is called, the mesh is recalculated to the corresponding smoothness, and the current smoothing level becomes the new base level (0).</p> <p>Wireframe mode:</p> <p> </p> <img alt="image6" data-entity-type="file" data-entity-uuid="8bdd7df9-d032-4a74-b1a1-f7ead1f60fbe" src="/files/inline-images/6.jpg" class="align-center" /><p> </p> <p>You can see that the count of faces and vertices changes with the smoothing level (after subdRefine() is invoked):<br /> Level 0: vertices(3), faces(1);<br /> Level 1: vertices(8), faces(3);<br /> Level 2: vertices(20), faces(12);<br /> Level 3: vertices(62), faces(48);<br /> Level 4: vertices(218), faces(192);</p> </div> <span><span lang="" about="/user/6849" typeof="schema:Person" property="schema:name" datatype="">Yurii Pylnyk</span></span> <span>Thu, 03/21/2019 - 14:06</span> Modelers that can be used with ODA SDKs <div><p>The Modeler library is responsible for working with 3D models. It is used to read and write model data, to create new model objects (for example boxes, spheres, extrusions, etc.), to modify existing objects, and to render the objects.</p> <p>ODA SDKs can be used with several different modeler libraries. The options are:</p> <ul><li style="font-style: italic">AcisBuilder (ModelerGeometry.tx) — Provided as a basic ODA SDK modeler and doesn’t require an additional fee.</li> <li style="font-style: italic">Spatial ACIS® (SpaModeler.tx) — Commercial third-party library that should be purchased separately.</li> <li style="font-style: italic">C3D (c3dModeler.tx) — Commercial third-party library that should be purchased separately.</li> </ul><p>All of the modelers can read and write .sat files and render a model in wireframe and shaded mode. In the case of the ACIS and C3D modelers, Boolean operations and other 3dSolid methods will also work. 
Otherwise they return a “Not Implemented” status.</p> <p>Spatial ACIS completely supports all surfaces of the .sat file format and has the widest set of functionality, which integration of the 3D ACIS Modeler makes available to applications.</p> <p>More information is available from Spatial.</p> <p>C3D Modeler for ODA products is a lightweight version of the full-featured C3D Modeler, and it supports the following functionality:</p> <ul><li style="font-style: italic">Create a sphere, box, frustum, torus, wedge, extrusion, pyramid, lofted solid, object, and more. (Available via methods in the OdDb3dSolid class.)</li> <li style="font-style: italic">Perform Boolean operations between two solids.</li> <li style="font-style: italic">Perform auxiliary functions for sections, slicing, interference checking, and more.</li> </ul><p>More information is available from C3D Labs.</p> <p>AcisBuilder does not yet support sweep along a path, Boolean operations, fillets, and some other advanced operations, but it is under continuous development and this functionality is planned for future releases.</p> <p>You can use some modeler commands in the “MODELER GEOMETRY” command group of the OdaMfcApp sample application. Just load the ModelerCommands.tx module.</p> <img alt="image1" data-entity-type="file" data-entity-uuid="c4956287-5ede-4f65-bafc-446e39e96d4e" height="555" src="/files/inline-images/1.png" width="792" class="align-center" /><p>AcisBuilder is built with ODA Kernel SDK. If you want to use the Spatial or C3D modeler, check the corresponding box in Project Generator:</p> <img alt="image2" data-entity-type="file" data-entity-uuid="7c1a4404-78de-456a-a6c9-30a91ca5f810" src="/files/inline-images/2.png" class="align-center" /><p>In OdaMfcApp you can choose the modeler with the Tools -&gt; Load Application dialog. First unload the currently used modeler by choosing it from the list and clicking Unload. This is possible only if you have not performed any rendering yet; otherwise the module will be referenced and locked. 
Then click Load and choose the modeler you want to use.</p> <p>You can change the currently used modeler with the following code:</p> <pre class="hljs"> <code class="cpp">odrxDynamicLinker()-&gt;unloadModule(OdModelerGeometryModuleName);
// Try to load SpaModeler.tx
OdRxModule* pSpaModule = odrxDynamicLinker()-&gt;loadModule(OdSpaModelerModuleName); </code></pre> </div> <span><span lang="" about="/user/2016" typeof="schema:Person" property="schema:name" datatype="">Vladimir Savelyev</span></span> <span>Thu, 02/07/2019 - 13:18</span> Collision Detection in ODA Visualize (Part 3 of 3) <div><p>This article is part of a series of articles about the collision detection mechanism in ODA Visualize. For the previous article, see <a href="">Part 2</a>.</p> <h4>Overview of OdGiCollideProc</h4> <p>The interface of the collision detection conveyor node is described in GiCollideProc.h. It contains the following methods:</p> <pre class="hljs"> <code class="cpp">virtual void set( OdGsCollisionDetectionReactor* pReactor, const OdGsCollisionDetectionContext* pCtx = NULL ) = 0; </code></pre> <p>This method specifies the collision detection reactor and context.</p> <pre class="hljs"> <code class="cpp">virtual void setDrawContext(OdGiConveyorContext* pDrawCtx) = 0; </code></pre> <p>This method specifies the drawing context that provides access to the vectorization status information.</p> <pre class="hljs"> <code class="cpp">enum ProcessingPhase { kPhaseGatherInputData = 0, kPhaseDetectIntersections }; virtual void setProcessingPhase( ProcessingPhase ) = 0; virtual ProcessingPhase processingPhase() const = 0; </code></pre> <p>These methods specify the current processing phase, which allows the conveyor node to collect triangles into different lists.</p> <pre class="hljs"> <code class="cpp">virtual const OdGeExtents3d&amp; extents() const = 0; </code></pre> <p>This method returns the 
extents of triangles that were collected during the OdGiCollideProc::kPhaseGatherInputData processing phase.</p> <pre class="hljs"> <code class="cpp">virtual void setNoFilter( bool bNoFilter ) = 0; virtual bool noFilter() const = 0; </code></pre> <p>These methods disable entity filtering and query whether it is disabled.</p> <pre class="hljs"> <code class="cpp">virtual void setInputDrawables( OdGiPathNode const*const* pInputList, OdUInt32 nInputListSize ) = 0; </code></pre> <p>This method specifies a list of OdGiPathNode items that are used in entity filtering:</p> <ul><li style="font-style: italic">During the processing phase OdGiCollideProc::kPhaseGatherInputData, if OdGiPathNode of the currently processing entity is not listed in pInputList, the entity is skipped.</li> <li style="font-style: italic">During the processing phase OdGiCollideProc::kPhaseDetectIntersections, if OdGiPathNode of the currently processing entity is listed in pInputList, the entity is skipped to avoid detecting collisions between the same entity.</li> </ul><pre class="hljs"> <code class="cpp">virtual void setCheckWithDrawables( OdGiPathNode const*const* pInputList, OdUInt32 nInputListSize ) = 0; </code></pre> <p>This method specifies an additional list of OdGiPathNode items that is used in entity filtering during the processing phase OdGiCollideProc::kPhaseDetectIntersections; if the list is not empty and OdGiPathNode of the currently processing entity is not listed in it, the entity is skipped.</p> <pre class="hljs"> <code class="cpp">virtual void processTriangles() = 0; </code></pre> <p>This method processes the triangles that were gathered during the conveyor node processing phases.</p> <h4>Overview of OdGiCollisionDetector</h4> <p>OdGiCollisionDetector and OdGiCollisionDetectorIntersectionsOnly are subclasses of OdGiIntersectionsCalculator and are described in GiIntersectionCalculator.h. 
The only difference between them is that OdGiCollisionDetector treats both triangle intersections and triangle touching as collisions, while OdGiCollisionDetectorIntersectionsOnly treats only triangle intersections as collisions.</p> <p>The most important OdGiCollisionDetector methods are:</p> <pre class="hljs"> <code class="cpp">virtual void initializeCalculations(OdGeExtents3d&amp; ext, OdInt64 nObjects); </code></pre> <p>This method prepares internal detector data for calculations. Input parameters:</p> <ul><li style="font-style: italic">OdGeExtents3d&amp; ext – Common extents of the triangles to be tested.</li> <li style="font-style: italic">OdInt64 nObjects – Number of triangle containers to be tested.</li> </ul><pre class="hljs"> <code class="cpp">virtual void finalizeCalculations(); </code></pre> <p>This method releases internal detector data. Unlike OdGiIntersectionCalculator, this method does not release memory allocated for triangle containers that were added through appendTriangleContainer calls.</p> <pre class="hljs"> <code class="cpp">OdInt64 appendTriangleContainer( OdGiIntersectTrianglesVector* pContainer ); </code></pre> <p>Adds a triangle container to the detector and returns its internal ID.</p> <pre class="hljs"> <code class="cpp">void processTrianglesIntoSpaceTree(OdInt64 objID, bool bOtherObjectsProcessed); </code></pre> <p>This method processes a specified triangle container into the detector space tree.</p> <pre class="hljs"> <code class="cpp">OdInt64 addContainerToBeTested( OdInt64 containerID ); </code></pre> <p>This method marks the specified triangle container as “container to be tested.”</p> <pre class="hljs"> <code class="cpp">void clearContainersToBeTested();</code></pre> <p>This method unmarks all “container to be tested” containers.</p> <pre class="hljs"> <code class="cpp">void detectCollisions( OdInt64 containerID, const OdGeExtents3d &amp;extents ); </code></pre> <p>This method performs collision detection between a specified triangle 
container and all triangle containers that are marked as “container to be tested.” The input extents are the extents of the specified triangle container.</p> <pre class="hljs"> <code class="cpp">void getCollisions( OdList&lt; OdInt64 &gt;&amp; result ); </code></pre> <p>This method fills the input list with the results of the last detectCollisions call, using the internal IDs of the triangle containers that were detected as colliding.</p> <h4>Using OdGiCollisionDetector</h4> <p>The collision detector mechanism detects collisions between a specified triangle container and all triangle containers that are marked as “container to be tested.” All marked containers should be processed into the space tree before a detectCollisions call. To use OdGiCollisionDetector:</p> <ol><li style="font-style: italic">Generate triangle containers.</li> <li style="font-style: italic">Add all containers for collision detection to the detector:<br /> pDetector-&gt;appendTriangleContainer(pContainer);</li> <li style="font-style: italic">Manually process each container marked as “to be tested” into the space tree:<br /> pDetector-&gt;processTrianglesIntoSpaceTree( id, false );<br /> pDetector-&gt;addContainerToBeTested( id );</li> <li style="font-style: italic">Call detectCollisions and process its results for each container that needs to be tested for collisions with marked containers:<br /> OdList&lt; OdInt64 &gt; results;<br /> pDetector-&gt;detectCollisions(id, container_extents);<br /> pDetector-&gt;getCollisions( results );</li> </ol></div> <span><span lang="" about="/user/4492" typeof="schema:Person" property="schema:name" datatype="">Egor Slupko</span></span> <span>Thu, 11/29/2018 - 10:08</span> Collision Detection in ODA Visualize (Part 2 of 3) <div><p>This article is part of a series of articles about the collision detection mechanism in ODA Visualize. 
For the previous article, see <a href="">Part 1</a>.</p> <h4>Trying out OdGsView::collide</h4> <p>The ODA Drawings Debug application contains two commands that use collision detection: “CollideAll” and “Collide”. Their implementation can be found in the TD_DrawingsExamplesCommon project in the classes OdExCollideAllCmd and OdExCollideCmd.</p> <p>The CollideAll command is implemented in the OdExCollideAllCmd class. This command detects collisions between all drawables in the view.</p> <p>First it implements an inheritor of OdGsCollisionDetectionReactor that copies paths in OdArray:</p> <pre class="hljs"> <code class="cpp">class OdExCollisionDetectionReactor : public OdGsCollisionDetectionReactor { OdArray&lt; OdExCollideGsPath* &gt; m_pathes; public: OdExCollisionDetectionReactor() { }; ~OdExCollisionDetectionReactor() { } virtual OdUInt32 collisionDetected(const OdGiPathNode* pPathNode1, const OdGiPathNode* pPathNode2 ) { OdExCollideGsPath* p1 = fromGiPath( pPathNode1 ); OdExCollideGsPath* p2 = fromGiPath( pPathNode2 ); m_pathes.push_back( p1 ); m_pathes.push_back( p2 ); return OdUInt32(OdGsCollisionDetectionReactor::kContinue); } OdArray&lt; OdExCollideGsPath* &gt;&amp; pathes() {return m_pathes; } }; </code></pre> <p>Depending on user input it sets up OdGsCollisionDetectionContext to detect only intersections or both intersections and touching:</p> <pre class="hljs"> <code class="cpp">int nChoise = pIO-&gt;getInt( OD_T("Input 1 to detect only intersections, any other to detect all"), 0, 0 ); OdGsCollisionDetectionContext cdCtx; cdCtx.setIntersectionOnly( nChoise == 1 ); </code></pre> <p>Then it calls the collision detection method and collects detected paths from the reactor:</p> <pre class="hljs"> <code class="cpp">pView-&gt;collide( NULL, 0, &amp;reactor, NULL, 0, &amp;cdCtx ); OdArray&lt; OdExCollideGsPath* &gt;&amp; pathes = reactor.pathes(); </code></pre> <p>Next it highlights all paths and un-highlights those after user input:</p> <pre class="hljs"> <code 
class="cpp">for( OdUInt32 i = 0; i &lt; pathes.size(); ++i ) { const OdGiPathNode* p = &amp;(pathes[i]-&gt;operator const OdGiPathNode &amp;()); pModel-&gt;highlight( *p ); } pIO-&gt;getInt( OD_T("Specify any number to exit"), 0, 0 ); for( OdUInt32 i = 0; i &lt; pathes.size(); ++i ) { const OdGiPathNode* p = &amp;(pathes[i]-&gt;operator const OdGiPathNode &amp;()); pModel-&gt;highlight( *p, false ); delete pathes[i]; } pathes.clear(); </code></pre> <p>The result of performing this command:</p> <img alt="image01" data-entity-type="file" data-entity-uuid="9f2c2970-f8e4-4fb6-a7e3-30ff282ae128" src="/files/inline-images/image01_0.jpg" class="align-center" /><p>The Collide command is implemented in the OdExCollideCmd class. This command is based on the Move command and detects collisions between selected drawables and all other drawables in the view.</p> <p>The OdGsCollisionDetectionReactor implementation copies only the second OdGiPathNode since the first node is one of the input nodes:</p> <pre class="hljs"> <code class="cpp">class OdExCollisionDetectionReactor : public OdGsCollisionDetectionReactor { OdArray&lt; OdExCollideGsPath* &gt; m_pathes; public: OdExCollisionDetectionReactor() { }; ~OdExCollisionDetectionReactor() { } virtual OdUInt32 collisionDetected(const OdGiPathNode* /*pPathNode1*/, const OdGiPathNode* pPathNode2 ) { OdExCollideGsPath* p = fromGiPath( pPathNode2 ); if( p || pPathNode2-&gt;persistentDrawableId() ) { m_pathes.push_back(p); } return OdUInt32(OdGsCollisionDetectionReactor::kContinue); } OdArray&lt; OdExCollideGsPath* &gt;&amp; pathes() { return m_pathes; } }; </code></pre> <p>When the command is executed, it asks the user to select some objects (or use already selected objects) for input, creates an instance of its own tracker and executes the pIO-&gt;getPoint method with a tracker:</p> <pre class="hljs"> <code class="cpp">OdDbSelectionSetPtr pSSet = pIO-&gt;select(L"Collide: Select objects to be checked:", OdEd::kSelAllowObjects | 
OdEd::kSelAllowSubents | OdEd::kSelLeaveHighlighted); if (!pSSet-&gt;numEntities()) throw OdEdCancel(); OdExTransactionSaver saver( pDb ); saver.startTransaction(); OdGePoint3d ptBase = pIO-&gt;getPoint(L"Collide: Specify base point:"); CollideMoveTracker tracker( ptBase, pSSet, pDb, pView ); OdGePoint3d ptOffset = pIO-&gt;getPoint(L"Collide: Specify second point:", OdEd::kGdsFromLastPoint | OdEd::kGptRubberBand, 0, OdString::kEmpty, &amp;tracker); </code></pre> <p>This tracker creates an OdGiPathNode representation of selected objects and for each mouse move event uses it as an input list for the OdGsView::collide method:</p> <pre class="hljs"> <code class="cpp">OdExCollisionDetectionReactor reactor; m_pView-&gt;collide( inputArray.asArrayPtr(), inputArray.size(), &amp;reactor, NULL, 0 ); highlight( reactor.pathes() ); </code></pre> <p>After the OdGsView::collide method, it un-highlights already highlighted objects and highlights newly detected collisions:</p> <pre class="hljs"> <code class="cpp">void CollideMoveTracker::highlight( OdArray&lt; OdExCollideGsPath* &gt;&amp; newPathes ) {
  //1) Unhighlight old paths
  if( !m_prevHLPathes.empty() ) { for( OdUInt32 i = 0; i &lt; m_prevHLPathes.size(); ++i ) { m_pModel-&gt;highlight( m_prevHLPathes[i]-&gt;operator const OdGiPathNode &amp;(), false ); delete m_prevHLPathes[i]; } m_prevHLPathes.clear(); }
  //2) Highlight new paths
  for( OdUInt32 i = 0; i &lt; newPathes.size(); ++i ) { m_pModel-&gt;highlight( newPathes[i]-&gt;operator const OdGiPathNode &amp;(), true ); m_prevHLPathes.push_back( newPathes[i] ); }
} </code></pre> <p>This command allows moving entities and seeing whether they have collisions with each other:</p> <img alt="image02" data-entity-type="file" data-entity-uuid="8edbdbdb-dd2f-432c-b0ca-f36a72869c53" src="/files/inline-images/image02.jpg" class="align-center" /><p> </p> <img alt="image03" data-entity-type="file" data-entity-uuid="e270f1f2-92fd-4a5a-ac32-9228a6f78468" 
src="/files/inline-images/image03.jpg" class="align-center" /><p> </p> <p>The next article in this series will describe OdGiCollideProc and OdGiCollisionDetector.</p> </div> <span><span lang="" about="/user/4492" typeof="schema:Person" property="schema:name" datatype="">Egor Slupko</span></span> <span>Thu, 11/15/2018 - 09:38</span> Setting View-Specific Graphics in BIM <div><p>Using ODA BIM SDK, it’s easy to take a view and override the display of face and edge colors, fill patterns, lineweights and other parameters. Overrides are applied to elements, categories of elements, and elements passed through filter conditions.</p> <p>The selected display mode of the view affects the overrides. In shaded mode, overrides are applied to faces and edges even if a material is set, but for faces in realistic mode, the textures of materials take priority.</p> <p>The OdBmDBView class has the following properties to access the data of overridden graphics:</p> <ul><li style="font-style: italic">OdBmAdHocOverridesPtr getAdHocOverrides() const;</li> <li style="font-style: italic">OdBmFilterOverridesPtr getFilterOverrides() const;</li> <li style="font-style: italic">OdBmGraphicOverridesPtr getGraphicOverrides() const;</li> </ul><p>The resulting objects contain a map of element, category, or filter condition IDs and overridden graphics data.</p> <p>To simplify access to overridden graphics, OdBmDBView has three methods:</p> <ul><li style="font-style: italic">OdBmOverrideGraphicSettingsPtr getCategoryOverrides(const OdBm::BuiltInCategory::Enum&amp;) const<br /> Returns graphic overrides for a category in the view. Input parameter is the built-in category ID.</li> <li style="font-style: italic">OdBmOverrideGraphicSettingsPtr getElementOverrides(const OdBmObjectId&amp;) const<br /> Returns graphic overrides for an element in the view. 
Input parameter is the element ID.</li> <li style="font-style: italic">OdBmOverrideGraphicSettingsPtr getFilterOverrides(const OdBmObjectId&amp;) const;<br /> Returns graphic overrides for a filter in the view. Input parameter is the filter element ID.</li> </ul><p>These methods return an empty pointer if there is no graphics override for the passed parameters.</p> <p>The OdBmOverrideGraphicSettings class has methods to access data of graphics overrides for faces and edges:</p> <ul><li style="font-style: italic">OdInt32 getDetailLevel() const; // returns overridden detail level in view</li> <li style="font-style: italic">bool getHalftone() const; // returns value of the halftone override</li> <li style="font-style: italic">bool getProjFillPatternVisible() const; // returns visibility of fill pattern overridden for faces</li> <li style="font-style: italic">bool getCutFillPatternVisible() const; // returns visibility of the cut surface fill pattern</li> </ul><p>The next methods return OdBmObjectPtr, which should be cast to appropriate objects:</p> <ul><li style="font-style: italic">OdBmObjectPtr getProjPenNumber() const; // returns OdBmGPenNumberOverriderPtr - the projection surface lineweight</li> <li style="font-style: italic">OdBmObjectPtr getCutPenNumber() const; // returns OdBmGPenNumberOverriderPtr - the cut surface lineweight</li> <li style="font-style: italic">OdBmObjectPtr getProjLinePattern() const; // returns OdBmGLinePatternOverriderPtr - the projection surface line pattern</li> <li style="font-style: italic">OdBmObjectPtr getCutLinePattern() const; // returns OdBmGLinePatternOverriderPtr - the cut surface line pattern</li> <li style="font-style: italic">OdBmObjectPtr getProjPenColor() const; // returns OdBmGStyleColorOverriderPtr - the projection surface line color</li> <li style="font-style: italic">OdBmObjectPtr getCutPenColor() const; // returns OdBmGStyleColorOverriderPtr - the cut surface line color</li> <li style="font-style: italic">OdBmObjectPtr 
getProjFillPattern() const; // returns OdBmGFillPatternOverriderPtr - the projection surface fill pattern</li> <li style="font-style: italic">OdBmObjectPtr getCutFillPattern() const; // returns OdBmGFillPatternOverriderPtr - the cut surface fill pattern</li> <li style="font-style: italic">OdBmObjectPtr getProjFillColor() const; // returns OdBmGFillColorOverriderPtr - the projection surface fill color</li> <li style="font-style: italic">OdBmObjectPtr getCutFillColor() const; // returns OdBmGFillColorOverriderPtr - the cut surface fill color</li> <li style="font-style: italic">OdBmGSurfacesTransparencyOverriderPtr getSurfacesTransparency() const; // returns the projection and cut surface transparency</li> </ul><h4>Combining graphics overrides in the view</h4> <p>If an element’s graphic overrides are set by element, category and filter condition, the resulting overrides should be combined using this priority: element, category, filter.</p> </div> <span><span lang="" about="/user/2111" typeof="schema:Person" property="schema:name" datatype="">Evgeniy Tikhonov</span></span> <span>Thu, 11/08/2018 - 09:23</span> Collision Detection in ODA Visualize (Part 1 of 3) <div><p>ODA Visualize contains a graphics system with a collision detection mechanism. It also contains an API that allows developers to create custom collision detection functionality. The current collision detection implementation processes triangular representations of entities and does not detect collisions between line primitives (lines, curves, etc.).</p> <p>ODA’s collision detection functionality is carried out by OdGsView::collide. 
This call modifies a geometry conveyor by adding a new collision detection conveyor node that implements an OdGiCollideProc interface:</p> <img alt="image1" data-entity-type="file" data-entity-uuid="aa48f4c3-81c1-4d1a-be46-a473678cb553" src="/files/inline-images/image002_14.jpg" class="align-center" /><p>This node collects triangular representations of entities and detects collisions between them using OdGiCollisionDetector. All detected collisions are processed by an implementation of OdGsCollisionDetectionReactor.</p> <p>Triangle collection goes through a vectorization process. However, if a device supports the useMetafileAsGeometry option, triangles are collected from metafiles, which increases performance.</p> <h4>Overview of OdGsView::collide</h4> <p>The main collision detection method is described in Gs.h:</p> <pre class="hljs"> <code class="cpp">virtual void collide( OdGiPathNode const*const* pInputList, OdUInt32 nInputListSize, OdGsCollisionDetectionReactor* pReactor, OdGiPathNode const*const* pCollisionWithList = NULL, OdUInt32 nCollisionWithListSize = 0, const OdGsCollisionDetectionContext* pCtx = NULL ) = 0; </code></pre> <ul><li style="font-style: italic">OdGiPathNode const*const* pInputList — List of GiPathNode that represents input entities; can be NULL.</li> <li style="font-style: italic">OdUInt32 nInputListSize — Size of the input list; can be 0.</li> <li style="font-style: italic">OdGsCollisionDetectionReactor* pReactor — Pointer to the reactor that processes detected collisions; cannot be NULL.</li> <li style="font-style: italic">OdGiPathNode const*const* pCollisionWithList — List of GiPathNode that represents entities to check with the input list for collisions; can be NULL.</li> <li style="font-style: italic">OdUInt32 nCollisionWithListSize — Size of the list; can be 0.</li> <li style="font-style: italic">const OdGsCollisionDetectionContext* pCtx — Pointer to the context that provides additional collision detection properties; can be NULL.</li> </ul><p>Based 
on this there are three possible ways to use the OdGsView::collide method:</p> <ol><li style="font-style: italic">Detect collisions between all entities in the view: <pre class="hljs"> <code class="cpp">pView-&gt;collide( NULL, 0, &amp;reactor );</code></pre> </li> <li style="font-style: italic">Detect collisions between specified entities and all other entities: <pre class="hljs"> <code class="cpp">pView-&gt;collide( pList, nListSize, &amp;reactor );</code></pre> </li> <li style="font-style: italic">Detect collisions between entities from two different lists: <pre class="hljs"> <code class="cpp">pView-&gt;collide( pList1, nList1Size, &amp;reactor, pList2, nList2Size );</code></pre> </li> </ol><h4>Overview of OdGsCollisionDetectionReactor</h4> <p>The reactor interface is described in Gs.h and contains only one method:</p> <pre class="hljs"> <code class="cpp">virtual OdUInt32 collisionDetected(const OdGiPathNode* pPathNode1, const OdGiPathNode* pPathNode2)</code></pre> <p>This method is called for each detected collision. The input parameters are two OdGiPathNode values that represent colliding entities. The return value must be one of two values:</p> <ul><li style="font-style: italic">OdGsCollisionDetectionReactor::kContinue continues collision detection between other entities.</li> <li style="font-style: italic">OdGsCollisionDetectionReactor::kBreak interrupts the collision detection process. This can be useful for checking only whether collisions exist but not which entities collide.</li> </ul><p>Note that transferred pointers to OdGiPathNode may be destroyed after the OdGsView::collide call is complete, so paths should be copied if processing them is postponed.</p> <h4>Overview of OdGsCollisionDetectionContext</h4> <p>The context is described in Gs.h. Currently it supports only one option: OdGsCollisionDetectionContext::kIntersectOnly. If this option is enabled, entities that touch each other are not detected as a collision. 
Such entities are detected as a collision if this option is disabled or if a pointer to the context in the OdGsView::collide call is NULL.</p> <p>The next article in this series will describe how to try out OdGsView::collide.</p> </div> <span><span lang="" about="/user/4492" typeof="schema:Person" property="schema:name" datatype="">Egor Slupko</span></span> <span>Fri, 10/12/2018 - 13:21</span> Teigha Support of OBJ Format <div><p>OBJ is a file format developed by Wavefront that is commonly used to store 3D model geometric definitions. It is mainly used as an exchange format between different 3D applications and has become one of the de facto standards for model interoperability. Many 3D model libraries provide an option to download their models in this neutral format.</p> <p>Models stored in .obj files can be enriched with detailed specifications of materials in .mtl files, which are referenced from .obj files, so exported models will not lose their appearance after conversion to OBJ. Materials are assigned to faces, and .mtl files are always separate from .obj files, so one .mtl file can be referenced from many .obj files. Or, if materials aren’t needed, .mtl files can be simply deleted. The ability to import .obj files using Teigha also gives users access to a whole set of models from popular web-based 3D model libraries (or closed libraries from manufacturers), which can be quickly used inside their drawings as additional/decorative design elements, for example.</p> <p>OBJ is an open text format, and the structure of the file isn’t hard-fixed. Several .obj variations and even dialects can be found on the web. 
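For reference, a minimal conforming file exercises only a handful of tags (an illustrative fragment; the group, material, and file names are made up for the example):</p> <pre class="hljs"> <code class="plaintext"># Illustrative .obj fragment
mtllib example.mtl   # reference a material library (optional)
g corner             # start a named group
v 0.0 0.0 0.0        # vertices (referenced by 1-based indices)
v 1.0 0.0 0.0
v 0.0 1.0 0.0
v 0.0 0.0 1.0
usemtl red           # material defined in example.mtl
f 1 2 3              # triangular faces
f 1 4 2
</code></pre> <p>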
Teigha tries to support all found deviations from the common specification when reading and parsing files.</p> <h4>Using the module in Teigha applications</h4> <p>The OBJ import/export functionality is implemented inside of the OBJToolkit project, which is part of the Components library found in the Components/OBJToolkit directory. The path in the project solution is the same. A single header file OBJToolkit.h is available for Teigha users; it contains all interfaces, variables and definitions to start working with .obj and .mtl files. The default .tx module name OdObjToolkitModuleName (“OBJToolkit.tx”) is defined inside the Kernel\Include\OdModuleNames.h header and is available by including this header. However, loading of the module can be performed by calling the loadObjToolkitModule() function which is also implemented in OBJToolkit.h. Here is the fastest way to start supporting .obj in your application:</p> <pre class="hljs"> <code class="cpp">OdObjToolkitModulePtr module = loadObjToolkitModule(); </code></pre> <p>The main module’s exported OdObjToolkitModule class has several methods:</p> <ul><li style="font-style: italic">OdObjImportPtr createObjImporter() — Creates an object that is an implementation of an .obj file loader in the OdObjDb structure (keeps imported geometric data) and related OdMtlDb (stores material definitions assigned with the imported .obj file).</li> <li style="font-style: italic">OdObjExportPtr createObjExporter() — Creates an object that can save an assigned OdObjDb structure into the .obj file.</li> <li style="font-style: italic">OdMtlImportPtr createMtlImporter() — Returns a pointer to the created object that can load an .mtl file with materials information into OdMtlDb.</li> <li style="font-style: italic">OdMtlExportPtr createMtlExporter() — Creates and returns a pointer to the object that exports an assigned OdMtlDb into an .mtl file.</li> <li style="font-style: italic">OdResult exportObj(OdDbBaseDatabase *pDb, const OdGiDrawable 
*pEntity, const OdString &amp;fileName, OdObjExportOptions *options = NULL) — Exports a defined OdGiDrawable object into an .obj file. This method is used in the ExCustObjs sample command “OBJExport” which can be called from OdaMfcApp after loading the ExCustObjs.tx module (Drawings/Examples).</li> <li style="font-style: italic">OdResult exportObj(OdDbBaseDatabase *pDb, const OdString &amp;fileName, OdObjExportOptions *options = NULL) — Exports a defined view into an .obj file. This method is also used to export a view from Teigha Visualize Viewer.</li> </ul><p>As soon as an .obj file is imported into OdObjDb, the following structure can be used to access the imported entities:</p> <ul><li style="font-style: italic">Grouping is performed by the tag ‘g’ (group) of the .obj syntax. Each group can keep objects such as point sets, polylines, curves (only b-spline curves are supported currently) and faces. A point set can be used to define point clouds, for example.</li> <li style="font-style: italic">Each face is a separate entity from the point of view of the OBJ format, so users can access each face separately using OdObjEntityIterator, which is created when the skipFace parameter is equal to false.</li> </ul><p>Functionality that collects all faces of a group in a single shell to get a complex mesh model is also implemented in the getGroupShell method of the OdObjDb class.</p> <p>In short, the hierarchy for an .obj file object model in Teigha can be described as follows:</p> <ul><li style="font-style: italic">OdObjDb keeps a set of named groups.</li> <li style="font-style: italic">Each group keeps several objects (point sets, polylines, curves, faces).</li> <li style="font-style: italic">A single shell from all faces of the group can be created using the getGroupShell method of OdObjDb.</li> </ul><p>Here is sample code of importing an .obj file and walking through all the imported groups and their entities (similar code can be found in the 
Components/Visualize/Examples/Obj2Visualize project):</p> <pre class="hljs"> <code class="cpp">OdObjImportPtr pObjImporter = OBJToolkit::createObjImporter();
OdResult res = pObjImporter-&gt;importFile(filePath);
OdObjDbPtr pObjDb = pObjImporter-&gt;getObjDb();
OdString modelName = pObjImporter-&gt;getDrawingName();

OdObjGroupIteratorPtr itGroups = pObjDb-&gt;createGroupsIterator();
while (itGroups-&gt;done() == false)
{
  OdObjGroupPtr pGroup = itGroups-&gt;element();
  if (pGroup.isNull() == false &amp;&amp; pGroup-&gt;isEmpty() == false)
  {
    if (eOk == pObjDb-&gt;getGroupShell(pGroup, vertices, texCoo, normals, faces, faceMtls)
        &amp;&amp; faces.size() &gt; 0)
    {
      // process shell data
    }

    // walk through all other entities in the group
    OdObjEntityIteratorPtr it = pGroup-&gt;createEntitiesIterator();
    while (it-&gt;done() == false)
    {
      const OdObjGeomEntity *pEntity = it-&gt;element();
      const eOBJEntType entType = pEntity-&gt;type();
      switch (entType)
      {
      case eOBJEntPoints:
        {
          if (!pEntity-&gt;m_vind.isEmpty())
          {
            // process set of points
          }
        }
        break;
      case eOBJEntLine:
        {
          if (!pEntity-&gt;m_vind.isEmpty())
          {
            // process polyline
          }
        }
        break;
      case eOBJEntCurve:
        {
          // process curve
        }
        break;
      }
      it-&gt;step();
    }
  }
  itGroups-&gt;step();
}
</code></pre> <p>Everything is quite simple with OBJ!</p> </div> <span><span lang="" about="/user/6459" typeof="schema:Person" property="schema:name" datatype="">Igor Egorychev</span></span> <span>Thu, 07/26/2018 - 15:20</span> Thu, 26 Jul 2018 12:20:16 +0000 Igor Egorychev 2768 at What is Open Design Visualize? <div><p>The rendering capabilities of the Drawings product are well known, but there is another Open Design product (already available as a part of the basic subscription) that leverages the Drawings visualization and enhances its capabilities.</p> <p>This product is Visualize, which implements scene management and handles a range of graphics entities and interaction algorithms.
Being a general purpose graphics engine, it provides quick rendering of various geometry data (including custom data) with different effects and is not attached to a particular file format, but instead provides an abstract interface so that format-specific issues can be implemented in extensions.</p> <p>Visualize includes a set of such extensions (that may be used as an example) for working with a wide variety of formats, including: .sat, .dgn (alpha version), .dwg, .obj, .prc, Parasolid, .rcs, and .stl.</p> <p>These are all expected features from a general purpose graphics engine, but with Teigha Visualize there’s more.</p> <ul><li><strong>GUI-style editing</strong> - Edit simple geometry objects such as lines, text and triangles (but not more complex objects such as dimensions).</li> <li><strong>Conversion</strong> - Load a file and save the data to .dwg, .pdf, .obj, and .json file formats. There will be side-effects in some cases, similar to GUI-style editing, for example getting a “triangle-line-text” set instead of a dimension. But there is a variety of conversion opportunities. You can also implement your own save procedure, enlarging the list of available conversions.</li> <li><strong>Web applications</strong> - Use Teigha Visualize for web applications with a proof-of-concept example found in Teigha Cloud.</li> <li><strong>.NET wrappers</strong> - Eases the use of Teigha Visualize in WPF applications (Windows Presentation Foundation applications). A sample WPF application is located in the Teigha Visualize archive.</li> </ul><p>Drawings users can easily try Visualize, as it is part of the basic subscription. Not a member? Try it! 
Similar to other Open Design products, a trial version of Visualize is available.</p> <img alt="image1" data-entity-type="file" data-entity-uuid="b7e6ce10-8eb9-43ca-9952-cdab3e76f107" height="504" src="/files/inline-images/reductor.png" width="885" class="align-center" /><p> </p> </div> <span><span lang="" about="/user/2095" typeof="schema:Person" property="schema:name" datatype="">Alexander Borovikov</span></span> <span>Thu, 06/07/2018 - 13:34</span> Thu, 07 Jun 2018 10:34:41 +0000 Alexander Borovikov 2696 at How to use Markups (Redlines) in Teigha Visualize How to use Markups (Redlines) in Teigha Visualize <div><p>Teigha Visualize supports features that help with reviewing and editing your files, in particular, creating and saving markups (redlines). Here are some examples of drawings that contain markups:</p> <img alt="image1" data-entity-type="file" data-entity-uuid="76ac945c-b472-4fda-bad5-665052a50fdf" src="/files/inline-images/image002_13.jpg" class="align-center" /><img alt="image2" data-entity-type="file" data-entity-uuid="29c55beb-0b60-4d9a-8b7a-e4c5d3c176ec" src="/files/inline-images/image004_6.jpg" class="align-center" /><img alt="image3" data-entity-type="file" data-entity-uuid="2ecbc113-7ab1-4fda-8805-66d54104d56b" src="/files/inline-images/image006_3.jpg" class="align-center" /><p>In the Teigha Visualize Viewer sample, you can try it for yourself. 
Use the Markups toolbar in the main window to create different types of markups:</p> <ul><li style="font-style: italic">Rect markup — Draw a rectangle markup;</li> <li style="font-style: italic">Circle markup — Draw a circle markup;</li> <li style="font-style: italic">Handle markup — Draw a free handle markup;</li> <li style="font-style: italic">Cloud markup — Draw a cloud markup;</li> <li style="font-style: italic">Text markup — Draw a text markup.</li> </ul><img alt="image4" data-entity-type="file" data-entity-uuid="932b1671-7d10-4460-ac02-bf30b6c88932" src="/files/inline-images/image007_4.png" class="align-center" /><p>Then use the two tools for saving and loading markup views:</p> <ul><li style="font-style: italic">Save view — Save the current view with markups that you’ve drawn;</li> <li style="font-style: italic">Load view — Load a view that contains markups.</li> </ul><p>To save markups that you’ve drawn:</p> <ol><li style="font-style: italic">Click Save; <img alt="image5" data-entity-type="file" data-entity-uuid="8fed1378-70ea-4977-b118-acee8be0d862" src="/files/inline-images/image009_3.png" class="align-center" /><p> </p> </li> <li style="font-style: italic">Enter the name for the saved markup view.</li> </ol><p>Here’s how it works:</p> <p>Saved markups use a structure that contains parameters of the view: position, target, upVector, fieldWidth, fieldHeight, projection type, and render mode. These parameters are received from the view when the “Save” button is pressed.
The structure is then written as binary user data to a new entity with the entered name.</p> <pre class="hljs"> <code class="cpp">// create and append the binary data to the entity
OdTvByteUserData *data = new OdTvByteUserData(params, sizeof(struct SaveViewParams),
                                              OdTvByteUserData::kOwn, true);
pEn-&gt;appendUserData(data, m_appTvId);
</code></pre> <p>To load saved views that contain markups:</p> <ol><li style="font-style: italic">Click Load; <img alt="image6" data-entity-type="file" data-entity-uuid="ed926d3a-ff59-46e8-aa1c-1ab9e9d798bb" src="/files/inline-images/image010_0.png" class="align-center" /><p> </p> </li> <li style="font-style: italic">Select the view to load, then click Load. (You can also click Delete to remove the selected markup view.)</li> </ol><p>Here’s how it works:</p> <p>When the Load button is pressed, the binary data is first read from the entity with the selected name, then applied to the current view, and finally the entity’s visibility is turned on.</p> <h4>Implementation details</h4> <p>Here is a diagram of the implementation structure of markups:</p> <img alt="image7" data-entity-type="file" data-entity-uuid="1f15779b-147b-4bab-a88c-83bb4a99fa49" src="/files/inline-images/image012.jpg" class="align-center" /><p>In the database, different models are stored, one of which stores markup entities. This model has the name “$ODA_TVVIEWER_MARKUPS”. The model is created only when a markup is drawn.
In the object tree, this model appears under the “Services model.” Always check whether this model already exists:</p> <pre class="hljs"> <code class="cpp">if (m_TvMarkupsModelId.isNull())
{
  m_TvMarkupsModelId = m_TvDatabaseId.openObject(OdTv::kForWrite)
                       -&gt;createModel(OD_TV_MARKUP_MODEL, OdTvModel::kDirect);
  m_TvMarkupsModelId.openObject()-&gt;setNeedSaveInFile(true);
}
</code></pre> <p>When markup creation begins, a temporary entity “$MarkupTempEntity” is created (all such names are declared in macros with appropriate names); subsequent markup drawing writes to this entity. This entity is destroyed by almost any action (pan, orbit, zoom, etc.). The model may also contain other, previously saved entities; until the current markup is saved, work continues in the temporary entity. The following diagram shows the structure of a markup entity.</p> <img alt="image8" data-entity-type="file" data-entity-uuid="65b43f98-8e4f-41bb-953a-7f30be859c95" src="/files/inline-images/image014_0.jpg" class="align-center" /><p>Any markup entity has just five subentities.
Each subentity is a folder for a specific type of markup:</p> <ul><li style="font-style: italic">Rectangle markups — Subentity “Rectangles”;</li> <li style="font-style: italic">Circle markups — Subentity “Circles”;</li> <li style="font-style: italic">Free handle markups — Subentity “Free handles”;</li> <li style="font-style: italic">Cloud markups — Subentity “Clouds”;</li> <li style="font-style: italic">Text markups — Subentity “Texts”.</li> </ul><p>The constructor for each markup searches for the temporary entity and, if it is not found, creates a new one together with the folder subentities for the markup types.</p> <pre class="hljs"> <code class="cpp">// create the main entity if it does not exist
OdTvModelPtr modelPtr = m_tvDraggersModelId.openObject(OdTv::kForWrite);
m_entityId = findEntityByName(modelPtr-&gt;getEntitiesIterator(), OD_TV_MARKUP_TEMP_ENTITY, false);
if (m_entityId.isNull())
{
  m_entityId = modelPtr-&gt;appendEntity(OD_TV_MARKUP_TEMP_ENTITY);
  m_entityId.openObject(OdTv::kForWrite)-&gt;setColor(OdTvColorDef(255, 0, 0));
}

// create the rectangles subentity if it does not exist
m_rectFoldEntityId = findSubEntityByName(m_entityId.openObject()-&gt;getGeometryDataIterator(), OD_TV_MARKUP_RECTANGLES);
if (m_rectFoldEntityId.isNull())
  m_rectFoldEntityId = m_entityId.openObject(OdTv::kForWrite)-&gt;appendSubEntity(OD_TV_MARKUP_RECTANGLES);
</code></pre> <p>Markup folders contain subentities that have the appropriate geometry data.
Subentities with the geometry data are created using the updateFrame() methods when the create object flag is true.</p> <pre class="hljs"> <code class="cpp">//update or create entity if (bCreate) { m_rectEntityId = m_rectFoldEntityId.openAsSubEntity(OdTv::kForWrite)-&gt;appendSubEntity(); { OdTvEntityPtr entityNewPtr = m_rectEntityId.openAsSubEntity(OdTv::kForWrite); //create frame m_frameId = entityNewPtr-&gt;appendPolygon(m_pts); } } </code></pre> </div> <span><span lang="" about="/user/4943" typeof="schema:Person" property="schema:name" datatype="">Vlad Malka</span></span> <span>Thu, 05/24/2018 - 15:32</span> Thu, 24 May 2018 12:32:20 +0000 Vlad Malka 2687 at Creating a NURBS Curve Creating a NURBS Curve <div><p>With Teigha you can create Non-Uniform Rational Basis Spline (NURBS) curves using the Teigha Ge library and its classes OdGeNurbCurve2d and OdGeNurbCurve3d.</p> <p>Let’s consider a 3D version of a curve (2D curves have all the same behavior). A NURBS curve is defined by its order, a set of weighted control points, and a knot vector. This data can be set to a curve using a constructor:</p> <pre class="hljs"> <code class="cpp">OdGeNurbCurve3d( int degree, const OdGeKnotVector&amp; knots, const OdGePoint3dArray&amp; controlPoints, const OdGeDoubleArray&amp; weights, bool isPeriodic = false); </code></pre> <p>But this is “core” mathematical data describing NURBS; it is not intuitive to use it to create a shape that you might need. The other option is to create a NURBS using fit data. This is the constructor that sets the fit data:</p> <pre class="hljs"> <code class="cpp">OdGeNurbCurve3d( const OdGePoint3dArray&amp; fitPoints, const OdGeVector3d&amp; startTangent, const OdGeVector3d&amp; endTangent, bool startTangentDefined = true, bool endTangentDefined = true, const OdGeTol&amp; fitTol = OdGeContext::gTol); </code></pre> <p>Here we use fitPoints which are interpolation points that the curve will go through. This means that fit points are on the curve. 
The following picture shows the difference between fit points and control points:</p> <img alt="image1" data-entity-type="file" data-entity-uuid="d0b47d76-259e-4e8b-b94f-27101616cf9b" src="/files/inline-images/image002_12.jpg" class="align-center" /><p>Start and end tangents are vectors that define tangents at the ends of the curve. You can leave them undefined by setting the corresponding flags to ‘false’. The fit tolerance is the maximum distance that the curve may deviate from the fit points. By default it is zero and the curve passes exactly through all the fit points. The following pictures show the influence of these parameters:</p> <img alt="image2" data-entity-type="file" data-entity-uuid="f659d2b7-b679-4e29-8c82-052c26672362" src="/files/inline-images/image004_5.jpg" class="align-center" /><p align="center">End tangents are not defined</p> <img alt="image3" data-entity-type="file" data-entity-uuid="d993a3b0-56f8-447a-8cdc-ec6645d0f30c" src="/files/inline-images/image006_2.jpg" class="align-center" /><p align="center">End tangents are defined</p> <img alt="image4" data-entity-type="file" data-entity-uuid="2c0fab13-3d42-4994-b8c7-f872817bf734" src="/files/inline-images/image008_2.jpg" class="align-center" /><p align="center">Fit tolerance is defined, and the curve passes ‘near’ the fit points</p> <p>You can also create a NURBS curve from fit points with a tangent defined at each fit point. Use the following constructor:</p> <pre class="hljs"> <code class="cpp">OdGeNurbCurve3d(
  const OdGePoint3dArray&amp; fitPoints,
  const OdGeVector3dArray&amp; fitTangents,
  const OdGeTol&amp; fitTolerance = OdGeContext::gTol,
  bool isPeriodic = false);
</code></pre> <p>Fit data can also be generated even if the curve was initially created from control points and knots; use the buildFitData function.</p> <p>For database purposes, you can use the OdDbSpline class to represent a 3D NURBS curve as well.
You can create an OdGeNurbCurve3d curve of the shape you need and set it to the OdDbSpline class using the setFromOdGeCurve method.</p> </div> <span><span lang="" about="/user/2016" typeof="schema:Person" property="schema:name" datatype="">Vladimir Savelyev</span></span> <span>Thu, 05/17/2018 - 12:17</span> Thu, 17 May 2018 09:17:00 +0000 Vladimir Savelyev 2680 at Maps for Missing Texture Coordinates in Teigha Visualize <div><p>Teigha Visualize has a mapping mechanism that applies correct texture settings even when mapping coordinates are not present. In this tutorial, we’ll create a sphere and set its texture using mappers.</p> <p>Let’s see how to create a ball.</p> <p>Suppose we have a model and a database ID. Let’s get a pointer to the database, add an entity to this model, and append a sphere to it.</p> <pre class="hljs"> <code class="cpp">// Open database
OdTvDatabasePtr pTvDb = databaseId.openObject();

// Create entity
OdTvModelPtr modelPtr = modelId.openObject(OdTv::kForWrite);
OdTvEntityId entityId = modelPtr-&gt;appendEntity(OD_T("Ball"));
OdTvEntityPtr pEntity = entityId.openObject(OdTv::kForWrite);

// Append sphere to entity
pEntity-&gt;appendShellFromSphere(OdTvPoint(3., 3., 3.), 0.9, OdTvVector::kYAxis, OdTvVector::kXAxis, 50);
</code></pre> <p>Then create a material and material map, and set it to the entity:</p> <pre class="hljs"> <code class="cpp">// Create material with unique name
OdTvMaterialId materialId = pTvDb-&gt;createMaterial("Material");
OdTvMaterialPtr pMaterial = materialId.openObject(OdTv::kForWrite);

// Create material map
OdTvMaterialMap materialMap;

// Set texture path to map
materialMap.setSourceFileName("../resources/Ball.png");

// Set material map to material
pMaterial-&gt;setDiffuse(OdTvMaterialColor(OdTvColorDef(255, 0, 0)), materialMap);

// Set material to entity
pEntity-&gt;setMaterial(materialId);
</code></pre> <p>The mapping projection type is set to the entity
mapper. It shows how the texture is associated with the geometry. The entity has a default mapper with the default projection kPlanar.</p> <p>Texture without mappers:</p> <img alt="image1" data-entity-type="file" data-entity-uuid="ac1b3f38-47af-45c8-a22c-63a1f4f11d61" src="/files/inline-images/image002_11.jpg" class="align-center" /><p>In this case, the mapper type is kSphere.</p> <p>There are two kinds of mappers:</p> <ul><li style="font-style: italic">Set on the entity (OdTvMapperDef) and used for translating, rotating and scaling the texture.</li> <li style="font-style: italic">Used for the diffuse map (OdTvMapper) and used for offsetting, scaling and rotating the texture.</li> </ul><p>Each of them has a transform matrix, but these matrices have different functions. The OdTvMapperDef transform matrix is used for positioning the texture on a 3D object. The translate vector moves the texture UV zero position in the 3D system; for a sphere, for example, it can be the center of the sphere. The texture can also be rotated around an axis by setting data in the corresponding matrix positions. The transform matrix of the entity mapper is an ordinary transformation matrix, so its elements have the same meaning as in a usual 3D matrix.</p> <p>The transform matrix in OdTvMapper is used for offsetting, scaling and rotating the texture on the object. The [0][0] element of this matrix holds 1 / u_scale, and the [1][1] element holds 1 / v_scale; to stretch the texture exactly once over the sphere, both [0][0] and [1][1] should be 1. The texture offset is stored in [0][3] for X and in [1][3] for Y; the X offset is calculated as x * 1 / u_scale, and the Y offset as y * 1 / v_scale. To rotate the texture by an angle a, set [0][0] = cos(a), [0][1] = sin(a) (clockwise) or –sin(a) (counterclockwise), [1][0] = –sin(a) (clockwise) or sin(a) (counterclockwise), and [1][1] = cos(a).
Offsetting makes sense only if the texture scaling is greater than 1.</p> <p>There are a number of methods to make using mappers easy:</p> <ul><li style="font-style: italic">OdTvMapperDef::translate() sets the texture position.</li> <li style="font-style: italic">OdTvMapperDef::rotate() rotates the texture.</li> <li style="font-style: italic">OdTvMapperDef::scale() scales the texture by axis.</li> <li style="font-style: italic">OdTvTextureMapper::setSampleSize() sets the texture sample size.</li> <li style="font-style: italic">OdTvTextureMapper::setOffset() offsets the texture.</li> <li style="font-style: italic">OdTvTextureMapper::setRotation() rotates the texture.</li> </ul><pre class="hljs"> <code class="cpp">// Create material with unique name OdTvMaterialId materialId = pTvDb-&gt;createMaterial("Material"); OdTvMaterialPtr pMaterial = materialId.openObject(OdTv::kForWrite); // Create material map OdTvMaterialMap materialMap; // Create path to texture image file and set it to map materialMap.setSourceFileName(“../resources/Ball.png”); // Create entity mapper OdTvMapperDef entityMapper; entityMapper.setProjection(OdTvMapperDef::kSphere); // Translate and rotate texture entityMapper.translate(3., 3., 3.); entityMapper.rotate(0., OdaPI / 2., OdaPI / 2.); // Create material mapper, set texture sample size and set it to map OdTvTextureMapper materialMapper; materialMapper.setSampleSize(1., 1.); materialMap.setMapper(materialMapper); // Set mapper to entity pEntity-&gt;setEntityMaterialMapper(entityMapper); // Set material map to material pMaterial-&gt;setDiffuse(OdTvMaterialColor(OdTvColorDef(255, 0, 0)), materialMap); // Set material to mapper pEntity-&gt;setMaterial(materialId); </code></pre> <p>The result is the following texture with the kSphere mapper:</p> <img alt="image2" data-entity-type="file" data-entity-uuid="91131b84-33b4-4596-a302-31b3e577e13e" src="/files/inline-images/image004_4.jpg" class="align-center" /><p> </p> </div> <span><span lang="" 
about="/user/3819" typeof="schema:Person" property="schema:name" datatype="">Alexander Fedorov</span></span> <span>Thu, 05/10/2018 - 22:51</span> Thu, 10 May 2018 19:51:56 +0000 Alexander Fedorov 2674 at Working with Surface Fit Polygon Meshes <div><p>A polygon mesh can be of the surface fit type, which defines the polygon mesh using an approximation method to create a smoother object. In Teigha, polygon mesh surface fit is provided by the following function:</p> <pre class="hljs"> <code class="cpp">void OdDbPolygonMesh::surfaceFit(
  OdDb::PolyMeshType surfType,
  OdInt16 surfu,
  OdInt16 surfv);
</code></pre> <p>The function fits a surface to the polygon mesh using the surfType, surfU, and surfV argument values. Then the new vertices of the ‘k3dFitVertex’ type are calculated on that surface. This is the same as the common PEDIT command and its "SMOOTH SURFACE" option in other CAD systems, except that Teigha’s method uses the control values that are passed in, whereas the PEDIT command uses those in the database.</p> <p>The parameter surfType can be one of the following:</p> <ul><li style="font-style: italic">kQuadSurfaceMesh</li> <li style="font-style: italic">kCubicSurfaceMesh</li> <li style="font-style: italic">kBezierSurfaceMesh</li> </ul><p>This parameter selects the order (in both U and V) of the underlying NURBS surface that is used to compute the “smoothed” mesh points.
Note that for the kBezierSurfaceMesh option, the order of the NURBS is the mesh size by M and N respectively.</p> <p>The parameters surfU and surfV define the number of rows and columns of the “smoothed” mesh.</p> <h4>Code example</h4> <pre class="hljs"> <code class="cpp">OdDbPolygonMeshPtr pMesh = ...; // create some mesh
pMesh-&gt;surfaceFit(OdDb::kCubicSurfaceMesh, 12, 15);
for (OdDbObjectIteratorPtr pIter = pMesh-&gt;vertexIterator(); !pIter-&gt;done(); pIter-&gt;step())
{
  OdDbPolygonMeshVertexPtr pVert = pIter-&gt;entity(OdDb::kForRead);
  OdDb::Vertex3dType type = pVert-&gt;vertexType();
  if (type == OdDb::k3dFitVertex)
  {
    const OdGePoint3d pt = pVert-&gt;position();
    ... // use the fit vertex here
  }
}
</code></pre> <p>Examples of initial mesh shapes and smooth results:</p> <img alt="image1" data-entity-type="file" data-entity-uuid="0b943f22-2fca-4c61-ad74-03b4aeea5436" src="/files/inline-images/surfaces.gif" class="align-center" /><p> </p> </div> <span><span lang="" about="/user/2016" typeof="schema:Person" property="schema:name" datatype="">Vladimir Savelyev</span></span> <span>Thu, 04/19/2018 - 12:17</span> Thu, 19 Apr 2018 09:17:02 +0000 Vladimir Savelyev 2652 at Overriding Visual Styles of Elements in Teigha BIM <div><p>It is well known that materials applied to elements are different in shading and realistic modes. However, this rule does not apply to some elements.</p> <p>Let’s look at the OdBmImposterLight element. The picture below shows an imposter light placed on an extrusion.
Both the light and extrusion have the same material, which has a red color for shading mode and a grey color for realistic mode.</p> <img alt="image1" data-entity-type="file" data-entity-uuid="75fc3d8e-28a0-4468-906d-9865fe04234a" src="/files/inline-images/image001_27.png" class="align-center" /><p align="center">Imposter light placed on an extrusion</p> <p>To achieve this behavior for a desired element, just implement a descendant of the OdBmOverriddenVisualStyle and OdBmOverriddenVisualStyleImpl classes.</p> <p>Here is the declaration:</p> <pre class="hljs"> <code class="cpp">class TB_DB_EXPORT OdBmOverriddenVisualStyleForImposterLight : public OdBmOverriddenVisualStyle { ODBM_DECLARE_CUSTOM_CLASS_MEMBERS(OdBmOverriddenVisualStyleForImposterLight); public: virtual void overrideByDbView(const OdBmDBViewPtr&amp;); }; class TB_DB_EXPORT OdBmOverriddenVisualStyleForImposterLightImpl : public OdBmOverriddenVisualStyleImpl { ODBM_DECLARE_CUSTOM_CLASS_IMPL_MEMBERS(OdBmOverriddenVisualStyleForImposterLight); public: OdBmOverriddenVisualStyleForImposterLightImpl(); virtual ~OdBmOverriddenVisualStyleForImposterLightImpl(){}; virtual void overrideByDbView(const OdBmDBViewPtr&amp;); }; </code></pre> <p>And the definition:</p> <pre class="hljs"> <code class="cpp">#define NEW_CONSTR2(CLASS) OdRxObjectImpl&lt;OdBmOverriddenVisualStyleForImposterLight&gt;::createObject() ODTF_DEFINE_MEMBERS2(OdBmOverriddenVisualStyleForImposterLight, OdBmOverriddenVisualStyle, NULL, NULL, OdBmOverriddenVisualStyleForImposterLight::pseudoConstructor, OD_T("OdBmOverriddenVisualStyleForImposterLight")) ODRX_DEFINE_PSEUDOCONSTRUCTOR(OdBmOverriddenVisualStyleForImposterLight, NEW_CONSTR2) OdBmOverriddenVisualStyleForImposterLight::OdBmOverriddenVisualStyleForImposterLight( OdBmOverriddenVisualStyleForImposterLightImpl* pImpl) : OdBmOverriddenVisualStyle(pImpl) { } OdBmOverriddenVisualStyleForImposterLight::OdBmOverriddenVisualStyleForImposterLight() : OdBmOverriddenVisualStyle(new 
OdBmOverriddenVisualStyleForImposterLightImpl) { } void OdBmOverriddenVisualStyleForImposterLight::overrideByDbView(const OdBmDBViewPtr&amp; view) { getImpl()-&gt;overrideByDbView(view); } OdBmOverriddenVisualStyleForImposterLightImpl::OdBmOverriddenVisualStyleForImposterLightImpl(): OdBmOverriddenVisualStyleImpl() { } void OdBmOverriddenVisualStyleForImposterLightImpl::overrideByDbView(const OdBmDBViewPtr&amp; view) { OdBmDBViewImpl* pView = OdBmSystemInternals::getImpl(view); if (pView) { OdGiVisualStyle&amp; viewStyle = pView-&gt;getVisualStyle(); (OdGiVisualStyle&amp;)m_visualStyle = viewStyle; // disable textures for imposter light - will use shaded colors in realistic mode m_visualStyle.displayStyle().setDisplaySettingsFlag(OdGiDisplayStyle::kTextures, false); } } OdUInt32 OdBmOverriddenVisualStyleImpl::subSetAttributes(OdGiDrawableTraits* traits) const { OdGiVisualStyleTraitsPtr pVsTraits = OdGiVisualStyleTraits::cast(traits); if (pVsTraits.get()) pVsTraits-&gt;setOdGiVisualStyle(m_visualStyle); return OdBmObjectImpl::subSetAttributes(traits) | OdGiDrawable::kDrawableUsesNesting; //It is container; } </code></pre> <p>Notice that you need to register the overridden visual style in the database, as required by GI:</p> <pre class="hljs"> <code class="cpp">OdBmSystemInternals::getImpl(pDb)-&gt;addElement( OdBmOverriddenVisualStyleForImposterLight::createObject(), OdBm::OverriddenVisualStyles::ImposterLight, OdBmObjectId::kNull, false); </code></pre> <p>This code, for example, can be placed inside the OdBmMaterialTableNewImpl:: setDocument() method.</p> <p>Also you need to implement the subViewportDraw() method for the desired class.</p> <pre class="hljs"> <code class="cpp">void OdBmImposterLightImpl::subViewportDraw(OdGiViewportDraw *pVd) const { OdBmViewportPtr pViewport = OdBmObjectId(pVd-&gt;viewportObjectId()).safeOpenObject(); OdBmDBViewPtr pView = pViewport-&gt;getDbViewId().safeOpenObject(); OdBmDatabase* pDb = 
static_cast&lt;OdBmDatabase*&gt;(pVd-&gt;context()-&gt;database()); OdDbStub* pVisualStyle = pDb-&gt;getObjectId(OdBm::OverriddenVisualStyles::ImposterLight); if (pVisualStyle) { OdBmOverriddenVisualStylePtr pStyle = OdBmObjectId(pVisualStyle).openObject(); pStyle-&gt;overrideByDbView(pView); pVd-&gt;subEntityTraits().setVisualStyle(pVisualStyle); } OdBmImposterLightInternalImpl::subViewportDraw(pVd); } </code></pre> <p>Done.</p> </div> <span><span lang="" about="/user/2111" typeof="schema:Person" property="schema:name" datatype="">Evgeniy Tikhonov</span></span> <span>Fri, 03/16/2018 - 12:56</span> Fri, 16 Mar 2018 09:56:07 +0000 Evgeniy Tikhonov 2617 at Teigha Cloud: Mobile clients - implementation features Teigha Cloud: Mobile clients - implementation features <div><h4>Comparing Mobile Client Requirements for Teigha Cloud</h4> <p>When creating a mobile client for Teigha Cloud, you can face peculiarities on different mobile platforms. This article describes some of the differences between iOS and Android platforms.</p> <h4>Networking</h4> <p>On different platforms, the APIs of HTTP clients are very different. The main requirement for clients is to support long-poll requests because the Teigha Cloud server uses a long-poll mechanism for sending graphical data to clients.</p> <ul><li style="font-style: italic">iOS uses NSURLSession and NSURLConnection for http/https requests.</li> <li style="font-style: italic">Android uses UrlConnection and NetworkFragment for these purposes. It is recommended to use wrappers on a default client (see below).</li> </ul><h4>OpenGL screen</h4> <p>Teigha Cloud uses Open GL ES 2.0. Your app must have GL ES2 support and the SDK of the mobile OS must have tools for working with it.</p> <ul><li style="font-style: italic">iOS uses GLKViewController and GLKView (located in GLKit) for working with GL ES2. 
GLKViewController contains an embedded GLKView.</li> <li style="font-style: italic">Android uses GLSurfaceView and GLSurfaceView.Renderer. GLSurfaceView contains methods for configuring the screen, such as the depth and color configuration and the screen update policy, and GLSurfaceView.Renderer contains the rendering configuration.</li> </ul><h4>Recommendations</h4> <p>It is recommended to use the following libraries for your mobile app:</p> <ul><li style="font-style: italic">Alamofire (iOS): REST client that supports async work and works with long-polling.</li> <li style="font-style: italic">Haneke (iOS): Tool that helps you download a preview of uploaded files.</li> <li style="font-style: italic">JASON (iOS): Fast JSON parser.</li> <li style="font-style: italic">AsyncHttpClient (Android): An async REST client that supports background work and works with long-polling.</li> <li style="font-style: italic">Picasso (Android): Downloads previews of uploaded files.</li> <li style="font-style: italic">Moshi (Android): JSON parser.</li> </ul></div> <span><span lang="" about="/user/3203" typeof="schema:Person" property="schema:name" datatype="">Maksym Kryva</span></span> <span>Tue, 12/19/2017 - 10:26</span> Tue, 19 Dec 2017 07:26:47 +0000 Maksym Kryva 2480 at Teigha Kernel: OdGsView Interactivity Mode <div><h4>Introduction</h4> <p>Interactivity mode is a new feature of the Teigha graphics system. It allows for automatic interruption of the drawing process if the process is taking a long time. Of course, in this case the drawing will not be drawn completely, but this feature can be useful for multiple consecutive redraws when the result of an intermediate redraw is not so important.</p> <h4>Using interactivity mode</h4> <p>Interactivity mode is controlled through the OdGsView API. However, this mode has no effect if the corresponding device does not support it.
The following method of OdGsBaseVectorizeDevice checks whether the device supports this mode:</p> <pre class="hljs"> <code class="cpp">bool supportInteractiveViewMode() const; </code></pre> <p>Also, OdGsBaseVectorizeDevice has a method that allows you to manually change support of interactivity mode:</p> <pre class="hljs"> <code class="cpp">void setSupportInteractiveViewMode( bool ); </code></pre> <p>The OdGsView API provides the following methods for interactivity mode:</p> <ul><li> <p style="font-style: italic">virtual void beginInteractivity( double frameRateInHz )</p> <p>This method enables view interactivity mode. frameRateInHz specifies the desired minimum number of frames per second (FPS). In other words, if you want to limit the time that each redraw operation takes (e.g., “not more than X seconds”), frameRateInHz should be the reciprocal of this time limit (1 / X).</p> </li> <li> <p style="font-style: italic">virtual bool isInInteractivity() const</p> <p>Returns true if this view is in interactivity mode.</p> </li> <li> <p style="font-style: italic">virtual double interactivityFrameRate() const</p> <p>Returns the desired frame rate, which was specified in beginInteractivity().</p> </li> <li> <p style="font-style: italic">virtual void endInteractivity()</p> <p>This method disables view interactivity mode.</p> </li> </ul><h4>Implementation details</h4> <p>When a view is in interactivity mode, each time OdGsBaseVectorizer::beginViewVectorization is called (usually from OdGsDevice::update()), it starts a special timer that will be stopped in OdGsBaseVectorizer::endViewVectorization. Each request to OdGsBaseVectorizer::renderAbort checks this timer. 
If the elapsed time exceeds 1 / interactivityFrameRate(), OdGsBaseVectorizer::renderAbort() returns true and the rendering loop is interrupted.</p> <p>The interactive mode has some limitations:</p> <ul><li style="font-style: italic">Since this mode is based on OdGsBaseVectorizer::renderAbort(), it has no effect if the device does not support renderAbort() functionality.</li> <li style="font-style: italic">Since this mode works on the OdGsNode level, it may have no visual effect if the drawing contains only a few big (long time to draw) nodes (e.g. external references).</li> <li style="font-style: italic">Sometimes a rendering loop cannot be interrupted by renderAbort(), only suppressed. In this case, interactivity mode still works but the resulting frame rate may differ from the preset.</li> </ul><p>Currently Teigha supports interactive mode for the following devices: WinGDI, WinDirectX, and WinOpenGL.</p> <h4>Example</h4> <p>The OdaMfcApp sample application uses interactivity mode for the Orbit command. The console command “interactivity” allows you to disable or enable this mode and to change the desired frame rate:</p> <img alt="image1" data-entity-type="file" data-entity-uuid="a6bc02f2-49e9-4fa5-ae6c-6dc1145e4157" src="/files/inline-images/image002_9.jpg" class="align-center" /><p>Usage of the interactivity mode can be found in TD_DrawingExamplesCommon in EditorObject.cpp. 
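</p>
<p>The frame-rate arithmetic from the implementation details above can be sketched in isolation. This is a standalone illustration in plain C++, not the Teigha API: the desired frame rate is the reciprocal of the per-redraw time budget, and a frame is aborted once the elapsed time exceeds that budget.</p>

```cpp
#include <cassert>

// Frame rate (in Hz) that limits each redraw to the given time budget,
// in seconds: frameRateInHz = 1 / timeLimit.
inline double frameRateForTimeLimit(double timeLimitSec)
{
  return 1.0 / timeLimitSec;
}

// The check performed on every renderAbort() request: interrupt rendering
// once the elapsed time exceeds 1 / frameRate.
inline bool shouldAbortFrame(double elapsedSec, double frameRateInHz)
{
  return elapsedSec > 1.0 / frameRateInHz;
}
```

<p>For example, to cap each redraw at half a second, pass a frame rate of 2 Hz.</p>
<p>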
We use a helper class to enable and disable interactivity mode:</p> <pre class="hljs"> <code class="cpp">class ViewInteractivityMode
{
  bool m_enabled;
  OdGsView* m_pView;
public:
  ViewInteractivityMode( OdRxVariantValue enable, OdRxVariantValue frameRate, OdGsView* pView )
  {
    m_enabled = false;
    m_pView = pView;
    if( !enable.isNull() )
    {
      m_enabled = (bool)(enable);
      if( m_enabled &amp;&amp; !frameRate.isNull() )
      {
        double rate = (double)(( (frameRate.get())-&gt;getDouble() ));
        pView-&gt;beginInteractivity( rate );
      }
    }
  }
  ~ViewInteractivityMode()
  {
    if( m_enabled ) m_pView-&gt;endInteractivity();
  }
};
</code></pre> <p>The constructor of ViewInteractivityMode enables interactivity mode (if required) and the destructor disables it.</p> <p>The function OdEx3dOrbitCmd::execute creates an instance of ViewInteractivityMode before entering its event loop:</p> <pre class="hljs"> <code class="cpp">void OdEx3dOrbitCmd::execute(OdEdCommandContext* pCmdCtx)
{
  OdDbCommandContextPtr pDbCmdCtx(pCmdCtx);
  OdDbDatabasePtr pDb = pDbCmdCtx-&gt;database();
  OdSmartPtr&lt;OdDbUserIO&gt; pIO = pDbCmdCtx-&gt;userIO();

  OdDbObjectPtr pVpObj = pDb-&gt;activeViewportId().safeOpenObject(OdDb::kForWrite);
  OdDbAbstractViewportDataPtr pAVD(pVpObj);
  OdGsView* pView = pAVD-&gt;gsView(pVpObj);

  // There is one special case: layout with enabled 'draw viewports first' mode
  {
    if (!pDb-&gt;getTILEMODE())
    {
      OdDbLayoutPtr pLayout = pDb-&gt;currentLayoutId().openObject();
      if (pLayout-&gt;drawViewportsFirst())
      {
        if (pView-&gt;device()-&gt;viewAt(pView-&gt;device()-&gt;numViews() - 1) == pView)
          pView = pView-&gt;device()-&gt;viewAt(0);
      }
    }
  }

  OdRxVariantValue interactiveMode = (OdRxVariantValue)pCmdCtx-&gt;arbitraryData( OD_T("OdaMfcApp InteractiveMode" ) );
  OdRxVariantValue interactiveFrameRate = (OdRxVariantValue)pCmdCtx-&gt;arbitraryData( OD_T("OdaMfcApp InteractiveFrameRate" ) );
  ViewInteractivityMode mode( interactiveMode, interactiveFrameRate, pView );

  OdStaticRxObject&lt;RTOrbitTracker&gt; tracker;
  for(;;)
  {
    try
    {
      tracker.init(pView, pIO-&gt;getPoint(OD_T("Press ESC or ENTER to exit."),
        OdEd::kInpThrowEmpty|OdEd::kGptNoUCS|OdEd::kGptNoOSnap|OdEd::kGptBeginDrag,
        0, OdString::kEmpty, &amp;tracker));
      pIO-&gt;getPoint(OD_T("Press ESC or ENTER to exit."),
        OdEd::kInpThrowEmpty|OdEd::kGptNoUCS|OdEd::kGptNoOSnap|OdEd::kGptEndDrag,
        0, OdString::kEmpty, &amp;tracker);
      tracker.reset();
    }
    catch(const OdEdCancel&amp;)
    {
      break;
    }
  }
}
</code></pre> <p>When the function ends, the ViewInteractivityMode instance is automatically destroyed, which calls OdGsView::endInteractivity().</p> <p>Below are a few samples. In the first sample, the frame rate is low enough that all entities are shown while performing the Orbit command:</p> <img alt="image2" data-entity-type="file" data-entity-uuid="de60dfe2-086d-4a77-a5e5-63e92bc9ecdb" src="/files/inline-images/image004_3.jpg" class="align-center" /><p>Here the frame rate was increased, so some parts of the drawing disappeared:</p> <img alt="image3" data-entity-type="file" data-entity-uuid="aa11ffd4-5c60-47b5-bd31-322aba25ce91" src="/files/inline-images/image006_1.jpg" class="align-center" /><p>And here the frame rate was increased again, so only a few of the original entities are shown:</p> <img alt="image4" data-entity-type="file" data-entity-uuid="f00f3ec0-07ea-4d1c-a3a9-90e9580d34ed" src="/files/inline-images/image008_1.jpg" class="align-center" /><p> </p> </div> <span><span lang="" about="/user/4492" typeof="schema:Person" property="schema:name" datatype="">Egor Slupko</span></span> <span>Thu, 12/07/2017 - 10:00</span> Thu, 07 Dec 2017 07:00:00 +0000 Egor Slupko 2460 at Frequently Asked: How can I solve a large coordinates problem during geometry display and graphics cache auto-regeneration? Frequently Asked: How can I solve a large coordinates problem during geometry display and graphics cache auto-regeneration? 
<div><p>Vectorization modules based on DirectX/OpenGL graphics APIs ("WinOpenGL.txv", "WinDirectX.txv" and "WinGLES2.txv") don’t render geometry accurately for large coordinates due to hardware limitations. GPUs work with 32-bit floats and don’t know anything about the 64-bit doubles that represent geometry coordinates, so double-&gt;float truncation takes place and causes geometry rendering problems (artifacts) if the geometry is far from the origin.</p> <p>The following example drawing shows double-&gt;float truncation artifacts. The geometry looks angular and broken, like it is attracted to a certain grid:</p> <img alt="image1" data-entity-type="file" data-entity-uuid="0252e92b-bc32-4068-b9ec-ef6378eeb2aa" src="/files/inline-images/image005_7.png" class="align-center" /><p>Vectorizers can fix this rendering problem, but they require geometry cache regeneration to recompute geometry coordinates into a suitable coordinate system. For example, an application can automatically regenerate the geometry cache if an image becomes inaccurate.</p> <p>This method returns true if the geometry requires regeneration:</p> <pre class="hljs"> <code class="cpp">inline bool requireAutoRegen(OdGsView *pView)
{
  OdGsDevice *pDevice = pView-&gt;device();
  if (!pDevice)
    return false;
  OdRxDictionaryPtr pProps = pDevice-&gt;properties();
  if (!pProps.isNull())
  {
    if (pProps-&gt;has(OD_T("RegenCoef")))
    {
      return OdRxVariantValue(pProps-&gt;getAt(OD_T("RegenCoef")))-&gt;getDouble() &gt; 1.;
    }
  }
  return false;
}
</code></pre> <p>The RegenCoef vectorization device property represents how many pixels in the current coordinate system will be affected by the double-&gt;float conversion. 
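</p>
<p>The precision loss itself is easy to reproduce without any rendering: a 32-bit float has a 24-bit significand, so the farther a coordinate is from the origin, the coarser the grid of representable values becomes. This standalone check (plain C++, hypothetical coordinate values) shows the effect:</p>

```cpp
#include <cassert>
#include <cmath>

// Error introduced by squeezing a double coordinate through a 32-bit float,
// as happens when coordinates are handed to the GPU.
inline double truncationError(double coord)
{
  return std::fabs(coord - static_cast<double>(static_cast<float>(coord)));
}
```

<p>Near the origin the error is negligible, but around 10,000,000 drawing units neighboring float values are a whole unit apart, which produces exactly the "attracted to a grid" artifact shown above.</p>
<p>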
An application can check whether the geometry cache is inaccurate before updating the screen and then execute cache regeneration if required:</p> <pre class="hljs"> <code class="cpp">if (requireAutoRegen(pView))
{
  m_pDevice-&gt;invalidate();
  if (m_pDevice-&gt;gsModel())
    m_pDevice-&gt;gsModel()-&gt;invalidate(OdGsModel::kInvalidateAll);
}
</code></pre> <p>In this source code example, m_pDevice is an instance of the OdGsLayoutHelper class.</p> <p>Example drawing after geometry regeneration:</p> <img alt="image2" data-entity-type="file" data-entity-uuid="0fd7a0d4-c11a-49d0-ba11-7cfaa46e6e13" src="/files/inline-images/image006_5.png" class="align-center" /><p> </p> </div> <span><span lang="" about="/user/1811" typeof="schema:Person" property="schema:name" datatype="">Andrew Markovich</span></span> <span>Wed, 11/22/2017 - 09:14</span> Wed, 22 Nov 2017 06:14:48 +0000 Andrew Markovich 1351 at Frequently Asked: How do I disable/enable lights inside blocks? Frequently Asked: How do I disable/enable lights inside blocks? <div><p>By default, displaying light sources inside block inserts is enabled and Teigha rendering takes them into account. But large drawings can contain many light sources, which can seriously degrade rendering performance. For example, an architectural drawing can contain a building with rooms that contain lamps, and each lamp can be created as a block insert and can contain a light source. 
In this case, an application can disable rendering of light sources from block inserts to increase rendering performance, so that only light sources placed at the top level of the hierarchy are displayed.</p> <p>For .dwg databases, you can use the “LIGHTSINBLOCKS” system variable to enable/disable the display of light sources inside blocks:</p> <pre class="hljs"> <code class="cpp">void disableLightsInBlocks(OdDbDatabase *pDb)
{
  pDb-&gt;setLIGHTSINBLOCKS(0);
}
</code></pre> <p>Programmatically, an application can control the display of light sources inside blocks for parts of the graphics cache using the OdGsModel::setEnableLightsInBlocks method:</p> <pre class="hljs"> <code class="cpp">void disableLightsInBlocks(OdGsModel *pGsModel)
{
  pGsModel-&gt;setEnableLightsInBlocks(false);
}
</code></pre> <p>Example of a drawing with four lights inside block inserts:</p> <img alt="image1" data-entity-type="file" data-entity-uuid="b86139bb-0a1f-496f-a75d-e30deab25930" src="/files/inline-images/image003_12.png" class="align-center" /><p>Example of the same drawing with disabled display of lights inside block inserts:</p> <img alt="image2" data-entity-type="file" data-entity-uuid="1bcfc041-f301-4293-b7d2-0145d51555a7" src="/files/inline-images/image004_10.png" class="align-center" /><p> </p> </div> <span><span lang="" about="/user/1811" typeof="schema:Person" property="schema:name" datatype="">Andrew Markovich</span></span> <span>Tue, 11/07/2017 - 12:33</span> Tue, 07 Nov 2017 09:33:40 +0000 Andrew Markovich 1324 at Frequently Asked: How can I modify drawn colors from my application during display? Frequently Asked: How can I modify drawn colors from my application during display? <div><p>Sometimes rendering applications need to modify drawn colors during rendering without a lengthy geometry cache update. 
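</p>
<p>The color math for such a conversion can be tried standalone before wiring anything into the vectorizer. The helper below is hypothetical (plain C++ with an assumed 0xAABBGGRR packing, not the ODCOLORREF macros used later in this article) and inverts the R, G, and B components while preserving alpha:</p>

```cpp
#include <cassert>
#include <cstdint>

// Invert the RGB components of a packed color, keep the alpha byte.
// The 0xAABBGGRR channel layout is an assumption for illustration.
inline std::uint32_t invertRgb(std::uint32_t color)
{
  return (color & 0xFF000000u) | (~color & 0x00FFFFFFu);
}
```

<p>Inverting twice restores the original color, which is a handy property when toggling an inverted-palette display mode.</p>
<p>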
For example, this functionality is helpful to draw graphics using a corrected color palette, inverted colors, grayscale or monochrome rendering modes, or gamma, brightness, and contrast correction. This functionality is available for all primary Teigha vectorization modules ("WinGDI.txv", "WinOpenGL.txv", "WinDirectX.txv" and "WinGLES2.txv").</p> <p>Include the color converter interface:</p> <pre class="hljs"> <code class="cpp">// 1) Include color conversion callback interface
#include "Teigha/Kernel/Extensions/ExRender/ExColorConverterCallback.h"
</code></pre> <p>Implement your own color conversion function:</p> <pre class="hljs"> <code class="cpp">// 2) Define your own color conversion callback method.
//    In this example we invert the R, G and B color components.
class UserDefColorConversionCallback : public OdColorConverterCallback
{
  virtual ODCOLORREF convert(ODCOLORREF originalColor)
  {
    return ODRGBA(~ODGETRED(originalColor), ~ODGETGREEN(originalColor),
                  ~ODGETBLUE(originalColor), ODGETALPHA(originalColor));
  }
};
</code></pre> <p>Attach the color converter to a Teigha vectorization module using OdGsDevice properties:</p> <pre class="hljs"> <code class="cpp">// 3) Construct and attach the user-defined color conversion callback to a
//    Teigha vectorization device using device properties.
void attachColorConverter(OdGsDevice *pGsDevice)
{
  if (pGsDevice &amp;&amp; pGsDevice-&gt;properties() &amp;&amp; pGsDevice-&gt;properties()-&gt;has(OD_T("ColorConverter")))
  {
    OdColorConverterCallbackPtr pCC = OdRxObjectImpl&lt;UserDefColorConversionCallback&gt;::createObject();
    pGsDevice-&gt;properties()-&gt;putAt(OD_T("ColorConverter"), pCC);
  }
}
</code></pre> <p>Example of a drawing before color conversion:</p> <img alt="image1" data-entity-type="file" data-entity-uuid="1a45aec8-51f7-4763-a901-366e885d4883" src="/files/inline-images/image001_16.png" class="align-center" /><p>Example of the same drawing after color conversion using the color inversion function from the source code example:</p> <img alt="image2" data-entity-type="file" data-entity-uuid="6caa3f49-8e03-4202-8441-5df1d2478f3e" src="/files/inline-images/image002_9.png" class="align-center" /><p> </p> </div> <span><span lang="" about="/user/1811" typeof="schema:Person" property="schema:name" datatype="">Andrew Markovich</span></span> <span>Tue, 10/24/2017 - 16:52</span> Tue, 24 Oct 2017 13:52:31 +0000 Andrew Markovich 1298 at Graphic System Overlays Graphic System Overlays <div><h4>Introduction</h4> <p>Complex graphic scenes with a large number of elements are sometimes drawn slowly. In this case, dynamic graphic elements (such as cursors, grip/snap points, selection rectangles, user interface elements, and so on) are drawn slowly too, because to combine them correctly with the scene, the renderer must redraw all underlying scene elements. This problem can be solved using overlay buffers, where the main rendered scene and dynamic graphic elements are drawn to separate buffers. 
Only the buffer that contains changed geometry will be redrawn and finally combined with the unchanged buffers during screen output.</p> <img alt="image1" data-entity-type="file" data-entity-uuid="30fb2cdc-1145-478a-be94-60b243f4822c" src="/files/inline-images/image001_edited_0.jpg" class="align-center" /><p>Overlay buffers can be used by the graphics system if the vectorization module supports them. Currently overlay buffers are supported only by the WinGLES2.txv Teigha Vectorization Module.</p> <h4>Usage example</h4> <p>Since command trackers are among the most frequently used dynamic graphic elements, this usage example shows a simple implementation of a command tracker. A command tracker is a simple custom drawable which can be used inside Teigha commands to provide additional visual information. First we’ll show the entire tracker listing, and after that we’ll describe the most important parts of the code.</p> <h4>Command tracker code listing</h4> <pre class="hljs"> <code class="cpp">class ExRectFrame : public OdEdPointDefTracker, public OdGiDrawableImpl&lt;&gt;
{
  OdGsModelPtr m_pModel; // Overlay model.
  mutable OdGePoint3d m_pts[4];

  virtual OdUInt32 subSetAttributes(OdGiDrawableTraits* ) const ODRX_OVERRIDE
  {
    return kDrawableIsAnEntity;
  }
  virtual bool subWorldDraw(OdGiWorldDraw* pWd) const ODRX_OVERRIDE
  {
    return false;
  }
  virtual void subViewportDraw(OdGiViewportDraw* pVd) const ODRX_OVERRIDE
  {
    const OdGiViewport&amp; vp = pVd-&gt;viewport();
    OdGeMatrix3d x = vp.getWorldToEyeTransform();
    OdGePoint3d p0 = x * m_pts[0];
    OdGePoint3d p2 = x * m_pts[2];
    m_pts[1].x = p0.x; m_pts[3].x = p2.x;
    m_pts[1].y = p2.y; m_pts[3].y = p0.y;
    m_pts[1].z = m_pts[3].z = p2.z;
    x = vp.getEyeToWorldTransform();
    m_pts[1].transformBy(x);
    m_pts[3].transformBy(x);
    OdGiDrawFlagsHelper _dfh(pVd-&gt;subEntityTraits(), OdGiSubEntityTraits::kDrawNoPlotstyle);
    pVd-&gt;subEntityTraits().setFillType(kOdGiFillAlways);
    pVd-&gt;subEntityTraits().setTransparency(OdCmTransparency(0.5));
    if (m_pts[3].x &lt; m_pts[1].x)
      pVd-&gt;subEntityTraits().setColor(3);
    else
      pVd-&gt;subEntityTraits().setColor(5);
    pVd-&gt;geometry().polygon(4, m_pts);
  }
  OdGePoint3d basePoint() const
  {
    return m_pts[0];
  }
public:
  static OdEdPointTrackerPtr create(const OdGePoint3d&amp; base)
  {
    OdEdPointTrackerPtr pRes = OdRxObjectImpl&lt;ExRectFrame, OdEdPointTracker&gt;::createObject();
    static_cast&lt;ExRectFrame*&gt;(pRes.get())-&gt;m_pts[0] = base;
    return pRes;
  }
  void setValue(const OdGePoint3d&amp; p)
  {
    m_pts[2] = p;
    if (!m_pModel.isNull())
    {
      // Update drawable cache.
      OdGiDrawable* pParent = NULL;
      m_pModel-&gt;onModified(this, pParent);
    }
  }
  int addDrawables(OdGsView* pView)
  {
    m_pModel = pView-&gt;device()-&gt;createModel();
    if (!m_pModel.isNull()) // Setup model overlay.
      m_pModel-&gt;setRenderType(OdGsModel::kDirect);
    pView-&gt;add(this, m_pModel);
    return 1;
  }
  void removeDrawables(OdGsView* pView)
  {
    pView-&gt;erase(this);
  }
};
</code></pre> <h4>Enable overlay buffer usage for custom drawables</h4> <p>First, create a separate graphics system model object:</p> <pre class="hljs"> <code class="cpp">m_pModel = pView-&gt;device()-&gt;createModel();
</code></pre> <p>This separate model object is required to enable the separate overlay buffer:</p> <pre class="hljs"> <code class="cpp">if (!m_pModel.isNull()) // Setup model overlay.
  m_pModel-&gt;setRenderType(OdGsModel::kDirect);
</code></pre> <p>The main scene is typically rendered to the “kMain” overlay buffer. Each overlay type has its own specifics and can be used for different needs:</p> <ul><li style="font-style: italic">kUserBg1 - Render under the main scene without depth buffer usage. Useful to render backgrounds.</li> <li style="font-style: italic">kUserBg2 - Render under the main scene with own depth buffer. Useful to render sprites and 3D objects under the main scene.</li> <li style="font-style: italic">kUserBg3 - Render under the main scene with main depth buffer usage. Useful to render grids.</li> <li style="font-style: italic">kMain - Main graphic scene overlay.</li> <li style="font-style: italic">kSprite - Render on top of the main scene with own depth buffer. 
Useful to render sprites and 3D objects on top of the main scene.</li> <li style="font-style: italic">kDirect - Render on top of the main scene directly.</li> <li style="font-style: italic">kHighlight - Render on top of the main scene directly without depth buffer.</li> <li style="font-style: italic">kHighlightSelection - Render on top of the main scene directly without depth buffer, but with usage of highlighting style.</li> <li style="font-style: italic">kDirectTopmost - Render on top of the main scene without depth buffer.</li> <li style="font-style: italic">kContrast - Render on top of main scene without depth buffer, but with usage of contrast style. Useful for UI elements.</li> <li style="font-style: italic">kUserFg1 - Render on top of the main scene with main depth buffer usage.</li> <li style="font-style: italic">kUserFg2 - Render on top of the main scene with own depth buffer usage.</li> <li style="font-style: italic">kUserFg3 - Render on top of all other overlays without usage of depth buffer.</li> </ul><p>Overlay buffers are rendered in the order in which they are specified in the table, so you can choose the rendering order of custom elements using different overlays with similar properties. If the overlay doesn’t use a depth buffer, the rendering order of elements inside the overlay is significant, since the Z coordinate of the geometry is not taken into account. Overlay buffers that use the main depth buffer will be combined like those drawn onto the same overlay (scene depth information will be taken into account during the merge). Overlays with their own depth buffer are useful to render 3D objects which will be visually drawn on top of or under the main graphics scene. 
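</p>
<p>Assuming the overlay enumerators are declared in the documented rendering order, the "later overlays are composed on top" rule can be expressed as a plain comparison. The numeric values below are illustrative, not the actual OdGsModel definitions:</p>

```cpp
#include <cassert>

// Illustrative mirror of the documented overlay rendering order.
enum OverlayType
{
  kUserBg1 = 1, kUserBg2, kUserBg3, kMain, kSprite, kDirect, kHighlight,
  kHighlightSelection, kDirectTopmost, kContrast, kUserFg1, kUserFg2, kUserFg3
};

// An overlay is composed on top of another if it is rendered later.
inline bool renderedOnTopOf(OverlayType a, OverlayType b)
{
  return a > b;
}
```

<p>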
The following table shows specific overlay properties in a more convenient way:</p> <table border="1" cellpadding="0" cellspacing="0" style="width: 736px;"><tbody><tr style="height:32px;"><td style="width:132px;"> <p align="center"><strong><em>Overlay Type</em></strong></p> </td> <td style="width: 124px;"> <p align="center"><strong><em>Rendering Order</em></strong></p> </td> <td style="width: 96px;"> <p align="center"><strong><em>Depth Buffer</em></strong></p> </td> <td style="width: 127px;"> <p align="center"><strong><em>Direct Rendering</em></strong></p> </td> <td style="width: 130px;"> <p align="center"><strong><em>Highlighting Style</em></strong></p> </td> <td style="width: 116px;"> <p align="center"><strong><em>Contrast Style</em></strong></p> </td> </tr><tr style="height:32px;"><td style="width:132px;"> <p align="center">kUserBg1</p> </td> <td style="width: 124px;"> <p align="center">1</p> </td> <td style="width: 96px;"> <p align="center">-</p> </td> <td style="width: 127px;"> <p align="center">-</p> </td> <td style="width: 130px;"> <p align="center">-</p> </td> <td style="width: 116px;"> <p align="center">-</p> </td> </tr><tr style="height:32px;"><td style="width:132px;"> <p align="center">kUserBg2</p> </td> <td style="width: 124px;"> <p align="center">2</p> </td> <td style="width: 96px;"> <p align="center">Own</p> </td> <td style="width: 127px;"> <p align="center">-</p> </td> <td style="width: 130px;"> <p align="center">-</p> </td> <td style="width: 116px;"> <p align="center">-</p> </td> </tr><tr style="height:32px;"><td style="width:132px;"> <p align="center">kUserBg3</p> </td> <td style="width: 124px;"> <p align="center">3</p> </td> <td style="width: 96px;"> <p align="center">Main</p> </td> <td style="width: 127px;"> <p align="center">-</p> </td> <td style="width: 130px;"> <p align="center">-</p> </td> <td style="width: 116px;"> <p align="center">-</p> </td> </tr><tr style="height:32px;"><td style="width:132px;"> <p align="center">kMain</p> </td> <td 
style="width: 124px;"> <p align="center">4</p> </td> <td style="width: 96px;"> <p align="center">Main</p> </td> <td style="width: 127px;"> <p align="center">-</p> </td> <td style="width: 130px;"> <p align="center">-</p> </td> <td style="width: 116px;"> <p align="center">-</p> </td> </tr><tr style="height:32px;"><td style="width:132px;"> <p align="center">kSprite</p> </td> <td style="width: 124px;"> <p align="center">5</p> </td> <td style="width: 96px;"> <p align="center">Own</p> </td> <td style="width: 127px;"> <p align="center">-</p> </td> <td style="width: 130px;"> <p align="center">-</p> </td> <td style="width: 116px;"> <p align="center">-</p> </td> </tr><tr style="height:32px;"><td style="width:132px;"> <p align="center">kDirect</p> </td> <td style="width: 124px;"> <p align="center">6</p> </td> <td style="width: 96px;"> <p align="center">-</p> </td> <td style="width: 127px;"> <p align="center">+</p> </td> <td style="width: 130px;"> <p align="center">-</p> </td> <td style="width: 116px;"> <p align="center">-</p> </td> </tr><tr style="height:22px;"><td style="width:132px;"> <p align="center">kHighlight</p> </td> <td style="width:124px;"> <p align="center">7</p> </td> <td style="width: 96px;"> <p align="center">-</p> </td> <td style="width:127px;"> <p align="center">+</p> </td> <td style="width:130px;"> <p align="center">-</p> </td> <td style="width:116px;"> <p align="center">-</p> </td> </tr><tr style="height:22px;"><td style="width:132px;"> <p align="center">kHighlightSelection</p> </td> <td style="width:124px;"> <p align="center">8</p> </td> <td style="width: 96px;"> <p align="center">-</p> </td> <td style="width:127px;"> <p align="center">+</p> </td> <td style="width:130px;"> <p align="center">+</p> </td> <td style="width:116px;"> <p align="center">-</p> </td> </tr><tr style="height:22px;"><td style="width:132px;"> <p align="center">kDirectTopmost</p> </td> <td style="width:124px;"> <p align="center">9</p> </td> <td style="width: 96px;"> <p 
align="center">-</p> </td> <td style="width:127px;"> <p align="center">-</p> </td> <td style="width:130px;"> <p align="center">-</p> </td> <td style="width:116px;"> <p align="center">-</p> </td> </tr><tr style="height:22px;"><td style="width:132px;"> <p align="center">kContrast</p> </td> <td style="width:124px;"> <p align="center">10</p> </td> <td style="width: 96px;"> <p align="center">-</p> </td> <td style="width:127px;"> <p align="center">-</p> </td> <td style="width:130px;"> <p align="center">-</p> </td> <td style="width:116px;"> <p align="center">+</p> </td> </tr><tr style="height:22px;"><td style="width:132px;"> <p align="center">kUserFg1</p> </td> <td style="width:124px;"> <p align="center">11</p> </td> <td style="width: 96px;"> <p align="center">Main</p> </td> <td style="width:127px;"> <p align="center">-</p> </td> <td style="width:130px;"> <p align="center">-</p> </td> <td style="width:116px;"> <p align="center">-</p> </td> </tr><tr style="height:22px;"><td style="width:132px;"> <p align="center">kUserFg2</p> </td> <td style="width:124px;"> <p align="center">12</p> </td> <td style="width: 96px;"> <p align="center">Own</p> </td> <td style="width:127px;"> <p align="center">-</p> </td> <td style="width:130px;"> <p align="center">-</p> </td> <td style="width:116px;"> <p align="center">-</p> </td> </tr><tr style="height:22px;"><td style="width:132px;"> <p align="center">kUserFg3</p> </td> <td style="width:124px;"> <p align="center">13</p> </td> <td style="width: 96px;"> <p align="center">-</p> </td> <td style="width:127px;"> <p align="center">-</p> </td> <td style="width:130px;"> <p align="center">-</p> </td> <td style="width:116px;"> <p align="center">-</p> </td> </tr></tbody></table><p>Overlay buffers that are marked with “Direct Rendering” ability can be drawn by the renderer directly onto the screen if this mode is supported (skipping intermediate buffer caching).</p> <p>After graphics system model creation and configuration, renderable objects (drawable) 
can be added into the graphics system view together with the configured model:</p> <pre class="hljs"> <code class="cpp">pView-&gt;add(this, m_pModel);
</code></pre> <p>Don’t forget that the graphics cache will be created for this drawable if it is added into the view together with the graphics system model pointer. If the drawable state is modified and must be redrawn, manually inform the graphics system that the graphics cache must be updated for this drawable:</p> <pre class="hljs"> <code class="cpp">if (!m_pModel.isNull())
{
  // Update drawable cache.
  OdGiDrawable* pParent = NULL;
  m_pModel-&gt;onModified(this, pParent);
}
</code></pre> <h4>Checking custom tracker rendering</h4> <p>To check custom tracker rendering, use any graphics example that supports mouse input and a user command context:</p> <pre class="hljs"> <code class="cpp">bool OdExEditorObject::OnMouseLeftButtonClick(int x, int y)
{
  OdGePoint3d pt = toEyeToWorld(x, y);
  OdDbUserIO* pIO = m_pCmdCtx-&gt;dbUserIO();
  try
  {
    OdEdPointTrackerPtr pTracker = ExRectFrame::create(pt);
    OdGePoint3d pts[2] = { pt, pIO-&gt;getPoint(OD_T("Specify opposite corner:"),
      OdEd::kGptNoLimCheck|OdEd::kGptNoUCS, NULL, OdString::kEmpty, pTracker) };
  }
  catch( const OdError&amp; )
  {
    return false;
  }
  catch( ... )
  {
    throw;
  }
  return true;
}
</code></pre> <p>This example simply uses the database command context to get an input rectangle. The tracker draws a transparent green rectangle if the second input point is placed to the left of the first input point and draws a transparent blue rectangle otherwise.</p> <p>To check the rendering speed, use a massive drawing with a large number of elements. The following pictures were generated with a file containing one million color lines. 
First, try to invoke the tracker without the graphics system model or with the graphics system model configured for the main scene overlay buffer:</p> <img alt="image2" data-entity-type="file" data-entity-uuid="6c622528-bdda-428c-9bc3-6f053a052c21" src="/files/inline-images/image002_0.gif" class="align-center" /><p>The rectangle redraw is visibly slow, so working with large files is difficult. Now enable usage of a separate overlay:</p> <img alt="image3" data-entity-type="file" data-entity-uuid="2a3cd748-a14f-42e9-b75e-34e695fd8282" src="/files/inline-images/image003.gif" class="align-center" /><p>The rectangle redraw becomes fast if overlay buffers are enabled and supported by the vectorization module. For editor applications, it is recommended to move most of the dynamically changeable graphic elements to overlay buffers; this increases application usability while editing massive drawings.</p> <h4>Overlay behavior with non-compatible vectorization modules</h4> <p>Even if a vectorization module doesn’t support overlay buffers, usage of separate graphics system models with different rendering types can influence rendering results. First, the graphics system sorts drawables in overlay order. This means that overlay models can be used to modify the rendering order. For example, grip points and other elements that must be rendered on top of the main scene can be placed in a model with a render type larger than the kMain render type. Secondary vectorization modules can disable usage of the depth buffer for overlays that require it. 
This means that drawables placed in a model with, for example, the kDirect render type will be drawn on top of the main scene graphics even if the drawables are placed spatially under the main scene graphics.</p> <h4>Conclusion</h4> <p>Previously, to configure the rendering order of elements, an application could create separate OdGsView objects with parameters similar to those of the underlying OdGsView and draw them on top of or under it. This mechanism works well, but requires more source code than what is required for overlays. And the main advantage of overlay buffers is fast rendering of dynamically changeable geometry on top of or under the main graphics scene.</p> <p>This feature is useful not only for command trackers, cursors, user interface elements and grip/snap points, but also for fast highlighting (or hovering) and for animated scene elements. The WinGLES2.txv vectorization module uses OpenGL frame buffers on the GPU to fully support the overlay buffers feature, so it works even on very old video cards. Overall, the overlay buffers feature will be helpful for modern editing applications with creative user interfaces.</p> </div> <span><span lang="" about="/user/1811" typeof="schema:Person" property="schema:name" datatype="">Andrew Markovich</span></span> <span>Thu, 08/03/2017 - 14:32</span> Thu, 03 Aug 2017 11:32:08 +0000 Andrew Markovich 1172 at Hatch Types and Exporting to PDF Hatch Types and Exporting to PDF <div><p>Sometimes ODA members ask about exporting hatches and why a resulting .pdf file is very slow to open or why the quality of exported hatches is poor. 
This article explains a few things about hatches and exporting them to PDF.</p> <p>There are three main types of hatches in .dwg files:</p> <img alt="image1" data-entity-type="file" data-entity-uuid="2011b9d7-1945-4761-8d10-644f0e3a5290" src="/files/inline-images/image002_4.jpg" class="align-center" /><p>Solid hatches have a solid fill, gradients have one or two gradient colors, and other hatches are filled with a pattern.</p> <p>There are three ways to export hatches to PDF:</p> <pre class="hljs"> <code class="cpp">enum ExportHatchesType
{
  kBitmap = 0,  //Exports hatches as a bitmap.
  kDrawing = 1, //Exports hatches as a drawing (vectorizer).
  kPdfPaths = 2 //Exports hatches as a PDF path.
};</code></pre> <p>The kBitmap type just creates a bitmap picture from the hatch and puts it into the .pdf file. kDrawing and kPdfPaths both use a vectorizer to export, with an important difference between them: kDrawing exports hatches as a polygon set, while kPdfPaths exports only the outer loop of the hatch and fills that loop with color. This means that kPdfPaths can only be applied to solid hatches; for gradient and pattern hatches, export automatically falls back to the kBitmap way of exporting. kBitmap and kDrawing can be used for exporting all hatch types.</p> <p>Regarding the quality of hatches in .pdf files: because kDrawing and kPdfPaths export hatches via a vectorizer using packs of polygons or outer hatch loops for rendering, the quality of the resulting .pdf file will be determined by the vector resolution parameter:</p> <pre class="hljs"> <code class="cpp">void setGeomDPI(OdUInt16 dpi); </code></pre> <p>Other quality parameters do not impact these two ways of exporting hatches. The setGeomDPI parameter is also meaningful for the bitmap export type when a hatch has a complicated border. 
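</p>
<p>The fallback rule described above (kPdfPaths silently degrading to kBitmap for non-solid hatches) can be captured in a small helper. The HatchKind enum is hypothetical; the ExportHatchesType values match the listing above:</p>

```cpp
#include <cassert>

enum ExportHatchesType { kBitmap = 0, kDrawing = 1, kPdfPaths = 2 };

// Hypothetical classification of the three hatch types discussed above.
enum HatchKind { kSolidHatch, kGradientHatch, kPatternHatch };

// kPdfPaths applies to solid hatches only; for gradient and pattern hatches
// the export automatically falls back to kBitmap.
inline ExportHatchesType effectiveExportType(ExportHatchesType requested, HatchKind kind)
{
  if (requested == kPdfPaths && kind != kSolidHatch)
    return kBitmap;
  return requested;
}
```

<p>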
When exporting hatches as a bitmap, the quality can also be customized using the following parameter:</p> <pre class="hljs"> <code class="cpp">void setHatchDPI(OdUInt16 dpi); </code></pre> <p>This parameter determines the resolution of the resulting bitmap image.</p> <p>Hopefully this article helps you export hatches correctly and avoid errors.</p> </div> <span><span lang="" about="/user/2590" typeof="schema:Person" property="schema:name" datatype="">Sergey Sherstnev</span></span> <span>Tue, 07/25/2017 - 21:10</span> Tue, 25 Jul 2017 18:10:46 +0000 Sergey Sherstnev 1158 at Teigha Kernel: OdGsModel invalidation methods Teigha Kernel: OdGsModel invalidation methods <div><p>Teigha’s graphics system allows graphical objects to be cached to increase performance. However, there are conditions under which the quality of the cached data becomes too rough (for example, during zooming), so Teigha provides several methods of invalidating (regenerating) this data. This <a href="">video</a> illustrates the different invalidation (regeneration) methods in Teigha.</p> <p><iframe allowfullscreen="" frameborder="0" height="315" src="" width="560"></iframe></p> </div> <span><span lang="" about="/user/4492" typeof="schema:Person" property="schema:name" datatype="">Egor Slupko</span></span> <span>Fri, 05/26/2017 - 12:12</span> Fri, 26 May 2017 09:12:59 +0000 Egor Slupko 1134 at Teigha Cloud: Implementing Clients on Mobile Platforms Teigha Cloud: Implementing Clients on Mobile Platforms <div><p>When CAD developers create a mobile client for Teigha Cloud, they can encounter platform-specific peculiarities. This article offers suggestions for developers who want to design their own client on the iOS and Android platforms.</p> <h4>Networking</h4> <p>The API of an HTTP client varies across platforms. The main requirement for clients is support for long-poll requests.
The Teigha Cloud server uses a long-poll mechanism for sending graphical data to clients.</p> <p>iOS has NSURLSession and NSURLConnection for HTTP/HTTPS requests. Android has UrlConnection and NetworkFragment for these purposes. However, it is recommended to use wrappers around the default clients (see below).</p> <h4>OpenGL screen</h4> <p>First, the client renders with OpenGL ES 2.0, so your app must support GL ES2, and the SDK of the mobile OS must provide tools for working with it.</p> <p>iOS has GLKViewController and GLKView, located in GLKit, for working with GL ES2. GLKViewController contains an embedded GLKView (see <a href="">here</a>).</p> <p>Android has GLSurfaceView and GLSurfaceView.Renderer. GLSurfaceView contains methods for configuring the screen, such as the depth and color configuration and the screen update policy, and GLSurfaceView.Renderer contains the rendering configuration (see <a href="">here</a> and <a href="">here</a>).</p> <h4>Recommendations</h4> <p>The following libraries are recommended for your mobile app:<br /> • Alamofire (iOS): REST client that supports asynchronous work and long-polling.<br /> • Haneke (iOS): Tool that helps download previews of uploaded files.<br /> • JASON (iOS): A fast JSON parser.<br /> • AsyncHttpClient (Android): Asynchronous REST client that supports background work and long-polling.<br /> • Picasso (Android): Tool to download previews of uploaded files.<br /> • Moshi (Android): A JSON parser.</p> </div> <span><span lang="" about="/user/3203" typeof="schema:Person" property="schema:name" datatype="">Maksym Kryva</span></span> <span>Tue, 05/23/2017 - 21:22</span> Tue, 23 May 2017 18:22:52 +0000 Maksym Kryva 1133 at Adding an Alpha Channel to a Raster Image Adding an Alpha Channel to a Raster Image <div><p>Previously, Teigha developers could add an alpha channel to a raster image by using the OdGiRasterImage::convert method and subsequently cutting off the background color.
However, the new OdGiRasterImage wrapper OdGiRasterImageAlphaChannelAdder simplifies this task.</p> <h3>Creating an object</h3> <p>Before we can create an instance of OdGiRasterImageAlphaChannelAdder, we must obtain an instance of the original OdGiRasterImage. There are several ways to get it. For example, if the original image is a file, we can use the Teigha raster service to load it:</p> <p> </p> <pre class="hljs-pre"> <code class="cpp">OdRxRasterServicesPtr pRasSvcs = odrxDynamicLinker()-&gt;loadApp(RX_RASTER_SERVICES_APPNAME, false); if (pRasSvcs.isNull()) // Check that the raster services module loaded correctly throw OdError(eNullPtr); OdGiRasterImagePtr pOriginalImage = pRasSvcs-&gt;loadRasterImage(inputFileName); if (pOriginalImage.isNull()) // Check that the raster image loaded correctly throw OdError(eNullPtr);</code></pre> <p> </p> <p>If we have an OdGsDevice from which we can take the raster image, use the following code:</p> <pre class="hljs-pre"> <code class="cpp">OdGiRasterImagePtr pOriginalImage = OdGiRasterImagePtr(pDevice-&gt;properties()-&gt;getAt(L"RasterImage")); if (pOriginalImage.isNull())       throw OdError(eNullPtr);</code> </pre> <p>After we obtain an instance of the original image, create an instance of OdGiRasterImageAlphaChannelAdder:</p> <pre class="hljs-pre"> <code class="cpp">OdGiRasterImagePtr pImage = OdGiRasterImageAlphaChannelAdder::createObject( pOriginalImage, CutOffColor, CutOffColorThreshold ); if (pImage.isNull()) throw OdError(eNullPtr);</code> </pre> <p>where CutOffColor is the color of the pixels to which the alpha value 0 will be assigned, and CutOffColorThreshold is a threshold value for alpha interpolation (described below).</p> <h3>Working with OdGiRasterImageAlphaChannelAdder</h3> <p>We can work with an instance of OdGiRasterImageAlphaChannelAdder as a usual OdGiRasterImage.
For example, we can save it using the Teigha raster service:</p> <p> </p> <pre class="hljs-pre"> <code class="cpp">pRasSvcs-&gt;saveRasterImage( pImage, outputFileName );</code></pre> <p> </p> <p>We can also access image pixels using the OdGiRasterImage::scanLines method. However, because OdGiRasterImageAlphaChannelAdder does not store a compatible pixel array, direct access is not allowed, and</p> <pre class="hljs-pre"> <code class="cpp">const OdUInt8 *pPixels = pImage-&gt;scanLines();</code> </pre> <p>returns a NULL pointer. Instead of direct access, copy the pixel array into an intermediate user-defined array:</p> <pre class="hljs-pre"> <code class="cpp">OdUInt8Array scanLine; scanLine.resize(pImage-&gt;scanLineSize()); pImage-&gt;scanLines(scanLine.asArrayPtr(), 0);</code> </pre> <h3>Using pixel cut colors and threshold values</h3> <p>As an example, let’s use the following raster image with different gray rectangles and a white background:</p> <img alt="Rectangles 1" data-entity-type="file" data-entity-uuid="947291b1-0f77-4185-8bda-866921030b38" src="/files/inline-images/image001_0.png" class="align-center" /><p>Create OdGiRasterImageAlphaChannelAdder with a white cutoff color and zero threshold and save it to a file:</p> <pre class="hljs-pre"> <code class="cpp">OdRxRasterServicesPtr pRasSvcs = odrxDynamicLinker()-&gt;loadApp(RX_RASTER_SERVICES_APPNAME, false); if (pRasSvcs.isNull()) // Check that the raster services module loaded correctly       throw OdError(eNullPtr); OdGiRasterImagePtr pOriginalImage = pRasSvcs-&gt;loadRasterImage(inputFileName); if (pOriginalImage.isNull()) // Check that the raster image loaded correctly       throw OdError(eNullPtr); OdGiRasterImagePtr pImage = OdGiRasterImageAlphaChannelAdder::createObject( pOriginalImage, ODRGB( 255, 255, 255 ) ); if (pImage.isNull())       throw OdError(eNullPtr); pRasSvcs-&gt;saveRasterImage( pImage, outputFileName );</code> </pre> <p>The result is:</p> <img alt="Rectangles 2" data-entity-type="file"
data-entity-uuid="4fb6d74b-1915-4e96-b2af-a00e420acc5f" src="/files/inline-images/image003_0.png" class="align-center" /><p><br /> All white pixels now have zero alpha values.<br /> We can use the threshold value to make a smooth cut. If the threshold is not zero, each pixel whose color lies in the interval [Cut Color Component Value – threshold; Cut Color Component Value + threshold] is assigned an alpha value interpolated linearly on that interval: the cut color itself gets alpha 0, and colors at a distance of threshold from the cut color get alpha 255.<br /> So, if we set the threshold to 20, we obtain the following result:<br />  </p> <img alt="Rectangles 3" data-entity-type="file" data-entity-uuid="3c04a9d0-bb50-44fb-967e-1f144e3ca55d" src="/files/inline-images/image005_0.png" class="align-center" /><p><br /> All white pixels are still transparent, and pixels of the last rectangle become partially transparent.<br /> If we increase the threshold value to 40, the last rectangle becomes more transparent and the next-to-last rectangle becomes partially transparent:</p> <img alt="Rectangles 4" data-entity-type="file" data-entity-uuid="b9958529-0c85-4f7b-b402-b81bb093c11b" src="/files/inline-images/image007_0.png" class="align-center" /><p> </p> <h3>Conclusion</h3> <p> OdGiRasterImageAlphaChannelAdder gives Teigha developers a simple way to add an alpha channel to a raster image.</p> </div> <span><span lang="" about="/user/4492" typeof="schema:Person" property="schema:name" datatype="">Egor Slupko</span></span> <span>Wed, 05/17/2017 - 21:34</span> Wed, 17 May 2017 18:34:24 +0000 Egor Slupko 1123 at Planar clipping sections generation for custom entities Planar clipping sections generation for custom entities <div><p>The Teigha vectorization framework provides the ability to clip geometry inside a rendered scene using the <span class="hljs-title">OdGiOrthoClipperEx</span> conveyor node, which is always available in the default geometry vectorization conveyor.
The <span class="hljs-title">OdGiOrthoClipperEx</span> conveyor node also provides the ability to generate planar geometry sections, so any application can invoke planar section generation without creating additional objects or modifying the geometry vectorization conveyor. This article describes the simplest way to invoke planar clipping section generation for native and custom database entities using the Teigha vectorization framework API.</p> <h2>Sections generation support for custom entity</h2> <h3>Custom database entity creation</h3> <p>As an example, we will use a simple custom entity with a minimal implementation that draws a dodecahedron. The main thing that must be done to enable section generation for an entity is the following call:</p> <pre class="hljs-pre"> <code class="cpp">// Enable sections generation pWd-&gt;subEntityTraits().setSectionable(true);</code></pre> <p style="margin-top:26px">This call informs the vectorization framework that sections can be generated for the geometry primitives that follow, such as shells, polygons, or meshes.</p> <p>Now we can attach our custom dodecahedron entity to the working database:</p> <pre class="hljs-pre"> <code class="cpp">void attachDodecahedronEntity(OdDbDatabase *pDb) { OdSmartPtr&lt;DodecahedronEntity&gt; pEnt = DodecahedronEntity::createObject(); pEnt-&gt;setDatabaseDefaults(pDb); pEnt-&gt;setValues(OdGePoint3d::kOrigin, 10.0); OdDbBlockTableRecord::cast(pDb-&gt;getActiveLayoutBTRId().openObject(OdDb::kForWrite))-&gt;appendOdDbEntity(pEnt); }</code></pre> <p style="margin-top:26px">If we render the result (our database contains no other entities), we get a picture like this:</p> <p><a data-rel="lightcase:gallery" href="/files/images/blog/2017-03-16-pic1.png"><img alt="" class="img-responsive" src="/files/images/blog/2017-03-16-pic1.png" /></a></p> <h2>Append planar clipping boundary</h2> <p>We can append any number of clipping boundaries directly inside an entity's <span class="hljs-keyword">subWorldDraw</span> or <span
class="hljs-keyword">subViewportDraw</span> overridden methods, using the <span class="hljs-title">OdGiGeometry</span>::<span class="hljs-keyword">pushClipBoundary</span> and <span class="hljs-title">OdGiGeometry</span>::<span class="hljs-keyword">popClipBoundary</span> methods, and these clipping boundaries influence the entity's geometry and the geometry of nested entities. This technique is useful for a database hierarchy, where we can apply clipping to entities inside blocks, but it can’t be used in applications that require clipping the entire drawing without modifying database entities. For this case (and to simplify our example application), we can attach the clipping boundary directly to the <span class="hljs-title">OdGsView</span> object using the <span class="hljs-keyword">setViewport3dClipping</span> method; this clipping boundary is then used to clip all contents of this <span class="hljs-title">OdGsView</span> object:</p> <pre class="hljs-pre"> <code class="cpp">void appendClippingBoundary(OdGsView *pView) { // Setup clipping plane OdGiPlanarClipBoundary::ClipPlaneArray clipPlanes; clipPlanes.push_back(OdGiPlanarClipBoundary::ClipPlane(OdGePoint3d::kOrigin - OdGeVector3d::kYAxis * 4.0, OdGeVector3d::kYAxis)); OdGiPlanarClipBoundary bnd; bnd.setClipPlanes(clipPlanes); // Setup viewport clipping OdGiClipBoundary emptyBoundary; ::odgiEmptyClipBoundary(emptyBoundary); pView-&gt;setViewport3dClipping(&amp;emptyBoundary, &amp;bnd); }</code></pre> <p style="margin-top:26px">We can specify a set of clipping planes using the <span class="hljs-title">OdGiPlanarClipBoundary</span> class. In this example, we keep the <span class="hljs-title">OdGiClipBoundary</span> object empty since the clipping plane is completely set in World Coordinate Space (WCS).
If we render the clip result, we get a picture like this:</p> <p><a data-rel="lightcase:gallery" href="/files/images/blog/2017-03-16-pic2.png"><img alt="" class="img-responsive" src="/files/images/blog/2017-03-16-pic2.png" /></a></p> <h2>Sections generation for clipping boundary</h2> <h3>Enable sections generation for clipping boundary</h3> <p>To enable section generation, make the following changes in our example <span class="hljs-keyword">appendClippingBoundary</span> function:</p> <pre class="hljs-pre"> <code class="cpp">OdGiPlanarClipBoundary bnd; bnd.setClipPlanes(clipPlanes); // Setup traits resolver for section geometry OdGiSectionGeometryOutputPtr pSectionGeometry = OdRxObjectImpl&lt;OdGiSectionGeometryOutput&gt;::createObject(); pSectionGeometry-&gt;setTraitsOverrideFlags(OdGiSubEntityTraitsChangedFlags::kColorChanged | OdGiSubEntityTraitsChangedFlags::kMaterialChanged); pSectionGeometry-&gt;traitsOverrides().setColor(1); bnd.setSectionGeometryOutput(pSectionGeometry); // Setup viewport clipping</code></pre> <p style="margin-top:26px">In this example, we create an <span class="hljs-title">OdGiSectionGeometryOutput</span> object, which is used to handle section geometry on the application side. This object is attached to the planar clipping boundary (<span class="hljs-title">OdGiPlanarClipBoundary</span>). All generated closed and open sections come through this <span class="hljs-title">OdGiSectionGeometryOutput</span> object, so we can customize the properties and representation of the output section geometry.</p> <p>If we render the result of our changes, we get a picture like this:</p> <p><a data-rel="lightcase:gallery" href="/files/images/blog/2017-03-16-pic3.png"><img alt="" class="img-responsive" src="/files/images/blog/2017-03-16-pic3.png" /></a></p> <h3>Multiple clipping planes</h3> <p>We can use any number of clipping planes to generate geometry sections.
For example, extend our <span class="hljs-keyword">appendClippingBoundary</span> function by specifying a secondary clipping plane:</p> <pre class="hljs-pre"> <code class="cpp">clipPlanes.push_back(OdGiPlanarClipBoundary::ClipPlane(OdGePoint3d::kOrigin - OdGeVector3d::kYAxis * 4.0, OdGeVector3d::kYAxis)); // Secondary clipping plane clipPlanes.push_back(OdGiPlanarClipBoundary::ClipPlane(OdGePoint3d::kOrigin + OdGeVector3d::kXAxis * 4.0, -OdGeVector3d::kXAxis)); OdGiPlanarClipBoundary bnd; bnd.setClipPlanes(clipPlanes);</code></pre> <p style="margin-top:26px">The result of the changes looks like this:</p> <p><a data-rel="lightcase:gallery" href="/files/images/blog/2017-03-16-pic4.png"><img alt="" class="img-responsive" src="/files/images/blog/2017-03-16-pic4.png" /></a></p> </div> <span><span lang="" about="/user/1811" typeof="schema:Person" property="schema:name" datatype="">Andrew Markovich</span></span> <span>Thu, 03/16/2017 - 17:55</span> Thu, 16 Mar 2017 14:55:29 +0000 Andrew Markovich 1096 at Introducing Teigha Cloud Architecture Introducing Teigha Cloud Architecture <div><p>One of the new directions for Teigha is the cloud. With Teigha Cloud, people can work with drawings remotely from any place: home, office, or a restaurant.</p> <p>Teigha Cloud consists of the following architecture:</p> <ul><li> <h3>Client (Browser)</h3> <p>The client side renders geometry. The client performs authorization over HTTP/HTTPS. Then the user can access previously uploaded drawings and can upload new ones to their account. All geometry that the client receives arrives over a WebSocket connection, which lets Teigha Cloud avoid a polling model for getting data from the server.</p> </li> <li> <h3>Load Balancer</h3> <p>Currently Teigha Cloud uses HAProxy, which satisfies the requirements for this stage of the Teigha Service evolution. Teigha Cloud can create sticky sessions based on cookies to pin a client to a certain server, which allows drawing features to be implemented.
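</p> <p>A minimal sketch of such an HAProxy backend combining round-robin balancing, cookie-based stickiness, and health checks (illustrative only; the backend name, addresses, and ports are made up):</p>

```
backend teigha_servers
    balance roundrobin                        # distribute new connections round-robin
    cookie SRV insert indirect nocache        # cookie-based sticky sessions
    server s1 10.0.0.1:8080 check cookie s1   # "check" enables health checks
    server s2 10.0.0.2:8080 check cookie s2
```

<p>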
HAProxy is configured with health checks to track server availability, and a round-robin algorithm assigns new connections to servers.</p> </li> <li> <h3>Server</h3> <p>The server is the most complicated and important part of the service. The server part is based on Node.js and the Express library. Each server runs several Node processes (based on the CPU count). Each Node process has its own WebSocket connection with a client, which is best for CPU utilization. TxHost is the workhorse: this module generates all graphics and interprets commands from clients. TxHost is written in C++ for performance reasons. Communication between the Node and TxHost processes uses ZMQ.</p> </li> </ul><p><a data-rel="lightcase:gallery" href="/files/images/blog/2017-02-16-pic1.png"><img alt="" class="img-fluid" src="/files/images/blog/2017-02-16-pic1.png" /></a></p> <p>So, this is a short overview of the Teigha Cloud service architecture. More details about Teigha Cloud components are coming soon.</p> </div> <span><span lang="" about="/user/3741" typeof="schema:Person" property="schema:name" datatype="">Sergey Stepantsov</span></span> <span>Thu, 02/16/2017 - 17:01</span> Thu, 16 Feb 2017 14:01:26 +0000 Sergey Stepantsov 1093 at