ODTUG Kscope17 Livestream Sessions http://www.odtug.com/p/bl/et/blogaid=735&source=1 If you can't make it to ODTUG Kscope17, you can still participate from home. Check out the list of sessions we're bringing you live from San Antonio, Texas! ODTUG http://www.odtug.com/p/bl/et/blogaid=735&source=1 Tue Jun 20 2017 14:44:07 GMT-0400 (EDT) Lift and Shift of BIAPPS Artifacts to Oracle Business Intelligence Cloud Service https://blogs.oracle.com/biapps/biapps_lift_shift_to_bics <p><strong><span style="color:#800000;">Authors: Swathi Singamareddygari, Anand Sadaiyan</span></strong></p> <strong><span style="color:#000080;">Table of Contents</span></strong> <p><a href="#_Toc482605654">Disclaimer</a><br /> <a href="#_Toc482605655">Section 1: Lifting and Shifting Application Roles</a><br /> <a href="#_Toc482605656">Section 2: Lifting and Shifting the Repository</a><br /> <a href="#_Toc482605657">Section 3: Lifting and Shifting the Web Catalogue</a><br /> <a href="#_Toc482605658">Section 4: Repository Consistency Checks</a></p> <a name="_Toc482605654"></a><strong><a name="_Toc462761353" style="background-color: rgb(255, 255, 255);"><span style="color:#000080;">Disclaimer</span></a></strong> <p style="text-align: justify;">This document does not replace the Oracle Business Intelligence Cloud Service Documentation Library or other Cloud Services documents. It serves as a supplement for lifting and shifting Business Intelligence Applications artifacts to BI Cloud Service.</p> <p style="text-align: justify;">This document is based on BI Cloud Service version 17.2.5. Screenshots included in this document might differ slightly from what you see on your screen.</p> <p style="text-align: justify;"><span style="color:#800000;"><strong><em>Note:</em></strong> </span>This is a supplementary document for installing Oracle BI Applications on PaaS with Oracle BI Cloud Service (<a href="https://support.oracle.com/oip/faces/secure/km/DocumentDisplay.jspx?id=2136376.1">Doc ID 2136376.1</a>).<br /> It is always good practice to take a snapshot of the current environment in Oracle BI Cloud Service before performing any lift and shift activities. Ensure that you create a snapshot before proceeding.</p> <a name="_Toc482605655"></a><strong><a name="_Toc462761355" style="background-color: rgb(255, 255, 255);"><span style="color:#000080;">Section 1: Lifting and Shifting Application Roles</span></a></strong> <p style="text-align: justify;">Oracle BI Applications delivers a BAR file (<strong>obia_bics_v0.9.bar</strong>) containing the out-of-the-box BIAPPS application roles. Download the BAR file from My Oracle Support (<a href="https://support.oracle.com/oip/faces/secure/km/DocumentDisplay.jspx?id=2136376.1">Doc ID 2136376.1</a>) and upload the application roles to the Oracle BI Cloud Service environment.</p> <p><span style="color:#000080;"><strong>Uploading the Application Roles BAR File:</strong></span></p> <ol> <li style="text-align: justify;">Log in to the Oracle BI Cloud Service environment.</li> <li style="text-align: justify;">From the Oracle BI Cloud Service home page, navigate to the Console and click Snapshots and Models.</li> <li style="text-align: justify;">Click Upload Snapshot to upload the delivered Application Roles BAR file.
<a href="https://docs.oracle.com/cloud/latest/reportingcs_use/BILPD/GUID-3F57E1E4-BFF9-4383-8999-32377E3F5B4F.htm#GUID-EA2778B4-8B7C-4135-91F0-90A223A35A80">See Uploading Snapshots</a> .</li> <li style="text-align: justify;">Select the delivered <strong>obia_bics_v0.9.bar</strong> file and enter &ldquo;Admin123&rdquo; as the password.</li> </ol> <p style="margin-left: 40px;"><img alt="" src="https://cdn.app.compendium.com/uploads/user/e7c690e8-6ff9-102a-ac6d-e4aebca50425/b8be8dec-54bb-455a-8b50-2ecd980da759/Image/ef6c37dabbffe116942ea1ac5f060429/image003.png" style="width: 1288px; height: 822px;" /></p> <ol> <li style="text-align: justify;" value="5">&nbsp;Select the uploaded snapshot, click the Restore action, and in Restore Snapshot popup, select Application Roles and click Restore to restore the snapshot for Application Roles. <a href="https://docs.oracle.com/cloud/latest/reportingcs_use/BILPD/GUID-C7DE34A5-7A67-4415-98B7-1CA9E5235480.htm#BILUG493">See Restoring Snapshots</a>.</li> </ol> <p style="margin-left: 40px;"><img alt="" src="https://cdn.app.compendium.com/uploads/user/e7c690e8-6ff9-102a-ac6d-e4aebca50425/b8be8dec-54bb-455a-8b50-2ecd980da759/Image/88ad9c061d375b142e8d7470e9a6d699/image005.png" style="width: 1838px; height: 751px;" /></p> <ol> <li style="text-align: justify;" value="6">Verify the imported Application Roles in the Application Role Management page (Console -&gt; Users and Roles -&gt; Application Roles).</li> </ol> <p style="margin-left: 40px;"><img alt="" src="https://cdn.app.compendium.com/uploads/user/e7c690e8-6ff9-102a-ac6d-e4aebca50425/b8be8dec-54bb-455a-8b50-2ecd980da759/Image/f2ff3cee06094fddacc8483969166edb/image007.png" style="width: 1416px; height: 796px;" /></p> <strong><a name="_Toc482605656">Section 2: Lifting and Shifting the Repository</a></strong> <p><span style="color:#000080;"><strong>Overview:</strong></span></p> <p>Administrators can upload the on premises repository to Oracle BI Cloud Service using the steps mentioned below. While setting up Oracle BI Applications setup on the PaaS environment, the repository is provisioned with the required connection details. Administrators can obtain the Oracle BI Applications provisioned repository from the Compute environment.<br /> <br /> <em><strong>For example :</strong></em> /u01/app/oracle/middleware/instances/instance1/bifoundation/OracleBIServerComponent/coreapplication_obis1/repository .</p> <p style="text-align: justify;"><span style="color:#800000;"><strong><em>Note:</em></strong></span><br /> You can&rsquo;t import the provisioned repository directly into Oracle BI Cloud Service because the severity of the consistency check has been increased in Oracle BI Cloud Service when compared to Oracle BI EE 11.1.1.9. Hence you must fix the consistency check issues before uploading the Repository to Oracle BI Cloud Service environment.</p> <p style="text-align: justify;"><span style="color:#000080;"><strong>Fixing Consistency Check Issues:</strong></span><br /> You must fix the following consistency check issues with the repository in your local environment and then upload the &lsquo;fixed&rsquo; repository to Oracle BI Cloud Service. For the ease of use, all the changes are documented in the Section 4: Repository Consistency Checks. 
Fixing the consistency check issues is mandatory.</p> <p><span style="color:#000080;"><strong>Steps for lifting and shifting the repository</strong></span></p> <ol> <li>Log in to the Oracle BI Cloud Service environment.</li> <li>Navigate to the Console from the home page and click Snapshots and Models.</li> </ol> <p style="margin-left: 40px;"><img alt="" src="https://cdn.app.compendium.com/uploads/user/e7c690e8-6ff9-102a-ac6d-e4aebca50425/b8be8dec-54bb-455a-8b50-2ecd980da759/Image/5d4cd95180a820f2d235e440b7222c9f/image009.png" style="width: 1383px; height: 695px;" /></p> <ol> <li style="text-align: justify;" value="3">Replace the Oracle BI Cloud Service data model with the on-premises repository using the “Replace Data Model” option, and provide the password for the on-premises repository. Ensure that you select the repository in which you fixed the consistency check issues.</li> </ol> <p style="margin-left: 40px;"><img alt="" src="https://cdn.app.compendium.com/uploads/user/e7c690e8-6ff9-102a-ac6d-e4aebca50425/b8be8dec-54bb-455a-8b50-2ecd980da759/Image/f97a668130ee84091c0bdb2233d903a5/image011.png" style="width: 1300px; height: 736px;" /></p> <ol> <li style="text-align: justify;" value="4">Verify the uploaded repository by navigating to Analyses and clicking “Create Analysis”. You should see the available subject areas. For more details on repository lifting and shifting, refer to <a href="http://docs.oracle.com/cloud/latest/reportingcs_use/BILPD/GUID-2BEB60F6-986D-4A7A-9D63-EEE67083E98A.htm#BILPD-GUID-2BEB60F6-986D-4A7A-9D63-EEE67083E98A">Uploading an On-Premises Data Model to Oracle BI Cloud Service</a>.</li> </ol> <strong><a name="_Toc482605657"><span style="color:#000080;">Section 3: Lifting and Shifting the Web Catalogue</span></a></strong> <p style="text-align: justify;">Administrators can upload web catalogue artifacts from another Oracle BI Cloud Service instance or from Oracle BI Enterprise Edition 11.1.1.9.0 or later.</p> <p><span style="color:#000080;"><strong>Lifting and shifting the web catalogue mainly involves two activities:</strong></span></p> <ol> <li>Archive each functional area folder from the web catalogue available on the compute node.</li> <li>Unarchive each folder to Company Shared in the Oracle BI Cloud Service environment.</li> </ol> <p><span style="color:#000080;"><strong>Lifting the Presentation Catalogue on the Compute Node</strong></span></p> <ol> <li>Open Oracle BI Applications analytics on the compute node.
For example: https://&lt;host_name&gt;:&lt;port&gt;/analytics</li> <li>Navigate to the Catalog and expand Shared Folders in the left pane.</li> </ol> <p style="margin-left: 40px;"><img alt="" src="https://cdn.app.compendium.com/uploads/user/e7c690e8-6ff9-102a-ac6d-e4aebca50425/b8be8dec-54bb-455a-8b50-2ecd980da759/Image/470cbf5f0c1c0739b26ea02b82997694/image013.png" style="width: 1527px; height: 535px;" /></p> <ol> <li value="3">Select the functional area folder and click Archive in the Tasks pane.</li> </ol> <p style="margin-left: 40px;"><img alt="" src="https://cdn.app.compendium.com/uploads/user/e7c690e8-6ff9-102a-ac6d-e4aebca50425/b8be8dec-54bb-455a-8b50-2ecd980da759/Image/b9e3b4121bd0ba4ecebf64adf89d4c09/image015.png" style="width: 254px; height: 849px;" /></p> <ol> <li value="4">Select the Keep Permissions and Keep Timestamps options and click OK.</li> </ol> <p style="margin-left: 40px;"><img alt="" src="https://cdn.app.compendium.com/uploads/user/e7c690e8-6ff9-102a-ac6d-e4aebca50425/b8be8dec-54bb-455a-8b50-2ecd980da759/Image/61b450ac0bd20cfa7d2ae334816e3be2/image017.png" style="width: 155px; height: 153px;" /></p> <ol> <li value="5">Repeat steps 3 and 4 for all functional area folders.</li> </ol> <ol> <li value="6">Once archiving is done, log in to the Oracle BI Cloud Service environment and follow the steps for shifting the BI Presentation Catalogue.</li> </ol> <p><span style="color:#000080;"><strong>Shifting the Presentation Catalogue in the Oracle BI Cloud Service Environment</strong></span></p> <ol> <li>Log in to the Oracle BI Cloud Service environment.</li> <li>Navigate to the Catalog from the home page.</li> <li>Expand the Company Shared folder in the Folders section.</li> <li>If you prefer to keep the “Sample App” folder, skip steps 5 and 6.</li> <li>Select the SampleApp folder and click Delete in the Tasks pane.</li> </ol> <p style="margin-left: 40px;"><img alt="" src="https://cdn.app.compendium.com/uploads/user/e7c690e8-6ff9-102a-ac6d-e4aebca50425/b8be8dec-54bb-455a-8b50-2ecd980da759/Image/6568ae3dfe482d5c6e9d142c56175208/image019.png" style="width: 1459px; height: 827px;" /></p> <ol> <li value="6">Click OK on the Confirm Delete page.</li> </ol> <p style="margin-left: 40px;"><img alt="" src="https://cdn.app.compendium.com/uploads/user/e7c690e8-6ff9-102a-ac6d-e4aebca50425/b8be8dec-54bb-455a-8b50-2ecd980da759/Image/bab03fb611b5b6dec281a530a1179cf9/image021.png" style="width: 414px; height: 147px;" /></p> <ol> <li value="7">Select the Company Shared folder in the Folders pane.</li> </ol> <p style="margin-left: 40px;"><img alt="" src="https://cdn.app.compendium.com/uploads/user/e7c690e8-6ff9-102a-ac6d-e4aebca50425/b8be8dec-54bb-455a-8b50-2ecd980da759/Image/9b63e2ba2d13879f76b9ff6f07904c9d/image023.png" style="width: 1590px; height: 429px;" /></p> <p style="margin-left: 40px;">8.
Select Unarchive from the Tasks pane, browse to the archive file created for each functional area, and select "Preserve" for the ACL option.</p> <p style="margin-left: 40px;"><img alt="" src="https://cdn.app.compendium.com/uploads/user/e7c690e8-6ff9-102a-ac6d-e4aebca50425/b8be8dec-54bb-455a-8b50-2ecd980da759/Image/b92b5746a2af0d91e70361819b7a1806/image025.png" style="width: 1216px; height: 753px;" /></p> <ol> <li value="9">Repeat steps 7 and 8 to unarchive all functional area folders to the Company Shared folder.</li> <li value="10">Click the Dashboards menu to see all the dashboards.<img alt="" src="https://cdn.app.compendium.com/uploads/user/e7c690e8-6ff9-102a-ac6d-e4aebca50425/b8be8dec-54bb-455a-8b50-2ecd980da759/Image/740fca78054c2b89bd117cd45c683ac4/image027.png" style="width: 1171px; height: 817px;" /></li> <li value="11">Click any dashboard and check the results.</li> </ol> <p style="margin-left: 40px;"><img alt="" src="https://cdn.app.compendium.com/uploads/user/e7c690e8-6ff9-102a-ac6d-e4aebca50425/b8be8dec-54bb-455a-8b50-2ecd980da759/Image/dd4e16433c825a30ff0b1df4065feb0b/image029.png" style="width: 1598px; height: 612px;" /></p> <a name="_Toc482605658"></a><strong><a name="ConsistencyCheck" style="background-color: rgb(255, 255, 255);"><span style="color:#000080;">Section 4: Repository Consistency Checks</span></a></strong> <p style="text-align: justify;">For ease of use, all the consistency check issues are documented in the attached spreadsheet (<a href="https://app.compendium.com/api/post_attachments/93ecbf33-d349-4194-9772-1ab5a7713370/view" target="_blank">BICS_Lift_And_Shift_17x.xls</a>). Detailed steps are provided for fixing the consistency check issue for “SET ID-based Security”. Follow similar steps to fix the consistency check issues for the other application roles mentioned in the attached <a href="https://app.compendium.com/api/post_attachments/93ecbf33-d349-4194-9772-1ab5a7713370/view" target="_blank">spreadsheet</a>. If you are using the trimmed Oracle BI Applications repository, not all of the issues mentioned here may apply.
Fix only the applicable issues.<br /> <br /> <span style="color:#800000;"><strong><em>Any modifications to the repository should be made in your local environment, and it is your responsibility to maintain the repository.</em></strong></span></p> <p><span style="color:#000080;"><strong>Detailed Steps for Fixing the SET ID-based Security Application Role:</strong></span></p> <ol> <li>Open the repository in the BI Server Administration Tool, click Manage, and then click Identity.</li> </ol> <p style="margin-left: 40px;"><img alt="" src="https://cdn.app.compendium.com/uploads/user/e7c690e8-6ff9-102a-ac6d-e4aebca50425/b8be8dec-54bb-455a-8b50-2ecd980da759/Image/a543a60e2b6317be83c894a749b23d88/image031.png" style="width: 807px; height: 354px;" /></p> <ol> <li value="2">In the Identity Manager window, search for the application role “SET ID-based Security”, right-click it, and select Properties.</li> </ol> <p style="margin-left: 40px;"><img alt="" src="https://cdn.app.compendium.com/uploads/user/e7c690e8-6ff9-102a-ac6d-e4aebca50425/b8be8dec-54bb-455a-8b50-2ecd980da759/Image/2722228857dc71e7e4fd467959d65d95/image033.png" style="width: 966px; height: 479px;" /></p> <ol> <li value="3">Click Permissions and then the Data Filters tab.</li> </ol> <p style="margin-left: 40px;"><img alt="" src="https://cdn.app.compendium.com/uploads/user/e7c690e8-6ff9-102a-ac6d-e4aebca50425/b8be8dec-54bb-455a-8b50-2ecd980da759/Image/33e791e7244f4b80aaebe8584c425909/image035.png" style="width: 1488px; height: 553px;" /></p> <ol> <li value="4">Modify the "Core"."Dim - Customer"."Set Id" data filter expression to "Core"."Dim - Customer"."Set Id" = VALUEOF(NQ_SESSION."SET_ID"). At query time, OBIEE substitutes the value of the SET_ID session variable for VALUEOF(NQ_SESSION."SET_ID"), so each user sees only the rows whose Set Id matches their session.</li> </ol> <p style="margin-left: 40px;"><img alt="" src="https://cdn.app.compendium.com/uploads/user/e7c690e8-6ff9-102a-ac6d-e4aebca50425/b8be8dec-54bb-455a-8b50-2ecd980da759/Image/8a85ef4c1878a2389b5c81749856824d/image037.png" /></p> Anand Sadaiyan https://blogs.oracle.com/biapps/biapps_lift_shift_to_bics Tue Jun 20 2017 13:00:00 GMT-0400 (EDT) Lift and Shift of Oracle BIAPPS Artifacts to Oracle Analytics Cloud https://blogs.oracle.com/biapps/biapps_lift_shift_to_oac <p><span style="color:#800000;"><strong>Authors: Swathi Singamareddygari, Anand Sadaiyan</strong></span></p> <p><strong><span style="color:#000080;"><strong style="font-size: 13px;">Table of Contents</strong></span></strong></p> <p><a href="#_Toc482611085">Disclaimer</a><br /> <a href="#_Toc482611086">Section 1: Deliverables</a><br /> <a href="#_Toc482611087">Section 2: Lifting and Shifting Application Roles and the Web Catalogue</a><br /> <a href="#_Toc482611088">Section 3: Lifting and Shifting the Repository</a><br /> <a href="#_Toc482611089">Section 4: FAQ</a><br /> <a href="#_Toc482611090">Section 5: Limitations</a></p> <p><a name="_Toc482611085"></a><a name="_Toc482354171"></a><strong><a name="_Toc462761353" style="background-color: rgb(255, 255, 255);"><span style="color:#000080;">Disclaimer</span></a></strong></p> <p style="text-align: justify;">This document does not replace the Oracle Analytics Cloud Service Documentation Library or other Cloud Services documents.
It serves as a supplement for lifting and shifting Business Intelligence Applications artifacts to Oracle Analytics Cloud.</p> <p style="text-align: justify;">This document is based on Oracle Analytics Cloud version 17.2.1.0.0. Screenshots included in this document might differ slightly from what you see on your screen.</p> <p style="text-align: justify;"><strong><em><span style="color:#800000;">Note</span>:</em></strong> It is always good practice to take a snapshot of the current environment in Oracle Analytics Cloud before lifting and shifting the artifacts. Ensure that you create a snapshot before proceeding.</p> <p><strong><a name="_Toc482611086"><span style="color:#000080;">Section 1: Deliverables</span></a></strong></p> <p style="text-align: justify;">Download the Oracle Analytics Cloud deliverables zip file (BIAPPS_10.2.0_OACV1.zip) from <a href="http://aru.us.oracle.com:8080/ARU/ViewPatchRequest/process_form?aru=21167611">ARU21167611</a>. The deliverables are based on the BIAPPS 10.2 release.</p> <p style="text-align: justify;"><strong>The zip file consists of:</strong><br /> 1) OracleBIApps_10.2.0_OACV1.rpd (Password: welcome1)<br /> 2) BIAPPS_10.2.0_OACV1.bar (Password: Admin123)</p> <p><br /> <strong><a name="_Toc482611087"><span style="color:#000080;">Section 2: Lifting and Shifting Application Roles and the Web Catalogue</span></a></strong></p> <p><strong>Uploading the Application Roles and Web Catalogue</strong></p> <ol> <li>Log in to the Oracle Analytics Cloud VA page.</li> <li>From the Oracle Analytics Cloud home page, navigate to the Console and click Snapshots.</li> <li>Click Upload Snapshot to upload the delivered BAR file. <a href="https://docs.oracle.com/cloud/latest/analytics-cloud/ACABI/GUID-3F57E1E4-BFF9-4383-8999-32377E3F5B4F.htm#GUID-EA2778B4-8B7C-4135-91F0-90A223A35A80">See Uploading Snapshots</a>.</li> <li>If a virus scanner is not configured, click “Proceed without a virus scanner”.</li> </ol> <p style="margin-left: 40px;"><img alt="" src="https://cdn.app.compendium.com/uploads/user/e7c690e8-6ff9-102a-ac6d-e4aebca50425/b8be8dec-54bb-455a-8b50-2ecd980da759/Image/c3301faef9b50a87bfbdcea9fbf3479d/image002.png" style="width: 1081px; height: 538px;" /></p> <ol> <li value="5">Select the delivered BAR file and enter “Admin123” as the password.</li> </ol> <p style="margin-left: 40px;"><img alt="" src="https://cdn.app.compendium.com/uploads/user/e7c690e8-6ff9-102a-ac6d-e4aebca50425/b8be8dec-54bb-455a-8b50-2ecd980da759/Image/7702d1d8e3cd58ed0afb3d4febece4ea/image003.png" style="width: 1280px; height: 719px;" /></p> <ol> <li style="text-align: justify;" value="6">Select the uploaded snapshot and click the Restore action. In the Restore Snapshot popup, select Application Roles and Catalog, and click Restore to restore the snapshot for the application roles and web catalogue.
<a href="https://docs.oracle.com/cloud/latest/analytics-cloud/ACABI/GUID-C7DE34A5-7A67-4415-98B7-1CA9E5235480.htm#GUID-C88E3DCD-8B10-4826-B0F4-EEBCFD0A2897">See Restoring Snapshots</a>.</li> </ol> <p style="margin-left: 40px;"><img alt="" src="https://cdn.app.compendium.com/uploads/user/e7c690e8-6ff9-102a-ac6d-e4aebca50425/b8be8dec-54bb-455a-8b50-2ecd980da759/Image/2f6906148c07eb0a586ed1bf6b025a4e/image004.png" style="width: 1240px; height: 705px;" /></p> <ol> <li style="text-align: justify;" value="7">Verify the imported Application Roles in Application Role Management page (Console&gt; Users and Roles-&gt; Application Roles).<br /> &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;<img alt="" src="https://cdn.app.compendium.com/uploads/user/e7c690e8-6ff9-102a-ac6d-e4aebca50425/b8be8dec-54bb-455a-8b50-2ecd980da759/Image/4559fb21482a4680f4961edd827b1164/image005.png" style="width: 974px; height: 562px;" /></li> </ol> <ol> <li style="text-align: justify;" value="8">Verify the Webcat Artifacts by Clicking on dashboard menu from Classic Home.</li> </ol> <p style="margin-left: 40px;"><img alt="" src="https://cdn.app.compendium.com/uploads/user/e7c690e8-6ff9-102a-ac6d-e4aebca50425/b8be8dec-54bb-455a-8b50-2ecd980da759/Image/4dce8ae9bc847d85b29107e827737995/image006.png" style="width: 801px; height: 824px;" /></p> <p style="margin-left: 40px;"><img alt="" src="https://cdn.app.compendium.com/uploads/user/e7c690e8-6ff9-102a-ac6d-e4aebca50425/b8be8dec-54bb-455a-8b50-2ecd980da759/Image/463eaaf4fdaa34c54cb7280edec299cb/image007.png" style="width: 626px; height: 311px;" /></p> <p>&nbsp;</p> <p><strong><a name="_Toc482611088"><span style="color:#000080;">Section:3 Lifting and Shifting Repository</span></a></strong></p> <p style="text-align: justify;"><em><span style="color:#800000;"><strong>Note</strong></span></em>: Any modifications to the repository should be done in the On premise environment. No modifications are allowed in the Oracle Analytics Cloud. Allow 5 minutes of time for the Oracle Analytics Cloud environment to get refreshed after the Repository Upload.<br /> <br /> Oracle BI Applications Repository is delivered along with the Oracle Analytics Cloud deliverables zip file.</p> <p><span style="color:#000080;"><strong>Uploading the Repository:</strong></span></p> <ol> <li style="text-align: justify;">Modify the delivered repository with proper connection details (Connection pools and Schema variables OLAPTBO, CM_TBO etc)</li> </ol> <ol> <li style="text-align: justify;" value="2">Login into Oracle Analytics Cloud environment.</li> </ol> <ol> <li style="text-align: justify;" value="3">Navigate to Console from the home page and click on Snapshots.</li> </ol> <p style="margin-left: 40px;"><img alt="" src="https://cdn.app.compendium.com/uploads/user/e7c690e8-6ff9-102a-ac6d-e4aebca50425/b8be8dec-54bb-455a-8b50-2ecd980da759/Image/59226b87d6c599f1de6ed9331e63032f/image008.png" style="width: 1259px; height: 676px;" /></p> <ol> <li style="text-align: justify;" value="4">Replace the Oracle Analytics Cloud data model with on-premises repository using &ldquo;Replace Data model&rdquo; option. 
If a virus scanner is not configured, click “Proceed without a virus scanner”.</li> </ol> <p style="margin-left: 40px;"><img alt="" src="https://cdn.app.compendium.com/uploads/user/e7c690e8-6ff9-102a-ac6d-e4aebca50425/b8be8dec-54bb-455a-8b50-2ecd980da759/Image/1e3a033f744f502ab06f16f9f6459884/image009.png" style="width: 1230px; height: 805px;" /></p> <ol> <li value="5">Choose the on-premises repository and provide the repository password ("welcome1" without quotes).</li> </ol> <p style="margin-left: 40px;"><img alt="" src="https://cdn.app.compendium.com/uploads/user/e7c690e8-6ff9-102a-ac6d-e4aebca50425/b8be8dec-54bb-455a-8b50-2ecd980da759/Image/332f046c37a3e6190d05012efb5d8525/image010.png" style="width: 1410px; height: 751px;" /></p> <ol> <li style="text-align: justify;" value="6">Verify the uploaded RPD by navigating to Analyses and clicking “Create Analysis”. You should see the available subject areas. For more details on repository lifting and shifting, refer to <a href="http://docs.oracle.com/cloud/latest/reportingcs_use/BILPD/GUID-2BEB60F6-986D-4A7A-9D63-EEE67083E98A.htm#BILPD-GUID-2BEB60F6-986D-4A7A-9D63-EEE67083E98A">Uploading an On-Premises Data Model to Oracle Cloud Service</a>.</li> </ol> <p><strong><a name="_Toc482611089"><span style="color:#000080;">Section 4: FAQ</span></a></strong></p> <ol> <li style="text-align: justify;">Custom application role permissions are not applied to the repository objects.<br /> <span style="color:#800000;"><strong>Solution</strong></span>: Create the custom application roles in VA first, and then upload the repository.</li> <li style="text-align: justify;" value="2">Unable to change permissions for a web catalogue object; an Assertion Failure error appears.</li> </ol> <p style="margin-left: 40px;"><img alt="" src="https://cdn.app.compendium.com/uploads/user/e7c690e8-6ff9-102a-ac6d-e4aebca50425/b8be8dec-54bb-455a-8b50-2ecd980da759/Image/36ed4de35ee7584c2c0974178f9421b4/image011.png" style="width: 919px; height: 205px;" /><br /> <br /> <strong><span style="color:#800000;">Solution</span>: </strong>Delete any unresolved accounts in the web catalogue object and then change the permissions.</p> <p><a name="_Toc482611090"><span style="color:#000080;"><strong>Section 5: Limitations</strong></span></a></p> <p>The following features used in Oracle BI Applications are not supported in the Oracle Analytics Cloud environment: Group, KPI, KPI Watchlist, List Format, Segment, and Scorecard. The affected catalogue objects are listed in the table below.</p>
<table border="1">
<thead><tr><th>Name</th><th>Path</th><th>Signature</th></tr></thead>
<tbody>
<tr><td>Campaign Members Load Format</td><td>/shared/Marketing/Segmentation/List Formats/Campaign Members Load Format</td><td>Campaign Load</td></tr>
<tr><td>Campaign Members Suspects Load Format</td><td>/shared/Marketing/Segmentation/List Formats/Campaign Members Suspects Load Format</td><td>Campaign Load</td></tr>
<tr><td>Consumer Campaign Members Load Format</td><td>/shared/Marketing/Segmentation/List Formats/Consumer Campaign Members Load Format</td><td>Campaign Load</td></tr>
<tr><td>Consumer Leads Import Load Format</td><td>/shared/Marketing/Segmentation/List Formats/Consumer Leads Import Load Format</td><td>Campaign Load</td></tr>
<tr><td>Leads Import Load Format</td><td>/shared/Marketing/Segmentation/List Formats/Leads Import Load Format</td><td>Campaign Load</td></tr>
<tr><td>Campaign Load - Contacts and Prospects Example</td><td>/shared/Marketing/Segmentation/List Formats/Siebel List Formats/Campaign Load - Contacts and Prospects Example</td><td>Campaign Load</td></tr>
<tr><td>Campaign Load - Database Writeback Example</td><td>/shared/Marketing/Segmentation/List Formats/Siebel List Formats/Campaign Load - Database Writeback Example</td><td>Campaign Load</td></tr>
<tr><td>Mutual Exclusion Campaign Load - Contacts and Prospects Example</td><td>/shared/Marketing/Segmentation/List Formats/Siebel List Formats/Mutual Exclusion Campaign Load - Contacts and Prospects Example</td><td>Campaign Load</td></tr>
<tr><td>Suspects Import Load Format</td><td>/shared/Marketing/Segmentation/List Formats/Suspects Import Load Format</td><td>Campaign Load</td></tr>
<tr><td>All Groups</td><td>/shared/Human Capital Management/_filters/Human Resources - Workforce Deployment/Role Dashboards/All Groups</td><td>Group</td></tr>
<tr><td>Below Top Performance</td><td>/shared/Human Capital Management/_filters/Human Resources - Workforce Deployment/Role Dashboards/Below Top Performance</td><td>Group</td></tr>
<tr><td>Global</td><td>/shared/Human Capital Management/_filters/Human Resources - Workforce Deployment/Role Dashboards/Global</td><td>Group</td></tr>
<tr><td>Top Performers</td><td>/shared/Human Capital Management/_filters/Human Resources - Workforce Deployment/Role Dashboards/Top Performers</td><td>Group</td></tr>
<tr><td>Average Negotiation Cycle Time</td><td>/shared/Procurement/Procurement Scorecard/KPIs/Customer/Average Negotiation Cycle Time</td><td>KPI</td></tr>
<tr><td>Fulfilled Requisition Lines past expected date</td><td>/shared/Procurement/Procurement Scorecard/KPIs/Customer/Fulfilled Requisition Lines past expected date</td><td>KPI</td></tr>
<tr><td>Late Receipts</td><td>/shared/Procurement/Procurement Scorecard/KPIs/Customer/Late Receipts</td><td>KPI</td></tr>
<tr><td>Processed Requisition Lines past expected date</td><td>/shared/Procurement/Procurement Scorecard/KPIs/Customer/Processed Requisition Lines past expected date</td><td>KPI</td></tr>
<tr><td>Procurement Cycle Time</td><td>/shared/Procurement/Procurement Scorecard/KPIs/Customer/Procurement Cycle Time</td><td>KPI</td></tr>
<tr><td>Unfulfilled Requisition Lines past expected date</td><td>/shared/Procurement/Procurement Scorecard/KPIs/Customer/Unfulfilled Requisition Lines past expected date</td><td>KPI</td></tr>
<tr><td>Off-Contract Spend</td><td>/shared/Procurement/Procurement Scorecard/KPIs/Financial/Off-Contract Spend</td><td>KPI</td></tr>
<tr><td>Perfect invoices</td><td>/shared/Procurement/Procurement Scorecard/KPIs/Financial/Perfect invoices</td><td>KPI</td></tr>
<tr><td>Realized Cost Savings</td><td>/shared/Procurement/Procurement Scorecard/KPIs/Financial/Realized Cost Savings</td><td>KPI</td></tr>
<tr><td>Invoice Automation</td><td>/shared/Procurement/Procurement Scorecard/KPIs/Operations/Invoice Automation</td><td>KPI</td></tr>
<tr><td>Manual Requisition Lines Rate</td><td>/shared/Procurement/Procurement Scorecard/KPIs/Operations/Manual Requisition Lines Rate</td><td>KPI</td></tr>
<tr><td>PO Transactions per Buyer</td><td>/shared/Procurement/Procurement Scorecard/KPIs/Operations/PO Transactions per Buyer</td><td>KPI</td></tr>
<tr><td>Processed Negotiation Lines</td><td>/shared/Procurement/Procurement Scorecard/KPIs/Operations/Processed Negotiation Lines</td><td>KPI</td></tr>
<tr><td># of Suppliers per Category</td><td>/shared/Procurement/Procurement Scorecard/KPIs/Supplier/# of Suppliers per Category</td><td>KPI</td></tr>
<tr><td>% of Spend By Diversified Suppliers</td><td>/shared/Procurement/Procurement Scorecard/KPIs/Supplier/% of Spend By Diversified Suppliers</td><td>KPI</td></tr>
<tr><td>On-Time Delivery performance</td><td>/shared/Procurement/Procurement Scorecard/KPIs/Supplier/On-Time Delivery performance</td><td>KPI</td></tr>
<tr><td>Quality Performance</td><td>/shared/Procurement/Procurement Scorecard/KPIs/Supplier/Quality Performance</td><td>KPI</td></tr>
<tr><td>Returns</td><td>/shared/Procurement/Procurement Scorecard/KPIs/Supplier/Returns</td><td>KPI</td></tr>
<tr><td>Exact Match Rate</td><td>/shared/Supply Chain Management/Analytic Library/Embedded Content/Logistics/KPIs/Exact Match Rate</td><td>KPI</td></tr>
<tr><td>Hit/Miss Accuracy</td><td>/shared/Supply Chain Management/Analytic Library/Embedded Content/Logistics/KPIs/Hit/Miss Accuracy</td><td>KPI</td></tr>
<tr><td>Inventory Value</td><td>/shared/Supply Chain Management/Analytic Library/Embedded Content/Logistics/KPIs/Inventory Value</td><td>KPI</td></tr>
<tr><td>Average Change Order Approval Time</td><td>/shared/Supply Chain Management/Analytic Library/Embedded Content/PIM/KPIs/Average Change Order Approval Time</td><td>KPI</td></tr>
<tr><td>Average Change Order Cycle Time</td><td>/shared/Supply Chain Management/Analytic Library/Embedded Content/PIM/KPIs/Average Change Order Cycle Time</td><td>KPI</td></tr>
<tr><td>Average New Item Creation Approval Time</td><td>/shared/Supply Chain Management/Analytic Library/Embedded Content/PIM/KPIs/Average New Item Creation Approval Time</td><td>KPI</td></tr>
<tr><td>Average New Item Creation Cycle Time</td><td>/shared/Supply Chain Management/Analytic Library/Embedded Content/PIM/KPIs/Average New Item Creation Cycle Time</td><td>KPI</td></tr>
<tr><td>Percentage of Shared Categories</td><td>/shared/Supply Chain Management/Analytic Library/Embedded Content/PIM/KPIs/Percentage of Shared Categories</td><td>KPI</td></tr>
<tr><td>List Export - Contacts Example</td><td>/shared/Marketing/Segmentation/List Formats/Siebel List Formats/List Export - Contacts Example</td><td>List Export</td></tr>
<tr><td>Analytics Data Load - Leads Example</td><td>/shared/Marketing/Segmentation/List Formats/Siebel List Formats/Analytics Data Load - Leads Example</td><td>Marketing BI Data Load</td></tr>
<tr><td>Analytics Data Load - Responses Example</td><td>/shared/Marketing/Segmentation/List Formats/Siebel List Formats/Analytics Data Load - Responses Example</td><td>Marketing BI Data Load</td></tr>
<tr><td>Campaign Members Export Format</td><td>/shared/Marketing/Segmentation/List Formats/Campaign Members Export Format</td><td>Marketing Email Server</td></tr>
<tr><td>Email Personalization - Contacts - OLTP Example</td><td>/shared/Marketing/Segmentation/List Formats/Siebel List Formats/Email Personalization - Contacts - OLTP Example</td><td>Marketing Email Server</td></tr>
<tr><td>_CauseAndEffectLinkages</td><td>/shared/Procurement/Procurement Scorecard/_CauseAndEffectLinkages</td><td>Scorecard Cause And Effect Linkages</td></tr>
<tr><td>Cause &amp; Effect Map: Improve Response Time</td><td>/shared/Procurement/Procurement Scorecard/Cause &amp; Effect Map: Improve Response Time</td><td>Scorecard Causes And Effects View</td></tr>
<tr><td>Automate Invoice Processing</td><td>/shared/Procurement/Procurement Scorecard/Automate Invoice Processing</td><td>Scorecard Initiative</td></tr>
<tr><td>Consolidate Supplier Base</td><td>/shared/Procurement/Procurement Scorecard/Consolidate Supplier Base</td><td>Scorecard Initiative</td></tr>
<tr><td>Develop and Implement New policies to support Contract compliance</td><td>/shared/Procurement/Procurement Scorecard/Develop and Implement New policies to support Contract compliance</td><td>Scorecard Initiative</td></tr>
<tr><td>Establish and Monitor SLAs</td><td>/shared/Procurement/Procurement Scorecard/Establish and Monitor SLAs</td><td>Scorecard Initiative</td></tr>
<tr><td>Implement Internet Supplier Portal</td><td>/shared/Procurement/Procurement Scorecard/Implement Internet Supplier Portal</td><td>Scorecard Initiative</td></tr>
<tr><td>Implement Self Service Procurement Application</td><td>/shared/Procurement/Procurement Scorecard/Implement Self Service Procurement Application</td><td>Scorecard Initiative</td></tr>
<tr><td>Implement Spend Analytics</td><td>/shared/Procurement/Procurement Scorecard/Implement Spend Analytics</td><td>Scorecard Initiative</td></tr>
<tr><td>Initiatives</td><td>/shared/Procurement/Procurement Scorecard/Initiatives</td><td>Scorecard Initiative</td></tr>
<tr><td>Monitor Performance and provide regular feedback on quarterly basis</td><td>/shared/Procurement/Procurement Scorecard/Monitor Performance and provide regular feedback on quarterly basis</td><td>Scorecard Initiative</td></tr>
<tr><td>Monitor Spend and Savings on Monthly Basis</td><td>/shared/Procurement/Procurement Scorecard/Monitor Spend and Savings on Monthly Basis</td><td>Scorecard Initiative</td></tr>
<tr><td>Monitor Spend by Diversified Suppliers on monthly basis</td><td>/shared/Procurement/Procurement Scorecard/Monitor Spend by Diversified Suppliers on monthly basis</td><td>Scorecard Initiative</td></tr>
<tr><td>Reward high performing employees</td><td>/shared/Procurement/Procurement Scorecard/Reward high performing employees</td><td>Scorecard Initiative</td></tr>
<tr><td>_initiativeTree</td><td>/shared/Procurement/Procurement Scorecard/_initiativeTree</td><td>Scorecard Initiative Tree</td></tr>
<tr><td>Mission</td><td>/shared/Procurement/Procurement Scorecard/Mission</td><td>Scorecard Mission</td></tr>
<tr><td>Control Spend</td><td>/shared/Procurement/Procurement Scorecard/Control Spend</td><td>Scorecard Objective</td></tr>
<tr><td>Develop and Retain Strategic Suppliers</td><td>/shared/Procurement/Procurement Scorecard/Develop and Retain Strategic Suppliers</td><td>Scorecard Objective</td></tr>
<tr><td>Improve Response Time</td><td>/shared/Procurement/Procurement Scorecard/Improve Response Time</td><td>Scorecard Objective</td></tr>
<tr><td>Improve Supplier Performance</td><td>/shared/Procurement/Procurement Scorecard/Improve Supplier Performance</td><td>Scorecard Objective</td></tr>
<tr><td>Increase Productivity</td><td>/shared/Procurement/Procurement Scorecard/Increase Productivity</td><td>Scorecard Objective</td></tr>
<tr><td>New Objective</td><td>/shared/Procurement/Procurement Scorecard/New Objective</td><td>Scorecard Objective</td></tr>
<tr><td>New Objective 1</td><td>/shared/Procurement/Procurement Scorecard/New Objective 1</td><td>Scorecard Objective</td></tr>
<tr><td>Procurement Scorecard</td><td>/shared/Procurement/Procurement Scorecard/Procurement Scorecard</td><td>Scorecard Objective</td></tr>
<tr><td>Promote Supplier Diversity</td><td>/shared/Procurement/Procurement Scorecard/Promote Supplier Diversity</td><td>Scorecard Objective</td></tr>
<tr><td>Reduce Operational Costs</td><td>/shared/Procurement/Procurement Scorecard/Reduce Operational Costs</td><td>Scorecard Objective</td></tr>
<tr><td>Reduce Out-of-process Spend</td><td>/shared/Procurement/Procurement Scorecard/Reduce Out-of-process Spend</td><td>Scorecard Objective</td></tr>
<tr><td>Customer</td><td>/shared/Procurement/Procurement Scorecard/Customer</td><td>Scorecard Perspective</td></tr>
<tr><td>Financial</td><td>/shared/Procurement/Procurement Scorecard/Financial</td><td>Scorecard Perspective</td></tr>
<tr><td>Operations</td><td>/shared/Procurement/Procurement Scorecard/Operations</td><td>Scorecard Perspective</td></tr>
<tr><td>Supplier</td><td>/shared/Procurement/Procurement Scorecard/Supplier</td><td>Scorecard Perspective</td></tr>
<tr><td>_Perspectives</td><td>/shared/Procurement/Procurement Scorecard/_Perspectives</td><td>Scorecard Perspective List</td></tr>
<tr><td>_scorecardSettings</td><td>/shared/Procurement/Procurement Scorecard/_scorecardSettings</td><td>Scorecard Settings</td></tr>
<tr><td>Strategy Map</td><td>/shared/Procurement/Procurement Scorecard/Strategy Map</td><td>Scorecard Strategy Map View</td></tr>
<tr><td>_strategyTree</td><td>/shared/Procurement/Procurement Scorecard/_strategyTree</td><td>Scorecard Strategy Tree</td></tr>
<tr><td>Strategy Tree</td><td>/shared/Procurement/Procurement Scorecard/Strategy Tree</td><td>Scorecard Strategy Tree View</td></tr>
<tr><td>Vision</td><td>/shared/Procurement/Procurement Scorecard/Vision</td><td>Scorecard Vision</td></tr>
<tr><td>Suspect Sync Segment</td><td>/shared/Marketing/Segmentation/Segments/Suspect Sync Segment</td><td>Segment</td></tr>
<tr><td>Logistics KPI Watchlist</td><td>/shared/Supply Chain Management/Analytic Library/Embedded Content/Logistics/KPIs/Logistics KPI Watchlist</td><td>Watchlist</td></tr>
<tr><td>PIM KPI Watchlist</td><td>/shared/Supply Chain Management/Analytic Library/Embedded Content/PIM/KPIs/PIM KPI Watchlist</td><td>Watchlist</td></tr>
</tbody>
</table>
Anand Sadaiyan https://blogs.oracle.com/biapps/biapps_lift_shift_to_oac Tue Jun 20 2017 02:54:00 GMT-0400 (EDT) All You Need to Know About ODTUG Kscope17 Beacon Technology http://www.odtug.com/p/bl/et/blogaid=728&source=1 At ODTUG Kscope17, we are using wearable beacon technology to make the event better and understand what works and what does not. ODTUG http://www.odtug.com/p/bl/et/blogaid=728&source=1 Mon Jun 19 2017 14:18:22 GMT-0400 (EDT) Unify: Could it be any easier? http://www.rittmanmead.com/blog/2017/06/unify-could-it-be-easier/ <p>Rittman Mead’s Unify is the easiest and most efficient way to pull your OBIEE reporting data directly into your local Tableau environment. No longer will you have to worry about database connection credentials, Excel exports, or any other roundabout way to get your data where you need it to be.</p> <p>Unify leverages OBIEE’s existing metadata layer to provide quick access to your curated data through a standard Tableau Web Data Connector. After a short installation and configuration process, you can be building Tableau workbooks from your OBIEE data in minutes.</p> <p>This blog post will demonstrate how intuitive and easy it is to use the Unify application. We will only cover using Unify and its features, since once the data gets into Tableau it can be used the same as any other Tableau data source. The environment shown already has Unify <a href="https://www.youtube.com/watch?v=nc-Ro258W88">installed and configured</a>, so we can jump right in and start using the tool immediately.</p> <p><img src="http://www.rittmanmead.com/blog/content/images/2017/06/ss1.png" alt=""></p> <p>To start pulling data from OBIEE using Unify, we need to create a new Web Data Connector data source in Tableau. This data source will prompt us for a URL to access Unify. In this instance, Unify is installed as a desktop application, so the URL is <a href="http://localhost:8080/unify">http://localhost:8080/unify</a>.</p> <p><img src="http://www.rittmanmead.com/blog/content/images/2017/06/Screen-Shot-2017-06-16-at-9.33.57-AM.png" alt=""></p> <p>Once we put in the URL, we’re shown an authentication screen, which allows us to authenticate against OBIEE using our OBIEE credentials. In this case, I will authenticate as the weblogic user.</p> <p><img src="http://www.rittmanmead.com/blog/content/images/2017/06/Screen-Shot-2017-06-16-at-9.37.19-AM.png" alt=""></p> <p>Once authenticated, we are welcomed by a window where we can construct an OBIEE query visually. On the left-hand side of the application, I can select the Subject Area I wish to query and see a list of tables and columns in the selected Subject Area. There are additional options along the top of the window, and I can see all saved queries on the right-hand side of the window.</p> <p>The center of the window is where we can see the current query, as well as a preview of the query results.
Since I have not started building a query yet, this area is blank.</p> <p><img src="http://www.rittmanmead.com/blog/content/images/2017/06/Screen-Shot-2017-06-16-at-9.44.32-AM.png" alt=""></p> <p>Unify allows us to either build a new query from scratch or select an existing OBIEE report. First, let’s build our own query. The left-hand side of the screen displays the Subject Areas and Columns which I have access to in OBIEE. With a Subject Area selected, I can drag columns, or double-click them, to add them to the current query. In the screenshot above, I have added three columns to my current query, “P1 Product”, “P2 Product Type”, and “1 - Revenue”. <br> <img src="http://www.rittmanmead.com/blog/content/images/2017/06/Screen-Shot-2017-06-16-at-9.47.52-AM.png" alt=""></p> <p>If we wanted to, we could also create new columns by defining a Column Name and Column Formula. We even have the ability to modify existing column formulas for our query. We can do this by clicking the gear icon for a specific column, or by double-clicking the grey bar at the top of the query window.</p> <p><img src="http://www.rittmanmead.com/blog/content/images/2017/06/Screen-Shot-2017-06-16-at-9.51.29-AM.png" alt=""></p> <p>It’s also possible to add filters to our data set. By clicking the Filter icon at the top of the window, we can view the current filters for the query. We can then add filters the same way we would add columns, by double-clicking or dragging the specific column. In the example shown, I have a filter on the column “D2 Department” where the column value equals “Local Plants Dept.”.</p> <p><img src="http://www.rittmanmead.com/blog/content/images/2017/06/Screen-Shot-2017-06-16-at-10.11.09-AM.png" alt=""></p> <p>Filters can be configured using any of the familiar methods, such as checking if a value exists in a list of values, numerical comparisons, or even using repository or session variables.</p> <p><img src="http://www.rittmanmead.com/blog/content/images/2017/06/Screen-Shot-2017-06-16-at-10.10.24-AM.png" alt=""></p> <p>Now that we have our columns selected and our filters defined, we can execute this query and see a preview of the result set by clicking the “Table” icon in the top header of the window.</p> <p>Once we are comfortable with the results of the query, we can export the results to Tableau. It is important to understand that the preview data is trimmed down to 500 rows by default, so don’t worry if you think something is missing! This value, and the export row limit, can be configured, but for now we can export the results using the green “Unify” button at the top right-hand corner of the window.</p> <p><img src="http://www.rittmanmead.com/blog/content/images/2017/06/Screen-Shot-2017-06-16-at-10.19.27-AM.png" alt=""></p> <p>When this button is clicked, the Unify window will close and the query will execute. You will then be taken to a new Tableau workbook with the results of the query as a data source. We can now use this query as a data source in Tableau, just as we would with any other data source.</p> <p><img src="http://www.rittmanmead.com/blog/content/images/2017/06/Screen-Shot-2017-06-16-at-10.26.41-AM.png" alt=""></p> <p>But what if we have existing reports we want to use? Do we have to rebuild the report from scratch in the web data connector? Of course not!
With Unify, you can select existing reports and pull them directly into Tableau.</p> <p>Instead of adding columns from the left-hand pane, we can select the “Open” icon, which will let us select an existing report. We can then export this report to Tableau, just as before.</p> <p>Now let’s try to do something a little more complicated. OBIEE doesn’t have the capability to execute queries across Subject Areas without common tables in the business model; however, Tableau can perform joins between two data sources (so long as we select the correct join conditions). We can use Unify to pull two queries from OBIEE from different Subject Areas, and perform a data mashup with the two Subject Areas in Tableau.</p> <p><img src="http://www.rittmanmead.com/blog/content/images/2017/06/Screen-Shot-2017-06-16-at-10.45.17-AM.png" alt=""></p> <p>Here I’ve created a query with “Product Number” and “Revenue”, both from the Subject Area “A - Sample Sales”. I’ve saved this query as “Sales”. I can then click the “New” icon in the header to create a new query.</p> <p><img src="http://www.rittmanmead.com/blog/content/images/2017/06/Screen-Shot-2017-06-16-at-10.43.30-AM.png" alt=""></p> <p>This second query uses the “C - Sample Costs” Subject Area, and is saved as “Costs”. This query contains the columns “Product Number”, “Variable Costs”, and “Fixed Costs”.</p> <p>When I click the Unify button, both of these queries will be pulled into Tableau as two separate data sources. Since both of the queries contain the “Product Number” column, I can join these data sources on the “Product Number” column. In fact, Tableau is smart enough to do this for us:</p> <p><img src="http://www.rittmanmead.com/blog/content/images/2017/06/Screen-Shot-2017-06-16-at-10.47.24-AM.png" alt=""></p> <p>We now have two data sets, each from a different OBIEE subject area, joined and available for visualization in Tableau. Wow, that was easy!</p> <p>What about refreshing the data? Good question! The exported data sources are published as data extracts, so all you need to do to refresh the data is select the data source and hit the refresh button. If you are not authenticated with OBIEE, or your session has expired, you will simply be prompted to re-authenticate.</p> <p>Using Tableau to consume OBIEE data has never been easier. Rittman Mead’s Unify allows users to connect to OBIEE as a data source within a Tableau environment in an intuitive and efficient way. If only everything was this easy!</p> <p>Interested in getting OBIEE data into Tableau? <a href="mailto:info+unifynp@rittmanmead.com" target="_blank">Contact us</a> to see how we can help, or head over to <a href="https://unify.ritt.md">https://unify.ritt.md</a> to get a free Unify trial version.</p> Nick Padgett 3dbe3dfe-aea3-4302-8cc2-2f95c1e57805 Mon Jun 19 2017 10:00:00 GMT-0400 (EDT) Unify - An Insight Into the Product http://www.rittmanmead.com/blog/2017/06/unify-an-insight-into-the-product/ <img src="http://www.rittmanmead.com/blog/content/images/2017/06/endtoend.jpg" alt="Unify - An Insight Into the Product"><p><a href="https://www.rittmanmead.com/blog/2017/06/unify-see-your-data-from-every-perspective/">Monday, 12 Jun</a> saw the official release of <a href="https://unify.ritt.md">Unify</a>, Rittman Mead's very own connector between Tableau and OBIEE.
It provides a simple but powerful integration between the two applications that allows you to execute queries through OBIEE and manipulate and render the datasets using Tableau.</p> <p><img src="http://www.rittmanmead.com/blog/content/images/2017/06/unify.png" alt="Unify - An Insight Into the Product"></p> <h1 id="whywemadeit">Why We Made It</h1> <p>One of the first questions of course would be <em>why</em> we would want to do this in the first place. The excellent thing about OBI is that it acts as an abstraction layer on top of a database, allowing analysts to write efficient and secure reports without going into the detail of writing queries. As with any abstraction, it is a trade of simplicity for capability. Products like Tableau and Data Visualiser seek to reverse this trade, putting the power back in the hands of the report builder. However, without quoting Spiderman, care should be taken when doing this.</p> <p>The result can be that users write inefficient queries, or worse still, incorrect ones. We know there will be some out there that use self-service tools purely as a visualisation engine, simply dropping pre-made datasets into them. If you are looking to produce sustainable, scalable and accessible reporting systems, you need to tackle the problem at the data acquisition stage as well as the communication stage at the end.</p> <p>If you are already meeting both requirements, perhaps by using OBI with Data Visualiser (formerly Visual Analyser) or by other means, then that's perfectly good. However, we know from experience that there are many of you out there that have already invested heavily in both OBI and Tableau as separate solutions. Rather than have them linger in a state of conflict, we'd rather nurse them into a state of symbiosis.</p> <p>The idea behind Unify is that it bridges this gap, allowing you to use your OBIEE system as an efficient data acquisition platform and Tableau as an intuitive playground for users who want to do a bit more with their data. Unify works by using the Tableau Web Data Connector as a data source and then our customised software to act as an interface for creating OBIEE queries and then exporting them into Tableau.</p> <h1 id="howitworks">How It Works</h1> <p>Unify uses Tableau's latest <a href="https://www.tableau.com/about/blog/2015/8/connect-just-about-any-web-data-new-web-data-connector-42246">Web Data Connector</a> data source to allow us to dynamically query OBIEE and extract data into Tableau. Once a dataset is extracted into Tableau, it can be used with Tableau as normal, taking advantage of all of Tableau's powerful features. This native integration means you can add in OBIEE data sources just as you would add in any others - Excel files, SQL results etc. Then you can join the data sources using Tableau itself, even if the data sources don't join up together in the background.</p> <p>First you open up Tableau and add a Web Data Connector source:</p> <p><img src="http://www.rittmanmead.com/blog/content/images/2017/06/1-tableau.png" alt="Unify - An Insight Into the Product"></p> <p>Then give the link to the Unify application, e.g. <code>http://localhost:8080/unify</code>. This will open up Unify and prompt you to log in with your OBIEE credentials. This is important, as Unify operates through the OBIEE server layer in order to maintain all of the security permissions that you've already defined.</p> <p><img src="http://www.rittmanmead.com/blog/content/images/2017/06/2-login.png" alt="Unify - An Insight Into the Product"></p>
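<p>For the curious, a Tableau Web Data Connector is essentially just a hosted web page that registers a connector object with Tableau's WDC JavaScript API. The sketch below shows the general shape of what any such connector has to implement. To be clear, this is an illustrative TypeScript sketch against the public WDC 2.x API, not Unify's actual source code, and the <code>/unify/results</code> endpoint and the column names are invented for the example:</p>
<pre><code>// Minimal Web Data Connector sketch (public Tableau WDC 2.x API).
// The "tableau" global is provided by the WDC library loaded on the page.
declare const tableau: any;

const connector = tableau.makeConnector();

// Tell Tableau what the extracted table looks like.
connector.getSchema = (schemaCallback: (schemas: object[]) => void) => {
  const columns = [
    { id: "product", alias: "P1 Product", dataType: tableau.dataTypeEnum.string },
    { id: "revenue", alias: "1 - Revenue", dataType: tableau.dataTypeEnum.float },
  ];
  schemaCallback([{ id: "obieeQuery", alias: "OBIEE query results", columns }]);
};

// Fetch the rows and hand them to Tableau.
connector.getData = (table: any, doneCallback: () => void) => {
  // Hypothetical endpoint standing in for wherever the connector gets its rows.
  fetch("/unify/results")
    .then((response) => response.json())
    .then((rows: { product: string; revenue: number }[]) => {
      table.appendRows(rows);
      doneCallback();
    });
};

tableau.registerConnector(connector);</code></pre>
<p>When the user submits the page, the connector calls <code>tableau.submit()</code>, and Tableau then drives the <code>getSchema</code> and <code>getData</code> callbacks to build the extract. Everything after that, including joins, refreshes and visualisation, is standard Tableau.</p>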
<p>Now that the application is open, you can make OBIEE queries using the interface provided. This is a bit like Answers, and allows you to query from any of your available subject areas and presentation columns. The interface also allows you to use filtering, column formulae and OBIEE variables in much the same way as Answers does.</p> <p>Alternatively, you can open up an existing report that you've made in OBIEE and then edit it at your leisure. Unify will display a preview of the dataset so you can tweak it until you are happy that it is what you want to bring into Tableau.</p> <p><img src="http://www.rittmanmead.com/blog/content/images/2017/06/3-query.png" alt="Unify - An Insight Into the Product"></p> <p>Once you're happy with your dataset, click the <strong>Unify</strong> button in the top right and it will export the data into Tableau. From this point, it behaves exactly as Tableau does with any other data set. This means you can join your OBIEE dataset to external sources, or bring in queries from multiple subject areas from OBIEE and join them in Tableau. Then, of course, you can take advantage of Tableau's powerful and interactive visualisation engine.</p> <p><img src="http://www.rittmanmead.com/blog/content/images/2017/06/4-visual.png" alt="Unify - An Insight Into the Product"></p> <h1 id="unifyserver">Unify Server</h1> <p>Unify comes in <a href="https://unify.ritt.md/unify/desktop">desktop</a> and <a href="https://unify.ritt.md/unify/server">server</a> flavours. The main difference between the two is that the server version allows you to upload Tableau workbooks with OBIEE data to Tableau Server <em>and</em> refresh them. With the desktop version, you will only be able to upload static workbooks that you've created; however, with the server version of Unify, you can tell Tableau Server to refresh data from OBIEE in accordance with a schedule. This lets you produce production-quality dashboards for your users, sourcing data from OBIEE as well as any other source you choose.</p> <h1 id="unifyyourdata">Unify Your Data</h1> <p>In a nutshell, Unify allows you to combine the best aspects of two very powerful BI tools and will prevent the need for building all of your reporting artefacts from scratch if you already have a good, working system.</p> <p>I hope you've found this brief introduction to Unify informative, and if you have OBIEE and would like to try it with Tableau, I encourage you to register for a <a href="https://unify.ritt.md/register">free desktop trial</a>.
If you have any questions, please don't hesitate to <a href="mailto:unify@rittmanmead.com">get in touch</a>.</p> Minesh Patel 65721995-1ce4-4dfb-ace8-fb1fa6a78851 Thu Jun 15 2017 07:00:00 GMT-0400 (EDT) Giddy Up — Red Pill is Headed to Texas https://medium.com/red-pill-analytics/giddy-up-red-pill-is-headed-to-texas-b28c28198c59?source=rss----abcc62a8d63e---4 <figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*zxwx1rrhRnezrMnRrsukJQ.jpeg" /></figure><h4>Kscope17 — San Antonio, TX</h4><p>The countdown is on for <a href="http://kscope17.com/">ODTUG’s Kscope17 in San Antonio, Texas</a>, and there is a packed lineup of impressive content, fun events, and a focus on emerging technologies.</p><p>We know it will be tough to decide which sessions to attend, but make sure you save some time to check out Red Pill Analytics’ sessions, listed below.</p><h3><strong>Analyze This</strong></h3><p>Red Pill Analytics’ reach will also extend outside of the classroom this year. As Kscope17’s Analytics Sponsor, we will be using live polling, IoT technologies, and beacon data to paint a picture of the conference in real time. We will answer questions like: Which sessions are best attended? Which location is the busiest? How many sessions are people attending? Analytics will be on display throughout the conference venue. Visit us near registration and at displays near session rooms in the Grand Oaks Foyer and Wildflower Hallway as we dial you into the fun with live polling. All we’re offering is the truth. Nothing more.</p><p>We will also host a special session about the data gathering process on Tuesday: <a href="http://kscope17.com/component/seminar/seminarslist#A%20Lambda%20Architecture%20in%20the%20Cloud:%20A%20Kscope17%20Case%20Study">A Lambda Architecture in the Cloud: A Kscope17 Case Study</a>. We look forward to sharing more with you about the analytics we gather throughout the week and the unique and innovative ways we are using that data to tell a story.</p><h3>Don’t be bound by conference tracks</h3><p>In addition to our sponsorship, Red Pill Analytics has three speakers delivering sessions at Kscope17. Why should you attend a Red Pill Analytics Business Intelligence/Big Data session at Kscope17, especially if these sessions do not fall in the track you are planning on attending? The ability to communicate with data in a visual way is a skill that is critical in any professional’s toolbelt. Are you interested in learning more about Oracle Data Visualization? Or are you in a pattern of connecting to an Essbase cube, pulling down information in Excel, and mashing different spreadsheets together? Then it is imperative that you attend one of our Data Visualization sessions at <a href="http://kscope17.com/">Kscope17</a> to learn to combine these processes in one place using Oracle Analytics Cloud (OAC).</p><h3>Where can you find us?</h3><p>Check out these Red Pill Analytics sessions at Kscope and swing by our Analytics Stations. <em>(Please note: as with any conference schedule, times may change. Make sure to check out the Kscope17 app for the most up-to-date information.)</em></p><p>Will you be at Kscope17 and want to meet up? <a href="http://redpillanalytics.com/contact/">Contact us</a> and let’s talk analytics.</p><p>We are looking forward to this event and all of the other great opportunities to speak.
Make sure to keep an eye on our <a href="http://events.redpillanalytics.com">events</a> page for what Red Pill Analytics is up to this year.</p><p><strong>Sunday<br>8:30 AM — 4:30 PM</strong> <a href="http://kscope17.com/content/sunday-symposiums#BI">Sunday Symposium</a> <br><strong>8:30–9:00 PM</strong> <a href="http://kscope17.com/events/geek-game-night">Geek Game Night</a></p><p><strong>Monday<br>8:00–10:00 PM: </strong><a href="http://kscope17.com/events/daily-events"><strong>Community Night Event: BI Texas-Style Trivia</strong></a></p><p><strong>Tuesday<br>12:45–1:45 PM: </strong><a href="http://kscope17.com/events/lunch-learn"><strong>Lunch and Learn Panels</strong></a><br><strong>Topics: </strong><br>DATA WAREHOUSING &amp; BIG DATA, Stewart Bryson<br>BI &amp; REPORTING, Michelle Kolbe<br>DATA VISUALIZATION &amp; ADVANCED ANALYTICS, Kevin McGinley</p><h3>Must See Sessions</h3><blockquote><strong>MONDAY</strong></blockquote><p><a href="http://kscope17.com/component/seminar/seminarslist#Architecture%20Live:%20Designing%20an%20Analytics%20Platform%20for%20the%20Big%20Data%20Era"><strong>Architecture Live: Designing an Analytics Platform for the Big Data Era</strong></a><strong><br></strong><a href="http://kscope17.com/component/seminar/presenterlist?last_id=75"><strong>Jean-Pierre Dijcks</strong></a><strong><em>, </em></strong>Oracle Corporation<strong><em><br></em>Co-presenter(s):</strong> <a href="http://kscope17.com/component/seminar/presenterlist?last_id=72"><strong>Stewart Bryson</strong></a><strong>,</strong> Red Pill Analytics<br><strong>When:</strong> June 26 — Monday: Session 1, 10:30-11:30 am<br><strong>Topic:</strong> Data Warehousing &amp; Big Data — <strong>Subtopic:</strong> Data Architecture</p><p>Don’t miss the Architecture Live experience! In this interactive session, you’ll witness two industry experts digitally illustrating data-driven architectures live, with input and feedback from the audience.</p><p>Kafka, Lambda, and Streaming Analytics will all be covered. We’ll tell you what these words mean and, more importantly, how they affect the choices we make building an enterprise architecture. With the Oracle Information Management Reference Architecture as the backdrop, we’ll clarify and delineate the different components involved in delivering big data, fast data, and all the gray area in between. The Architecture Live experience will be fun and different, and we’ll all learn something along the way.</p><p><a href="http://kscope17.com/component/seminar/seminarslist#Kafka,%20Data%20Streaming,%20and%20Analytic%20Microservices"><strong>Kafka, Data Streaming, and Analytic Microservices</strong></a><strong><br></strong><a href="http://kscope17.com/component/seminar/presenterlist?last_id=76"><strong>Stewart Bryson</strong></a><strong><em>, Red Pill Analytics<br></em>When:</strong> June 26 — Monday: Session 2, 11:45 am — 12:45 pm<br><strong>Topic:</strong> Data Warehousing &amp; Big Data — <strong>Subtopic:</strong> Data Architecture</p><p>While traditional data warehouses excel at sourcing data from enterprise applications, they usually fail at handling the volume, velocity, and variety of data for modern analytics applications relying on big and fast data. Instead of modeling these data sources into a system that doesn’t fit, let’s apply a new software design pattern to analytics: microservices.
Microservices are small, independent applications — building blocks that provide only a distinct subset of functionality — that can be stacked together to build an end-to-end platform.</p><p>In this presentation, we’ll explore using Apache Kafka and the Confluent Platform 3.0 as the data streaming hub for ingesting data bound for downstream analytic applications: an enterprise data warehouse, a Hadoop cluster for batch processing, and lightweight, purpose-built microservices in the cloud or on-premises. Experience the next generation of analytic platforms.</p><p><a href="http://kscope17.com/component/seminar/seminarslist#Oracle%20Data%20Visualization%20for%20the%20Finance%20Analyst"><strong>Oracle Data Visualization for the Finance Analyst</strong></a><strong><br></strong><a href="http://kscope17.com/component/seminar/presenterlist?last_id=66"><strong>Kevin McGinley</strong></a><strong><em>, </em></strong>Red Pill Analytics<strong><em><br></em>Co-presenter(s):</strong> <a href="http://kscope17.com/component/seminar/presenterlist?last_id=170"><strong>Tim German</strong></a>, Qubix<br><strong>When:</strong> June 26 — Monday: Session 3 , 2:00–3:00 pm<br><strong>Topic:</strong> Data Visualization &amp; Advanced Analytics — <strong>Subtopic:</strong> Oracle Data Visualization</p><p>Many analysts within Finance are used to manipulating spreadsheets and waiting for enhancements to Essbase cubes to produce reports that need to be shared with their management or peers. This session will demonstrate how all analysts within Finance can get immediate value from Oracle Data Visualization (DV) and decrease their reliance on overly complex spreadsheets. From its ability to connect to many different kinds of data sources, wrangle multiple data sources into a usable format, and visualize insights that would be otherwise hard to see in a table, Oracle DV provides analysts an extra layer of functionality they can easily learn and use without IT intervention.</p><p><a href="http://kscope17.com/component/seminar/seminarslist#Using%20R%20for%20Data%20Profiling"><strong>Using R for Data Profiling</strong></a><strong><br></strong><a href="http://kscope17.com/component/seminar/presenterlist?last_id=43"><strong>Michelle Kolbe</strong></a><strong><em>, Red Pill Analytics<br></em>When:</strong> June 26 — Monday: Session 3 , 2:00-3:00 pm<br><strong>Topic:</strong> BI &amp; Reporting — <strong>Subtopic:</strong> Other BI and Reporting</p><p>The benefits of knowing your data before embarking on a BI project are endless. Sure, you can buy a tool to help with this, or you could use R, an open-source tool. This session will dig into methods for using R to connect to your data source to see visual and tabular analyses of your data set. You’ll learn how to find missing data, outliers, and unexpected values. 
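To give a quick taste of the kind of checks the session walks through, here is a minimal base-R sketch; the data frame and its column names are hypothetical, invented purely for illustration:</p><pre>
# Hypothetical sample data: one missing value, one outlier, one stray code
df &lt;- data.frame(
  amount = c(12.5, 14.1, NA, 13.8, 13.2, 14.6, 250.0),
  region = c("EAST", "WEST", "east", "WEST", "NORTH", "EAST", "WEST")
)

colSums(is.na(df))            # missing data: NA counts per column
boxplot.stats(df$amount)$out  # outliers: values beyond the 1.5 * IQR fences
table(df$region)              # unexpected values: the stray "east" stands out
</pre><p>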
If you don’t know R or you are wanting to learn more functions within R, you’ll benefit from this session.</p><blockquote><strong>TUESDAY</strong></blockquote><p><a href="http://kscope17.com/component/seminar/seminarslist#A%20Lambda%20Architecture%20in%20the%20Cloud:%20A%20Kscope17%20Case%20Study"><strong>A Lambda Architecture in the Cloud: A Kscope17 Case Study</strong></a><strong><br></strong><a href="http://kscope17.com/component/seminar/presenterlist?last_id=72"><strong>Stewart Bryson</strong></a><strong>,</strong> Red Pill Analytics and <a href="http://kscope17.com/component/seminar/presenterlist?last_id=69"><strong>Kevin McGinley</strong></a><strong>, </strong>Red Pill Analytics<br><strong>When: </strong>Jun 27, 2017, Tuesday: Session 8, 2:00–3:00 pm<strong><br>Topic: </strong>Data Visualization &amp; Advanced Analytics <strong>Subtopic: </strong>Other</p><p>A Lambda Architecture enables data-driven organizations by simultaneously providing batch and speed processing layers to satisfy the overall appetite for analytics and reporting. But building a Lambda architecture is not easy, usually requiring all of the following: a universal ingestion layer, an immutable data store as a system of record, one or more data processing layers that can satisfy batch and speed requirements, and a serving layer that enables data-driven decision making.</p><p>In this session, we’ll demonstrate how Cloud platforms can supercharge the delivery of a capable Lambda architecture. Our case study will be the IoT data generated by Kscope17 attendees including the beacon from their badges, as well as other devices capturing the results of live polling.</p><p><a href="http://kscope17.com/component/seminar/seminarslist#Expanding%20Your%20Data-Driven%20Story:%20The%20Next%20Chapter"><strong>Expanding Your Data-Driven Story: The Next Chapter</strong></a><strong><br></strong><a href="http://kscope17.com/component/seminar/presenterlist?last_id=70"><strong>Mike Durran</strong></a><strong><em>, Oracle Corporation<br></em>Co-presenter(s):</strong> <a href="http://kscope17.com/component/seminar/presenterlist?last_id=75"><strong>Stewart Bryson</strong></a>, Red Pill Analytics<br><strong>When:</strong> June 27 — Tuesday: Session 9, 3:30-4:30 pm<br><strong>Topic:</strong> Data Visualization &amp; Advanced Analytics — <strong>Subtopic:</strong> Oracle Data Visualization</p><p>Oracle Data Visualization (DV) makes it easy to get insight from your data. This stunningly visual and intuitive product enables you to access, blend, and wrangle a variety of sources — including spreadsheets, databases, and applications — and tell the story of your data. In this session, learn about the power of data storytelling and the latest capabilities of Oracle DV (including details of product roadmap) to create compelling analytic narratives, including how you can rapidly apply advanced analytic techniques to gain insights previously only accessible to advanced users. 
Learn about how Oracle DV has been used in real-life scenarios to gain insight and improve business performance.</p><blockquote><strong>WEDNESDAY</strong></blockquote><p><a href="http://kscope17.com/component/seminar/seminarslist#Oracle%20DV%20for%20the%20Finance%20Analyst%20Hands%20on%20Lab"><strong>Hands-on Training: Oracle DV for the Finance Analyst</strong></a><strong><br></strong><a href="http://kscope17.com/component/seminar/presenterlist?last_id=69"><strong>Kevin McGinley</strong></a><strong>, </strong>Red Pill Analytics and<strong> </strong><a href="http://kscope17.com/component/seminar/presenterlist?last_id=174"><strong>Tim German</strong></a><strong>, </strong>Qubix<br><strong>When: <br></strong>Wednesday, June 28, 2017, 9:45-11:15 AM <br>Wednesday, June 28, 2017, 1:45-3:15 PM</p><p>This hands-on-lab will build upon the session given by Kevin McGinley and Tim German by allowing attendees to perform some of the demonstrations shown in the session given earlier in the week. Attendees will get to use Oracle Data Visualization against Essbase cubes, Excel spreadsheets, and even learn how to create their own mashups of data to be used for their own analytical purposes. They’ll also learn how building certain types of visualizations and using features like narrative mode can help deepen their analysis and make the communication of their findings easier. Prior attendance of the session is not required to attend the hands-on-lab.</p><p><strong>Trends in the World of Analytics, Business Intelligence, and Performance Management Panel Session Moderated by </strong><a href="http://kscope17.com/component/seminar/presenterlist?last_id=66"><strong>Edward Roske</strong></a><strong> <em>, interRel Consulting<br></em></strong><a href="http://kscope17.com/component/seminar/presenterlist?last_id=72"><strong>Stewart Bryson</strong></a><strong>,</strong> Red Pill Analytics<br><strong>When:</strong> Jun 28, 2017, Wednesday Session 14 , 1:45 pm — 2:45 pm<br><strong>Topic:</strong> BI &amp; Reporting — <strong>Subtopic:</strong> Other BI and Reporting</p><p>There has never been a panel assembled with as many luminaries in the world of BI, EPM, and business analytics as you’ll see on this stage. Each one of these people has over 20 years of experience and collectively, they’ve been involved in more than 1,000 implementations. But they won’t be talking technical tips: with their wealth of experience, they’ll be discussing trends in the bigger world of analytics. Which products are rising up, where are companies investing their money, what new areas are emerging, and much, much more will be discussed as these gurus descend from their metaphorical mountains to discuss and debate for your amusement and education. 
If you want to know what the reporting, analysis, planning, and consolidation fields are up to, come with plenty of questions and an open mind.</p><blockquote><strong>THURSDAY</strong></blockquote><p><a href="http://kscope17.com/content/thursday-deep-dive-sessions"><strong>Deep Dive Session: Navigating the Oracle Business Analytics Frontier</strong></a><strong><br></strong><a href="http://kscope17.com/component/seminar/presenterlist?last_id=66"><strong>Kevin McGinley</strong></a><strong>, </strong>Red Pill Analytics<strong><em><br></em>Co-presenter(s):</strong> <a href="http://kscope17.com/component/seminar/presenterlist?last_id=221"><strong>Tracy McMullen</strong></a><strong>,</strong> interRel Consulting<br><strong>When:</strong> June 29 — Deep-Dive Session, 9:00-11:00 am<br><strong>Topic:</strong> BI &amp; Reporting — <strong>Subtopic:</strong> Other BI and Reporting</p><p>Saddle up and dig in your spurs as we trailblaze through Oracle’s Reporting, Business Intelligence, and Data Visualization solutions. Through a rotating panel of experts from Oracle, partners, and customers and interactive discussions with attendees, we’ll navigate Reporting and BI challenges and how Oracle Business Analytics addresses those requirements. Led by moderators Kevin McGinley and Tracy McMullen, the panel will discuss questions such as, “Should I use Smart View or Data Visualization or Oracle Analytics Cloud?”, “How do these solutions work together, and when should I use what?” and, “What are the considerations for moving to the Cloud?” Our panel will share thoughts and perspectives on today’s reporting, BI, and DV questions, climate, and trends. We reckon you won’t want to miss this EPM and BI reporting rodeo in Thursday’s Deep Dive Session.</p><p>Will you be at Kscope17 and want to meet up? <a href="mailto:lauren@redpillanalytics.com">Contact us</a> and let’s get a drink!</p><p>We are looking forward to this event and all of the other great opportunities to speak.
Make sure to keep an eye on our <a href="http://events.redpillanalytics.com">events</a> page for what Red Pill Analytics is up to this year.</p><hr><p><a href="https://medium.com/red-pill-analytics/giddy-up-red-pill-is-headed-to-texas-b28c28198c59">Giddy Up — Red Pill is Headed to Texas</a> was originally published in <a href="https://medium.com/red-pill-analytics">Red Pill Analytics</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p> Lauren Prezby https://medium.com/p/b28c28198c59 Wed Jun 14 2017 08:56:18 GMT-0400 (EDT) Giddy Up — Red Pill is Headed to Texas http://redpillanalytics.com/kscope17/ <p><img width="300" height="201" src="https://i2.wp.com/redpillanalytics.com/wp-content/uploads/2017/06/melissa-newkirk-194315.jpg?fit=300%2C201" class="attachment-medium size-medium wp-post-image" alt="Kscope17 Event Details" srcset="https://i2.wp.com/redpillanalytics.com/wp-content/uploads/2017/06/melissa-newkirk-194315.jpg?w=1920 1920w, https://i2.wp.com/redpillanalytics.com/wp-content/uploads/2017/06/melissa-newkirk-194315.jpg?resize=300%2C201 300w, https://i2.wp.com/redpillanalytics.com/wp-content/uploads/2017/06/melissa-newkirk-194315.jpg?resize=768%2C514 768w, https://i2.wp.com/redpillanalytics.com/wp-content/uploads/2017/06/melissa-newkirk-194315.jpg?resize=1024%2C685 1024w" sizes="(max-width: 300px) 100vw, 300px" data-attachment-id="4975" data-permalink="http://redpillanalytics.com/kscope17/melissa-newkirk-194315/" data-orig-file="https://i2.wp.com/redpillanalytics.com/wp-content/uploads/2017/06/melissa-newkirk-194315.jpg?fit=1920%2C1285" data-orig-size="1920,1285" data-comments-opened="1" data-image-meta="{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}" data-image-title="Kscope17 Event Details" data-image-description="&lt;p&gt;Kscope17 Event Details&lt;/p&gt; " data-medium-file="https://i2.wp.com/redpillanalytics.com/wp-content/uploads/2017/06/melissa-newkirk-194315.jpg?fit=300%2C201" data-large-file="https://i2.wp.com/redpillanalytics.com/wp-content/uploads/2017/06/melissa-newkirk-194315.jpg?fit=1024%2C685" /></p><p class="graf graf--h3">The countdown is on for <a class="markup--anchor markup--p-anchor" href="http://kscope17.com/" target="_blank" rel="noopener noreferrer" data-href="http://kscope17.com/">ODTUG’s Kscope17 in San Antonio, Texas</a> and there is a packed lineup of impressive content, fun events, and a focus on emerging technologies.</p> <p class="graf graf--p">We know it will be tough to decide which sessions to attend but make sure you save some time to check out Red Pill Analytics’ sessions, listed below.</p> <h2 class="graf graf--h3">Analyze This</h2> <p class="graf graf--p">Red Pill Analytics’ reach will also extend outside of the classroom this year. As Kscope17’s Analytics Sponsor, we will be using live polling, IoT technologies, and beacon data to paint a picture of the conference in real-time. We will answer questions like: Which sessions are best attended? Which location is the busiest? How many sessions are people attending?
Analytics will be on display throughout the conference venue. Visit us near registration and at displays near session rooms in the Grand Oaks Foyer and Wildflower Hallway as we dial you into the fun with live polling. All we’re offering is the truth. Nothing more.</p> <p class="graf graf--p">We will also host a special session about the data gathering process on Tuesday: <a class="markup--anchor markup--p-anchor" href="http://kscope17.com/component/seminar/seminarslist#A%20Lambda%20Architecture%20in%20the%20Cloud:%20A%20Kscope17%20Case%20Study" target="_blank" rel="noopener noreferrer" data-href="http://kscope17.com/component/seminar/seminarslist#A%20Lambda%20Architecture%20in%20the%20Cloud:%20A%20Kscope17%20Case%20Study">A Lambda Architecture in the Cloud: A Kscope17 Case Study</a>. We look forward to sharing more with you about the analytics we gather throughout the week and the unique and innovative ways we are using that data to tell a story.</p> <h2 class="graf graf--h3">Don’t be bound by conference tracks</h2> <p class="graf graf--p">In addition to our sponsorship, Red Pill Analytics has three speakers delivering sessions at Kscope17. Why should you attend a Red Pill Analytics Business Intelligence/Big Data session at Kscope17? Especially if these sessions do not fall in the track you are planning on attending? The ability to communicate with data in a visual way is a skill that is critical in any professional’s toolbelt. Are you interested in learning more about Oracle Data Visualization? Or are you in a pattern of connecting to an Essbase cube, pulling down information in Excel and mashing different spreadsheets together? Then it is imperative that you attend one of our Data Visualization sessions at <a class="markup--anchor markup--p-anchor" href="http://kscope17.com/" target="_blank" rel="noopener noreferrer" data-href="http://kscope17.com/">Kscope17</a> and learn to combine these processes in one place using Oracle Analytics Cloud (OAC).</p> <h2 class="graf graf--h3">Where can you find us?</h2> <p class="graf graf--p">Check out these Red Pill Analytics sessions at Kscope and swing by our Analytics Stations. <em class="markup--em markup--p-em">(Please note that, as with any conference schedule, times may change. Make sure to check out the Kscope17 app for the most up-to-date information.)</em></p> <p class="graf graf--p">Will you be at Kscope17 and want to meet up? <a class="markup--anchor markup--p-anchor" href="http://redpillanalytics.com/contact/" target="_blank" rel="noopener noreferrer" data-href="http://redpillanalytics.com/contact/">Contact us</a> and let’s talk analytics.</p> <p class="graf graf--p">We are looking forward to this event and all of the other great opportunities to speak.
Make sure to keep an eye on our <a class="markup--anchor markup--p-anchor" href="http://events.redpillanalytics.com" target="_blank" rel="noopener noreferrer" data-href="http://events.redpillanalytics.com">events</a> page for what Red Pill Analytics is up to this year.</p> <p class="graf graf--p"><strong class="markup--strong markup--p-strong">Sunday<br /> 8:30 AM — 4:30 PM</strong> <strong><a class="markup--anchor markup--p-anchor" href="http://kscope17.com/content/sunday-symposiums#BI" target="_blank" rel="noopener noreferrer" data-href="http://kscope17.com/content/sunday-symposiums#BI">Sunday Symposium</a></strong><br /> <strong class="markup--strong markup--p-strong">8:30–9:00 PM</strong> <strong><a class="markup--anchor markup--p-anchor" href="http://kscope17.com/events/geek-game-night" target="_blank" rel="noopener noreferrer" data-href="http://kscope17.com/events/geek-game-night">Geek Game Night</a></strong></p> <p class="graf graf--p"><strong class="markup--strong markup--p-strong">Monday<br /> 8:00–10:00 PM: </strong><a class="markup--anchor markup--p-anchor" href="http://kscope17.com/events/daily-events" target="_blank" rel="noopener noreferrer" data-href="http://kscope17.com/events/daily-events"><strong class="markup--strong markup--p-strong">Community Night Event: BI Texas-Style Trivia</strong></a></p> <p class="graf graf--p"><strong class="markup--strong markup--p-strong">Tuesday<br /> 12:45–1:45 PM: </strong><a class="markup--anchor markup--p-anchor" href="http://kscope17.com/events/lunch-learn" target="_blank" rel="noopener noreferrer" data-href="http://kscope17.com/events/lunch-learn"><strong class="markup--strong markup--p-strong">Lunch and Learn Panels</strong></a><br /> <strong class="markup--strong markup--p-strong">Topics: </strong></p> <ul> <li class="graf graf--p">DATA WAREHOUSING &amp; BIG DATA, Stewart Bryson</li> <li class="graf graf--p">BI &amp; REPORTING, Michelle Kolbe</li> <li class="graf graf--p">DATA VISUALIZATION &amp; ADVANCED ANALYTICS, Kevin McGinley</li> </ul> <h3 class="graf graf--h3">Must See Sessions</h3> <blockquote class="graf graf--blockquote"><p><strong class="markup--strong markup--blockquote-strong">MONDAY</strong></p></blockquote> <p class="graf graf--p"><a class="markup--anchor markup--p-anchor" href="http://kscope17.com/component/seminar/seminarslist#Architecture%20Live:%20Designing%20an%20Analytics%20Platform%20for%20the%20Big%20Data%20Era" target="_blank" rel="noopener noreferrer" data-href="http://kscope17.com/component/seminar/seminarslist#Architecture%20Live:%20Designing%20an%20Analytics%20Platform%20for%20the%20Big%20Data%20Era"><strong class="markup--strong markup--p-strong">Architecture Live: Designing an Analytics Platform for the Big Data Era</strong></a><strong class="markup--strong markup--p-strong"><br /> </strong><a class="markup--anchor markup--p-anchor" href="http://kscope17.com/component/seminar/presenterlist?last_id=75" target="_blank" rel="noopener noreferrer" data-href="http://kscope17.com/component/seminar/presenterlist?last_id=75"><strong class="markup--strong markup--p-strong">Jean-Pierre Dijcks</strong></a><strong class="markup--strong markup--p-strong"><em class="markup--em markup--p-em">, </em></strong>Oracle Corporation<strong class="markup--strong markup--p-strong"><em class="markup--em markup--p-em"><br /> </em>Co-presenter(s):</strong> <a class="markup--anchor markup--p-anchor" href="http://kscope17.com/component/seminar/presenterlist?last_id=72" target="_blank" rel="noopener noreferrer" 
data-href="http://kscope17.com/component/seminar/presenterlist?last_id=72"><strong class="markup--strong markup--p-strong">Stewart Bryson</strong></a><strong class="markup--strong markup--p-strong">,</strong> Red Pill Analytics<br /> <strong class="markup--strong markup--p-strong">When:</strong> June 26 — Monday: Session 1 , 10:30-11:30 am<br /> <strong class="markup--strong markup--p-strong">Topic:</strong> Data Warehousing &amp; Big Data — <strong class="markup--strong markup--p-strong">Subtopic:</strong> Data Architecture</p> <p class="graf graf--p">Don’t miss the Architecture Live experience! In this interactive session, you’ll witness two industry experts digitally illustrating data-driven architectures live, with input and feedback from the audience.</p> <p class="graf graf--p">Kafka, Lambda, and Streaming Analytics will be all covered. We’ll tell you what these words mean and more importantly how they affect the choices we make building an enterprise architecture. With the Oracle Information Management Reference Architecture as the backdrop, we’ll clarify and delineate the different components involved in delivering big data, fast data, and all the gray area in between. The Architecture Live experience will be fun and different, and we’ll all learn something along the way.</p> <hr /> <p class="graf graf--p"><a class="markup--anchor markup--p-anchor" href="http://kscope17.com/component/seminar/seminarslist#Kafka,%20Data%20Streaming,%20and%20Analytic%20Microservices" target="_blank" rel="noopener noreferrer" data-href="http://kscope17.com/component/seminar/seminarslist#Kafka,%20Data%20Streaming,%20and%20Analytic%20Microservices"><strong class="markup--strong markup--p-strong">Kafka, Data Streaming and Analytic Microservices</strong></a><strong class="markup--strong markup--p-strong"><br /> </strong><a class="markup--anchor markup--p-anchor" href="http://kscope17.com/component/seminar/presenterlist?last_id=76" target="_blank" rel="noopener noreferrer" data-href="http://kscope17.com/component/seminar/presenterlist?last_id=76"><strong class="markup--strong markup--p-strong">Stewart Bryson</strong></a><strong class="markup--strong markup--p-strong"><em class="markup--em markup--p-em">, Red Pill Analytics<br /> </em>When:</strong> June 26 — Monday: Session 2 , 11:45 am — 12:45 pm<br /> <strong class="markup--strong markup--p-strong">Topic:</strong> Data Warehousing &amp; Big Data — <strong class="markup--strong markup--p-strong">Subtopic:</strong> Data Architecture</p> <p class="graf graf--p">While traditional data warehouses excel at sourcing data from enterprise applications, they usually fail at handling the volume, velocity, and variety of data for modern analytics applications relying on big and fast data. Instead of modeling these data sources into a system that doesn’t fit, let’s apply a new software design pattern to analytics: microservices. Microservices are small, independent applications — building blocks that provide only a distinct subset of functionality — that can be stacked together to build an end-to-end platform.</p> <p class="graf graf--p">In this presentation, we’ll explore using Apache Kafka and the Confluent Platform 3.0 as the data streaming hub for ingesting data bound for downstream analytic applications: an enterprise data warehouse, a Hadoop cluster for batch processing, and lightweight, purpose-built microservices in the cloud or on-premises. 
Experience the next generation of analytic platforms.</p> <hr /> <p class="graf graf--p"><a class="markup--anchor markup--p-anchor" href="http://kscope17.com/component/seminar/seminarslist#Oracle%20Data%20Visualization%20for%20the%20Finance%20Analyst" target="_blank" rel="noopener noreferrer" data-href="http://kscope17.com/component/seminar/seminarslist#Oracle%20Data%20Visualization%20for%20the%20Finance%20Analyst"><strong class="markup--strong markup--p-strong">Oracle Data Visualization for the Finance Analyst</strong></a><strong class="markup--strong markup--p-strong"><br /> </strong><a class="markup--anchor markup--p-anchor" href="http://kscope17.com/component/seminar/presenterlist?last_id=66" target="_blank" rel="noopener noreferrer" data-href="http://kscope17.com/component/seminar/presenterlist?last_id=66"><strong class="markup--strong markup--p-strong">Kevin McGinley</strong></a><strong class="markup--strong markup--p-strong"><em class="markup--em markup--p-em">, </em></strong>Red Pill Analytics<strong class="markup--strong markup--p-strong"><em class="markup--em markup--p-em"><br /> </em>Co-presenter(s):</strong> <a class="markup--anchor markup--p-anchor" href="http://kscope17.com/component/seminar/presenterlist?last_id=170" target="_blank" rel="noopener noreferrer" data-href="http://kscope17.com/component/seminar/presenterlist?last_id=170"><strong class="markup--strong markup--p-strong">Tim German</strong></a>, Qubix<br /> <strong class="markup--strong markup--p-strong">When:</strong> June 26 — Monday: Session 3 , 2:00–3:00 pm<br /> <strong class="markup--strong markup--p-strong">Topic:</strong> Data Visualization &amp; Advanced Analytics — <strong class="markup--strong markup--p-strong">Subtopic:</strong> Oracle Data Visualization</p> <p class="graf graf--p">Many analysts within Finance are used to manipulating spreadsheets and waiting for enhancements to Essbase cubes to produce reports that need to be shared with their management or peers. This session will demonstrate how all analysts within Finance can get immediate value from Oracle Data Visualization (DV) and decrease their reliance on overly complex spreadsheets. 
From its ability to connect to many different kinds of data sources, wrangle multiple data sources into a usable format, and visualize insights that would be otherwise hard to see in a table, Oracle DV provides analysts an extra layer of functionality they can easily learn and use without IT intervention.</p> <hr /> <p class="graf graf--p"><a class="markup--anchor markup--p-anchor" href="http://kscope17.com/component/seminar/seminarslist#Using%20R%20for%20Data%20Profiling" target="_blank" rel="noopener noreferrer" data-href="http://kscope17.com/component/seminar/seminarslist#Using%20R%20for%20Data%20Profiling"><strong class="markup--strong markup--p-strong">Using R for Data Profiling</strong></a><strong class="markup--strong markup--p-strong"><br /> </strong><a class="markup--anchor markup--p-anchor" href="http://kscope17.com/component/seminar/presenterlist?last_id=43" target="_blank" rel="noopener noreferrer" data-href="http://kscope17.com/component/seminar/presenterlist?last_id=43"><strong class="markup--strong markup--p-strong">Michelle Kolbe</strong></a><strong class="markup--strong markup--p-strong"><em class="markup--em markup--p-em">, Red Pill Analytics<br /> </em>When:</strong> June 26 — Monday: Session 3 , 2:00-3:00 pm<br /> <strong class="markup--strong markup--p-strong">Topic:</strong> BI &amp; Reporting — <strong class="markup--strong markup--p-strong">Subtopic:</strong> Other BI and Reporting</p> <p class="graf graf--p">The benefits of knowing your data before embarking on a BI project are endless. Sure, you can buy a tool to help with this, or you could use R, an open-source tool. This session will dig into methods for using R to connect to your data source to see visual and tabular analyses of your data set. You’ll learn how to find missing data, outliers, and unexpected values. 
If you don’t know R or you are wanting to learn more functions within R, you’ll benefit from this session.</p> <blockquote class="graf graf--blockquote"><p><strong class="markup--strong markup--blockquote-strong">TUESDAY</strong></p></blockquote> <p class="graf graf--p"><a class="markup--anchor markup--p-anchor" href="http://kscope17.com/component/seminar/seminarslist#A%20Lambda%20Architecture%20in%20the%20Cloud:%20A%20Kscope17%20Case%20Study" target="_blank" rel="noopener noreferrer" data-href="http://kscope17.com/component/seminar/seminarslist#A%20Lambda%20Architecture%20in%20the%20Cloud:%20A%20Kscope17%20Case%20Study"><strong class="markup--strong markup--p-strong">A Lambda Architecture in the Cloud: A Kscope17 Case Study</strong></a><strong class="markup--strong markup--p-strong"><br /> </strong><a class="markup--anchor markup--p-anchor" href="http://kscope17.com/component/seminar/presenterlist?last_id=72" target="_blank" rel="noopener noreferrer" data-href="http://kscope17.com/component/seminar/presenterlist?last_id=72"><strong class="markup--strong markup--p-strong">Stewart Bryson</strong></a><strong class="markup--strong markup--p-strong">,</strong> Red Pill Analytics and <a class="markup--anchor markup--p-anchor" href="http://kscope17.com/component/seminar/presenterlist?last_id=69" target="_blank" rel="noopener noreferrer" data-href="http://kscope17.com/component/seminar/presenterlist?last_id=69"><strong class="markup--strong markup--p-strong">Kevin McGinley</strong></a><strong class="markup--strong markup--p-strong">, </strong>Red Pill Analytics<br /> <strong class="markup--strong markup--p-strong">When: </strong>Jun 27, 2017, Tuesday: Session 8, 2:00–3:00 pm<strong class="markup--strong markup--p-strong"><br /> Topic: </strong>Data Visualization &amp; Advanced Analytics <strong class="markup--strong markup--p-strong">Subtopic: </strong>Other</p> <p class="graf graf--p">A Lambda Architecture enables data-driven organizations by simultaneously providing batch and speed processing layers to satisfy the overall appetite for analytics and reporting. But building a Lambda architecture is not easy, usually requiring all of the following: a universal ingestion layer, an immutable data store as a system of record, one or more data processing layers that can satisfy batch and speed requirements, and a serving layer that enables data-driven decision making.</p> <p class="graf graf--p">In this session, we’ll demonstrate how Cloud platforms can supercharge the delivery of a capable Lambda architecture. 
Our case study will be the IoT data generated by Kscope17 attendees including the beacon from their badges, as well as other devices capturing the results of live polling.</p> <hr /> <p class="graf graf--p"><a class="markup--anchor markup--p-anchor" href="http://kscope17.com/component/seminar/seminarslist#Expanding%20Your%20Data-Driven%20Story:%20The%20Next%20Chapter" target="_blank" rel="noopener noreferrer" data-href="http://kscope17.com/component/seminar/seminarslist#Expanding%20Your%20Data-Driven%20Story:%20The%20Next%20Chapter"><strong class="markup--strong markup--p-strong">Expanding Your Data-Driven Story: The Next Chapter</strong></a><strong class="markup--strong markup--p-strong"><br /> </strong><a class="markup--anchor markup--p-anchor" href="http://kscope17.com/component/seminar/presenterlist?last_id=70" target="_blank" rel="noopener noreferrer" data-href="http://kscope17.com/component/seminar/presenterlist?last_id=70"><strong class="markup--strong markup--p-strong">Mike Durran</strong></a><strong class="markup--strong markup--p-strong"><em class="markup--em markup--p-em">, Oracle Corporation<br /> </em>Co-presenter(s):</strong> <a class="markup--anchor markup--p-anchor" href="http://kscope17.com/component/seminar/presenterlist?last_id=75" target="_blank" rel="noopener noreferrer" data-href="http://kscope17.com/component/seminar/presenterlist?last_id=75"><strong class="markup--strong markup--p-strong">Stewart Bryson</strong></a>, Red Pill Analytics<br /> <strong class="markup--strong markup--p-strong">When:</strong> June 27 — Tuesday: Session 9, 3:30-4:30 pm<br /> <strong class="markup--strong markup--p-strong">Topic:</strong> Data Visualization &amp; Advanced Analytics — <strong class="markup--strong markup--p-strong">Subtopic:</strong> Oracle Data Visualization</p> <p class="graf graf--p">Oracle Data Visualization (DV) makes it easy to get insight from your data. This stunningly visual and intuitive product enables you to access, blend, and wrangle a variety of sources — including spreadsheets, databases, and applications — and tell the story of your data. In this session, learn about the power of data storytelling and the latest capabilities of Oracle DV (including details of product roadmap) to create compelling analytic narratives, including how you can rapidly apply advanced analytic techniques to gain insights previously only accessible to advanced users. 
Learn about how Oracle DV has been used in real-life scenarios to gain insight and improve business performance.</p> <blockquote class="graf graf--blockquote"><p><strong class="markup--strong markup--blockquote-strong">WEDNESDAY</strong></p></blockquote> <p class="graf graf--p"><a class="markup--anchor markup--p-anchor" href="http://kscope17.com/component/seminar/seminarslist#Oracle%20DV%20for%20the%20Finance%20Analyst%20Hands%20on%20Lab" target="_blank" rel="noopener noreferrer" data-href="http://kscope17.com/component/seminar/seminarslist#Oracle%20DV%20for%20the%20Finance%20Analyst%20Hands%20on%20Lab"><strong class="markup--strong markup--p-strong">Hands-on Training: Oracle DV for the Finance Analyst</strong></a><strong class="markup--strong markup--p-strong"><br /> </strong><a class="markup--anchor markup--p-anchor" href="http://kscope17.com/component/seminar/presenterlist?last_id=69" target="_blank" rel="noopener noreferrer" data-href="http://kscope17.com/component/seminar/presenterlist?last_id=69"><strong class="markup--strong markup--p-strong">Kevin McGinley</strong></a><strong class="markup--strong markup--p-strong">, </strong>Red Pill Analytics and <a class="markup--anchor markup--p-anchor" href="http://kscope17.com/component/seminar/presenterlist?last_id=174" target="_blank" rel="noopener noreferrer" data-href="http://kscope17.com/component/seminar/presenterlist?last_id=174"><strong class="markup--strong markup--p-strong">Tim German</strong></a><strong class="markup--strong markup--p-strong">, </strong>Qubix<br /> <strong class="markup--strong markup--p-strong">When:<br /> </strong>Wednesday, June 28, 2017, 9:45-11:15 AM<br /> Wednesday, June 28, 2017, 1:45-3:15 PM</p> <p class="graf graf--p">This hands-on-lab will build upon the session given by Kevin McGinley and Tim German by allowing attendees to perform some of the demonstrations shown in the session given earlier in the week. Attendees will get to use Oracle Data Visualization against Essbase cubes, Excel spreadsheets, and even learn how to create their own mashups of data to be used for their own analytical purposes. They’ll also learn how building certain types of visualizations and using features like narrative mode can help deepen their analysis and make the communication of their findings easier. 
Prior attendance of the session is not required to attend the hands-on-lab.</p> <hr /> <p class="graf graf--p"><strong class="markup--strong markup--p-strong">Trends in the World of Analytics, Business Intelligence, and Performance Management Panel Session Moderated by </strong><a class="markup--anchor markup--p-anchor" href="http://kscope17.com/component/seminar/presenterlist?last_id=66" target="_blank" rel="noopener noreferrer" data-href="http://kscope17.com/component/seminar/presenterlist?last_id=66"><strong class="markup--strong markup--p-strong">Edward Roske</strong></a><strong class="markup--strong markup--p-strong"> <em class="markup--em markup--p-em">, interRel Consulting<br /> </em></strong><a class="markup--anchor markup--p-anchor" href="http://kscope17.com/component/seminar/presenterlist?last_id=72" target="_blank" rel="noopener noreferrer" data-href="http://kscope17.com/component/seminar/presenterlist?last_id=72"><strong class="markup--strong markup--p-strong">Stewart Bryson</strong></a><strong class="markup--strong markup--p-strong">,</strong> Red Pill Analytics<br /> <strong class="markup--strong markup--p-strong">When:</strong> Jun 28, 2017, Wednesday Session 14 , 1:45 pm — 2:45 pm<br /> <strong class="markup--strong markup--p-strong">Topic:</strong> BI &amp; Reporting — <strong class="markup--strong markup--p-strong">Subtopic:</strong> Other BI and Reporting</p> <p class="graf graf--p">There has never been a panel assembled with as many luminaries in the world of BI, EPM, and business analytics as you’ll see on this stage. Each one of these people has over 20 years of experience and collectively, they’ve been involved in more than 1,000 implementations. But they won’t be talking technical tips: with their wealth of experience, they’ll be discussing trends in the bigger world of analytics. Which products are rising up, where are companies investing their money, what new areas are emerging, and much, much more will be discussed as these gurus descend from their metaphorical mountains to discuss and debate for your amusement and education. 
If you want to know what the reporting, analysis, planning, and consolidation fields are up to, come with plenty of questions and an open mind.</p> <blockquote class="graf graf--blockquote"><p><strong class="markup--strong markup--blockquote-strong">THURSDAY</strong></p></blockquote> <p class="graf graf--p"><a class="markup--anchor markup--p-anchor" href="http://kscope17.com/content/thursday-deep-dive-sessions" target="_blank" rel="noopener noreferrer" data-href="http://kscope17.com/content/thursday-deep-dive-sessions"><strong class="markup--strong markup--p-strong">Deep Dive Session: Navigating the Oracle Business Analytics Frontier</strong></a><strong class="markup--strong markup--p-strong"><br /> </strong><a class="markup--anchor markup--p-anchor" href="http://kscope17.com/component/seminar/presenterlist?last_id=66" target="_blank" rel="noopener noreferrer" data-href="http://kscope17.com/component/seminar/presenterlist?last_id=66"><strong class="markup--strong markup--p-strong">Kevin McGinley</strong></a><strong class="markup--strong markup--p-strong">, </strong>Red Pill Analytics<strong class="markup--strong markup--p-strong"><em class="markup--em markup--p-em"><br /> </em>Co-presenter(s):</strong> <a class="markup--anchor markup--p-anchor" href="http://kscope17.com/component/seminar/presenterlist?last_id=221" target="_blank" rel="noopener noreferrer" data-href="http://kscope17.com/component/seminar/presenterlist?last_id=221"><strong class="markup--strong markup--p-strong">Tracy McMullen</strong></a><strong class="markup--strong markup--p-strong">,</strong> interRel Consulting<br /> <strong class="markup--strong markup--p-strong">When:</strong> June 29 — Deep-Dive Session, 9:00-11:00 am<br /> <strong class="markup--strong markup--p-strong">Topic:</strong> BI &amp; Reporting — <strong class="markup--strong markup--p-strong">Subtopic:</strong> Other BI and Reporting</p> <p class="graf graf--p">Saddle up and dig in your spurs as we trail blaze through Oracle’s Reporting, Business Intelligence, and Data Visualization solutions. Through a rotating panel of experts from Oracle, partners, and customers and interactive discussions with attendees, we’ll navigate Reporting and BI challenges and how Oracle Business Analytics addresses those requirements. Led by moderators Kevin McGinley and Tracy McMullen, the panel will discuss questions such as, “Should I use Smart View or Data Visualization or Oracle Analytics Cloud?”, “How do these solutions work together, and when should I use what?” and, “What are the considerations for moving to the Cloud?” Our panel will share thoughts and perspectives on today’s reporting, BI, and DV questions, climate, and trends. 
We reckon you won’t want to miss this EPM and BI reporting rodeo in Thursday’s Deep Dive Session.</p> <p class="graf graf--p"><a class="markup--anchor markup--p-anchor" href="http://kscope17.com/content/thursday-deep-dive-sessions" target="_blank" rel="noopener noreferrer" data-href="http://kscope17.com/content/thursday-deep-dive-sessions"><strong class="markup--strong markup--p-strong">The Great Debate: Where Should My Data Warehouse Live?</strong></a><strong class="markup--strong markup--p-strong"><br /> </strong><a class="markup--anchor markup--p-anchor" href="http://kscope17.com/component/seminar/presenterlist?last_id=85" target="_blank" rel="noopener noreferrer" data-href="http://kscope17.com/component/seminar/presenterlist?last_id=85"><strong class="markup--strong markup--p-strong">Michael Rainey</strong></a>, Moderator, Gluent<br /> <strong class="markup--strong markup--p-strong">When:</strong> June 29 — Deep-Dive Session, 9:00–11:00 am<br /> <strong class="markup--strong markup--p-strong">Topic:</strong> Data Warehousing &amp; Big Data — <strong class="markup--strong markup--p-strong">Subtopic:</strong> Data Architecture<br /> Panelists include:<br /> <a class="markup--anchor markup--p-anchor" href="http://kscope17.com/component/seminar/presenterlist?last_id=77" target="_blank" rel="noopener noreferrer" data-href="http://kscope17.com/component/seminar/presenterlist?last_id=77"><strong class="markup--strong markup--p-strong">Stewart Bryson</strong></a><strong class="markup--strong markup--p-strong">,</strong> Red Pill Analytics<br /> <a class="markup--anchor markup--p-anchor" href="http://kscope17.com/component/seminar/presenterlist?last_id=81" target="_blank" rel="noopener noreferrer" data-href="http://kscope17.com/component/seminar/presenterlist?last_id=81"><strong class="markup--strong markup--p-strong">Holger Friedrich</strong></a><strong class="markup--strong markup--p-strong">, </strong>sumIT AG<strong class="markup--strong markup--p-strong"><em class="markup--em markup--p-em"><br /> </em></strong><a class="markup--anchor markup--p-anchor" href="http://kscope17.com/component/seminar/presenterlist?last_id=75" target="_blank" rel="noopener noreferrer" data-href="http://kscope17.com/component/seminar/presenterlist?last_id=75"><strong class="markup--strong markup--p-strong">Antony Heljula</strong></a><strong class="markup--strong markup--p-strong">, </strong>Peak Indicators Ltd<br /> <a class="markup--anchor markup--p-anchor" href="http://kscope17.com/component/seminar/presenterlist?last_id=105" target="_blank" rel="noopener noreferrer" data-href="http://kscope17.com/component/seminar/presenterlist?last_id=105"><strong class="markup--strong markup--p-strong">Kent Graziano</strong></a><strong class="markup--strong markup--p-strong">, </strong>Snowflake Computing</p> <p class="graf graf--p">The long-standing debate of running your data warehouse on premises versus in the cloud continues at the Kscope17 Big Data and Data Warehousing Thursday Deep Dive session. Whether built in a traditional, relational database or constructed from “schema on read” data in Hadoop, the rise of cloud services over the past few years has led data architects, IT directors, and CIOs to ask the question: “Where should my data warehouse live?” Several experts in the Oracle data warehousing field will provide arguments for their preferred approach, while attempting to refute evidence presented for the alternative solution.
Along with what should be a lively and engaging debate, the experts will join together to answer any questions you may have around big data, data warehousing, and data integration in general. Don’t miss this great debate and Q&amp;A session!</p> <hr /> <p class="graf graf--p">Will you be at Kscope17 and want to meet up? <a class="markup--anchor markup--p-anchor" href="mailto:lauren@redpillanalytics.com" target="_blank" rel="noopener noreferrer" data-href="mailto:lauren@redpillanalytics.com">Contact us</a> and let’s get a drink!</p> <p class="graf graf--p">We are looking forward to this event and all of the other great opportunities to speak. Make sure to keep an eye on our <a class="markup--anchor markup--p-anchor" href="http://events.redpillanalytics.com" target="_blank" rel="nofollow noopener noreferrer" data-href="http://events.redpillanalytics.com">events</a> page for what Red Pill Analytics is up to this year.</p> Lauren Prezby http://redpillanalytics.com/?p=4974 Wed Jun 14 2017 08:56:14 GMT-0400 (EDT) Big Data Tundra: Creating a Flexible Cloud Based Data Ecosystem http://redpillanalytics.com/flexiblecloudbaseddataecosystem/ <p><img width="300" height="200" src="https://i1.wp.com/redpillanalytics.com/wp-content/uploads/2017/06/yousif-malibiran-125106.jpg?fit=300%2C200" class="attachment-medium size-medium wp-post-image" alt="Cloud Presentation" srcset="https://i1.wp.com/redpillanalytics.com/wp-content/uploads/2017/06/yousif-malibiran-125106.jpg?w=1920 1920w, https://i1.wp.com/redpillanalytics.com/wp-content/uploads/2017/06/yousif-malibiran-125106.jpg?resize=300%2C200 300w, https://i1.wp.com/redpillanalytics.com/wp-content/uploads/2017/06/yousif-malibiran-125106.jpg?resize=768%2C512 768w, https://i1.wp.com/redpillanalytics.com/wp-content/uploads/2017/06/yousif-malibiran-125106.jpg?resize=1024%2C683 1024w" sizes="(max-width: 300px) 100vw, 300px" data-attachment-id="4967" data-permalink="http://redpillanalytics.com/flexiblecloudbaseddataecosystem/yousif-malibiran-125106/" data-orig-file="https://i1.wp.com/redpillanalytics.com/wp-content/uploads/2017/06/yousif-malibiran-125106.jpg?fit=1920%2C1281" data-orig-size="1920,1281" data-comments-opened="1" data-image-meta="{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}" data-image-title="Cloud Presentation" data-image-description="&lt;p&gt;Cloud Presentation&lt;/p&gt; " data-medium-file="https://i1.wp.com/redpillanalytics.com/wp-content/uploads/2017/06/yousif-malibiran-125106.jpg?fit=300%2C200" data-large-file="https://i1.wp.com/redpillanalytics.com/wp-content/uploads/2017/06/yousif-malibiran-125106.jpg?fit=1024%2C683" /></p><p><span id="ember8242" class="ember-view">Did you miss <a href="https://www.linkedin.com/in/phil-goerdt-773a2319/">Phil Goerdt</a> and </span><a id="ember8245" class="feed-link feed-s-main-content__mention ember-view" tabindex="0" href="https://www.linkedin.com/in/mike-fuller-935823b4/" data-control-name="mention">Mike Fuller</a>&#8217;s <span id="ember8247" class="ember-view">presentation about Cloud computing and storing data at </span><a id="ember8250" class="feed-link feed-s-main-content__mention ember-view" tabindex="0" href="https://www.linkedin.com/company-beta/2625621/"
data-control-name="mention">MinneAnalytics</a><span id="ember8252" class="ember-view"> <a id="ember8255" class="hashtag-link ember-view" href="https://www.linkedin.com/search/results/content/?keywords=%23BIgDataTech&amp;origin=HASH_TAG_FROM_FEED" data-control-name="update_hashtag">#BIgDataTech</a> last week? Let us share their presentation with you! Download it <a href="https://www.slideshare.net/PhilGoerdt/big-data-tundra-creating-a-flexible-cloud-based-data-ecosystem?trk=v-feed">here</a>!</span></p> <p>Cloud computing has changed how organizations use, access and store their data. While these paradigms have shifted, the traditional way of thinking about databases and data warehouses remain steadfast in “on-prem” thinking, even in many cloud deployments. Can cloud-native data platforms such as Snowflake coupled with Big Data thinking enable better performance, lower total cost of ownership, and higher data flexibility? This presentation will walk you through a real-world customer story to provide the answer.</p> Lauren Prezby http://redpillanalytics.com/?p=4966 Mon Jun 12 2017 11:38:31 GMT-0400 (EDT) Unify: See Your Data From Every Perspective http://www.rittmanmead.com/blog/2017/06/unify-see-your-data-from-every-perspective/ <img src="http://www.rittmanmead.com/blog/content/images/2017/06/explainexplore3-1.jpg" alt="Unify: See Your Data From Every Perspective"><p><img src="http://www.rittmanmead.com/blog/content/images/2017/06/explainexplore3.jpg" alt="Unify: See Your Data From Every Perspective"></p> <p>Ad hoc access to accurate and secured data has always been the goal of business intelligence platforms. Yet, most fall short of balancing the needs of business users with the concerns of IT.</p> <p>Rittman Mead has worked with hundreds of organizations representing all points on the spectrum between agility and governance. Today we're excited to announce our new product, <a href="https://www.rittmanmead.com/unify">Unify</a>, which allows Tableau users to directly connect to OBIEE, providing the best of both worlds. <br> <br> </p> <h2 id="governeddatadiscovery">Governed Data Discovery</h2> <p>Business users get Tableau's intuitive data discovery features and the agility they need to easily blend their departmental data without waiting on IT to incorporate it into a warehouse. IT gets peace of mind, knowing their mission-critical data is protected by OBIEE's semantic layer and row-level security. <br> <br> </p> <h2 id="unifyessentials">Unify Essentials</h2> <p>Unify runs as a <a href="https://unify.ritt.md/unify/desktop">desktop app</a>, making it easy for departmental Tableau users to connect to a central OBIEE server. Unify also has a <a href="https://unify.ritt.md/unify/server">server option</a> that runs alongside OBIEE, for organizations with a large Tableau user base or those using Tableau Server.</p> <p>Desktop installation and configuration is simple. Once installed, users can query OBIEE from within Tableau with just a few clicks. Have a look at these short videos demonstrating <a href="https://youtu.be/nc-Ro258W88">setup</a> and <a href="https://youtu.be/iUA3ab0oadw">use of Unify</a>. <br> <br> </p> <h2 id="availabletoday">Available Today</h2> <p>Download your free 7-day trial of Unify Desktop <a href="https://unify.ritt.md/">here</a>.</p> <p>No Tableau Desktop license? No problem. 
Unify is compatible with <a href="https://public.tableau.com/s/">Tableau Public</a>.</p> Jordan Meyer 4c449525-8485-4652-a9a3-0bea2a0de4e1 Mon Jun 12 2017 10:09:45 GMT-0400 (EDT) Speaker at OTN Yathra 2017 https://gavinsoorma.com/2017/06/speaker-at-otn-yathra-2017/ <p>I am presenting the following papers at the OTN Yathra tour 2017 which will cover six cities in India &#8211; Chennai, Bangalore, Hyderabad, Pune, Mumbai and Delhi.</p> <p>&nbsp;</p> <pre><em><strong>Oracle Database Multitenant – What’s New in Oracle 12c Release 2</strong></em> Oracle Database 12c Release 2 presents several new features related to the Multitenant option which was introduced in 12c Release 1. The session introduces all the exciting new features related to Container and Pluggable databases, which will include Hot Cloning, Refreshable Pluggable Databases, Application Containers, PDB Flashback, PDB Lockdown Profiles, Performance Profiles and Proxy PDBs to name a few. <em><strong><span style="font-family: 'Courier New';">Upgrade and Migrate to Oracle Database 12c Release 2 – best practices for minimizing downtime</span></strong></em> Oracle Database 12.2.0.1 was released a few months ago and introduces many exciting and ground-breaking new features. However, in many cases organizations are not able to afford the outage required for such upgrades and migrations to a new release. This session outlines the best practices which can be deployed to minimize downtime required for upgrades and discusses the pros and cons of different upgrade/migration methods and techniques like Oracle GoldenGate, Cross-platform Transportable Tablespaces, Rolling Upgrades using Transient Logical Standby Databases and Data Guard among others. Attendees will also learn how to upgrade an Oracle 12.1.0.2 Multitenant environment with Container and Pluggable databases to Oracle 12c Release 2.</pre> <pre><a href="https://gavinsoorma.com/wp-content/uploads/2017/06/otn1.png"><img class="aligncenter wp-image-7660 " src="https://gavinsoorma.com/wp-content/uploads/2017/06/otn1-203x300.png" alt="" width="500" height="739" srcset="https://gavinsoorma.com/wp-content/uploads/2017/06/otn1-203x300.png 203w, https://gavinsoorma.com/wp-content/uploads/2017/06/otn1-768x1133.png 768w, https://gavinsoorma.com/wp-content/uploads/2017/06/otn1-694x1024.png 694w" sizes="(max-width: 500px) 100vw, 500px" /></a></pre> Gavin Soorma https://gavinsoorma.com/?p=7661 Sun Jun 11 2017 00:24:29 GMT-0400 (EDT) Kscope17 Essbase Track Highlights – Natalie Delemar http://www.odtug.com/p/bl/et/blogaid=725&source=1 Natalie Delemar, ODTUG president, shares her top six Essbase track sessions with reasons why they are her “don’t miss sessions” at ODTUG Kscope17: ODTUG http://www.odtug.com/p/bl/et/blogaid=725&source=1 Thu Jun 08 2017 14:41:26 GMT-0400 (EDT) Oracle DV for Data Scientists – Part II: Modifying Advanced Analytic/R Scripts https://realtrigeek.com/2017/06/07/oracle-dv-for-data-scientists-part-ii-modifying-advanced-analyticr-scripts/ <p>Something I enjoy doing in software is opening the different components of a tool. Here&#8217;s an example of what I mean&#8230; Say I have a Data Visualization Desktop project that I have exported.
I want to see what is in the export, so I unzip all the contents of the .dva to a folder.</p> <p><img data-attachment-id="1779" data-permalink="https://realtrigeek.com/2017/06/07/oracle-dv-for-data-scientists-part-ii-modifying-advanced-analyticr-scripts/1-9/" data-orig-file="https://epmqueen.files.wordpress.com/2017/06/1.jpg?w=840" data-orig-size="687,467" data-comments-opened="1" data-image-meta="{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}" data-image-title="1" data-image-description="" data-medium-file="https://epmqueen.files.wordpress.com/2017/06/1.jpg?w=840?w=300" data-large-file="https://epmqueen.files.wordpress.com/2017/06/1.jpg?w=840?w=687" class="alignnone size-full wp-image-1779" src="https://epmqueen.files.wordpress.com/2017/06/1.jpg?w=840" alt="1" srcset="https://epmqueen.files.wordpress.com/2017/06/1.jpg 687w, https://epmqueen.files.wordpress.com/2017/06/1.jpg?w=150 150w, https://epmqueen.files.wordpress.com/2017/06/1.jpg?w=300 300w" sizes="(max-width: 687px) 100vw, 687px" /></p> <p>Okay, cool. I have the project &#8220;guts&#8221;.</p> <p><img data-attachment-id="1780" data-permalink="https://realtrigeek.com/2017/06/07/oracle-dv-for-data-scientists-part-ii-modifying-advanced-analyticr-scripts/2-11/" data-orig-file="https://epmqueen.files.wordpress.com/2017/06/2.jpg?w=840" data-orig-size="259,168" data-comments-opened="1" data-image-meta="{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}" data-image-title="2" data-image-description="" data-medium-file="https://epmqueen.files.wordpress.com/2017/06/2.jpg?w=840?w=259" data-large-file="https://epmqueen.files.wordpress.com/2017/06/2.jpg?w=840?w=259" class="alignnone size-full wp-image-1780" src="https://epmqueen.files.wordpress.com/2017/06/2.jpg?w=840" alt="2" srcset="https://epmqueen.files.wordpress.com/2017/06/2.jpg 259w, https://epmqueen.files.wordpress.com/2017/06/2.jpg?w=150 150w" sizes="(max-width: 259px) 100vw, 259px" /></p> <p>But what about other pieces that aren&#8217;t outputs, but pieces of the software puzzle? Since we proudly advertise that you can bring your own R scripts to DV, what should that file look like? The folder &#8220;script_repository&#8221; holds the key. Here you can see all the different Advanced Analytics and R scripts available to you for DV. 
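By the way, reproducing the unzip step from the top of this post needs nothing more than base R, since the exported .dva opens like an ordinary zip archive — a minimal sketch, with a made-up project file name:</p><pre>
# Unpack a DVD project export and list what came out
# ("my_project.dva" and the output folder name are hypothetical)
unzip("my_project.dva", exdir = "my_project_contents")
list.files("my_project_contents", recursive = TRUE)
</pre><p>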
Note that I have downloaded all possible R plugins from the <a href="https://sites.google.com/site/oraclebipublicstore/downloads" target="_blank" rel="noopener">Oracle BI Public Store</a>, which is why I may have more scripts than you.</p> <p>How about if you are just curious as to how the scripts work?</p> <p><img class="alignnone size-full wp-image-1781" src="https://epmqueen.files.wordpress.com/2017/06/3.jpg?w=840" alt="3" srcset="https://epmqueen.files.wordpress.com/2017/06/3.jpg 793w, https://epmqueen.files.wordpress.com/2017/06/3.jpg?w=150 150w, https://epmqueen.files.wordpress.com/2017/06/3.jpg?w=300 300w, https://epmqueen.files.wordpress.com/2017/06/3.jpg?w=768 768w" sizes="(max-width: 793px) 100vw, 793px" /></p> <p>Taking &#8220;R.Correlation.xml&#8221; as an example, in your favorite text editor, you can see notes about the script as well as different options.
Since there is more than one kind of correlation calculation, this particular script gives you the option to choose between <a href="http://www.statisticssolutions.com/correlation-pearson-kendall-spearman/" target="_blank" rel="noopener">Pearson, Kendall, and Spearman</a>.</p> <p><img class="alignnone size-full wp-image-1782" src="https://epmqueen.files.wordpress.com/2017/06/4.jpg?w=840" alt="4" srcset="https://epmqueen.files.wordpress.com/2017/06/4.jpg?w=840 840w, https://epmqueen.files.wordpress.com/2017/06/4.jpg?w=1680 1680w, https://epmqueen.files.wordpress.com/2017/06/4.jpg?w=150 150w, https://epmqueen.files.wordpress.com/2017/06/4.jpg?w=300 300w, https://epmqueen.files.wordpress.com/2017/06/4.jpg?w=768 768w, https://epmqueen.files.wordpress.com/2017/06/4.jpg?w=1024 1024w" sizes="(max-width: 840px) 100vw, 840px" /></p> <p>The default is Pearson.</p> <p><img class="alignnone size-full wp-image-1783" src="https://epmqueen.files.wordpress.com/2017/06/5.jpg?w=840" alt="5" srcset="https://epmqueen.files.wordpress.com/2017/06/5.jpg 507w, https://epmqueen.files.wordpress.com/2017/06/5.jpg?w=150 150w, https://epmqueen.files.wordpress.com/2017/06/5.jpg?w=300 300w" sizes="(max-width: 507px) 100vw, 507px" /></p> <p>In our custom calculation in DV, I am using R.CorrelationPlot.xml to calculate my data.</p> <p><img class="alignnone size-full wp-image-1784" src="https://epmqueen.files.wordpress.com/2017/06/6.jpg?w=840" alt="6" srcset="https://epmqueen.files.wordpress.com/2017/06/6.jpg 633w, https://epmqueen.files.wordpress.com/2017/06/6.jpg?w=150 150w, https://epmqueen.files.wordpress.com/2017/06/6.jpg?w=300 300w" sizes="(max-width: 633px) 100vw, 633px" /></p>
<p>Here are the results from my correlation calculation. Note that I also talked about this graph and how to download and use plugins in a previous <a href="https://realtrigeek.com/2017/05/10/using-r-plugins-for-data-visualization/" target="_blank" rel="noopener">post</a>.</p> <p><img class="alignnone size-full wp-image-1785" src="https://epmqueen.files.wordpress.com/2017/06/7.jpg?w=840" alt="7" srcset="https://epmqueen.files.wordpress.com/2017/06/7.jpg 574w, https://epmqueen.files.wordpress.com/2017/06/7.jpg?w=150 150w, https://epmqueen.files.wordpress.com/2017/06/7.jpg?w=300 300w" sizes="(max-width: 574px) 100vw, 574px" /></p> <p>Let&#8217;s say I wanted to use the Kendall calculation for correlation.
Based on the script notes, I can change the option in this section for the script &#8220;R.CorrelationPlot.xml&#8221; that I referenced in my custom calculation.</p> <p><img class="alignnone size-full wp-image-1786" src="https://epmqueen.files.wordpress.com/2017/06/8.jpg?w=840" alt="8" srcset="https://epmqueen.files.wordpress.com/2017/06/8.jpg 379w, https://epmqueen.files.wordpress.com/2017/06/8.jpg?w=150 150w, https://epmqueen.files.wordpress.com/2017/06/8.jpg?w=300 300w" sizes="(max-width: 379px) 100vw, 379px" /></p> <p>I get the same results as before (based on the same dataset), but since it did not error out (ha), I can confirm that the script still works with the new option.</p> <p><img class="alignnone size-full wp-image-1787" src="https://epmqueen.files.wordpress.com/2017/06/9.jpg?w=840" alt="9" srcset="https://epmqueen.files.wordpress.com/2017/06/9.jpg 614w, https://epmqueen.files.wordpress.com/2017/06/9.jpg?w=150 150w, https://epmqueen.files.wordpress.com/2017/06/9.jpg?w=300 300w" sizes="(max-width: 614px) 100vw, 614px" /></p> <p>And for those R lovers out there, if you are curious as to the actual code used, it is at the bottom in the CDATA section.</p> <p><img class="alignnone wp-image-1788" src="https://epmqueen.files.wordpress.com/2017/06/10.jpg?w=1180&#038;h=85" alt="10" width="1180" height="85" srcset="https://epmqueen.files.wordpress.com/2017/06/10.jpg?w=1180&amp;h=85 1180w, https://epmqueen.files.wordpress.com/2017/06/10.jpg?w=150&amp;h=11 150w, https://epmqueen.files.wordpress.com/2017/06/10.jpg?w=300&amp;h=22 300w, https://epmqueen.files.wordpress.com/2017/06/10.jpg?w=768&amp;h=55 768w, https://epmqueen.files.wordpress.com/2017/06/10.jpg?w=1024&amp;h=74 1024w, https://epmqueen.files.wordpress.com/2017/06/10.jpg 1682w" sizes="(max-width: 1180px) 100vw, 1180px" /></p>
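<p>By the way, if you want to sanity-check what switching the correlation method actually changes, you don&#8217;t even need DV for that part (the .dva export, remember, is just a zip archive). Here is a quick pandas sketch that computes the same three correlation types the R scripts expose; the data frame and column names below are made up for illustration, not taken from the DV project:</p> <pre>
# The three correlation methods the R scripts expose as an option.
# The data here is invented purely to show the shape of the output.
import pandas as pd

df = pd.DataFrame({
    "revenue":  [120, 135, 150, 160, 180, 210],
    "units":    [10, 12, 13, 15, 17, 20],
    "discount": [0.05, 0.04, 0.06, 0.03, 0.02, 0.02],
})

for method in ("pearson", "kendall", "spearman"):
    print(method)
    print(df.corr(method=method).round(3), end="\n\n")
</pre> <p>Pearson and Spearman will often tell a similar story on monotonic data like this, while Kendall tends to produce somewhat smaller coefficients &#8211; exactly the kind of difference worth eyeballing before committing to one method in a script option.</p>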
<p>All this to say&#8230; I encourage you to take a look at each of the XML scripts in the script_repository folder to see what options are available to you. You may not have to write a brand new script; perhaps you can modify one already available to you! Also, you can learn how DV processes advanced analytics and R scripts. You can even start your journey towards learning R by reverse engineering the code. <img src="https://s0.wp.com/wp-content/mu-plugins/wpcom-smileys/twemoji/2/72x72/1f642.png" alt="🙂" class="wp-smiley" style="height: 1em; max-height: 1em;" /> Each DVD installation comes with the option to install advanced analytics, and with this installation comes an installation of the R GUI. Take time to play with this if you are curious about data science&#8230;</p> <p>More on using custom R scripts to come&#8230;</p> Sarah Craynon Zumbrum http://realtrigeek.com/?p=1775 Wed Jun 07 2017 13:03:24 GMT-0400 (EDT) OAC: Essbase and DVCS http://www.rittmanmead.com/blog/2017/06/oac-essbase-and-dvcs/ <p>Finally managed to get around to having a proper look at Essbase within Oracle Analytics Cloud Service (OAC) after a busy couple of months. This post focusses mainly on initial impressions of the ‘out of the box’ Essbase side of this - which we will explore in more detail in future posts - as well as more detail on the use of Essbase with DVCS. </p> <h2 id="usingessbasewithdvcs">Using Essbase with DVCS</h2> <p>One of the features we are keen to explore more in this context is the integration of <strong>Essbase and the Data Visualisation Cloud Service (DVCS)</strong>. One point that we found, and that we do not think is expressed clearly anywhere else we have seen, is how to configure this: in setting up our OAC instance, we were having difficulty coming up with a combination of configuration selections that enables Essbase and DV to work at the same time.
</p> <p>Oracle documentation (such as the price list) suggests that both should be available within Standard Edition OAC:</p> <p><img src="http://www.rittmanmead.com/blog/content/images/2017/06/blog5.png" alt=""></p> <p>But Doc ID 2265410.1 on MoS, by requiring a security rule to be added to the Essbase OAC, suggests that two OAC instances are required. We could not find any reference to this requirement in Oracle documentation or blogs on the subject, but it transpires after checking with Oracle that this is indeed the case – <strong><em>Essbase and DV need to be on separate OAC instances.</em></strong> </p> <h2 id="essbase">Essbase</h2> <p>Looking purely at Essbase, my initial reaction is very positive…whilst the interface is different (I am sure tears will be shed for <strong>EAS &amp; Studio</strong> in the foreseeable future…although given the way some stalwarts are still clinging on to the last surviving copies of the <strong>Excel Add In</strong>, maybe not too imminently), once the surface of the new interface is scratched, more...<em>ahem</em>…’seasoned’ developers will take comfort from being able to do a lot of the same things as they currently can. I am also confident it will fulfil one of the stated objectives in making it easier for non-experts to quickly and easily deploy cubes for analysis purposes. </p> <p><img src="http://www.rittmanmead.com/blog/content/images/2017/06/blog0.png" alt=""></p> <p>Whilst the manual application and cube maintenance tools through the OAC front-end seem resilient and work effectively, I think some aspects will be difficult to use as the primary maintenance method in a production system - the ‘breadcrumb’ method afforded to dimension maintenance in particular will start to get fiddly to use with a dimension of any sort of volume. The application and cube <strong>Import</strong> (from a formatted Excel spreadsheet) facility is great - to my mind, a bit like a supercharged and easier-to-use Outline Load Utility in Hyperion Planning - and the ability to refresh the spreadsheet from a deployed cube is a good feature that shouldn’t be taken for granted. I know Excel is regarded as the Devil’s work in some BI quarters…I personally don’t feel that way until it is being used as a database (or as some form of primary data storage)…but in this context, it is quick &amp; easy to use, on most people’s desktops straightaway, and is intuitive. </p> <p>Still in the Excel corner, on the Smartview side, the addition of the <strong>Cube Designer</strong> extension (requiring Smartview 11.1.2.5.700) to be able to consider &amp; change the more generic aspects (not members) of the ‘cube maintenance’ spreadsheets is a nice touch that makes this more straightforward and removes the need to pay strict attention to the spreadsheet layout. The ‘treeview’ style hierarchy viewer also helps make sense of the parent-child members that need to be detailed on the individual dimension tabs.</p> <p><img src="http://www.rittmanmead.com/blog/content/images/2017/06/blog1.png" alt=""></p>
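<p>As an aside, to make the parent-child shape of those dimension tabs a little more concrete, here is a rough pandas sketch of building one programmatically. To be clear, the column names and the sheet naming below are my own illustration of the idea, <em>not</em> the exact layout the OAC Import facility expects:</p> <pre>
# Illustrative only: a parent-child "dimension tab" of the kind described
# above. Column names and sheet naming are hypothetical, not the official
# OAC import layout. Writing .xlsx this way needs openpyxl installed.
import pandas as pd

product_dim = pd.DataFrame(
    [
        ("Product",  "",         "~"),
        ("Hardware", "Product",  "+"),
        ("Software", "Product",  "+"),
        ("Laptops",  "Hardware", "+"),
        ("Desktops", "Hardware", "+"),
    ],
    columns=["Member", "Parent", "Consolidation"],
)

# One tab per dimension in the workbook that gets imported.
with pd.ExcelWriter("cube_definition.xlsx") as writer:
    product_dim.to_excel(writer, sheet_name="Dim.Product", index=False)
</pre>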
<p>One issue that has flitted across my mind at this early stage is that of rules files. Whilst the <strong>Import</strong> facility creates these for you (as with creating a cube from Essbase Studio), which is welcome, and rules files created in an on-prem system can be uploaded (again, welcome), the on-board rules file editor is text based: </p> <p><img src="http://www.rittmanmead.com/blog/content/images/2017/06/blog4.png" alt=""></p> <p>I’m not too sure how many people have created or edited rules files like this before (although I’d hazard a guess), but whilst the presence of <em>any</em> means to create, amend, or even tweak a file is good, it remains to be seen how usable this approach is. The alternative is to resubmit from the maintenance spreadsheet, thus getting it created / amended for you, or to maintain it in an on-prem system…but seeing as this platform is an alternative to (rather than an augmentation of) on-prem for a lot of people, I’m not sure how practical this is. </p> <p>Whilst the existing tools look really promising, I can’t help but think there will be occasions going forwards where it might be advantageous to be able to create a rules file to run an uploaded file outside of them: time will tell. </p> <p>The <strong>Command Line Tool</strong> (downloadable from OAC-Essbase / Utilities) is a little limited at the moment, but goes some way towards filling the potential gap left by the absence of client-side EssMsh and can only grow with further releases: from the Oracle OAC documentation...</p> <p><img src="http://www.rittmanmead.com/blog/content/images/2017/06/blog2.png" alt=""></p> <p>In conclusion, first impressions are very favourable. There are changes (eg Security), new features (eg Sandboxing), and I am sure there will be gaps for those considering moving from existing on-prem applications - for example, as I have seen someone else reference, there does not seem to be any reference to <em>partitions</em> in the front end or the import spreadsheet layout - so whilst there is a lot with which we will quite quickly feel familiar, there are also going to be new areas and new practices for us to get into step with: as above, we will look to explore some of these in future posts. </p> Mark Cann 6822a78e-222c-4a02-ace4-cd52f675a66f Wed Jun 07 2017 10:00:00 GMT-0400 (EDT) Announcing the 2017 ODTUG Innovation Award Nominations http://www.odtug.com/p/bl/et/blogaid=724&source=1 New for 2017 - member voting! If you are an ODTUG member, you should have received a members-only voting link to cast a vote for your favorite innovation. Thanks to all the individuals who nominated these outstanding individuals and their exceptionally innovative projects.
ODTUG http://www.odtug.com/p/bl/et/blogaid=724&source=1 Mon Jun 05 2017 10:06:47 GMT-0400 (EDT) Kscope17 EPM Data Integration Session Highlights - Tony Scalese http://www.odtug.com/p/bl/et/blogaid=723&source=1 With ODTUG Kscope17 less than one month away, here is a highlight of the Kscope17 EPM Data Integration sessions that Tony Scalese is most excited to attend and why he thinks you should attend them: ODTUG http://www.odtug.com/p/bl/et/blogaid=723&source=1 Fri Jun 02 2017 11:31:57 GMT-0400 (EDT) Overview of the new Cloudera Data Science Workbench http://www.rittmanmead.com/blog/2017/06/cloudera-data-science-workbench/ <p>Recently Cloudera released a new product called <a href="https://www.cloudera.com/products/data-science-and-engineering/data-science-workbench.html">Cloudera Data Science Workbench</a> (CDSW).</p> <p>Being a Cloudera Partner, we at Rittman Mead are always excited when something new comes along.</p> <p>The CDSW is positioned as a collaborative platform for data scientists/engineers and analysts, enabling larger teams to work in a self-service manner through a web browser. This browser application is effectively an IDE for R, Python and Scala - all your favorite toys!</p> <p><img src="http://www.rittmanmead.com/blog/content/images/2017/06/Screenshot-2017-05-18-17.10.50.png" alt=""></p> <p>The CDSW is deployed onto edge nodes of your CDH cluster, providing easy access to your HDFS data and the Spark2 and Impala engines. This means that team members can immediately start working on their projects, accessing full datasets and sharing analysis and results. A CDSW Project can include reusable code and snippets, libraries, etc., helping your teams to collaborate. Oh, and these projects can be linked with Github repos to help keep version history.</p> <p>The workbench is used to fire up user sessions with R, Python or Scala inside dedicated Docker engines. These engines can be customised, or extended, like any other Docker image to include all your favourite R packages and Python libraries. Using HDFS, Hive, Spark2 or Impala, the workload can then be distributed over to the CDH cluster using your preferred methods, without having to configure anything. This engine (virtual machine, really) runs for as long as the analysis does. Any logs or output files need to be saved in the project folder, which is mounted inside the engine and saved on the CDSW master node. The master node is a gateway node to the CDH cluster and can scale out to many worker nodes to distribute the Docker engines.</p> <p><img src="http://www.rittmanmead.com/blog/content/images/2017/06/Screenshot-2017-06-02-14.46.00-4.png" alt="(C) Cloudera.com"></p> <p>And under the hood we also have Kubernetes to schedule user workload across the worker nodes and provide CPU and memory isolation.</p> <p><img src="http://www.rittmanmead.com/blog/content/images/2017/05/Screenshot-2017-05-19-11.34.39.png" alt=""></p> <p>So far I find the IDE to be a bit too simple and lacking features compared to e.g. RStudio Server. But the ease of use and the fact that everything is automatically configured makes the CDSW an absolute must for any Cloudera customer with data science teams. Also, I'm convinced that future releases will add loads of cool functionality.</p> <p>I spent about two days building a new cluster on AWS and installing the Cloudera Data Science Workbench - an indication of how easy it is to get up and running. Btw, it also runs in the cloud (IaaS) ;) </p>
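<p>To make the "session on the edge, work on the cluster" flow a bit more concrete, here is a minimal PySpark sketch of the kind of thing you would run from a CDSW engine. The path and column name are placeholders, not anything CDSW-specific:</p> <pre>
# The driver runs inside the CDSW Docker engine; the executors run out
# on the CDH cluster. Path and column name below are made-up examples.
from pyspark.sql import SparkSession

spark = (SparkSession.builder
         .appName("cdsw-example")
         .getOrCreate())

# Read a dataset straight off HDFS and let the cluster do the heavy lifting.
events = spark.read.parquet("hdfs:///data/events")
events.groupBy("event_type").count().show()

spark.stop()
</pre>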
<p>Want to know more or see a live demo? Contact us at <a href="mailto:info+cdsw@rittmanmead.com?subject=Cloudera+Data+Science+Workbench">info@rittmanmead.com</a></p> Borkur Steingrimsson 4e5761d8-8b92-464c-a57e-1caedeb25d39 Fri Jun 02 2017 10:07:37 GMT-0400 (EDT) Because… Tradition? http://redpillanalytics.com/because-tradition/ <p><img width="300" height="200" src="https://i1.wp.com/redpillanalytics.com/wp-content/uploads/2017/06/BecauseTradition.jpg?fit=300%2C200" class="attachment-medium size-medium wp-post-image" alt="Challenging Business Process to Create Better Data and Insights" srcset="https://i1.wp.com/redpillanalytics.com/wp-content/uploads/2017/06/BecauseTradition.jpg?w=2000 2000w, https://i1.wp.com/redpillanalytics.com/wp-content/uploads/2017/06/BecauseTradition.jpg?resize=300%2C200 300w, https://i1.wp.com/redpillanalytics.com/wp-content/uploads/2017/06/BecauseTradition.jpg?resize=768%2C512 768w, https://i1.wp.com/redpillanalytics.com/wp-content/uploads/2017/06/BecauseTradition.jpg?resize=1024%2C682 1024w" sizes="(max-width: 300px) 100vw, 300px" /></p><p id="1278" class="graf graf--p graf-after--h4">Who doesn’t love traditions? While I am not the most nostalgic guy, I certainly have a handful of traditions that I hold dear. I think as we blaze bravely ahead, it’s important to remember where we come from, both good and bad, to influence a better future. Sure, we all have traditions on a personal level, but what most people don’t realize is that most businesses carry their own traditions. The problem is that most of these traditions are bad for the business itself.</p> <h4 id="a56b" class="graf graf--h4 graf-after--p">The Process Problem</h4> <p id="71d0" class="graf graf--p graf-after--h4">The sanctity of the process creates many problems with data. Poor data quality usually stems from a bad input process. Poor query performance is usually related to a bad process that needs to be corrected at run time. Poor technology acceptance may be part of a bad process; if users can still go elsewhere, why come to you, the data supplier?</p> <p id="dcba" class="graf graf--p graf-after--p">Many times I have seen organizations hamstrung by data issues. And, not infrequently, I have seen organizations have data issues due to business processes. Sometimes the data comes to the end of the process and is incorrect, sometimes codes need to be transformed into something useful, sometimes the data is dirty.
I’m sure you can think of a few examples of your own. And most times I see this issue, the solution is to fix the data after the process is complete (and the data has already been written), not during. Usually this is the case because “we can’t change the process”.</p> <p id="2c68" class="graf graf--p graf-after--p">I’ve said this before, but a good tool does not fix underlying data issues. During data initiatives, this is especially applicable when people begin to defend the processes used. Pushing back on the process to get better data in and out is a win for everyone.</p> <p id="cc03" class="graf graf--p graf--startsWithDoubleQuote graf-after--p graf--trailing">“Well, our process won’t allow us to do that.” This is a roadblock phrase that I have heard numerous times when it comes to fixing data during the process. Sometimes this is actually accurate; sometimes the process is mandated and enforced by regulation, as seen in the financial services and healthcare industries. Most of the time, however, it is not a byproduct of mandate. Most of the time the process won’t allow something to happen because it takes effort to change the process.</p> <p id="026c" class="graf graf--p graf--startsWithDoubleQuote graf--leading">“The process has always been like that”. This is another excuse that I hear often. Let me tell you a story to illustrate why this is such poor thinking.</p> <p id="4a6f" class="graf graf--p graf-after--p">It’s holiday season and the whole family is together. Cousins, aunts, uncles (even the crazy one), grandparents, brothers, sisters and parents are all in attendance, and it will be a grand time. Of course, the best part of the day is eating, and my, what a feast it will be. During all of the excitement, one of the children wanders into the kitchen and sees some of the adults making the food. She stops and stares while she sees her mother cut the ham in half, place one half in the roasting tray, and the other half in the garbage. The little girl asks “Mommy, why did you do that?” The mother replies, “Well, that’s how your grandmother always made it when I was a girl.
I brought this up to the client and proposed a new method of calculating the metric in question that will correctly measure what they were trying to understand. Flabbergasted, the client said “That can’t be right. We have always calculated it the other way and it has always been right.”</p> <p id="7f06" class="graf graf--p graf-after--p">The trouble with traditions is that at one time they made sense. They were deemed a good way of pursuing a goal, or remembering, or instilling consistency based on the understanding at the time. But as businesses grow more sophisticated, it is important to take off the blinders and restrictors that hold back understanding, higher quality and truth. That’s the challenge.</p> <figure id="06f9" class="graf graf--figure graf-after--p"> <div class="aspectRatioPlaceholder is-locked"> <div class="aspectRatioPlaceholder-fill"></div> <div class="progressiveMedia js-progressiveMedia graf-image is-canvasLoaded is-imageLoaded" data-image-id="1*N-CcVMX9b_LbQ08kEsfv5A.jpeg" data-width="750" data-height="370" data-action="zoom" data-action-value="1*N-CcVMX9b_LbQ08kEsfv5A.jpeg" data-scroll="native"> <p>&nbsp;</p> <div style="width: 760px" class="wp-caption aligncenter"><img class="progressiveMedia-image js-progressiveMedia-image" src="https://i0.wp.com/cdn-images-1.medium.com/max/1600/1*N-CcVMX9b_LbQ08kEsfv5A.jpeg?resize=750%2C370&#038;ssl=1" alt="" data-src="https://i0.wp.com/cdn-images-1.medium.com/max/1600/1*N-CcVMX9b_LbQ08kEsfv5A.jpeg?resize=750%2C370&#038;ssl=1" data-recalc-dims="1" /><p class="wp-caption-text">…and I’ll call for back up when Agent Smith shows up.</p></div> </div> </div><figcaption class="imageCaption"></figcaption></figure> <p id="edf4" class="graf graf--p graf-after--figure">This train of thought also directly correlates with themes from my other blog posts. If the business lacks a vision for data, the data will not be considered when the business acts. And if the business does not care about the data, then data supply will be affected due to poor quality. This forces people to drill what I call “data wells” — silo-ed data marts, Access databases or even Excel workbooks that serve the needs for a small community. As I’ve discussed before, data should be a utility in the organization; one that provides access and value to all interested parties.</p> <p id="a016" class="graf graf--p graf-after--p">I’ll close with a quote from Jeff Bezos, who seems to know a thing or two about being an agent of change. In his last investor letter, he outlines ways to stay a “Day 1” company, which is a different topic for another time. However, he does have some words of warning about processes…</p> <blockquote id="fd18" class="graf graf--blockquote graf-after--p graf--trailing"><p>Good process serves you so you can serve customers. But if you’re not watchful, the process can become the thing. This can happen very easily in large organizations. The process becomes the proxy for the result you want. You stop looking at outcomes and just make sure you’re doing theprocess right. Gulp. It’s not that rare to hear a junior leader defend a bad outcome with something like, “Well, we followed the process.” A more experienced leader will use it as an opportunity to investigate and improve the process. The process is not the thing. It’s always worth asking, do we own the process or does the process own us?</p></blockquote> Phil Goerdt http://redpillanalytics.com/?p=4931 Thu Jun 01 2017 15:03:21 GMT-0400 (EDT) Because… Tradition? 
Kscope Ambassador Program http://www.odtug.com/p/bl/et/blogaid=722&source=1 The Kscope Ambassador Program is back!  You’re already attending the sessions, so why not take an opportunity to assist with the conference and make a difference? ODTUG http://www.odtug.com/p/bl/et/blogaid=722&source=1 Thu Jun 01 2017 09:00:53 GMT-0400 (EDT) First Steps with Oracle Analytics Cloud http://www.rittmanmead.com/blog/2017/06/first-steps-with-oracle-analytics-cloud/ <h1 id="preface">Preface</h1> <p>Not long ago Oracle added a new offering to their Cloud - an OBIEE in the Cloud with full access. Francesco Tisiot wrote an <a href="https://www.rittmanmead.com/blog/2017/04/oracle-analytics-cloud-product-overview/">overview</a> of it and now it's time to go a bit deeper and see how you can poke it with a sharp stick by yourself. In this blog, I'll show how to get your own OAC instance as quickly and easily as possible.</p> <h1 id="beforeyoustart">Before you start</h1> <p>The very first step is to <a href="https://cloud.oracle.com/tryit">register a cloud account</a>. Oracle gives a trial which allows testing of all features. I won't show it here as it is more or less a standard registration process. I just want to highlight a few things:</p> <ul> <li>You will need to verify your phone number by receiving an SMS. It seems that this mechanism may be a bit overloaded and I had to make <em>more than one</em> attempt. I pressed the <strong>Request code</strong> button but nothing happened. I waited and pressed it again, and again. And eventually, I got the code. I can't say for sure and possibly it was just my bad luck, but if you face the same problem just keep pushing (but not too much, requesting a code every second won't help you). </li> <li>Even for the trial you'll be asked for credit card details. I haven't found good diagnostics on how much has already been spent and <a href="https://docs.oracle.com/en/cloud/get-started/subscriptions-cloud/mmocs/monitoring-service-status-account-balance-and-usage-domains.html#GUID-0C50B61D-8D55-4B4B-9C14-E41C65422278">the documentation</a> is not really helpful here.</li> </ul> <h1 id="architecture">Architecture</h1> <p>OAC instances are not self-contained and require some additional services. The absolute minimum configuration is the following:</p> <ul> <li><strong>Oracle Cloud Storage</strong> (OCS) - is used for backups, log files, etc.</li> <li><strong>Oracle Cloud Database Instance</strong> (DBC) - is used for RCU schemas.</li> <li><strong>Oracle Analytics Cloud Instance</strong> (OAC) - is our ultimate target.</li> </ul> <p>From the Cloud services point of view, the architecture is the following. This picture doesn't show virtual disks mounted to instances.
These disks consume Cloud Storage quota but they aren't created separately as services.</p> <p><img src="http://www.rittmanmead.com/blog/content/images/2017/05/10-service-architecture.png" alt="Architecture"></p> <p>We need at least one Oracle Database Cloud instance to store <strong>RCU</strong> schemas. This database may or may not have a separate Cloud Storage area for backups. Every OAC instance requires a Cloud Storage area for logs. Multiple OAC instances may share one Cloud Storage area, but I can't find any advantage of this approach over a separate area for every instance.</p> <h1 id="createresources">Create Resources</h1> <p>We create these resources in the order they are listed earlier. Start with Storage, then the DB, and the last one is OAC. Actually, we don't have to create Cloud Storage containers separately as they may be created automatically. But I show it here to make things more clear without too much "it works by itself" magic.</p> <h2 id="createcloudstorage">Create Cloud Storage</h2> <p>The easiest part of all is the Oracle Cloud Storage container. We don't need to specify its size or lots of parameters. The only parameters are a name, storage class (Standard/Archive) and encryption. </p> <p><img src="http://www.rittmanmead.com/blog/content/images/2017/05/20-create_ocs.gif" alt="20-create_ocs.gif"></p> <p>I spent some time here trying to figure out how to reference this storage later. There is a hint saying that <em>"Use the format: &lt;storage service>-&lt;identity domain>/&lt;container>. For example: mystorage1-myid999/mybackupcontainer."</em> And while <em>identity domain</em> and <em>container</em> are pretty obvious, <em>storage service</em> puzzled me for some time. The answer is "storage service=<code>Storage</code>". You can see this at the top of the page.</p> <p><img src="http://www.rittmanmead.com/blog/content/images/2017/05/30-OCS_naming.png" alt="30-OCS_naming.png"></p> <p>It seems that <em>Storage</em> is a fixed keyword, <em>rurittmanm</em> is the domain name created during the registration process and <em>demo</em> is the actual container name. So in this sample when I need to reference my <em>demo</em> OCS I should write <code>Storage-rurittmanm/demo</code>.</p>
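<p>In code form, the rule is trivial once you know it. A tiny sketch (the helper name is mine, nothing official):</p> <pre>
# Backup/log container reference: "Storage" is a fixed keyword,
# only the identity domain and container name vary.
def ocs_container_ref(identity_domain: str, container: str) -> str:
    return f"Storage-{identity_domain}/{container}"

print(ocs_container_ref("rurittmanm", "demo"))  # -> Storage-rurittmanm/demo
</pre>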
<h2 id="createclouddb">Create Cloud DB</h2> <p>Now when we are somewhat experienced in Oracle Cloud we may move to a more complicated task and create a Cloud DB Instance. It is harder than a Cloud Storage container, but not too much. If you have ever created an on-premise database service using <code>DBCA</code>, a cloud DB should be a piece of cake to you.</p> <p>At the first step, we set the name of the instance and select the most general options. These options are:</p> <ul> <li><p><strong>Service Level.</strong> Specifies how this instance will be managed. Options are:</p> <ul><li><em>Oracle Database Cloud Service</em>: Oracle Database software pre-installed on Oracle Cloud Virtual Machine. Database instances are created for you using configuration options provided in this wizard. Additional cloud tooling is available for backup, recovery and patching.</li> <li><em>Oracle Database Cloud Service - Virtual Image</em>: Oracle Database software pre-installed on an Oracle Cloud Virtual Machine. Database instances are created by you manually or using DBCA. No additional cloud tooling is available.</li></ul></li> <li><p><strong>Metering Frequency</strong> - defines how this instance will be paid for: by month or by hour.</p></li> <li><p><strong>Software Release</strong> - if the Service Level is Oracle Database Cloud Service, we may choose <em>11.2</em>, <em>12.1</em> and <em>12.2</em>; for <em>Virtual Image</em> only <em>11.2</em> and <em>12.1</em> are available. Note that even the cloud does no magic, and with DB 12.2 you may expect the same <a href="http://docs.oracle.com/middleware/12212/lcm/RNINF/GUID-10F524EA-A008-4868-8AA5-3F19C7E787FE.htm#RNINF-GUID-5F639F17-964F-4FC5-AD3A-A8281F736045">problems</a> as on-premise.</p></li> <li><p><strong>Software Edition</strong> - Values are:</p> <ul><li>Standard Edition</li> <li>Enterprise Edition</li> <li>Enterprise Edition - High Performance</li> <li>Enterprise Edition - Extreme Performance</li></ul></li> <li><p>Database Type - defines High Availability and Disaster Recovery options:</p> <ul><li>Single Instance</li> <li>Database Clustering with RAC</li> <li>Single Instance with Data Guard Standby</li> <li>Database Clustering with RAC and Data Guard Standby</li></ul></li> </ul> <p><em>Database Clustering with RAC</em> and <em>Database Clustering with RAC and Data Guard Standby</em> types are available only for the <em>Enterprise Edition - Extreme Performance</em> edition.</p> <p><img src="http://www.rittmanmead.com/blog/content/images/2017/05/40-create_obdc-1.gif" alt="40-create_obdc-1.gif"></p> <p>The second step is also quite intuitive. It has a lot of options but they should be pretty simple and well-known for anyone working with Oracle Database.</p> <p><img src="http://www.rittmanmead.com/blog/content/images/2017/05/60-create-odbc-dc.png" alt="60-create-odbc-dc.png"></p> <p>The first block of parameters is about basic database configuration. Parameters like <code>DB name (sid)</code> or <code>Administration Password</code> are obvious.</p> <p><code>Usable DataFile Storage (GB)</code> is less obvious. Actually, in the beginning, it puzzled me completely. In this sample, I ask for 25 Gb of space. But this doesn't mean that my instance will take 25 Gb of my disk quota. In fact, this particular instance took 150 Gb of disk space. Here we specify only the guaranteed user disk space, but an instance needs some space for the OS, and DB software, and temp, and swap, and so on.</p> <p><img src="http://www.rittmanmead.com/blog/content/images/2017/05/65-db-disk.png" alt="65-db-disk.png"></p> <p>A trial account is limited to a 500 Gb quota and that means that we can create only 3 Oracle DB Cloud instances at max. Every instance will use around 125 Gb of, let's say, "technical" disk space we can't reduce. From the practical point of view, it means that it may be preferable to have one "big" instance (in terms of disk space) rather than multiple "small" ones.</p> <ul> <li><strong>Compute shape</strong> specifies how powerful our VM should be.
Options are the following: <ul><li>OC3 - 1.0 OCPU, 7.5 GB RAM</li> <li>OC4 - 2.0 OCPU, 15.0 GB RAM</li> <li>OC5 - 4.0 OCPU, 30.0 GB RAM</li> <li>OC6 - 8.0 OCPU, 60.0 GB RAM</li> <li>OC7 - 16.0 OCPU, 120.0 GB RAM</li> <li>OC1m - 1.0 OCPU, 15.0 GB RAM</li> <li>OC2m - 2.0 OCPU, 30.0 GB RAM</li> <li>OC3m - 4.0 OCPU, 60.0 GB RAM</li> <li>OC4m - 8.0 OCPU, 120.0 GB RAM</li> <li>OC5m - 16.0 OCPU, 240.0 GB RAM</li></ul></li> </ul> <p>We may increase or decrease this value later.</p> <ul> <li><strong>SSH Public Key</strong> - Oracle gives us the ability to connect directly to the instance, and authentication is done with a <code>user</code> + <code>private key</code> pair. Here we specify a public key which will be added to the instance. Obviously, we should have a private key for this public one. The options are either to provide a key we generated ourselves or to let Oracle create keys for us. The most non-obvious thing here is the username for SSH. You can't change it and it isn't shown anywhere in the interface (at least I haven't found it). But you can find it in the <a href="https://docs.oracle.com/cloud/latest/stcomputecs/STCSG/GUID-D947E2CC-0D4C-43F4-B2A9-A517037D6C11.htm#STCSG-GUID-D947E2CC-0D4C-43F4-B2A9-A517037D6C11">documentation</a> and it is <code>opc</code>.</li> </ul> <p>The second block of parameters is about backup and restore. The meaning of these options is obvious, but the exact values aren't (at least in the beginning).</p> <p><img src="http://www.rittmanmead.com/blog/content/images/2017/05/70-create-odbc-brc.png" alt="70-create-odbc-brc.png"></p> <ul> <li><p>Cloud Storage Container - that's the Cloud Storage container I described earlier. The value for this field will be something like <code>Storage-rurittmanm/demo</code>. In fact, I don't have to create this container in advance. It's possible to specify any nonexistent container here (but still in the form of <code>Storage-&lt;domain&gt;/&lt;name&gt;</code>) and tick the <code>Create Cloud Storage Container</code> check-box. This will create a new container for us.</p></li> <li><p>Username and Password are the credentials of a user who can access this container.</p></li> </ul> <p>The last block is <strong>Advanced settings</strong> and I believe it's quite simple and obvious. Most of the time we don't need to change anything in this block.</p> <p><img src="http://www.rittmanmead.com/blog/content/images/2017/05/80-create-odbc-ac.png" alt="80-create-odbc-ac.png"></p> <p>When we fill in all the parameters and press the <strong>Next</strong> button we get a <strong>Summary</strong> screen and the actual process starts. It takes about 25-30 minutes to finish.</p> <p>When I just started my experiments I was constantly getting a message saying that no sites were available and my request could not be completed.</p> <p><img src="http://www.rittmanmead.com/blog/content/images/2017/05/85-create-odbc-err.png" alt=""></p> <p>It is possible that it was again the same "luck" as with the phone number verification, but the problem resolved itself a few hours later.</p> <h2 id="createoacinstance">Create OAC Instance</h2> <p>At last, we have all we need for our very first OAC instance. The process of an OAC instance setup is almost the same as for an Oracle DB Cloud Instance. We start the process, define some parameters and wait for the result.</p> <p>At the first step, we give a name to our instance, provide an SSH public key, and select an edition of our instance.
We have two options here, <strong>Enterprise Edition</strong> or <strong>Standard Edition</strong>, and later we will select more additional options. <strong>Standard Edition</strong> will allow us to specify either <strong>Data Visualisation</strong> or <strong>Essbase instances</strong>, and <strong>Enterprise Edition</strong> adds to this list a classical <strong>Business Intelligence</strong> feature. The rest of the parameters here are exactly the same as for the Database Instance.</p> <p><img src="http://www.rittmanmead.com/blog/content/images/2017/05/90-oacs-1st-step.png" alt="90-oacs-1st-step.png"></p> <p>At the second step, we have four blocks of parameters.</p> <p><img src="http://www.rittmanmead.com/blog/content/images/2017/05/100-oacs-2nd-step.png" alt="100-oacs-2nd-step.png"></p> <ul> <li><p><strong>Service Administrator</strong> - the most obvious one. Here we specify an administrator user. This user will be a system administrator.</p></li> <li><p><strong>Database</strong> - select a database for RCU schemas. That's why we needed a database.</p></li> <li><p><strong>Options</strong> - specify which options our instance will have.</p> <ul><li><strong>Self-Service Data Visualisation, Preparation and Smart Discovery</strong> - this option means Oracle Data Visualisation and it is available for both Standard and Enterprise Editions.</li> <li><strong>Enterprise Data Models</strong> - this option gives us classical BI and is available only for Enterprise Edition. Also, this option may be combined with the first one, giving us both classical BI and modern Data discovery on one instance.</li> <li><strong>Collaborative Data Collection, Scenarios and What-if Analysis</strong> - this one stands for Essbase and is available for Standard and Enterprise Editions. It can't be combined with other options.</li></ul></li> <li><strong>Size</strong> is the same thing that is called <strong>Compute Shape</strong> for the Database. Options are exactly the same.</li> <li><strong>Usable Storage Size on Disk GB</strong> also has the same meaning as for the DB. The minimum size we may specify here is 25 Gb, which gives us a total of 170 Gb of used disk space.</li> </ul> <p>Here is a picture showing all possible combinations of services:</p> <p><img src="http://www.rittmanmead.com/blog/content/images/2017/05/110-oacs-editions.png" alt="110-oacs-editions.png"></p> <p>And here is the virtual disks configuration. The <code>data</code> disk is the one we specify. <br> <img src="http://www.rittmanmead.com/blog/content/images/2017/05/130-oacs-storage.png" alt="130-oacs-storage.png"></p> <p>The last block - <strong>Cloud Storage Configuration</strong> - was the hardest one. Especially the first field - <strong>Cloud Storage Base URL</strong>. <a href="https://docs.oracle.com/cloud/latest/analytics-cloud/ACSAM/GUID-AAB78850-F18F-4D67-AACF-A096B396ED14.htm#ACSAM-GUID-591B66E9-EAD1-42D9-B9C7-37AB648A648B">The documentation</a> says <em>"Use the format: <a href="https://example.storage.oraclecloud.com/v1">https://example.storage.oraclecloud.com/v1</a>"</em> and nothing more. When you know the answer it may be easy, but when I saw it for the first time it was hard. Should I place here any unique URL just like an identifier? Should it end with v1? And what is the value for the second instance? V2? Maybe I should place here the URL of my current datacenter (<a href="https://dbcs.emea.oraclecloud.com">https://dbcs.emea.oraclecloud.com</a>). The answer is <code>https://&lt;domain&gt;.storage.oraclecloud.com/v1</code>; in my case it is <code>https://rurittmanm.storage.oraclecloud.com/v1</code>. It stays the same for all instances.</p>
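<p>Just like the container reference earlier, this one also fits in a line of code (again, the helper name is my own):</p> <pre>
# Cloud Storage Base URL: only the identity domain varies; the host
# pattern and /v1 suffix stay the same for all instances.
def ocs_base_url(identity_domain: str) -> str:
    return f"https://{identity_domain}.storage.oraclecloud.com/v1"

print(ocs_base_url("rurittmanm"))  # -> https://rurittmanm.storage.oraclecloud.com/v1
</pre>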
<p>All other parameters are the same as they were for the DBCS instance. We either specify an existing Cloud Storage container or create one here.</p> <p><img src="http://www.rittmanmead.com/blog/content/images/2017/05/120-oacs-cloud-storage.png" alt="120-oacs-cloud-storage.png"></p> <p>The rest of the process is obvious: we get a <strong>Summary</strong> and then wait. It takes about 40 minutes to create a new instance.</p> <blockquote> <p>Note: the diagnostics here are <em>a bit poor</em>, and when it says that the instance start process is completed it may not be true. Sometimes it makes sense to wait some time before starting to panic.</p> </blockquote> <p>Now we may access our instance as usual. The only difference is that the port is 80, not 9502 (or 443 for SSL). For <em>Data Visualisation</em> the link is <code>http(s)://&lt;ip address&gt;/va</code>, for BIEE it is <code>http(s)://&lt;ip address&gt;/analytics</code>, and for Essbase <code>http(s)://&lt;ip address&gt;/essbase</code>. <em>Enterprise Manager</em> and the <em>Weblogic Server Console</em> are available at port <code>7001</code>, which is blocked by default.</p> <p>What is bad is that HTTPS uses a self-signed certificate. Depending on browser settings this may give an error or even prevent access over HTTPS. <br> <img src="http://www.rittmanmead.com/blog/content/images/2017/05/126-bad.png" alt=""></p> <p>The options here are either to use HTTP rather than HTTPS, or to add this certificate to your local computer. But these aren't options for a production server. Luckily, Oracle provides <a href="https://docs.oracle.com/cloud/latest/analytics-cloud/ACSAM/GUID-5B8C95B0-D2D1-4A8F-9213-442E97711677.htm#ACSAM-GUID-10CFEC8F-4388-46EA-B4A0-074BB5928A28">a way to use your own SSL certificates</a>.</p> <h1 id="typicalmanagementtasks">Typical Management Tasks</h1> <h2 id="sshtoinstances">SSH to Instances</h2> <p>During the setup process, we provide Oracle with a public key which is used to get SSH access to the instances. The cloud does nothing special here. In the case of Windows, we may use Putty: just add the private key to Pageant and connect to the instance using the user <code>opc</code>.</p> <p><img src="http://www.rittmanmead.com/blog/content/images/2017/05/140-pageant.png" alt="140-pageant.png"></p> <p><img src="http://www.rittmanmead.com/blog/content/images/2017/05/150-putty.gif" alt="150-putty.gif"></p> <h2 id="openingports">Opening Ports</h2> <p>By default only the absolute minimum of ports is open, and we can't connect to the OAC instance using the BI Admin tool or to the DB with SQLDeveloper. In order to do this, we should create an access rule which allows access to these particular ports.</p> <p>To get to the Access Rules interface, we must use the instance menu and select the Access Rules option.</p> <p><img src="http://www.rittmanmead.com/blog/content/images/2017/05/150-access-menu.png" alt="150-access-menu.png"></p> <p>This will open the Access Rules list. What I don't like about it is that it opens the full list of all rules, but we can create a rule only for this particular instance.</p> <p><img src="http://www.rittmanmead.com/blog/content/images/2017/05/160-access-rules-list.png" alt="160-access-rules-list.png"></p> <p>The new rule creation form is simple and should cause no issues. But be careful here not to open up too much to the wild Internet.</p> <p><img src="http://www.rittmanmead.com/blog/content/images/2017/05/170-new-rule.png" alt="170-new-rule.png"></p>
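<p>On Linux or macOS the Putty/Pageant flow above maps to plain OpenSSH, and for an occasional admin connection an SSH tunnel through the already-open SSH port can even stand in for an access rule. A sketch; the key file name is my choice, and 9514 is just an example port from the OBIEE range:</p> <pre>
# Generate a key pair; the public half is what we paste into the cloud console
ssh-keygen -t rsa -b 2048 -f ~/.ssh/oac_key

# Connect to the instance as the opc user
ssh -i ~/.ssh/oac_key opc@&lt;instance-ip&gt;

# Forward a port over SSH instead of opening it to the whole Internet
ssh -i ~/.ssh/oac_key -L 9514:localhost:9514 opc@&lt;instance-ip&gt;
</pre>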
<h2 id="addmoreusers">Add More Users</h2> <p>The user who registered the Cloud Account becomes its administrator and can invite more users and manage privileges.</p> <p><img src="http://www.rittmanmead.com/blog/content/images/2017/05/180-access-users.png" alt="180-access-users.png"></p> <p>Here we can add and modify users.</p> <p><img src="http://www.rittmanmead.com/blog/content/images/2017/05/190-users.png" alt="190-users.png"></p> <p>When we add a user we specify a name, email and login, and here we also set the roles for the user. The user will get an email with these details and a link to register.</p> <p>Obviously, the new user won't be asked for a credit card; they just start working and that's all.</p> <h1 id="summary">Summary</h1> <p>My first steps with Oracle Analytics Cloud were not very easy, but I think it was worth it. Now I can create a new OBIEE instance in just a few minutes, and one hour later it will be up and running. I think that's pretty fast compared to the normal process of creating a new server in a typical organisation. We don't need to think about OS installation, or licenses, or whatever else. Just <a href="https://cloud.oracle.com/tryit">try it</a>.</p> Andrew Fomin c3ddf0c9-f494-4d2a-8523-30b510ecf0e8 Thu Jun 01 2017 08:43:13 GMT-0400 (EDT) OBIA 11g: Oracle BI Applications on Cloud Services – Deployment Options https://blogs.oracle.com/biapps/obia-11g%3A-oracle-bi-applications-on-cloud-services-%E2%80%93-deployment-options <p style="text-align: justify;">The BIAPPS on PaaS Cloud Services setup primarily consists of four services: DBCS (Database Cloud Service), Compute, OAC (Oracle Analytics Cloud) or BICS (Business Intelligence Cloud Service), and Storage CS (Storage Cloud Service). Together, along with the required BI Applications software, these make up the BIAPPS on PaaS setup. Data is pulled from Oracle Cloud source applications like Fusion and Taleo via HTTPS. Data from on-premises sources can be pulled via a VPN connection or pushed by using a replication tool like Data Sync over an SSH tunnel.</p> <p>The white paper <a href="https://support.oracle.com/epmos/faces/DocumentDisplay?id=2264063.1">Doc Id 2264063.1</a> on support.oracle.com describes the deployment options for BI Applications on PaaS.</p> Note: To learn more, view all blogs on <a href="https://blogs.oracle.com/biapps/biapps-on-paas">BIAPPS on PAAS</a> <p>&nbsp;</p> Gunaranjan Vasireddy https://blogs.oracle.com/biapps/obia-11g%3A-oracle-bi-applications-on-cloud-services-%E2%80%93-deployment-options Tue May 30 2017 00:48:34 GMT-0400 (EDT) Multiple connection pools : brief refresh of common rules https://gianniceresa.com/2017/05/multiple-connection-pool-common-rules/ <p>Connection pools: those basic, but mandatory, things we all have in our RPD.
For normal usage we generally do not pay much attention to them: enter a host, username and password, and done; you are connected to the database and can move on to other tasks.<br /> But &#8230; there are still misunderstandings about how they work and how to use them.</p> <h2>Multiple connection pools in the same database but OBIEE always uses the first one: why?</h2> <p>Lately I have seen, a few times, people misunderstanding their role and the relationship between the database object and the connection pools it contains, ending up with errors because queries were executed against the wrong connection.</p> <p>For a given database object in the physical layer, OBIEE will always use the first connection pool (in top-down order) with a read privilege for the authenticated user when retrieving data for an analysis, prompt, etc.</p> <p>That&#8217;s how it works!<br /> You can have multiple connection pools inside the same database object; they can use different database users with different access rights, and you can import metadata from each one and model everything like that. But OBIEE will just ignore all of that and use the first one.<br /> The Admin tool will not warn you about it, in the same way it isn&#8217;t going to warn you if you don&#8217;t have a connection pool for a database at all.</p> <p>Since you can make physical joins between tables of different databases in the physical layer, if you need to source tables from two or more different schemas (that is, using different logins/passwords for the database), you must create separate database objects in the physical layer.</p> <div id="attachment_480" style="width: 310px" class="wp-caption aligncenter"><img class="wp-image-480 size-medium" src="https://gianniceresa.com/wp-content/uploads/2017/05/quick_multiple_connection_pools-300x201.png" alt="OBIEE multiple connection pools" width="300" height="201" srcset="https://gianniceresa.com/wp-content/uploads/2017/05/quick_multiple_connection_pools-300x201.png 300w, https://gianniceresa.com/wp-content/uploads/2017/05/quick_multiple_connection_pools.png 551w" sizes="(max-width: 300px) 100vw, 300px" /><p class="wp-caption-text">Only the 1st one will be used to retrieve data for analysis.</p></div> <h2>Multiple connection pools mainly for variables init blocks</h2> <p>The general practice is to create multiple connection pools for the same database object to split the &#8220;data&#8221; and &#8220;variables&#8221; pipes: the first one is used to retrieve data, while the second (or all the others) is used by init blocks to initialize variables.</p> <p>As variables are often initialized at login, it would be a problem if your query were stuck in a queue, waiting for the connection pool to become available because someone is running huge queries.<br /> So, to avoid delays at login, using an independent connection pool which is only used by init blocks is good practice. Variables queries are generally small and quick and return little data, so there is less chance (as long as the database works fine) of ending up in a queue waiting for &#8220;availability&#8221; of the pipe connecting to the database.</p> <p>This good practice is &#8220;enforced&#8221; by the Admin tool: by default the option &#8220;Allow first Connection Pool for Init Blocks&#8221; isn&#8217;t checked, meaning that you will not be allowed to select the first connection pool when defining variable init blocks.
As it just takes a right click &gt; Duplicate to get a second connection pool, it&#8217;s easy to follow this good practice.</p> <h2>tl;dr</h2> <ul> <li>A database object in the physical layer has at least one connection pool, but can also have more than one.</li> <li>For a given database object in the physical layer, all analyses will use the same connection pool (Direct Database Requests obviously work in a different way).</li> <li>The connection pool used (for analyses and prompts) is the first one with a &#8220;read&#8221; privilege for the user wanting to use it (based on the top-to-bottom order in the RPD).</li> <li>A good practice is to use a separate connection pool for variable initialization blocks.</li> <li>Instead of joining tables of different database objects because of two sets of credentials on the same database, consider granting a single user SELECT access to all the objects on the database side. This allows OBIEE to push the query fully down to the database instead of having to join datasets at the BI Server level, which is slower.</li> </ul> <p>The post <a rel="nofollow" href="https://gianniceresa.com/2017/05/multiple-connection-pool-common-rules/">Multiple connection pools : brief refresh of common rules</a> appeared first on <a rel="nofollow" href="https://gianniceresa.com">Gianni&#039;s world: things crossing my mind</a>.</p> Gianni Ceresa https://gianniceresa.com/?p=416 Mon May 29 2017 05:53:44 GMT-0400 (EDT) OBIEE 12c Docker image : Dive into the Dockerfile https://gianniceresa.com/2017/05/obiee-12c-docker-image-dive/ <p>Some time ago I uploaded on GitHub the code to build <a href="https://github.com/gianniceresa/docker-images/tree/master/OracleBIEE" target="_blank" rel="noopener noreferrer">Docker images for OBIEE 12c</a> (all 3 current releases). In the meantime, as Docker is now my main (and, with the exception of <a href="http://www.oracle.com/technetwork/middleware/bi-foundation/obiee-samples-167534.html" target="_blank" rel="noopener noreferrer">Sample Applications</a>, only) way to execute OBIEE, I improved and fixed things around the images, and I finally uploaded the new version.</p> <p>If all you are looking for is an OBIEE Docker container up and running in 10-15 minutes, follow the notes on the <a href="https://github.com/gianniceresa/docker-images/tree/master/OracleBIEE" target="_blank" rel="noopener noreferrer">GitHub page</a>.<br /> If, on the other hand, you are curious to understand a bit more about how OBIEE can be installed in a Docker container, this post is for you! It is time to have a closer look at how the image is built, at how OBIEE can be &#8220;dockerized&#8221;.</p> <p>For this post I will focus on the 12.2.1.2.0 version, but the others are much the same (12.2.1.1.0 is the exact same thing with different binaries, and 12.2.1.0.0 is just missing some validation steps because it isn't supported at the RCU level).</p> <h2>Dockerfile: the base of everything</h2> <p>The Dockerfile is the key element in building a Docker image, and the Docker documentation explains it quite clearly:</p> <blockquote><p>Docker can build images automatically by reading the instructions from a Dockerfile. A Dockerfile is a text document that contains all the commands a user could call on the command line to assemble an image. Using docker build users can create an automated build that executes several command-line instructions in succession.</p></blockquote>
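<p>In practice, building the image therefore boils down to a single command executed from the folder containing the Dockerfile and the downloaded binaries. A minimal sketch; the tag name is my choice:</p><pre class="crayon-plain-tag"># -t tags the resulting image; "." is the build context sent to the Docker daemon
docker build -t obiee:12.2.1.2.0 .</pre>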
<p>In a few words, the Dockerfile contains all the operations Docker must perform to build an image; it is composed of a set of commands telling Docker what to do.</p> <p>Each command is a step during the build process and produces a new slice, a new layer, of the image. A Docker image is a set of filesystem slices put together, one on top of the other, to produce the &#8220;virtual filesystem&#8221; the container uses as its base when executed.</p> <p>This concept is important when &#8220;size matters&#8221;.<br /> The final size of the image is the sum of the sizes of all the slices composing it. Because of that, a key point when building Docker images is finding a good balance between readability and maintainability of the Dockerfile on one side, and a reduced number of commands producing a smaller image (from a disk size point of view) on the other.</p> <div id="attachment_424" style="width: 835px" class="wp-caption aligncenter"><a href="https://gianniceresa.com/wp-content/uploads/2017/05/full_dockerfile.gif"><img class="size-large wp-image-424" src="https://gianniceresa.com/wp-content/uploads/2017/05/full_dockerfile-825x1024.gif" alt="" width="825" height="1024" srcset="https://gianniceresa.com/wp-content/uploads/2017/05/full_dockerfile-825x1024.gif 825w, https://gianniceresa.com/wp-content/uploads/2017/05/full_dockerfile-242x300.gif 242w, https://gianniceresa.com/wp-content/uploads/2017/05/full_dockerfile-768x953.gif 768w, https://gianniceresa.com/wp-content/uploads/2017/05/full_dockerfile-1080x1340.gif 1080w" sizes="(max-width: 825px) 100vw, 825px" /></a><p class="wp-caption-text">This Dockerfile will produce a fully working OBIEE image</p></div> <p>The file on GitHub focuses more on readability than size. The final OBIEE image is around 18.6 GB, while another image I use for my developments is more focused on minimal size and is only 9.2 GB with exactly the same software and functionality (actually it has a few extra pieces like Git, etc.).</p>
<p>Once you are comfortable with Dockerfiles, you will easily find how to adapt this one to reduce the final size of the image.</p> <p>Let&#8217;s dive into the Dockerfile for OBIEE 12c and go through every single step &#8230;</p> <h2>FROM: Base image on which to build the OBIEE image</h2> <p></p><pre class="crayon-plain-tag"># Pull base image
# ---------------
FROM oraclelinux:7-slim</pre><p>The beginning of every Dockerfile is a reference to an existing image which will be used as the base. It would make no sense to try to create a new Linux image from scratch every time, so it always starts from an existing image, ideally the smallest and most minimal one possible for the need.<br /> Oracle published a <a href="https://hub.docker.com/_/oraclelinux/" target="_blank" rel="noopener noreferrer">set of Oracle Linux images</a> freely accessible on the Docker Hub. As OBIEE 12c is certified with OEL7, the simplest is to start from that one.</p> <p>As the idea is still to have only OBIEE and only what is required to make it work, I use the &#8220;7-slim&#8221; version of Oracle Enterprise Linux, which is the smallest one available, with a really minimalist set of things installed by default.</p> <h2>MAINTAINER: Who puts together the image file</h2> <p></p><pre class="crayon-plain-tag"># Maintainer
# ----------
MAINTAINER Gianni Ceresa &lt;gianni.ceresa@datalysis.ch&gt;</pre><p>The MAINTAINER command is now deprecated and replaced by LABEL, which is a more generic command to add metadata to the image.</p>
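<p>For the record, the LABEL equivalent would look something like the line below; a sketch, where the <code>maintainer</code> key is a common convention rather than something the original file defines:</p><pre class="crayon-plain-tag"># LABEL replaces the deprecated MAINTAINER instruction with generic key/value metadata
LABEL maintainer="Gianni Ceresa &lt;gianni.ceresa@datalysis.ch&gt;"</pre>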
<p>As I just found this out now, I will have to update the Dockerfile at some point to move to the new command, before support for MAINTAINER is removed and the build fails with an error.<br /> The idea of this command is to identify the author of the image.</p> <h2>ENV: Environment variables available during build and execution</h2> <p></p><pre class="crayon-plain-tag"># Environment variables required for this build (do NOT change)
# --------------------------------------------------------------
ENV INSTALL_FILE_JDK="jdk-8u101-linux-x64.rpm" \
    INSTALL_FILE_WLS="fmw_12.2.1.2.0_infrastructure_Disk1_1of1.zip" \
    INSTALL_FILE_BI_1="fmw_12.2.1.2.0_bi_linux64_Disk1_1of2.zip" \
    INSTALL_FILE_BI_2="fmw_12.2.1.2.0_bi_linux64_Disk1_2of2.zip" \
    OBIEE_VERSION="12.2.1.2.0" \
    INSTALL_FILE_RSP_WLS="weblogic.rsp" \
    INSTALL_FILE_RSP_BI="obiee.rsp" \
    INSTALL_FILE_RSP_CONFIG="bi_config.rsp" \
    RUN_FILE="runOBIEE.sh" \
    ORACLE_BASE=/opt/oracle

# Use second ENV so that variables get substituted
ENV INSTALL_DIR=$ORACLE_BASE/install \
    ORACLE_HOME=$ORACLE_BASE/product/$OBIEE_VERSION \
    DOMAIN_HOME=$ORACLE_BASE/config/domains</pre><p>Like in any script, using variables makes things more flexible and dynamic: you generally only have to change values in a single place at the top, and everything else adapts. In Docker it's the same thing, except that these aren't Docker variables but Linux environment variables, accessible by any command or script executed later.<br /> This is of course also a limitation: if you store a secret password as an environment variable, it isn't really secret anymore. Anyone with access to the image will be able to see it.</p> <p>In this case, you can see there are only two ENV commands but several variables set. As each command is a slice of the final image, it's good practice to group commands together when possible (and when it makes sense), using a backslash &#8221; \ &#8221; to separate them.<br /> The only reason to have two ENV commands and not a single one is that, to reference another variable, it must exist first: in the second ENV I reuse variables set in the first one (a tiny illustration of this pitfall is shown at the end of this section).</p> <p>From an OBIEE point of view, you can see that some variables start to define the on-disk structure of the OBIEE setup.</p> <div id="attachment_431" style="width: 582px" class="wp-caption aligncenter"><img class="size-full wp-image-431" src="https://gianniceresa.com/wp-content/uploads/2017/05/file_structure.png" alt="" width="572" height="333" srcset="https://gianniceresa.com/wp-content/uploads/2017/05/file_structure.png 572w, https://gianniceresa.com/wp-content/uploads/2017/05/file_structure-300x175.png 300w" sizes="(max-width: 572px) 100vw, 572px" /><p class="wp-caption-text">From the official documentation, advised filesystem structure for product and configuration</p></div> <p>ORACLE_HOME and DOMAIN_HOME will be the locations where the product and the config are stored. The main reason for splitting them into two different folders (by default the config would be inside the ORACLE_HOME, in the &#8220;user_projects/domains&#8221; folder) is to make upgrades easy.<br /> If you upgrade from 12.2.1.1.0 to 12.2.1.2.0, you install the new product (the newer version of the code) but keep the configured domain, which will be deployed on the new OBIEE. By adopting this structure, the Docker image allows you to practice and test product upgrades the same way they will be done on your real production environment, making this image a good sandbox to practice scripts and processes.</p>
<h2>COPY: Load files from the host inside the image</h2> <p></p><pre class="crayon-plain-tag"># Copy binaries
# -------------
COPY $INSTALL_FILE_JDK $INSTALL_FILE_WLS $INSTALL_FILE_BI_1 $INSTALL_FILE_BI_2 $INSTALL_DIR/
COPY $INSTALL_FILE_RSP_WLS $INSTALL_FILE_RSP_BI $INSTALL_FILE_RSP_CONFIG $RUN_FILE _configureOBIEE.sh _dropRCU.sh _validateRCU.sh $ORACLE_BASE/</pre><p>COPY is used to make the binaries and other required files available inside the image. All these files are located next to the Dockerfile on the host, and with this command I make them available on the image filesystem at the specified path.</p> <p>From an image-layers point of view, this step is the first one producing a big slice, as it contains all the binaries required for the installation.</p> <h2>RUN: The command executing operations</h2> <p></p><pre class="crayon-plain-tag"># Setup filesystem and oracle user
# Adjust file permissions, go to /opt/oracle as user 'oracle' to proceed with Oracle Business Intelligence installation
# Install pre-req packages + Oracle JDK
# Make sure the run file is executable
# -----------------------------------------------------------------------
RUN chmod ug+x $INSTALL_DIR/$INSTALL_FILE_JDK &amp;&amp; \
    groupadd -g 500 dba &amp;&amp; \
    groupadd -g 501 oinstall &amp;&amp; \
    useradd -d /home/oracle -g dba -G oinstall,dba -m -s /bin/bash oracle &amp;&amp; \
    echo oracle:oracle | chpasswd &amp;&amp; \
    yum -y install oracle-rdbms-server-12cR1-preinstall unzip wget tar openssl &amp;&amp; \
    yum -y remove java-openjdk java-openjdk-headless &amp;&amp; \
    yum -y install $INSTALL_DIR/$INSTALL_FILE_JDK &amp;&amp; \
    yum clean all &amp;&amp; \
    rm $INSTALL_DIR/$INSTALL_FILE_JDK &amp;&amp; \
    touch $ORACLE_BASE/oraInst.loc &amp;&amp; \
    echo inventory_loc=$ORACLE_BASE/oraInventory &gt; $ORACLE_BASE/oraInst.loc &amp;&amp; \
    echo inst_group= &gt;&gt; $ORACLE_BASE/oraInst.loc &amp;&amp; \
    mkdir -p $ORACLE_HOME &amp;&amp; \
    mkdir -p $DOMAIN_HOME &amp;&amp; \
    chown -R oracle:dba $ORACLE_BASE &amp;&amp; \
    chmod ug+x $ORACLE_BASE/$RUN_FILE</pre><p>By default, the oraclelinux:7-slim image starts with the root user. That is the right user for generic actions like adding groups and users, performing updates or installing required packages using YUM: all the actions you would first perform when connecting to a clean Linux host, before installing your tool.</p> <p>For OBIEE I add the <em>dba</em> and <em>oinstall</em> groups first and then create the &#8220;oracle&#8221; user as a member of these groups. These groups are generally used for database installations; they appear here because I started from the Oracle Database Dockerfile when creating the first OBIEE one. The names of the groups don't really matter, and the dedicated user exists mainly because you are never supposed to install and run OBIEE as root, so there is no reason not to follow the same rule here.<br /> YUM also installs the database prerequisites, mainly because with these in place you do not have a single problem of missing packages or requirement checks when installing OBIEE. It's a nice way to keep the installation of prerequisites simple, even if it probably installs a few extra pieces I don't really need (simplicity vs. size in this case).</p> <p>For Java, OBIEE has some precise requirements based on the certification matrix, and that's why the official Oracle JDK is used instead of other versions. The Docker image will be as close as possible to a certified install (except for Docker itself, which is currently, at the time of writing, still not supported).</p> <p>The last steps are the creation of the required folder structure where OBIEE will be installed, and making sure the &#8220;oracle&#8221; user has the required ownership and permissions on these folders.</p> <p>Jumping back to image layers and the use of as few commands as possible, you can see that all the commands are grouped into a single RUN, using &#8221; &amp;&amp; \ &#8221; to concatenate them. It would make no sense to use a RUN for each row, producing a slice of the image for each one. As all these operations generate the requirements at the OS/environment level before the install, it makes sense to group them in a single step.</p>
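<p>The classic illustration of why grouping matters is file deletion; a generic sketch, not from the actual file: a file removed in a later RUN still occupies space in the earlier layer.</p><pre class="crayon-plain-tag"># Two RUN commands: big.zip is deleted, but it is still stored inside the
# layer created by the first RUN, so the final image does not get smaller.
RUN wget -O /tmp/big.zip http://example.com/big.zip
RUN rm /tmp/big.zip

# One RUN command: the file never ends up in a committed layer.
RUN wget -O /tmp/big.zip http://example.com/big.zip &amp;&amp; rm /tmp/big.zip</pre>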
<h2>RUN: Replace some placeholders to make the install &#8220;dynamic&#8221;</h2> <p></p><pre class="crayon-plain-tag"># Replace place holders (and force /dev/urandom for java)
# ---------------------
RUN sed -i -e "s|###ORACLE_HOME###|$ORACLE_HOME|g" $ORACLE_BASE/$INSTALL_FILE_RSP_WLS &amp;&amp; \
    sed -i -e "s|###ORACLE_HOME###|$ORACLE_HOME|g" $ORACLE_BASE/$INSTALL_FILE_RSP_BI &amp;&amp; \
    sed -i -e "s|###ORACLE_HOME###|$ORACLE_HOME|g" $ORACLE_BASE/$INSTALL_FILE_RSP_CONFIG &amp;&amp; \
    sed -i -e "s|###DOMAIN_HOME###|$DOMAIN_HOME|g" $ORACLE_BASE/$INSTALL_FILE_RSP_CONFIG &amp;&amp; \
    sed -i -e "s|###ORACLE_HOME###|$ORACLE_HOME|g" $ORACLE_BASE/$RUN_FILE &amp;&amp; \
    sed -i -e "s|###ORACLE_BASE###|$ORACLE_BASE|g" $ORACLE_BASE/$RUN_FILE &amp;&amp; \
    sed -i -e "s|source=file:/dev/random|source=file:/dev/urandom|g" /usr/java/default/jre/lib/security/java.security &amp;&amp; \
    sed -i -e "s|source=file:/dev/urandom|source=file:/dev/./urandom|g" /usr/java/default/jre/lib/security/java.security</pre><p>This one is a separate RUN purely for readability; it could perfectly well be included in the previous one.<br /> This step only replaces values inside the response files that will be used for the installation of Weblogic and OBIEE, plus one really important change: pointing Java to /dev/urandom instead of /dev/random.</p> <p>Installation and execution in virtual machines, and even more in containers, often suffer from /dev/random blocking the configuration or start process because of a lack of entropy. As the container doesn't do anything other than run OBIEE, there isn't much generating entropy like on a normal server.</p> <p>There are tons of articles online about how good or bad it is to point to /dev/urandom; the key element is that without this change there is a good chance your container will never finish the configuration of OBIEE, so it's a fact that /dev/urandom helps. The little trick of using /dev/./urandom is needed because various versions of Java wanted to be smarter and automatically replaced /dev/urandom with /dev/random, bringing back the issue.</p> <p>Alternative solutions exist, like deleting /dev/random in the container and making a symbolic link to /dev/urandom, or mapping /dev/urandom of your host to /dev/random of the Docker container, etc.<br /> By making the change in the java.security file there isn't anything else to care about: it will work!</p>
<h2>USER+RUN: Switch to a different user before installing</h2> <p></p><pre class="crayon-plain-tag"># Start installation
# -------------------
USER oracle
RUN cd $INSTALL_DIR &amp;&amp; \
    unzip $INSTALL_FILE_WLS -d ./tmp_wls &amp;&amp; \
    rm $INSTALL_FILE_WLS &amp;&amp; \
    java -jar $(find $INSTALL_DIR/tmp_wls -name *.jar) -silent -responseFile $ORACLE_BASE/$INSTALL_FILE_RSP_WLS -invPtrLoc $ORACLE_BASE/oraInst.loc &amp;&amp; \
    rm -rf $INSTALL_DIR/tmp_wls &amp;&amp; \
    unzip $INSTALL_FILE_BI_1 -d ./tmp_bi &amp;&amp; \
    rm $INSTALL_FILE_BI_1 &amp;&amp; \
    unzip $INSTALL_FILE_BI_2 -d ./tmp_bi &amp;&amp; \
    rm $INSTALL_FILE_BI_2 &amp;&amp; \
    $(find $INSTALL_DIR/tmp_bi -name *.bin) -silent -responseFile $ORACLE_BASE/$INSTALL_FILE_RSP_BI -invPtrLoc $ORACLE_BASE/oraInst.loc &amp;&amp; \
    rm -rf $INSTALL_DIR/tmp_bi &amp;&amp; \
    rm -rf $INSTALL_DIR</pre><p>The USER command tells Docker to switch to a different user and perform the following commands as that new user. In my case, as the environment is ready, it's time to start using the freshly created &#8220;oracle&#8221; user.</p> <p>The installation is quite straightforward and well documented: unzip, install, delete the install files.</p> <p>First Weblogic, which is installed by calling a Java JAR, then OBIEE, by executing the &#8220;bin&#8221; file.<br /> As you can see, all the installations are done in &#8220;silent mode&#8221;, requiring no interaction or wizards, as all the parameters are provided in the response files. This method of installation is the one you are supposed to use when installing your normal OBIEE environments too, as it's the best way to guarantee all your environments are aligned, with the same settings and properties.</p> <p>Sadly, too often users aren't aware of this method, or just ignore it, and keep doing visual installations using X-window emulators and things like that.<br /> I will never repeat it enough: the more you automate and script steps, the fewer human errors you will have.</p>
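<p>The same silent pattern works outside Docker too; a sketch on a plain Linux host, reusing the same response files (the staging paths are my choice, and the installer file names are assumed to be the ones extracted from the downloaded archives):</p><pre class="crayon-plain-tag"># Weblogic infrastructure: silent install driven by the response file
java -jar fmw_12.2.1.2.0_infrastructure.jar -silent \
     -responseFile /stage/weblogic.rsp -invPtrLoc /stage/oraInst.loc

# OBIEE binaries: same idea with the .bin installer
./fmw_12.2.1.2.0_bi_linux64.bin -silent \
     -responseFile /stage/obiee.rsp -invPtrLoc /stage/oraInst.loc</pre>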
<p>This method of installation is the one you are supposed to use when installing your normal OBIEE environments as well, as it&#8217;s the best way to guarantee all your environments are aligned with the same settings and properties.</p> <p>Sadly, too often users aren&#8217;t aware of this method, or just ignore it, and keep doing visual installations by using X-window emulators and things like that.<br /> I will never repeat it enough: the more you automate / script steps, the fewer human errors you will have.</p> <h2>WORKDIR+EXPOSE: Set default directory and ports the container can listen to</h2> <p></p><pre class="crayon-plain-tag"># Set work directory &amp; Expose ports #-------------- WORKDIR $ORACLE_BASE EXPOSE 9500-9514 9799</pre><p>The WORKDIR command sets the base folder for any following operation. As there aren&#8217;t any other RUN commands after it, this folder is also the default one when you connect to a running container or execute commands on it. In my case I set it to the main folder where I can easily find configuration, product and logs in the folders underneath.</p>
<p>EXPOSE is, in the case of OBIEE, extremely important as it tells which ports the container will listen to. Without this command OBIEE would be up and running inside your Docker container, but you would have no way to connect to it from outside: OBIEE would be accessible only from inside the container itself, making it a bit useless.<br /> Exposing ports doesn&#8217;t mean your container will use them; you can decide which of these ports to bind (one, many, all, none) when creating the container.</p> <p>OBIEE uses many ports and, in 12c, the range 9500-9514 is used by default. That&#8217;s why this range is exposed, as well as 9799 (Essbase 12c installed with OBIEE).</p>
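<p>As a quick illustration (image and container names are invented for the example), binding all the exposed ports when creating the container looks like this:</p><pre class="crayon-plain-tag"># map the whole OBIEE port range plus the Essbase port to the host
docker run -d --name obiee -p 9500-9514:9500-9514 -p 9799:9799 oracle/obiee:12.2.1.2.0</pre>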
<h2>CMD: The default command to execute when starting a container</h2> <p></p><pre class="crayon-plain-tag"># Define default command to start Oracle Business intelligence. CMD $ORACLE_BASE/$RUN_FILE</pre><p>The last step, and the most important one to make your container simple to use. A container has a default command defined, the command which will be executed when starting the container, as long as it isn&#8217;t overridden by the &#8220;docker run&#8221; command itself.</p> <p>Because a container runs only as long as the command it executes is running, this command must be a kind of &#8220;infinite loop&#8221;: something which can run forever if needed, but which will also listen for stop commands and perform a clean shutdown of OBIEE before exiting.<br /> For this reason it&#8217;s a custom script: <a href="https://github.com/gianniceresa/docker-images/blob/master/OracleBIEE/12.2.1.2.0/runOBIEE.sh" target="_blank" rel="noopener noreferrer">runOBIEE.sh</a>. Here again I used the Docker Oracle Database GitHub code as an example to start setting up my own script.</p> <p>An additional reason for being a custom script is that it must have a dual logic: at the first execution OBIEE isn&#8217;t configured, so instead of starting the tool the script must first configure it (which means the RCU creates the schemas, the domain is created and finally OBIEE is started). At any other execution, when restarting an existing container, it must only start OBIEE as it is already configured.</p>
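<p>The real logic lives in the linked runOBIEE.sh; purely as an illustration of that dual behaviour (paths, flags and file names below are indicative, not a copy of the actual script), the skeleton could look like this:</p><pre class="crayon-plain-tag">#!/bin/bash
# Sketch of the dual configure/start logic, NOT the actual runOBIEE.sh

stop_obiee() {
  # clean shutdown of OBIEE when Docker sends SIGTERM ("docker stop")
  $DOMAIN_HOME/bi/bitools/bin/stop.sh
  exit 0
}
trap stop_obiee SIGTERM SIGINT

if [ ! -d "$DOMAIN_HOME/bi" ]; then
  # first execution: no domain yet, so configure OBIEE
  # (RCU schemas + BI domain creation, driven by a response file)
  $ORACLE_HOME/bi/bin/config.sh -silent -responseFile $ORACLE_BASE/bi_config.rsp
else
  # any other execution: the domain exists, just start OBIEE
  $DOMAIN_HOME/bi/bitools/bin/start.sh
fi

# keep the container alive; the trap above takes care of the clean shutdown
while true; do sleep 1; done</pre>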
<h2>The final result: a set of layers composing the image</h2> <p>When building an image based on this Dockerfile the result is a 13-step image composed of various slices. If I inspect the layers this is the output (by adding --no-trunc to <em>docker history</em> you can get the full list of commands for every single layer):</p> <div id="attachment_434" style="width: 1034px" class="wp-caption aligncenter"><a href="https://gianniceresa.com/wp-content/uploads/2017/05/image_layers.png"><img class="size-large wp-image-434" src="https://gianniceresa.com/wp-content/uploads/2017/05/image_layers-1024x366.png" alt="" width="1024" height="366" /></a><p class="wp-caption-text">At the bottom the layers of Oracle Linux &#8220;7-slim&#8221;, with the slices added by the OBIEE 12c Dockerfile on top</p></div> <p>When inspecting the &#8220;history&#8221; of the Docker image you clearly see every single step and also where the space on disk is lost. But you also see one of the key concepts of Docker, which is the way images are managed as a sum of layers. A powerful concept if you use it well, but one that also has an impact on the storage driver you choose, etc. For more details about this topic there is an <a href="https://docs.docker.com/engine/userguide/storagedriver/imagesandcontainers/" target="_blank" rel="noopener noreferrer">interesting page</a> in the Docker documentation.</p> <h2>TL;DR : A Dockerfile is just all the steps you would do by hand &#8230;</h2> <p>As you can see there isn&#8217;t anything really special other than the <em>runOBIEE.sh</em> script, as it must manage various things. All the other steps are things you would do by hand when installing an environment in a controlled way (response files, not wizards).</p> <p>In the end the Dockerfile can also be used as &#8220;documentation&#8221; for a silent install in a VM or a physical server: response files are provided, values are all described and the order of execution is clearly defined.</p> <p>If you aren&#8217;t a fan of Docker and prefer <a href="https://www.ansible.com/" target="_blank" rel="noopener noreferrer">Ansible</a> or any other similar tool allowing you to script/automate environment provisioning, you can easily adapt the Dockerfile steps to generate OBIEE environments.</p> <p>The post <a rel="nofollow" href="https://gianniceresa.com/2017/05/obiee-12c-docker-image-dive/">OBIEE 12c Docker image : Dive into the Dockerfile</a> appeared first on <a rel="nofollow" href="https://gianniceresa.com">Gianni&#039;s world: things crossing my mind</a>.</p> Gianni Ceresa https://gianniceresa.com/?p=423 Wed May 24 2017 04:21:27 GMT-0400 (EDT) ODTUG Kscope17 Women in Technology Event & 2017 Women in Technology Scholar http://www.odtug.com/p/bl/et/blogaid=720&source=1 Attend one of the hottest gatherings of the year – the ODTUG Kscope17 Women in Technology Event. Join men and women on Wednesday, June 28, at 12:15 PM for lunch, networking, and conversations surrounding workplace gender equality, workplace perception, work/life balance, and more. ODTUG http://www.odtug.com/p/bl/et/blogaid=720&source=1 Tue May 23 2017 10:51:57 GMT-0400 (EDT) Slides from the Ireland OUG Meetup May 2017 http://www.oralytics.com/2017/05/slides-from-ireland-oug-meetup-may-2017.html <p>Here are some of the slides from our meetup on 11th May 2017.</p> <iframe src="//www.slideshare.net/slideshow/embed_code/key/pxFoglEY2BRADf" width="595" height="485" frameborder="0" marginwidth="0" marginheight="0" scrolling="no" style="border:1px solid #CCC; border-width:1px; margin-bottom:5px; max-width: 100%;" allowfullscreen> </iframe> <div style="margin-bottom:5px"> <strong> <a href="//www.slideshare.net/BrendanTierney/ireland-oug-meetup-may-2017" title="Ireland OUG Meetup May 2017" target="_blank">Ireland OUG Meetup May 2017</a> </strong> from <strong><a target="_blank" href="https://www.slideshare.net/BrendanTierney">Brendan Tierney</a></strong> </div> <p>The remaining slides will be added when they are available.
</p> Brendan Tierney tag:blogger.com,1999:blog-4669933501315263808.post-5842583018264301644 Tue May 23 2017 03:47:00 GMT-0400 (EDT) Using OBIA with Fusion 12 https://blogs.oracle.com/biapps/using-obia-with-fusion-12 <p>Customers who are in the process of upgrading their Fusion Applications to Release 12.1 will need to make the following changes in OBIA to ensure smooth ETL runs.</p> <ol> <li>Download <a href="http://aru.us.oracle.com:8080/ARU/ViewPatchRequest/process_form?aru=21175627">Patch 21175627</a> from MOS, which contains the View Object (VO) diffs between FA Rel 11 and Rel 12.1. Customers will have to review the impact to BI Apps Modules and Interfaces listed in the sheet &#39;BIApps Interfaces Impacted&#39; and make the necessary changes in OBIA.</li> <li>Apply the Cloud Adapter patches documented in the tech notes published under Master Note ID <a href="https://support.oracle.com/epmos/faces/DocumentDisplay?id=2259509.1">2259509.1</a>.</li> </ol> <p>It is also recommended that OBIA customers opt in for the Monthly Fusion patches from June 2017 through October 2017 so that they automatically get the required updates for Fusion.</p> Gunaranjan Vasireddy https://blogs.oracle.com/biapps/using-obia-with-fusion-12 Tue May 23 2017 00:41:13 GMT-0400 (EDT) The Dark Cloud of the H1-B Fallout for Indian Companies: Layoffs or Reduced Valuations http://bi.abhinavagarwal.net/2017/05/the-dark-cloud-of-h1-b-fallout-for.html <div dir="ltr" style="text-align: left;" trbidi="on"><div dir="ltr" style="text-align: left;" trbidi="on"><div class="separator" style="clear: both; text-align: center;"><a href="https://4.bp.blogspot.com/-N96Tigg2Tc0/WSK_YbBLtQI/AAAAAAAAOKA/WNuV7OEU9AMYIUeJ5jX2bmL4d8sbhvKZACLcB/s1600/pexels-photo-236970.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="242" src="https://4.bp.blogspot.com/-N96Tigg2Tc0/WSK_YbBLtQI/AAAAAAAAOKA/WNuV7OEU9AMYIUeJ5jX2bmL4d8sbhvKZACLcB/s640/pexels-photo-236970.jpg" width="640" /></a></div>India's second-largest IT company, Infosys, put out a press release on the 2nd of May, 2017 (<a href="https://www.infosys.com/newsroom/press-releases/Pages/technology-innovation-hubs-usa.aspx">link</a>), that it would be hiring "10,000 American Workers Over the Next Two Years and establish four new Technology and Innovation Hubs across the country focusing on cutting-edge technology areas, including artificial intelligence, machine learning, user experience, emerging digital technologies, cloud, and big data."<br /><blockquote class="twitter-tweet" data-lang="en"><div dir="ltr" lang="en">We're looking for top technologists, innovators, engineers, architects, leaders &amp; more. Join us!
<a href="https://twitter.com/hashtag/Jobs?src=hash">#Jobs</a> <a href="https://twitter.com/hashtag/Hiring?src=hash">#Hiring</a> <a href="https://twitter.com/hashtag/Tech?src=hash">#Tech</a> <a href="https://t.co/tvN5LylqD9">https://t.co/tvN5LylqD9</a> <a href="https://t.co/LpQxFDKt1y">pic.twitter.com/LpQxFDKt1y</a></div>— Infosys Careers (@InfosysCareers) <a href="https://twitter.com/InfosysCareers/status/859444172784316416">May 2, 2017</a></blockquote>The first hub, the Infosys press release stated, was expected to open by August in Indiana, which coincidentally is also the home state of the US Vice President, and which would create 2,000 new jobs in the state.<br />Infosys wasted no time in advertising for jobs in the United States, prominently linking it to its announcement. Nor was there any dearth of tweets on social media site Twitter to give this news more amplification - see <a href="https://twitter.com/GovHolcomb/status/859398477389942785">this, </a><a href="https://twitter.com/GovHolcomb/status/859393118197739522">this</a>, <a href="https://twitter.com/IndyMayorJoe/status/859400674379001856">this</a>, <a href="https://twitter.com/Indiana_EDC/status/859413971073421312">this</a>, or <a href="https://twitter.com/Indiana_EDC/status/859505008215285761">this</a>. <br /><br />While this is certainly good news for the United States and for its President Donald Trump's goal of making American "Great Again", the impact on outsourcing companies like Infosys is likely to be less positive.<br /><br /><a name='more'></a><br />If Infosys is hiring 10,000 American workers, then it will have to pay them salaries as per prevailing rates. Even accounting for the relatively low cost of labour in the midwestern state of Indiana, the average annual cost of one American worker is likely to be over US$100,000. This includes the wages and overheads like infrastructure, administrative, and other costs. The figure of $100,000 may well be conservative, but it's a nice, round number to work with. Using these two numbers, we get the figure of $1 billion - 10,000 workers multiplied by $100,000 per worker. One billion dollars a year is what Infosys will end up paying these 10,000 American workers. Keep this figure in your mind while we compute a couple of other numbers.<br /><br />Infosys' offshore costs, on the other hand, are much lower. While Infosys does not share out these numbers, it would be very surprising if its Indian offshore costs were more than $25,000 per-year per-employee. Given that Infosys and other Indian services companies hire college graduates at less than ₹5 lacs a year (which is approximately $7,500 a year), and that these companies tend to concentrate their workforce towards the younger end of the age spectrum, the figure of $25,000 per-year per-employee is on the higher side. But let's work with this number and keep this also in your head for just a little bit.<br /><br /><img src="https://media.licdn.com/mpr/mpr/AAEAAQAAAAAAAA0nAAAAJGZlOGRmMDU4LTc2ZGYtNDgxNi1iMWU1LTkyMGYwNWNkOTUyMw.png" /> <br /><br />So what are the implications of this hiring in the US for Infosys? Let us make some assumptions and see where each assumption leads us.<br /><br /><b>Net New Hiring.</b><br />If we assume that this is net new hiring Infosys is looking at, it means an annual increase of $1 billion in costs. 
For Infosys to maintain its gross margins of <b>39%</b> (for FY2016-17), it would have to find additional revenue of roughly <b>$1.64 billion</b> from these 10,000 employees to keep its margins constant (at a 39% gross margin, costs are 61% of revenue, so $1 billion of additional cost requires $1 billion ÷ 0.61 ≈ $1.64 billion of additional revenue). To put that number in perspective, an additional $1.64 billion in revenue would mean an additional <b>16%</b> or so over its FY2016-17 revenues. Given that revenue growth for Indian services companies has slowed down to the mid-single-digits in recent times, a figure like that looks very, very ambitious. And unrealistic.<br /><br /><b>Workforce Optimization ("Layoffs")</b><br />The other option is for Infosys to cut an equivalent number of employees from India. If we take the ratio of <b>4:1</b> for US-to-Indian costs, we arrive at a number of <b>40,000 </b>employees that Infosys would have to retrench from its Indian operations. That is a huge number, almost <b>20%</b> of its existing workforce. It cannot possibly hope to achieve such a drastic cut in headcount without serious domestic repercussions. Even then, it's a <b>Sisyphean</b> task. If it reduces its workforce, then yes - it reduces its costs, but it also loses the revenue it would have otherwise earned from those 40,000 (or whatever number it comes up with) workers. So it has to reduce its workforce even more. And so on...<br /><blockquote class="twitter-tweet" data-lang="en"><div dir="ltr" lang="en">Infosys will hire 10,000 American workers, build 4 new technology and innovation hubs in the U.S. over next 2 years <a href="https://t.co/NGWdpjxuwU">https://t.co/NGWdpjxuwU</a></div>— Infosys (@Infosys) <a href="https://twitter.com/Infosys/status/859265810304573440">May 2, 2017</a></blockquote><b>Lowered Margins, Lowered Market Cap</b><br />The third option is for Infosys to convince Dalal Street - the financial markets - to live with reduced margins. This could mean either a higher P/E ratio or a lowered market cap. Will its stockholders, the Board, and its management agree to it? Accepting lowered valuations has its own set of implications for the company, its brand, and its ability to hire and retain talent. Clearly, Infosys will accept lowered valuations out of compulsion, not choice.<br /><br /><b>Robbing Peter to Pay Paul?</b><br />Where will Infosys get the 10,000 new workers from? After all, there is more than one way to skin a cat, as the idiom goes. <b>First</b>, Infosys could move some of its employees from other locations within the United States to the upcoming Indiana and other planned centers. <b>Second</b>, who keeps track of the accounting? I.e., are these going to be 10,000 net new jobs? Are they going to be permanent jobs, or would even temporary jobs created as a result - like in construction - be counted? Is this number of ten thousand exclusively for technology jobs, or would even administrative, janitorial, and service jobs be included? <b>Third</b>, as per this press release, <a href="http://mailchi.mp/iedc/news-indiana-attracts-global-technology-firm-2000-new-high-skilled-jobs">"Infosys plans to create up to 2,000 new, high-skilled jobs in central Indiana by the end of 2021." </a>Infosys today employs all of 140 people in Indiana, according to the same press release, and will add another 100 new jobs by the end of 2017. These are admittedly small numbers.
Yes, there are other centers that Infosys plans to open across the country, but these are only a few hundred new jobs - whose exact nature and skill level are as yet uncertain - that we are talking about. <br /><blockquote class="twitter-tweet" data-lang="en"><div dir="ltr" lang="en">Heartfelt thx <a href="https://twitter.com/GovHolcomb">@GovHolcomb</a> your team, <a href="https://twitter.com/imravikumars">@imravikumars</a> &amp; team <a href="https://twitter.com/Infosys">@infosys</a> for opening up a great new frontier of innovation &amp; creating our futures! <a href="https://t.co/vsOcjeDJVr">https://t.co/vsOcjeDJVr</a></div>— Vishal Sikka (@vsikka) <a href="https://twitter.com/vsikka/status/859437075787108352">May 2, 2017</a></blockquote><b>A Tax You Cannot Refuse</b><br />The cost of hiring these American workers can be seen as a tax. A tax that will be borne by American customers, by Infosys, or by a third party, or a mix thereof. How so? If Infosys does nothing new, then, as I have outlined in the preceding paragraphs, its margins will suffer, and consequently, its market valuation. This reduced market cap is a tax borne entirely by Infosys and its shareholders. If, on the other hand, Infosys is able to pass on these new costs to its customers, then the customers and their customers in turn pay this tax. In all likelihood, those end-customers would be the American taxpayer. Whether Infosys would be able to ask for and get a price premium vis-a-vis its competitors is an open question. In many ways, Infosys are trying to make the best of an offer they could not refuse. The offer was more or less an order - a threat, if you will - from the U.S. President, Donald Trump, for companies to hire American workers and to manufacture in the United States.<br /><iframe allowfullscreen="" frameborder="0" height="315" src="https://www.youtube.com/embed/SeldwfOwuL8" width="560"></iframe><br /><blockquote class="twitter-tweet" data-lang="en"><div dir="ltr" lang="en">Today, we celebrate <a href="https://twitter.com/Infosys">@Infosys</a> as they commit to bringing 2,000 jobs to our state! <a href="https://twitter.com/vsikka">@vsikka</a> <a href="https://twitter.com/Indiana_EDC">@Indiana_EDC</a> <a href="https://t.co/Dz02hAmevx">pic.twitter.com/Dz02hAmevx</a></div>— Eric Holcomb (@GovHolcomb) <a href="https://twitter.com/GovHolcomb/status/859398477389942785">May 2, 2017</a></blockquote>Which way the Infosys wind blows in this matter will pretty much be the way for the rest of the Indian outsourcing industry and companies like TCS, Wipro, Cognizant, Tech Mahindra, HCL, and even for multi-national software companies with substantial Indian operations, like IBM, Accenture, and others. There will be some real hiring of net new workers in the United States. There will be much public posturing along with willing American politicians who will play along for the optics. There will be some real pain back home by way of layoffs, reduced pace of hiring, lowered wage hikes, and lowered valuations. Sandwiched in all this will be a lot of sound and noise. Any which way matters proceed, the days of labour cost-arbitrage are coming to an end.<br /><br /><i>Disclaimer: views expressed are personal.
I had some inputs on this article from Monty Agarwal.</i><br /><br /></div>This article first appeared in <a href="https://www.linkedin.com/pulse/">LinkedIn Pulse</a> on <a href="https://www.linkedin.com/pulse/dark-cloud-h1-b-fallout-indian-companies-layoffs-reduced-agarwal">May 2nd, 2017</a>.<br /><br /><span style="color: #666666; font-size: x-small;">© 2017, Abhinav Agarwal. All rights reserved.</span></div> Abhinav Agarwal tag:blogger.com,1999:blog-13714584.post-2741942209735482023 Mon May 22 2017 07:33:00 GMT-0400 (EDT) NEW ODTUG Kscope17 Content http://www.odtug.com/p/bl/et/blogaid=719&source=1 Stay up to date on all things Kscope17: Introducing the Lunch and Learn, New Oracle Professional Tracks, In the Cloud sessions, On-Prem sessions, and the Kscope17 Schedule at a Glance. ODTUG http://www.odtug.com/p/bl/et/blogaid=719&source=1 Thu May 18 2017 16:37:23 GMT-0400 (EDT) Oracle Data Visualization for Data Scientists – Part I https://realtrigeek.com/2017/05/18/oracle-data-visualization-for-data-scientists-part-i/ <p>This post is the first in a series where I’ll be comparing, integrating, and supplementing R with Data Visualization. At its core, DV is a Java application built on R. Since it’s built on R, we can do some really cool things with custom R scripts, showing how tight the integration is between DV and R.</p> <p>Since I often get questions from customers and conference attendees regarding R, I’ve been ramping up my R game over the past couple of weeks. A question I’ve gotten more than once is: if R can visualize the data (and mash up data from different sources), what is the benefit of using Data Visualization as a data scientist? I usually speak to the fact that you don’t have to code, you can drag and drop data elements, the column joining is done automatically by the tool, and you don’t need to be a data scientist to use DV!</p> <p>But talk is cheap.</p> <p>I’m going to build the same visualization in R as in DV to emphasize what DV brings to the table over just R. I will repeat that DV is a tool built using Java and R, so the tools’ visualizations should look similar.</p> <p>In R, I have built the following script.
The first few lines set up where to pull the data from and then actually pull it into R for analysis.</p> <p><a href="https://epmqueen.files.wordpress.com/2017/05/image0011.png"><img class="alignnone size-full wp-image-1762" src="https://epmqueen.files.wordpress.com/2017/05/image0011.png?w=840" alt="" /></a></p> <p>When I run “head(st)”, I get the first 6 rows of the st data set.</p> <p><a href="https://epmqueen.files.wordpress.com/2017/05/image005.png"><img class="alignnone size-full wp-image-1769" src="https://epmqueen.files.wordpress.com/2017/05/image005.png?w=840" alt="" /></a></p> <p>After I run the line to activate the ggplot2 package (so that I can create visualizations), I am ready to run my first visualization in R.</p>
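<p>Since the script itself appears only as screenshots, here is a rough sketch of that kind of workflow; the file name and column names are invented for illustration and are not from the post:</p><pre class="crayon-plain-tag"># sketch of the workflow described above -- names are illustrative
library(ggplot2)

setwd("~/data")                      # point R at the folder holding the file
st &lt;- read.csv("sweat_test.csv")  # pull the data into R for analysis
head(st)                             # first 6 rows of the st data set

# scatter plot: Humidity on the x-axis, Fluid Loss per Hour on the y-axis
ggplot(st, aes(x = HUMIDITY, y = FLUID_LOSS_PER_HOUR)) +
  geom_point(colour = "red")</pre>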
<p>Here, I am putting Humidity on the x-axis and Fluid Loss per Hour on the y-axis.</p> <p><a href="https://epmqueen.files.wordpress.com/2017/05/image007.png"><img class="alignnone size-full wp-image-1770" src="https://epmqueen.files.wordpress.com/2017/05/image007.png?w=840" alt="" /></a></p> <p>In DV, I get the exact same visualization, except I need to add an Attribute. I’ll add a unique row identifier to be safe (CLIENT_ID).</p> <p><a href="https://epmqueen.files.wordpress.com/2017/05/image006.png"><img class="alignnone size-full wp-image-1763" src="https://epmqueen.files.wordpress.com/2017/05/image006.png?w=840" alt="" /></a></p> <p>To get females only, I need to add the filter to my code:</p>
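<p>A sketch of what that filter might look like in R (again, the GENDER column name and value are assumptions for illustration):</p><pre class="crayon-plain-tag"># same scatter plot, restricted to females only
ggplot(subset(st, GENDER == "F"), aes(x = HUMIDITY, y = FLUID_LOSS_PER_HOUR)) +
  geom_point(colour = "red")</pre>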
data-attachment-id="1771" data-permalink="https://realtrigeek.com/2017/05/18/oracle-data-visualization-for-data-scientists-part-i/image013-9/" data-orig-file="https://epmqueen.files.wordpress.com/2017/05/image013.png?w=840" data-orig-size="1860,745" data-comments-opened="1" data-image-meta="{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}" data-image-title="image013" data-image-description="" data-medium-file="https://epmqueen.files.wordpress.com/2017/05/image013.png?w=840?w=300" data-large-file="https://epmqueen.files.wordpress.com/2017/05/image013.png?w=840?w=840" class="alignnone size-full wp-image-1771" src="https://epmqueen.files.wordpress.com/2017/05/image013.png?w=840" alt="" srcset="https://epmqueen.files.wordpress.com/2017/05/image013.png?w=840 840w, https://epmqueen.files.wordpress.com/2017/05/image013.png?w=1678 1678w, https://epmqueen.files.wordpress.com/2017/05/image013.png?w=150 150w, https://epmqueen.files.wordpress.com/2017/05/image013.png?w=300 300w, https://epmqueen.files.wordpress.com/2017/05/image013.png?w=768 768w, https://epmqueen.files.wordpress.com/2017/05/image013.png?w=1024 1024w" sizes="(max-width: 840px) 100vw, 840px" /></a></p> <p>However, in DV, I can just drag and drop GENDER to filter:</p> <p><a href="https://epmqueen.files.wordpress.com/2017/05/image011.png"><img data-attachment-id="1767" data-permalink="https://realtrigeek.com/2017/05/18/oracle-data-visualization-for-data-scientists-part-i/image011-10/" data-orig-file="https://epmqueen.files.wordpress.com/2017/05/image011.png?w=840" data-orig-size="436,542" data-comments-opened="1" data-image-meta="{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}" data-image-title="image011" data-image-description="" data-medium-file="https://epmqueen.files.wordpress.com/2017/05/image011.png?w=840?w=241" data-large-file="https://epmqueen.files.wordpress.com/2017/05/image011.png?w=840?w=436" class="alignnone size-full wp-image-1767" src="https://epmqueen.files.wordpress.com/2017/05/image011.png?w=840" alt="" srcset="https://epmqueen.files.wordpress.com/2017/05/image011.png 436w, https://epmqueen.files.wordpress.com/2017/05/image011.png?w=121 121w, https://epmqueen.files.wordpress.com/2017/05/image011.png?w=241 241w" sizes="(max-width: 436px) 100vw, 436px" /></a></p> <p>…And no need to write code. 
I also changed the plots to red to keep consistent with my code.</p> <p><a href="https://epmqueen.files.wordpress.com/2017/05/image008.png"><img class="alignnone size-full wp-image-1764" src="https://epmqueen.files.wordpress.com/2017/05/image008.png?w=840" alt="" /></a></p> <p>And the same for males:</p> <p><a href="https://epmqueen.files.wordpress.com/2017/05/image012.png"><img class="alignnone size-full wp-image-1768" src="https://epmqueen.files.wordpress.com/2017/05/image012.png?w=840" alt="" /></a></p> <p>In DV:</p> <p><a href="https://epmqueen.files.wordpress.com/2017/05/image009.png"><img class="alignnone size-full wp-image-1765" src="https://epmqueen.files.wordpress.com/2017/05/image009.png?w=840" alt="" /></a></p>
data-orig-file="https://epmqueen.files.wordpress.com/2017/05/image009.png?w=840" data-orig-size="1626,842" data-comments-opened="1" data-image-meta="{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}" data-image-title="image009" data-image-description="" data-medium-file="https://epmqueen.files.wordpress.com/2017/05/image009.png?w=840?w=300" data-large-file="https://epmqueen.files.wordpress.com/2017/05/image009.png?w=840?w=840" class="alignnone size-full wp-image-1765" src="https://epmqueen.files.wordpress.com/2017/05/image009.png?w=840" alt="" srcset="https://epmqueen.files.wordpress.com/2017/05/image009.png?w=840 840w, https://epmqueen.files.wordpress.com/2017/05/image009.png?w=150 150w, https://epmqueen.files.wordpress.com/2017/05/image009.png?w=300 300w, https://epmqueen.files.wordpress.com/2017/05/image009.png?w=768 768w, https://epmqueen.files.wordpress.com/2017/05/image009.png?w=1024 1024w, https://epmqueen.files.wordpress.com/2017/05/image009.png 1626w" sizes="(max-width: 840px) 100vw, 840px" /></a></p> <p>This is really to show what can be done visually in R can be done better in Data Visualization. You can also highlight various points in the visualization and choose to drill in on them based on different attributes in your data set.</p> <p><a href="https://epmqueen.files.wordpress.com/2017/05/image010.png"><img data-attachment-id="1766" data-permalink="https://realtrigeek.com/2017/05/18/oracle-data-visualization-for-data-scientists-part-i/image010-9/" data-orig-file="https://epmqueen.files.wordpress.com/2017/05/image010.png?w=840" data-orig-size="419,396" data-comments-opened="1" data-image-meta="{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}" data-image-title="image010" data-image-description="" data-medium-file="https://epmqueen.files.wordpress.com/2017/05/image010.png?w=840?w=300" data-large-file="https://epmqueen.files.wordpress.com/2017/05/image010.png?w=840?w=419" class="alignnone size-full wp-image-1766" src="https://epmqueen.files.wordpress.com/2017/05/image010.png?w=840" alt="" srcset="https://epmqueen.files.wordpress.com/2017/05/image010.png 419w, https://epmqueen.files.wordpress.com/2017/05/image010.png?w=150 150w, https://epmqueen.files.wordpress.com/2017/05/image010.png?w=300 300w" sizes="(max-width: 419px) 100vw, 419px" /></a></p> <p>…You can’t do that in R!</p><br /> <a rel="nofollow" href="http://feeds.wordpress.com/1.0/gocomments/epmqueen.wordpress.com/1761/"><img alt="" border="0" src="http://feeds.wordpress.com/1.0/comments/epmqueen.wordpress.com/1761/" /></a> <img alt="" border="0" src="https://pixel.wp.com/b.gif?host=realtrigeek.com&#038;blog=70089387&#038;post=1761&#038;subd=epmqueen&#038;ref=&#038;feed=1" width="1" height="1" /> Sarah Craynon Zumbrum http://realtrigeek.com/?p=1761 Thu May 18 2017 07:24:18 GMT-0400 (EDT) Delivery to Oracle Document Cloud Services (ODCS) Like A Boss 
https://blogs.oracle.com/xmlpublisher/delivery-to-oracle-document-cloud-services-odcs-like-a-boss <p class="western">We have moved to a new blogging platform. This was a post from Pradeep that missed the cut over ...</p> <p class="western">In release 12.2.1.1, BI Publisher added a new feature - Delivery to Oracle Document Cloud Services (ODCS). Around the same time, BI Publisher was also certified against JCS 12.2.1.x, and therefore, if you have hosted your BI Publisher instance on JCS today, we recommend Oracle Document Cloud Services as the delivery channel. There are several reasons for this:</p> <ol> <li> <p class="western" style="margin-bottom: 0in">It is easy to configure and manage ODCS in BI Publisher on Oracle Public Cloud. No port or firewall issues.</p> </li> <li> <p class="western" style="margin-bottom: 0in">ODCS offers a scalable, robust and secure document storage solution in the cloud.</p> </li> <li> <p class="western" style="margin-bottom: 0in">ODCS offers document versioning and document metadata support, similar to any content management server.</p> </li> <li> <p class="western">It supports all business document file formats relevant for BI Publisher.</p> </li> </ol> <p class="western"><b>When to use ODCS?</b></p> <p class="western">ODCS can be used in all the different scenarios where a document needs to be stored securely on a server and retained for any duration. The scenarios may include:</p> <ul> <li> <p class="western" style="margin-bottom: 0in">Bursting documents to multiple customers at the same time.</p> <ul> <li> <p class="western" style="margin-bottom: 0in">Invoices to customers</p> </li> <li> <p class="western" style="margin-bottom: 0in">HR Payroll reports to employees</p> </li> <li> <p class="western">Financial Statements</p> </li> </ul> </li> </ul> <ul> <li> <p class="western" style="margin-bottom: 0in">Storing large or extremely large reports for offline printing</p> <ul> <li> <p class="western" style="margin-bottom: 0in">End of the Month/Year Statements for Financial Institutions</p> </li> <li> <p class="western" style="margin-bottom: 0in">Consolidated department reports</p> </li> <li> <p class="western" style="margin-bottom: 0in">Batch reports for Operational data</p> </li> </ul> </li> <li> <p class="western" style="margin-bottom: 0in">Regulatory Data Archival</p> <ul> <li> <p class="western">Generating PDF/A-1b or PDF/A-2 format documents</p> </li> </ul> </li> </ul> <p class="western"><b>How to Configure ODCS in BI Publisher?</b></p> <p class="western">Configuration of ODCS in BI Publisher requires the URI, username and password.
Here the username is expected to have access to the folder where the files are to be delivered.</p> <p class="western"><img align="bottom" border="0" height="252" name="Image1" src="https://cdn.app.compendium.com/uploads/user/e7c690e8-6ff9-102a-ac6d-e4aebca50425/0babe4aa-b7d6-4c34-9bbf-6fa3bd67b0af/File/bd8a8e8418c7a3bc67b78ea546020e5b/bd8a8e8418c7a3bc67b78ea546020e5b.jpeg" width="767" /></p> <p class="western"><b>How to Schedule and Deliver to ODCS?</b></p> <p class="western">Delivery to ODCS can be managed through both a Normal Scheduled Job and a Bursting Job.</p> <p class="western">A Normal Scheduled Job allows the end user to select a folder from a list of values, as shown below:</p> <p class="western"><img align="bottom" border="0" height="367" name="Image2" src="https://cdn.app.compendium.com/uploads/user/e7c690e8-6ff9-102a-ac6d-e4aebca50425/0babe4aa-b7d6-4c34-9bbf-6fa3bd67b0af/File/cb7d782749d45f80e5326b7b05e2a5ce/cb7d782749d45f80e5326b7b05e2a5ce.jpeg" width="771" /></p> <p class="western">In the case of a Bursting Job, the ODCS delivery information is provided in the bursting query, as shown below:</p> <p class="western"><img align="bottom" border="0" height="346" name="Image3" src="https://cdn.app.compendium.com/uploads/user/e7c690e8-6ff9-102a-ac6d-e4aebca50425/0babe4aa-b7d6-4c34-9bbf-6fa3bd67b0af/File/14844e72d8882c32c9e40020e0d5871b/14844e72d8882c32c9e40020e0d5871b.jpeg" width="772" /></p>
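<p class="western">For orientation only, a BI Publisher bursting control query generally has the shape sketched below. The ODCS-specific values (the DEL_CHANNEL name and what goes into each PARAMETER column) are placeholders here; the authoritative mapping is the one shown in the screenshot above:</p><pre class="crayon-plain-tag">-- rough sketch of a bursting control query (table, template and
-- ODCS-specific values are placeholders, not product documentation)
SELECT d.customer_id     AS "KEY",          -- split/burst key
       'InvoiceTemplate' AS TEMPLATE,       -- layout to apply per document
       'en-US'           AS LOCALE,
       'PDF'             AS OUTPUT_FORMAT,
       'DOCUMENT_CLOUD'  AS DEL_CHANNEL,    -- placeholder: ODCS channel name
       'MyODCSServer'    AS PARAMETER1,     -- placeholder: ODCS server
       '/Invoices'       AS PARAMETER2      -- placeholder: destination folder
FROM   invoice_data d</pre>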
<p class="western"><b>Accessing Documents in ODCS</b></p> <p class="western">Once the documents are delivered to ODCS, they can be accessed by a user based on their access to the folder, very similar to FTP or WebDAV access.</p> <p class="western"><img align="bottom" border="0" height="246" name="Image4" src="https://cdn.app.compendium.com/uploads/user/e7c690e8-6ff9-102a-ac6d-e4aebca50425/0babe4aa-b7d6-4c34-9bbf-6fa3bd67b0af/File/771423e3eb7d7b0a6f593cd7a58809aa/771423e3eb7d7b0a6f593cd7a58809aa.jpeg" width="750" /></p> <p class="western">That&#39;s all for now. Stay tuned for more updates!</p> Tim Dexter https://blogs.oracle.com/xmlpublisher/delivery-to-oracle-document-cloud-services-odcs-like-a-boss Wed May 17 2017 12:53:57 GMT-0400 (EDT) Hunters Eat Better Than Gatherers https://medium.com/red-pill-analytics/hunters-eat-better-than-gatherers-4484b548e2d4?source=rss----abcc62a8d63e---4 <figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*ehPMkO1KMy8jhKNwp4kDyQ.jpeg" /><figcaption>Photo Credit: <a href="https://unsplash.com/search/food?photo=Pt_YmiYm7a4">Cel Lisboa</a></figcaption></figure><h4>Serving Data Consumers Not Just What They Need, But Also What They Want</h4><p>My last few blog posts have discussed<a href="https://medium.com/red-pill-analytics/bi-dentity-crisis-658ef4daa8"> how data failure happens</a>, and how we can change failure into success (<a href="https://medium.com/red-pill-analytics/supply-demand-and-data-4f75c0701696">with a little economic thinking</a>). Continuing with that train of thought, in this post I’ll be discussing how we can better implement analytic solutions by challenging the convention of requirements gathering.</p><h4>Don’t “Gather Requirements”. Hunt for Questions.</h4><p>Requirements gathering. The meetings that are universally hated on both sides of the table. The business is thinking “I shouldn’t have to explain to them what I want <em>again</em>!”, while the data people are thinking “If you just told me what you wanted the first time, I wouldn’t have to keep asking!” We’ve all been in this situation, whether on one side of the table or the other, or maybe we’ve had the fortune of experiencing both sides.</p><p>I have seen these sessions end in frustration and poor outputs. For example: from the data team there is some helpful talk of “What do you do now? Where are your current reports?” or something similar. The answers from the business are usually equally helpful: “We just have it all in this spreadsheet.” or “We VLOOKUP to another spreadsheet.” At the end of the meeting, usually after some disappointment and frustration, with IT saying “We can’t do that.” or the business replying “Why isn’t this in your data already?”, everyone leaves. Waiting for them when they return to their desks is an Excel workbook with some of the points that were captured during the meeting. These workbooks usually become the “requirements” that are used to build with… It makes me frustrated just typing out an example for you!</p><p>I think that part of the problem, again, relates to mindset. The appropriate mindset for both parties is how they can solve a problem <em>together</em>. Far too often transgressions of the past are brought out on how each side of the table has failed the other. And even when requirements are delivered from the business to the data owners, those requirements are symptoms of a greater issue; data dumps from business intelligence or analytical tools are a prime example of this. “I just need to replicate this exact format with these columns” is a common refrain in these scenarios. (Well… you did ask “What are your requirements?”) And sadly, that is usually what is delivered. It is like seeing a festering wound and the patient only asking for a new band-aid, and maybe a fun <em>Star Wars</em> themed one this time instead of the <em>Captain America</em> one that is being removed. You both know that there is something wrong, but damn, R2-D2 is looking so cute covering it up!</p><p>I think the proper way to approach these meetings is to set the tone of understanding that there is a problem to be solved, set some expectations on how fast or slow things may be delivered, and start to ask <em>real</em> questions. Open-ended and hard questions like the following (and I am leading with my favorites):</p><ul><li>What question or questions do you have that you need data for?</li><li>What question or questions does this data set answer for you?</li><li>What actions can you take because of this data?</li><li>What actions can you <em>not</em> take because of a lack of data?</li><li>What story does this data tell you?</li><li>Is this data’s story incomplete?</li><li>How does this data help you do your job?</li><li>What do you do with this data when you get it?</li></ul><p>There are many, many questions like these to ask and to answer. This is how to identify what is really needed or desired by data consumers. These questions are akin to asking someone what flavors they like, or what kind of food they prefer, as opposed to asking “What did you eat in the last 2 days?” and inferring their tastes from that. Someone may love oysters, champagne and saffron, but if it is expensive, they won’t have it all the time. (Remember that data marketplace part? Yeah, it’s still relevant here.)
Instead, they may be eating frozen fish, drinking boxed wine and seasoning with Morton’s Season All because that is what they can afford, with some of their desires being a treat. And the same is applicable to data consumers; they may be asking for what they have now because they may be unable to afford the effort involved to capture and use that data, because their current solution hasn’t been challenged, or because they may think that data is simply “off the menu”. At the very least they can’t afford it without help from the suppliers, and that is part of the suppliers’ role — putting data on the menu for consumers, (hopefully) at an affordable price.</p><p>Obviously, it’s hard to take the questions above and deliver something quickly. That isn’t the point. The point is to help create organizational change to enhance the data that is being used. Status quo thinking doesn’t bring change, and while change can be a great and needed force, it usually isn’t easy… Don’t fall into the trap of using that method as the default. Stay strong!</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/320/1*qzVsYkypZu6r_GDGHbLgPQ.gif" /><figcaption>Here’s a little motivation to get you started…</figcaption></figure><p>My takeaways for consumers: Think about what data you need, the data you don’t have and what questions you can’t answer. From there, brainstorm how you can help the data suppliers understand these pain points. An export or data dump is a failing answer in my classroom!</p><p>My advice for suppliers: Try to read between the lines and understand what the consumers are really gunning for. Once you understand that, try to find ways to deliver it!</p><hr><p><a href="https://medium.com/red-pill-analytics/hunters-eat-better-than-gatherers-4484b548e2d4">Hunters Eat Better Than Gatherers</a> was originally published in <a href="https://medium.com/red-pill-analytics">Red Pill Analytics</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p> Phil Goerdt https://medium.com/p/4484b548e2d4 Wed May 17 2017 09:43:02 GMT-0400 (EDT)
<p>My last few blog posts have discussed <a href="https://medium.com/red-pill-analytics/bi-dentity-crisis-658ef4daa8">how data failure happens</a>, and how we can change failure into success (<a href="http://redpillanalytics.com/supply-demand-and-data/">with a little economic thinking</a>). Continuing with that train of thought, in this post I’ll be discussing how we can better implement analytic solutions by challenging the convention of requirements gathering.</p> <h4>Don’t “Gather Requirements”. Hunt for Questions.</h4> Phil Goerdt http://redpillanalytics.com/?p=4906 Wed May 17 2017 09:42:57 GMT-0400 (EDT) OBIEE upgrades and Windows vulnerabilities http://www.rittmanmead.com/blog/2017/05/obiee-upgrades-and-windows-vulnerabilities/ <p><img src="https://www.rittmanmead.com/blog/content/images/2017/05/padlock.jpg" alt="OBIEE upgrades and Windows vulnerabilities"></p> <p>These two topics may seem unrelated; however, the <a href="http://www.bbc.co.uk/news/technology-39915440">ransomware attacks</a> over the last few days provide us with a reminder of what people can do with known vulnerabilities in an operating system.</p> <p>Organisations consider upgrades a necessary evil; they cost money, take up time and often have little tangible benefit or return on investment (ROI). In the case of upgrades between major versions of software, for example moving from OBIEE 10g to 12c, there are significant <a href="https://www.rittmanmead.com/blog/2011/03/so-just-what-does-weblogic-server-do-within-obiee-11g/">architecture</a>, <a href="https://www.rittmanmead.com/blog/2010/10/obiee-11gr1-security-explained-an-11g-security-overview/">security</a>, <a href="https://www.rittmanmead.com/blog/2016/07/obiee-12-2-1-1-0-new-feature-guide/">functional</a> and <a href="https://www.rittmanmead.com/blog/2016/03/obiee-12c-your-answers-after-upgrading/">user interface</a> changes that may justify the upgrade alone, but they are unlikely to significantly change the way an organisation operates, and they may introduce new components and management processes which produce an additional overhead.</p> <p>There is another reason to perform upgrades: to keep your operating systems compliant with corporate security standards. OBIEE, and most other enterprise software products, come with <a href="http://www.oracle.com/technetwork/middleware/ias/downloads/fusion-certification-100350.html#close">certification matrices</a> that detail the supported operating systems for each product. The older the version of OBIEE, the older the supported operating systems are, and this is where the problem starts.</p> <p>If we take the example of an organisation running OBIEE 10g, the most recent certified version of Windows it can run is <a href="http://docs.oracle.com/cd/E10415_01/doc/bi.1013/e10417.pdf">Windows 2008 R2</a>, which will fall outside of your company's security policy.
You are also less likely to be patching the operating system on that server: it will either have fallen off the radar, or Microsoft may have stopped releasing patches for that version of the operating system.</p> <p>The result is a system with access to critical enterprise data that is vulnerable to known attacks.</p> <p>The only answer is to upgrade, but how do we justify the ROI and obtain budget? I think we need to recognise that there is a cost of ownership associated with maintaining systems, the benefit of which is the mitigation of risks like the ransomware attacks. It is highly unlikely that anyone could have predicted those attacks, so you could never have used them as a reason to justify an upgrade. However, these things do happen, and a significant number of cyber attacks probably go <a href="https://medium.com/@octskyward/why-cyber-warfare-isnt-9db27b4d50e0">undetected</a>. The best protection you have is to make sure your systems are up to date.</p> Jon Mead 7daf0859-0730-42d0-b5ac-dc240f0641c8 Mon May 15 2017 07:00:44 GMT-0400 (EDT) Oracle 12c Release 2 Partitioning New Features https://gavinsoorma.com/2017/05/oracle-12c-release-2-partitioning-new-features/ <p>A number of enhancements to the Oracle database Partitioning option have been introduced in Oracle Database 12c Release 2.</p> <p>These include:</p> <ul> <li>Automatic List Partitioning</li> <li>Multi-Column List Partitioning</li> <li>Read-only Partitions</li> <li>Filtered Partition maintenance operations</li> <li>Online conversion of non-partitioned to partitioned table</li> <li>Partitioned External Tables</li> </ul> <p>Similar to the interval partitioning method, automatic list partitioning creates new list partitions on demand as rows arrive with new partition key values.</p> Gavin Soorma https://gavinsoorma.com/?p=7620 Mon May 15 2017 00:06:58 GMT-0400 (EDT) OBIA 11g Installation on PaaS with Oracle Analytics Cloud (OAC) https://blogs.oracle.com/biapps/obia-11g-installation-on-paas-with-oracle-analytics-cloud-oac <p style="text-align: justify;">Oracle BI Applications can now be installed on the Oracle PaaS (Platform as a Service) platform and used along with Oracle Analytics Cloud (OAC). Refer to Tech note <a href="https://support.oracle.com/epmos/faces/DocumentDisplay?id=2254057.1">Doc ID 2254057.1</a> on support.oracle.com for more details.</p> <p style="text-align: justify;"><strong>What is Oracle Analytics Cloud?</strong></p> <p style="text-align: justify;">Oracle Analytics Cloud (OAC) is a comprehensive analytics platform that powers your analytics strategy at any scale and in every environment: cloud, on premises, desktop, and data center. From self-service visualization and data preparation, to enterprise reporting and advanced analytics, to dynamic user-driven what-if modeling, to self-learning mobile analytics that provide proactive insights, OAC provides everything you need to ask any question of any data, on any device, at any time, so you can get the value you expect from analytics.
Visit the <a href="https://cloud.oracle.com/en_US/oac">OAC website</a> for more details.</p> Guest Author https://blogs.oracle.com/biapps/obia-11g-installation-on-paas-with-oracle-analytics-cloud-oac Wed May 10 2017 14:30:00 GMT-0400 (EDT) Using R Plugins for Data Visualization https://realtrigeek.com/2017/05/10/using-r-plugins-for-data-visualization/ <p>While going through an R plugin installation today, I realized there are some steps that may confuse non-R users when installing the plugins, so I decided to document them below. I will also note that I have two versions of R running on my machine (3.1.1 and 3.3.3) and ran into some errors when installing my R plugin; I have documented resolving those below as well.</p> <p>I decided to install the Correlation R plugin from the <a href="https://sites.google.com/site/oraclebipublicstore/downloads">Oracle BI Public Store</a> to analyze data I know well: my half marathon times over the past 2.5 years. Why start with data I know well? I can quickly tell if there is an error in the calculation. If I use data I know and get results I expect, then I can trust the script and results for data sets that are unknown to me in the future.</p> <p>Below is the data set I am working with for my example. I wanted to see if there was any correlation between my race times and course elevation, average HR, and max temperature (F) on race day.</p> <p><a href="https://epmqueen.files.wordpress.com/2017/05/image001.png"><img src="https://epmqueen.files.wordpress.com/2017/05/image001.png?w=840" alt="" /></a></p> <p>From the Oracle BI Public Store, I am choosing “Correlation”.</p> <p><a href="https://epmqueen.files.wordpress.com/2017/05/image051.jpg"><img src="https://epmqueen.files.wordpress.com/2017/05/image051.jpg?w=840" alt="" /></a></p> <p>When I click on the icon, I get the description and instructions on how to install the R tool. This plugin includes the R script and a visualization to accompany the script.</p> <p><a href="https://epmqueen.files.wordpress.com/2017/05/image052.jpg"><img src="https://epmqueen.files.wordpress.com/2017/05/image052.jpg?w=840" alt="" /></a></p> <p>The first thing I need to do is install the R packages into my R installation. There are two main interfaces I use to build R scripts. The first one is the standard installation console, shown below.</p> <p><a href="https://epmqueen.files.wordpress.com/2017/05/image053.jpg"><img src="https://epmqueen.files.wordpress.com/2017/05/image053.jpg?w=840" alt="" /></a></p> <p>However, I prefer <a href="https://www.rstudio.com/products/rstudio/download2/">R Studio</a> (free download!) so I can see my script, console results, any variables I’ve created, and packages. I can also change the color of the editor to my preference, which is a dark background with color-coded keywords.</p> <p><a href="https://epmqueen.files.wordpress.com/2017/05/image054.jpg"><img src="https://epmqueen.files.wordpress.com/2017/05/image054.jpg?w=840" alt="" /></a></p> <p>In R Studio (or the standard R console, if you prefer), type “install.packages(&quot;corrplot&quot;)” and hit Enter to install the corrplot package, as told to us in the Store instructions.</p> <p><a href="https://epmqueen.files.wordpress.com/2017/05/image055.jpg"><img src="https://epmqueen.files.wordpress.com/2017/05/image055.jpg?w=840" alt="" /></a></p> <p>Once installed, you will get a visual confirmation that the package installed.</p> <p><a href="https://epmqueen.files.wordpress.com/2017/05/image056.jpg"><img src="https://epmqueen.files.wordpress.com/2017/05/image056.jpg?w=840" alt="" /></a></p>
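<p>For reference, the whole install-and-verify flow from the console looks like this (a minimal sketch; any recent R 3.x console should behave the same way):</p>
<pre>
# Install the corrplot package from CRAN (a one-time step)
install.packages("corrplot")

# Load the package and confirm which version was installed
library(corrplot)
packageVersion("corrplot")
</pre>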
<p>In R Studio, you also have the option of installing packages from a list. To do so, click the tab named “Packages”.</p> <p><a href="https://epmqueen.files.wordpress.com/2017/05/image014.png"><img src="https://epmqueen.files.wordpress.com/2017/05/image014.png?w=840" alt="" /></a></p> <p>Click “Install”, enter the name of the package you want to install, and then click the “Install” button.</p> <p><a href="https://epmqueen.files.wordpress.com/2017/05/image057.jpg"><img src="https://epmqueen.files.wordpress.com/2017/05/image057.jpg?w=840" alt="" /></a></p> <p>You will get confirmation in the console that the package installed. Likewise, you can also see the package listed in the full list on the bottom right-hand side of the screen.</p> <p><a href="https://epmqueen.files.wordpress.com/2017/05/image058.jpg"><img src="https://epmqueen.files.wordpress.com/2017/05/image058.jpg?w=840" alt="" /></a></p> <p>Next, I downloaded the zip file associated with the plugin from the Store.</p> <p><a href="https://epmqueen.files.wordpress.com/2017/05/image059.png"><img src="https://epmqueen.files.wordpress.com/2017/05/image059.png?w=840" alt="" /></a></p> <p>I unzipped the file to a folder on my desktop.</p> <p><a href="https://epmqueen.files.wordpress.com/2017/05/image060.jpg"><img src="https://epmqueen.files.wordpress.com/2017/05/image060.jpg?w=840" alt="" /></a></p> <p>Copy the two XML files to the file location shown below (the R script repository):</p> <p><a href="https://epmqueen.files.wordpress.com/2017/05/image061.jpg"><img src="https://epmqueen.files.wordpress.com/2017/05/image061.jpg?w=840" alt="" /></a></p>
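<p>If you prefer to script this copy step, it can be done from R as well. The file names and paths below are purely illustrative assumptions, not the plugin’s documented locations; substitute the unzip folder and the script repository path shown in your own installation:</p>
<pre>
# Hypothetical source files and destination -- adjust to your machine
src  = c("C:/Users/me/Desktop/Correlation/correlation.xml",
         "C:/Users/me/Desktop/Correlation/correlation_viz.xml")
dest = "C:/DVDesktop/AdvancedAnalytics/script_repository"

# Copy the two plugin XML files into the R script repository
file.copy(src, dest, overwrite = TRUE)
</pre>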
<p>Next, the instructions tell you to deploy the R Viz (base64image). But what does this mean?</p> <p><a href="https://epmqueen.files.wordpress.com/2017/05/image062.jpg"><img src="https://epmqueen.files.wordpress.com/2017/05/image062.jpg?w=840" alt="" /></a></p> <p>If you don’t install this plugin, you may get the following error: “A general error was found”.</p> <p><a href="https://epmqueen.files.wordpress.com/2017/05/image063.jpg"><img src="https://epmqueen.files.wordpress.com/2017/05/image063.jpg?w=840" alt="" /></a></p> <p>The instructions, somewhat unhelpfully, expect you to install another free plugin from the Store: the R Viz (Base64Image) plugin.</p> <p><a href="https://epmqueen.files.wordpress.com/2017/05/image029.png"><img src="https://epmqueen.files.wordpress.com/2017/05/image029.png?w=840" alt="" /></a></p> <p>Once you download the zipped file, place the <strong><em>zipped</em></strong> file in the plugins path shown below.</p> <p><a href="https://epmqueen.files.wordpress.com/2017/05/image064.jpg"><img src="https://epmqueen.files.wordpress.com/2017/05/image064.jpg?w=840" alt="" /></a></p> <p>Here is an example of where it should be placed:</p> <p><a href="https://epmqueen.files.wordpress.com/2017/05/image032.png"><img src="https://epmqueen.files.wordpress.com/2017/05/image032.png?w=840" alt="" /></a></p> <p>Next, you should import the DVA file into Data Visualization Desktop, and it *<strong>should</strong>* work.</p> <p>However…</p> <p>If you have more than one version of R running on your machine, you might get the following error: “Error Processing Data”. Reading through the error, I see that it says “there is no package called ‘corrplot’”. Hmm. But I *<strong>did</strong>* install the package.</p> <p><a href="https://epmqueen.files.wordpress.com/2017/05/image065.jpg"><img src="https://epmqueen.files.wordpress.com/2017/05/image065.jpg?w=840" alt="" /></a></p> <p>When I went to my R directory, I saw my two R installations. It seems that DVD does not connect to the most recent installation by default; it uses whichever R installation was specified on the command line when Advanced Analytics for Data Visualization Desktop was installed.</p> <p><a href="https://epmqueen.files.wordpress.com/2017/05/image035.png"><img src="https://epmqueen.files.wordpress.com/2017/05/image035.png?w=840" alt="" /></a></p> <p>Under 3.3, I have all my packages. I simply copied all of these folders from 3.3 to 3.1 (which was empty).</p> <p><a href="https://epmqueen.files.wordpress.com/2017/05/image036.png"><img src="https://epmqueen.files.wordpress.com/2017/05/image036.png?w=840" alt="" /></a></p>
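<p>A quick way to diagnose this kind of mismatch is to ask each R installation where it looks for packages. Run the following from the console of each R version (a minimal sketch using only base R functions):</p>
<pre>
# Show the library folders this R session searches
.libPaths()

# Check whether corrplot is visible from this installation
"corrplot" %in% rownames(installed.packages())
</pre>
<p>If the package shows up in one version’s library but not the other’s, copying the package folders across (or reinstalling into the library that DVD actually uses) resolves the “there is no package called ‘corrplot’” error.</p>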
data-image-meta="{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}" data-image-title="image066" data-image-description="" data-medium-file="https://epmqueen.files.wordpress.com/2017/05/image066.jpg?w=840?w=300" data-large-file="https://epmqueen.files.wordpress.com/2017/05/image066.jpg?w=840?w=624" class="alignnone size-full wp-image-1753" src="https://epmqueen.files.wordpress.com/2017/05/image066.jpg?w=840" alt="" srcset="https://epmqueen.files.wordpress.com/2017/05/image066.jpg 624w, https://epmqueen.files.wordpress.com/2017/05/image066.jpg?w=150 150w, https://epmqueen.files.wordpress.com/2017/05/image066.jpg?w=300 300w" sizes="(max-width: 624px) 100vw, 624px" /></a></p> <p>To see clues on how to build out my Half Marathon Stats correlation visualization, I took a look at the 3 calculations created for the sample project.</p> <p><a href="https://epmqueen.files.wordpress.com/2017/05/image067.jpg"><img data-attachment-id="1754" data-permalink="https://realtrigeek.com/2017/05/10/using-r-plugins-for-data-visualization/image067-2/" data-orig-file="https://epmqueen.files.wordpress.com/2017/05/image067.jpg?w=840" data-orig-size="624,852" data-comments-opened="1" data-image-meta="{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}" data-image-title="image067" data-image-description="" data-medium-file="https://epmqueen.files.wordpress.com/2017/05/image067.jpg?w=840?w=220" data-large-file="https://epmqueen.files.wordpress.com/2017/05/image067.jpg?w=840?w=624" class="alignnone size-full wp-image-1754" src="https://epmqueen.files.wordpress.com/2017/05/image067.jpg?w=840" alt="" srcset="https://epmqueen.files.wordpress.com/2017/05/image067.jpg 624w, https://epmqueen.files.wordpress.com/2017/05/image067.jpg?w=110 110w, https://epmqueen.files.wordpress.com/2017/05/image067.jpg?w=220 220w" sizes="(max-width: 624px) 100vw, 624px" /></a></p> <p>Here is what the img_id(Cat Vs Numerical) calculation looks like in DVD.</p> <p><a href="https://epmqueen.files.wordpress.com/2017/05/image041.png"><img data-attachment-id="1732" data-permalink="https://realtrigeek.com/2017/05/10/using-r-plugins-for-data-visualization/image041-4/" data-orig-file="https://epmqueen.files.wordpress.com/2017/05/image041.png?w=840" data-orig-size="631,400" data-comments-opened="1" data-image-meta="{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}" data-image-title="image041" data-image-description="" data-medium-file="https://epmqueen.files.wordpress.com/2017/05/image041.png?w=840?w=300" 
data-large-file="https://epmqueen.files.wordpress.com/2017/05/image041.png?w=840?w=631" class="alignnone size-full wp-image-1732" src="https://epmqueen.files.wordpress.com/2017/05/image041.png?w=840" alt="" srcset="https://epmqueen.files.wordpress.com/2017/05/image041.png 631w, https://epmqueen.files.wordpress.com/2017/05/image041.png?w=150 150w, https://epmqueen.files.wordpress.com/2017/05/image041.png?w=300 300w" sizes="(max-width: 631px) 100vw, 631px" /></a></p> <p>To see how this stacks up against the XML file, I opened the file to see the inputs and outputs.</p> <p><a href="https://epmqueen.files.wordpress.com/2017/05/image068.jpg"><img data-attachment-id="1755" data-permalink="https://realtrigeek.com/2017/05/10/using-r-plugins-for-data-visualization/image068-3/" data-orig-file="https://epmqueen.files.wordpress.com/2017/05/image068.jpg?w=840" data-orig-size="750,202" data-comments-opened="1" data-image-meta="{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}" data-image-title="image068" data-image-description="" data-medium-file="https://epmqueen.files.wordpress.com/2017/05/image068.jpg?w=840?w=300" data-large-file="https://epmqueen.files.wordpress.com/2017/05/image068.jpg?w=840?w=750" class="alignnone size-full wp-image-1755" src="https://epmqueen.files.wordpress.com/2017/05/image068.jpg?w=840" alt="" srcset="https://epmqueen.files.wordpress.com/2017/05/image068.jpg 750w, https://epmqueen.files.wordpress.com/2017/05/image068.jpg?w=150 150w, https://epmqueen.files.wordpress.com/2017/05/image068.jpg?w=300 300w" sizes="(max-width: 750px) 100vw, 750px" /></a></p> <p>Got it. Now, I am ready to build my own correlation visualization. 
The first thing I did was add the “Base64ImgViz Plugin” to my canvas.</p> <p><a href="https://epmqueen.files.wordpress.com/2017/05/image069.jpg"><img data-attachment-id="1756" data-permalink="https://realtrigeek.com/2017/05/10/using-r-plugins-for-data-visualization/image069-2/" data-orig-file="https://epmqueen.files.wordpress.com/2017/05/image069.jpg?w=840" data-orig-size="624,462" data-comments-opened="1" data-image-meta="{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}" data-image-title="image069" data-image-description="" data-medium-file="https://epmqueen.files.wordpress.com/2017/05/image069.jpg?w=840?w=300" data-large-file="https://epmqueen.files.wordpress.com/2017/05/image069.jpg?w=840?w=624" class="alignnone size-full wp-image-1756" src="https://epmqueen.files.wordpress.com/2017/05/image069.jpg?w=840" alt="" srcset="https://epmqueen.files.wordpress.com/2017/05/image069.jpg 624w, https://epmqueen.files.wordpress.com/2017/05/image069.jpg?w=150 150w, https://epmqueen.files.wordpress.com/2017/05/image069.jpg?w=300 300w" sizes="(max-width: 624px) 100vw, 624px" /></a></p> <p>I created a new calculation using the example as my starting point for “img_id”.</p> <p><a href="https://epmqueen.files.wordpress.com/2017/05/image046.png"><img data-attachment-id="1733" data-permalink="https://realtrigeek.com/2017/05/10/using-r-plugins-for-data-visualization/image046-3/" data-orig-file="https://epmqueen.files.wordpress.com/2017/05/image046.png?w=840" data-orig-size="629,401" data-comments-opened="1" data-image-meta="{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}" data-image-title="image046" data-image-description="" data-medium-file="https://epmqueen.files.wordpress.com/2017/05/image046.png?w=840?w=300" data-large-file="https://epmqueen.files.wordpress.com/2017/05/image046.png?w=840?w=629" class="alignnone size-full wp-image-1733" src="https://epmqueen.files.wordpress.com/2017/05/image046.png?w=840" alt="" srcset="https://epmqueen.files.wordpress.com/2017/05/image046.png 629w, https://epmqueen.files.wordpress.com/2017/05/image046.png?w=150 150w, https://epmqueen.files.wordpress.com/2017/05/image046.png?w=300 300w" sizes="(max-width: 629px) 100vw, 629px" /></a></p> <p>To add the “img_part_id”, I copied the exact script I used for “img_id” and changed that keyword to “img_part_id”.</p> <p><a href="https://epmqueen.files.wordpress.com/2017/05/image047.png"><img data-attachment-id="1734" data-permalink="https://realtrigeek.com/2017/05/10/using-r-plugins-for-data-visualization/image047-5/" data-orig-file="https://epmqueen.files.wordpress.com/2017/05/image047.png?w=840" data-orig-size="630,399" data-comments-opened="1" 
data-image-meta="{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}" data-image-title="image047" data-image-description="" data-medium-file="https://epmqueen.files.wordpress.com/2017/05/image047.png?w=840?w=300" data-large-file="https://epmqueen.files.wordpress.com/2017/05/image047.png?w=840?w=630" class="alignnone size-full wp-image-1734" src="https://epmqueen.files.wordpress.com/2017/05/image047.png?w=840" alt="" srcset="https://epmqueen.files.wordpress.com/2017/05/image047.png 630w, https://epmqueen.files.wordpress.com/2017/05/image047.png?w=150 150w, https://epmqueen.files.wordpress.com/2017/05/image047.png?w=300 300w" sizes="(max-width: 630px) 100vw, 630px" /></a></p> <p>And again for “img_part”.</p> <p><a href="https://epmqueen.files.wordpress.com/2017/05/image048.png"><img data-attachment-id="1735" data-permalink="https://realtrigeek.com/2017/05/10/using-r-plugins-for-data-visualization/image048-3/" data-orig-file="https://epmqueen.files.wordpress.com/2017/05/image048.png?w=840" data-orig-size="628,400" data-comments-opened="1" data-image-meta="{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}" data-image-title="image048" data-image-description="" data-medium-file="https://epmqueen.files.wordpress.com/2017/05/image048.png?w=840?w=300" data-large-file="https://epmqueen.files.wordpress.com/2017/05/image048.png?w=840?w=628" class="alignnone size-full wp-image-1735" src="https://epmqueen.files.wordpress.com/2017/05/image048.png?w=840" alt="" srcset="https://epmqueen.files.wordpress.com/2017/05/image048.png 628w, https://epmqueen.files.wordpress.com/2017/05/image048.png?w=150 150w, https://epmqueen.files.wordpress.com/2017/05/image048.png?w=300 300w" sizes="(max-width: 628px) 100vw, 628px" /></a></p> <p>Once I placed the 3 of the calculations in the correct spot:</p> <p><a href="https://epmqueen.files.wordpress.com/2017/05/image049.png"><img data-attachment-id="1736" data-permalink="https://realtrigeek.com/2017/05/10/using-r-plugins-for-data-visualization/image049-5/" data-orig-file="https://epmqueen.files.wordpress.com/2017/05/image049.png?w=840" data-orig-size="202,312" data-comments-opened="1" data-image-meta="{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}" data-image-title="image049" data-image-description="" data-medium-file="https://epmqueen.files.wordpress.com/2017/05/image049.png?w=840?w=194" data-large-file="https://epmqueen.files.wordpress.com/2017/05/image049.png?w=840?w=202" class="alignnone size-full wp-image-1736" 
src="https://epmqueen.files.wordpress.com/2017/05/image049.png?w=840" alt="" srcset="https://epmqueen.files.wordpress.com/2017/05/image049.png 202w, https://epmqueen.files.wordpress.com/2017/05/image049.png?w=97 97w" sizes="(max-width: 202px) 100vw, 202px" /></a></p> <p>I get the following visualization.</p> <p>My hypothesis was that elevation and max temperature correlated to my finish time, but statistics tells me otherwise. There is no strong correlation!</p> <p><a href="https://epmqueen.files.wordpress.com/2017/05/image050.png"><img data-attachment-id="1737" data-permalink="https://realtrigeek.com/2017/05/10/using-r-plugins-for-data-visualization/image050-4/" data-orig-file="https://epmqueen.files.wordpress.com/2017/05/image050.png?w=840" data-orig-size="577,509" data-comments-opened="1" data-image-meta="{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}" data-image-title="image050" data-image-description="" data-medium-file="https://epmqueen.files.wordpress.com/2017/05/image050.png?w=840?w=300" data-large-file="https://epmqueen.files.wordpress.com/2017/05/image050.png?w=840?w=577" class="alignnone size-full wp-image-1737" src="https://epmqueen.files.wordpress.com/2017/05/image050.png?w=840" alt="" srcset="https://epmqueen.files.wordpress.com/2017/05/image050.png 577w, https://epmqueen.files.wordpress.com/2017/05/image050.png?w=150 150w, https://epmqueen.files.wordpress.com/2017/05/image050.png?w=300 300w" sizes="(max-width: 577px) 100vw, 577px" /></a></p><br /> <a rel="nofollow" href="http://feeds.wordpress.com/1.0/gocomments/epmqueen.wordpress.com/1725/"><img alt="" border="0" src="http://feeds.wordpress.com/1.0/comments/epmqueen.wordpress.com/1725/" /></a> <img alt="" border="0" src="https://pixel.wp.com/b.gif?host=realtrigeek.com&#038;blog=70089387&#038;post=1725&#038;subd=epmqueen&#038;ref=&#038;feed=1" width="1" height="1" /> Sarah Craynon Zumbrum http://realtrigeek.com/?p=1725 Wed May 10 2017 13:03:03 GMT-0400 (EDT) May ODTUG News http://www.odtug.com/p/bl/et/blogaid=717&source=1 Stay up to date on all things ODTUG: Read about the ODTUG Innovation Award, Kscope17 Updates, ODTUG Community News and Upcoming ODTUG Webinars. ODTUG http://www.odtug.com/p/bl/et/blogaid=717&source=1 Tue May 09 2017 14:15:07 GMT-0400 (EDT) BIP and Mapviewer Mash Up V https://blogs.oracle.com/xmlpublisher/bip-and-mapviewer-mash-up-v <p>The last part on maps, I promise ... its been a fun ride for me at least :0) If you need to catch up on previous episodes:</p> <p> </p> <ul> <li> <a target="_blank" href="/xmlpublisher/entry/bip_and_mapviewer_mash_up_i">BIP and Mapviewer Mash Up I</a></li> <li><a target="_blank" href="/xmlpublisher/entry/bip_and_mapviewer_mash_up">BIP and Mapviewer Mash Up II</a></li> <li><a href="/xmlpublisher/entry/bip_and_mapviewer_mash_up1" target="_blank">BIP and Mapviewer Mash Up III</a></li> <li><a href="/xmlpublisher/entry/bip_and_mapviewer_mash_up3" target="_blank">BIP and Mapviewer Mash Up IV</a></li> </ul> <p>In this post we're looking at map quality. 
On the left is a JPG map; on the right, an SVG output.</p> <img src="http://cdn.app.compendium.com/uploads/user/e7c690e8-6ff9-102a-ac6d-e4aebca50425/f4a5b21d-66fa-4885-92bf-c4e81c06d916/Image/28cc44332cb069325ddf8a7b28c01dd3/mapv6.jpg" /> <img src="http://cdn.app.compendium.com/uploads/user/e7c690e8-6ff9-102a-ac6d-e4aebca50425/f4a5b21d-66fa-4885-92bf-c4e81c06d916/Image/130bf3d9f27af5ac359635d49346050f/mapv8.jpg" /> <p>Even if we ignore the fact that they have different levels of features or layers, imagine getting the maps into a PDF and then printing them. It's pretty clear that the SVG version of the map is going to render better on paper compared to the JPG.</p> <p>Getting the SVG output from mapviewer is pretty straightforward; getting BIP to render it requires a little bit of effort. I have mentioned the XML request that we construct and then do a variable substitution on in our servlet. All we need to do is add another option for the requested output. Mapviewer supports several flavors of SVG:<br /></p> <ul> <li>If you specify SVG_STREAM, the stream of the image in SVG Basic (SVGB) format is returned directly.</li> <li>If you specify SVG_URL, a URL to an SVG Basic image stored on the MapViewer host system is returned.</li> <li>If you specify SVGZ_STREAM, the stream of the image in SVG Compressed (SVGZ) format is returned directly.</li> <li>If you specify SVGZ_URL, a URL to an SVG Compressed image stored on the MapViewer host system is returned. SVG Compressed format can effectively reduce the size of the SVG map by 40 to 70 percent compared with SVG Basic format, thus providing better performance.</li> <li>If you specify SVGTINY_STREAM, the stream of the image in SVG Tiny (SVGT) format is returned directly.</li> <li>If you specify SVGTINY_URL, a URL to an SVG Tiny image stored on the MapViewer host system is returned. (The SVG Tiny format is designed for devices with limited display capabilities, such as cell phones.)</li> </ul> <p>Don't panic, I've looked at them all for you and we need to use SVGTINY_STREAM. This sends back a complete XML file representation of the map in SVG format. We have a couple of issues:</p> <ol> <li>We need to strip the XML declaration from the top of the file: &lt;?xml version=&quot;1.0&quot; encoding=&quot;utf-8&quot;?&gt; If we don't, BIP will choke on the SVG. Being lazy, I just used a string function to strip the line out in my servlet; see the sketch after this list.</li> <li>We need to stream the SVG back as text, so we need to set the CONTENT_TYPE for the servlet to 'text/javascript'.</li> <li>We need to handle the SVG when it comes back to the template.</li> </ol> 
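<p>Here is a minimal sketch of that string manipulation, assuming the MapViewer response has already been read into a Java String; the variable name is illustrative rather than taken from the actual servlet:</p> <pre><br />// svgXml holds the complete SVGTINY_STREAM response from MapViewer (name is illustrative),<br />// e.g. &lt;?xml version="1.0" encoding="utf-8"?&gt;&lt;svg ...&gt;...<br />// Strip the XML declaration from the front so BIP does not choke on it.<br />if (svgXml.startsWith("&lt;?xml")) {<br />    svgXml = svgXml.substring(svgXml.indexOf("?&gt;") + 2);<br />}<br /></pre>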
Tim Dexter https://blogs.oracle.com/xmlpublisher/bip-and-mapviewer-mash-up-v Mon May 08 2017 12:38:46 GMT-0400 (EDT) Supply, Demand and Data https://medium.com/red-pill-analytics/supply-demand-and-data-4f75c0701696?source=rss----abcc62a8d63e---4 <figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*GzRKNNxmm0m22uJGzQLNfA.jpeg" /><figcaption>Photo Credit: <a href="https://unsplash.com/collections/468813/curved-architectures?photo=tA5eSY_hay8">Joel Mbugua</a></figcaption></figure><h4>Explaining Data Failure with a Little Econ 101</h4><p><a href="https://medium.com/red-pill-analytics/bi-dentity-crisis-658ef4daa8">In my last blog post</a>, I discussed how I think that the way most people talk about BI and analytics is incorrect, how data is fluid, and some of the biggest issues I see when helping clients. I’m going to continue that discussion in this blog post, but pivot the conversation from what is wrong to some things we can do to change it.</p><p>As I’ve said before, the mindset that we bring to the table when working with data is a big part in determining our success. Changing this thinking is hard, and if you’re expecting a click-bait list of <a href="https://www.youtube.com/watch?v=dQw4w9WgXcQ">“5 Easy Ways to Get More From Analytics”</a>, you’re in the wrong place. We can, however, move further from a state of immature data to mature data by challenging the status quo. Below is the first of a series of blogs that will provide my thoughts on how we can change.</p><h4>Is it a Project, or a Service?</h4><p>Yes, yes. I know what you’re thinking — “Another ‘as a Service’ thing? Seriously?” I don’t blame you for that either; with all of the PaaS, SaaS, IaaS, DBaaS, etc. out there, it is a bit annoying. But… bear with me on this. I’m thinking about this from an economic point of view, and I cannot help but think of data like a utility.</p><h4>Laying A Little Ground Work</h4><p>Let me start with some basic Econ 101. For any who are unfamiliar with the basis of modern economic thinking, let’s begin with <a href="https://en.wikipedia.org/wiki/Supply_and_demand">supply and demand</a>. In a marketplace, we have suppliers and consumers. Suppliers, well, supply stuff — whether that be mobile phones, ride sharing services, concert tickets or whiskeys. And you guessed it, the consumers consume those goods and services that are supplied by the suppliers. If the suppliers notice that their supplies are not selling, they will typically lower prices to engage more consumers. And if more and more consumers are buying at a lower price, the suppliers will raise the price again. The same can be said for consumers, as they have the ability to affect the amount of goods and services in the market by spending their finite time and money on those goods and services supplied. When the suppliers and consumers agree on prices and quantities in the market, this is called the equilibrium.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*GtTCP8Ooe-AA0dpErLEMRA.png" /><figcaption>Curves courtesy of<a href="https://en.wikipedia.org/wiki/Alfred_Marshall"> Alfred Marshall</a></figcaption></figure><p>So what is the point of the econ lesson (other than an excuse for me to talk about econ)? Let’s stop and think about these concepts for a moment and how they can relate to data in organizations; we’ll begin with the demand curve.</p><p>The business users have a demand for data; they need it to get their jobs done. If we look at a demand curve, we notice that only a few people want (or need) data that is high cost (left side). We can see that at low cost and high quantity, nearly everyone wants data (right side). Maybe a better way to think about price and quantity would be effort to insight and the amount of data, respectively. And this starts to make more sense, because everyone wants a ton of data that is low effort. Meanwhile, the appetite for taking data that is high effort to clean, prepare and analyze for a small outcome is low.</p><p>Now, let’s stop for a moment and think about the supply curve. Data suppliers aren’t willing to go through the effort of providing just a little bit of data; they prefer when there is sustainable demand. We can see that as the amount of data and effort go up, the more suppliers are willing to be in the marketplace. 
And this makes sense, because as more and more data is available, the higher the demand for that data will be (we can save the law of diminishing returns for a different blog).</p><h4>Point to Point Thinking</h4><p>Of course, the service being provided is data, infrastructure and tools to access it, and expertise to help make that happen. Some of you reading may be on the supply side of this, and some may be on the demand side. Now, we all know that projects happen in companies to bring more data into a new or current analytics platform. Or maybe the project itself is a new platform. However, if we think about data and its relevant components as goods and services, we stop thinking about it as a singular project. I think it’s best to think of these projects as an enhancement of a service, because in reality, that’s what it is. This new addition will lower the cost and increase the quantity of data for the consumers. How? Let’s take a look.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*2hCxNhgh1cqPxFN8rsAc9w.png" /></figure><p>We can see our original supply curve, which is “S1” in the example to the left. We can also see that we have a nice steady equilibrium between the supply and demand curves at “E1”. However, by implementing a project that will deliver new content and data to the users, we have changed the amount of effort and data that our consumers use to get to those answers. You can see that “E2” — the new equilibrium — <em>has lower effort and more data</em>. A win-win for everyone in my book.</p><p>We can also think about how an initiative to engage more users will affect the demand curve. It would shift the curve to the right as well. And this point is salient because it begins to illustrate how fluid data is in an organization. Over time these projects will affect data demand and supply, the amount available and the effort involved to use that data. However, the marketplace will exist whether or not there are projects. There are still consumers to support, and there are still maintenance items to take care of. If we’re stuck in a mindset where we move from project to project, or initiative to initiative, we have lost the bigger picture by keeping our focus on the short-term deadlines of a specific project. When this happens, not only does the vision for better data suffer, but concretely, our data services and thus, our consumers suffer as well.</p><h4>Back to Fluidity</h4><p>Because of the data marketplace that occurs in companies, it is better to think about data as a service or, better yet, a utility that is provided by some in the organization. I say this because, like a utility, modern businesses cannot work without data. (In economics, utilities also have a slightly different supply and demand structure, but maybe I’ll save that for a different blog post). This is also important because it helps frame the way that organizational data providers should be thinking about their role within the organization, how to interact with their consumers (the users) and how to structure their teams.</p><p>I also think that this is an apt analogy because as a consumer, I want as little interruption of service as possible. If my water utility has a burst pipe that prevents me from getting water out of the tap, I want them to fix it and let me know when service has resumed. If they are doing normal maintenance that does not affect me, I don’t want to hear about it. 
If the quality of my water is poor, I want to be able to call them and talk to a support agent that can address my concern, or at least take my complaint for further investigation. I want to be billed on time and without error. These are reasonable expectations, in my opinion. And the same can be said about data in an organization. Consumers want to be able to feel that they are heard and get the services they need. If service is interrupted, they want to know when it will be resolved. And most maintenance items (patching, upgrading, code optimizing, etc.) do not need consumer buy-in (another issue I see from time to time), because these are items that should not have any direct effect on the consumer.</p><p>How does this translate to data suppliers and consumers? If we want uninterrupted service from data suppliers, they need to be given the resources to succeed. This may mean budget for new talent, it may mean more licenses for consumers to become power users, or it may mean effort to move to an agile framework. I also think it is in everyone’s best interest if persons developing new content (doing project work) are not pulled into “keeping the lights on” requests as well. Segmenting these functions into teams heightens quality and on-time delivery. Some organizations have these types of structures in place already, but many do not.</p><p>We also need to recognize that the consumers have an important role to play in shaping the data marketplace. If they are disengaged, how can the suppliers know what the preferences of the consumers are? If there is no dialogue, then suppliers are taking shots in the dark at what they think the consumers need and desire. And if consumers are leaving the marketplace because they do not find something they want or need, that is trouble for everyone. That situation breeds consumers who have to be their own suppliers as well. And we all know how that story ends…</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/300/1*0LUSyJ85FGZA1Hs9A6mIiA.gif" /><figcaption>Let the data chaos commence!</figcaption></figure><p>This can be wholly avoided if we do two things. Consumers, speak up. 
Suppliers, listen up.</p><hr><p><a href="https://medium.com/red-pill-analytics/supply-demand-and-data-4f75c0701696">Supply, Demand and Data</a> was originally published in <a href="https://medium.com/red-pill-analytics">Red Pill Analytics</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p> Phil Goerdt https://medium.com/p/4f75c0701696 Fri May 05 2017 15:30:33 GMT-0400 (EDT) BI and Reporting Kscope17 Track Highlights – Tracy McMullen http://www.odtug.com/p/bl/et/blogaid=716&source=1 Tracy McMullen, BI and Reporting Track Lead for ODTUG Kscope17, shares her top five BI and Reporting Track Sessions with reasons why they are her "don't miss sessions" at Kscope17: ODTUG http://www.odtug.com/p/bl/et/blogaid=716&source=1 Fri May 05 2017 09:54:00 GMT-0400 (EDT) A focus on Higher Education, HEDW 2017 http://www.rittmanmead.com/blog/2017/05/a-focus-on-higher-education-hedw-2017/ <p>First, before I get into a great week of Higher Education Data Warehousing and analytics discussions, I want to thank the HEDW board and their membership. They embraced us with open arms in our first year of conference sponsorship. Our longtime friend and HEDW board member, Phyllis Wykoff, from Miami University of Ohio even spent some time with us behind the booth! </p> <p><img src="http://www.rittmanmead.com/blog/content/images/2017/05/University_of_Arizona_mall-1024x466.jpg" alt=""> HEDW was in the lovely desert scape of Tucson, AZ at the University of Arizona. Sunday was a fantastic day of training, followed by three days of outstanding presentations from member institutions and sponsors. Rittman Mead wanted to show how important the higher education community is to us, so along with me, we had our CEO Jon Mead, our CTO Jordan Meyer, and our US Managing Director Charles Elliott. 
If our AirBnB had ears, it would have heard several solutions to the problems of the world as well as discussions of the fleeting athleticism of days gone past. But alas, that will have to wait.</p> <p>While at the conference, we had a multitude of great conversations with member institutions and there were a few themes that stuck out to us with regard to common issues and questions from our higher education friends. I will talk a little bit about each one below with some context on how Rittman Mead is the right fit to be partners in addressing some big questions out there.</p> <h1 id="legacyinvestmentvsbitooldiversificationorboth">Legacy Investment vs BI tool Diversification (or both)</h1> <p>One theme that was evident from hour one was the influx of Tableau in the higher education community. Rittman Mead is known for being the leader in the Oracle Business Intelligence thought and consulting space and we very much love the OBIEE community. With that said, we have, like all BI practitioners, seen the rapid rise of Tableau within departments and lately as an enterprise solution. It would be silly for the OBIEE community to close their eyes and pretend that it isn’t happening. There are great capabilities coming out of Oracle with Data Visualization but the fact is, people have been buying Tableau for a few years and Tableau footprints exist within organizations. This is a challenge that isn't going away.</p> <h1 id="analyticsmodernizationapproaches">Analytics Modernization Approaches</h1> <p>We had a ton of conversations about how to include newer technologies in institutions’ business intelligence and data warehousing footprints. There is clearly a desire to see how big data technologies like Hadoop, data science topics like the R statistical modeling language, and messaging services like Kafka could positively impact higher education organizations. Understanding how you may eliminate batch loads, predict student success, know if potential financial aid is not being used, know more about your students with analysis of student transactions with machine learning, and store more data with distributed architectures like Hadoop are all situations that are readily solvable. Rittman Mead can help you prioritize what will make the biggest value impact with a Modernization Assessment. We work with organizations to make good plans for implementation of modern technology at the right place and at the right time. If you want more info, <a href="mailto:info+c@rittmanmead.com">please let us know.</a></p> <h1 id="sometimesweneedalittlehelpfromourfriends">Sometimes we need a little help from our friends</h1> <p>Members of HEDW need a different view or another set of eyes sometimes and the feedback we heard is that consulting services like ours can seem out of reach with budgets tighter than ever. That is why we recently announced the Rittman Mead Expert Service Desk. Each month, there are hours available to spend however you would like with Rittman Mead’s experts. Do you have a mini project that never seems to get done? Do you need help with a value proposition for a project or upgrade? Did production just go down and you can’t seem to figure it out? With <a href="https://www.rittmanmead.com/expert-service-desk/">Expert Service desk</a>, you have the full Rittman Mead support model at your fingertips. 
<a href="mailto:info+c@rittmanmead.com">Let us know</a> if you might want a little help from your friends at Rittman Mead.</p> <h1 id="towrapup">To wrap up</h1> <p>Things are a changing and sometimes it is tough to keep up with all of the moving parts. Rittman Mead is proud to be a champion of sharing new approaches and technologies to our communities. Spending time this week with our higher education friends is proof more that our time spent sharing is well worth it. There are great possibilities out there and we look forward to sharing them throughout the year and at <a href="https://hedw.org">HEDW 2018 in Oregon!</a> </p> Jason Davis 9d029bde-f070-46e3-a342-637796ec9386 Wed May 03 2017 10:04:26 GMT-0400 (EDT) Big Data and Data Warehousing Kscope17 Track Highlights – Michael Rainey http://www.odtug.com/p/bl/et/blogaid=715&source=1 Here is an overview of Big Data and Data Warehousing sessions Track Lead Michael Rainey is most looking forward to at Kscope17 with reasons why he thinks you should attend them: ODTUG http://www.odtug.com/p/bl/et/blogaid=715&source=1 Mon May 01 2017 09:55:08 GMT-0400 (EDT) 2017 ODTUG Innovation Award http://www.odtug.com/p/bl/et/blogaid=711&source=1 ODTUG is excited to announce the 2nd annual ODTUG Innovation Award. The ODTUG Innovation Award honors excellence in creative, effective, innovative use of Oracle development tools within ODTUG's supported communities - ADF, APEX, BI, Database, EPM and Career. ODTUG http://www.odtug.com/p/bl/et/blogaid=711&source=1 Mon May 01 2017 08:32:18 GMT-0400 (EDT) Deliver Reports to Document Cloud Services! https://blogs.oracle.com/xmlpublisher/entry/when_on_cloud_deliver_documents <p>Greetings !</p> <p>In release 12.2.1.1, BI Publisher added a new feature - Delivery to Oracle Document Cloud Services (ODCS). Around the same time, BI Publisher was also certified against JCS 12.2.1.x and therefore, today if you have hosted your BI Publisher instance on JCS then we recommend Oracle Document Cloud Services as the delivery channel. Several reasons for this:</p> <ol> <li>Easy to configure and manage ODCS in BI Publisher on Oracle Public Cloud. No port or firewall issues.</li> <li>ODCS offers a scalable, robust and secure document storage solution on cloud.</li> <li>ODCS offers document versioning and document metadata support similar to any content management server<br /></li> <li>Supports all business document file formats relevant for BI Publisher</li> </ol> <p> </p> <p> </p> <p> </p> <p><b>When to use ODCS?</b></p> <p><b> </b>ODCS can be used for all different scenarios where a document need to be securely stored in a server that can be retained for any duration. The scenarios may include:</p> <ul> <li>Bursting documents to multiple customers at the same time. <br /></li> <ul> <li>Invoices to customers</li> <li>HR Payroll reports to its employees</li> <li>Financial Statements</li> </ul> </ul> <ul> <li>Storing large or extremely large reports for offline printing</li> <ul> <li>End of the Month/Year Statements for Financial Institutions</li> <li>Consolidated department reports</li> <li>Batch reports for Operational data</li> </ul> <li>Regulatory Data Archival</li> <ul> <li>Generating PDF/A-1b or PDF/A-2 format documents</li> </ul> </ul> <p> </p> <p> </p> <p> </p> <p> </p> <p><b>How to Configure ODCS in BI Publisher?</b></p> <p>Configuration of ODCS in BI Publisher requires the&nbsp; URI, username and password. Here the username is expected to have access to the folder where the files are to be delivered. 
</p> <p><img width="767" height="252" src="https://blogs.oracle.com/xmlpublisher/resource/2017-04-29/ODCSConfig2.jpg" /><br /></p><br /> <p> </p> <p> </p> <p> </p> <p><b>How to Schedule and Deliver to ODCS?</b></p> <p>Delivery to ODCS can be managed through both - a Normal Scheduled Job and a Bursting Job.</p> <p>A Normal Scheduled Job allows the end user to select a folder from a list of values as shown below </p> <p><img width="771" height="367" src="https://blogs.oracle.com/xmlpublisher/resource/2017-04-29/ODCS_Scheduling.jpg" /><br /></p> <p> </p> <p>\</p> <p>In case of Bursting Job, the ODCS delivery information is to be provided in the bursting query as shown below:</p> <p><img width="772" height="346" src="https://blogs.oracle.com/xmlpublisher/resource/2017-04-29/ODCS_Bursting.jpg" /><br /></p> <p> </p> <p> </p> <p><b>Accessing Document in ODCS</b></p> <p>Once the documents are delivered to ODCS, they can be accessed by user based on his access to the folder, very similar to FTP or WebDAV access. </p> <p> </p> <p> </p> <p> </p> <p> </p> <p><img width="750" height="246" src="https://blogs.oracle.com/xmlpublisher/resource/2017-04-29/ODCS_Report.jpg" /></p> <p> </p> <p>That's all for now. Stay tuned for more updates !<br /></p> PradeepSharma-Oracle https://blogs.oracle.com/xmlpublisher/entry/when_on_cloud_deliver_documents Fri Apr 28 2017 17:32:26 GMT-0400 (EDT) The Case for ETL in the Cloud - CAPEX vs OPEX http://www.rittmanmead.com/blog/2017/04/case-for-etl-in-the-cloud-capex-opex/ <p>Recently Oracle <a href="https://www.oracle.com/corporate/pressrelease/launches-integration-cloud-service-021317.html">announced</a> a new cloud service for Oracle Data Integrator. Because I was helping our sales team by doing some estimates and statements of work, I was already thinking of costs, ROI, use cases, and the questions behind making a decision to move to the cloud. I want to explore what is the business case for using or switching to ODICS? </p> <h3 id="oracledataintegrationcloudservices">Oracle Data Integration Cloud Services</h3> <p>First, let me briefly talk about what is Oracle Data Integration Cloud Services? <a href="https://blogs.oracle.com/dataintegration/entry/introducing_oracle_data_integrator_cloud">ODICS</a> is ODI version 12.2.1.2 available on Oracle’s Java Cloud Service known as JCS. <a href="http://www.ateam-oracle.com/lift-and-shift-to-oracle-data-integrator-cloud-service-odics-moving-your-repository-to-the-cloud/">Several</a> <a href="http://www.oracle.com/technetwork/articles/bi/radtke-giampaoli-odi-cloud-3433819.html">posts</a> <a href="http://www.ateam-oracle.com/integrating-oracle-data-integrator-odi-on-premise-with-cloud-services/">cover</a> the implementation, migration, and technical aspects of using ODI in the cloud. Instead of covering the ‘how’, I want to talk about the ‘when’ and ‘why’. </p> <h3 id="usecases">Use Cases</h3> <p>What use cases are there for ODICS? <br> 1. You have or soon plan to have your data warehouse in Oracle’s Cloud. In this situation, you can now have your ODI J2EE agent in the same cloud network, removing network hops and improving performance. <br> 2. If you currently have an ODI license on-premises, you are allowed to install that license on Oracle’s JCS at the JCS <a href="https://cloud.oracle.com/en_US/java/pricing">prices</a>. 
See <a href="http://docs.oracle.com/en/cloud/paas/java-cloud/jscug/leveraging-premises-licenses-oracle-public-cloud.html#JSCUG-GUID-B4A53E27-B580-492D-80B0-F909F470C7D4">here</a> for more information about installing on JCS. <br> 3. If you currently have an ODI license on-premises, and you don't need the full functionality of ODI JEE agents, you can also use standalone ODI agents in the <a href="https://cloud.oracle.com/en_US/compute">Oracle Compute Cloud</a>. These use cases are described in a webinar posted in the <a href="http://www.oracle.com/technetwork/middleware/data-integrator/odi-11g-webcast-archive-367128.html">PM Webcast Archive</a>. </p> <h3 id="whenandwhy">When and Why?</h3> <p>So when would it make sense to move towards using ODICS? These are the scenarios I imagine being the most likely: <br> 1. <em>A new customer or project.</em> If a business doesn’t already have ODI, this allows them to decide between an all on-premises solution or a complete solution in Oracle’s cloud. With monthly and metered costs, the standard large start-up costs for hardware and licenses are avoided, making this solution available to more small and medium businesses. <br> 2. <em>An existing business that already has ODI and is considering moving its DW to the cloud.</em> In this scenario, a possible solution would be to move the current license of ODI to JCS (or Compute Cloud) and begin using that to move data, all while tracking JCS costs. When the time comes to review licensing obligations for ODI, compare the calculation for a license to the calculation of expected usage for ODICS and see which one makes the most sense (cents?). For a more detailed explanation of this point, let’s talk CAPEX and OPEX! </p> <h3 id="capexvsopex">CAPEX vs. OPEX</h3> <p>CAPEX and OPEX are short for Capital Expense and Operational Expense, respectively. From a finance and budgeting perspective, these two show up very differently on financial reports, which often has tax considerations for businesses. Traditionally, a data warehouse project was a very large initial capital expenditure, with hardware, licenses, and project costs, landing it very solidly in CAPEX. Over the last several years, sponsorship for these projects has shifted from CIOs and IT Directors to CFOs and Business Directors. With this shift, several businesses would rather budget and see these expenses monthly as an operating expense, as opposed to having large capital expenses every few years, putting these projects into OPEX instead. </p> 
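<p>To make the CAPEX vs. OPEX point concrete, here is a back-of-the-envelope comparison. Every figure below is hypothetical and purely for illustration; your Oracle Sales Representative has the real numbers.</p> <pre><br />-- all figures hypothetical, for illustration only<br />CAPEX route: $200,000 perpetual license up front<br />             + 22% annual support ($44,000/year x 5 years = $220,000)<br />             = $420,000 over 5 years, mostly paid up front<br /><br />OPEX route:  $5,000/month metered cloud service<br />             x 60 months = $300,000 over 5 years, expensed monthly<br /></pre> <p>The totals are not the whole story; the point is the shape of the spend: one large up-front outlay plus annual renewals, versus a predictable monthly operating cost.</p>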
<h3 id="conclusion">Conclusion</h3> <p>Having monthly and metered service costs in the cloud that are fixed or predictable is appealing. As a bonus, this style of service is highly flexible and can scale up (or down) as demand changes. If you are or will soon be in the process of planning for your future business analytics needs, we provide <a href="https://www.rittmanmead.com/services/">expert services</a>, <a href="https://www.rittmanmead.com/solutions-audit/">assessments</a>, <a href="https://www.rittmanmead.com/accelerators/">accelerators</a>, and executive consultations for assisting with these kinds of decisions. When it is time to talk about actual numbers, your Oracle Sales Representative will have the best <a href="https://cloud.oracle.com/en_US/data-integrator/pricing">prices</a>. Please <a href="mailto:info+bwodics@rittmanmead.com">get in touch</a> for more information.</p> Becky Wagner 6e6dc223-cf9e-404b-a254-584987c0c6ed Thu Apr 27 2017 12:12:14 GMT-0400 (EDT) OUG Ireland Meetup 11th May http://www.oralytics.com/2017/04/oug-ireland-meetup-11th-may.html <p>The next <a href="https://www.meetup.com/meetup-group-jBUwhCLZ/events/238411613/">OUG Ireland Meetup is happening on 11th May</a>, in the Bank of Ireland Grand Canal Dock. This is a free event and is open to everyone. You don't have to be a member to attend.</p> <p>Following on from a very successful 2-day OUG Ireland Conference with over 250 attendees, we have organised our next meetup. This was mentioned during the opening session of the conference.</p> <p><a href="https://www.meetup.com/meetup-group-jBUwhCLZ/events/238411613/"><img src="https://lh3.googleusercontent.com/-1za4baymRIQ/WQD_JtG5K6I/AAAAAAAAMKE/xcOBUScYp3Y1u--N0HTtEybXC0AliJeIQCHM/NewImage.png?imgmax=800" alt="NewImage" title="NewImage.png" border="0" width="238" height="91" /></a></p> <p>We typically have 2 presentations at each Meetup, and on 11th May we have:</p> <p><span style='text-decoration:underline;'><strong>1. Oracle Analytics Cloud Service</strong></span></p>Oracle Analytics Cloud Service was only released a few weeks ago, and we have some local people who have been working with the beta and early adopter releases. They will be giving us some insights on this new product and how it compares with other analytics products like Oracle Data Visualization and OBIEE. <p><span style='text-decoration:underline;'><strong>2. Running Oracle DataGuard on RAC on Oracle 12c</strong></span></p><p>The second presentation will be on using Oracle DataGuard on RAC on Oracle 12c. We have a very experienced DBA talking about his experiences of using these products, how to work around some key bugs, and situations to be aware of for administration purposes. Lots of valuable information to be gained.</p> <p><a href="https://www.meetup.com/meetup-group-jBUwhCLZ/events/238411613/">Check out the full agenda and register for the Meetup by clicking on this link or on the Meetup image above</a>.</p> <p>There will be some food and refreshments available for you to enjoy.</p> <p>The Meetup will be in Bank of Ireland, Grand Canal Dock. 
This venue is a very popular location for Meetups in Dublin.</p> <p><a href="https://www.meetup.com/meetup-group-jBUwhCLZ/events/238411613/"><img src="https://lh3.googleusercontent.com/-rIpDnLfczoY/WQD_du-QpOI/AAAAAAAAMKI/GQPG-nt9rR4TOXK4lZcpL1bGGj23Bt19wCHM/NewImage.png?imgmax=800" alt="NewImage" title="NewImage.png" border="0" width="309" height="200" /></a></p> Brendan Tierney tag:blogger.com,1999:blog-4669933501315263808.post-3227766059194818477 Thu Apr 27 2017 09:23:00 GMT-0400 (EDT) EPM Business Content Track Kscope17 Session Highlights – Tiffany Briseno http://www.odtug.com/p/bl/et/blogaid=714&source=1 Here is an overview of a few sessions Track Lead Tiffany Briseno is most looking forward to at ODTUG Kscope17 and why she will be attending them: ODTUG http://www.odtug.com/p/bl/et/blogaid=714&source=1 Thu Apr 27 2017 08:52:01 GMT-0400 (EDT) Data Visualization and Advanced Analytics Kscope17 Track Highlights – Kevin McGinley http://www.odtug.com/p/bl/et/blogaid=713&source=1 Here is an overview of Kscope17 sessions Kevin McGinley is most looking forward to and his thoughts on why you should attend them, too: ODTUG http://www.odtug.com/p/bl/et/blogaid=713&source=1 Mon Apr 24 2017 08:08:06 GMT-0400 (EDT) Setting up Oracle Database on Docker http://www.oralytics.com/2017/04/setting-up-oracle-database-on-docker.html <p>A couple of days ago it was <a href="https://blog.docker.com/2017/04/oracle-database-dev-tools-in-docker-store/">announced</a> that several Oracle images were available on the <a href="https://store.docker.com/search?q=oracle&source=verified&type=image">Docker Store</a>.</p> <p>This is by far the easiest Oracle Database install I have ever done!</p> <p>You simply have no excuse now for <span style='text-decoration:underline;'>not</span> installing and using an Oracle Database. Just go and do it now!</p> <p>The following steps outline what I did to get an Oracle 12.1c Database.</p> <p><span style='text-decoration:underline;'>1. Download and Install Docker</span></p><p>There isn't much to say here. Just go to the <a href="https://www.docker.com/">Docker website</a>, select the version of Docker for your OS, and install it.</p> <p>You will probably need to create an account with Docker.</p> <p><a href="https://www.docker.com/"><img src="https://lh3.googleusercontent.com/-Ouq0Sh9WveE/WPkh9Hr1FEI/AAAAAAAAMIM/n4yvQH8Izqk/NewImage.png?imgmax=800" alt="NewImage" title="NewImage.png" border="0" width="337" height="232" /></a></p> <p>After Docker is installed it will automatically start and will be placed in your system tray etc. so that it will automatically start each time you restart your laptop/PC.</p> <p><span style='text-decoration:underline;'>2. Adjust the memory allocation</span></p><p>From the system tray open the Docker application. In the Advanced section allocate a bit more memory. This will just make things run a bit smoother. Be a bit careful about how much you allocate.</p> <img src="https://lh3.googleusercontent.com/-ejiJN3ckfAE/WPkmeYUjwVI/AAAAAAAAMIY/6NHZs5ZeBNw/NewImage.png?imgmax=800" alt="NewImage" title="NewImage.png" border="0" width="190" height="223" /> <p>In the General section check the tick-box for automatically backing up Docker VMs. This is assuming you have backups set up, for example with Time Machine or something similar.</p> 
<p><span style='text-decoration:underline;'>3. Download & Edit the Oracle Docker Environment File</span></p><p>On the <a href="https://store.docker.com/images/oracle-database-enterprise-edition?tab=description">Oracle Database download Docker webpage</a>, click on the Get Content button.</p> <p><img src="https://lh3.googleusercontent.com/-JRV5JThwYEU/WPkskSu4TlI/AAAAAAAAMIo/nbnxhzXqmbM/NewImage.png?imgmax=800" alt="NewImage" title="NewImage.png" border="0" width="599" height="135" /></p> <p>You will have to enter some details like your name, company, job title and phone number, then click on the check-box, before clicking on the Get Content button. All of this is necessary for the Oracle License agreement.</p> <p>The next screen lists the Docker Services and Partner Services that you have signed up for.</p><p><img src="https://lh3.googleusercontent.com/--RmLTgbiYzs/WPktV8Q8ScI/AAAAAAAAMIw/8_bRSdpCjT8/NewImage.png?imgmax=800" alt="NewImage" title="NewImage.png" border="0" width="567" height="129" /></p> <p>Click on the Setup button to go to the webpage that contains some of the setup instructions.</p> <p><img src="https://lh3.googleusercontent.com/-BKp2RYM3Hho/WPktiG1XFSI/AAAAAAAAMI0/-2wigSa7Ygc/NewImage.png?imgmax=800" alt="NewImage" title="NewImage.png" border="0" width="515" height="221" /></p> <p>The first thing you need to do is to copy the sample Environment File. Create a new file on your laptop/desktop and paste the environment file contents into the file. There are a few edits you need to make to this file. The following is the edited/modified Environment File that I created and used. The changes are for DB_SID, DB_PASSWD and DB_DOMAIN.</p> <pre><br />####################################################################<br />## Copyright(c) Oracle Corporation 1998,2016. All rights reserved.##<br />## ##<br />## Docker OL7 db12c dat file ##<br />## ##<br />####################################################################<br /><br />##------------------------------------------------------------------<br />## Specify the basic DB parameters<br />##------------------------------------------------------------------<br /><br />## db sid (name)<br />## default : ORCL<br />## cannot be longer than 8 characters<br /><br /><strong>DB_SID=ORCL</strong><br /><br />## db passwd<br />## default : Oracle<br /><br /><strong>DB_PASSWD=oracle</strong><br /><br />## db domain<br />## default : localdomain<br /><br /><strong>DB_DOMAIN=localdomain</strong><br /><br />## db bundle<br />## default : basic<br />## valid : basic / high / extreme<br />## (high and extreme are only available for enterprise edition)<br /><br />DB_BUNDLE=basic<br /><br />## end<br /></pre> <p>I called this file '<code>docker_ora_db.txt</code>'.</p> <p><span style='text-decoration:underline;'>4. Download and Configure Oracle Database for Docker</span></p>The following command will download and configure the Docker image: <pre><br />$ docker run -d --env-file ./docker_ora_db.txt -p 1527:1521 -p 5507:5500 -it --name dockerDB121 --shm-size="8g" store/oracle/database-enterprise:12.1.0.2<br /></pre> <p>This command will create a container called 'dockerDB121'. The 121 at the end indicates the version number of the Oracle Database. If you end up with a number of containers containing different versions of the Oracle Database, then you need some way of distinguishing them.</p> <p>Take note of the port mapping in the above command, as you will need this information later.</p> <p>When you run this command, the Docker image will be downloaded from the Docker website and unzipped, and the container will be set up and ready to run.</p> <p><img src="https://lh3.googleusercontent.com/-kXvnz_XNETY/WPnLyFQw_KI/AAAAAAAAMJE/8PuVTt8XdKY/NewImage.png?imgmax=800" alt="NewImage" title="NewImage.png" border="0" width="600" height="120" /></p> 
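<p>At this point it is worth a quick sanity check that the container exists; this is just standard Docker (the container ID below is a placeholder, and the columns are abbreviated):</p> <pre><br />$ docker ps -a<br />CONTAINER ID   IMAGE                                       STATUS         NAMES<br />1234abcd5678   store/oracle/database-enterprise:12.1.0.2   Up 2 minutes   dockerDB121<br /></pre>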
<p><span style='text-decoration:underline;'>5. Log in and Finish the Configuration</span></p><p>Although the Docker container has been set up, there is still a database configuration to complete. The following image shows that the new container is there.</p> <p><img src="https://lh3.googleusercontent.com/-xrbnDqIf4gI/WPnR2kP4NWI/AAAAAAAAMJU/bYHNIYavdoA/NewImage.png?imgmax=800" alt="NewImage" title="NewImage.png" border="0" width="600" height="80" /></p> <p>To complete the database setup, you will need to log into the Docker container.</p> <pre><br />docker exec -it dockerDB121 /bin/bash<br /></pre> <p>Then run the Oracle Database setup and startup script (as the root user).</p> <pre><br />/bin/bash /home/oracle/setup/dockerInit.sh<br /></pre> <img src="https://lh3.googleusercontent.com/-vnY8hgbHAag/WPnUh6myJsI/AAAAAAAAMJg/I5meJvslyJw/NewImage.png?imgmax=800" alt="NewImage" title="NewImage.png" border="0" width="267" height="154" /> <p>This script can take a few minutes to run. On my laptop it took about 2 minutes.</p> <p>When this is finished, the terminal session will appear to hang, as the script goes into a loop.</p> <p>To run any other commands in the container you will need to open another terminal session and connect to the Docker container. So go open one now.</p> <p><span style='text-decoration:underline;'>6. Log into the Database in Docker</span></p><p>In a new terminal window, connect to the Docker container and then switch to the oracle user.</p> <pre><br />su - oracle<br /></pre> <p>Check that the Oracle Database processes are running (ps -ef) and then connect as SYSDBA.</p> <pre><br />sqlplus / as sysdba<br /></pre> <p>Let's check out the database.</p> <pre><br />SQL> select name,DB_UNIQUE_NAME from v$database;<br /><br />NAME DB_UNIQUE_NAME<br />--------- ------------------------------<br />ORCL ORCL<br /><br /><br />SQL> SELECT v.name, v.open_mode, NVL(v.restricted, 'n/a') "RESTRICTED", d.status<br /> FROM v$pdbs v, dba_pdbs d<br /> WHERE v.guid = d.guid<br /> ORDER BY v.create_scn;<br /><br /><br />NAME OPEN_MODE RES STATUS<br />------------------------------ ---------- --- ---------<br />PDB$SEED READ ONLY NO NORMAL<br />PDB1 READ WRITE NO NORMAL<br /></pre> <p>And the <code>tnsnames.ora</code> file contains the following:</p> <pre><br />ORCL = (DESCRIPTION = <br /> (ADDRESS = (PROTOCOL = TCP)(HOST = 0.0.0.0)(PORT = 1521))<br /> (CONNECT_DATA = <br /> (SERVER = DEDICATED)<br /> (SERVICE_NAME = ORCL.localdomain) ) )<br /><br />PDB1 = (DESCRIPTION =<br /> (ADDRESS = (PROTOCOL = TCP)(HOST = 0.0.0.0)(PORT = 1521))<br /> (CONNECT_DATA =<br /> (SERVER = DEDICATED)<br /> (SERVICE_NAME = PDB1.localdomain) ) )<br /></pre> <p>You are now up and running with a Docker container running an Oracle 12.1 Database.</p> 
<p><span style='text-decoration:underline;'>7. Configure SQL Developer (on Client) to access the Oracle Database on Docker</span></p><p>You can now use your client tools to connect to the Oracle Database in the Docker container. Here is a connection setup in SQL Developer.</p> <p><img src="https://lh3.googleusercontent.com/-b1kYD5DlLTU/WPnYktyp04I/AAAAAAAAMJ0/G_Vk0vohK0o/NewImage.png?imgmax=800" alt="NewImage" title="NewImage.png" border="0" width="420" height="250" /></p> <p>Remember that port number mapping I mentioned in step 4 above. See in this SQL Developer connection that the port number is 1527.</p> <br> <p>That's it. How easy is that? You now have a fully configured Oracle 12.1 Enterprise Edition Database to play with, to have fun and to explore all the wonderful features of the Oracle Database.</p> Brendan Tierney tag:blogger.com,1999:blog-4669933501315263808.post-3826288635805757649 Fri Apr 21 2017 06:05:00 GMT-0400 (EDT) BI-dentity Crisis http://redpillanalytics.com/bi-dentity-crisis/ <p><img width="300" height="225" src="https://i0.wp.com/redpillanalytics.com/wp-content/uploads/2017/04/15wYr4UtYXYliA-KGAyvSUA.jpeg?fit=300%2C225" class="attachment-medium size-medium wp-post-image" alt="BI-dentity Crisis - Figuring Out That Data is Fluid" /></p><p id="fe7e" class="graf graf--h3 graf-after--figure graf--title">I had just gotten out of a discovery meeting with a client when she said “I’m sure we’re the worst you’ve seen”. The goal of the meeting was to better understand the process they were using, the data source, and how they were consuming that data. Turns out that data was entered into SharePoint forms, fed into MS SQL Server, dumped into Access, cleaned up with VBA, and accessed with Tableau and Excel. I’ve seen plenty of scenarios like this in my time as a consultant, and in a previous life, I helped build a solution similar to this.
These scenarios are not uncommon; usually the business has a problem that needs to be resolved and they figure out a way to do it with the resources they have on hand. Personally, I have no beef with these types of systems. I even like seeing them because it shows how people can problem solve with minimal resources; ingenuity can be a beautiful thing. Sure, like most people in the BI/DI/analytics/(insert all other related buzzwords here) space, I like to come into a client where everything is clean and segmented, but I suppose one of the fun things about this job is unraveling the ball of yarn and untangling the knots.</p> <p id="0360" class="graf graf--p graf-after--p graf--trailing">Now, back to what one of the clients said to me: “I’m sure we’re the worst you’ve seen…” I think it is troubling to hear that kind of self-deprecation on such a regular basis. Why? I think that is for a few reasons: Business Intelligence and Analytics (BIA) is a spectrum, data is necessary for modern businesses, and most businesses are not like the ones seen in the latest blogs or with the sexiest, newest tech. There are plenty of clients I have worked with that think that since they do not fit into these boxes, they are worst in class. Most need help, which is why I am there in the first place, but even the “best in class” usually need help in more ways than just a technical solution.</p> <h4 id="8b5b" class="graf graf--h4 graf--leading">The BIA Spectrum</h4> <p id="76fa" class="graf graf--p graf-after--h4">I think that part of the issue is based on semantics. To most people, BI and Analytics are defined like this:</p> <ol class="postList"> <li id="da58" class="graf graf--li graf-after--p"><span class="markup--strong markup--li-strong">Business Intelligence:</span> An analysis in which data is viewed post business activity to assess the business via metrics and indicators.</li> <li id="a271" class="graf graf--li graf-after--li"><span class="markup--strong markup--li-strong">Analytics:</span> An analysis which uses past data to make projections as to what could happen.</li> </ol> <p id="b500" class="graf graf--p graf-after--li">The biggest difference here is that BI is backward-looking (into the past), and Analytics is forward-looking (into the future). This is a Boolean point of view, and frankly, unnuanced.</p> <p id="7f66" class="graf graf--p graf-after--p">It also does not take into account the “unmentionable”: Operational Reporting. Yuck, right? Who the hell wants to do <em class="markup--em markup--p-em">that</em>? And that is (usually) the end of the discussion. The unsexiness of Operational Reporting means that it is forever pushed to the side, resulting in belabored sighs from clients claiming that they are “the worst you’ve seen”, just because they still have stuff running with VBA code to make sure the lights stay on and the orders are filled.</p> <p id="546d" class="graf graf--p graf-after--p">I think the duality of the current definition of BIA is wrong.
Here is how I perceive the BIA spectrum:</p> <figure id="8962" class="graf graf--figure graf-after--p"> <img src="https://i2.wp.com/cdn-images-1.medium.com/max/1600/1*N9yDJVNftQtckpxt6G4Caw.png?w=1170&#038;ssl=1" alt="The BIA spectrum" /><figcaption class="imageCaption">In the wild, things are much more fluid than this.</figcaption></figure> <p id="268b" class="graf graf--p graf-after--figure">This captures the entirety of what is happening in the business with reporting data; data that is being used to solve problems. Whether those problems are in the present, past or future is irrelevant, because if we want to have higher quality data, higher quality decisions and higher data accuracy, why should we only care about 2/3rds of the use case for curated data?</p> <h4 id="2bea" class="graf graf--h4 graf-after--p">Why the Modern Business Needs Nuance</h4> <p id="d732" class="graf graf--p graf-after--h4">I think that we can boil down the definitions of those concepts into three simple questions.</p> <p id="96e9" class="graf graf--p graf-after--p"><strong><span class="markup--strong markup--p-strong">Operational Reporting:</span></strong> <em class="markup--em markup--p-em">What is happening?</em></p> <p id="f196" class="graf graf--p graf-after--p"><strong><span class="markup--strong markup--p-strong">Business Intelligence:</span></strong> <em class="markup--em markup--p-em">How are we performing?</em></p> <p id="b734" class="graf graf--p graf-after--p"><span class="markup--strong markup--p-strong"><strong>Analytics:</strong> </span><em class="markup--em markup--p-em">Why do we care?</em></p> <p id="eef8" class="graf graf--p graf-after--p">Here is an example to illustrate the above concept. Let’s pretend that a company called Company A manufactures charging equipment for smartphones, and that they have two product lines: Android and iOS. When Company A sees that shipments from the warehouse are starting to slow because the line workers headed out to the food truck for lunch, that is Operational Reporting. When Company A sees that shipment numbers are down compared to last month and customers are not getting orders by the promised date, that is Business Intelligence. When Company A determines that shipments are down because an increasing number of customers are not buying Android smartphone chargers, that is Analytics.</p> <p id="55fc" class="graf graf--p graf-after--p">You may argue that this is all semantics, and you may be right. But I think that semantics matter. And, I think this example highlights another thing that is not usually discussed: if operational data can be used with curated data sets seen in BI and Analytic settings, the possibility for insight is greater.
With the advent of Internet of Things connectivity, streaming with services like Kafka, and the ability to use nontraditional data types like JSON, integrating this data is a no-brainer.</p> <h4 id="e0f6" class="graf graf--h4 graf-after--p">Is the Future Now?</h4> <p id="8a5a" class="graf graf--p graf-after--h4">Many times I come into organizations and see a mindset of “what we really need [is] a new tool”, as if this new technology will magically fix everything. While technological changes do need to happen, and many organizations need to adapt to current technological standards and methods, the tools themselves do not change the underlying issues (more on that later).</p> <p id="1feb" class="graf graf--p graf-after--p">Technologists love the mantra that the future is now. Better yet, they love to say that the future arrives in waves (meaning that the future arrives at different times in different places). That may be true, but I think that simplistic mindset perpetuates a sense of urgency to be an early adopter of the latest technology. Buying and building a foundation for these concepts (or even BIA) takes considerable time, money and effort. This isn’t like heading down to the Apple store and buying a new iPhone every year. These initiatives are usually million-dollar efforts that require multiple teams across multiple years. For comparison, how long does it take to buy an iPhone? An afternoon? Even in percentage terms, the amount of effort is not comparable between these two tasks. Let’s stop pretending that buying a tool will fix the fundamental issue at hand, because in most cases it will not.</p> <p id="36e4" class="graf graf--p graf-after--p">Another concept that we should take a good, hard look at is the “data lake”. I mentioned above that Internet of Things data can be streamed, captured and analyzed to benefit the business. But just because we can stream it and hold on to it, does that mean it has value? I hear many industry talking heads talk about data lakes, big data, algorithms and the like, but what is the use if there is no use case? Most clients I have worked with don’t understand what data they have or how to use it; is it really feasible to think that these clients will directly benefit from a solution like this? Long term, perhaps they will. Short term, there are better initiatives that can bring higher benefits for lower cost and effort.</p> <p id="2027" class="graf graf--p graf-after--p">A better approach is to focus on how we can use, and most importantly how we think about, data. I dare you to go to HBR (a favorite business publication of mine, so I’m not hating here) and search for the term “analytics”. When I conducted this experiment, I had <span class="markup--strong markup--p-strong"><em class="markup--em markup--p-em">1,521</em></span> results returned. So, clearly, people are talking about this topic. What are they saying?
Here are some of the titles: “<a class="markup--anchor markup--p-anchor" href="https://hbr.org/2016/08/figuring-out-how-it-analytics-and-operations-should-work-together" target="_blank" rel="nofollow">Figuring Out How IT, Analytics, and Operations Should Work Together</a>” (Berkooz), “<a class="markup--anchor markup--p-anchor" href="https://hbr.org/2016/10/how-an-analytics-mindset-changes-marketing-culture" target="_blank" rel="nofollow">How an Analytics Mindset Changes Marketing Culture</a>” (Sweetwood), “<a class="markup--anchor markup--p-anchor" href="https://hbr.org/2016/08/the-reason-so-many-analytics-efforts-fall-short" target="_blank" rel="nofollow">The Reason So Many Analytics Efforts Fall Short</a>” (McShea, Oakley, Mazzei), and… I could keep going with examples. Each of these articles, and many others, makes great points describing what mistakes were made. I think that many of these mistakes continue to be made because businesses refuse to change their thinking about analytics, reporting, BI or whatever you’d like to call it. Einstein’s definition of insanity is apt here: “doing the same thing over and over and expecting different results”. I would be remiss if I placed the blame squarely on businesses; consulting partners also have a role to play in this. Being a technical consultant does not necessarily mean only focusing on a technical solution; it is important to advise on the implications of these solutions and how the business should be thinking and acting.</p> <h4 id="1f86" class="graf graf--h4 graf-after--p">Doctor, It Hurts When I Do This</h4> <p id="6d9b" class="graf graf--p graf-after--h4">These attitudes towards data lead to many situations I see with clients. Many times, I feel as though I am the one who needs to broach the topic, akin to someone who needs to tell a friend that s/he really needs an Altoid after that falafel pita they had for lunch. Because businesses are so focused on getting things done, they rarely have the time for the self-reflection that would lead them to recognize these issues. These are difficult discussions, but would we rather not have them and let people drown in a data quagmire? The items below are the roadblocks that I most commonly see as deterrents from accepting a new mindset around data. Until these fundamental data hurdles are jumped, it will be hard for any organization to overcome an old world view and embrace a new one for a new world.</p> <ol class="postList"> <li id="4abe" class="graf graf--li graf-after--p"><strong><span class="markup--strong markup--li-strong">Data is inaccessible.</span></strong> Many times, the people who need the data do not have ready access to it. This may be because data is difficult to extract (maybe they are accountants that do not have database access). Or maybe it is hoarded (someone has access but only sends limited amounts of information). Perhaps the data is disparate (meaning that it is spread thinly throughout the organization, typically in Excel files).
These issues prevent people from making decisions; they spend many, many hours finding loopholes and workarounds and writing their own “underground” code and databases to compile this data, when they should be making decisions or correcting it in a different system.</li> <li id="342a" class="graf graf--li graf-after--li"><span class="markup--strong markup--li-strong"><strong>Data is indecipherable</strong>.</span> Sometimes data is accessible but means different things to different people. For example, Jimmy may calculate “On Time Order Percentage” as ((Number of On Time Orders / Total Number of Orders) * 100), where Jane may calculate “On Time Order Percentage” as ((Number of On Time Orders / (Total Number of Orders - Number of Cancelled Orders)) * 100). Who is right? In some respects, they both are; in some respects, they are both wrong as well. Because no one can clearly understand what is happening, the data loses its meaning. It always needs a qualifier, and thus its value is decreased because no one fully trusts it. This is also an issue when the data set is complex. If someone needs to understand different codes that represent business processes (100 = order placed, 102 = order packed, 103 = order ready for shipping, etc.), or if the data does not represent the business process, the common language between the end user and the data is destroyed, and not only does the data become useless, it becomes meaningless.</li> <li id="b31e" class="graf graf--li graf-after--li"><span class="markup--strong markup--li-strong"><strong>Data lacks vision</strong>.</span> Many times I see companies that “just want reports on X”. X could be order management, or accounting, or purchase orders. However, rarely do I see companies create and execute a vision for their corporate data. This requires thinking of data within several tracks. Operational reporting keeps the lights on: what do I need to do right now to keep us moving forward? Business intelligence provides the business with goals, key performance indicators and metrics to track performance over time. Analytics answers the hard questions about what is happening in the business and in the general marketplace; these are usually open ended and have a grey area in terms of what the answer is. The lack of vision is a massive detriment to many companies because it means they move disjointedly when it comes to strategy and execution, particularly when it comes to internal resources.</li> </ol> <p id="bf03" class="graf graf--p graf-after--li graf--trailing">For those of you who are in consulting, I am sure you have more points to add to the list. However, the point isn’t about making a list, it is about recognizing the blind spots many companies have in regards to data. It’s hard for any organization to get on board with data investments when it can’t overcome obstacles like the ones above. It is important for all of us to speak out and help transform the landscape of data from a Boolean point of view into a varchar point of view; it may need some error handling, but at least you can get what you want out of it… most of the time.</p> <h4 id="79bb" class="graf graf--h4 graf--leading">An Ending, but not The End</h4> <p id="5435" class="graf graf--p graf-after--h4">All of the above is great food for thought, but how do we implement a plan to combat the mistakes of the past? How can we start a movement that changes how we think about and interact with data? I’ll be following up this blog with strategies and ideas for winning these battles.
Until then, remember that the mindset that we bring to the table when talking about data matters as much as the problem we are trying to solve.</p> <figure id="3bd5" class="graf graf--figure graf-after--p"> <img src="https://i1.wp.com/cdn-images-1.medium.com/max/1600/1*xE1qviqSAFU2a7C7TzmkFA.gif?w=1170&#038;ssl=1" alt="" /> </figure> Phil Goerdt http://redpillanalytics.com/?p=4841 Thu Apr 20 2017 09:48:08 GMT-0400 (EDT)
Oracle Database 12.2 New Feature – PDB Lockdown Profiles https://gavinsoorma.com/2017/04/oracle-database-12-2-new-feature-pdb-lockdown-profiles/ <p>In an earlier post I had mentioned that one of the new features in Oracle Database 12.2 was the ability to set SGA and PGA memory related parameters even at the individual PDB level. This enables us to further limit or define the resources which a particular PDB can use, allowing more efficient management of resources in a multitenant environment.</p> <p>In Oracle 12c Release 2 we can now go further and limit the operations which can be performed within a particular PDB, as well as restrict the features which can be used or enabled &#8211; all at the individual PDB level. We can also limit the network connectivity a PDB can have by enabling or disabling the use of network-related packages such as UTL_SMTP, UTL_HTTP and UTL_TCP at the PDB level.</p> <p>This is done via the new 12.2 feature called <strong>Lockdown Profiles</strong>.</p> <p>We create lockdown profiles via the CREATE LOCKDOWN PROFILE statement while connected to the root CDB, and after the lockdown profile has been created we add the required restrictions or limits which we would like to enforce via the ALTER LOCKDOWN PROFILE statement.</p> <p>To assign the lockdown profile to a particular PDB, we use the <code class="codeph">PDB_LOCKDOWN</code> initialization parameter, which will contain the name of the lockdown profile we created earlier.</p> <p>If we set the PDB_LOCKDOWN parameter at the CDB level, it will apply to all the PDBs in the CDB. We can also set the PDB_LOCKDOWN parameter at the PDB level, so different PDBs can have different PDB_LOCKDOWN values, as we will see in the example below.</p>
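<p>As a taste of the syntax on the network side, here is a minimal sketch run from the root container. It is hedged: it assumes the DISABLE FEATURE clause and the NETWORK_ACCESS feature bundle as documented for 12.2 (the bundle covers the UTL_* network packages mentioned above), and the profile name <code>pdb_net_lock</code> is just an illustrative choice:</p> <pre>
# run as a suitably privileged OS user on the database server
sqlplus / as sysdba &lt;&lt;'SQL'
create lockdown profile pdb_net_lock;
-- NETWORK_ACCESS is the feature bundle for the network packages
alter lockdown profile pdb_net_lock disable feature = ('NETWORK_ACCESS');
alter session set container = PDB2;
alter system set PDB_LOCKDOWN = 'PDB_NET_LOCK';
SQL
</pre>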
<p>Let us have a look at an example of PDB Lockdown Profiles at work.</p> <p>In our CDB, we have two pluggable databases, PDB1 and PDB2. We want to limit some kinds of operations depending on the PDB involved.</p> <p>Our requirements are the following:</p> <ul> <li>We want to ensure that in PDB1 the value for SGA_TARGET cannot be altered &#8211; so even a privileged user cannot allocate additional memory to the PDB. However, if memory is available, then PGA allocation can be altered.</li> <li>PDB1 can be shut down only when connected to the root container, not from within the pluggable database itself</li> <li>The Partitioning feature is not available in PDB2</li> </ul> <p><strong>Create the Lockdown Profiles</strong></p> <pre>
SQL&gt; show con_name

CON_NAME
------------------------------
CDB$ROOT

SQL&gt; create lockdown profile pdb1_profile;

Lockdown Profile created.

SQL&gt; create lockdown profile pdb2_profile;

Lockdown Profile created.
</pre> <p><strong>Alter Lockdown Profile pdb1_profile</strong></p> <pre>
SQL&gt; alter lockdown profile pdb1_profile disable statement =('ALTER SYSTEM') clause=('SET') OPTION = ('SGA_TARGET');

Lockdown Profile altered.

SQL&gt; alter lockdown profile pdb1_profile disable statement =('ALTER PLUGGABLE DATABASE CLOSE IMMEDIATE');

Lockdown Profile altered.
</pre> <p><strong>Alter Lockdown Profile pdb2_profile</strong></p> <pre>
SQL&gt; alter lockdown profile pdb2_profile DISABLE OPTION = ('PARTITIONING');

Lockdown Profile altered.
</pre> <p><strong>Enable the Lockdown Profiles for both PDB1 and PDB2 pluggable databases</strong></p> <pre>
SQL&gt; conn / as sysdba
Connected.

SQL&gt; alter session set container=PDB1;

Session altered.

SQL&gt; alter system set PDB_LOCKDOWN='PDB1_PROFILE';

System altered.

SQL&gt; alter session set container=PDB2;

Session altered.

SQL&gt; alter system set PDB_LOCKDOWN='PDB2_PROFILE';

System altered.
</pre> <p><strong>Connect to PDB1 and try to increase the values of SGA_TARGET and PGA_AGGREGATE_TARGET</strong></p> <p>Note that we cannot alter SGA_TARGET because it is prevented by the lockdown profile in place, but we can alter PGA_AGGREGATE_TARGET because the lockdown profile clause only applies to the ALTER SYSTEM SET SGA_TARGET command.</p> <pre>
SQL&gt; alter session set container=PDB1;

Session altered.

SQL&gt; alter system set sga_target=800m;
alter system set sga_target=800m
*
ERROR at line 1:
ORA-01031: insufficient privileges

SQL&gt; alter system set pga_aggregate_target=200m;

System altered.
</pre>
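<p>Before moving on to PDB2, it can be useful to confirm which rules each profile now holds. This is a hedged sketch: it assumes the DBA_LOCKDOWN_PROFILES dictionary view documented for 12.2, and the column list and formatting shown are illustrative:</p> <pre>
# from the root container, as a privileged OS user
sqlplus / as sysdba &lt;&lt;'SQL'
col profile_name format a15
col rule_type    format a10
col rule         format a30
col status       format a8
-- one row per rule recorded against each lockdown profile
select profile_name, rule_type, rule, status
from   dba_lockdown_profiles
order  by profile_name, rule_type;
SQL
</pre>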
<p><strong>Connect to PDB2 and try to create a partitioned table</strong></p> <pre>
SQL&gt; CREATE TABLE testme
     (id NUMBER, name VARCHAR2 (60))
     PARTITION BY HASH (id)
     PARTITIONS 4 ;
CREATE TABLE testme
*
ERROR at line 1:
ORA-00439: feature not enabled: Partitioning
</pre> <p><strong>Connect to PDB1 and try to shut down the pluggable database</strong></p> <p>Note that while we cannot shut down PDB1, we are able to shut down PDB2.</p> <pre>
SQL&gt; alter session set container=pdb1;

Session altered.

SQL&gt; ALTER PLUGGABLE DATABASE CLOSE IMMEDIATE;
ALTER PLUGGABLE DATABASE CLOSE IMMEDIATE
*
ERROR at line 1:
ORA-01031: insufficient privileges

SQL&gt; alter session set container=pdb2;

Session altered.

SQL&gt; ALTER PLUGGABLE DATABASE CLOSE IMMEDIATE;

Pluggable database altered.
</pre> Gavin Soorma https://gavinsoorma.com/?p=7514 Thu Apr 20 2017 03:57:18 GMT-0400 (EDT) Application Express Kscope17 Track Highlights - Jorge Rimblas http://www.odtug.com/p/bl/et/blogaid=710&source=1 Jorge Rimblas, the APEX track lead for ODTUG Kscope17, shares his recommended “don’t miss sessions” at ODTUG Kscope17: ODTUG http://www.odtug.com/p/bl/et/blogaid=710&source=1 Wed Apr 19 2017 15:21:19 GMT-0400 (EDT) SQL-on-Hadoop: Impala vs Drill http://www.rittmanmead.com/blog/2017/04/sql-on-hadoop-impala-vs-drill/ <img src="http://www.rittmanmead.com/blog/content/images/2017/04/ImpalaAcc-5.gif" alt="SQL-on-Hadoop: Impala vs Drill"><p>I recently wrote a blog post about <a href="https://www.rittmanmead.com/blog/2017/04/metadata-modeling-in-the-database-with-analytic-views/">Oracle's Analytic Views</a> and how those can be used in order to provide a simple SQL interface to end users with data stored in a relational database. In today's post I'm expanding my horizons a little by looking at how to effectively query data in Hadoop using SQL. The SQL-on-Hadoop interface is key for many organizations - it allows querying the Big Data world using existing tools (like OBIEE, Tableau, DVD) and skills (SQL).</p> <p>Analytic Views, together with <a href="https://www.rittmanmead.com/blog/2014/07/taking-a-look-at-the-new-oracle-big-data-sql/">Oracle's Big Data SQL</a>, provide what we are looking for and have the benefit of unifying the data dictionary and the SQL dialect in use. It should be noted that Oracle Big Data SQL is licensed separately on top of the database and is available for Exadata, SuperCluster, and 12c Linux Intel Oracle Database machines only.</p> <p>Nowadays there is a multitude of open-source projects covering the SQL-on-Hadoop problem. In this post I'll look in detail at two of the most relevant: Cloudera Impala and Apache Drill. We'll see details of each technology, define the similarities, and spot the differences. Finally we'll show that Drill is most suited for exploration with tools like Oracle Data Visualization or Tableau while Impala fits in the explanation area with tools like OBIEE.</p> <p>As we'll see later, both tools are inspired by <a href="https://research.google.com/pubs/pub36632.html">Dremel</a>, a paper published by Google in 2010 that defines a scalable, interactive ad-hoc query system for the analysis of read-only nested data, and that is the base of Google's BigQuery. Dremel defines two aspects of big data analytics:</p> <ul> <li>A columnar storage format representation for nested data</li> <li>A query engine</li> </ul> <p>The first point inspired Apache Parquet, the columnar storage format available in Hadoop.
The second point provides the basis for both Impala and Drill.</p> <h1 id="clouderaimpala">Cloudera Impala</h1> <p>We started blogging about <a href="https://www.rittmanmead.com/blog/2015/05/obiee-11-1-1-9-now-supports-hiveserver2-and-impala-datasources/">Impala</a> a while ago, as soon as it was officially supported by OBIEE, testing it for reporting on top of big data Hadoop platforms. However, we never went into the details of the tool, which is the purpose of the current post.</p> <p><a href="https://www.cloudera.com/products/open-source/apache-hadoop/impala.html">Impala</a> is an open source project inspired by <a href="https://research.google.com/pubs/pub36632.html">Google's Dremel</a> and one of the massively parallel processing (MPP) SQL engines running natively on Hadoop. And as per the <a href="https://www.cloudera.com/documentation/enterprise/5-5-x/topics/impala.html">Cloudera definition</a> it is a tool that:</p> <blockquote> <p>provides high-performance, low-latency SQL queries on data stored in popular Apache Hadoop file formats. </p> </blockquote> <p>Two important bits to notice:</p> <ul> <li><strong>High performance and low latency SQL queries</strong>: Impala was created to overcome the slowness of Hive, which relied on MapReduce jobs to execute the queries. Impala uses its own set of <a href="https://www.cloudera.com/documentation/enterprise/5-5-x/topics/impala_components.html#intro_components">daemons</a> running on each of the datanodes, saving time by: <ul><li>Avoiding the MapReduce job startup latency</li> <li>Compiling the query code for optimal performance</li> <li>Streaming intermediate results in-memory, where MapReduce always writes to disk</li> <li>Starting the aggregation as soon as the first fragment starts returning results</li> <li>Caching metadata definitions</li> <li>Gathering tables and columns statistics</li></ul></li> <li><strong>Data stored in popular Apache Hadoop file formats</strong>: Impala uses the Hive metastore database. Databases and tables are shared between both components. The <a href="https://www.cloudera.com/documentation/enterprise/5-5-x/topics/impala_file_formats.html">list of supported file formats</a> includes Parquet, Avro, simple Text and SequenceFile amongst others. Choosing the right file format and compression codec can have an enormous impact on performance. Impala also supports, since CDH 5.8 / Impala 2.6, the Amazon S3 filesystem for both writing and reading operations.</li> </ul> <p>One of the performance improvements is related to "Streaming intermediate results": Impala works in memory as much as possible, writing to disk only if the data size is too big to fit in memory; as we'll see later this is called optimistic and pipelined query execution. This has immediate benefits compared to standard MapReduce jobs, which for reliability reasons always write intermediate results to disk. <br> As per this Cloudera <a href="http://blog.cloudera.com/blog/2012/10/cloudera-impala-real-time-queries-in-apache-hadoop-for-real/">blog</a>, the usage of Impala in combination with the Parquet data format is able to achieve the performance benefits explained in the Dremel paper.</p> <h2 id="impalaqueryprocess">Impala Query Process</h2> <p>Impala runs a daemon, called <code>impalad</code>, on each <a href="https://hadoop.apache.org/docs/r1.2.1/hdfs_design.html#NameNode+and+DataNodes">Datanode</a> (a node storing data in the Hadoop cluster).
The query can be submitted to any daemon in the cluster, which will act as <strong>coordinator node</strong> for the query. Impala daemons are always connected to the <strong>statestore</strong>, a process that keeps a central inventory of all available daemons and their health, and pushes that information back to all the daemons. A third component, called the <strong>catalog service</strong>, checks for metadata changes driven by Impala SQL in order to invalidate related cache entries. Metadata are cached in Impala for performance reasons: accessing metadata from the cache is much faster than checking against the Hive metastore. The catalog service process is in charge of keeping Impala's metadata cache in sync with the Hive metastore. </p> <p>Once the query is received, the coordinator verifies that the query is valid against the Hive metastore; information about data location is then retrieved from the <a href="https://hadoop.apache.org/docs/r1.2.1/hdfs_design.html#NameNode+and+DataNodes">Namenode</a> (the node in charge of storing the list of blocks and their location in the datanodes), and the coordinator fragments the query and distributes the fragments to other <code>impalad</code> daemons to execute the query. All the daemons read the needed data blocks, process the query, and stream partial results to the coordinator (avoiding the write to disk), which collects all the results and delivers them back to the requester. The result is returned as soon as it's available: certain SQL operations like aggregations or order by require all the input to be available before Impala can return the end result, while others, like a select of pre-existing columns without an order by, can be returned with only partial results.</p> <p><img width="600px" alt="SQL-on-Hadoop: Impala vs Drill" src="http://www.rittmanmead.com/blog/content/images/2017/04/ImpalaAcc-4.gif"> </p> <h1 id="apachedrill">Apache Drill</h1> <p>Defining <a href="https://drill.apache.org">Apache Drill</a> as SQL-on-Hadoop is limiting: also inspired by <a href="https://research.google.com/pubs/pub36632.html">Google's Dremel</a>, it is a distributed, <strong>datasource agnostic</strong> query engine. The datasource agnostic part is very relevant: Drill is not closely coupled with Hadoop, in fact it can query a variety of sources like MongoDB, Azure Blob Storage, or Google Cloud Storage amongst others. </p> <p>One of the most important features is that <strong>data can be queried schema-free</strong>: there is no need to define the data structure or schema upfront - users can simply point the query to a file directory, MongoDB collection or Amazon S3 bucket and Drill will take care of the rest. For more details, check our <a href="https://www.rittmanmead.com/blog/2016/08/an-introduction-to-apache-drill/">overview</a> of the tool. One of Apache Drill's objectives is cutting down the data modeling and transformation effort, providing zero-day analysis as explained in this <a href="https://www.youtube.com/watch?v=HITzj3ihSUk">MapR video</a>. <br> <img width="500px" alt="SQL-on-Hadoop: Impala vs Drill" src="http://www.rittmanmead.com/blog/content/images/2017/04/Drill-Self-Service.png"></p> <p>Drill is designed for high performance on large datasets, with the following core components:</p> <ul> <li><strong>Distributed engine</strong>: Drill processes, called Drillbits, can be installed in many nodes and are the execution engine of the query. Nodes can be added/reduced manually to adjust performance.
Queries can be sent to any Drillbit in the cluster, which will act as Foreman for the query.</li> <li><strong>Columnar execution</strong>: Drill is optimized for columnar storage (e.g. Parquet) and execution using the hierarchical and columnar in-memory data model.</li> <li><strong>Vectorization</strong>: Drill takes advantage of modern CPU design, operating on record batches rather than iterating over single values.</li> <li><strong>Runtime compilation</strong>: Compiled code is faster than interpreted code and is generated ad-hoc for each query.</li> <li><strong>Optimistic and pipelined query execution</strong>: Drill assumes that none of the processes will fail and thus does all the pipeline operations in memory rather than writing to disk, spilling to disk only when memory isn't sufficient. </li> </ul> <h2 id="drillqueryprocess">Drill Query Process</h2> <p>Like Impala's <code>impalad</code>, Drill's main component is the <strong>Drillbit</strong>: a process running on each active Drill node that is capable of coordinating, planning, executing and distributing queries. Installing a Drillbit on all of Hadoop's data nodes is not compulsory, but doing so gives Drill the ability to achieve data locality: executing the queries where the data resides, without the need to move it over the network. </p> <p>When a query is submitted against Drill, a client/application sends a SQL statement to a Drillbit in the cluster (any Drillbit can be chosen), which will act as <strong>Foreman</strong> (coordinator in Impala terminology), parsing the SQL and converting it into a logical plan composed of operators. The next step is the <strong>cost-based optimizer</strong> which, based on optimizations like rule/cost based, data locality and storage engine options, rearranges operations to generate the optimal physical plan. The Foreman then divides the physical plan into phases, called <strong>fragments</strong>, which are organised in a tree and executed in parallel against the data sources. The results are then sent back to the client/application. The following image taken from <a href="https://drill.apache.org/docs/drill-query-execution/">drill.apache.org</a> explains the full process:</p> <p><img src="http://www.rittmanmead.com/blog/content/images/2017/04/Drill-Execution-Plan.png" alt="SQL-on-Hadoop: Impala vs Drill"></p> <h1 id="similaritiesanddifferences">Similarities and Differences</h1> <p>As we saw above, Drill and Impala have a similar structure - both take advantage of always-on daemons (faster compared to the start of a MapReduce job) and assume an optimistic query execution, passing results in cache. The code compilation and the distributed engine are also common to both, which are optimized for columnar storage types like Parquet.</p> <p>There are, however, several differences. Impala works only on top of the Hive metastore while Drill supports a larger variety of data sources and can link them together on the fly in the same query. For example, implicit schema-defined files like JSON and XML, which are not supported natively by Impala, can be read <a href="https://www.rittmanmead.com/blog/2016/11/using-sql-to-query-json-files-with-apache-drill/">immediately by Drill</a>. <br> Drill usually doesn't require a metadata definition done upfront, while for Impala a <em>view</em> or <em>external table</em> has to be declared before querying. Following this point, there is no concept of a central and persistent metastore, and there is no metadata repository to manage just for Drill.</p>
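<p>To make the schema-free point concrete, here is a small command-line sketch of the difference. It is hedged: the file path, table name and HDFS location are hypothetical, Drill is run in embedded mode from its install directory, and the Impala DDL assumes the data already sits in HDFS as Parquet:</p> <pre>
# Drill: no metadata definition needed -- start the embedded shell
# and query a raw JSON file directly (dfs is the file system plugin)
bin/drill-embedded
#   then, at the Drill prompt:
#   SELECT t.id, t.status FROM dfs.`/tmp/orders.json` t LIMIT 5;

# Impala: a table must exist in the Hive metastore before querying
impala-shell -q "CREATE EXTERNAL TABLE orders (id INT, status STRING) STORED AS PARQUET LOCATION '/data/orders'"
impala-shell -q "SELECT id, status FROM orders LIMIT 5"
</pre>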
<p>In OBIEE's world, both <a href="https://www.rittmanmead.com/blog/2015/05/connecting-obiee-11-1-1-9-to-hive-hbase-and-impala-tables-for-a-dw-offloading-project/">Impala</a> and <a href="https://www.rittmanmead.com/blog/2016/08/using-apache-drill-with-obiee-12c/">Drill</a> are supported data sources. The same applies to Data Visualization Desktop. <br> <img src="http://www.rittmanmead.com/blog/content/images/2017/04/DVD-Impala-Drill.png" alt="SQL-on-Hadoop: Impala vs Drill"></p> <p>The aim of this article isn't a performance comparison, since performance depends on a huge number of factors including data types, file format, configurations, and query types. A comparison dating back to 2015 can be found <a href="http://allegro.tech/2015/06/fast-data-hackathon.html">here</a>. Please be aware that there are newer versions of the tools since this comparison, which bring a lot of changes and improvements for both projects in terms of performance.</p> <h1 id="conclusion">Conclusion</h1> <p>Impala and Drill share a similar structure - both inspired by Google's Dremel - relying on always-active daemons deployed on cluster nodes to provide the best query performance on top of Big Data data structures. So which one to choose, and when? <br> As described, the capability of Apache Drill to query a raw data source without requiring an upfront metadata definition makes the tool perfect for insight discovery on top of raw data. The capacity to join data coming from one or more <a href="https://drill.apache.org/docs/storage-plugin-registration/">storage plugins</a> in a single query makes the mash-up of disparate data sources easy and immediate. Data science and prototyping before the design of a reporting schema are perfect use cases for Drill. However, as part of the discovery phase, a metadata definition layer is usually added on top of the data sources. This makes Impala a good candidate for reporting queries. <br> Summarizing: if all the data points are already modeled in the Hive metastore, then Impala is your perfect choice. If instead you need a mashup with external sources, or need to work directly with raw data formats (e.g. JSON), then Drill's auto-exploration and openness capabilities are what you're looking for. <br> Even though both tools are fully compatible with Oracle BIEE and Data Visualization (DV), due to Drill's data exploration nature it could be considered more in line with DV use cases, while Impala is more suitable for standard reporting like OBIEE. The decision on tooling highly depends on the specific use case - source data types, file formats and configurations have a deep impact on the agility of the business analytics process and query performance.</p> <p>If you want to know more about Apache Drill, Impala and the use cases we have experienced, don't hesitate to <a href="mailto:info+ftdi@rittmanmead.com">contact us</a>!</p> Francesco Tisiot b5470339-7405-45f9-ae30-6616d95f4548 Wed Apr 19 2017 11:01:21 GMT-0400 (EDT) Tunneling Through the Clouds https://medium.com/red-pill-analytics/tunneling-through-the-clouds-10a2a5998a62?source=rss----abcc62a8d63e---4 <figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*eaImuGcaZVptXt2pprj1cw.jpeg" /></figure><h4>Oracle Cloud and SSH Tunneling</h4><p>That title sounds a little weird; tunnels go through the ground and clouds are those fluffy things that float in the sky. However, when we are talking about Oracle Cloud and SSH tunneling, things make a lot more sense.
Although this blog is specific to the Oracle Cloud Database, tunneling is a generic technique and will work with other cloud providers and your own on-premises systems.</p><h4>Before the how, a little of the why.</h4><p>Cloud resources are plugged into the internet — that is, our connection to the cloud database is going to be to the public IP address of the compute node of the database. Exposing the DB listener to the whole internet is probably a bad idea; even with strong passwords in the database, an exposed listener is a sort of ‘have a go at hacking me’ flag. To minimise such risks, the Oracle Cloud Compute Node security rules default to no access from the public internet to the listener. If our client has a known public IP address we could set up a rule to allow just that IP address to access the listener; however, this is probably not sustainable in the long term. Many Internet Service Providers (ISPs) present multiple pools of addresses, and this hopping about of a user’s IP address causes problems: we either have to keep editing rules to accommodate changing IP addresses, or leave over-generous masks in place that allow whole blocks of addresses to pass. Add to this the reluctance of some corporate networks to allow outbound SQLNet traffic through their routers, and we see that it is probably best not to access the Cloud database listener over the internet.</p><p>If only there was a way to provide a secure, encrypted connection over the internet to the database that does not use the listener. There is! Step up tunneling. Here we create a secure SSH connection to the Oracle Cloud Compute Node and then create a port redirect on our local computer so that any traffic to that port is translated to an IP address and port on the remote network.</p><h4>Setting it up</h4><p>Before we create an Oracle Cloud Database we have to generate a public / private key pair and upload the public key to be used as part of the creation process. The ‘master’ private key is precious, as it allows you to create an SSH connection to the DB and perform highly privileged operations in the underlying operating system.</p><p>Obviously, we DO NOT WANT TO USE THIS KEY for our secure tunnel to the database listener; instead we should create a new private / public key pair and a new Linux user specifically for port forwarding. In my examples I am using <strong><em>tunnel_user</em></strong> as the Linux user and <strong><em>cloud_host</em></strong> as the address of the Oracle Cloud Database Public IP Address.</p><p>Oracle has <a href="http://www.oracle.com/webfolder/technetwork/tutorials/obe/cloud/compute-iaas/creating_an_ssh_enabled_user/creating_an_ssh_enabled_user.html">posted</a> a tutorial on Oracle Cloud Linux user creation; we can adapt that to suit our needs. The main difference is that we are not setting up a new administrator user, so we can omit the final stages of the tutorial. In outline the steps are:</p><ol><li>Create a new private / public key pair (see the sketch after this list). My preference is to use ssh-keygen on my MacBook to make an RSA 2048-bit key pair, but other tools can do this. In this blog I named the key pair <em>tunnel_user</em>. On my MacBook the key pair is created in my user’s .ssh directory or, using conventional Nix-like syntax, ~/.ssh (by the way, the tilde sign means ‘home of’: ~ = my home, ~oracle = the oracle user’s home)</li><li>Start an ssh terminal session to the Oracle Cloud Compute Node. Using the ‘master’ private key (the one paired with the public key used to create the database) connect as the opc user.
<br><em>ssh -i ~/.ssh/MasterKey opc@</em><strong><em>cloud_host</em></strong></li><li>Escalate privileges (<em>sudo su</em>)</li><li>Create a new Linux user<br><em>useradd tunnel_user</em> <br>This user will not need a password as we will only be using the ssh key to access the Cloud Compute Node.</li><li>Create a <em>.ssh</em> directory in the new user’s home directory<br><em>mkdir ~tunnel_user/.ssh</em></li><li>Copy the public key you created for this user into your paste buffer and add it to the authorized_keys file in the .ssh directory (the Oracle tutorial uses <em>echo [my public key] &gt; ~[tunnel user name]/.ssh/authorized_keys </em>to do this, which might overwrite an existing file, so <em>&gt;&gt;</em> is probably better to use than &gt;).</li><li>Using your preferred text editor, add the new user to the <em>AllowUsers</em> line of <em>/etc/ssh/sshd_config</em>. Search for the line starting with <em>AllowUsers </em>and edit it, for example<br><em>AllowUsers opc oracle </em>becomes<em> AllowUsers opc oracle </em><strong><em>tunnel_user</em></strong></li><li>Change the ownership of the key file to the new user<br><em>chown -R tunnel_user:tunnel_user ~tunnel_user/.ssh</em></li><li>We have now finished with the Oracle Tutorial steps, so <em>exit</em> from root and <em>exit</em> from opc.</li><li>Log in as the new user to verify that the new ssh connection works, and then log out — this step is not strictly necessary but it makes debugging simpler at this time<br><em>ssh -i ~/.ssh/tunnel_user tunnel_user@cloud_host<br>exit</em></li><li>Log in as opc and <em>sudo</em> to root.</li><li>Modify the new user to use the /sbin/nologin shell<br><em>usermod -s /sbin/nologin tunnel_user</em></li><li>Exit from root and opc, then try connecting to the tunnel_user through ssh.<br><em>ssh -i ~/.ssh/tunnel_user tunnel_user@cloud_host<br></em>You should be politely refused.</li></ol>
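<p>As a sketch of step 1 (assuming OpenSSH's ssh-keygen; the comment string is just a label):</p><pre># generate a dedicated RSA 2048-bit key pair for the tunnel user; this writes
# ~/.ssh/tunnel_user (private key) and ~/.ssh/tunnel_user.pub (public key)
ssh-keygen -t rsa -b 2048 -f ~/.ssh/tunnel_user -C "port forwarding only"</pre>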
<p>The Linux compute node is now set for tunnelling with port forwarding but cannot be used to create a Linux session.</p><p>As I mentioned earlier, port forwarding is where <strong><em>ALL</em></strong> the network traffic for a specific port is redirected down the ssh tunnel to the remote server, where it is passed to an IP address and port visible to the remote server. The target database does not have to be on the machine we have tunnelled to; it just needs to be network accessible from the remote host.</p><p>In my use case the database is running on the compute node and uses the default port (1521). My target database connection is 127.0.0.1:1521. The local port I redirect can be any unused user port. Some people use 1521 as the local port, but that is only suitable if you do not need to connect to any Oracle databases on port 1521 on your local network. Remember too, that if you tunnel to multiple destinations each tunnel will need its own local port number.</p><p>Creating the tunnel is relatively simple. If you are using Oracle SQL Developer 4.1 (or later) you can even do that in the GUI. As long as the connection is open in SQL Developer, the tunnel is also available to any of your applications that wish to connect to the database — this includes R Studio, SQLDesktopJ, the OBIEE Admin tool and ODI Studio; in fact, any way we can connect to the database using an IP address and port number. As I often need to use SQL Developer for looking at the database this is very convenient for me. You can find simple instructions to set this up <a href="https://blogs.oracle.com/dbaas/entry/connecting_to_a_database_cloud">here</a>.</p><p>I would make a couple of changes to that method. Firstly, do not use your database creation key (MasterKey in my example above); instead use the one created specifically for tunnel_user. The second change is that we should manually specify the port number we want to use in the redirect; leaving it set as automatically assigned is fine if we are only going to use SQL Developer, however we do need to know the local port number if we are going to use the tunnel to connect to the database from other local clients.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/339/1*E9IH1GOlkQ0DwXmIO7isPA.png" /></figure><p>If you want something a little more light-weight, we can use a terminal session to host the tunnel — PuTTY works well if you are into Windows, but as an Apple fan-boy I would use Apple Terminal.</p><p>In PuTTY we create a new session to connect to the Oracle Cloud compute node, add our authentication details on the SSH Authentication tab, add the tunnel details on the SSH Tunnels tab, and finally go back to the Session tab and save it. Open the session and leave it running. As always with PuTTY, you could instead use a command line connection string: the syntax is very like that for ssh given below.</p><p>In a terminal application on Linux or a Mac things are even easier. We just invoke SSH to use our key file and create a tunnel using a command like:</p><p><em>ssh -i ~/.ssh/tunnel_user -L 1555:127.0.0.1:1521 tunnel_user@cloud_host -N</em></p><p>The -N at the end of the command is important, as this tells ssh not to establish a command shell connection.</p><p>Remember, the tunnel is a connection to the remote server, not to the database; we will still need to use JDBC or whatever other protocol (OCI, ODBC etc.) to create a database session, so we still need a valid Oracle user name and a password.</p>
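<p>For instance (a sketch: the user name and service name are placeholders), once the tunnel is up, any client simply points at the forwarded local port:</p><pre># hold the tunnel open in one terminal (as above)
ssh -i ~/.ssh/tunnel_user -L 1555:127.0.0.1:1521 tunnel_user@cloud_host -N

# in another terminal, connect through the forwarded port;
# scott and PDB1.mydomain are a placeholder user and service name
sqlplus scott@//127.0.0.1:1555/PDB1.mydomain

# the equivalent JDBC thin URL for tools such as ODI Studio:
# jdbc:oracle:thin:@//127.0.0.1:1555/PDB1.mydomain</pre>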
<p>The only things we change in our connection are the redirected local port and the address we set up in the tunnel command (in my examples this is port 1555 and IP address 127.0.0.1).</p><p>Happy tunneling folks!</p><hr><p><a href="https://medium.com/red-pill-analytics/tunneling-through-the-clouds-10a2a5998a62">Tunneling Through the Clouds</a> was originally published in <a href="https://medium.com/red-pill-analytics">Red Pill Analytics</a> on Medium.</p> Pete Scott https://medium.com/p/10a2a5998a62 Wed Apr 19 2017 10:20:47 GMT-0400 (EDT)
Data Visualization - Smart Insights http://beyond-just-data.blogspot.com/2017/04/data-visualization-smart-insights.html <br /><span style="font-family: &quot;arial&quot; , &quot;helvetica&quot; , sans-serif;">Smart Insights have been available in Oracle Data Visualization Desktop for a few releases and are now available in Oracle Analytics Cloud (OAC).<br /><br />So what are Smart Insights?<br />• They provide an at-a-glance assessment of your data<br />• They allow analysts to quickly understand the information the data contains<br />• You can easily see how measures are distributed across various attributes/dimensions<br />• They provide a starting point for further data analysis<br /><br />In looking at tabular data it is difficult to see patterns and the distribution of measures across dimensions.</span><br /><div class="separator" style="clear: both; text-align: center;"><a href="https://4.bp.blogspot.com/-ixSs85eXpBo/WPZ8SVwTU4I/AAAAAAAAK4Y/lm10UJa-hWYnPV71hK8ikw0B3JSm1njzwCEw/s1600/1-TabularData.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="82" src="https://4.bp.blogspot.com/-ixSs85eXpBo/WPZ8SVwTU4I/AAAAAAAAK4Y/lm10UJa-hWYnPV71hK8ikw0B3JSm1njzwCEw/s400/1-TabularData.png" width="400" /></a></div><span style="font-family: &quot;arial&quot; , &quot;helvetica&quot; , sans-serif;">When we look at the same data via Smart Insights we can see how the data is distributed across attributes/dimensions.</span><br /><div class="separator" style="clear: both; text-align: center;"><a href="https://2.bp.blogspot.com/-8f55pr1s6Iw/WPZ8SYo2eRI/AAAAAAAAK4U/8xYBpJ39fdERQOPaW-AEF5b17ZuGGh59gCEw/s1600/2-Profile.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="251" src="https://2.bp.blogspot.com/-8f55pr1s6Iw/WPZ8SYo2eRI/AAAAAAAAK4U/8xYBpJ39fdERQOPaW-AEF5b17ZuGGh59gCEw/s400/2-Profile.png" width="400" /></a></div><span style="font-family: &quot;arial&quot; , &quot;helvetica&quot; , sans-serif;">So how does one access Smart Insights?</span><br /><span style="font-family: &quot;arial&quot; , &quot;helvetica&quot; , sans-serif;">First add a data
source to a project.&nbsp; Then switch to Prepare Mode. Finally, switch the view from Data to Visual.</span><br /><div class="separator" style="clear: both; text-align: center;"><a href="https://4.bp.blogspot.com/-xUUmo9_jKIg/WPaEdkK07zI/AAAAAAAAK48/DqAHgYDc4d0_3HkMKSoKo_QSMLHt6s7ZgCLcB/s1600/3-Prepare.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="58" src="https://4.bp.blogspot.com/-xUUmo9_jKIg/WPaEdkK07zI/AAAAAAAAK48/DqAHgYDc4d0_3HkMKSoKo_QSMLHt6s7ZgCLcB/s400/3-Prepare.png" width="400" /></a></div><span style="font-family: &quot;arial&quot; , &quot;helvetica&quot; , sans-serif;">The Visual Mode creates a series of simple data visualizations.&nbsp; The initial views show the number of rows of data distributed across all the attributes/dimensions in the data set.</span><br /><div class="separator" style="clear: both; text-align: center;"><a href="https://4.bp.blogspot.com/-c9DyO8mTnWo/WPZ8SloB9vI/AAAAAAAAK4g/puekSu5LsP4m4R4jO-ZLd00ihsqyu-LoQCEw/s1600/4-ChangeView.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="195" src="https://4.bp.blogspot.com/-c9DyO8mTnWo/WPZ8SloB9vI/AAAAAAAAK4g/puekSu5LsP4m4R4jO-ZLd00ihsqyu-LoQCEw/s400/4-ChangeView.png" width="400" /></a></div><span style="font-family: &quot;arial&quot; , &quot;helvetica&quot; , sans-serif;">The view can easily be changed from row count to other measures/facts in the data set by changing the Summarize By option.</span><br /><div class="separator" style="clear: both; text-align: center;"><a href="https://4.bp.blogspot.com/-9oZs3yWgEpE/WPZ8SsIbUfI/AAAAAAAAK4k/XFYczDRyVUElxZd7uSQsS-nqu7QiWGYhwCEw/s1600/5-SummarizeBy.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="182" src="https://4.bp.blogspot.com/-9oZs3yWgEpE/WPZ8SsIbUfI/AAAAAAAAK4k/XFYczDRyVUElxZd7uSQsS-nqu7QiWGYhwCEw/s400/5-SummarizeBy.png" width="400" /></a></div><span style="font-family: &quot;arial&quot; , &quot;helvetica&quot; , sans-serif;">Additional properties that help with analyzing the data include flagging whether to show null rows in an attribute/dimension.&nbsp; The Include Others option is useful when an attribute has many different values, since the number of values on an axis (also referred to as binning) is limited.</span><br /><div class="separator" style="clear: both; text-align: center;"><a href="https://2.bp.blogspot.com/-56ujDWHdHCE/WPZ8SyyRJCI/AAAAAAAAK4o/wUOSwQua52MSF8AOfzBWMIDIdBclXj9mwCEw/s1600/6-ChangeView.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="206" src="https://2.bp.blogspot.com/-56ujDWHdHCE/WPZ8SyyRJCI/AAAAAAAAK4o/wUOSwQua52MSF8AOfzBWMIDIdBclXj9mwCEw/s400/6-ChangeView.png" width="400" /></a></div><br /><br /><span style="font-family: &quot;arial&quot; , &quot;helvetica&quot; , sans-serif;"><span style="font-size: small;">The binning of values on the X and Y axes follows these rules:<br />• The number of bars depends on the data distribution<br />• Normally 10 bars are shown and all other data is displayed in a bar called Other<br />• If 20% or more of the data falls into Other, the system will break that data into the number of bars needed</span></span>
href="https://4.bp.blogspot.com/-0UVcc3qXE-Y/WPZ8S_Thf2I/AAAAAAAAK4w/K2cRniTYHuks5EWTJBpVsXlNhuet2SAZACEw/s1600/8-Binning.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="142" src="https://4.bp.blogspot.com/-0UVcc3qXE-Y/WPZ8S_Thf2I/AAAAAAAAK4w/K2cRniTYHuks5EWTJBpVsXlNhuet2SAZACEw/s400/8-Binning.png" width="400" /></a></div><span style="font-family: &quot;arial&quot; , &quot;helvetica&quot; , sans-serif;"><span style="font-size: small;"><br /><br />The style of the visualizations align to the type of data in the attribute/dimension.<br />• Non-numeric or Text - Horizontal bar chart<br />• Date and Time - Line chart<br />• Numeric - Vertical bar chart</span></span><br /><div class="separator" style="clear: both; text-align: center;"><a href="https://4.bp.blogspot.com/-ui7I4Xzv39Y/WPZ8S_F2bEI/AAAAAAAAK4s/iEavtzpEv7wRY_qks_y_UklzFjQgc9iYwCEw/s1600/7-InsightVisualizationTypes.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="96" src="https://4.bp.blogspot.com/-ui7I4Xzv39Y/WPZ8S_F2bEI/AAAAAAAAK4s/iEavtzpEv7wRY_qks_y_UklzFjQgc9iYwCEw/s400/7-InsightVisualizationTypes.png" width="400" /></a></div><span style="font-family: &quot;arial&quot; , &quot;helvetica&quot; , sans-serif;"><span style="font-size: small;"><br /><br />From the samples shown throughout this post one can see how beneficial they are to easily understanding the data, finding initial patterns and providing a starting point for further data analysis.</span></span><br /><span style="font-family: &quot;arial&quot; , &quot;helvetica&quot; , sans-serif;"> </span> Wayne D. Van Sluys tag:blogger.com,1999:blog-7768091516190336427.post-1447226091989786174 Tue Apr 18 2017 17:48:00 GMT-0400 (EDT) ODM Model View Details Views in Oracle 12.2 http://www.oralytics.com/2017/04/odm-model-view-details-views-in-oracle.html <p>A new feature for Oracle Data Mining in Oracle 12.2 is the new Model Details views.</p> <p>In Oracle 11.2.0.3 and up to Oracle 12.1 you needed to use a range of PL/SQL functions (in DBMS_DATA_MINING package) to inspect the details of a data mining/machine learning model using SQL. </p> <p>Check out these previous blog posts for some examples of how to use and extract model details in Oracle 12.1 and earlier versions of the database</p> <p><a href="http://www.oralytics.com/search?updated-max=2012-12-13T23:41:00-08:00&max-results=3&reverse-paginate=true">Association Rules in ODM-Part 3</a></p> <p><a href="http://www.oralytics.com/2012/10/extracting-rules-from-odm-decision-tree.html">Extracting the rules from an ODM Decision Tree model</a></p> <p><a href="http://www.oralytics.com/2016/06/cluster-details-with-oracle-data-mining.html">Cluster Details</a></p> <p><a href="http://www.oralytics.com/2015/04/viewing-models-details-for-decision.html">Viewing Decision Tree Details</a> </p> <p>Instead of these functions there are now a lot of DB views available to inspect the details of a model. The following table summarises these various DB Views. Check out the DB views I've listed after the table, as these views might some some of the ones you might end up using most often.</p> <p>I've now chance of remembering all of these and this table is a quick reference for me to find the DB views I need to use. 
<p>The naming method used is very confusing, but I'm sure in time I'll get the hang of them.</p> <p><strong>NOTE:</strong> For the DB Views I've listed in the following table, you will need to append the name of the ODM model to the view prefix that is listed in the table.</p> <style>table, th, td { border: 1px solid black; border-collapse: collapse; text-align: left; } </style> <table style="width:100%"> <tr> <th>Data Mining Type</th> <th>Algorithm &amp; Model Details</th> <th>12.2 DB View</th> <th>Description</th> </tr> <tr> <td>Association</td> <td>Association Rules</td> <td>DM$VR</td> <td>generated rules for Association Rules</td> </tr> <tr> <td></td> <td>Frequent Itemsets</td> <td>DM$VI</td> <td>describes the frequent itemsets</td> </tr> <tr> <td></td> <td>Transaction Itemsets</td> <td>DM$VT</td> <td>describes the transactional itemsets view</td> </tr> <tr> <td></td> <td>Transactional Rules</td> <td>DM$VA</td> <td>describes the transactional rule view and transactional itemsets</td> </tr> <tr> <td>Classification</td> <td>(General views for Classification models)</td> <td>DM$VT <p>DM$VC</p></td> <td>describes the target distribution for Classification models <p>describes the scoring cost matrix for Classification models</p></td> </tr> <tr> <td></td> <td>Decision Tree</td> <td>DM$VP <p>DM$VI</p> <p>DM$VO</p> <p>DM$VM</p></td> <td>describes the DT hierarchy &amp; the split info for each level in the DT <p>describes the statistics associated with individual tree nodes</p> <p>higher-level node description</p> <p>describes the cost matrix used by the Decision Tree build</p></td> </tr> <tr> <td></td> <td>Generalized Linear Model</td> <td>DM$VD <p>DM$VA</p></td> <td>describes model info for Linear Regression &amp; Logistic Regression <p>describes row-level info for Linear Regression &amp; Logistic Regression</p></td> </tr> <tr> <td></td> <td>Naive Bayes</td> <td>DM$VP <p>DM$VV</p></td> <td>describes the priors of the targets for Naïve Bayes <p>describes the conditional probabilities of the Naïve Bayes model</p></td> </tr> <tr> <td></td> <td>Support Vector Machine</td> <td>DM$VL</td> <td>describes the coefficients of a linear SVM algorithm</td> </tr> <tr> <td>Clustering</td> <td>(General views for Clustering models)</td> <td>DM$VD <p>DM$VA</p> <p>DM$VH</p> <p>DM$VR</p></td> <td>Cluster model description <p>Cluster attribute statistics</p> <p>Cluster histogram statistics</p> <p>Cluster Rule statistics</p></td> </tr> <tr> <td></td> <td>k-Means</td> <td>DM$VD <p>DM$VA</p> <p>DM$VH</p> <p>DM$VR</p></td> <td>k-Means model description <p>k-Means attribute statistics</p> <p>k-Means histogram statistics</p> <p>k-Means Rule statistics</p></td> </tr> <tr> <td></td> <td>O-Cluster</td> <td>DM$VD <p>DM$VA</p> <p>DM$VH</p> <p>DM$VR</p></td> <td>O-Cluster model description <p>O-Cluster attribute statistics</p> <p>O-Cluster histogram statistics</p> <p>O-Cluster Rule statistics</p></td> </tr> <tr> <td></td> <td>Expectation Maximization</td> <td>DM$VO <p>DM$VB</p> <p>DM$VI</p> <p>DM$VF</p> <p>DM$VM</p> <p>DM$VP</p></td> <td>describes the EM components <p>the pairwise Kullback–Leibler divergence</p> <p>attribute ranking similar to that of Attribute Importance</p> <p>parameters of multi-valued Bernoulli distributions</p> <p>mean &amp; variance parameters for attributes modelled by a Gaussian distribution</p> <p>the coefficients used by random projections to map nested columns to a lower-dimensional space</p></td> </tr> <tr> <td>Feature Extraction</td> <td>Non-negative Matrix Factorization</td> <td>DM$VE <p>DM$VI</p></td> <td>Encoding (H) of a NNMF model <p>H inverse matrix for an NNMF model</p></td> </tr> <tr> <td></td> <td>Singular Value Decomposition</td> <td>DM$VE <p>DM$VV</p> <p>DM$VU</p></td> <td>Associated PCA information for both classes of models <p>describes the right-singular vectors of an SVD model</p> <p>describes the left-singular vectors of an SVD model</p></td> </tr> <tr> <td></td> <td>Explicit Semantic Analysis</td> <td>DM$VA <p>DM$VF</p></td> <td>ESA attribute statistics <p>ESA model features</p></td> </tr> <tr> <td>Feature Selection</td> <td>Minimum Description Length</td> <td>DM$VA</td> <td>describes the Attribute Importance as well as the Attribute Importance rank</td> </tr></table>
<p>Normalizing and Error Handling views created by ODM Automatic Data Processing (ADP):</p><ul> <li>DM$VN : Normalization and Missing Value Handling</li> <li>DM$VB : Binning</li></ul> <p>Global Model Views:</p><ul> <li>DM$VG : Model global statistics</li> <li>DM$VS : Computed model settings</li> <li>DM$VW : Alerts issued during model creation</li></ul>
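<p>As a usage sketch (MY_DT_MODEL is a placeholder model name): append the model name to a prefix from the tables above and query the resulting view like any other:</p> <pre>-- global statistics for a model named MY_DT_MODEL
SELECT * FROM DM$VGMY_DT_MODEL;

-- the settings the same model was built with
SELECT * FROM DM$VSMY_DT_MODEL;</pre>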
href="https://epmqueen.files.wordpress.com/2017/04/image0111.png"><img data-attachment-id="1722" data-permalink="https://realtrigeek.com/2017/04/18/custom-dv-what-if-analysis-using-essbase-data/image0111-2/" data-orig-file="https://epmqueen.files.wordpress.com/2017/04/image0111.png" data-orig-size="234,397" data-comments-opened="1" data-image-meta="{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}" data-image-title="image0111" data-image-description="" data-medium-file="https://epmqueen.files.wordpress.com/2017/04/image0111.png?w=177&#038;h=300" data-large-file="https://epmqueen.files.wordpress.com/2017/04/image0111.png?w=234" class="alignnone size-medium wp-image-1722" src="https://epmqueen.files.wordpress.com/2017/04/image0111.png?w=177&#038;h=300" alt="" width="177" height="300" srcset="https://epmqueen.files.wordpress.com/2017/04/image0111.png?w=177&amp;h=300 177w, https://epmqueen.files.wordpress.com/2017/04/image0111.png?w=88&amp;h=150 88w, https://epmqueen.files.wordpress.com/2017/04/image0111.png 234w" sizes="(max-width: 177px) 100vw, 177px" /></a></p> <p>Where TotalPay = BasePay + LocalityPay</p> <p>I have entered my Budget Version increases for both base pay (General Schedule Increase) and locality pay (Locality Pay Increase) over the previous year’s Actual-&gt;Final pay.</p> <p><a href="https://epmqueen.files.wordpress.com/2017/04/image0011.png"><img data-attachment-id="1712" data-permalink="https://realtrigeek.com/2017/04/18/custom-dv-what-if-analysis-using-essbase-data/image0011-4/" data-orig-file="https://epmqueen.files.wordpress.com/2017/04/image0011.png" data-orig-size="562,140" data-comments-opened="1" data-image-meta="{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}" data-image-title="image0011" data-image-description="" data-medium-file="https://epmqueen.files.wordpress.com/2017/04/image0011.png?w=300&#038;h=75" data-large-file="https://epmqueen.files.wordpress.com/2017/04/image0011.png?w=562" class="alignnone size-medium wp-image-1712" src="https://epmqueen.files.wordpress.com/2017/04/image0011.png?w=300&#038;h=75" alt="" width="300" height="75" srcset="https://epmqueen.files.wordpress.com/2017/04/image0011.png?w=300&amp;h=75 300w, https://epmqueen.files.wordpress.com/2017/04/image0011.png?w=150&amp;h=37 150w, https://epmqueen.files.wordpress.com/2017/04/image0011.png 562w" sizes="(max-width: 300px) 100vw, 300px" /></a></p> <p>If I build a visualization off the different Versions, I can see how they rank compared to each other. Notice that I have used 2 different Y-axes to show the 2 different types of pay. 
I wanted to compare them, but the disparity between the two didn’t allow for a good visual comparison of increases/decreases.</p> <p><a href="https://epmqueen.files.wordpress.com/2017/04/image0021.png"><img class="alignnone size-medium wp-image-1713" src="https://epmqueen.files.wordpress.com/2017/04/image0021.png?w=300&#038;h=167" alt="" width="300" height="167" /></a></p> <p>But what if you wanted to create a new version, on the fly, for what-if analyses without touching Essbase?</p> <p>Create a calculation!</p> <p>To create a new what-if Version in this case, I created a new Base Pay Increase, a Locality Pay Increase, and a total of the two.
Here are the calculations:</p> <p>Base Pay Increase – OTF (Final)</p> <p><a href="https://epmqueen.files.wordpress.com/2017/04/image0031.png"><img class="alignnone size-medium wp-image-1714" src="https://epmqueen.files.wordpress.com/2017/04/image0031.png?w=300&#038;h=190" alt="" width="300" height="190" /></a></p> <p>Locality Pay Increase – OTF (Final)</p> <p><a href="https://epmqueen.files.wordpress.com/2017/04/image0041.png"><img class="alignnone size-medium wp-image-1715" src="https://epmqueen.files.wordpress.com/2017/04/image0041.png?w=300&#038;h=192" alt="" width="300" height="192" /></a></p> <p>You can see that I filtered to just the Final version for the calculation using a CASE (if) statement.</p>
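<p>The exact logic lives in the screenshots above; as a rough text sketch (the rates and member names here are placeholders, not the post’s actual values), each calculation follows this pattern:</p> <pre>-- Base Pay Increase - OTF (Final): apply a what-if rate to the Final version
CASE WHEN Version = 'Final' THEN BasePay * 0.02 ELSE 0 END

-- Locality Pay Increase - OTF (Final): the same pattern with its own rate
CASE WHEN Version = 'Final' THEN LocalityPay * 0.005 ELSE 0 END</pre>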
data-orig-file="https://epmqueen.files.wordpress.com/2017/04/image0051.png" data-orig-size="1623,839" data-comments-opened="1" data-image-meta="{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}" data-image-title="image0051" data-image-description="" data-medium-file="https://epmqueen.files.wordpress.com/2017/04/image0051.png?w=300&#038;h=155" data-large-file="https://epmqueen.files.wordpress.com/2017/04/image0051.png?w=840" class="alignnone size-medium wp-image-1716" src="https://epmqueen.files.wordpress.com/2017/04/image0051.png?w=300&#038;h=155" alt="" width="300" height="155" srcset="https://epmqueen.files.wordpress.com/2017/04/image0051.png?w=300&amp;h=155 300w, https://epmqueen.files.wordpress.com/2017/04/image0051.png?w=600&amp;h=310 600w, https://epmqueen.files.wordpress.com/2017/04/image0051.png?w=150&amp;h=78 150w" sizes="(max-width: 300px) 100vw, 300px" /></a></p> <p>I’ve, essentially, created a new Version in my Budget scenario without touching the Essbase cube.</p> <p>Note that I filtered out to only Budget by putting the Scenario Name in the visualization filters and chose “Budget”.</p> <p>Now, if I want to adjust the numbers behind my new Version, it’s as simple as adjusting the calculation, not the Essbase cube!</p> <p><a href="https://epmqueen.files.wordpress.com/2017/04/image0061.png"><img data-attachment-id="1717" data-permalink="https://realtrigeek.com/2017/04/18/custom-dv-what-if-analysis-using-essbase-data/image0061-3/" data-orig-file="https://epmqueen.files.wordpress.com/2017/04/image0061.png" data-orig-size="631,401" data-comments-opened="1" data-image-meta="{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}" data-image-title="image0061" data-image-description="" data-medium-file="https://epmqueen.files.wordpress.com/2017/04/image0061.png?w=300&#038;h=191" data-large-file="https://epmqueen.files.wordpress.com/2017/04/image0061.png?w=631" class="alignnone size-medium wp-image-1717" src="https://epmqueen.files.wordpress.com/2017/04/image0061.png?w=300&#038;h=191" alt="" width="300" height="191" srcset="https://epmqueen.files.wordpress.com/2017/04/image0061.png?w=300&amp;h=191 300w, https://epmqueen.files.wordpress.com/2017/04/image0061.png?w=600&amp;h=382 600w, https://epmqueen.files.wordpress.com/2017/04/image0061.png?w=150&amp;h=95 150w" sizes="(max-width: 300px) 100vw, 300px" /></a></p> <p>You can also add more dimensions to your CASE statement to get a true intersection of Essbase data.</p> <p><a href="https://epmqueen.files.wordpress.com/2017/04/image0071.png"><img data-attachment-id="1718" data-permalink="https://realtrigeek.com/2017/04/18/custom-dv-what-if-analysis-using-essbase-data/image0071-2/" data-orig-file="https://epmqueen.files.wordpress.com/2017/04/image0071.png" data-orig-size="631,402" data-comments-opened="1" 
data-image-meta="{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}" data-image-title="image0071" data-image-description="" data-medium-file="https://epmqueen.files.wordpress.com/2017/04/image0071.png?w=300&#038;h=191" data-large-file="https://epmqueen.files.wordpress.com/2017/04/image0071.png?w=631" class="alignnone size-medium wp-image-1718" src="https://epmqueen.files.wordpress.com/2017/04/image0071.png?w=300&#038;h=191" alt="" width="300" height="191" srcset="https://epmqueen.files.wordpress.com/2017/04/image0071.png?w=300&amp;h=191 300w, https://epmqueen.files.wordpress.com/2017/04/image0071.png?w=600&amp;h=382 600w, https://epmqueen.files.wordpress.com/2017/04/image0071.png?w=150&amp;h=96 150w" sizes="(max-width: 300px) 100vw, 300px" /></a></p> <p>Result:</p> <p><a href="https://epmqueen.files.wordpress.com/2017/04/image0081.png"><img data-attachment-id="1719" data-permalink="https://realtrigeek.com/2017/04/18/custom-dv-what-if-analysis-using-essbase-data/image0081-4/" data-orig-file="https://epmqueen.files.wordpress.com/2017/04/image0081.png" data-orig-size="1159,695" data-comments-opened="1" data-image-meta="{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}" data-image-title="image0081" data-image-description="" data-medium-file="https://epmqueen.files.wordpress.com/2017/04/image0081.png?w=300&#038;h=180" data-large-file="https://epmqueen.files.wordpress.com/2017/04/image0081.png?w=840" class="alignnone size-medium wp-image-1719" src="https://epmqueen.files.wordpress.com/2017/04/image0081.png?w=300&#038;h=180" alt="" width="300" height="180" srcset="https://epmqueen.files.wordpress.com/2017/04/image0081.png?w=300&amp;h=180 300w, https://epmqueen.files.wordpress.com/2017/04/image0081.png?w=600&amp;h=360 600w, https://epmqueen.files.wordpress.com/2017/04/image0081.png?w=150&amp;h=90 150w" sizes="(max-width: 300px) 100vw, 300px" /></a></p> <p>If I choose the total for the year, I can quickly compare the original Budget numbers with my new “Version”.</p> <p><a href="https://epmqueen.files.wordpress.com/2017/04/image0091.png"><img data-attachment-id="1720" data-permalink="https://realtrigeek.com/2017/04/18/custom-dv-what-if-analysis-using-essbase-data/image0091-4/" data-orig-file="https://epmqueen.files.wordpress.com/2017/04/image0091.png" data-orig-size="1493,834" data-comments-opened="1" data-image-meta="{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}" data-image-title="image0091" data-image-description="" 
data-medium-file="https://epmqueen.files.wordpress.com/2017/04/image0091.png?w=300&#038;h=168" data-large-file="https://epmqueen.files.wordpress.com/2017/04/image0091.png?w=840" class="alignnone size-medium wp-image-1720" src="https://epmqueen.files.wordpress.com/2017/04/image0091.png?w=300&#038;h=168" alt="" width="300" height="168" srcset="https://epmqueen.files.wordpress.com/2017/04/image0091.png?w=300&amp;h=168 300w, https://epmqueen.files.wordpress.com/2017/04/image0091.png?w=600&amp;h=336 600w, https://epmqueen.files.wordpress.com/2017/04/image0091.png?w=150&amp;h=84 150w" sizes="(max-width: 300px) 100vw, 300px" /></a></p> <p>So, there you go! No need for OAC Enterprise Edition to do simple what-if analyses in DV using Essbase data. Just use custom calculations!</p><br /> <a rel="nofollow" href="http://feeds.wordpress.com/1.0/gocomments/epmqueen.wordpress.com/1711/"><img alt="" border="0" src="http://feeds.wordpress.com/1.0/comments/epmqueen.wordpress.com/1711/" /></a> <img alt="" border="0" src="https://pixel.wp.com/b.gif?host=realtrigeek.com&#038;blog=70089387&#038;post=1711&#038;subd=epmqueen&#038;ref=&#038;feed=1" width="1" height="1" /> Sarah Craynon Zumbrum http://realtrigeek.com/?p=1711 Tue Apr 18 2017 10:51:07 GMT-0400 (EDT) Product Review: APEX R&D’s APEX Office Print https://realtrigeek.com/2017/04/17/product-review-apex-rds-apex-office-print/ <p>I am going to start this blog post off with a firm statement that I am not being paid in any way, shape, or form to write this blog. I am writing it because it solved a problem I had in, quite possibly, the simplest way possible!</p> <p>Business Case: I have created an APEX application that generates multiple calculations based off measurement readings that needs to be exported to a formatted document (Word or PDF).</p> <p>Business Problem: APEX doesn’t not export formatted Word or PDF (or other Microsoft documents).</p> <p>History &amp; Resolution: I’ve said this many times, but I want to state, again, that I am not an APEX expert. I like to say I’m an APEX basic user. I know enough knowledge to be dangerous, but not revolutionary. With this said, when I tried to implement exporting from APEX, I was surprised to learn that (this wonderful, fun, dynamic) tool does not export to, say, Word or PDF. This was a bit sad and frustrating for me because this was the last piece of my application’s puzzle. At first, I thought it was my lack of knowledge regarding the APEX toolset, but I quickly found out that it was a tool limitation, not a personal skillset limitation. I did some Googling to see what other APEX developers used to get around this feature limitation and one of the first I found was <a href="https://www.apexofficeprint.com/">APEX Office Print</a> (AOP) by APEX R&amp;D. It seemed like a straightforward process to implement, so I went down the rabbit hole to test the tool out in my apex.oracle.com environment.</p> <p>Recall that I am not an APEX developer… I am a BI &amp; EPM person. I work with KPIs, calculations, statistics, and numbers…not hardcore DBA or APEX standard skillsets. So when I say that AOP is easy to install and get up and running, trust me! I am going to show my process to install the tool via an upgrade process since I did not take screenshots around a year ago when I originally set up the tool.</p> <p>After you create a login to the AOP website, you can download the tool based on your APEX implementation type. My APEX installation is the basic (and free!) 
apex.oracle.com account, so I chose to download the Cloud Package.</p> <p><a href="https://epmqueen.files.wordpress.com/2017/04/image001.png"><img src="https://epmqueen.files.wordpress.com/2017/04/image001.png?w=300&#038;h=139" alt="" width="300" height="139" /></a></p> <p>A zip file is downloaded that contains the files you need to get up and running in a matter of seconds to minutes.</p> <p><a href="https://epmqueen.files.wordpress.com/2017/04/image002.png"><img src="https://epmqueen.files.wordpress.com/2017/04/image002.png?w=300&#038;h=166" alt="" width="300" height="166" /></a></p> <p>The db -&gt; aop_db_pkg.sql script is the first one you will use.</p> <p><a href="https://epmqueen.files.wordpress.com/2017/04/image003.png"><img src="https://epmqueen.files.wordpress.com/2017/04/image003.png?w=300&#038;h=89" alt="" width="300" height="89" /></a></p>
data-image-meta="{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}" data-image-title="image003" data-image-description="" data-medium-file="https://epmqueen.files.wordpress.com/2017/04/image003.png?w=300&#038;h=89" data-large-file="https://epmqueen.files.wordpress.com/2017/04/image003.png?w=695" class="alignnone size-medium wp-image-1699" src="https://epmqueen.files.wordpress.com/2017/04/image003.png?w=300&#038;h=89" alt="" width="300" height="89" srcset="https://epmqueen.files.wordpress.com/2017/04/image003.png?w=300&amp;h=89 300w, https://epmqueen.files.wordpress.com/2017/04/image003.png?w=600&amp;h=178 600w, https://epmqueen.files.wordpress.com/2017/04/image003.png?w=150&amp;h=44 150w" sizes="(max-width: 300px) 100vw, 300px" /></a></p> <p>In APEX, you will Upload this script the “SQL Scripts” area then “Run” the script.</p> <p><a href="https://epmqueen.files.wordpress.com/2017/04/image004.png"><img data-attachment-id="1700" data-permalink="https://realtrigeek.com/2017/04/17/product-review-apex-rds-apex-office-print/image004-9/" data-orig-file="https://epmqueen.files.wordpress.com/2017/04/image004.png" data-orig-size="1717,214" data-comments-opened="1" data-image-meta="{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}" data-image-title="image004" data-image-description="" data-medium-file="https://epmqueen.files.wordpress.com/2017/04/image004.png?w=300&#038;h=37" data-large-file="https://epmqueen.files.wordpress.com/2017/04/image004.png?w=840" class="alignnone size-medium wp-image-1700" src="https://epmqueen.files.wordpress.com/2017/04/image004.png?w=300&#038;h=37" alt="" width="300" height="37" srcset="https://epmqueen.files.wordpress.com/2017/04/image004.png?w=300&amp;h=37 300w, https://epmqueen.files.wordpress.com/2017/04/image004.png?w=594&amp;h=74 594w, https://epmqueen.files.wordpress.com/2017/04/image004.png?w=150&amp;h=19 150w" sizes="(max-width: 300px) 100vw, 300px" /></a></p> <p>Next, I imported the plug-in to my application’s Shared Components -&gt; Plug-ins section:</p> <p><a href="https://epmqueen.files.wordpress.com/2017/04/image005.png"><img data-attachment-id="1701" data-permalink="https://realtrigeek.com/2017/04/17/product-review-apex-rds-apex-office-print/image005-10/" data-orig-file="https://epmqueen.files.wordpress.com/2017/04/image005.png" data-orig-size="567,170" data-comments-opened="1" data-image-meta="{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}" data-image-title="image005" data-image-description="" 
data-medium-file="https://epmqueen.files.wordpress.com/2017/04/image005.png?w=300&#038;h=90" data-large-file="https://epmqueen.files.wordpress.com/2017/04/image005.png?w=567" class="alignnone size-medium wp-image-1701" src="https://epmqueen.files.wordpress.com/2017/04/image005.png?w=300&#038;h=90" alt="" width="300" height="90" srcset="https://epmqueen.files.wordpress.com/2017/04/image005.png?w=300&amp;h=90 300w, https://epmqueen.files.wordpress.com/2017/04/image005.png?w=150&amp;h=45 150w, https://epmqueen.files.wordpress.com/2017/04/image005.png 567w" sizes="(max-width: 300px) 100vw, 300px" /></a></p> <p>Note that the last 2 digits corresponds to the version of APEX you are running. If using the cloud version, you look in the lower right-hand portion of your screen to get your details:</p> <p><img data-attachment-id="1702" data-permalink="https://realtrigeek.com/2017/04/17/product-review-apex-rds-apex-office-print/image006-9/" data-orig-file="https://epmqueen.files.wordpress.com/2017/04/image006.png" data-orig-size="165,19" data-comments-opened="1" data-image-meta="{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}" data-image-title="image006" data-image-description="" data-medium-file="https://epmqueen.files.wordpress.com/2017/04/image006.png?w=300&#038;h=35" data-large-file="https://epmqueen.files.wordpress.com/2017/04/image006.png?w=300&#038;h=35" class="alignnone size-medium wp-image-1702" src="https://epmqueen.files.wordpress.com/2017/04/image006.png?w=300&#038;h=35" alt="" srcset="https://epmqueen.files.wordpress.com/2017/04/image006.png 165w, https://epmqueen.files.wordpress.com/2017/04/image006.png?w=150&amp;h=17 150w" sizes="(max-width: 165px) 100vw, 165px" /></p> <p>Once uploaded, I can see my process type plug-in loaded.</p> <p><a href="https://epmqueen.files.wordpress.com/2017/04/image007.png"><img data-attachment-id="1703" data-permalink="https://realtrigeek.com/2017/04/17/product-review-apex-rds-apex-office-print/image007-7/" data-orig-file="https://epmqueen.files.wordpress.com/2017/04/image007.png" data-orig-size="1719,248" data-comments-opened="1" data-image-meta="{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}" data-image-title="image007" data-image-description="" data-medium-file="https://epmqueen.files.wordpress.com/2017/04/image007.png?w=300&#038;h=43" data-large-file="https://epmqueen.files.wordpress.com/2017/04/image007.png?w=840" class="alignnone size-medium wp-image-1703" src="https://epmqueen.files.wordpress.com/2017/04/image007.png?w=300&#038;h=43" alt="" width="300" height="43" srcset="https://epmqueen.files.wordpress.com/2017/04/image007.png?w=300&amp;h=43 300w, https://epmqueen.files.wordpress.com/2017/04/image007.png?w=596&amp;h=86 596w, https://epmqueen.files.wordpress.com/2017/04/image007.png?w=150&amp;h=22 150w" sizes="(max-width: 300px) 100vw, 300px" /></a></p> <p>That’s it for 
<p>Now to use the tool… it’s just as easy.</p> <p>Say you have a template you want to use for AOP. I have a *very* simple example template in Word, shown below.</p> <p><a href="https://epmqueen.files.wordpress.com/2017/04/image008.png"><img src="https://epmqueen.files.wordpress.com/2017/04/image008.png?w=300&#038;h=232" alt="" width="300" height="232" /></a></p> <p>Notice that the parts I want pulled from the database are inside curly braces.</p>
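<p>As a rough illustration of what such a template contains (the tag names here are invented, not the ones in my screenshot), the Word document is just ordinary text with {tags} marking where the data should land:</p> <pre>Measurement Report for {customer}

Reading taken: {reading}
Calculated result: {calc_result}</pre>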
<p>I’m going to load this file to my application’s Shared Components -&gt; Static Application Files folder.</p> <p><a href="https://epmqueen.files.wordpress.com/2017/04/image009.png"><img src="https://epmqueen.files.wordpress.com/2017/04/image009.png?w=300&#038;h=46" alt="" width="300" height="46" /></a></p> <p>Now I’m going to go to the page where I want to utilize AOP.</p> <p>I’m using it on a button as a dynamic action. To correspond with the numbers below:</p> <p>1. Give the dynamic action a name.</p> <p>2. Choose APEX Office Print (AOP) [Plug-In] as the type.</p> <p>3. Choose Static Application Files as the Template Type, since we loaded our Word document there.</p> <p>4. Enter the name of the Word document.</p> <p>5. Enter the SQL (see the second section below for those details).</p> <p>6. Choose an output file name.</p> <p>7. Choose the type of output you would like.</p> <p>Now, save and run that page!</p> <p><a href="https://epmqueen.files.wordpress.com/2017/04/image010.png"><img src="https://epmqueen.files.wordpress.com/2017/04/image010.png?w=300&#038;h=142" alt="" width="300" height="142" /></a></p> <p>The SQL I used was this:</p> <p><a href="https://epmqueen.files.wordpress.com/2017/04/image011.png"><img src="https://epmqueen.files.wordpress.com/2017/04/image011.png?w=300&#038;h=251" alt="" width="300" height="251" /></a></p> <p>Notice that the column aliases correspond to the curly-brace tags in the Word document.</p>
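<p>Since the query itself appears only as a screenshot above, here is a rough, hypothetical sketch of the shape it takes. The table MEASUREMENTS, its columns, and the page item :P1_ID are invented for illustration; the quoted aliases are what must line up with the template tags:</p> <pre>-- Hypothetical sketch: each alias must match a {tag} in the Word template
SELECT m.customer    AS "customer",
       m.reading     AS "reading",
       m.calc_result AS "calc_result"
  FROM measurements m
 WHERE m.id = :P1_ID</pre>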
data-permalink="https://realtrigeek.com/2017/04/17/product-review-apex-rds-apex-office-print/image012-8/" data-orig-file="https://epmqueen.files.wordpress.com/2017/04/image012.png" data-orig-size="285,258" data-comments-opened="1" data-image-meta="{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}" data-image-title="image012" data-image-description="" data-medium-file="https://epmqueen.files.wordpress.com/2017/04/image012.png?w=300&#038;h=272" data-large-file="https://epmqueen.files.wordpress.com/2017/04/image012.png?w=300&#038;h=272" class="alignnone size-medium wp-image-1708" src="https://epmqueen.files.wordpress.com/2017/04/image012.png?w=300&#038;h=272" alt="" srcset="https://epmqueen.files.wordpress.com/2017/04/image012.png 285w, https://epmqueen.files.wordpress.com/2017/04/image012.png?w=150&amp;h=136 150w" sizes="(max-width: 285px) 100vw, 285px" /></a></p> <p>Perfect and SO simple! That’s it!</p> <p>I want to add that there are certain number formatting options that should be observed when using AOP. With my inexperience with being an Oracle DBA, I struggled a bit to get it all working the first time around. I want to highlight that the support by the APEX R&amp;D team is about the best I have ever encountered. Their response time was within minutes and their patience with an APEX beginner was commendable. I even learned some things about data types from them! When I had issues this past weekend with some output changes, they, again, were extremely responsive and helped me get up and going <em>on the weekend</em>. I have no doubt that they would take care of a large installation just as well (if not better) than they took care of me, an installation that does around 100 printings a year.</p> <p>To conclude, if you are looking for a quick and easy APEX tool or plug-in to print Word, PDF, Excel, PowerPoint, or HTML, this is your tool. Easy to implement and easy to use. …And their support is top notch. Again, I’m not being paid in any way, but I want to share a simple solution to my APEX output issue!</p><br /> <a rel="nofollow" href="http://feeds.wordpress.com/1.0/gocomments/epmqueen.wordpress.com/1696/"><img alt="" border="0" src="http://feeds.wordpress.com/1.0/comments/epmqueen.wordpress.com/1696/" /></a> <img alt="" border="0" src="https://pixel.wp.com/b.gif?host=realtrigeek.com&#038;blog=70089387&#038;post=1696&#038;subd=epmqueen&#038;ref=&#038;feed=1" width="1" height="1" /> Sarah Craynon Zumbrum http://realtrigeek.com/?p=1696 Mon Apr 17 2017 17:21:58 GMT-0400 (EDT) Kscope17 Planning Track Highlights – Edina Chmielarski-White http://www.odtug.com/p/bl/et/blogaid=707&source=1 Edina Chmielarski-White has been leading the Planning Track for three years. 
She is looking forward to all the Planning track presentations, but here are the ones she doesn’t want to miss: ODTUG http://www.odtug.com/p/bl/et/blogaid=707&source=1 Mon Apr 17 2017 09:59:32 GMT-0400 (EDT) Deleting a large number of groups in PBCS http://garycris.blogspot.com/2017/04/deleting-large-number-of-groups-in-pbcs.html Adjusting to working in the cloud takes some time; there are things we are used to being able to do on-premise, where the functionality may be more mature or we have additional utilities to assist us.<br /><br />Recently I ran into an issue in PBCS where we had imported a large number of groups and then decided we did not need them. At first glance, it appeared the only way to remove them was one by one via the GUI. Since there were over 500 groups, I was not willing to do that. I opened an SR, and unfortunately Oracle confirmed there was no way to do it and I would need to submit an enhancement request.<br /><br />I did some poking around on blogs and in the documentation and, with some more trial and error, actually figured out a way to do it by altering LCM exports and the Import Settings. Below are the steps I took to accomplish this. I suspect this method might be "unsupported" by Oracle (I'm not sure), so full disclosure: you are doing this at your own risk. Please be sure to take a full backup of your environment before you attempt this, in case you have to roll back.<br /><br />With that said, these are the steps to delete groups en masse in PBCS.<br /><br />1. LCM Export your current security, and name the snapshot GROUPS<br /><br /><a href="https://2.bp.blogspot.com/-YxP0VpL-3OA/WPDlXDMRqYI/AAAAAAAAAUY/FukbNlz9OsU5rah8Ht876ahRzVXEqKWHACLcB/s1600/Screen%2BShot%2B2017-04-14%2Bat%2B11.05.41%2BAM.png"><img border="0" height="131" src="https://2.bp.blogspot.com/-YxP0VpL-3OA/WPDlXDMRqYI/AAAAAAAAAUY/FukbNlz9OsU5rah8Ht876ahRzVXEqKWHACLcB/s400/Screen%2BShot%2B2017-04-14%2Bat%2B11.05.41%2BAM.png" width="400" /></a><br /><br /><div>2. Download the GROUPS snapshot</div><div><a href="https://3.bp.blogspot.com/-SfnsB523b2o/WPDlx43abKI/AAAAAAAAAUc/q7eF1Bbwa1g1YwcO0KkiMHJUNf6kRAATACLcB/s1600/Screen%2BShot%2B2017-04-14%2Bat%2B11.07.23%2BAM.png"><img border="0" height="145" src="https://3.bp.blogspot.com/-SfnsB523b2o/WPDlx43abKI/AAAAAAAAAUc/q7eF1Bbwa1g1YwcO0KkiMHJUNf6kRAATACLcB/s400/Screen%2BShot%2B2017-04-14%2Bat%2B11.07.23%2BAM.png" width="400" /></a></div><div><br /></div><div>3. Once downloaded, extract GROUPS.zip to a temp directory on your PC, such as C:\Temp.</div><div><br />4. Open C:\Temp\GROUPS\HSS-Shared Services\resource\Native Directory\Groups.csv with a text editor.</div><div>&nbsp; &nbsp; &nbsp;- Remove the groups you want to <b>KEEP</b> from the #group section (a sketch of the edited file follows below)</div><div>&nbsp; &nbsp; &nbsp;- Remove all #group_children sections (you don't need them for the delete operation)</div>
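<div><br /></div><div>To make that concrete, here is a rough, hypothetical sketch of an edited Groups.csv. The group names are invented, and the exact header row can vary by release, so keep whatever header your own export contains; the point is that only the groups you want <b>deleted</b> remain listed:</div><pre>#group
id,provider_id,name,description,internal_id
GRP_TEMP_001,Native Directory,GRP_TEMP_001,,
GRP_TEMP_002,Native Directory,GRP_TEMP_002,,</pre>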
<div><br /></div><div>5. Go to C:\Temp\GROUPS\HSS-Shared Services\resource\Native Directory\Assigned Roles</div><div>&nbsp; &nbsp; &nbsp;- Open the various folders and modify the .csv files in them. You have to remove all references to the groups you are deleting. Leave references to groups that you are keeping.</div><div><br /></div><div>6. Now that you are done editing, you can zip your modified files for re-import to PBCS.</div><div><br /></div><div>*Note that I had a lot of issues when I first tried to re-zip my files; I was getting a lot of errors on upload and import. I will show the method that worked for me; some of these details may or may not be strictly necessary.</div><div><br /></div><div>-If you do not already have 7zip, download and install it.</div><div>-Open 7zip and navigate to the temp directory where the modified files are located.</div><div>Note: it is important that you do not zip the GROUPS folder itself; you have to archive the folder 'HSS-Shared Services' and the two XML files, 'Export.xml' and 'Import.xml' (see screenshot)</div><div><br /></div><div><a href="https://1.bp.blogspot.com/-bHeHewc8jxU/WPDrjx8BfvI/AAAAAAAAAUk/mviU8y5PE8o5aSXxSS90BSB5dTX1BuOegCLcB/s1600/Screen%2BShot%2B2017-04-14%2Bat%2B11.32.04%2BAM.png"><img border="0" height="80" src="https://1.bp.blogspot.com/-bHeHewc8jxU/WPDrjx8BfvI/AAAAAAAAAUk/mviU8y5PE8o5aSXxSS90BSB5dTX1BuOegCLcB/s400/Screen%2BShot%2B2017-04-14%2Bat%2B11.32.04%2BAM.png" width="400" /></a></div><div><br /></div><div>-Click the Add button</div><div><br /></div><div>In the 'Add to Archive' options select:</div><div>Archive format = Zip</div><div>Compression level = Store</div><div><br /></div><div>(This is a case where other compression methods may work, but I had errors with the few I tried. Store essentially puts the files in a zip container without actually compressing them. This was the way I got it to import back into PBCS successfully. If you prefer the command line, see the sketch below.)</div><div><br /></div><div class="separator" style="clear: both; text-align: center;"><a href="https://1.bp.blogspot.com/-YH8yuuXJmmY/WPDvjFxEKwI/AAAAAAAAAUo/oHYywQyGDOk-rezi6bovIHbllPpBAsZ0gCLcB/s1600/Screen%2BShot%2B2017-04-14%2Bat%2B11.49.11%2BAM.png"><img border="0" height="400" src="https://1.bp.blogspot.com/-YH8yuuXJmmY/WPDvjFxEKwI/AAAAAAAAAUo/oHYywQyGDOk-rezi6bovIHbllPpBAsZ0gCLcB/s400/Screen%2BShot%2B2017-04-14%2Bat%2B11.49.11%2BAM.png" width="396" /></a></div>
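<div><br /></div><div>(As a hedged command-line alternative to the GUI steps above: assuming 7-Zip's command-line executable, 7z.exe, is on your PATH, something like the following should produce the same store-only archive; naming it GROUPSMOD.zip up front also covers the rename in the next step. This sketch is mine, not from the original post.)</div><pre>cd C:\Temp\GROUPS
7z a -tzip -mx=0 GROUPSMOD.zip "HSS-Shared Services" Export.xml Import.xml</pre>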
<div class="separator" style="clear: both; text-align: left;">7. Rename your zip to GROUPSMOD.zip</div><div class="separator" style="clear: both; text-align: left;"><br /></div><div class="separator" style="clear: both; text-align: left;">8. Go to PBCS and import your zip file</div><div class="separator" style="clear: both; text-align: left;"><br /></div><div class="separator" style="clear: both; text-align: center;"><a href="https://1.bp.blogspot.com/-o42_34tVtmI/WPDw5NcX-sI/AAAAAAAAAUs/ZsaMTpLtia04AhzdMmjWZrXag9IZlV3qwCLcB/s1600/Screen%2BShot%2B2017-04-14%2Bat%2B11.54.30%2BAM.png"><img border="0" height="140" src="https://1.bp.blogspot.com/-o42_34tVtmI/WPDw5NcX-sI/AAAAAAAAAUs/ZsaMTpLtia04AhzdMmjWZrXag9IZlV3qwCLcB/s400/Screen%2BShot%2B2017-04-14%2Bat%2B11.54.30%2BAM.png" width="400" /></a></div><div class="separator" style="clear: both; text-align: left;">9. The next setting is extremely important. You need to tell LCM to Delete instead of add when you import the files back in. Next to the Refresh button is the 'Import Settings' button; it looks like a hammer and wrench. Open 'Import Settings', change 'Groups and Membership - Import Mode' to 'Delete', and click 'Save and Close'.</div><div class="separator" style="clear: both; text-align: left;"><br /></div><div class="separator" style="clear: both; text-align: center;"><a href="https://4.bp.blogspot.com/-fO29EGJuZdM/WPDypC1oWiI/AAAAAAAAAUw/X20UQYBPus8y-DRv9uVuLWbO4c64T43-wCLcB/s1600/Screen%2BShot%2B2017-04-14%2Bat%2B12.00.58%2BPM.png"><img border="0" height="95" src="https://4.bp.blogspot.com/-fO29EGJuZdM/WPDypC1oWiI/AAAAAAAAAUw/X20UQYBPus8y-DRv9uVuLWbO4c64T43-wCLcB/s400/Screen%2BShot%2B2017-04-14%2Bat%2B12.00.58%2BPM.png" width="400" /></a></div>
<div class="separator" style="clear: both; text-align: left;">10. Import your modified LCM</div><div class="separator" style="clear: both; text-align: left;"><br /></div><div class="separator" style="clear: both; text-align: center;"><a href="https://3.bp.blogspot.com/-Xdci2B3kqvQ/WPDzJ6AtKuI/AAAAAAAAAU0/_dYfFbEqYRYOTQeHLmBDBeXBoHtr8TmYwCLcB/s1600/Screen%2BShot%2B2017-04-14%2Bat%2B12.04.14%2BPM.png"><img border="0" height="158" src="https://3.bp.blogspot.com/-Xdci2B3kqvQ/WPDzJ6AtKuI/AAAAAAAAAU0/_dYfFbEqYRYOTQeHLmBDBeXBoHtr8TmYwCLcB/s400/Screen%2BShot%2B2017-04-14%2Bat%2B12.04.14%2BPM.png" width="400" /></a></div><div class="separator" style="clear: both; text-align: left;">Check the migration status report for Completion.</div><div class="separator" style="clear: both; text-align: center;"><a href="https://1.bp.blogspot.com/-xdV5OIgnlfE/WPDzivi9aiI/AAAAAAAAAU4/B5q7_fdt3REKNOmOl5mmPjt_1DzwlihpgCLcB/s1600/Screen%2BShot%2B2017-04-14%2Bat%2B12.06.01%2BPM.png"><img border="0" height="61" src="https://1.bp.blogspot.com/-xdV5OIgnlfE/WPDzivi9aiI/AAAAAAAAAU4/B5q7_fdt3REKNOmOl5mmPjt_1DzwlihpgCLcB/s400/Screen%2BShot%2B2017-04-14%2Bat%2B12.06.01%2BPM.png" width="400" /></a></div><div class="separator" style="clear: both; text-align: left;">Navigate back to Access Control and you should see that your groups have been removed.</div><div class="separator" style="clear: both; text-align: left;"><br /></div><div class="separator" style="clear: both; text-align: left;">***&nbsp;<b>Important</b> - Be sure to go back to 'Import Settings' and change the mode from 'Delete' back to the default 'Create or Update'. Failing to do this could cause a big problem the next time someone tries to restore a backup or migrate across environments.</div> Gary Crisci, Oracle Ace tag:blogger.com,1999:blog-5844205335075765519.post-780567371696088795 Fri Apr 14 2017 12:40:00 GMT-0400 (EDT) Stream Me Up (to the Cloud), Scotty https://medium.com/red-pill-analytics/stream-me-up-to-the-cloud-scotty-d7f36316c5a6?source=rss----abcc62a8d63e---4 <figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*vMxJLRqFK42rHivVhj5sTQ.jpeg" /></figure><h4>Installing StreamSets Data Collector on Amazon Web Services EC2</h4><p>I’ve had some fun working with StreamSets Data Collector lately and wanted to share how to quickly get up and running on an Amazon Web Services (AWS) Elastic Compute Cloud (EC2) instance and build a simple pipeline.</p><p>For anyone unaware, StreamSets Data Collector is, in their own words, a low-latency ingest infrastructure tool that lets you create continuous data ingest pipelines using a drag-and-drop UI within an integrated development environment (IDE).</p><p>To follow along, you should have enough working knowledge of AWS to create and start an AWS EC2 instance and to create and access an AWS Simple Storage Service (S3) bucket. That being said, these instructions also apply, for the most part, to any Linux installation.</p><p>The most important prerequisite is to have access to an instance that meets the StreamSets installation requirements outlined <a href="https://streamsets.com/documentation/datacollector/latest/help/#Installation/InstallationAndConfig.html%23concept_vzg_n2p_kq">here</a>.
I’m running an AWS Red Hat Enterprise Linux (RHEL) t2.micro instance for this demo; you will no doubt want something with a little more horsepower if you intend to use your instance for true development.</p><p>It is important to note that this is just one of many ways to install and configure StreamSets Data Collector. Make sure to check out the <a href="https://streamsets.com">StreamSets site</a> and read through the documentation to determine which method will work best for your use case. Now that the basics (and a slew of acronyms) are covered, we can get started.</p><p>Fire up the AWS EC2 instance and log in. I’m running on a Mac and using the built-in terminal; I recommend PuTTY or something similar for folks running Windows.</p><pre>ssh ec2-user@&lt;public_ip&gt; -i &lt;loc_of_pem&gt;/&lt;name_of_pem&gt;.pem</pre><p>Install wget, if you haven’t already.</p><pre>sudo yum install wget</pre><p>Create a new directory for the StreamSets download and navigate to it.</p><pre>sudo mkdir /home/ec2-user/StreamSets<br>cd /home/ec2-user/StreamSets</pre><p>Download StreamSets Data Collector using wget. The URL below is for the version 2.4.1 RPM install, but a new version of Data Collector is likely out by the time this gets published (those guys and gals move quickly!). Be sure to check for the latest version on the <a href="https://streamsets.com/opensource/">StreamSets Data Collector website</a>.</p><p>Look for the download here:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*Ehmoty7lgmM-THGDeq7JiA.png" /></figure><p>You’ll want to right-click <em>Full Download (RPM)</em> and select ‘Copy Link’. Replace the link in the command below with the latest and greatest.</p><pre>sudo wget https://archives.streamsets.com/datacollector/2.4.1.0/rpm/streamsets-datacollector-2.4.1.0-all-rpms.tgz</pre><p>Extract the StreamSets Data Collector install files.</p><pre>sudo tar -xzf /home/ec2-user/StreamSets/streamsets-datacollector-2.4.1.0-all-rpms.tgz</pre><p>Install StreamSets using yum/localinstall.</p><pre>sudo yum localinstall /home/ec2-user/StreamSets/streamsets-datacollector-2.4.1.0-all-rpms/streamsets*</pre><p>Attempting to start the service reveals there is one step remaining, as the command fails.</p><pre>sudo service sdc start</pre><pre>ERROR: sdc has died, see log in &#39;/var/log/sdc&#39;.</pre><p>Note the File descriptors: 32768 line item in the installation requirements.</p><p>Running ulimit -n shows 1024; this needs to be ≥ 32768.</p><pre>ulimit -n</pre><pre>1024</pre><p>To increase the limit, edit /etc/security/limits.conf by navigating to /etc/security.</p><pre>cd /etc/security</pre><p>As good habits dictate, make a copy of the limits.conf file.</p><pre>sudo cp limits.conf orig_limits.conf</pre><p>Edit the limits.conf file.</p><pre>sudo vi limits.conf</pre><p>Add the following two lines at the end of the file, setting the limits to a value greater than or equal to 32768.</p><pre>* hard nofile 33000<br>* soft nofile 33000</pre><p>Log out of the AWS machine and log back in for the changes to take effect.</p><p>Check that the changes were successful by running ulimit -n.</p><pre>ulimit -n</pre><pre>33000</pre><p>Start the SDC service.</p><pre>sudo service sdc start</pre><p>This message may show up:</p><pre>Unit sdc.service could not be found.</pre><p>The service will start fine.</p>
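<p>Before moving on, it's worth confirming the collector really is up. A few quick checks, offered as a sketch: the log directory comes from the earlier error message, while the exact log file name and the init script's status subcommand are assumptions on my part.</p><pre># status of the init script (assumed to support 'status')
sudo service sdc status

# tail the log directory mentioned in the earlier error
sudo tail -n 50 /var/log/sdc/sdc.log

# the UI port should answer once the service is healthy
curl -sI http://localhost:18630/ | head -n 1</pre>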
<p>If you’re annoyed enough by the message, stop the service, run the command below, and start the service again.</p><pre>sudo systemctl daemon-reload</pre><p>One last thing before we get to building a pipeline: create a new subdirectory under the streamsets-datacollector directory to store a sample data file.</p><pre>sudo mkdir /opt/streamsets-datacollector/SampleData</pre><p>Create a sample file.</p><pre>sudo vi /opt/streamsets-datacollector/SampleData/TestFile.csv</pre><p>Enter the following records and save the file.</p><pre>Rownum,Descr<br>1,Hello<br>2,World</pre><p>The StreamSets Data Collector user interface (UI) is browser-based. In order to access the UI from your local machine, set up an SSH tunnel to forward port 18630, the port StreamSets runs on, to localhost:18630. Replace the appropriate IP address and .pem information.</p><pre>ssh -N -p 22 ec2-user@&lt;public_ip&gt; -i &lt;loc_of_pem&gt;/&lt;name_of_pem&gt;.pem -L 18630:localhost:18630</pre><p>On the local machine, open a browser, type or paste the following URL, and press enter.</p><pre><a href="http://localhost:18630/">http://localhost:18630/</a></pre><p>The StreamSets login page should now be displayed. The initial username/password are admin/admin.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*tfSSQg5-2hriBhtJ_jH_eA.png" /><figcaption>StreamSets login page</figcaption></figure><p>In the following steps, we’ll create a pipeline that streams the data from the TestFile.csv file created in the steps above to Amazon S3. First, create a new pipeline and give it a name.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/936/1*1z90PdWYAKGl-A6HLicf5A.png" /></figure><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*Qe60qZKFrybafdIAYDcKbQ.png" /></figure><p>Add an origin and a destination. For this example, I have selected the origin <em>Directory — Basic</em> and the destination <em>Amazon S3 — Amazon Web Services 1.10.59</em>.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*ctvU3kNUTSLWfTQTLFYGiQ.png" /></figure><p>Notice that the pipeline will display errors until all required elements are configured.</p><p>Configure the pipeline <em>Error Records</em> to Discard (Library: Basic).</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*A-t1ZT0iDtNpZOUSjtxrqA.png" /></figure><p>To set up the origin, under <em>Files</em>, configure the <em>File Directory</em> and <em>File Name Pattern</em> fields to /opt/streamsets-datacollector/SampleData and *.csv, respectively.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*oreIJO4MRUXeFpyEpa8Zwg.png" /></figure><p>On the <em>Data Format</em> tab, configure the <em>Data Format</em> to Delimited and change the <em>Header Line</em> drop-down to With Header Line.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*-vMa0zxFVELpJwVlmJVGfQ.png" /></figure><p>Configure the Amazon S3 items for your <em>Access Key ID</em>, <em>Secret Access Key</em>, and <em>Bucket</em>.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*cn_DIIxUGjr7EXN7mDrSGA.png" /></figure><p>Set the <em>Data Format</em> for S3; I’ve chosen Delimited, but other options work just fine.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*j8hJWSmjz0zxc1KWT8n1UA.png" /></figure><p>In the top right corner, there are options for previewing and validating the pipeline, as well as for starting it. 
After everything checks out, start the pipeline.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*WwfSfHNt3IoHWdYV5k1PLw.png" /></figure><p>The pipeline is alive and moved the data!</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*u_NPiSHErNJdkkqeh56E_Q.png" /></figure><p>A review of S3 shows that the file has been created and contains the records entered in the steps above. Notice that the pipeline is in ‘Running’ status and will continue to stream data from the directory as changes are made or *.csv files are added.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*CiORgX8dQVZWanWP52j15A.png" /></figure><p>There you have it. This is just a basic pipeline that moves a .csv file from one location to another without any data manipulation, and it is simply the tip of the iceberg. There is an abundance of options for manipulating the data, as well as technologies that StreamSets Data Collector integrates with, that go far beyond this example. Happy streaming!</p><hr><p><a href="https://medium.com/red-pill-analytics/stream-me-up-to-the-cloud-scotty-d7f36316c5a6">Stream Me Up (to the Cloud), Scotty</a> was originally published in <a href="https://medium.com/red-pill-analytics">Red Pill Analytics</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p> Mike Fuller https://medium.com/p/d7f36316c5a6 Fri Apr 14 2017 10:48:01 GMT-0400 (EDT)
class="graf graf--h3">I’ve had some fun working with StreamSets Data Collector lately and wanted to share how to quickly get up and running on an Amazon Web Services (AWS) Elastic Compute Cloud (EC2) instance and build a simple pipeline.</p> <p class="graf graf--p">For anyone unaware, StreamSets Data Collector is, in their own words, a low-latency ingest infrastructure tool that lets you create continuous data ingest pipelines using a drag and drop UI within an integrated development environment (IDE).</p> <p class="graf graf--p">To be able to follow along, it is encouraged that you have enough working knowledge of AWS to be able to create and start an AWS EC2 instance and create and access an AWS Simple Storage Service (S3) bucket. That being said, these instructions also apply, for the most part, to any linux installation.</p> <p class="graf graf--p">The most important prequisite is to have access to an instance that meets StreamSets installation requirements outlined <a class="markup--anchor markup--p-anchor" href="https://streamsets.com/documentation/datacollector/latest/help/#Installation/InstallationAndConfig.html%23concept_vzg_n2p_kq" target="_blank" rel="noopener" data-href="https://streamsets.com/documentation/datacollector/latest/help/#Installation/InstallationAndConfig.html%23concept_vzg_n2p_kq">here</a>. I’m running an AWS Red Hat Enterprise Linux (RHEL) t2.micro instance for this demo; you will no doubt want something with a little more horsepower if you intend to use your instance for true development.</p> <p class="graf graf--p">It is important to note that this is just one of many ways to install and configure StreamSets Data Collector. Make sure to check out the <a class="markup--anchor markup--p-anchor" href="https://streamsets.com" target="_blank" rel="noopener" data-href="https://streamsets.com">StreamSets site</a> and read through the documentation to determine which method will work best for your use case. Now that the basics (and a slew of acronyms) are covered, we can get started.</p> <p class="graf graf--p">Fire up the AWS EC2 instance and log in. I’m running on a Mac and using the built in terminal; I recommend PuTTY or something similar for folks running Windows.</p> <pre class="graf graf--pre">ssh ec2-user@&lt;public_ip&gt; -i &lt;loc_of_pem&gt;/&lt;name_of_pem&gt;.pem</pre> <p class="graf graf--p">Install wget, if you haven’t already.</p> <pre class="graf graf--pre">sudo yum install wget</pre> <p class="graf graf--p">Create a new directory for the StreamSets download and navigate to the new directory</p> <pre class="graf graf--pre">sudo mkdir /home/ec2-user/StreamSets cd /home/ec2-user/StreamSets</pre> <p class="graf graf--p">Download StreamSets Data Collector using wget. The URL below is for version 2.4.1 rpm install but a new version of Data Collector is likely out by the time this gets published (those guys and gals move quickly!). 
Be sure to check for the latest version on the <a class="markup--anchor markup--p-anchor" href="https://streamsets.com/opensource/" target="_blank" rel="noopener" data-href="https://streamsets.com/opensource/">StreamSets Data Collector website</a></p> <p class="graf graf--p">Look for the download here:</p> <figure class="graf graf--figure"><img class="graf-image" src="https://i1.wp.com/cdn-images-1.medium.com/max/1600/1*Ehmoty7lgmM-THGDeq7JiA.png?resize=1170%2C520&#038;ssl=1" data-image-id="1*Ehmoty7lgmM-THGDeq7JiA.png" data-width="2458" data-height="1092" data-recalc-dims="1" /></figure> <p class="graf graf--p">You’ll want to right click <em class="markup--em markup--p-em">Full Download (RPM)</em> and select ‘Copy Link’. Replace the link in the command below with the latest and greatest.</p> <pre class="graf graf--pre">sudo wget https://archives.streamsets.com/datacollector/2.4.1.0/rpm/streamsets-datacollector-2.4.1.0-all-rpms.tgz</pre> <p class="graf graf--p">Extract the StreamSets Data Collector install files.</p> <pre class="graf graf--pre">sudo tar -xzf /home/ec2-user/StreamSets/streamsets-datacollector-2.4.1.0-all-rpms.tgz</pre> <p class="graf graf--p">Install StreamSets using yum/localinstall</p> <pre class="graf graf--pre">sudo yum localinstall /home/ec2-user/StreamSets/streamsets-datacollector-2.4.1.0-all-rpms/streamsets*</pre> <p class="graf graf--p">Attempting to start the service reveals there is one step remaining as the command fails.</p> <pre class="graf graf--pre">sudo service sdc start</pre> <pre class="graf graf--pre">ERROR: sdc has died, see log in '/var/log/sdc'.</pre> <p class="graf graf--p">Note the File descriptors: 32768 line item in the installation requirements</p> <p class="graf graf--p">Running ulimit -n shows 1024, this needs to be ≥32768</p> <pre class="graf graf--pre">ulimit -n</pre> <pre class="graf graf--pre">1024</pre> <p class="graf graf--p">To increase the limit, edit /etc/security/limits.conf by navigating to /etc/security</p> <pre class="graf graf--pre">cd /etc/security</pre> <p class="graf graf--p">As good habits dictate, make a copy of the limits.conf file</p> <pre class="graf graf--pre">sudo cp limits.conf orig_limits.conf</pre> <p class="graf graf--p">Edit the limits.conf file</p> <pre class="graf graf--pre">sudo vi limits.conf</pre> <p class="graf graf--p">Add the following two lines at the end of the file, setting the limits to a value greater than or equal to 32768</p> <pre class="graf graf--pre">* hard nofile 33000 * soft nofile 33000</pre> <p class="graf graf--p">Log out of the AWS machine and log back in for the changes to take effect</p> <p class="graf graf--p">Check that the changes were successful by running ulimit -n</p> <pre class="graf graf--pre">ulimit -n</pre> <pre class="graf graf--pre">33000</pre> <p class="graf graf--p">Start the SDC service</p> <pre class="graf graf--pre">sudo service sdc start</pre> <p class="graf graf--p">This message may show up:</p> <pre class="graf graf--pre">Unit sdc.service could not be found.</pre> <p class="graf graf--p">The service will start fine. If you’re annoyed enough by the message, stop the service, run the command below, and start the service again</p> <pre class="graf graf--pre">sudo systemctl daemon-reload</pre> <p class="graf graf--p">One last thing before we get to building a pipeline. 
Create a new subdirectory under the streamsets-datacollector directory to store a sample data file.</p> <pre class="graf graf--pre">sudo mkdir /opt/streamsets-datacollector/SampleData</pre> <p class="graf graf--p">Create a sample file.</p> <pre class="graf graf--pre">sudo vi /opt/streamsets-datacollector/SampleData/TestFile.csv</pre> <p class="graf graf--p">Enter the following records and save the file.</p> <pre class="graf graf--pre">Rownum,Descr 1,Hello 2,World</pre> <p class="graf graf--p">The StreamSets Data Collector user interface (UI) is browser-based. In order to access the UI from your local machine, set up an SSH tunnel to forward port 18630, the port StreamSets runs on, to localhost:18630. Replace the appropriate IP address and .pem information.</p> <pre class="graf graf--pre">ssh -N -p 22 ec2-user@&lt;public_ip&gt; -i &lt;loc_of_pem&gt;/&lt;name_of_pem&gt;.pem -L 18630:localhost:18630</pre> <p class="graf graf--p">On the local machine, open a browser, type or paste the following URL and press enter.</p> <pre class="graf graf--pre"><a class="markup--anchor markup--pre-anchor" href="http://localhost:18630/" target="_blank" data-href="http://localhost:18630/">http://localhost:18630/</a></pre> <p class="graf graf--p">The StreamSets login page should now be displayed. The initial username/password are admin/admin.</p> <figure class="graf graf--figure"><img class="graf-image" src="https://i0.wp.com/cdn-images-1.medium.com/max/1600/1*tfSSQg5-2hriBhtJ_jH_eA.png?resize=1170%2C508&#038;ssl=1" data-image-id="1*tfSSQg5-2hriBhtJ_jH_eA.png" data-width="2744" data-height="1192" data-recalc-dims="1" /><figcaption class="imageCaption">StreamSets login page</figcaption></figure> <p class="graf graf--p">In the following steps, we’ll create a pipeline that streams the data from the TestFile.csv file created in the steps above to Amazon S3. First, create a new pipeline and give it a name.</p> <div class="section-inner sectionLayout--outsetRow" data-paragraph-count="2"> <figure class="graf graf--figure graf--layoutOutsetRow is-partialWidth"><img class="graf-image" src="https://i1.wp.com/cdn-images-1.medium.com/max/1200/1*1z90PdWYAKGl-A6HLicf5A.png?resize=936%2C514&#038;ssl=1" data-image-id="1*1z90PdWYAKGl-A6HLicf5A.png" data-width="936" data-height="514" data-recalc-dims="1" /></figure> <figure class="graf graf--figure graf--layoutOutsetRowContinue is-partialWidth"><img class="graf-image" src="https://i2.wp.com/cdn-images-1.medium.com/max/1200/1*Qe60qZKFrybafdIAYDcKbQ.png?resize=1170%2C638&#038;ssl=1" data-image-id="1*Qe60qZKFrybafdIAYDcKbQ.png" data-width="1200" data-height="654" data-recalc-dims="1" /></figure> <p class="graf graf--p">Add an origin and a destination. 
For this example, I have selected the origin <em class="markup--em markup--p-em">Directory — Basic </em>and the destination <em class="markup--em markup--p-em">Amazon S3 — Amazon Web Services 1.10.59</em></p> <figure class="graf graf--figure"><img class="graf-image" src="https://i1.wp.com/cdn-images-1.medium.com/max/1600/1*ctvU3kNUTSLWfTQTLFYGiQ.png?resize=1170%2C606&#038;ssl=1" data-image-id="1*ctvU3kNUTSLWfTQTLFYGiQ.png" data-width="2724" data-height="1412" data-recalc-dims="1" /></figure> <p class="graf graf--p">Notice that the pipeline will display errors until all required elements are configured.</p> <p class="graf graf--p">Configure the pipeline <em class="markup--em markup--p-em">Error Records</em> to Discard (Library: Basic)</p> <figure class="graf graf--figure"><img class="graf-image" src="https://i1.wp.com/cdn-images-1.medium.com/max/1600/1*A-t1ZT0iDtNpZOUSjtxrqA.png?resize=1170%2C605&#038;ssl=1" data-image-id="1*A-t1ZT0iDtNpZOUSjtxrqA.png" data-width="2730" data-height="1412" data-recalc-dims="1" /></figure> <p class="graf graf--p">To set up the origin, under <em class="markup--em markup--p-em">Files</em>, configure the <em class="markup--em markup--p-em">File Directory</em> and <em class="markup--em markup--p-em">File Name Pattern</em> fields to /opt/streamsets-datacollector/SampleData and *.csv, respectively.</p> <figure class="graf graf--figure"><img class="graf-image" src="https://i2.wp.com/cdn-images-1.medium.com/max/1600/1*oreIJO4MRUXeFpyEpa8Zwg.png?resize=1170%2C634&#038;ssl=1" data-image-id="1*oreIJO4MRUXeFpyEpa8Zwg.png" data-width="2364" data-height="1280" data-recalc-dims="1" /></figure> <p class="graf graf--p">On the <em class="markup--em markup--p-em">Data Format </em>tab, configure the <em class="markup--em markup--p-em">Data Format</em> to Delimited and change the <em class="markup--em markup--p-em">Header Line</em> drop-down to With Header Line</p> <figure class="graf graf--figure"><img class="graf-image" src="https://i2.wp.com/cdn-images-1.medium.com/max/1600/1*-vMa0zxFVELpJwVlmJVGfQ.png?resize=1170%2C606&#038;ssl=1" data-image-id="1*-vMa0zxFVELpJwVlmJVGfQ.png" data-width="2728" data-height="1414" data-recalc-dims="1" /></figure> <p class="graf graf--p">Configure the Amazon S3 items for your <em class="markup--em markup--p-em">Access Key ID</em>, <em class="markup--em markup--p-em">Secret Access Key</em>, and <em class="markup--em markup--p-em">Bucket</em>.</p> <figure class="graf graf--figure"><img class="graf-image" src="https://i0.wp.com/cdn-images-1.medium.com/max/1600/1*cn_DIIxUGjr7EXN7mDrSGA.png?resize=1170%2C611&#038;ssl=1" data-image-id="1*cn_DIIxUGjr7EXN7mDrSGA.png" data-width="2736" data-height="1428" data-recalc-dims="1" /></figure> <p class="graf graf--p">Set the <em class="markup--em markup--p-em">Data Format</em> for S3; I’ve chosen Delimited but other options work just fine.</p> <figure class="graf graf--figure"><img class="graf-image" src="https://i1.wp.com/cdn-images-1.medium.com/max/1600/1*j8hJWSmjz0zxc1KWT8n1UA.png?resize=1170%2C606&#038;ssl=1" data-image-id="1*j8hJWSmjz0zxc1KWT8n1UA.png" data-width="2724" data-height="1412" data-recalc-dims="1" /></figure> <p class="graf graf--p">In the top right corner, there are options for previewing and validating the pipeline as well as to start the pipeline. 
After everything checks out, start the pipeline.</p> <figure class="graf graf--figure"><img class="graf-image" src="https://i0.wp.com/cdn-images-1.medium.com/max/1600/1*WwfSfHNt3IoHWdYV5k1PLw.png?resize=1170%2C606&#038;ssl=1" data-image-id="1*WwfSfHNt3IoHWdYV5k1PLw.png" data-width="2728" data-height="1412" data-recalc-dims="1" /></figure> <p class="graf graf--p">The pipeline is alive and moved the data!</p> <figure class="graf graf--figure"><img class="graf-image" src="https://i1.wp.com/cdn-images-1.medium.com/max/1600/1*u_NPiSHErNJdkkqeh56E_Q.png?resize=1170%2C533&#038;ssl=1" data-image-id="1*u_NPiSHErNJdkkqeh56E_Q.png" data-width="2736" data-height="1246" data-recalc-dims="1" /></figure> <p class="graf graf--p">A review of S3 shows that the file has been created and contains the records that were created in the steps above. Notice that the pipeline is in ‘Running’ status and will continue to stream data from the directory as changes are made or *.csv files are added.</p> <figure class="graf graf--figure"><img class="graf-image" src="https://i0.wp.com/cdn-images-1.medium.com/max/1600/1*CiORgX8dQVZWanWP52j15A.png?resize=1170%2C427&#038;ssl=1" data-image-id="1*CiORgX8dQVZWanWP52j15A.png" data-width="2498" data-height="912" data-recalc-dims="1" /></figure> <p class="graf graf--p">There you have it. This is just a basic pipeline to move a .csv file from one location to another without any data manipulation and is simply the tip of the iceberg. There is an abundance of options available to manipulate the data as well as technologies that StreamSets Data Collector integrates with that go far beyond this example. Happy streaming!</p> </div> Mike Fuller http://redpillanalytics.com/?p=4822 Fri Apr 14 2017 10:47:56 GMT-0400 (EDT) ODTUG Kscope16 Award-Winning Session Recordings Available to All http://www.odtug.com/p/bl/et/blogaid=706&source=1 ODTUG full members get exclusive access to browse all session recordings from past Kscope conferences. To highlight our impressive speaker lineup for Kscope17, we want to give everyone a sneak peek at the quality sessions that are to come at Kscope17. ODTUG http://www.odtug.com/p/bl/et/blogaid=706&source=1 Thu Apr 13 2017 09:39:19 GMT-0400 (EDT) How to Connect DV to Essbase Cloud (Ep 050) https://www.youtube.com/watch?v=5-a3O41t00Y Red Pill Analytics yt:video:5-a3O41t00Y Wed Apr 12 2017 19:29:31 GMT-0400 (EDT) Kscope17 Database Track Session Highlights – Galo Balda http://www.odtug.com/p/bl/et/blogaid=705&source=1 Galo Balda, Database Track Lead for ODTUG Kscope17, shares his top 5 Database Track Sessions with reasons why they are his "don't miss sessions" at Kscope17. 
ODTUG http://www.odtug.com/p/bl/et/blogaid=705&source=1 Wed Apr 12 2017 10:01:50 GMT-0400 (EDT) Oracle Database 12.2 New Feature – Pluggable Database Performance Profiles http://gavinsoorma.com/2017/04/oracle-database-12-2-new-feature-pluggable-database-performance-profiles/ <p>In the earlier 12.1.0.2 Oracle database version, we could limit the amount of CPU utilization as well as Parallel Server allocation at the PDB level via Resource Plans.</p> <p>Now in 12c Release 2, we can not only regulate CPU and Parallelism at the Pluggable database level, but in addition <strong><em>we can also restrict the amount of memory that each PDB hosted by a Container Database (CDB) uses</em></strong>.</p> <p>Further, we can also limit the amount of I/O operations that each PDB performs, so we now have a far improved Resource Manager at work, ensuring that no PDB hogs all the CPU or I/O (because of, say, a runaway query) and thereby impacts the other PDBs hosted in the same CDB.</p> <p>We can now limit the amount of SGA or PGA that an individual PDB can utilize, as well as guarantee that certain PDBs always receive a minimum level of both available SGA and PGA memory.</p> <p>For example, we can now issue SQL statements like these while connected to the individual PDB.</p> <p>&nbsp;</p> <pre>SQL&gt; ALTER SYSTEM SET SGA_TARGET = 500M SCOPE = BOTH; SQL&gt; ALTER SYSTEM SET SGA_MIN_SIZE = 300M SCOPE = BOTH; SQL&gt; ALTER SYSTEM SET PGA_AGGREGATE_LIMIT = 200M SCOPE = BOTH; SQL&gt; ALTER SYSTEM SET MAX_IOPS = 10000 SCOPE = BOTH; </pre> <p>&nbsp;<br /> Another 12c Release 2 New Feature related to Multitenancy is <strong>Performance Profiles</strong>.</p> <p>With <strong>Performance Profiles</strong> we can manage resources for large numbers of PDBs by specifying Resource Manager directives for profiles instead of for each individual PDB.</p> <p>These profiles are then allocated to the PDBs via the initialization parameter <strong>DB_PERFORMANCE_PROFILE</strong>.</p> <p>Let us look at a worked example of Performance Profiles.</p> <p>In this example we have three PDBs (PDB1, PDB2 and PDB3) hosted in the container database CDB1. The PDB1 pluggable database hosts some mission critical applications, and we need to ensure that PDB1 gets a higher share of memory, I/O, and CPU resources as compared to PDB2 and PDB3.</p> <p>So we will be enforcing this resource allocation via two Performance Profiles &#8211; we call these TIER1 and TIER2.</p> <p>Here are the steps:</p> <p>&nbsp;</p> <p><strong>Create a Pending Area</strong></p> <p>&nbsp;</p> <pre>SQL&gt; exec DBMS_RESOURCE_MANAGER.CREATE_PENDING_AREA (); PL/SQL procedure successfully completed.</pre> <p>&nbsp;</p> <p>&nbsp;</p> <p><strong>Create a CDB Resource Plan </strong><br /> &nbsp;</p> <pre>SQL&gt; BEGIN  DBMS_RESOURCE_MANAGER.CREATE_CDB_PLAN(    plan   =&gt; 'profile_plan',    comment =&gt; 'Performance Profile Plan allocating highest share of resources to PDB1'); END; /  PL/SQL procedure successfully completed. </pre> <p>&nbsp;<br /> <strong>Create the CDB resource plan directives for the PDBs</strong></p> <p>The Tier 1 performance profile ensures at least 60% (3 shares) of available CPU and parallel server resources, with no upper limit on CPU utilization or parallel server execution.
In addition, it ensures a minimum allocation of at least 50% of available memory.</p> <p>&nbsp;</p> <pre>SQL&gt; BEGIN DBMS_RESOURCE_MANAGER.CREATE_CDB_PROFILE_DIRECTIVE( plan                 =&gt; 'profile_plan', profile              =&gt; 'Tier1', shares               =&gt; 3, memory_min           =&gt; 50); END; / PL/SQL procedure successfully completed.</pre> <p>&nbsp;</p> <p>The Tier 2 performance profile is more restrictive: it has fewer shares than Tier 1, limits CPU/parallel server usage to 40%, and caps memory usage at the PDB level at a maximum of 25% of available memory.</p> <p>&nbsp;</p> <pre>SQL&gt; BEGIN  DBMS_RESOURCE_MANAGER.CREATE_CDB_PROFILE_DIRECTIVE(    plan                 =&gt; 'profile_plan',    profile              =&gt; 'Tier2',    shares               =&gt; 2,    utilization_limit    =&gt; 40,    memory_limit          =&gt; 25); END; /    PL/SQL procedure successfully completed.</pre> <p>&nbsp;</p> <p><strong>Validate and Submit the Pending Area </strong></p> <p>&nbsp;</p> <pre>SQL&gt; exec DBMS_RESOURCE_MANAGER.VALIDATE_PENDING_AREA(); PL/SQL procedure successfully completed. SQL&gt; exec DBMS_RESOURCE_MANAGER.SUBMIT_PENDING_AREA(); PL/SQL procedure successfully completed. </pre> <p>&nbsp;<br /> <strong>Allocate Performance Profiles to PDBs</strong></p> <p>&nbsp;</p> <p>The TIER1 Performance Profile is allocated to PDB1, and the TIER2 Performance Profile is allocated to PDB2 and PDB3.</p> <p>&nbsp;</p> <pre>SQL&gt; alter session set container=pdb1; Session altered. SQL&gt; alter system set DB_PERFORMANCE_PROFILE='TIER1' scope=spfile; System altered. SQL&gt; alter session set container=pdb2; Session altered. SQL&gt; alter system set DB_PERFORMANCE_PROFILE='TIER2' scope=spfile; System altered. SQL&gt; alter session set container=pdb3; Session altered. SQL&gt; alter system set DB_PERFORMANCE_PROFILE='TIER2' scope=spfile; System altered.</pre> <p>&nbsp;</p> <p><strong>Set the Resource Plan at the CDB level </strong></p> <p>&nbsp;</p> <pre>SQL&gt; conn / as sysdba Connected. SQL&gt; alter system set resource_manager_plan='PROFILE_PLAN' scope=both; System altered.</pre> <p>&nbsp;</p> <p><strong>Restart the PDBs to apply the Performance Profiles </strong></p> <p>Because DB_PERFORMANCE_PROFILE was set with scope=spfile, the PDBs must be restarted for the profiles to take effect.</p> <p>&nbsp;</p> <pre>SQL&gt; alter pluggable database all close immediate; Pluggable database altered. SQL&gt; alter pluggable database all open; Pluggable database altered. </pre> <p>&nbsp;<br /> <strong>Monitor memory utilization at PDB level </strong></p> <p>&nbsp;</p> <p>The V$RSRCPDBMETRIC view enables us to track the amount of memory used by PDBs.</p> <p>We can see that PDB1, belonging to the profile TIER1, has almost double the memory allocated to the other two PDBs in profile TIER2. A scripted version of this check is sketched after the screenshot below.</p> <p><img src="https://media.licdn.com/mpr/mpr/AAEAAQAAAAAAAAsiAAAAJDEzNmRmYzk4LThkN2YtNDg5Yi05MzdmLWI2YjRhYzhkMzYzNw.png" /></p>
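<p>As an illustration, the same check could be scripted from a client. Below is a minimal sketch using the cx_Oracle Python driver; the connection details are placeholders, and the SGA_BYTES and PGA_BYTES columns are an assumption based on the 12.2 documentation of V$RSRCPDBMETRIC, so verify them against your own release.</p> <pre>
import cx_Oracle

# Placeholder connection details; connect to the root container with a user
# privileged to read the V$ performance views.
conn = cx_Oracle.connect("system", "password", "dbhost:1521/cdb1")
cur = conn.cursor()

# SGA_BYTES and PGA_BYTES are assumed per-PDB memory columns of V$RSRCPDBMETRIC in 12.2
sql = """
    SELECT con_id,
           ROUND(sga_bytes / 1024 / 1024) AS sga_mb,
           ROUND(pga_bytes / 1024 / 1024) AS pga_mb
      FROM v$rsrcpdbmetric
     ORDER BY con_id
"""
for con_id, sga_mb, pga_mb in cur.execute(sql):
    print("con_id={0}: SGA={1} MB, PGA={2} MB".format(con_id, sga_mb, pga_mb))

cur.close()
conn.close()
</pre>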
<p>Oracle 12.2 has a lot of new exciting features. <strong>Learn all about these at a forthcoming online training session</strong>. Contact <strong>prosolutions@gavinsoorma.com</strong> to register interest!</p> Gavin Soorma http://gavinsoorma.com/?p=7503 Tue Apr 11 2017 01:17:23 GMT-0400 (EDT) Kscope17 Financial Close Track Session Highlights – Chris Barbieri http://www.odtug.com/p/bl/et/blogaid=704&source=1 Chris Barbieri, Financial Close Track Lead for ODTUG Kscope17, shares his recommended Financial Close Track Sessions with reasons why they are his “don’t miss sessions” at Kscope17. ODTUG http://www.odtug.com/p/bl/et/blogaid=704&source=1 Mon Apr 10 2017 09:34:55 GMT-0400 (EDT) Build Custom Datasets From Database Tables (Ep 049) https://www.youtube.com/watch?v=lyh3Z_sNhVs Red Pill Analytics yt:video:lyh3Z_sNhVs Fri Apr 07 2017 10:09:40 GMT-0400 (EDT) Oracle Looking to Buy Accenture? Stranger Things Have Happened.
http://bi.abhinavagarwal.net/2017/04/oracle-looking-to-buy-accenture.html <div dir="ltr" style="text-align: left;" trbidi="on"><div class="prose" itemprop="articleBody"><table cellpadding="0" cellspacing="0" class="tr-caption-container" style="float: left; margin-right: 1em; text-align: left;"><tbody><tr><td style="text-align: center;"><a href="https://1.bp.blogspot.com/-mKC0xeNFRv8/WOElu7w-fJI/AAAAAAAAN9g/Me_Uoce5RXMtve9ufEm0oyyk65JQ5EnUgCLcB/s1600/pixels_chess.jpg" imageanchor="1" style="clear: left; margin-bottom: 1em; margin-left: auto; margin-right: auto;"><img border="0" height="271" src="https://1.bp.blogspot.com/-mKC0xeNFRv8/WOElu7w-fJI/AAAAAAAAN9g/Me_Uoce5RXMtve9ufEm0oyyk65JQ5EnUgCLcB/s400/pixels_chess.jpg" width="400" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Image credit: pixels.com</td></tr></tbody></table><a href="https://www.theregister.co.uk/" rel="nofollow noopener" target="_blank">The Register</a> <a href="https://www.theregister.co.uk/2017/03/28/oracle_doing_due_diligence_on_accenture_yep_you_heard_that_right/" rel="nofollow noopener" target="_blank">reported</a> that Oracle may be exploring the "feasibility of buying multi-billion dollar consultancy Accenture."<br /><br />To summarize the numbers involved here, Oracle had <strong>FY16 revenues of $37 billion</strong>, net income of <strong>$8.9 billion</strong>, and a <strong>market cap of $180 billion</strong>.<br /><br />On the other hand, Accenture had <strong>FY16 revenues of US$34.8 billion</strong>, net income of <strong>$4.1 billion</strong>, and a <strong>market cap of $77 billion</strong>.<br /><br />Some questions that come to mind:<br /><ol><li><strong>Why?</strong> <a href="https://www.wsj.com/articles/oracle-set-to-complete-9-3-billion-deal-to-buy-netsuite-1478324011" rel="nofollow noopener" target="_blank">Oracle buying NetSuite</a> in 2016 made sense. <a href="https://www.thestreet.com/story/13606330/1/is-salesforce-com-oracle-s-next-target-buy-it-now.html" rel="nofollow noopener" target="_blank">Oracle buying Salesforce</a> would make even more sense. Oracle buying a management consulting and professional services company, and that too one with more than a quarter million employees, on the face of it, makes little sense. Would it help Oracle leapfrog Amazon's AWS cloud business? Would it help Oracle go after a new market segment? The answers are not clear at all.<br /></li><li><strong>Who would be in charge of this combined entity?</strong> Both have similar revenues, though Accenture has a market cap that is less than half Oracle's and a workforce that is roughly three times Oracle's. The cultural meshing itself would prove to be a challenge. Mark Hurd, one of two CEOs of Oracle (the other CEO is Safra Catz, a former investment banker), has experience running a large, heterogeneous organization. Prior to his stint at Oracle, he was credited with making the HP and Compaq merger work. At Oracle, however, he has not run software product development, which has been run by Thomas Kurian, who reports to Larry Ellison, not Hurd. A merger between Oracle and Accenture would place an even greater emphasis on synergies between Oracle's software division and Accenture's consulting business.<br /></li><li>Oracle would need to spend close to $100 billion to buy Accenture.<strong> How would it finance such a deal</strong>, even assuming it spends all of its $68 billion in cash to do so? Keep in mind that its largest acquisition was in the range of $10 billion.
The financial engineering would be staggering. It helps that Oracle has a former investment banker as one of its two CEOs.<br /></li><li><strong>Will Oracle make Accenture focus on the Oracle red stack</strong> of software products and applications - both on-premise and in the cloud? If yes, it would need a much smaller workforce than Accenture has. That in turn would diminish the value of Accenture to Oracle, and make the likely sticker price of $100 billion look even costlier.<br /></li><li>Is Oracle looking to become the <strong>IBM of the twenty-first century</strong>? It's certainly been a public ambition of Larry Ellison. In 2009, he <a href="http://www.zdnet.com/article/ellison-wants-to-model-new-oracle-after-t-j-watson-jr-s-ibm-6001024801/" rel="nofollow noopener" target="_blank">said </a>he wanted to pattern Oracle after Thomas Watson Jr's IBM, "combining both hardware and software systems." If Oracle keeps Accenture as a business unit free to pursue non-Oracle deals, does it mean Oracle is keen on morphing into a modern-day avatar of IBM and IBM Global Services, offering hardware, software, and professional services - all under one red roof?<br /></li><li><strong>Is Oracle serious </strong>about such a merger? An acquisition of this size seems more conjecture than real possibility, at least as of now. One is reminded of the time in 2003 when <a href="https://news.microsoft.com/2004/06/07/statement-from-microsoft-on-past-exploratory-discussions-with-sap/" rel="nofollow noopener" target="_blank">Microsoft explored the possibility of buying SAP</a>. Those discussions went nowhere, and the idea was dropped. Combining two behemoths is no easy task, even for a company like Oracle, which has stitched together almost <a href="https://en.wikipedia.org/wiki/List_of_acquisitions_by_Oracle" rel="nofollow noopener" target="_blank">50 acquisitions</a> in just the last five years.<br /></li><li>If such an acquisition did go through, there would likely be few anti-trust concerns.
That's a big "if".<br /></li><li>Stranger things have happened in the software industry, like <a href="https://www.ft.com/content/e657857a-7113-11e6-a0c9-1365ce54b926" rel="nofollow noopener" target="_blank">HP buying Autonomy</a>.<br /></li><li>I hope the Register piece was not an example of an early April Fool's joke.</li></ol><em>(HT Sangram Aglave whose LinkedIn post alerted me to this article)</em></div><br /><i>I first published this in LinkedIn Pulse on April 1, 2017.</i><br /><br />© 2017, Abhinav Agarwal.</div> Abhinav Agarwal tag:blogger.com,1999:blog-13714584.post-178251562685936179 Fri Apr 07 2017 08:30:00 GMT-0400 (EDT) A-Team Article: Lift and Shift to Oracle Data Integrator Cloud Service (ODICS) https://blogs.oracle.com/dataintegration/entry/a_team_article_lift_and <p style="line-height: normal;" class="MsoNormal"><span style="font-size: 10pt;">Moving to the Cloud or interested in doing so?<span> </span>Interested in data movement and transformation?<span> </span>Read this A-Team article by Christophe Dupupet:<span> </span><a href="http://www.ateam-oracle.com/lift-and-shift-to-oracle-data-integrator-cloud-service-odics-moving-your-repository-to-the-cloud/"><span style="color: windowtext;">Lift and Shift to Oracle Data Integrator Cloud Service (ODICS) : Moving your Repository to the Cloud</span></a></span></p> <p style="line-height: normal;" class="MsoNormal"><span style="font-size: 10pt;">Oracle Data Integrator (ODI) is available as a Cloud Service: <a href="https://blogs.oracle.com/dataintegration/entry/introducing_oracle_data_integrator_cloud"><span style="color: windowtext;">Oracle Data Integrator Cloud Service (ODICS)</span></a>.
For customers who are interested in a subscription model for their ODI installation and want to integrate and transform data in the Cloud, this is the solution.</span></p> <p style="line-height: normal;" class="MsoNormal"><span style="font-size: 10pt;">For customers who already have ODI developments on-premise, and want to migrate their existing, on-premise repository to the Cloud, read on for a quick step-by-step approach to performing this repository migration.</span></p>
Locked="false" Priority="65" SemiHidden="false" UnhideWhenUsed="false" Name="Medium List 1"/> <w:LsdException Locked="false" Priority="66" SemiHidden="false" UnhideWhenUsed="false" Name="Medium List 2"/> <w:LsdException Locked="false" Priority="67" SemiHidden="false" UnhideWhenUsed="false" Name="Medium Grid 1"/> <w:LsdException Locked="false" Priority="68" SemiHidden="false" UnhideWhenUsed="false" Name="Medium Grid 2"/> <w:LsdException Locked="false" Priority="69" SemiHidden="false" UnhideWhenUsed="false" Name="Medium Grid 3"/> <w:LsdException Locked="false" Priority="70" SemiHidden="false" UnhideWhenUsed="false" Name="Dark List"/> <w:LsdException Locked="false" Priority="71" SemiHidden="false" UnhideWhenUsed="false" Name="Colorful Shading"/> <w:LsdException Locked="false" Priority="72" SemiHidden="false" UnhideWhenUsed="false" Name="Colorful List"/> <w:LsdException Locked="false" Priority="73" SemiHidden="false" UnhideWhenUsed="false" Name="Colorful Grid"/> <w:LsdException Locked="false" Priority="60" SemiHidden="false" UnhideWhenUsed="false" Name="Light Shading Accent 1"/> <w:LsdException Locked="false" Priority="61" SemiHidden="false" UnhideWhenUsed="false" Name="Light List Accent 1"/> <w:LsdException Locked="false" Priority="62" SemiHidden="false" UnhideWhenUsed="false" Name="Light Grid Accent 1"/> <w:LsdException Locked="false" Priority="63" SemiHidden="false" UnhideWhenUsed="false" Name="Medium Shading 1 Accent 1"/> <w:LsdException Locked="false" Priority="64" SemiHidden="false" UnhideWhenUsed="false" Name="Medium Shading 2 Accent 1"/> <w:LsdException Locked="false" Priority="65" SemiHidden="false" UnhideWhenUsed="false" Name="Medium List 1 Accent 1"/> <w:LsdException Locked="false" UnhideWhenUsed="false" Name="Revision"/> <w:LsdException Locked="false" Priority="34" SemiHidden="false" UnhideWhenUsed="false" QFormat="true" Name="List Paragraph"/> <w:LsdException Locked="false" Priority="29" SemiHidden="false" UnhideWhenUsed="false" QFormat="true" Name="Quote"/> <w:LsdException Locked="false" Priority="30" SemiHidden="false" UnhideWhenUsed="false" QFormat="true" Name="Intense Quote"/> <w:LsdException Locked="false" Priority="66" SemiHidden="false" UnhideWhenUsed="false" Name="Medium List 2 Accent 1"/> <w:LsdException Locked="false" Priority="67" SemiHidden="false" UnhideWhenUsed="false" Name="Medium Grid 1 Accent 1"/> <w:LsdException Locked="false" Priority="68" SemiHidden="false" UnhideWhenUsed="false" Name="Medium Grid 2 Accent 1"/> <w:LsdException Locked="false" Priority="69" SemiHidden="false" UnhideWhenUsed="false" Name="Medium Grid 3 Accent 1"/> <w:LsdException Locked="false" Priority="70" SemiHidden="false" UnhideWhenUsed="false" Name="Dark List Accent 1"/> <w:LsdException Locked="false" Priority="71" SemiHidden="false" UnhideWhenUsed="false" Name="Colorful Shading Accent 1"/> <w:LsdException Locked="false" Priority="72" SemiHidden="false" UnhideWhenUsed="false" Name="Colorful List Accent 1"/> <w:LsdException Locked="false" Priority="73" SemiHidden="false" UnhideWhenUsed="false" Name="Colorful Grid Accent 1"/> <w:LsdException Locked="false" Priority="60" SemiHidden="false" UnhideWhenUsed="false" Name="Light Shading Accent 2"/> <w:LsdException Locked="false" Priority="61" SemiHidden="false" UnhideWhenUsed="false" Name="Light List Accent 2"/> <w:LsdException Locked="false" Priority="62" SemiHidden="false" UnhideWhenUsed="false" Name="Light Grid Accent 2"/> <w:LsdException Locked="false" Priority="63" SemiHidden="false" UnhideWhenUsed="false" Name="Medium Shading 1 Accent 2"/> 
<w:LsdException Locked="false" Priority="64" SemiHidden="false" UnhideWhenUsed="false" Name="Medium Shading 2 Accent 2"/> <w:LsdException Locked="false" Priority="65" SemiHidden="false" UnhideWhenUsed="false" Name="Medium List 1 Accent 2"/> <w:LsdException Locked="false" Priority="66" SemiHidden="false" UnhideWhenUsed="false" Name="Medium List 2 Accent 2"/> <w:LsdException Locked="false" Priority="67" SemiHidden="false" UnhideWhenUsed="false" Name="Medium Grid 1 Accent 2"/> <w:LsdException Locked="false" Priority="68" SemiHidden="false" UnhideWhenUsed="false" Name="Medium Grid 2 Accent 2"/> <w:LsdException Locked="false" Priority="69" SemiHidden="false" UnhideWhenUsed="false" Name="Medium Grid 3 Accent 2"/> <w:LsdException Locked="false" Priority="70" SemiHidden="false" UnhideWhenUsed="false" Name="Dark List Accent 2"/> <w:LsdException Locked="false" Priority="71" SemiHidden="false" UnhideWhenUsed="false" Name="Colorful Shading Accent 2"/> <w:LsdException Locked="false" Priority="72" SemiHidden="false" UnhideWhenUsed="false" Name="Colorful List Accent 2"/> <w:LsdException Locked="false" Priority="73" SemiHidden="false" UnhideWhenUsed="false" Name="Colorful Grid Accent 2"/> <w:LsdException Locked="false" Priority="60" SemiHidden="false" UnhideWhenUsed="false" Name="Light Shading Accent 3"/> <w:LsdException Locked="false" Priority="61" SemiHidden="false" UnhideWhenUsed="false" Name="Light List Accent 3"/> <w:LsdException Locked="false" Priority="62" SemiHidden="false" UnhideWhenUsed="false" Name="Light Grid Accent 3"/> <w:LsdException Locked="false" Priority="63" SemiHidden="false" UnhideWhenUsed="false" Name="Medium Shading 1 Accent 3"/> <w:LsdException Locked="false" Priority="64" SemiHidden="false" UnhideWhenUsed="false" Name="Medium Shading 2 Accent 3"/> <w:LsdException Locked="false" Priority="65" SemiHidden="false" UnhideWhenUsed="false" Name="Medium List 1 Accent 3"/> <w:LsdException Locked="false" Priority="66" SemiHidden="false" UnhideWhenUsed="false" Name="Medium List 2 Accent 3"/> <w:LsdException Locked="false" Priority="67" SemiHidden="false" UnhideWhenUsed="false" Name="Medium Grid 1 Accent 3"/> <w:LsdException Locked="false" Priority="68" SemiHidden="false" UnhideWhenUsed="false" Name="Medium Grid 2 Accent 3"/> <w:LsdException Locked="false" Priority="69" SemiHidden="false" UnhideWhenUsed="false" Name="Medium Grid 3 Accent 3"/> <w:LsdException Locked="false" Priority="70" SemiHidden="false" UnhideWhenUsed="false" Name="Dark List Accent 3"/> <w:LsdException Locked="false" Priority="71" SemiHidden="false" UnhideWhenUsed="false" Name="Colorful Shading Accent 3"/> <w:LsdException Locked="false" Priority="72" SemiHidden="false" UnhideWhenUsed="false" Name="Colorful List Accent 3"/> <w:LsdException Locked="false" Priority="73" SemiHidden="false" UnhideWhenUsed="false" Name="Colorful Grid Accent 3"/> <w:LsdException Locked="false" Priority="60" SemiHidden="false" UnhideWhenUsed="false" Name="Light Shading Accent 4"/> <w:LsdException Locked="false" Priority="61" SemiHidden="false" UnhideWhenUsed="false" Name="Light List Accent 4"/> <w:LsdException Locked="false" Priority="62" SemiHidden="false" UnhideWhenUsed="false" Name="Light Grid Accent 4"/> <w:LsdException Locked="false" Priority="63" SemiHidden="false" UnhideWhenUsed="false" Name="Medium Shading 1 Accent 4"/> <w:LsdException Locked="false" Priority="64" SemiHidden="false" UnhideWhenUsed="false" Name="Medium Shading 2 Accent 4"/> <w:LsdException Locked="false" Priority="65" SemiHidden="false" UnhideWhenUsed="false" Name="Medium 
List 1 Accent 4"/> <w:LsdException Locked="false" Priority="66" SemiHidden="false" UnhideWhenUsed="false" Name="Medium List 2 Accent 4"/> <w:LsdException Locked="false" Priority="67" SemiHidden="false" UnhideWhenUsed="false" Name="Medium Grid 1 Accent 4"/> <w:LsdException Locked="false" Priority="68" SemiHidden="false" UnhideWhenUsed="false" Name="Medium Grid 2 Accent 4"/> <w:LsdException Locked="false" Priority="69" SemiHidden="false" UnhideWhenUsed="false" Name="Medium Grid 3 Accent 4"/> <w:LsdException Locked="false" Priority="70" SemiHidden="false" UnhideWhenUsed="false" Name="Dark List Accent 4"/> <w:LsdException Locked="false" Priority="71" SemiHidden="false" UnhideWhenUsed="false" Name="Colorful Shading Accent 4"/> <w:LsdException Locked="false" Priority="72" SemiHidden="false" UnhideWhenUsed="false" Name="Colorful List Accent 4"/> <w:LsdException Locked="false" Priority="73" SemiHidden="false" UnhideWhenUsed="false" Name="Colorful Grid Accent 4"/> <w:LsdException Locked="false" Priority="60" SemiHidden="false" UnhideWhenUsed="false" Name="Light Shading Accent 5"/> <w:LsdException Locked="false" Priority="61" SemiHidden="false" UnhideWhenUsed="false" Name="Light List Accent 5"/> <w:LsdException Locked="false" Priority="62" SemiHidden="false" UnhideWhenUsed="false" Name="Light Grid Accent 5"/> <w:LsdException Locked="false" Priority="63" SemiHidden="false" UnhideWhenUsed="false" Name="Medium Shading 1 Accent 5"/> <w:LsdException Locked="false" Priority="64" SemiHidden="false" UnhideWhenUsed="false" Name="Medium Shading 2 Accent 5"/> <w:LsdException Locked="false" Priority="65" SemiHidden="false" UnhideWhenUsed="false" Name="Medium List 1 Accent 5"/> <w:LsdException Locked="false" Priority="66" SemiHidden="false" UnhideWhenUsed="false" Name="Medium List 2 Accent 5"/> <w:LsdException Locked="false" Priority="67" SemiHidden="false" UnhideWhenUsed="false" Name="Medium Grid 1 Accent 5"/> <w:LsdException Locked="false" Priority="68" SemiHidden="false" UnhideWhenUsed="false" Name="Medium Grid 2 Accent 5"/> <w:LsdException Locked="false" Priority="69" SemiHidden="false" UnhideWhenUsed="false" Name="Medium Grid 3 Accent 5"/> <w:LsdException Locked="false" Priority="70" SemiHidden="false" UnhideWhenUsed="false" Name="Dark List Accent 5"/> <w:LsdException Locked="false" Priority="71" SemiHidden="false" UnhideWhenUsed="false" Name="Colorful Shading Accent 5"/> <w:LsdException Locked="false" Priority="72" SemiHidden="false" UnhideWhenUsed="false" Name="Colorful List Accent 5"/> <w:LsdException Locked="false" Priority="73" SemiHidden="false" UnhideWhenUsed="false" Name="Colorful Grid Accent 5"/> <w:LsdException Locked="false" Priority="60" SemiHidden="false" UnhideWhenUsed="false" Name="Light Shading Accent 6"/> <w:LsdException Locked="false" Priority="61" SemiHidden="false" UnhideWhenUsed="false" Name="Light List Accent 6"/> <w:LsdException Locked="false" Priority="62" SemiHidden="false" UnhideWhenUsed="false" Name="Light Grid Accent 6"/> <w:LsdException Locked="false" Priority="63" SemiHidden="false" UnhideWhenUsed="false" Name="Medium Shading 1 Accent 6"/> <w:LsdException Locked="false" Priority="64" SemiHidden="false" UnhideWhenUsed="false" Name="Medium Shading 2 Accent 6"/> <w:LsdException Locked="false" Priority="65" SemiHidden="false" UnhideWhenUsed="false" Name="Medium List 1 Accent 6"/> <w:LsdException Locked="false" Priority="66" SemiHidden="false" UnhideWhenUsed="false" Name="Medium List 2 Accent 6"/> <w:LsdException Locked="false" Priority="67" SemiHidden="false" UnhideWhenUsed="false" 
Name="Medium Grid 1 Accent 6"/> <w:LsdException Locked="false" Priority="68" SemiHidden="false" UnhideWhenUsed="false" Name="Medium Grid 2 Accent 6"/> <w:LsdException Locked="false" Priority="69" SemiHidden="false" UnhideWhenUsed="false" Name="Medium Grid 3 Accent 6"/> <w:LsdException Locked="false" Priority="70" SemiHidden="false" UnhideWhenUsed="false" Name="Dark List Accent 6"/> <w:LsdException Locked="false" Priority="71" SemiHidden="false" UnhideWhenUsed="false" Name="Colorful Shading Accent 6"/> <w:LsdException Locked="false" Priority="72" SemiHidden="false" UnhideWhenUsed="false" Name="Colorful List Accent 6"/> <w:LsdException Locked="false" Priority="73" SemiHidden="false" UnhideWhenUsed="false" Name="Colorful Grid Accent 6"/> <w:LsdException Locked="false" Priority="19" SemiHidden="false" UnhideWhenUsed="false" QFormat="true" Name="Subtle Emphasis"/> <w:LsdException Locked="false" Priority="21" SemiHidden="false" UnhideWhenUsed="false" QFormat="true" Name="Intense Emphasis"/> <w:LsdException Locked="false" Priority="31" SemiHidden="false" UnhideWhenUsed="false" QFormat="true" Name="Subtle Reference"/> <w:LsdException Locked="false" Priority="32" SemiHidden="false" UnhideWhenUsed="false" QFormat="true" Name="Intense Reference"/> <w:LsdException Locked="false" Priority="33" SemiHidden="false" UnhideWhenUsed="false" QFormat="true" Name="Book Title"/> <w:LsdException Locked="false" Priority="37" Name="Bibliography"/> <w:LsdException Locked="false" Priority="39" QFormat="true" Name="TOC Heading"/> </w:LatentStyles> </xml><![endif]--><!--[if gte mso 10]> <style> /* Style Definitions */ table.MsoNormalTable {mso-style-name:"Table Normal"; mso-tstyle-rowband-size:0; mso-tstyle-colband-size:0; mso-style-noshow:yes; mso-style-priority:99; mso-style-qformat:yes; mso-style-parent:""; mso-padding-alt:0in 5.4pt 0in 5.4pt; mso-para-margin-top:0in; mso-para-margin-right:0in; mso-para-margin-bottom:10.0pt; mso-para-margin-left:0in; line-height:115%; mso-pagination:widow-orphan; font-size:11.0pt; font-family:"Calibri","sans-serif"; mso-ascii-font-family:Calibri; mso-ascii-theme-font:minor-latin; mso-fareast-font-family:"Times New Roman"; mso-fareast-theme-font:minor-fareast; mso-hansi-font-family:Calibri; mso-hansi-theme-font:minor-latin;} </style> <![endif]--> Sandrine Riley-Oracle https://blogs.oracle.com/dataintegration/entry/a_team_article_lift_and Thu Apr 06 2017 20:39:14 GMT-0400 (EDT) A-Team Article: Lift and Shift to Oracle Data Integrator Cloud Service (ODICS) https://blogs.oracle.com/dataintegration/a-team-article%3A-lift-and-shift-to-oracle-data-integrator-cloud-service-odics <p><span>Moving to the Cloud or interested indoing so?<span> </span>Interested in data movementand transformation?<span> </span>Read this A-Teamarticle by Christophe Dupupet:<span> </span><a href="http://www.ateam-oracle.com/lift-and-shift-to-oracle-data-integrator-cloud-service-odics-moving-your-repository-to-the-cloud/"><span>Lift and Shift to Oracle Data Integrator Cloud Service(ODICS) : Moving your Repository to the Cloud</span></a></span></p> <p><span>Oracle Data Integrator (ODI) is available as a Cloud Service: <a href="https://blogs.oracle.com/dataintegration/entry/introducing_oracle_data_integrator_cloud"><span>Oracle Data Integrator Cloud Service (ODICS)</span></a>.For customers who are interested in a subscription model for their ODIinstallation and want to integrate and transform data in the Cloud, this is thesolution.</span></p> <p><span>For customers who already have ODIdevelopments on-premise, and want 
OBIEE Component Status Notifications http://www.rittmanmead.com/blog/2017/04/obiee-component-status-notifications/ <p>At Rittman Mead, we often hear requests for features or solutions generally not provided by Oracle. These requests range from commentary functionality to custom JavaScript visualizations. There are many ways to implement these functionalities, but organizations often lack the in-house resources to engineer an acceptable solution. </p> <p>Rittman Mead has the capability to engineer any solution desired, and in many cases, has already developed a solution. Many of the accelerators we currently offer, such as Chit Chat or User Engagement, grew out of numerous requests for these features.</p> <p>One of the more common requests we hear at Rittman Mead is for BI Administrators to receive notifications for the status of their OBIEE components. They want to be notified of the status of the system components throughout the day in a convenient manner, so any failures are recognized quickly. </p> <p>This particular feature can easily be implemented with Rittman Mead's Performance Analytics service. However, Rittman Mead would like to publicly provide this capability independent of our accelerator. We have developed a small Python script to provide this functionality, and we would like to give this script to the community.</p> <p>The script is available free of charge, under the MIT license. It has been tested on both OBIEE 11G and 12C environments, as well as on Windows and Linux operating systems. The rest of this blog will detail, at a high level, how the script works, and how to configure it correctly.</p> <p>The script is available through our public Github repository <a href="https://github.com/RittmanMead/scripts/blob/master/obi/alerts/email_component_status.py" target="_blank"> here</a>.</p> <h1 id="scriptoutput">Script Output</h1> <p>First, let's clarify how we will gather the status of the components in the first place. Thankfully, OBIEE includes some scripts to display this information on both Linux and Windows. In 12C, the script is <strong>status.sh/status.cmd</strong>, and in 11G the primary command is <strong>opmnctl status</strong>. </p> <p>When I execute this script on an OBIEE 12C OEL environment, I receive the following response:</p> <p><img width="700" src="http://www.rittmanmead.com/blog/content/images/2016/08/Screen-Shot-2016-08-24-at-9-26-19-AM.png"> <br> The output includes some extra information we don't require, but we can ignore it for now. With some programming knowledge, we can trim what we don't need, organize it into a nice table, and then send the output nearly anywhere desired. For portability and stability, I will use Python to organize the message contents, and I will use email as the channel of communication.</p> <h1 id="sendingtheoutputthroughemail">Sending the Output Through Email</h1> <p>If we are only concerned with notifying administrators of the current status, one of the better avenues for sending this data is email. An email destination allows users to receive the status of the components almost instantaneously and to take the appropriate action as soon as possible.
</p> <p>Additionally, Python's standard set of modules includes functions to assist in sending SMTP messages, making the script even more portable and maintainable. The simplest method to generate the email is just by sending the complete output as the body of the message. An example of this output is below: <br> <img width="700" src="http://www.rittmanmead.com/blog/content/images/2016/08/Screen-Shot-2016-08-24-at-9-23-27-AM-1.png"> <br> While this works, it's not exactly attractive. With some Python and HTML/CSS skills, we can style the email to look much nicer:</p> <p><img width="700" src="http://www.rittmanmead.com/blog/content/images/2016/08/Screen-Shot-2016-08-25-at-11-37-32-AM-1.png"> <br> Now we have something nice we can send BI Administrators to show the status of the components.</p> <h1 id="configurationanduse">Configuration and Use</h1> <p>To effectively utilize this script, you will have to change some of the configuration parameters, located at the top of the script. The parameters I am using are shown below (with sensitive information hidden, of course): <br> <img src="http://www.rittmanmead.com/blog/content/images/2016/08/Screen-Shot-2016-08-25-at-11-47-44-AM.png" alt=""></p> <p>The sender and username fields should both be the user you are logging in as on the SMTP server to send the email. If you want the email address shown on a message to be different than the user configured on the SMTP server, then these can be set separately. The password field should be the password for the user being configured on the SMTP server. </p> <p>The recipient field should be the address of the user who will be receiving the emails. For simple management, this should be a single user, who should then be configured to forward all incoming status emails to the responsible parties. This will allow easier maintenance, since modifying the list of users can take place in the email configuration, rather than the script configuration. In this example, I am just sending the emails to my Rittman Mead email address.</p> <p>The SMTP settings should also be updated to reflect the SMTP server being utilized. If you are using Gmail as the SMTP server, then the configuration shown should work without modifications.</p> <p>Finally, the python script requires the absolute path to the status command to execute to produce the output (the <strong>opmnctl</strong> or <strong>status</strong> commands). Environment variables may not be taken into consideration by this script, so it's best to not use a variable in this path.</p> <p><strong>NOTE:</strong> If the <strong>\</strong> character is in the path, then you <strong>MUST</strong> use <strong>\\</strong> instead. This is especially true on Windows environments. If this change is required but omitted, the script will not function properly.</p> <p>Additionally, if you don't care about the HTML output (or if it doesn't render nicely in your email client), then it can be disabled by setting the value of <strong>render_html</strong> to <strong>False</strong>. If, for some reason, the nice HTML fails to render, then the email will just send the plain text output, instead of failing to deliver an email at all.</p> <p>Once configured, try executing the script: <br> <code>python email_component_status.py</code></p> <p>If everything worked correctly, then you should have received an email with the status of the components. 
If you do not receive an email, then you should check both the configuration settings and the internet connection of the machine (firewalls included). The script will also generate output that should assist you in troubleshooting the cause of the problem.</p> <h1 id="additionalnotificationsordestinations">Additional Notifications or Destinations</h1> <p>The solution provided, while useful, is not perfect. What if you want to send this information to a destination other than an email address, such as a ticketing system like Atlassian JIRA? Or what if you want notifications based on other situations, such as slow-running reports or high CPU usage? </p> <p>There may be many situations in which you would want one, or several, employees to receive different notifications based on events or circumstances that occur in your OBIEE environment. The script in this blog post only provides one of these notifications, but implementing many more will quickly become burdensome.</p> <p>As part of Rittman Mead's Performance Analytics offering, we include custom software and code to fulfill this requirement. In addition to providing dashboards to explore the performance of your OBIEE systems, Performance Analytics can be configured to distribute alerts, based on any quantifiable condition, to any number of external systems. </p> <p>The full Performance Analytics suite can alert users not only of down system components, but also of any number of conditions that may occur in your BI environment. </p> <p>If you have questions about this script, Performance Analytics, or anything else, <a>Contact Us here</a>.</p> <p>To find out more about Performance Analytics, <a href="mailto:info+npsn@rittmanmead.com">contact us</a>, visit the product page <a href="http://www.rittmanmead.com/blog/performance-analytics" target="_blank">here</a>, or read some of the fantastic blogs from <a href="http://www.rittmanmead.com/blog/2015/10/introducing-the-rittman-mead-obiee-performance-analytics-service/" target="_blank">Robin Moffatt</a>.</p> Nick Padgett 561c951b-f1a7-4ecf-a7a1-19e2247781b8 Wed Apr 05 2017 10:00:00 GMT-0400 (EDT) Oracle Analytics Cloud: Product Overview http://www.rittmanmead.com/blog/2017/04/oracle-analytics-cloud-product-overview/ <img src="http://www.rittmanmead.com/blog/content/images/2017/03/Oracle-Cloud-aaS-3.png" alt="Oracle Analytics Cloud: Product Overview"><p>We at Rittman Mead are always helping our customers solve their problems, and many times we have heard them</p> <ul> <li>being unsure about the sizing of their server </li> <li>being worried about the upfront cost of the licensing</li> <li>having recurring nightmares about patching</li> <li>willing to try the cloud but unable to find the right option to replace their on-premises system </li> </ul> <p>This is their lucky day: Oracle officially launched <a href="https://cloud.oracle.com/en_US/oac">Oracle Analytics Cloud </a> (OAC), a new PaaS (Platform as a Service) providing a complete and elastic Business Intelligence platform in the cloud, customizable and managed by you, but all on the Oracle Cloud! </p> <p><img src="http://www.rittmanmead.com/blog/content/images/2017/04/1mn66d.jpg" alt="Oracle Analytics Cloud: Product Overview"></p> <p>If you haven't been on a remote island, you may have noticed that in recent years Oracle's main focus has been around the <a href="https://cloud.oracle.com/en_US/home">Cloud</a>.
Several products have been launched covering a vast spectrum of functionalities: Data Management, Application Development, Business Analytics and Security are only some of the areas covered by the Software/Platform/Infrastructure as a Service offering.</p> <p><img width="800px" alt="Oracle Analytics Cloud: Product Overview" src="http://www.rittmanmead.com/blog/content/images/2017/03/Oracle-Cloud-aaS-2.png"></p> <p>In the Business Analytics area, we at Rittman Mead started thinking a long time ago about <a href="https://www.rittmanmead.com/blog/2013/11/thoughts-on-running-obiee-in-the-cloud-part-1-the-bi-platform/">how to host Oracle's BI on-premises (OBIEE) in the Cloud</a> and have worked closely with Oracle since the beta phase of their first PaaS product: <a href="https://www.rittmanmead.com/blog/2014/09/introduction-to-oracle-bi-cloud-service-product-overview/">BI Cloud Service</a> (BICS). We have effectively put our hands on all the cloud products in the BA family, like <a href="https://www.rittmanmead.com/blog/2015/02/introducing-oracle-big-data-discovery-part-1-the-visual-face-of-hadoop/">Big Data Discovery</a> (both on premises and cloud), <a href="https://www.rittmanmead.com/blog/2015/11/oracle-openworld-2015-roundup-part-1-obiee12c-and-data-visualisation-cloud-service/">Data Visualization Cloud Service</a>, and <a href="https://www.rittmanmead.com/blog/2015/09/taking-a-look-at-oracle-big-data-preparation-cloud-service-spark-based-data-transformation-in-the-cloud/">Big Data Preparation Service</a>.</p> <h1 id="businessintelligencecloudproducts">Business Intelligence Cloud Products</h1> <p>Until a few weeks ago, Oracle's main Business Analytics cloud products were <a href="https://cloud.oracle.com/en_US/business_intelligence">BI Cloud Service</a> (BICS) and <a href="https://cloud.oracle.com/en_US/data-visualization">Data Visualization Cloud Service</a> (DVCS). As mentioned in <a href="https://www.rittmanmead.com/blog/2014/09/introduction-to-oracle-bi-cloud-service-product-overview/">our blog</a>, both tools aimed initially at departmental use-cases: the simplicity of the data model interface and the lack of admin configuration options stopped them from being a compelling story for hosting a full enterprise Business Intelligence solution.</p> <p><img src="http://www.rittmanmead.com/blog/content/images/2017/04/BICS-and-DVCS.png" alt="Oracle Analytics Cloud: Product Overview"></p> <p>New features like <a href="http://www.oracle.com/technetwork/middleware/bicloud/downloads/index.html">BICS Data Sync</a>, <a href="http://www.oracle.com/technetwork/middleware/bicloud/downloads/index.html">Remote Data Connector</a> and <a href="http://www.ateam-oracle.com/lift-and-shift-on-premise-rpd-to-bi-cloud-service-bics/">RPD lift and shift</a> addressed almost all the limitations, but the lack of detailed admin/maintenance capabilities remained a stopper for moving complex environments into the cloud. Still, BICS and DVCS are perfect for their aim: business users analysing sets of data without needing to wait for IT to provision a server or to worry about upfront licensing costs.</p> <h1 id="oracleanalyticscloud">Oracle Analytics Cloud</h1> <p>Oracle Analytics Cloud raises the high-water mark in every direction by providing a product that is:</p> <ul> <li><strong>Complete functionality</strong>: most of the tools, procedures and options provided on-premises are now available in OAC.</li> <li><strong>Combining all the offerings of BICS, DV, BIEE and Essbase</strong>: OAC includes the features of Oracle's top BI products.
</li> <li><strong>Licensing Tailored</strong>: the many options available (discussed in a later post) can be chosen depending on analytical needs, timeframe of service, required performances</li> <li><strong>Easily Scalable</strong>: do you want to expand your BI solution to the double of the users without loosing performances? Just buy some more horsepower!</li> <li><strong>Fully Accessible</strong>: SSH connection available to the server makes it easy to change settings as needed, REST API and Clients are provided for all lifecycle operations </li> <li><strong>Customizable</strong>: settings, images, networking, VPN all settings are available</li> <li><strong>Scriptable</strong>: settings like scaling, instance creation and deletion, start and stop can be easily scripted via the REST-APIs</li> <li><strong>Fully Customer Managed</strong>: Oracle provides the automation to backup and patch but the customer decides when to run them.</li> </ul> <h2 id="whatsthedifference">What's The Difference?</h2> <p>So what's the difference between Oracle Analytics Cloud and the "old" DVCS and BICS? How is OACS going to change Oracle's BI offer in the cloud?</p> <p>The great deal of using OACS is <strong>control</strong>: BICS/DVC limiting factors around admin options and development are solved providing a tool capable of hosting a full enterprise BI solution. Even if the platform is managed by Oracle SSH access is provided meaning that instance configurations can be changed. No more upfront server sizing decisions, now the size of the instance is decided during creation time and can be changed later in the process if the demand changes.</p> <p>The REST-APIs will enable the scripting of the full lifecycle of the instance, providing a way to automate the BI enterprise workflow even in complex environments where concurrent development is needed. Patching and Backups are not a problem anymore with the automated processes provided. </p> <p>Direct RPD online editing is available with the Admin tool. The old BICS Data Modeler is still there for simple models, but Admin Tool can be used in case of complex RPDs. </p> <p><img src="http://www.rittmanmead.com/blog/content/images/2017/03/Modeler-1.png" alt="Oracle Analytics Cloud: Product Overview"></p> <p>The front-end is like the BICS and OBIEE 12c one, some new visualization have been added to Visual Analyzer in line with the new additions to Data Visualization Desktop: Parallel Coordinates, Chord, Network, Sankey diagrams are now available.</p> <p><img src="http://www.rittmanmead.com/blog/content/images/2017/03/New-Viz.png" alt="Oracle Analytics Cloud: Product Overview"></p> <p>A new console is now available in Visual Analyzer allowing settings like Mail or Deliveries that before were only accessible via Weblogic Console, Enterprise Manager or config files.</p> <p><img src="http://www.rittmanmead.com/blog/content/images/2017/03/Console.png" alt="Oracle Analytics Cloud: Product Overview"></p> <p>Finally Essbase is now available in the cloud too with a new web interface! 
<br> Summarizing, if you wanted to go Cloud, but were worried about missing options, now Oracle Analytics Cloud provides all you need to host a full Enterprise BI solution.</p> <p>In the next few days I'll be analysing various aspects of Oracle Analytics Cloud Suite, so keep in touch!</p> <p>If you need assistance in checking if Oracle Analytics Cloud suits your needs or in planning your migration to the cloud don't hesitate to <a href="mailto:info+ftoac@rittmanmead.com">contact us</a> </p> Francesco Tisiot c316ab73-2e20-4ed5-97b0-49efe13e1e17 Tue Apr 04 2017 11:00:00 GMT-0400 (EDT) Managing memory allocation for Oracle R Enterprise Embedded Execution http://www.oralytics.com/2017/04/managing-memory-allocation-for-oracle-r.html <p>When working with Oracle R Enterprise and particularly when you are using the ORE functions that can spawn multiple R processes, on the DB Server, you need to be very aware of the amount of memory that will be consumed for each call of the ORE function.</p> <p>ORE has two sets of parallel functions for running your user defined R scripts stored in the database, as part of the Embedded R Execution feature of ORE. The R functions are called ore.groupApply, ore.rowApply and ore.indexApply. When using SQL there are "rqGroupApply" and rqRowApply. (There is no SQL function equivalent of the R function ore.indexApply)</p> <p>For each parallel R process that is spawned on the DB server a certain amount of memory (RAM) will be allocated to this R process. The default size of memory to be allocated can be found by using the following query.</p> <pre><br />select name, value from sys.rq_config;<br /><br />NAME VALUE<br />----------------------------------- -----------------------------------<br />VERSION 1.5<br />MIN_VSIZE 32M<br />MAX_VSIZE 4G<br />MIN_NSIZE 2M<br />MAX_NSIZE 20M<br /></pre> <p>The memory allocation is broken out into the amount of memory allocated for Cells and NCells for each R process.</p> <p>If your parallel ORE function create a large number of parallel R processes then you can see that the amount of overall memory consumed can be significant. I've seen a few customers who very quickly run out of memory on their DB servers. Now that is something you do not want to happen.</p> <p>How can you prevent this from happening ?</p> <p>There are a few things you need to keep in mind when using the parallel enabled ORE functions. The first one is, how many R processes will be spawned. For most cases this can be estimated or calculated to a high degree of accuracy. Secondly, how much memory will be used to process each of the R processes. Thirdly, how memory do you have available on the DB server. Fourthly, how many other people will be running parallel R processes at the same time?</p> <p>Examining and answering each of these may look to be a relatively trivial task, but the complexity behind these can increase dramatically depending on the answer to the fourth point/question above.</p> <p>To calculate the amount of memory used during the ORE user defined R script, you can use the R garbage function to calculate the memory usage at the start and at the end of the R script, and then return the calculated amount. 
Yes, you need to add this extra code to your R script and then remove it when you have calculated the memory usage.</p> <pre><br />gc.start <- gc(reset=TRUE)<br />...<br />gc.end <- gc()<br />gc.used <- gc.end[,7] - gc.start[,7] # amount consumed by the processing<br /></pre> <p>Using this information and the answers to the questions I listed above, you can now look at calculating how much memory you need to allocate to the R processes. You can set this to be static for all R processes, or you can use some code to allocate the amount of memory that is needed for each R process. But this starts to become messy. The following gives some examples (using R) of changing the R memory allocations in the Oracle Database. Similar commands can be issued using SQL.</p> <pre><br />> sys.rqconfigset('MIN_VSIZE', '10M') -- min heap 10MB, default 32MB<br />> sys.rqconfigset('MAX_VSIZE', '100M') -- max heap 100MB, default 4GB<br />> sys.rqconfigset('MIN_NSIZE', '500K') -- min number cons cells 500x1024, default 1M<br />> sys.rqconfigset('MAX_NSIZE', '2M') -- max number cons cells 2M, default 20M<br /></pre> <p>Some guidelines: as with all guidelines, you have to consider all the other requirements for the Database, and in reality you will have to find a balance between what is listed here and what is actually possible.</p> <ul> <li>Set parallel_degree_policy to MANUAL.</li> <li>Set parallel_min_servers to the number of parallel slave processes to be started when the database instance starts; this avoids start-up time for the R processes. This is not a problem for long-running processes, but it can save time for processes that run for tens of seconds.</li> <li>To avoid overloading the CPUs if the parallel_max_servers limit is reached, set the hidden parameter _parallel_statement_queuing to TRUE. This avoids overloading and lets processes wait.</li> <li>Set application tables and their indexes to DOP 1 to reinforce the ability of ORE to determine when to use parallelism.</li></ul> <p>Understanding the memory requirements for your ORE processes can be a tricky business and can take some time, to work out the right balance between what is needed by the spawned parallel R processes and everything else that is going on in the Database. There will be a lot of trial and error in working this out, and it is always good to reach out for some help. If you have a similar scenario and need some help or guidance, let me know.</p> Brendan Tierney tag:blogger.com,1999:blog-4669933501315263808.post-2421823361078030819 Mon Apr 03 2017 14:22:00 GMT-0400 (EDT) Metadata Modeling in the Database with Analytic Views http://www.rittmanmead.com/blog/2017/04/metadata-modeling-in-the-database-with-analytic-views/ <img src="http://www.rittmanmead.com/blog/content/images/2017/03/Overview.png" alt="Metadata Modeling in the Database with Analytic Views"><p><a href="http://www.oracle.com/technetwork/database/enterprise-edition/downloads/index.html">12.2</a>, the latest Oracle database release, provides a <a href="https://docs.oracle.com/en/cloud/paas/exadata-express-cloud/csdbf/oracle-database-12-2-new-features.html#GUID-D0673E3E-DF05-47C3-B1E2-BEF91FA36CEF">whole set of new features</a> enhancing various aspects of the product, including JSON support, Auto-List Partitioning and APEX news among others.
<p>Understanding the memory requirements for your ORE processes can be a tricky business, and it can take some time to work out the right balance between what is needed by the spawned parallel R processes and everything else that is going on in the Database. There will be a lot of trial and error in working this out, and it is always good to reach out for some help. If you have a similar scenario and need some help or guidance let me know.</p> Brendan Tierney tag:blogger.com,1999:blog-4669933501315263808.post-2421823361078030819 Mon Apr 03 2017 14:22:00 GMT-0400 (EDT) Metadata Modeling in the Database with Analytic Views http://www.rittmanmead.com/blog/2017/04/metadata-modeling-in-the-database-with-analytic-views/ <img src="http://www.rittmanmead.com/blog/content/images/2017/03/Overview.png" alt="Metadata Modeling in the Database with Analytic Views"><p><a href="http://www.oracle.com/technetwork/database/enterprise-edition/downloads/index.html">12.2</a>, the latest Oracle Database release, provides a <a href="https://docs.oracle.com/en/cloud/paas/exadata-express-cloud/csdbf/oracle-database-12-2-new-features.html#GUID-D0673E3E-DF05-47C3-B1E2-BEF91FA36CEF">whole set of new features</a> enhancing various aspects of the product, including JSON support, Auto-List Partitioning and APEX news among others. One of the biggest pieces of news in the Data Warehousing / Analytics area was the introduction of <a href="https://docs.oracle.com/database/122/DWHSG/overview-analytic-views.htm#RESOURCEID-18323-7EE80A8A">Analytic Views</a>, which as per Oracle's definition are</p> <blockquote> <p>Metadata objects that enable the user to quickly and easily create complex hierarchical and dimensional queries on data in database tables and views</p> </blockquote> <h3 id="tldr">tl;dr</h3> <p>If you are in a rush, here is an abstract of what you'll find in this looooong blog post:</p> <p>Metadata modeling can now be done directly in the database using Analytic Views, providing end users with a way of querying database objects without needing knowledge of join conditions, aggregation functions or order by clauses. <br> This post will guide you through the creation of an analytic view that replicates a part of OBIEE's Sampleapp business model. The last part of the post is dedicated to understanding the usage of analytic views and the benefits for end users, especially in cases where self-service BI tools are used.</p> <p><img src="http://www.rittmanmead.com/blog/content/images/2017/03/1mfupd.jpg" alt="Metadata Modeling in the Database with Analytic Views"></p> <p>If you are still around and interested in the topic, please take a drink and sit comfortably; it will be a good journey.</p> <h1 id="metadatamodeling">Metadata Modeling</h1> <p>What, then, are Analytic Views in detail? How are they going to improve end users' ability to query data? </p> <p>To answer the above questions I'll take a step back. Many readers of this blog are familiar with OBIEE and its core: the <strong>Repository</strong>. The repository contains the metadata model from the physical sources to the presentation areas and includes the definition of:</p> <ul> <li><strong>Joins</strong> between tables</li> <li><strong>Hierarchies</strong> for dimensions</li> <li><strong>Aggregation</strong> rules</li> <li><strong>Security</strong> settings</li> <li><strong>Data Filters</strong></li> <li><strong>Data Sources</strong></li> </ul> <p>This allows end users to just pick columns from a Subject Area and display them in the appropriate way, without needing to worry about writing SQL or knowing how the data is stored. Moreover, definitions are held centrally, providing the famous <strong>unique source of truth</strong> across the entire enterprise.</p> <p><img alt="Metadata Modeling in the Database with Analytic Views" width="600px" src="http://www.rittmanmead.com/blog/content/images/2017/03/RPD.png"></p> <p>The wave of self-service BI tools like Tableau or Oracle's <a href="https://www.rittmanmead.com/blog/2016/10/data-visualisation-desktop-12-2-2-0-new-features-2/">Data Visualization Desktop</a> put products capable of querying almost any kind of data source in a visual and intuitive way directly in end users' hands. Easy and direct access to data is a good thing for end users but, as stated above, it requires knowledge of the data model, joins and aggregation methods. <br> Self-service tools can slightly simplify the process by providing some hints based on column names, types or values, but the cruel reality is that the end user has to build the necessary knowledge of the data source before producing correct results.
This is why we've several times seen self-service BI tools being "attached" to OBIEE: get official corporate data from the unique source of truth and mash it up with information coming from external sources like personal Excel files or the output of Big Data processes.</p> <h1 id="analyticsviews">Analytic Views</h1> <p>Analytic Views (AV) take OBIEE's metadata modeling concept and move it to the database level, providing a way of organizing data in a dimensional model so it can be queried with simpler SQL statements. <br> Analytic Views are standard views with the following extra options:</p> <ul> <li>Enable the definition of facts, dimensions and hierarchies that are included in system-generated columns</li> <li>Automatically aggregate the data based on pre-defined calculations</li> <li>Include presentation metadata</li> </ul> <p>Analytic views are created with a <code>CREATE ANALYTIC VIEW</code> statement; some privileges need to be granted to the creating user, and you can find the full list in Oracle's <a href="https://docs.oracle.com/database/122/DWHSG/overview-analytic-views.htm#DWHSG-GUID-6F948948-6AE6-4A89-8AAC-5B8952CEF41D">documentation</a>. </p> <p>Every analytic view is composed of the following metadata objects:</p> <ul> <li><strong>Attribute dimensions</strong>: organising table/view columns into attributes and levels.</li> <li><strong>Hierarchies</strong>: defining hierarchical relationships on top of an attribute dimension object.</li> <li><strong>Analytic view objects</strong>: defining fact data referencing both fact tables and hierarchies.</li> </ul> <p>With all the above high-level concepts in mind, it's now time to see how Analytic Views can be used in a reporting environment.</p> <h2 id="databaseprovisioning">Database Provisioning</h2> <p>For the purpose of this blog post I used <a href="https://github.com/oracle/docker-images/tree/master/OracleDatabase">Oracle's 12.2.0.1 database Docker image</a>, provided by <a href="https://twitter.com/geraldvenzl">Gerald Venzl</a>, the quickest way of spinning up a local instance. You just need to:</p> <ul> <li>Install Docker</li> <li>Download the database installer from <a href="http://www.oracle.com/technetwork/database/enterprise-edition/downloads/index.html">Oracle's website</a></li> <li>Place the installer in the proper location mentioned in the <a href="https://github.com/oracle/docker-images/tree/master/OracleDatabase">documentation</a></li> <li>Build the Oracle Database 12.2.0.1 Enterprise Edition Docker image by executing </li> </ul> <pre><code>./buildDockerImage.sh -v 12.2.0.1 -e </code></pre> <ul> <li>Run the image by executing </li> </ul> <pre><code>docker run --name db12c -p 1521:1521 -p 5500:5500 -e ORACLE_SID=orcl -e ORACLE_PDB=pdborcl -e ORACLE_CHARACTERSET=AL32UTF8 oracle/database:12.2.0.1-ee </code></pre> <p>The detailed parameter definitions can be found in the <a href="https://github.com/oracle/docker-images/tree/master/OracleDatabase">GitHub repository</a>.
You can then connect via sqlplus to your local instance by executing the standard </p> <pre><code>sqlplus sys/pwd@//localhost:1521/pdborcl as sysdba </code></pre> <p>The password is generated automatically during the first run of the image and can be found in the logs; look for the following string</p> <pre><code>ORACLE AUTO GENERATED PASSWORD FOR SYS, SYSTEM AND PDBADMIN: XXXXxxxxXXX </code></pre> <p>Once the database is created it's time to set the goal: I'll try to recreate a piece of <a href="http://www.oracle.com/technetwork/middleware/bi-foundation/obiee-samples-167534.html">Oracle's Sampleapp</a> RPD model in the database using Analytic Views.</p> <h1 id="modeldescription">Model description</h1> <p>In this blog post I'll look at the <code>01 - Sample App</code> business model, and specifically I'll try to replicate the logic behind Time, Product and the <code>F0 Sales Base Measures</code> using Analytic Views.</p> <p><img src="http://www.rittmanmead.com/blog/content/images/2017/03/Goal.png" alt="Metadata Modeling in the Database with Analytic Views"></p> <h1 id="dimproduct">Dim Product</h1> <p>Sampleapp's <code>D1 - Products (Level Based Hierarchy)</code> is based on two logical table sources: <code>SAMP_PRODUCTS_D</code>, providing product name, description, LOB and Brand, and <code>SAMP_PROD_IMG_D</code>, containing product images. For the purpose of this test we'll keep our focus on <code>SAMP_PRODUCTS_D</code> only. <br> The physical mapping of logical columns is shown in the image below.</p> <p><img src="http://www.rittmanmead.com/blog/content/images/2017/03/Product.png" alt="Metadata Modeling in the Database with Analytic Views"></p> <h3 id="attributedimension">Attribute Dimension</h3> <p>The first piece we're going to build is the <em>attribute dimension</em>, where we'll be defining attributes and levels. The mappings in the above image can "easily" be translated into attributes with the following SQL.</p> <pre><code>CREATE OR REPLACE ATTRIBUTE DIMENSION D1_DIM_PRODUCT
USING SAMP_PRODUCTS_D
ATTRIBUTES (
  PROD_KEY    as P0_Product_Number   CLASSIFICATION caption VALUE 'P0 Product Number',
  PROD_DSC    as P1_Product          CLASSIFICATION caption VALUE 'P1 Product',
  TYPE        as P2_Product_Type     CLASSIFICATION caption VALUE 'P2 Product Type',
  TYPE_KEY    as P2k_Product_Type    CLASSIFICATION caption VALUE 'P2k Product Type',
  LOB         as P3_LOB              CLASSIFICATION caption VALUE 'P3 LOB',
  LOB_KEY     as P3k_LOB             CLASSIFICATION caption VALUE 'P3k LOB',
  BRAND       as P4_Brand            CLASSIFICATION caption VALUE 'P4 Brand',
  BRAND_KEY   as P4k_Brand           CLASSIFICATION caption VALUE 'P4k Brand',
  ATTRIBUTE_1 as P5_Attribute_1      CLASSIFICATION caption VALUE 'P5 Attribute 1',
  ATTRIBUTE_2 as P6_Attribute_2      CLASSIFICATION caption VALUE 'P6 Attribute 2',
  SEQUENCE    as P7_Product_Sequence CLASSIFICATION caption VALUE 'P7 Product Sequence',
  TOTAL_VALUE as P99_Total_Value     CLASSIFICATION caption VALUE 'P99 Total Value')
</code></pre> <p>A few pieces to note:</p> <ul> <li><code>CREATE OR REPLACE ATTRIBUTE DIMENSION</code>: we are defining a dimension, its attributes and its levels. </li> <li><code>USING SAMP_PRODUCTS_D</code>: defines the datasource, in our case the table <code>SAMP_PRODUCTS_D</code>.
Only one datasource is allowed per dimension.</li> <li><code>PROD_KEY as P0_Product_Number</code>: using the standard <code>as</code> notation we can easily rename columns</li> <li><code>CLASSIFICATION CAPTION ...</code>: several options, like a caption or a description, can be added for each attribute</li> </ul> <p>The dimension definition is not complete with only the attribute declarations; we also need to define the levels. These can be taken from OBIEE's hierarchy</p> <p><img src="http://www.rittmanmead.com/blog/content/images/2017/03/Hierachy.png" alt="Metadata Modeling in the Database with Analytic Views"></p> <p>For each level we can define:</p> <ul> <li>The level name, caption and description</li> <li>The key</li> <li>The member name and caption</li> <li>The order by clause</li> </ul> <p>Translating the above OBIEE hierarchy levels into Oracle SQL:</p> <pre><code>LEVEL BRAND
  CLASSIFICATION caption VALUE 'BRAND'
  CLASSIFICATION description VALUE 'Brand'
  KEY P4k_Brand
  MEMBER NAME P4_Brand
  MEMBER CAPTION P4_Brand
  ORDER BY P4_Brand
LEVEL Product_LOB
  CLASSIFICATION caption VALUE 'LOB'
  CLASSIFICATION description VALUE 'Lob'
  KEY P3k_LOB
  MEMBER NAME P3_LOB
  MEMBER CAPTION P3_LOB
  ORDER BY P3_LOB
  DETERMINES(P4k_Brand)
LEVEL Product_Type
  CLASSIFICATION caption VALUE 'Type'
  CLASSIFICATION description VALUE 'Type'
  KEY P2k_Product_Type
  MEMBER NAME P2_Product_Type
  MEMBER CAPTION P2_Product_Type
  ORDER BY P2_Product_Type
  DETERMINES(P3k_LOB,P4k_Brand)
LEVEL Product_Details
  CLASSIFICATION caption VALUE 'Detail'
  CLASSIFICATION description VALUE 'Detail'
  KEY P0_Product_Number
  MEMBER NAME P1_Product
  MEMBER CAPTION P1_Product
  ORDER BY P1_Product
  DETERMINES(P2k_Product_Type,P3k_LOB,P4k_Brand)
ALL MEMBER NAME 'ALL PRODUCTS';
</code></pre> <p>There is an additional <code>DETERMINES</code> clause in the above SQL for each level apart from <code>Brand</code>; this is how we specify the relationship between level keys. If we take the <code>Product_LOB</code> example, <code>DETERMINES(P4k_Brand)</code> declares that any LOB in our table automatically determines a Brand (in OBIEE terms, that LOB is a child of Brand).</p> <h3 id="hierarchy">Hierarchy</h3> <p>The next step is defining a hierarchy on top of the attribute dimension <code>D1_DIM_PRODUCT</code> defined above.
We can create it just by specifying:</p> <ul> <li>the attribute dimension to use</li> <li>the list of levels and the relation between them</li> </ul> <p>which in our case becomes</p> <pre><code>CREATE OR REPLACE HIERARCHY PRODUCT_HIER
  CLASSIFICATION caption VALUE 'Products Hierarchy'
USING D1_DIM_PRODUCT
  (Product_Details CHILD OF
   Product_Type    CHILD OF
   Product_LOB     CHILD OF
   BRAND);
</code></pre> <p>When looking into the hierarchy <code>Product_hier</code> we can see that it creates an OLAP-style dimension with a row for each member at each level of the hierarchy, plus extra fields like <code>DEPTH</code>, <code>IS_LEAF</code> and <code>HIER_ORDER</code></p> <p><img src="http://www.rittmanmead.com/blog/content/images/2017/03/Hierarchy-Details.png" alt="Metadata Modeling in the Database with Analytic Views"></p> <p>The columns contained in <code>Product_hier</code> are:</p> <ul> <li>One for each attribute defined in the attribute dimension <code>D1_DIM_PRODUCT</code>, like <code>P0_PRODUCT_NUMBER</code> or <code>P2K_PRODUCT_TYPE</code></li> <li>The member name, caption, description and unique name </li> <li>The level name in the hierarchy and the related depth</li> <li>The relative order of the member in the hierarchy</li> <li>A field <code>IS_LEAF</code> flagging hierarchy endpoints</li> <li>References to the parent level</li> </ul> <h3 id="memberuniquenames">Member Unique Names</h3> <p>A particularity to notice is that the <code>MEMBER_UNIQUE_NAME</code> of <code>Cell Phones</code> is <code>[PRODUCT_TYPE].&amp;[101]</code>, which is the concatenation of the <code>LEVEL</code> and the <code>P2K_PRODUCT_TYPE</code> value. <br> One might expect the member unique name to be represented as the concatenation of all the preceding hierarchy members, Brand and LOB, and the member key itself, in a string like <code>[PRODUCT_TYPE].&amp;[10001]&amp;[1001]&amp;[101]</code>. </p> <p>This is the default behaviour; however, it is not happening in our case since we set <code>DETERMINES(P3k_LOB,P4k_Brand)</code> in the attribute dimension definition. We specified that Brand (<code>[10001]</code>) and LOB (<code>[1001]</code>) can automatically be inferred from the Product Type, so there is no need to store those values in the member key. We can find the same setting in OBIEE's Product Type logical level</p> <p><img src="http://www.rittmanmead.com/blog/content/images/2017/03/Product-Type-Key.png" alt="Metadata Modeling in the Database with Analytic Views"></p>
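<p>Hierarchies can be queried like any other view, so a quick sanity check such as the following (a minimal sketch using the system-generated columns described above) lists every member at every level, together with its depth and leaf flag:</p> <pre><code>SELECT MEMBER_UNIQUE_NAME,
       MEMBER_NAME,
       LEVEL_NAME,
       DEPTH,
       IS_LEAF
FROM   PRODUCT_HIER
ORDER  BY HIER_ORDER;
</code></pre>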
<h1 id="dimdate">Dim Date</h1> <p>The basic <code>D0 Dim Date</code> can be built starting from the table <code>SAMP_TIME_DAY_D</code> following the same process as above. Like in OBIEE, some additional settings are required when creating a time dimension:</p> <ul> <li><code>DIMENSION TYPE TIME</code>: the time dimension type <strong>needs</strong> to be specified</li> <li><code>LEVEL TYPE &lt;LEVEL_NAME&gt;</code>: each level in the time hierarchy needs to belong to a precise level type chosen from: <ul><li>YEARS</li> <li>HALF_YEARS</li> <li>QUARTERS</li> <li>MONTHS</li> <li>WEEKS</li> <li>DAYS</li> <li>HOURS</li> <li>MINUTES</li> <li>SECONDS</li></ul></li> </ul> <h2 id="attributedimension">Attribute Dimension</h2> <p><img src="http://www.rittmanmead.com/blog/content/images/2017/03/Time-Column-Mapping.png" alt="Metadata Modeling in the Database with Analytic Views"></p> <p>Taking the additional settings into consideration, the <code>Dim Date</code> column mappings in the above image can be translated into the following attribute dimension SQL definition.</p> <pre><code>CREATE OR REPLACE ATTRIBUTE DIMENSION D0_DIM_DATE
DIMENSION TYPE TIME
USING SAMP_TIME_DAY_D
ATTRIBUTES (
  CALENDAR_DATE  AS T00_CALENDAR_DATE,
  PER_NAME_MONTH AS T02_PER_NAME_MONTH,
  PER_NAME_QTR   AS T03_PER_NAME_QTR,
  PER_NAME_YEAR  AS T04_PER_NAME_YEAR,
  DAY_KEY        AS T06_ROW_WID,
  BEG_OF_MTH_WID AS T22_BEG_OF_MTH_WID,
  BEG_OF_QTR_WID AS T23_BEG_OF_QTR_WID )
LEVEL CAL_DAY
  LEVEL TYPE DAYS
  KEY T00_CALENDAR_DATE
  ORDER BY T00_CALENDAR_DATE
  DETERMINES(T22_BEG_OF_MTH_WID, T23_BEG_OF_QTR_WID, T04_PER_NAME_YEAR)
LEVEL CAL_MONTH
  LEVEL TYPE MONTHS
  KEY T22_BEG_OF_MTH_WID
  MEMBER NAME T02_PER_NAME_MONTH
  ORDER BY T22_BEG_OF_MTH_WID
  DETERMINES(T23_BEG_OF_QTR_WID, T04_PER_NAME_YEAR)
LEVEL CAL_QUARTER
  LEVEL TYPE QUARTERS
  KEY T23_BEG_OF_QTR_WID
  MEMBER NAME T03_PER_NAME_QTR
  ORDER BY T23_BEG_OF_QTR_WID
  DETERMINES(T04_PER_NAME_YEAR)
LEVEL CAL_YEAR
  LEVEL TYPE YEARS
  KEY T04_PER_NAME_YEAR
  MEMBER NAME T04_PER_NAME_YEAR
  ORDER BY T04_PER_NAME_YEAR
ALL MEMBER NAME 'ALL TIMES';
</code></pre> <p>You may have noticed a different mapping of keys, member names and order by attributes. Let's take <code>CAL_MONTH</code> as an example. It's defined by two columns: </p> <ul> <li><code>BEG_OF_MTH_WID</code>: used for joins and ordering</li> <li><code>PER_NAME_MONTH</code>: used as the "display label"</li> </ul> <p><code>PER_NAME_MONTH</code> in the <code>YYYY / MM</code> format could also be used for ordering, but most of the time end users request months in the <code>MM / YYYY</code> format. Being able to set an ordering column different from the member name allows us to properly manage the hierarchy.</p> <p><img src="http://www.rittmanmead.com/blog/content/images/2017/03/Cal_month.png" alt="Metadata Modeling in the Database with Analytic Views"></p> <h2 id="hierarchy">Hierarchy</h2> <p>The time hierarchy follows the same rules as the product one; no additional settings are required.</p> <pre><code>CREATE OR REPLACE HIERARCHY TIME_HIER
USING D0_DIM_DATE
  (CAL_DAY     CHILD OF
   CAL_MONTH   CHILD OF
   CAL_QUARTER CHILD OF
   CAL_YEAR);
</code></pre>
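<p>As a quick check that the ordering behaves as intended, a sketch like the following should return the months in chronological order, driven by the <code>T22_BEG_OF_MTH_WID</code> key, even though the member names are plain strings:</p> <pre><code>SELECT MEMBER_NAME,
       LEVEL_NAME,
       HIER_ORDER
FROM   TIME_HIER
WHERE  LEVEL_NAME = 'CAL_MONTH'
ORDER  BY HIER_ORDER;
</code></pre>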
<h1 id="factsales">Fact Sales</h1> <p>The last step in the journey is the definition of the analytic view on top of the fact table, which as per Oracle's <a href="https://docs.oracle.com/database/122/DWHSG/analytic-views.htm#RESOURCEID-186323-7FDC9206">documentation</a> </p> <blockquote> <p>An analytic view specifies the source of its fact data and defines measures that describe calculations or other analytic operations to perform on the data. An analytic view also specifies the attribute dimensions and hierarchies that define the rows of the analytic view.</p> </blockquote> <p>The analytic view definition contains the following specifications:</p> <ul> <li>The <strong>data source</strong>: the table or view that will be used for the calculation</li> <li>The <strong>columns</strong>: which columns from the source objects to use in the calculations</li> <li>The <strong>attribute dimensions</strong> and <strong>hierarchies</strong>: defining both the list of attributes and the levels of the analysis</li> <li>The <strong>measures</strong>: a set of aggregations based on the predefined columns from the data source.</li> </ul> <p>Within the analytic view definition a <strong>materialized view</strong> can be defined in order to store aggregated values. This is similar to OBIEE's Logical Table Source setting for aggregates.</p> <h2 id="analyticviewdefinition">Analytic View Definition</h2> <p>For the purpose of this post I'll use <code>SAMP_REVENUE_F</code>, which is one of the sources of <code>F0 Sales Base Measures</code> in Sampleapp. The following image shows the logical column mapping.</p> <p><img src="http://www.rittmanmead.com/blog/content/images/2017/03/Fact-Table-Column-Mapping.png" alt="Metadata Modeling in the Database with Analytic Views"></p> <p>The above mappings can be translated into the following SQL</p> <pre><code>CREATE OR REPLACE ANALYTIC VIEW F0_SALES_BASE_MEASURES
USING SAMP_REVENUE_F
DIMENSION BY (
  D0_DIM_DATE
    KEY BILL_DAY_DT REFERENCES T00_CALENDAR_DATE
    HIERARCHIES ( TIME_HIER DEFAULT),
  D1_DIM_PRODUCT
    KEY PROD_KEY REFERENCES P0_Product_Number
    HIERARCHIES ( PRODUCT_HIER DEFAULT) )
MEASURES (
  F1_REVENUE         FACT REVENUE       AGGREGATE BY SUM,
  F10_VARIABLE_COST  FACT COST_VARIABLE AGGREGATE BY SUM,
  F11_FIXED_COST     FACT COST_FIXED    AGGREGATE BY SUM,
  F2_BILLED_QTY      FACT UNITS,
  F3_DISCOUNT_AMOUNT FACT DISCNT_VALUE  AGGREGATE BY SUM,
  F4_AVG_REVENUE     FACT REVENUE       AGGREGATE BY AVG,
  F21_REVENUE_AGO    AS (LAG(F1_REVENUE) OVER (HIERARCHY TIME_HIER OFFSET 1)) )
DEFAULT MEASURE F1_REVENUE;
</code></pre> <p>Some important parts need to be highlighted:</p> <ul> <li><code>USING SAMP_REVENUE_F</code>: defines the analytic view source, in our case the table <code>SAMP_REVENUE_F</code></li> <li><code>DIMENSION BY</code>: this section provides the list of dimensions and related hierarchies to take into account</li> <li><code>KEY BILL_DAY_DT REFERENCES T00_CALENDAR_DATE</code>: defines the join between the fact table and the attribute dimension</li> <li><code>HIERARCHIES (TIME_HIER DEFAULT)</code>: multiple hierarchies can be defined on top of an attribute dimension and used in an analytic view; however, like in OBIEE, only one will be used by default</li> <li><code>F1_REVENUE FACT REVENUE AGGREGATE BY SUM</code>: defines the measure with alias, source column and aggregation method</li> <li><code>F2_BILLED_QTY FACT UNITS</code>: if no aggregation method is defined it relies on the default, <code>SUM</code> </li> <li><code>F21_REVENUE_AGO</code>: new metrics can be calculated based on previously defined columns, replicating OBIEE functions like time series. The formula <code>(LAG(F1_REVENUE) OVER (HIERARCHY TIME_HIER OFFSET 1))</code> calculates the equivalent of OBIEE's <code>AGO</code> function for each level of the hierarchy (see the sketch after this list).</li> <li><code>DEFAULT MEASURE F1_REVENUE</code>: defines the default measure of the analytic view</li> </ul>
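<p>As a quick sanity check of the time-series measure, a query along these lines (a sketch against the objects created above; it assumes the product hierarchy's top level carries the default level name <code>ALL</code>) returns each month's revenue next to the previous month's:</p> <pre><code>SELECT TIME_HIER.MEMBER_NAME AS TIME_SLICE,
       F1_REVENUE,
       F21_REVENUE_AGO
FROM   F0_SALES_BASE_MEASURES
WHERE  TIME_HIER.LEVEL_NAME    = 'CAL_MONTH'
AND    PRODUCT_HIER.LEVEL_NAME = 'ALL'  -- roll up across all products
ORDER  BY TIME_HIER.HIER_ORDER;
</code></pre>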
<h1 id="usinganalyticviews">Using Analytic Views</h1> <p>After the analytic view definition, it's time to analyse what benefits end users get when using them. We are going to take a simple example: a query to return the <em>Revenue</em> and <em>Billed Qty</em> per <em>Month</em> and <em>Brand</em>.</p> <p>Using only the original tables we would have the following SQL</p> <pre><code>SELECT D.CAL_MONTH,
       D.BEG_OF_MTH_WID,
       P.BRAND,
       SUM(F.REVENUE) AS F01_REVENUE,
       SUM(F.UNITS)   AS F02_BILLED_QTY
FROM   SAMP_REVENUE_F F
JOIN   SAMP_PRODUCTS_D P ON (F.PROD_KEY = P.PROD_KEY)
JOIN   SAMP_TIME_DAY_D D ON (F.BILL_DAY_DT = D.CALENDAR_DATE)
GROUP BY D.CAL_MONTH, D.BEG_OF_MTH_WID, P.BRAND
ORDER BY D.BEG_OF_MTH_WID, P.BRAND;
</code></pre> <p>The above SQL requires knowledge of:</p> <ul> <li>Aggregation methods</li> <li>Joins</li> <li>Group by</li> <li>Ordering</li> </ul> <p>Even if this is an oversimplification of analytic view usage, you can already spot that some knowledge of the base data structure and of the SQL language is needed.</p> <p>Using the analytic views defined above, the query can be written as </p> <pre><code>SELECT TIME_HIER.MEMBER_NAME    AS TIME_SLICE,
       PRODUCT_HIER.MEMBER_NAME AS PRODUCT_SLICE,
       F1_REVENUE,
       F2_BILLED_QTY
FROM   F0_SALES_BASE_MEASURES
WHERE  TIME_HIER.LEVEL_NAME    IN ('CAL_MONTH')
AND    PRODUCT_HIER.LEVEL_NAME IN ('BRAND')
ORDER  BY TIME_HIER.HIER_ORDER, PRODUCT_HIER.HIER_ORDER;
</code></pre> <p>As you can see, there is a simplification of the SQL statement: no aggregations, join conditions or group by predicates are needed. All the end user has to know is the analytic view name and the related hierarchies that can be used.</p> <p>The additional benefit is that if we want to change the level of granularity of the above query we just need to change the <code>WHERE</code> condition. E.g. to have the rollup per Year and LOB we just have to substitute</p> <pre><code>WHERE  TIME_HIER.LEVEL_NAME    IN ('CAL_MONTH')
AND    PRODUCT_HIER.LEVEL_NAME IN ('BRAND')
</code></pre> <p>with </p> <pre><code>WHERE  TIME_HIER.LEVEL_NAME    IN ('CAL_YEAR')
AND    PRODUCT_HIER.LEVEL_NAME IN ('LOB')
</code></pre> <p>without touching the rest of the statement: no changes to the select, group by or order by clauses are needed.</p>
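<p>And because level filtering is just a predicate, nothing stops us from asking for several granularities in one statement. The following sketch (again assuming the default <code>ALL</code> product level) returns yearly totals and their quarterly breakdown in a single result set, something that would need a UNION of two aggregate queries against the base tables:</p> <pre><code>SELECT TIME_HIER.LEVEL_NAME,
       TIME_HIER.MEMBER_NAME AS TIME_SLICE,
       F1_REVENUE
FROM   F0_SALES_BASE_MEASURES
WHERE  TIME_HIER.LEVEL_NAME    IN ('CAL_YEAR','CAL_QUARTER')
AND    PRODUCT_HIER.LEVEL_NAME = 'ALL'
ORDER  BY TIME_HIER.HIER_ORDER;
</code></pre>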
<h2 id="usinganalyticviewsindvd">Using Analytic Views in DVD</h2> <p>At the beginning of my blog post I wrote that Analytic Views could be useful when used in conjunction with self-service BI tools. Let's have a look at how the end user journey is simplified in the case of Oracle's <a href="https://www.rittmanmead.com/blog/2016/10/data-visualisation-desktop-12-2-2-0-new-features-2/">Data Visualization Desktop</a>.</p> <p>Without AVs the end user had two options to source the data:</p> <ul> <li><strong>Write the complex SQL statement</strong>, with join conditions and group and order by clauses, in the SQL editor to retrieve data at the correct level with the related dimensions</li> <li>Import the fact table and dimensions as <strong>separate datasources</strong> and join them together in DVD's project.</li> </ul> <p>Both options require knowledge of SQL and of the join conditions in order to be able to present correct data. Using Analytic Views the process is simplified. We just need to create a new source pointing to the database where the analytic views are sitting. <br> The next step is retrieving the necessary columns from the analytic view. Unfortunately analytic views are not visible from DVD's object explorer (only standard tables and views are shown)</p> <p><img src="http://www.rittmanmead.com/blog/content/images/2017/03/List-of-Tables.png" alt="Metadata Modeling in the Database with Analytic Views"></p> <p>We can however specify with a simple SQL statement all the information we need, like the Time and Product member slices, the related levels and the hierarchy order.</p> <pre><code>SELECT TIME_HIER.MEMBER_NAME    AS TIME_SLICE,
       PRODUCT_HIER.MEMBER_NAME AS PRODUCT_SLICE,
       TIME_HIER.LEVEL_NAME     AS TIME_LEVEL,
       PRODUCT_HIER.LEVEL_NAME  AS PRODUCT_LEVEL,
       TIME_HIER.HIER_ORDER     AS TIME_HIER_ORDER,
       PRODUCT_HIER.HIER_ORDER  AS PRODUCT_HIER_ORDER,
       F1_REVENUE,
       F2_BILLED_QTY
FROM   F0_SALES_BASE_MEASURES
ORDER  BY TIME_HIER.HIER_ORDER, PRODUCT_HIER.HIER_ORDER;
</code></pre> <p>You may have noted that I'm not specifying any <code>WHERE</code> clause for level filtering: as an end user I want to be able to retrieve all the necessary levels by just changing a filter in my DVD project. After including the above SQL in the datasource definition and amending the measure/attribute definitions I can start playing with the analytic view data.</p> <p><img src="http://www.rittmanmead.com/blog/content/images/2017/03/DVD-Source.png" alt="Metadata Modeling in the Database with Analytic Views"></p> <p>I can simply include the dimensions' <code>MEMBER_NAME</code> in the graphs together with the measures, and add the <code>LEVEL_NAME</code> to the filters. In this way I can change the graph granularity by simply selecting the appropriate <code>LEVEL</code> in the filter selector for all the dimensions available.</p> <p><img src="http://www.rittmanmead.com/blog/content/images/2017/03/DVD-Graphs.png" alt="Metadata Modeling in the Database with Analytic Views"></p> <p>One thing to notice, however, is that the data coming from columns of various datatypes, like date, month and year, is "condensed" into a single <code>VARCHAR</code> column. In the case of different datatypes (like the date in the time dimension) this will prevent correct usage of some of DVD's capabilities, like time series or trending functions. However, if a particular type of graph is needed for a specific level, either an ad-hoc query or a casting operation can be used, as sketched below.</p>
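<p>One possible casting workaround, as a purely hypothetical sketch: if the day-level member names happen to render as <code>DD-MON-YY</code> strings (the actual format depends on your environment's NLS settings, so adjust the mask accordingly), a wrapper query can turn them back into proper dates for DVD to use:</p> <pre><code>SELECT TO_DATE(TIME_HIER.MEMBER_NAME, 'DD-MON-YY') AS CAL_DAY,  -- hypothetical format mask
       F1_REVENUE
FROM   F0_SALES_BASE_MEASURES
WHERE  TIME_HIER.LEVEL_NAME    = 'CAL_DAY'
AND    PRODUCT_HIER.LEVEL_NAME = 'ALL';
</code></pre>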
<h1 id="conclusion">Conclusion</h1> <p>In this blog post we analysed Analytic Views, a new component in Oracle Database 12.2, and how they can be used to "move" metadata modeling to the database level to provide an easier query syntax to end users.</p> <p>Usually metadata modeling is done in reporting tools like OBIEE, which offer an additional set of features on top of those included in analytic views. However, centralized reporting tools like OBIEE are not present everywhere and, with the wave of self-service BI tools, analytic views represent a perfect method of enabling users not familiar with SQL to simply query their enterprise data.</p> <p>If you are interested in understanding more about analytic views or metadata modeling, don't hesitate to <a href="https://www.rittmanmead.com/contact/">contact us</a>! <br> If you want to improve the SQL skills of your company workforce, check out our recently launched <a href="https://www.rittmanmead.com/sql-for-beginners/?utm_campaign=SQL%2520Training&amp;utm_source=SQL%2520Training">SQL for beginners training</a>!</p> Francesco Tisiot bb5fb1ef-3c82-474b-be4f-abe49c2d0ae7 Mon Apr 03 2017 10:00:00 GMT-0400 (EDT) Oracle GoldenGate Consultant http://gavinsoorma.com/2017/04/oracle-goldengate-consultant/ <p><a href="http://gavinsoorma.com/wp-content/uploads/2017/04/ad2.jpg" rel="attachment wp-att-7498"><img class="aligncenter size-full wp-image-7498" src="http://gavinsoorma.com/wp-content/uploads/2017/04/ad2.jpg" alt="ad2" width="1281" height="483" /></a></p> Gavin Soorma http://gavinsoorma.com/?p=7500 Mon Apr 03 2017 00:49:14 GMT-0400 (EDT) Oracle OpenWorld 2017: Call for Speakers Is Now Open http://www.odtug.com/p/bl/et/blogaid=695&source=1 What’s the big idea? Share yours with the world's largest gathering of Oracle technologists and business leaders during Oracle OpenWorld 2017, happening October 1-5, 2017 in San Francisco. This year’s Call for Speakers is NOW open. ODTUG http://www.odtug.com/p/bl/et/blogaid=695&source=1 Fri Mar 31 2017 11:19:25 GMT-0400 (EDT) Installing OBIEE 12c on Exalytics https://medium.com/red-pill-analytics/installing-obiee-12c-on-exalytics-a5042778eb12?source=rss----abcc62a8d63e---4 <figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*MOovkVtOVq8JNqe_3P0JlA.jpeg" /><figcaption>Photo Credit: <a href="https://unsplash.com/collections/614563/utility-infrastructure?photo=v_CxSroHKWg">Matthew Hamilton</a></figcaption></figure><h4>The Definitive Guide (Since Oracle Doesn’t Have One Yet)</h4><p>Red Pill Analytics recently performed the installation of OBIEE 12c on an Exalytics machine for one of our customers. There is no option to use the <a href="https://docs.oracle.com/cd/E62968_01/bi.1/e62967/install_procedure.htm#BIXIA3634">Domain Management Utility</a> for installing OBIEE 12c… that’s still only capable of installing OBIEE 11g. OBIEE 12c is therefore a manual install on Exalytics, but it’s very similar to installing OBIEE 12c on standard OEL 6.6.</p><p>The customer wanted to host both the DEV and TEST environments on one machine.
We could have used one Middleware Home and multiple Domains (which is very easy in 12c), but the client wanted complete isolation between DEV and TEST, which is best delivered by maintaining dual Middleware Homes. There are of course pros and cons to each approach, but due to the customer requirements, we took this route.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/754/1*tcyRxDOONaxKa4-1hvWSpw.png" /><figcaption>This is the recommended architecture for this installation.</figcaption></figure><p>Finally, we also wanted to split the BI Domains from the Middleware Homes, which is the <a href="https://docs.oracle.com/middleware/12212/lcm/BIEIG/GUID-16F78BFD-4095-45EE-9C3B-DB49AD5CBAAD.htm#GUID-EF0F7D8D-6338-4AB4-AE53-F8F7797D2316">recommended install technique for 12c</a>. This was not an option in OBIEE 11g, as the Domain was really useless and it was the Instance that was key, so patching meant upgrading the installation and the configuration at the same time. Now, everything that was done by the Instance in 11g is managed by the Domain, which, like most of Oracle Middleware, is where it belongs.</p><p>So what does this actually mean? In simple terms, an upgrade of OBIEE 12c means creating a new Fusion Middleware Home and then upgrading the Domain to use that new Fusion Middleware Home. With an architecture like this, we can avoid some confusion. For example, if we place the Domain inside of the Fusion Middleware Home (old school), when we upgrade, the Domain will be within an older version of FMW but pointed at a newer version. That seems strange and confusing to me. Don’t feel bad if you didn’t know this OBIEE 12c fun fact; we were in contact with Oracle ACS during this process, and surprisingly even they were unaware of it!</p><p>This post will walk through the steps of accomplishing these tasks. These steps could be adapted for installing on any Linux environment that is pursuing an architecture set up like this.</p><p>A few additional notes before we get started, then. I have set up the directory structure so that each OBI instance will be on its own mount point (/u01 and /u02). I also created two separate oracle users, one for each instance. Usernames like Oracle01 and Oracle02 (or something similar) work well in this case; these easily correspond to the mount points our environments are deployed on. Why bother with separate users at all? This allows us to avoid any complications of having one user be the software owner for multiple instances. You may need to edit the memory limits set for these users in the /etc/security/limits.conf file. Our default limits were lower than what we needed to run the services, which could make you look a bit like Han Solo when you see the config failing during the startup phase.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/268/1*ZoAixM33Muh_u8ObAb9Y8Q.gif" /><figcaption>Yeah, yeah. We know it wasn’t in the documentation to edit that file.</figcaption></figure><p>I installed using /u02, so the instructions will reflect those paths. Be sure to change your scripts to match your directories where applicable. Also note that I will be installing via response file, not with the GUI.
Here are the response files for the <a href="https://redpillanalytics.slack.com/files/phil.goerdt/F4S5UJYH1/fmw_install.rsp">FMW Infrastructure Install,</a> the <a href="https://slack-files.com/T06TQHAHM-F4S8ZJ160-835e6a8325">OBIEE Install</a>, and the <a href="https://slack-files.com/T06TQHAHM-F4S8ZE0DA-6ef87c6397">OBIEE Configuration</a>.</p><ol><li>Obtain the software files and move them to your Linux environment. You can download them and SCP them to the machine; or, you can use WGET to have the files land directly on the machine. For this installation you will need JDK 1.8.0 version 101 or higher, the FMW Infrastructure installer, and the OBIEE 12c files.</li><li>Install the JDK (be sure to have 64-bit) into the appropriate directory by unzipping the package and moving the directory. We’ll be using partitioned JDKs in this environment, meaning that we will have a JDK that supports each OBI environment. I placed mine in /u02/oracle/java.</li><li>Then set the JAVA_HOME variable to your new jdk directory by running an export command. Additionally, export this to the PATH variable. Make sure that they have exported correctly by echoing these variables. Finally, check the java version with the java -version command. The commands are listed below.</li></ol><pre>export JAVA_HOME=/u02/oracle/java/jdk1.8.0_111</pre><pre>export PATH=$JAVA_HOME/bin:$PATH</pre><pre>echo $JAVA_HOME $PATH</pre><pre>java -version</pre><p>To work around any potential X forwarding or tunneling issues you may have on your machine, I’ll be describing how to do the silent install. This install type will run the install without a GUI window, meaning that all of the parameters for the install must be set within the response file. This also means that we will have to create all of the directories that we want our files to land in, as we won’t be able to create them on the fly within the GUI. Let’s begin.</p><p>Run the following commands to create the directory structure:</p><pre>mkdir /u02/oracle/product<br>mkdir /u02/oracle/product/12.2.1.2<br>mkdir /u02/oracle/product/12.2.1.2/obi_1<br>mkdir /u02/oracle/config<br>mkdir /u02/oracle/config/domains<br>mkdir /u02/oracle/config/domains/bi<br>mkdir /u02/oracle/oraInventory<br>echo &#39;inventory_loc=/u02/oracle/oraInventory&#39; &gt;&gt; /u02/oracle/oraInventory/oraInst.loc<br>echo &#39;inst_group=oracle&#39; &gt;&gt; /u02/oracle/oraInventory/oraInst.loc</pre><p>Once you have created the directory structure and staged the install files, it’s time to edit the response files. In the location where you have unzipped the software, open the .rsp file and edit the ORACLE_HOME value to match the directory above. Mine was /u02/oracle/product/12.2.1.2/obi_1. Also edit any other parameters in the response file that need to be changed for your environment. Then execute the following command to install the infrastructure. Your command may be different depending on your directory setup.</p><pre>java -jar fmw_12.2.1.2.0_infrastructure.jar -silent -responseFile ./fmw_12.2.1.2.0_infrastructure.rsp -invPtrLoc /u02/oracle/oraInventory/oraInst.loc </pre><p>Once the infrastructure has finished installing, we’ll need to export several variables for the OBIEE software install. Run the following commands to export the DOMAIN_HOME, FMW_HOME, and PATH variables. As before, I recommend setting these in the user’s bash profile as well.
Once you have set the variables, be sure to check them with the echo command.</p><pre>export DOMAIN_HOME=/u02/oracle/config/domains/bi<br>export FMW_HOME=/u02/oracle/product/12.2.1.2/obi_1<br>export PATH=$PATH:$DOMAIN_HOME/bitools/bin:$FMW_HOME/oracle_common/common/bin<br>echo $DOMAIN_HOME $FMW_HOME $PATH</pre><p>Now unzip the OBIEE software packages and edit the response file to reflect the environment you’d like to create. Specifically, you will have to edit the DECLINE_AUTO_UPDATES, ORACLE_HOME, and INSTALL_TYPE parameters to conduct the install. Once the file is ready, execute the installer by running the following command (again, be sure to tailor it to your environment).</p><pre>./biplatform-12.2.1.2.0_linux64.bin -silent -responseFile /u02/oracle/bi_platform-12.2.1.2.0_linux64.rsp -invPtrLoc /u02/oracle/oraInventory/oraInst.loc</pre><p>Once this install has completed, make edits to your config response file, and navigate to the following path: $FMW_HOME/bi/bin. Here you will find the config scripts for running the configuration of your environment. Run the command below to kick off the configuration (with edits made for your environment, of course).</p><pre>./config.sh -silent -responseFile /u02/oracle/config.rsp</pre><p>Once the configuration has completed, the services should start right up, and you’ll be able to start using your environment!</p><hr><p><a href="https://medium.com/red-pill-analytics/installing-obiee-12c-on-exalytics-a5042778eb12">Installing OBIEE 12c on Exalytics</a> was originally published in <a href="https://medium.com/red-pill-analytics">Red Pill Analytics</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p> Phil Goerdt https://medium.com/p/a5042778eb12 Fri Mar 31 2017 09:13:07 GMT-0400 (EDT)
GaOUG Tech Days Session http://redpillanalytics.com/gaoug-session/ <p><img width="300" height="150" src="https://i0.wp.com/redpillanalytics.com/wp-content/uploads/2017/03/atlanta.jpg?fit=300%2C150" class="attachment-medium size-medium wp-post-image" alt="GAOUG" /></p><p><a href="http://gaoug.strikingly.com/#tech-days-2017">GaOUG Tech Days</a> is just around the corner: May 9-10 to be exact. We have an incredible venue in Downtown Atlanta called <a href="http://loudermilkcenter.com">the Loudermilk Center</a> with exceptional event space throughout. We have a great <a href="https://www.technicalconferencesolutions.com/pls/caat/caat_abstract_reports.schedule?conference_id=174">collection of national speakers</a> from the Oracle Community, so the locals in Atlanta can get a feel for larger, more national conferences without leaving the Dirty Dirty.
Additionally, we have the hugely popular, always engaging Oracle Master Product Manager <a href="https://twitter.com/sqlmaria">Maria Colgan</a> giving the keynote <a href="http://gaoug.strikingly.com/#keynote-address">What to Expect from Oracle Database 12c</a>.</p> <div id="attachment_4784" style="width: 656px" class="wp-caption alignnone"><img class="wp-image-4784 size-full" src="https://i2.wp.com/redpillanalytics.com/wp-content/uploads/2017/03/2f861d5.png?resize=646%2C223" alt="The Loudermilk Center in Downtown Atlanta" /><p class="wp-caption-text">The Loudermilk Center in Downtown Atlanta is the perfect venue for GaOUG Tech Days.</p></div> <p>This blog post is a member of a &#8220;blog hop&#8221; from several of the GaOUG Tech Days speakers. A blog hop is similar to a <a href="https://en.wikipedia.org/wiki/Webring">webring</a> (I&#8217;m dating myself here): a group of bloggers get together and orchestrate a series of posts that are published in unison, with links between them. We tried to represent several streams running at Tech Days: Cloud, Database, Middleware, and Big Data. Just like my post here, each of the blog posts below will be talking about sessions, and why you should attend GaOUG Tech Days.
Thanks to the speakers for all that they do, including helping us get the word out about the conference.</p> <p>&nbsp;</p> <p><a href="http://dbasolved.com/2017/03/31/oracle-goldengate-101-at-ioug-17-and-gaoug-techdays-17-within-two-month/">Bobby Curtis</a>, Oracle, &#8220;Oracle GoldenGate 101&#8221;</p> <p><a href="http://blog.dbvisit.com/kafka-for-the-oracle-dba/">Chris Lawless</a>, Dbvisit, &#8220;Kafka for the Oracle DBA&#8221;</p> <p><a href="http://dbaontap.com/2017/03/31/gaoug-tech-days-session-rest-request-json-apex-packages/">Danny Bryant</a>, Accenture Enkitec, &#8220;REST Request and JSON with APEX Packages&#8221;</p> <p><a href="https://theitside.net/2017/03/30/the-east-coast-gets-some-epm-lovegatechdays17/">Eric Helmer</a>, Mercury Technology Group, &#8220;Maintaining, Monitoring, Administering, and Patching Oracle EPM Systems&#8221;</p> <p><a href="https://jimczuprynski.wordpress.com/2017/03/31/2017-05-09_gaoug/">Jim Czuprynski</a>, OnX Enterprise Solutions, &#8220;DBA 3.0: Transform Yourself into a Cloud DBA, or Face a Stormy Future&#8221; and &#8220;Stop Guessing, Start Analyzing: New Analytic View Features in Oracle Database 12cR2&#8221;</p> <h2>Apache Kafka and Data Streaming</h2> <p>So you&#8217;d like me to tell you a little bit about my session? Sure. Of course. If you say so. As you can guess from the title, I&#8217;ll be talking about Apache Kafka. With new data management and data pipeline frameworks burrowing their way into the enterprise, Kafka has been a popular project lately because it provides a single point of ingestion for enterprise data regardless of how that data will be consumed downstream. Traditionally, when building standard ETL processes, we would tightly couple the ingestion of data, the processing of data, and their downstream delivery: it&#8217;s all right there in the name: &#8220;extract, transform, load.&#8221; Kafka brings an elevated order to this chaos. 
It serves as the enterprise distributed commit log and enables us to ingest data without worrying about how we plan to use them.</p> <div id="attachment_4789" style="width: 710px" class="wp-caption aligncenter"><img data-attachment-id="4789" data-permalink="http://redpillanalytics.com/gaoug-session/kafka-high-level/" data-orig-file="https://i0.wp.com/redpillanalytics.com/wp-content/uploads/2017/03/Kafka-High-Level.png?fit=2666%2C1500" data-orig-size="2666,1500" data-comments-opened="1" data-image-meta="{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}" data-image-title="Kafka High Level" data-image-description="" data-medium-file="https://i0.wp.com/redpillanalytics.com/wp-content/uploads/2017/03/Kafka-High-Level.png?fit=300%2C169" data-large-file="https://i0.wp.com/redpillanalytics.com/wp-content/uploads/2017/03/Kafka-High-Level.png?fit=1024%2C576" class="wp-image-4789" src="https://i0.wp.com/redpillanalytics.com/wp-content/uploads/2017/03/Kafka-High-Level.png?resize=700%2C394" alt="" srcset="https://i0.wp.com/redpillanalytics.com/wp-content/uploads/2017/03/Kafka-High-Level.png?resize=1024%2C576 1024w, https://i0.wp.com/redpillanalytics.com/wp-content/uploads/2017/03/Kafka-High-Level.png?resize=300%2C169 300w, https://i0.wp.com/redpillanalytics.com/wp-content/uploads/2017/03/Kafka-High-Level.png?resize=768%2C432 768w, https://i0.wp.com/redpillanalytics.com/wp-content/uploads/2017/03/Kafka-High-Level.png?w=2340 2340w" sizes="(max-width: 700px) 100vw, 700px" data-recalc-dims="1" /><p class="wp-caption-text">Apache Kafka is a single point of Data Ingestion</p></div> <p>Even though it wasn&#8217;t available when I submitted the abstract, <a href="https://cloud.oracle.com/event-hub">Event Hub in the Oracle Cloud</a> is now available, which is full-on Apache Kafka in the Cloud, perhaps making Oracle the first of the enterprise Cloud vendors to introduce this. There&#8217;s quite a bit of scaffolding around Event Hub to make it gel with the rest of the Oracle Cloud; we&#8217;ll take a look and see how &#8220;Kafka&#8221; it is, and whether this additional scaffolding bears fruit for developers and administrators.</p> <h2>Analytic Microservices</h2> <p>With Kafka as the backbone, I&#8217;ll prescribe a new way of thinking about data and analytics. Taking a cue from the modern practice of building applications as disparate collections of separate, connected services, we&#8217;ll investigate this practice and see why it&#8217;s so appealing. We&#8217;ll explore how Apache Kafka enables the use of similar design techniques to deliver a cohesive analytics platform.</p> <h2>See us at GaOUG Tech Days</h2> <p>If you&#8217;re going to be in Atlanta on May 9 and 10, then <a href="https://www.eventbrite.com/e/gaoug-tech-days-2017-registration-28290002158">register for GaOUG Tech Days</a> and join us for a great couple of days. 2017 will be our best event to date for the GaOUG, including excellent content and a can&#8217;t-be-beat keynote speaker in Maria Colgan. If you want to chat with me directly about Tech Days, then <a href="http://www.twitter.com/stewartbryson">reach out to me</a> and let me know how I can help. 
We look forward to seeing you in Atlanta in May.</p> Stewart Bryson http://redpillanalytics.com/?p=4761 Fri Mar 31 2017 00:39:44 GMT-0400 (EDT) Oracle Database 12c Release 2 New Feature – Application Containers http://gavinsoorma.com/2017/03/oracle-database-12c-release-2-new-feature-application-containers/ <p>One of the new multitenancy related features in Oracle 12c Release 2 is <strong>Application Containers</strong>.</p> <p>In 12c Release 1, we could have a Container database (CDB) host a number of optional pluggable databases or PDBs. Now in 12.2.0.1, the multitenancy feature has been enhanced further and we can now have not only CDBs and PDBs but also another component called an Application Container, which in essence is a hybrid of a CDB and a PDB.</p> <p>So now in 12.2.0.1, a CDB can contain (optionally) user created Application Containers and then Application Containers can in turn host one or more PDBs.</p> <p>For example, an Application Container can contain a number of PDBs which contain individual sales data of different regions, but at the same time can share what are called common objects.</p> <p>Maybe each region&#8217;s PDB has data just for that region, but the table structure is the same regardless of the region. In that case the table definition (or metadata) is stored in the application container accessible to all the PDBs hosted by that application container. If any changes are required to be made for application tables, then that DDL change need only be made once in the central application container and that change will then be visible to all the PDBs hosted by that application container.</p> <p>Or there are some tables which are common to all the PDBs &#8211; some kind of master data maybe. And rather than have to store this common data in each individual PDB (as was the case in 12.1.0.2), we just store it once in a central location which is the application container and then that data is visible to all the hosted PDBs.</p> <p>In other words, <strong>an application container functions as an application specific CDB within a CDB</strong>.</p> <p>Think of a Software as a Service (SaaS) deployment model where we are hosting a number of customers and each customer has its own individual data which needs to be stored securely in a separate database but at the same time we need to share some metadata or data which is common to all the customers.</p> <p>Let&#8217;s have a look at a simple example of 12c Release 2 Application Containers at work.</p> <p>The basic steps are:</p> <ul> <li>Create the Application Container</li> <li>Create the Pluggable Databases</li> <li>Install the Application</li> <li>After installing the application, synchronize the pluggable databases with the application container root so that any changes in terms of DDL or DML made by the application are now visible to all hosted pluggable databases</li> <li>Optionally upgrade or deinstall the application (a hedged sketch of the uninstall step appears at the end of this post) </li> </ul> <p>&nbsp;</p> <p><strong>Create</strong> the <strong>Application Container</strong><br /> &nbsp;</p> <pre>SQL&gt; CREATE PLUGGABLE DATABASE appcon1 AS APPLICATION CONTAINER ADMIN USER appadm IDENTIFIED BY oracle FILE_NAME_CONVERT=('/u03/app/oradata/cdb1/pdbseed/','/u03/app/oradata/cdb1/appcon1/');  Pluggable database created. </pre> <p>&nbsp;<br /> <strong>Create the Pluggable Databases</strong> which are to be hosted by the Application Container by connecting to the application container root<br /> &nbsp;</p> <pre> SQL&gt; alter session set container=appcon1; &nbsp; Session altered. 
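&nbsp; SQL&gt; -- Editor's aside (not part of the original post): at this point 'show con_name'
SQL&gt; -- should report APPCON1, confirming the session is in the application root
SQL&gt; -- before the root is opened and the hosted PDBs are created.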
&nbsp; SQL&gt; alter pluggable database open; &nbsp; Pluggable database altered. &nbsp; SQL&gt; CREATE PLUGGABLE DATABASE pdbhr1 ADMIN USER pdbhr1_adm IDENTIFIED BY oracle FILE_NAME_CONVERT=('/u03/app/oradata/cdb1/pdbseed/','/u03/app/oradata/cdb1/appcon1/pdbhr1/'); &nbsp; Pluggable database created. &nbsp; SQL&gt; SQL&gt; CREATE PLUGGABLE DATABASE pdbhr2 ADMIN USER pdbhr2_adm IDENTIFIED BY oracle FILE_NAME_CONVERT=('/u03/app/oradata/cdb1/pdbseed/','/u03/app/oradata/cdb1/appcon1/pdbhr2/'); &nbsp; Pluggable database created. &nbsp; SQL&gt; SQL&gt; alter pluggable database all open; &nbsp; Pluggable database altered. </pre> <p>&nbsp;</p> <p><strong>Install </strong>the application<br /> &nbsp;<br /> In the first example we will be seeing how some common data is being shared among all the pluggable databases. Note the keyword <strong>SHARING=DATA</strong>.<br /> &nbsp;</p> <pre> SQL&gt; alter pluggable database application region_app begin install '1.0'; &nbsp; Pluggable database altered. &nbsp; SQL&gt; create user app_owner identified by oracle; &nbsp; User created. &nbsp; SQL&gt; grant connect,resource,unlimited tablespace to app_Owner; &nbsp; Grant succeeded. &nbsp; SQL&gt; create table app_owner.regions 2  sharing=data 3  (region_id number, region_name varchar2(20)); &nbsp; Table created. &nbsp; SQL&gt; insert into app_owner.regions 2  values (1,'North'); &nbsp; 1 row created. &nbsp; SQL&gt; insert into app_owner.regions 2  values (2,'South'); &nbsp; 1 row created. &nbsp; SQL&gt; commit; &nbsp; Commit complete. &nbsp; SQL&gt; alter pluggable database application region_app end install '1.0'; &nbsp; Pluggable database altered. </pre> <p>&nbsp;</p> <p><strong>View </strong>information about Application Containers via the <strong>DBA_APPLICATIONS</strong> view</p> <p>&nbsp;</p> <pre>SQL&gt; select app_name,app_status from dba_applications; APP_NAME -------------------------------------------------------------------------------- APP_STATUS ------------ APP$4BDAAF8836A20F9CE053650AA8C0AF21 NORMAL REGION_APP NORMAL </pre> <p><strong>Synchronize</strong> the pluggable databases with the application root<br /> &nbsp;<br /> Note that until this is done, changes made by the application install are not visible to the hosted PDBs.<br /> &nbsp;</p> <pre> SQL> alter session set container=pdbhr1; Session altered. SQL> select * from app_owner.regions; select * from app_owner.regions * ERROR at line 1: ORA-00942: table or view does not exist SQL> alter pluggable database application region_app sync; Pluggable database altered. SQL> select * from app_owner.regions; REGION_ID REGION_NAME ---------- -------------------- 1 North 2 South SQL> alter session set container=pdbhr2; Session altered. SQL> alter pluggable database application region_app sync; Pluggable database altered. 
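SQL> -- Editor's aside (not part of the original post): from the application root,
SQL> -- the DBA_APP_PDB_STATUS view reports which application version each hosted
SQL> -- PDB has synced to, e.g. select app_name, app_status from dba_app_pdb_status;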
SQL> select * from app_owner.regions; REGION_ID REGION_NAME ---------- -------------------- 1 North 2 South </pre> <p>&nbsp;</p> <p>Note that any direct DDL or DML is not permitted in this case<br /> &nbsp;</p> <pre> SQL> drop table app_owner.regions; drop table app_owner.regions * ERROR at line 1: ORA-65274: operation not allowed from outside an application action SQL> insert into app_owner.regions values (3,'East'); insert into app_owner.regions values (3,'East') * ERROR at line 1: ORA-65097: DML into a data link table is outside an application action </pre> <p>Let us now <strong>upgrade the application</strong> we just created and create the same application table, but this time with the keyword <strong>SHARING=METADATA</strong><br /> &nbsp;</p> <pre> SQL> alter pluggable database application region_app begin upgrade '1.0' to '1.1'; Pluggable database altered. SQL> select app_name,app_status from dba_applications; APP_NAME -------------------------------------------------------------------------------- APP_STATUS ------------ APP$4BDAAF8836A20F9CE053650AA8C0AF21 NORMAL REGION_APP UPGRADING SQL> drop table app_owner.regions; Table dropped. SQL> create table app_owner.regions 2 sharing=metadata 3 (region_id number,region_name varchar2(20)); Table created. SQL> alter pluggable database application region_app end upgrade; Pluggable database altered. </pre> <p>&nbsp;<br /> We can now see that the table definition is the same in both the PDBs, but each PDB can now insert its own individual data in the table.<br /> &nbsp;</p> <pre> SQL> alter session set container=pdbhr1; Session altered. SQL> alter pluggable database application region_app sync; Pluggable database altered. SQL> desc app_owner.regions Name Null? Type ----------------------------------------- -------- ---------------------------- REGION_ID NUMBER REGION_NAME VARCHAR2(20) SQL> insert into app_owner.regions 2 values (1,'North'); 1 row created. SQL> insert into app_owner.regions 2 values (2,'North-East'); 1 row created. SQL> commit; Commit complete. SQL> select * from app_owner.regions; REGION_ID REGION_NAME ---------- -------------------- 1 North 2 North-East SQL> alter session set container=pdbhr2; Session altered. SQL> alter pluggable database application region_app sync; Pluggable database altered. SQL> desc app_owner.regions Name Null? Type ----------------------------------------- -------- ---------------------------- REGION_ID NUMBER REGION_NAME VARCHAR2(20) SQL> select * from app_owner.regions; no rows selected SQL> insert into app_owner.regions 2 values (1,'South'); 1 row created. SQL> insert into app_owner.regions 2 values (2,'South-East'); 1 row created. SQL> commit; Commit complete. SQL> select * from app_owner.regions; REGION_ID REGION_NAME ---------- -------------------- 1 South 2 South-East </pre> <p>&nbsp;<br /> While DML activity is permitted in this case, DDL activity is still not permitted.<br /> &nbsp;</p> <pre> SQL> drop table app_owner.regions; drop table app_owner.regions * ERROR at line 1: ORA-65274: operation not allowed from outside an application action SQL> alter table app_owner.regions 2 add (region_location varchar2(10)); alter table app_owner.regions * ERROR at line 1: ORA-65274: operation not allowed from outside an application action </pre> <p>&nbsp;<br /> We will now perform another upgrade to the application and this time note the keyword <strong>SHARING=EXTENDED DATA</strong>. 
In this case, while some portion of the data is common and shared among all the PDBs, each individual PDB still has the flexibility to store additional data specific to that PDB in the table along with the common data which is the same for all the PDBs.<br /> &nbsp;</p> <pre> SQL> alter session set container=appcon1; Session altered. SQL> alter pluggable database application region_app begin upgrade '1.1' to '1.2'; Pluggable database altered. SQL> drop table app_owner.regions; Table dropped. SQL> create table app_owner.regions 2 sharing=extended data 3 (region_id number,region_name varchar2(20)); Table created. SQL> insert into app_owner.regions 2 values (1,'North'); 1 row created. SQL> commit; Commit complete. SQL> alter pluggable database application region_app end upgrade; Pluggable database altered. </pre> <p>&nbsp;<br /> Note that the PDBs share some common data, but each individual PDB can insert its own data.<br /> &nbsp;</p> <pre> SQL> alter session set container=pdbhr1; Session altered. SQL> alter pluggable database application region_app sync; Pluggable database altered. SQL> select * from app_owner.regions; REGION_ID REGION_NAME ---------- -------------------- 1 North SQL> insert into app_owner.regions 2 values 3 (2,'North-West'); 1 row created. SQL> commit; Commit complete. SQL> select * from app_owner.regions; REGION_ID REGION_NAME ---------- -------------------- 1 North 2 North-West SQL> alter session set container=pdbhr2; Session altered. SQL> select * from app_owner.regions; REGION_ID REGION_NAME ---------- -------------------- 1 South 2 South-East SQL> alter pluggable database application region_app sync; Pluggable database altered. SQL> select * from app_owner.regions; REGION_ID REGION_NAME ---------- -------------------- 1 North </pre>
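<p>&nbsp;<br /> The last basic step listed at the start of this post was optionally deinstalling the application. That step is not demonstrated above, so the following is only a hedged sketch (reusing the <strong>region_app</strong> application from the examples above; output is omitted): like install and upgrade, an uninstall is bracketed by begin/end statements issued from the application root, and each hosted PDB picks up the change on its next sync.<br /> &nbsp;</p> <pre> SQL> alter session set container=appcon1; SQL> alter pluggable database application region_app begin uninstall; SQL> -- statements that remove the application's objects go here, e.g. SQL> -- drop table app_owner.regions; SQL> alter pluggable database application region_app end uninstall; SQL> alter session set container=pdbhr1; SQL> alter pluggable database application region_app sync; </pre>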
Gavin Soorma http://gavinsoorma.com/?p=7489 Wed Mar 29 2017 23:23:22 GMT-0400 (EDT) OUG Ireland 2017 Presentation http://www.oralytics.com/2017/03/oug-ireland-2017-presentation.html Here are the slides from my presentation at OUG Ireland 2017. All about running R using SQL. 
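<p>The deck itself is embedded below rather than reproduced in the feed, so as a hedged illustration only (not taken from the slides), here is the general shape of embedded R execution through SQL, assuming Oracle R Enterprise&#8217;s <code>rqScriptCreate</code> / <code>rqEval</code> interface; the script name <code>simple_df</code> and its output column are invented for the example.</p> <pre><code>-- Register a small R function in the database, then invoke it from SQL.
BEGIN
  sys.rqScriptCreate('simple_df',
    'function() { data.frame(VAL = 1:3) }');
END;
/

-- The second argument to rqEval is a query describing the shape of the
-- result set returned by the R function.
SELECT *
FROM   table(rqEval(NULL, 'select 1 as val from dual', 'simple_df'));
</code></pre>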
<iframe src="//www.slideshare.net/slideshow/embed_code/key/if3lYyJZEXrzEE" width="595" height="485" frameborder="0" marginwidth="0" marginheight="0" scrolling="no" style="border:1px solid #CCC; border-width:1px; margin-bottom:5px; max-width: 100%;" allowfullscreen> </iframe> <div style="margin-bottom:5px"> <strong> <a href="//www.slideshare.net/BrendanTierney/embedded-r-execution-using-sql" title="Embedded R Execution using SQL" target="_blank">Embedded R Execution using SQL</a> </strong> from <strong><a target="_blank" href="//www.slideshare.net/BrendanTierney">Brendan Tierney</a></strong> </div> Brendan Tierney tag:blogger.com,1999:blog-4669933501315263808.post-2092800735146639166 Wed Mar 29 2017 12:36:00 GMT-0400 (EDT) Real World OBIEE: Demystification of Variables Pt. 3 http://www.rittmanmead.com/blog/2017/03/real-world-obiee-demystification-of-variables-pt-3/ <img src="http://www.rittmanmead.com/blog/content/images/2017/03/OBIEE-12c.png" alt="Real World OBIEE: Demystification of Variables Pt. 3"><p>In <a href="https://www.rittmanmead.com/blog/2017/03/real-world-obiee-demystification-of-variables-pt-2/">part two</a> of this blog series, I went over using Repository, System and Presentation Variables to make reports dynamic for any series of time. In part three, I am going to talk about making reports dynamic for periods of time using built in functions within Answers itself. </p> <p><strong>Real World</strong></p> <p>While it's a lot more efficient to create Repository Variables to use in filters and prompts for time dimensions, sometimes it is simply not possible. Perhaps you are a front end developer for OBIEE and have no access to the RPD or the database. Perhaps you have no communication with the person in your organization who handles all of the RPD development and therefore cannot submit any change requests.</p> <p>Don't worry. We've got you covered.</p> <p>There are several functions and tricks you can use within Answers itself to make reports dynamic and eliminate having to hardcode dates. </p> <p><strong>The Scenario</strong></p> <p>I am going to use the same scenario I used for <a href="https://www.rittmanmead.com/blog/2017/03/real-world-obiee-demystification-of-variables-pt-2/">part two</a> of this series for example one. To recap, here are the requirements:</p> <p>I have been asked to create a report that is going to reside on a products dashboard. It needs to have the same product grouping as the report I used in <a href="https://www.rittmanmead.com/blog/2017/03/real-world-obiee-demystification-of-variables-pt-1/">part one</a> of this series, needs to contain 'Gross Rev $', 'Net Rev $' and '# of Orders' and have a prompt that can select between the first and current day of the month and every day in-between. The person who requested the report wants the prompt to change dynamically with each month and does not want users to be able to select future dates.</p> <p>In <a href="https://www.rittmanmead.com/blog/2017/03/real-world-obiee-demystification-of-variables-pt-2/">part two</a>, I used a custom SQL statement which used Repository Variables I created to populate all the date values from the first of every month to the current date for Variable Prompts. There is a gap in the data loads for # of Orders in which data does not update until the 2nd or 3rd of each new month. The person who requested the report wanted a summary of the previous month's '# of Orders' to be shown until the data is updated for the current month. 
I used a Repository Variable that returned the value of the previous month with the current year and used a CASE statement along with Filter Expressions to switch between the Filter Expression using the Repository Variable (<em>Prev_Month</em>) if the day of the month was &lt;= 2 or if # of Orders is null and the Filter Expression which contained the <em>Start Date</em> and <em>End_Date</em> Presentation Variable placeholders which were defined in my Variable Prompts.</p> <p><strong>Example One</strong></p> <p>In this example, I have to figure out a way to make the report dynamic with only the functions available within Answers. There are two parts to this example. First I need to use a function that will return the previous month's value for the Calendar Year Month column to use with the '# of Orders' column. This will replace the Repository Variable <em>Prev_Month</em> I used in <a href="https://www.rittmanmead.com/blog/2017/03/real-world-obiee-demystification-of-variables-pt-2/">part two</a> of this series. Second I need to write a new SQL statement for the <em>Start Date</em> and <em>End_Date</em> prompts I created in <a href="https://www.rittmanmead.com/blog/2017/03/real-world-obiee-demystification-of-variables-pt-2/">part two</a> and also define a new SQL statement for the default values in those prompts.</p> <p><strong>Part 1</strong></p> <p>I am going to start by creating a new statement to return the previous month's value for the Calendar Year Month column. I can use the <em>TIMESTAMPADD</em> function in conjunction with the extraction syntax 'YEAR' and 'MONTH' to return the desired results. Let's take a look at the entire statement and then I will break it down. </p> <pre><code>CAST(YEAR(timestampadd(SQL_TSI_MONTH, -1,CURRENT_DATE))*100+MONTH(timestampadd(SQL_TSI_MONTH, -1,CURRENT_DATE)) AS VARCHAR(6))</code></pre> <p><strong>1</strong>. <em>TIMESTAMPADD</em> - This defines that I am going to produce a new date by adding a time interval to another date.</p> <p><strong>2</strong>. <em>SQL_TSI_MONTH</em> - The first argument in the function. It defines what interval of time the function will work with (in this case months).</p> <p><strong>3</strong>. <em>-1</em> - This is the interval of time that is applied to the third argument. </p> <p><strong>4</strong>. <em>CURRENT_DATE</em> - The third argument in the function. This is the date the interval is applied to.</p> <p><strong>5</strong>. <em>YEAR</em> and <em>MONTH</em> - This is the extraction syntax that will return only the year and the month respectively.</p> <p>Also notice that I have used <em>VARCHAR(6)</em> for the <em>CAST</em> argument. If I use <em>VARCHAR</em>, I can specify the exact number of characters I want returned. For example, if CURRENT_DATE falls anywhere in March 2017, the expression returns '201702'. 
</p> <p>Now I need to copy my column formula and paste it into the column formula that I created for '# of Orders' in <a href="https://www.rittmanmead.com/blog/2017/03/real-world-obiee-demystification-of-variables-pt-2/">part two</a> of this series.</p> <p>I am going to replace the <em>Prev_Month</em> Repository Variable with my statement, which will look like this.</p> <pre><code>CASE WHEN DAY(CURRENT_DATE) &lt;= 2 OR "Sales - Fact Sales"."Measures"."# of Orders" IS NULL THEN FILTER("Sales - Fact Sales"."Measures"."# of Orders" USING ("Sales - Fact Sales"."Periods"."Calendar Year Month" = CAST(YEAR(TIMESTAMPADD(SQL_TSI_MONTH, -1, CURRENT_DATE))*100+MONTH(TIMESTAMPADD(SQL_TSI_MONTH, -1, CURRENT_DATE)) AS VARCHAR(6)))) ELSE FILTER("Sales - Fact Sales"."Measures"."# of Orders" USING ("Periods"."Day Date" BETWEEN @{pv_start_dt}{date '2015-10-01'} AND @{pv_end_dt}{date '2015-10-15'})) END</code></pre> <p><img src="http://www.rittmanmead.com/blog/content/images/2017/03/Screen-Shot-2017-03-29-at-9.16.06-AM.png" alt="Real World OBIEE: Demystification of Variables Pt. 3"></p> <p>If I run the report, my results return as expected.</p> <p><img src="http://www.rittmanmead.com/blog/content/images/2017/03/Screen-Shot-2017-03-01-at-3.13.57-PM.png" alt="Real World OBIEE: Demystification of Variables Pt. 3"></p> <p><strong>Part 2</strong></p> <p>Now I need to write a new SQL statement for my <em>Start Date</em> and <em>End Date</em> prompts. In order to do this, I am going to need to use two functions: TIMESTAMPADD and CURRENT_DATE. First, let's take a look at the TIMESTAMP function. </p> <p>I am going to use the TIMESTAMP function to filter the <em>Day Date</em> column for the first day of the month. To demonstrate, I am going to create a new analysis and use the TIMESTAMP function in a column formula. My column formula looks like the following:</p> <pre><code>TIMESTAMPADD(SQL_TSI_DAY, -DAYOFMONTH(CURRENT_DATE) +1, CURRENT_DATE)</code></pre> <p>This formula can be broken down into four parts:</p> <p><strong>1</strong>. <em>TIMESTAMPADD</em> - This defines that I am going to produce a new date by adding a time interval to another date.</p> <p><strong>2</strong>. <em>SQL_TSI_DAY</em> - The first argument in the function. It defines what interval of time the function will work with (in this case days)</p> <p><strong>3</strong>. <em>-DAYOFMONTH(CURRENT_DATE)+1</em> - This is the interval of time that is applied to the third argument. In this case I am taking the negative of the day of the month and adding 1; adding that to the current date always returns the first day of the month. </p> <p><strong>4</strong>. <em>CURRENT_DATE</em> - The third argument in the function. This is the date the interval is applied to. </p> <p>This is only scratching the surface of what you can do with the <em>TIMESTAMP</em> function. If you would like more information, check out the blog on <a href="https://www.rittmanmead.com/blog/2014/12/timestamps-and-presentation-variables/">TIMESTAMPS</a> written by Brian Hall.</p> <p>I am going to add an additional column to the Criteria and use the <em>CURRENT_DATE</em> function in a column formula.</p> <p><img src="http://www.rittmanmead.com/blog/content/images/2017/03/Screen-Shot-2017-03-01-at-3.30.39-PM.png" alt="Real World OBIEE: Demystification of Variables Pt. 3"></p> <p><img src="http://www.rittmanmead.com/blog/content/images/2017/03/Screen-Shot-2017-03-02-at-8.36.13-AM.png" alt="Real World OBIEE: Demystification of Variables Pt. 
3"></p> <p>Now I am going to click on Results to show the results of the <em>TIMESTAMP</em> function and the <em>CURRRENT_DATE</em> function.</p> <p><img src="http://www.rittmanmead.com/blog/content/images/2017/03/Screen-Shot-2017-03-01-at-3.30.53-PM.png" alt="Real World OBIEE: Demystification of Variables Pt. 3"></p> <p>From the results you can see that I have both the first day of the month and the current date. Now I need to convert this into a filter for the <em>Day Date</em> column so that I can get the logical SQL query for my <em>Start Date</em> and <em>End Date</em> prompts.</p> <p><img src="http://www.rittmanmead.com/blog/content/images/2017/03/Screen-Shot-2017-03-02-at-8.54.16-AM.png" alt="Real World OBIEE: Demystification of Variables Pt. 3"></p> <p>In the New Filter window, I need to change the operator to <em>is between</em> and click on <em>Add More Options</em> to add a <em>SQL Expression</em>.</p> <p><img src="http://www.rittmanmead.com/blog/content/images/2017/03/Screen-Shot-2017-03-02-at-8.54.47-AM.png" alt="Real World OBIEE: Demystification of Variables Pt. 3"></p> <p>In the <em>SQL Expression</em> box, I need to put the <em>TIMESTAMP</em> function for current date from the previous example. In addition I need to add another <em>SQL Expression</em> for the <em>CURRENT_DATE</em> function.</p> <p><img src="http://www.rittmanmead.com/blog/content/images/2017/03/Screen-Shot-2017-03-02-at-8.55.27-AM.png" alt="Real World OBIEE: Demystification of Variables Pt. 3"></p> <p><img src="http://www.rittmanmead.com/blog/content/images/2017/03/Screen-Shot-2017-03-02-at-8.56.10-AM.png" alt="Real World OBIEE: Demystification of Variables Pt. 3"></p> <p>When I return to my Criteria, I can see the filter I created in the Filter window.</p> <p><img src="http://www.rittmanmead.com/blog/content/images/2017/03/Screen-Shot-2017-03-02-at-9.17.24-AM.png" alt="Real World OBIEE: Demystification of Variables Pt. 3"></p> <p>I can click on Results to run the report. The results for the <em>Day Date</em> column return as expected.</p> <p><img src="http://www.rittmanmead.com/blog/content/images/2017/03/Screen-Shot-2017-03-02-at-8.58.22-AM.png" alt="Real World OBIEE: Demystification of Variables Pt. 3"></p> <p>Now I can click on the Advanced tab and copy the logical SQL statement to use for my <em>Start Date</em> and <em>End Date</em> prompts.</p> <p><img src="http://www.rittmanmead.com/blog/content/images/2017/03/Screen-Shot-2017-03-02-at-9.23.34-AM.png" alt="Real World OBIEE: Demystification of Variables Pt. 3"></p> <p>Now I am going to paste the following into my <em>Start Date</em> Variable Prompt</p> <p><strong>Choice List Values > SQL Results</strong></p> <pre><code>SELECT "Sales - Fact Sales"."Periods"."Day Date" FROM "Sales - Fact Sales" WHERE ("Periods"."Day Date" BETWEEN TIMESTAMPADD(SQL_TSI_DAY, -DAYOFMONTH(CURRENT_DATE), CURRENT_DATE) AND CURRENT_DATE) ORDER BY "Periods"."Day Date"</code></pre> <p><strong>Default Selection > SQL Results</strong></p> <pre><code>SELECT TIMESTAMPADD(SQL_TSI_DAY, -DAYOFMONTH(CURRENT_DATE) +1, CURRENT_DATE) FROM "Sales - Fact Sales" FETCH FIRST 65001 ROWS ONLY</code></pre> <p><img src="http://www.rittmanmead.com/blog/content/images/2017/03/Screen-Shot-2017-03-02-at-9.50.24-AM.png" alt="Real World OBIEE: Demystification of Variables Pt. 
3"></p> <p>For the default selection, I am using a SQL statment that is selecting the first day of the month using the same <em>TIMESTAMP</em> function used in the above query from my subject area "Sales - Fact Sales".</p> <p>Now I need to change the SQL query for both the Choice List Values and Default Selection for my <em>End Date</em> Variable Prompt.</p> <p>I am going to use the same SQL query for the Choice List Values in my <em>End Date</em> prompt as I did in my <em>Start Date</em> prompt. I am going to change the default selection to the following:</p> <pre><code>SELECT CURRENT_DATE FROM "Sales - Fact Sales" FETCH FIRST 65001 ROWS ONLY</code></pre> <p><img src="http://www.rittmanmead.com/blog/content/images/2017/03/Screen-Shot-2017-03-02-at-10.03.56-AM.png" alt="Real World OBIEE: Demystification of Variables Pt. 3"></p> <p>If I go to the <em>Display</em> window, I can view the results of my changes.</p> <p><img src="http://www.rittmanmead.com/blog/content/images/2017/03/Screen-Shot-2017-03-02-at-10.06.11-AM.png" alt="Real World OBIEE: Demystification of Variables Pt. 3"></p> <p><img src="http://www.rittmanmead.com/blog/content/images/2017/03/Screen-Shot-2017-03-02-at-10.06.27-AM.png" alt="Real World OBIEE: Demystification of Variables Pt. 3"></p> <p>Notice that the results are exactly the same as the results in <a href="http://">part two</a> of this series.</p> <p>I can save the dashboard prompt and go to my dashboard and test the prompt.</p> <p><img src="http://www.rittmanmead.com/blog/content/images/2017/03/Screen-Shot-2017-03-02-at-10.07.38-AM.png" alt="Real World OBIEE: Demystification of Variables Pt. 3"></p> <p><img src="http://www.rittmanmead.com/blog/content/images/2017/03/Screen-Shot-2017-03-02-at-10.12.30-AM.png" alt="Real World OBIEE: Demystification of Variables Pt. 3"></p> <p><img src="http://www.rittmanmead.com/blog/content/images/2017/03/Screen-Shot-2017-03-02-at-10.16.35-AM.png" alt="Real World OBIEE: Demystification of Variables Pt. 3"></p> <p><strong>In Conclusion</strong></p> <p>In <a href="https://www.rittmanmead.com/blog/2017/03/real-world-obiee-demystification-of-variables-pt-1/">part one</a> of this series, we looked at using Bins, CASE statements to create custom grouping for values and switch between those groups and values using Presentation Variables.</p> <p>In <a href="https://www.rittmanmead.com/blog/2017/03/real-world-obiee-demystification-of-variables-pt-2/">part two</a> of this series, we looked at creating Repository Variables to make reports dynamic using those Repository Variables in Variable Prompts and passing them into column formulas using Presentation Variables.</p> <p>In the third and final part of this series, we looked at making reports dynamic by using built in functions within Answers such as <em>TIMESTAMPS</em> and <em>CURRENT_DATE</em>.</p> <p>My hope is that you can take these examples and apply them in your own OBIEE development. If you would like to know more about front end or RPD development, please check out the <a href="https://www.rittmanmead.com/training/">variety of training courses</a> we offer at Rittman Mead. Until next time.</p> Matthew Walding b02e3da2-70b9-43e3-a249-01f634fdeebf Tue Mar 28 2017 09:00:00 GMT-0400 (EDT) EssCS Command Line Interface Notes https://realtrigeek.com/2017/03/27/esscs-command-line-interface-notes/ <p>This was not supposed to be a two blog post day, but here I am living the dream…</p> <p>I wanted to show how you can use EssCS Command Line Interface (CLI) to kick off various tasks in EssCS. 
And I will… But I found an interesting bug that I want to show you, as well, so if you are Googling on this error, you can see how to work around it (details later…sorry for the vagueness).</p> <p>The EssCS CLI is a tool to do various tasks outside of the EssCS GUI. You can kick off calc scripts, build dimensions, upload/download items, and do LCM imports/exports among other things. It’s kind of like MaxL for the cloud. I ran a few scripts and copied them to Notepad++ to show you what the commands look like and how the job status is returned. I’ve highlighted various commands in red boxes (I will get to the errors in just a minute…).</p> <p><a href="https://epmqueen.files.wordpress.com/2017/03/image0012.png"><img data-attachment-id="1689" data-permalink="https://realtrigeek.com/2017/03/27/esscs-command-line-interface-notes/image0012-2/" data-orig-file="https://epmqueen.files.wordpress.com/2017/03/image0012.png" data-orig-size="1916,923" data-comments-opened="1" data-image-meta="{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}" data-image-title="image0012" data-image-description="" data-medium-file="https://epmqueen.files.wordpress.com/2017/03/image0012.png?w=300&#038;h=145" data-large-file="https://epmqueen.files.wordpress.com/2017/03/image0012.png?w=840" class="alignnone size-medium wp-image-1689" src="https://epmqueen.files.wordpress.com/2017/03/image0012.png?w=300&#038;h=145" alt="" width="300" height="145" srcset="https://epmqueen.files.wordpress.com/2017/03/image0012.png?w=300&amp;h=145 300w, https://epmqueen.files.wordpress.com/2017/03/image0012.png?w=600&amp;h=290 600w, https://epmqueen.files.wordpress.com/2017/03/image0012.png?w=150&amp;h=72 150w" sizes="(max-width: 300px) 100vw, 300px" /></a></p> <p>It’s a straightforward tool to use and doesn’t take as much understanding (or, at times, patience) as MaxL. The examples I showed were:</p> <p>· Logging into EssCS CLI</p> <p>· Executing a calculation</p> <p>· Downloading a file (ie: a calc script, in my example)</p> <p>· Listing all the files from my user</p> <p>· Listing the version of EssCS I’m using</p> <p>Something you may have noticed was that each time I tried to execute a calculation, I would get an error “Error happened while executing Calculation job. Please make sure all artifacts exist and valid”. I logged into EssCS to see the Job console to see if I could get details on why the calc execution failed. However, I see that my calcs executed successfully. 
Hmm…</p> <p><a href="https://epmqueen.files.wordpress.com/2017/03/image003.jpg"><img data-attachment-id="1691" data-permalink="https://realtrigeek.com/2017/03/27/esscs-command-line-interface-notes/image003-8/" data-orig-file="https://epmqueen.files.wordpress.com/2017/03/image003.jpg" data-orig-size="833,252" data-comments-opened="1" data-image-meta="{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}" data-image-title="image003" data-image-description="" data-medium-file="https://epmqueen.files.wordpress.com/2017/03/image003.jpg?w=300&#038;h=91" data-large-file="https://epmqueen.files.wordpress.com/2017/03/image003.jpg?w=833" class="alignnone size-medium wp-image-1691" src="https://epmqueen.files.wordpress.com/2017/03/image003.jpg?w=300&#038;h=91" alt="" width="300" height="91" srcset="https://epmqueen.files.wordpress.com/2017/03/image003.jpg?w=300&amp;h=91 300w, https://epmqueen.files.wordpress.com/2017/03/image003.jpg?w=600&amp;h=182 600w, https://epmqueen.files.wordpress.com/2017/03/image003.jpg?w=150&amp;h=45 150w" sizes="(max-width: 300px) 100vw, 300px" /></a></p> <p>I decided to check the application logs to see if there was something in them to give me an answer… They also showed success.</p> <p><a href="https://epmqueen.files.wordpress.com/2017/03/image005.jpg"><img data-attachment-id="1692" data-permalink="https://realtrigeek.com/2017/03/27/esscs-command-line-interface-notes/image005-9/" data-orig-file="https://epmqueen.files.wordpress.com/2017/03/image005.jpg" data-orig-size="624,430" data-comments-opened="1" data-image-meta="{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}" data-image-title="image005" data-image-description="" data-medium-file="https://epmqueen.files.wordpress.com/2017/03/image005.jpg?w=300&#038;h=207" data-large-file="https://epmqueen.files.wordpress.com/2017/03/image005.jpg?w=624" class="alignnone size-medium wp-image-1692" src="https://epmqueen.files.wordpress.com/2017/03/image005.jpg?w=300&#038;h=207" alt="" width="300" height="207" srcset="https://epmqueen.files.wordpress.com/2017/03/image005.jpg?w=300&amp;h=207 300w, https://epmqueen.files.wordpress.com/2017/03/image005.jpg?w=600&amp;h=414 600w, https://epmqueen.files.wordpress.com/2017/03/image005.jpg?w=150&amp;h=103 150w" sizes="(max-width: 300px) 100vw, 300px" /></a></p> <p>Well, this is odd. I’m happy my scripts ran successfully, however, the CLI was not reporting back correctly. CLI gives an option of adding a “-v” to each command which tells the system to return “verbose output” for the command. 
I decided to use this for one of my calc scripts and got the following:</p> <p>C:cli&gt;esscs calc -v -application USG_WF -db Emp_Dets -script Calc2018.CSC</p> <p>Picked up _JAVA_OPTIONS: -Xmx512M</p> <p><strong>POST: {&#8220;application&#8221;:&#8221;USG_WF&#8221;,&#8221;db&#8221;:&#8221;Emp_Dets&#8221;,&#8221;jobtype&#8221;:&#8221;calc&#8221;,&#8221;parameters&#8221;:{&#8220;script&#8221;:&#8221;Calc2018.CSC&#8221;}}</strong></p> <p><strong>RETURN: {&#8220;JOB_ID&#8221;:40,&#8221;appName&#8221;:&#8221;USG_WF&#8221;,&#8221;jobfileName&#8221;:&#8221;Calc2018&#8243;,&#8221;dbName&#8221;:&#8221;Emp_Dets&#8221;,&#8221;startTime&#8221;:&#8221;2017-03-27 18:08:01 UTC&#8221;,&#8221;endTime&#8221;:&#8221;2017-03-27 18:08:01 UTC&#8221;,&#8221;jobInputInfo&#8221;:&#8221;{&#8220;calcScriptHasRTSV&#8221;:false,&#8221;calcScriptExecDefault&#8221;:false,&#8221;calcScriptName&#8221;:&#8221;Calc2018&#8243;,&#8221;calcScriptIsScript&#8221;:false}&#8221;,&#8221;userName&#8221;:&#8221;cloud.user&#8221;,&#8221;jobtype&#8221;:&#8221;Calc Execution&#8221;,&#8221;statusMessage&#8221;:&#8221;In Progress&#8221;,&#8221;status&#8221;:100,&#8221;jobOutputInfo&#8221;:&#8221;&#8221;,&#8221;links&#8221;:[{&#8220;rel&#8221;:&#8221;self&#8221;,&#8221;href&#8221;:&#8221;<a href="http://localhost:9000/essbase/rest/v1/jobs&#038;#8221" rel="nofollow">http://localhost:9000/essbase/rest/v1/jobs&#038;#8221</a>;,&#8221;method&#8221;:&#8221;POST&#8221;},{&#8220;rel&#8221;:&#8221;canonical&#8221;,&#8221;href&#8221;:&#8221;<a href="http://localhost:9000/essbase/rest/v1/jobs&#038;#8221" rel="nofollow">http://localhost:9000/essbase/rest/v1/jobs&#038;#8221</a>;,&#8221;method&#8221;:&#8221;POST&#8221;},{&#8220;rel&#8221;:&#8221;Job Status&#8221;,&#8221;href&#8221;:&#8221;<a href="http://localhost:9000/essbase/rest/v1/jobs/40&#038;#8243" rel="nofollow">http://localhost:9000/essbase/rest/v1/jobs/40&#038;#8243</a>;,&#8221;method&#8221;:&#8221;GET&#8221;}]}</strong></p> <p>Error happened while executing Calculation job. Please make sure all artifacts exist and valid.</p> <p>Okay…I’ve been playing around (and presented a couple times) with REST and recognize that’s what is occurring in the background. 
I decide to go to the first URL and see the jobs:</p> <p><a href="https://epmqueen.files.wordpress.com/2017/03/image0081.png"><img data-attachment-id="1690" data-permalink="https://realtrigeek.com/2017/03/27/esscs-command-line-interface-notes/image0081-3/" data-orig-file="https://epmqueen.files.wordpress.com/2017/03/image0081.png" data-orig-size="417,218" data-comments-opened="1" data-image-meta="{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}" data-image-title="image0081" data-image-description="" data-medium-file="https://epmqueen.files.wordpress.com/2017/03/image0081.png?w=300&#038;h=157" data-large-file="https://epmqueen.files.wordpress.com/2017/03/image0081.png?w=417" class="alignnone size-medium wp-image-1690" src="https://epmqueen.files.wordpress.com/2017/03/image0081.png?w=300&#038;h=157" alt="" width="300" height="157" srcset="https://epmqueen.files.wordpress.com/2017/03/image0081.png?w=300&amp;h=157 300w, https://epmqueen.files.wordpress.com/2017/03/image0081.png?w=150&amp;h=78 150w, https://epmqueen.files.wordpress.com/2017/03/image0081.png 417w" sizes="(max-width: 300px) 100vw, 300px" /></a></p> <p>Nothing of real value. I then tried the second URL, the actual job number, and got the following. I have a handy JSON viewer plugin for my browser, so I got formatted JSON details:</p> <p><a href="https://epmqueen.files.wordpress.com/2017/03/image0102.png"><img data-attachment-id="1693" data-permalink="https://realtrigeek.com/2017/03/27/esscs-command-line-interface-notes/image0102-2/" data-orig-file="https://epmqueen.files.wordpress.com/2017/03/image0102.png" data-orig-size="1055,621" data-comments-opened="1" data-image-meta="{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}" data-image-title="image0102" data-image-description="" data-medium-file="https://epmqueen.files.wordpress.com/2017/03/image0102.png?w=300&#038;h=177" data-large-file="https://epmqueen.files.wordpress.com/2017/03/image0102.png?w=840" class="alignnone size-medium wp-image-1693" src="https://epmqueen.files.wordpress.com/2017/03/image0102.png?w=300&#038;h=177" alt="" width="300" height="177" srcset="https://epmqueen.files.wordpress.com/2017/03/image0102.png?w=300&amp;h=177 300w, https://epmqueen.files.wordpress.com/2017/03/image0102.png?w=600&amp;h=354 600w, https://epmqueen.files.wordpress.com/2017/03/image0102.png?w=150&amp;h=88 150w" sizes="(max-width: 300px) 100vw, 300px" /></a></p> <p>I see all the details of my job. The status is 200, which in HTTP status code speak means “OK”. This is a total guess, but if you compare PBCS REST status codes, 0 means OK. Perhaps a translation misstep between 200 and 0 to signify success? I’m not really sure, but if you see this error using CLI, check in EssCS to see if it is really an error, or use the verbose command to get the REST URL for the job. 
It may just be this bug to look past for now!</p> Sarah Craynon Zumbrum http://realtrigeek.com/?p=1688 Mon Mar 27 2017 15:23:54 GMT-0400 (EDT) EssCS – Essbase LCM Utility https://realtrigeek.com/2017/03/27/esscs-essbase-lcm-utility/ <p>About a week and a half ago I <a href="https://realtrigeek.com/2017/03/17/esscs-command-line-scripts/">wrote</a> at a high level about two Essbase Cloud Service (EssCS) command-line scripts, the Export Utility and the Command Line Tool. Another utility in the mix that I didn't talk about (due to environment constraints) was the Essbase LCM Utility. I finally got my environment set up to use the tool and played around with an export and an import today.</p> <p>On the EssCS home page, you will see a green icon named "Utilities". I clicked on this to download the utility.</p> <p><img src="https://epmqueen.files.wordpress.com/2017/03/image019.jpg" alt="Screenshot: the Utilities icon on the EssCS home page" /></p> <p>In the list, it will be the third item.
I clicked the down arrow next to "Life Cycle Management".</p> <p><img src="https://epmqueen.files.wordpress.com/2017/03/image020.jpg" alt="Screenshot: downloading the Life Cycle Management utility" /></p> <p>After I downloaded and extracted the tool to my favorite utility location, I read the README.txt. Below I have highlighted some items I thought were key:</p> <ul> <li>The supported releases of Essbase are 11.1.2.4.0xx, 11.1.2.4.5xx, and 12c EssCS.</li> <li>The command and variables to use when exporting a cube.</li> <li>The command and variables to use when importing a cube.</li> </ul> <p>Note that there are also details about how to deal with scenarios (workflow and/or sandbox) and partitions. I'm not using these in my example cube, so I won't be addressing them in this blog post.</p> <p><img src="https://epmqueen.files.wordpress.com/2017/03/image021.jpg" alt="Screenshot: key items highlighted in README.txt" /></p> <p>The first thing I want to do is export my on-premises (OP) cube to a zip file.
I have highlighted that syntax below in the window. I have also supplied it here to copy and paste for your environment:</p> <p>EssbaseLCM export -server {servername}:1423 -user {username} -password {password} -application {appname} -zipFile {FileName}.zip</p> <p>If all goes successfully, you should get a line for each artifact in your application. You can also choose to leave the data out by adding "-nodata" at the end of the export command.</p> <p>Note: The parameters given in the command can be in any order, not just the one shown!</p>
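<p>As a filled-in example, a run might look like the following; the server name and credentials here are hypothetical placeholders, the application name is the one from earlier in this post, and -nodata is included just to show where it goes:</p> <p>EssbaseLCM export -server {myserver}:1423 -user {admin_user} -password {password} -application USG_WF -zipFile USG_WF.zip -nodata</p>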
href="https://epmqueen.files.wordpress.com/2017/03/image0101.png"><img data-attachment-id="1674" data-permalink="https://realtrigeek.com/2017/03/27/esscs-essbase-lcm-utility/image0101-3/" data-orig-file="https://epmqueen.files.wordpress.com/2017/03/image0101.png" data-orig-size="396,286" data-comments-opened="1" data-image-meta="{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}" data-image-title="image0101" data-image-description="" data-medium-file="https://epmqueen.files.wordpress.com/2017/03/image0101.png?w=300&#038;h=217" data-large-file="https://epmqueen.files.wordpress.com/2017/03/image0101.png?w=396" class="alignnone size-medium wp-image-1674" src="https://epmqueen.files.wordpress.com/2017/03/image0101.png?w=300&#038;h=217" alt="" width="300" height="217" srcset="https://epmqueen.files.wordpress.com/2017/03/image0101.png?w=300&amp;h=217 300w, https://epmqueen.files.wordpress.com/2017/03/image0101.png?w=150&amp;h=108 150w, https://epmqueen.files.wordpress.com/2017/03/image0101.png 396w" sizes="(max-width: 300px) 100vw, 300px" /></a></p> <p>Now to import the application… I tried the import command for a new cube first, but because I already have the cube in my environment, I was given an error and told to use “-overwrite” in my parameters. So, that’s what I did (shown with the yellow box). The command I used for the import was:</p> <p>EssbaseLCM –server {servername}:1423 –user {username} –password {password} –application {appname} –zipFile {filename}.zip -overwrite</p> <p><a href="https://epmqueen.files.wordpress.com/2017/03/image023.jpg"><img data-attachment-id="1679" data-permalink="https://realtrigeek.com/2017/03/27/esscs-essbase-lcm-utility/image023-5/" data-orig-file="https://epmqueen.files.wordpress.com/2017/03/image023.jpg" data-orig-size="624,424" data-comments-opened="1" data-image-meta="{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}" data-image-title="image023" data-image-description="" data-medium-file="https://epmqueen.files.wordpress.com/2017/03/image023.jpg?w=300&#038;h=204" data-large-file="https://epmqueen.files.wordpress.com/2017/03/image023.jpg?w=624" class="alignnone size-medium wp-image-1679" src="https://epmqueen.files.wordpress.com/2017/03/image023.jpg?w=300&#038;h=204" alt="" width="300" height="204" srcset="https://epmqueen.files.wordpress.com/2017/03/image023.jpg?w=300&amp;h=204 300w, https://epmqueen.files.wordpress.com/2017/03/image023.jpg?w=600&amp;h=408 600w, https://epmqueen.files.wordpress.com/2017/03/image023.jpg?w=150&amp;h=102 150w" sizes="(max-width: 300px) 100vw, 300px" /></a></p> <p>To double-check to make sure it worked, I logged into EssCS, chose my database name, and clicked “Settings”.</p> <p><a href="https://epmqueen.files.wordpress.com/2017/03/image024.jpg"><img data-attachment-id="1680" 
<p><img src="https://epmqueen.files.wordpress.com/2017/03/image024.jpg" alt="Screenshot: the database Settings page in EssCS" /></p> <p>In the Properties tab, under Basic, I can see that the expected number of members was loaded, as well as the data. Perfect!</p> <p><img src="https://epmqueen.files.wordpress.com/2017/03/image025.jpg" alt="Screenshot: Properties tab showing member and data statistics" /></p> <p>The data is not aggregated, so I thought it would be a good chance to see if my calc scripts came in… Yep!
From here I could execute the scripts if I desired.</p> <p><img src="https://epmqueen.files.wordpress.com/2017/03/image026.jpg" alt="Screenshot: calc scripts listed in EssCS" /></p> <p>And I'm done! That was easy enough…</p> <p>And to take it a step further, this is <em>easily</em> something that could be run routinely via a batch (or shell) script; a minimal sketch follows. This would be great for backing up your environment, versioning, or simply moving cubes from one environment to another.</p>
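<p>Here is a minimal two-line sketch of that idea, chaining the export and import commands shown above to pull an app from on-premises and push it to the cloud. The server names, port for the cloud instance, and credentials are placeholders for your own environment, and I'm assuming EssbaseLCM is on your PATH:</p> <p>EssbaseLCM export -server {op_servername}:1423 -user {username} -password {password} -application {appname} -zipFile {appname}.zip</p> <p>EssbaseLCM import -server {cloud_servername}:{port} -user {cloud_username} -password {cloud_password} -application {appname} -zipFile {appname}.zip -overwrite</p>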
<p>…And if you were wondering whether you can only download from OP: nope, cloud is an option too, as shown below.</p> <p>Command Line:</p> <p><img src="https://epmqueen.files.wordpress.com/2017/03/image0272.png" alt="Screenshot: running the LCM utility against the cloud instance" /></p> <p>Folder:</p> <p><img src="https://epmqueen.files.wordpress.com/2017/03/image0282.png" alt="Screenshot: the resulting export folder" /></p> Sarah Craynon Zumbrum http://realtrigeek.com/?p=1672 Mon Mar 27 2017 13:22:04 GMT-0400 (EDT) Tips for Using Essbase in Data Visualization https://realtrigeek.com/2017/03/23/tips-for-using-essbase-in-data-visualization/ <p>Last Friday I wrote a <a href="https://realtrigeek.com/2017/03/17/essbase-as-a-data-source-in-oracle-data-visualization/">blog</a> on how to connect Essbase and Essbase Cloud Service (EssCS) to Data Visualization (DV). I've decided to put together a couple of tips to help you start using Essbase/EssCS as a data source in DV.</p> <p>Let's start…</p> <p>If you followed the steps and have Essbase or EssCS as a DV data source, you might find yourself annoyed that the hierarchy names came in as generations.
Here is one of my examples from last week:</p> <p><img src="https://epmqueen.files.wordpress.com/2017/03/image0011.png" alt="Screenshot: DV data elements shown as generic generation names" /></p> <p>If you administer a cube, you might know what each generation refers to in the dimension. For example, "Gen2, Location" might be country, "Gen3, Location" might be state, and "Gen4, Location" might be city. However, if you are an end user, you might get confused by these labels.</p> <p>Doesn't this look easier to navigate?</p> <p><img src="https://epmqueen.files.wordpress.com/2017/03/image027.jpg" alt="Screenshot: DV data elements with meaningful generation names" /></p> <p>So, how can you give the generations a clearer definition of the hierarchy granularity? It's actually pretty simple! I'll show you how to do this in EssCS and then in Essbase; it's really the same logic, just different steps.</p> <p><strong>EssCS</strong></p> <p>There are two ways we can define generations for EssCS.
You can choose which one you like best.</p> <p>The first way is to log into EssCS, choose your cube, and click "Settings".</p> <p><img src="https://epmqueen.files.wordpress.com/2017/03/image003.png" alt="Screenshot: the cube Settings page in EssCS" /></p> <p>In the "Dimensions" section, you will see each dimension listed with details.</p> <p><img src="https://epmqueen.files.wordpress.com/2017/03/image004.png" alt="Screenshot: the Dimensions section" /></p> <p>For demo purposes, I'm going to update Years.</p> <p>Note that the generation names are "Gen1" and "Gen2".</p>
<p><img src="https://epmqueen.files.wordpress.com/2017/03/image005.png" alt="Screenshot: default generation names Gen1 and Gen2 for Years" /></p> <p>Let's update "Gen1" to be "All Years" and "Gen2" to be "Year". This is done by clicking on the name ("Gen1" or "Gen2") and entering the name you would like instead.</p> <p><img src="https://epmqueen.files.wordpress.com/2017/03/image006.png" alt="Screenshot: renaming the Years generations" /></p> <p>Once I have finished naming my generations, I click "Save".</p>
<p><img src="https://epmqueen.files.wordpress.com/2017/03/image007.png" alt="Screenshot: saving the renamed generations" /></p> <p>Now, when I go into DV and connect to this cube, I have an updated data element name.</p> <p>Previous:</p> <p><img src="https://epmqueen.files.wordpress.com/2017/03/image028.jpg" alt="Screenshot: DV data elements before renaming" /></p> <p>Current:</p> <p><img src="https://epmqueen.files.wordpress.com/2017/03/image029.jpg" alt="Screenshot: DV data elements after renaming" /></p> <p>The second way to do this in EssCS is to use the Cube Designer workbook. On the "Cube.Generations" tab, enter the names of the generations for the dimensions you want updated; a rough sketch of those entries follows.</p>
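<p>As a rough illustration only (the workbook defines its own column layout, so treat this as an approximation rather than the exact sheet format), the entries for the Years renames from earlier would pair the dimension with each generation number and its new name:</p> <p>Years, Generation 1: All Years</p> <p>Years, Generation 2: Year</p>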
<p><img src="https://epmqueen.files.wordpress.com/2017/03/image010.png" alt="Screenshot: the Cube.Generations tab in the Cube Designer workbook" /></p> <p>From the Cube Designer ribbon, choose "Build Cube".</p> <p><img src="https://epmqueen.files.wordpress.com/2017/03/image011.png" alt="Screenshot: the Build Cube button on the Cube Designer ribbon" /></p> <p>Choose "Update Cube – Retain All Data" and click "Run".
You will likely not want to load the data sheets, but you can if you would like.</p> <p><img src="https://epmqueen.files.wordpress.com/2017/03/image012.png" alt="Screenshot: the Build Cube options dialog" /></p> <p>Confirm that you want to update the listed cube.</p> <p><img src="https://epmqueen.files.wordpress.com/2017/03/image013.png" alt="Screenshot: the update confirmation dialog" /></p> <p>Choose Yes to see the job run.</p>
<p><img src="https://epmqueen.files.wordpress.com/2017/03/image014.png" alt="Screenshot: the job run dialog" /></p> <p>Once the job finishes, you should have generation names available in DV.</p> <p><img src="https://epmqueen.files.wordpress.com/2017/03/image030.jpg" alt="Screenshot: renamed generations showing in DV" /></p> <p><strong>Essbase</strong></p> <p>If you want to name the generations in Essbase, choose "Generations" from the right-click menu of the dimension you want to update:</p>
<p><img src="https://epmqueen.files.wordpress.com/2017/03/image016.png" alt="Screenshot: the Generations option in the dimension right-click menu" /></p> <p>Enter the names of the generations for that dimension and click "OK".</p> <p><img src="https://epmqueen.files.wordpress.com/2017/03/image031.jpg" alt="Screenshot: the generation names dialog" /></p> <p>When I go to DV, I see my generations updated for Department.</p> <p>Note: So you know I wasn't pulling any trickery, you can tell this screenshot is from my OP Essbase cube, not my cloud Essbase cube.
…The Years were not updated in the OP cube!</p> <p><img src="https://epmqueen.files.wordpress.com/2017/03/image032.jpg" alt="Screenshot: OP cube generations, with Years still unnamed" /></p> <p>Now we can build more meaningful visualizations in DV!</p> <p><strong>Visualizations in DV</strong></p> <p>Unlike in Smart View or HFR, you are NOT required to specify a member from every dimension in DV. Using the below as an example, I built a Row Expander visualization very similar to something we would normally build in Smart View, for comparison. To the right, I used a stacked bar graph with just a few data elements.
I've modified the visualization to show the total currency amounts the government paid towards employees' Basic Benefit and Thrift Savings Plans (no, it's not real data!).</p> <p><img src="https://epmqueen.files.wordpress.com/2017/03/image0191.png" alt="Screenshot: Row Expander visualization alongside a stacked bar graph" /></p> <p>Imagine being able to drill into data visually, NOT in Excel where you have to move around columns, dimensions, etc. How neat is that? Let's drill into FY2017's Basic Benefit Plan.
I right-click on the portion of data I want to go deeper into for analysis.</p> <p><img src="https://epmqueen.files.wordpress.com/2017/03/image0201.png" alt="Screenshot: right-click drill menu on the stacked bar graph" /></p> <p>I can choose from any dimension.</p> <p><img src="https://epmqueen.files.wordpress.com/2017/03/image021.png" alt="Screenshot: list of dimensions available for drilling" /></p> <p>Let's choose by Job Function.</p>
<p><img src="https://epmqueen.files.wordpress.com/2017/03/image022.png" alt="Screenshot: choosing Job Function for the drill" /></p> <p>Notice that <em>both</em> visualizations drill to the Job Function level.</p> <p><img src="https://epmqueen.files.wordpress.com/2017/03/image0232.png" alt="Screenshot: both visualizations drilled to Job Function" /></p> <p>Let's keep drilling into the one with the most contributions, GS0600, to the Group level.</p>
data-medium-file="https://epmqueen.files.wordpress.com/2017/03/image0241.png?w=300&#038;h=167" data-large-file="https://epmqueen.files.wordpress.com/2017/03/image0241.png?w=840" class="alignnone size-medium wp-image-1657" src="https://epmqueen.files.wordpress.com/2017/03/image0241.png?w=300&#038;h=167" alt="" width="300" height="167" srcset="https://epmqueen.files.wordpress.com/2017/03/image0241.png?w=300&amp;h=167 300w, https://epmqueen.files.wordpress.com/2017/03/image0241.png?w=600&amp;h=334 600w, https://epmqueen.files.wordpress.com/2017/03/image0241.png?w=150&amp;h=83 150w" sizes="(max-width: 300px) 100vw, 300px" /></a></p> <p>And, finally, to the Employee level for GS0688.</p> <p>We now see all the employees that are contributing towards the GS0688 numbers. I’ve highlighted the filters in the screenshot to show it is very easy to get back to the starting point by deleting (or even altering) the filters.</p> <p><a href="https://epmqueen.files.wordpress.com/2017/03/image0251.png"><img data-attachment-id="1658" data-permalink="https://realtrigeek.com/2017/03/23/tips-for-using-essbase-in-data-visualization/image0251/" data-orig-file="https://epmqueen.files.wordpress.com/2017/03/image0251.png" data-orig-size="1625,903" data-comments-opened="1" data-image-meta="{&quot;aperture&quot;:&quot;0&quot;,&quot;credit&quot;:&quot;&quot;,&quot;camera&quot;:&quot;&quot;,&quot;caption&quot;:&quot;&quot;,&quot;created_timestamp&quot;:&quot;0&quot;,&quot;copyright&quot;:&quot;&quot;,&quot;focal_length&quot;:&quot;0&quot;,&quot;iso&quot;:&quot;0&quot;,&quot;shutter_speed&quot;:&quot;0&quot;,&quot;title&quot;:&quot;&quot;,&quot;orientation&quot;:&quot;0&quot;}" data-image-title="image0251" data-image-description="" data-medium-file="https://epmqueen.files.wordpress.com/2017/03/image0251.png?w=300&#038;h=167" data-large-file="https://epmqueen.files.wordpress.com/2017/03/image0251.png?w=840" class="alignnone size-medium wp-image-1658" src="https://epmqueen.files.wordpress.com/2017/03/image0251.png?w=300&#038;h=167" alt="" width="300" height="167" srcset="https://epmqueen.files.wordpress.com/2017/03/image0251.png?w=300&amp;h=167 300w, https://epmqueen.files.wordpress.com/2017/03/image0251.png?w=600&amp;h=334 600w, https://epmqueen.files.wordpress.com/2017/03/image0251.png?w=150&amp;h=83 150w" sizes="(max-width: 300px) 100vw, 300px" /></a></p> <p>Since the Basic Benefit Plan values are 1% of the employee’s base pay if they have served at least 5 years for the government, I want to see the actual number of years, grade, and step of the employees.</p> <p>I’m going to alter the Row Expander visualization.</p> <p>The numbers make complete sense and I was able to see data next to a visualization that update with each other. 
<p>The numbers make complete sense, and I was able to see the data right next to a visualization, with both updating together. Pretty cool, huh??</p> <p><a href="https://epmqueen.files.wordpress.com/2017/03/image0262.png"><img src="https://epmqueen.files.wordpress.com/2017/03/image0262.png?w=300" alt="image0262" width="300" /></a></p> <p>Now, at this point, I really wanted to show how you can mash up data in DV, specifically using the Essbase data source. I had this great example of a spreadsheet that lists each employee’s job title:</p> <p><a href="https://epmqueen.files.wordpress.com/2017/03/image0331.png"><img src="https://epmqueen.files.wordpress.com/2017/03/image0331.png" alt="image0331" /></a></p> <p>And I was going to join them together in a visualization to show how they can be presented as one.</p>
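<p>For a sense of what that mashup would produce, here is a minimal sketch of the intended join in Python with pandas (the keys and values are made up; in DV itself the join is configured through the data-source dialogs, not code):</p> <pre>
# A hypothetical sketch of the intended mashup: employee-level numbers
# from the Essbase source joined to a spreadsheet of job titles on a
# shared Employee key. All values below are invented for illustration.
import pandas as pd

essbase_rows = pd.DataFrame({
    "Employee":      ["E1001", "E1002", "E1003"],
    "Basic Benefit": [850.0, 920.0, 0.0],
})
job_titles = pd.DataFrame({
    "Employee":  ["E1001", "E1002", "E1003"],
    "Job Title": ["Analyst", "Engineer", "Clerk"],
})

# One row per employee, with the title alongside the measure:
print(essbase_rows.merge(job_titles, on="Employee"))
</pre>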
<p>However, as I was trying, I had trouble joining the sources. Since the Essbase connection is still in beta, there are a few caveats. One of them I found in the latest documentation:</p> <p><img src="https://epmqueen.files.wordpress.com/2017/03/image0341.png" alt="image0341" style="max-width: 99%;" /></p> <p>Well, darn. I guess this post will have to be updated when the bugs are worked out!</p> <p>As you can see, you can modify the settings in Essbase to make DV very simple to use for Essbase analysis. As I build out my demo more, I’ll try to show more reasons why Essbase and DV are perfect complements!</p> Sarah Craynon Zumbrum http://realtrigeek.com/?p=1639 Thu Mar 23 2017 15:40:28 GMT-0400 (EDT) BIAPPS on PAAS – Backup and Restore - Introduction (Part1) https://blogs.oracle.com/biapps/entry/biapps_on_paas_backup_and <p style="text-align: justify;">BI Applications (BIApps) is an integrated application involving multiple components. A backup of BIApps therefore means backing up the integrated application, not just the data warehouse or the database.</p> <p>The high-level architecture of BIAPPS on PAAS is shown below:</p> <p><img src="https://blogs.oracle.com/biapps/resource/bop_backup1_biappsarchitecture.png" style="max-width: 99%;" /></p> <p style="text-align: justify;">At a minimum, four different cloud service instances are involved when using BIAPPS on PAAS.
The following table shows the components/software installed on each:</p> <table border="1" cellspacing="0" cellpadding="4" style="border-collapse: collapse;"> <tbody> <tr> <td>DBCS (Database Cloud Service)</td> <td>Database that holds the ODI repository, the BIACM repository, the SDS schemas, and the data warehouse schema</td> </tr> <tr> <td>BICS (BI Cloud Service)</td> <td>RPD, web catalog, jazn file</td> </tr> <tr> <td>Compute Cloud</td> <td>WebLogic Server<br />ODI Server<br />BIACM<br />BIAPPS shiphome<br />Customer data stored as files (e.g. source files, Universal Adaptor files)<br /><br />Optionally, if installed: Corente VPN (?), VNC, and dev tools such as SQL Developer, a browser, or ODI Studio, with their associated files</td> </tr> <tr> <td>Storage Service</td> <td>Backups for Compute/DB</td> </tr> </tbody> </table> <p style="text-align: justify;">You will need to back up all of the above instances to be able to successfully restore BI Applications. Each cloud service provides its own backup mechanism. Relevant information for backing up each of these services is available in the Oracle Cloud documentation and will be detailed in subsequent blogs in this backup series. Customers may also want to look at Oracle Database Backup Cloud Service, a separate, optional cloud service for taking Oracle Database backups.</p> <p style="text-align: justify;">But is it adequate to simply back up each cloud instance independently? The following section works through this question with a few examples.</p> <p style="text-align: justify;"><b><u>Example 1:</u></b> The database was backed up on Saturday 11th March at 11pm. The Compute Cloud instance was backed up earlier the same day, at 10am. Configuration done on WebLogic (JDBC data sources, memory settings, etc.) is not stored in the database.
So any configuration done between 10am and 11pm on 11th March would be lost, and in that sense the backup and restore would not truly reflect the state of the integrated BIAPPS environment as it was at 11pm on 11th March.</p> <p style="text-align: justify;"><b><u>Example 2:</u></b> BICS was backed up at 10pm on Sunday 12th March. Continuing from Example 1, we now have the database backup from 11pm on 11th March and the BICS backup from 10pm on 12th March. If any changes were made to the database (like adding a new table or column), followed by a corresponding change in the RPD, then when we restore the database and the BICS instance we can see failures, since the two are no longer in sync.</p>
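<p>To make the risk in these two examples concrete, here is a minimal sketch in Python that flags a backup set whose snapshots were taken too far apart to restore as a consistent whole (the timestamps mirror the examples above; the two-hour threshold is an illustrative choice, not an Oracle-prescribed value):</p> <pre>
# A minimal sketch: warn when the instances' latest backups are too far
# apart to restore as one consistent state. Threshold is illustrative.
from datetime import datetime, timedelta

backups = {
    "DBCS":    datetime(2017, 3, 11, 23, 0),  # Sat 11 Mar, 11pm
    "Compute": datetime(2017, 3, 11, 10, 0),  # Sat 11 Mar, 10am
    "BICS":    datetime(2017, 3, 12, 22, 0),  # Sun 12 Mar, 10pm
}

spread = max(backups.values()) - min(backups.values())
if spread > timedelta(hours=2):
    print("WARNING: backups span", spread,
          "- changes made in between could be lost or out of sync")
</pre>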
<p style="text-align: justify;">As you can see from these examples, backing up the cloud instances at different points in time can cause problems. There is no single button that backs up all the instances at exactly the same time; however, with the right process in place, it is still easy enough to back up and restore the BIAPPS application. Following are some <b><u>best practices/guidelines</u></b> to avoid the issues above:</p> <div align="justify"> <ol> <li>The most volatile of the BIAPPS components is the database. That said, the database changes primarily when the ETL runs, so it is a good idea to back up the database outside the ETL window, and as frequently as possible. Configuration changes made in BIACM also reside in the database, but these are less likely to occur once the initial configuration is done. Similarly, ODI repository changes reside in the database, but in a production instance these should happen only during limited, controlled windows.</li> <li>The BICS RPD is tightly coupled with the database, so you could restrict RPD changes to a few fixed days each month and ensure there is a proper database backup, along with an accompanying BICS backup, outside the change period. In other words, have a quiet period for making RPD changes, and ensure that BICS and the DB are backed up in that quiet period.</li> <li>Most of the configuration required for WebLogic is done during initial setup. So after the full load, ensure there is a blackout period during which you back up all the cloud instances. Subsequently, similar to the BICS quiet periods, make changes to WebLogic and the other domains on Compute only on certain days, and ensure they are backed up during the quiet periods.</li> <li>Most of the cloud services have either an API or a command-line utility for backing up the instance. You could use those to automate the backup of all the instances; better still, have that script kicked off automatically at the end of the ETL load plan (see the sketch after this list).</li> <li>When restoring the system from a backup, consider the impact on extracts from source systems. Most source systems use incremental extracts. If the last-extracted date is stored in the database, that date will be restored as part of the database restore. However, if the extract date is stored outside the BIAPPS database (e.g. Fusion, or any on-prem source you replicate via a replication tool), then after the database restore you will need to reset the extract dates to match the data in the database, and also clear any pending data in transit (such as in UCM).</li> <li>A full reset of the SDS and the warehouse, followed by a full load, will fix any issues with the SDS/warehouse. However, full loads are expensive, certain source systems restrict how much data can be extracted in a day (e.g. Taleo), and you can lose existing snapshot data when resetting the warehouse (if that snapshot data is not available in the source).</li> <li>When you restore a database, you restore all the tables in all the schemas; restoring a single table is not easy. It is therefore best to keep activities that affect different schemas separate. For example, if doing major configuration in BIACM, do it when no ETL is running, and take an ad hoc backup before and after the changes. Similarly, promote code to the ODI production repository outside the ETL window, at a time when no BIACM changes are happening, and take backups before and after. That way you can use the DB backup to restore the database to the point in time before those changes without worrying about the impact on other schemas. For the same reason, if you are changing a single warehouse table, keep a backup of that table (and its dependent tables), along with the data, in the warehouse schema, so you can restore the table from that rather than from the complete database backup.</li> </ol> </div>
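<p>As a sketch of point 4, the following shows one way such an automated backup step could look in Python. Every command below is a placeholder, not real Oracle CLI syntax — substitute the actual backup API or command-line calls for your DBCS, Compute, and BICS instances:</p> <pre>
# A minimal sketch of chaining the per-service backups at the end of
# the ETL load plan. Every command below is a PLACEHOLDER; swap in the
# real API or CLI calls your cloud services expose.
import subprocess
from datetime import datetime

BACKUP_COMMANDS = {
    "DBCS":    ["echo", "placeholder: DBCS backup command"],
    "Compute": ["echo", "placeholder: Compute snapshot command"],
    "BICS":    ["echo", "placeholder: BICS snapshot command"],
}

def backup_all():
    # Run the backups back to back so all instances are captured in
    # one quiet window; stop immediately if any of them fails.
    for service, cmd in BACKUP_COMMANDS.items():
        result = subprocess.run(cmd, capture_output=True, text=True)
        if result.returncode != 0:
            raise RuntimeError(service + " backup failed: " + result.stderr)
        print(service + " backed up at " +
              datetime.now().strftime("%Y-%m-%d %H:%M"))

if __name__ == "__main__":
    backup_all()  # e.g. invoked as the final step of the load plan
</pre>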
<p style="text-align: justify;">There are other components involved in the BIAPPS on PAAS solution that also need to be included in the backup strategy. These include, but are not limited to:</p> <div align="justify"> <ol> <li>Source systems: the systems from which BIAPPS gets its data. Backing these up is also required when considering the entire application; however, that is typically handled by the source system administrators, and hence they are not covered here.</li> <li>Replication tools: if you are not using a VPN to connect to the on-prem source systems, you likely have some replication mechanism to transfer data from the on-premise source system to the SDS. Your backup strategy ought to cover those as well.</li> <li>Identity domain / users &amp; roles: these are usually maintained from the Service Admin Console (SAC). Refer to the SAC documentation on how to back these up.</li> <li>Any network/security rules you set up between these various instances.</li> </ol> </div> <p style="text-align: justify;">The customer ought therefore to understand the entire BIAPPS architecture and design the backup strategy accordingly. The customer will also likely have Dev/Test/Prod environments, each of which is a complete BIAPPS application in itself, so the backup strategy must cover all of those environments. Special care should also be taken if the customer has a T2P (Test to Production) process and one of the environments needs to be restored.</p> <p>Subsequent blogs in this series will drill into the backup functionality available for the individual components that make up the BIAPPS on PAAS solution. Below are a few links to the backup documentation for the relevant cloud services:</p> <p><a href="http://docs.oracle.com/en/cloud/paas/database-dbaas-cloud/csdbi/backing.html#CSDBI-GUID-21980FCF-FA0C-4FD8-94DC-7C373CFB4C52">Backing up Deployments on Database Cloud Service</a></p> <p><a href="https://docs.oracle.com/cloud/latest/dbbackup_gs/CSDBB/GUID-8A5C40B4-859F-46C3-9431-55C56D588B58.htm#CSDBB-GUID-8A5C40B4-859F-46C3-9431-55C56D588B58">About Database Backup Cloud Service (an optional cloud service that can be used to back up Oracle databases)</a></p> <p><a href="http://docs.oracle.com/cloud/latest/stcomputecs/STCSG/GUID-0C04E7C5-0D24-4D16-9D83-92EC1E737622.htm#STCSG-GUID-0C04E7C5-0D24-4D16-9D83-92EC1E737622">Backing up and Restoring Storage Volumes - Compute</a></p> <p><a href="https://docs.oracle.com/cloud/latest/reportingcs_use/BILPD/GUID-553DEE53-98D7-4002-A648-CC1CB95AB968.htm#BILUG482">Backing up and Restoring BICS</a></p> <p><i>Disclaimer: Refer to the latest BIAPPS and Oracle Cloud documentation, as things might have changed since this blog was written.</i></p> <p><a href="https://blogs.oracle.com/biapps/tags/biapps_on_paas">All blogs related to BIAPPS on PAAS</a></p>
<p><a href="https://blogs.oracle.com/biapps/tags/biapps_on_paas_backup">BIAPPS on PAAS Backup Blog Series</a></p> Guna Vasireddy-Oracle https://blogs.oracle.com/biapps/entry/biapps_on_paas_backup_and Thu Mar 23 2017 06:51:58 GMT-0400 (EDT)