About PostgreSQL

PostgreSQL is a powerful, open source object-relational database system with over 30 years of active development that has earned it a strong reputation for reliability, feature robustness, and performance.

The official documentation provides a wealth of information on how to install and use PostgreSQL. The PostgreSQL community offers many helpful places to become familiar with the technology, discover how it works, and find career opportunities. Reach out to the community through the channels listed on the PostgreSQL website.


Features

  • Conduit makes it easy to connect your data to your favorite BI and data science tools, including Power BI. Your data becomes approachable and interactive in a matter of minutes, no matter where it's stored.
  • Data aggregation and JOINs 
  • Access your data in real time. Conduit allows you to connect in DirectQuery mode rather than Power BI’s standard import mode, which limits your data refreshes per day.
  • Advanced Parquet Store cache for fast performance, with configurable expiration and re-caching.
  • Custom data picking: publish only the specific columns needed for reporting to speed things up even more.
  • Built-in data governance and security controls. Flexible yet robust.

Prerequisites

If you haven’t already done so, be sure to sign up for a Conduit account.  Try the power and flexibility of Conduit firsthand with a free trial.

For your PostgreSQL datasource, have the following handy:

  • Server name
  • Database name
  • Database username
  • Database password
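
These four values combine into a standard PostgreSQL connection URL. As a quick sanity check before creating the connector, you can assemble the URL with a short script (a sketch; the server name and credentials below are placeholders):

```python
from urllib.parse import quote_plus

def build_postgres_url(server, database, username, password, port=5432):
    """Assemble a standard PostgreSQL connection URL from the four values above.

    The username and password are URL-encoded so special characters
    don't break the URL.
    """
    return (
        f"postgresql://{quote_plus(username)}:{quote_plus(password)}"
        f"@{server}:{port}/{database}"
    )

# Placeholder values -- substitute your own datasource details.
url = build_postgres_url("db.example.com", "sales", "report_user", "p@ss/word")
print(url)  # postgresql://report_user:p%40ss%2Fword@db.example.com:5432/sales
```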

Create Connector

Connectors can be created from the main dashboard. To create a new connector, click the "Add New Connector" button, then select the desired connector type to load the wizard for configuring the new connector.

There are a few basic steps to getting any connector up and running:

  1. Define your datasource
  2. Configure access
  3. Select what data you want to make available via the connector
  4. Configure virtualization and caching options
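
Conceptually, the four steps above collect a configuration along these lines (an illustrative sketch only; the field names are hypothetical, and connectors are actually created through the wizard UI):

```python
# Hypothetical sketch of what each wizard step collects; not a real Conduit API.
connector_config = {
    # 1. Define your datasource
    "name": "sales_pg",
    "server": "db.example.com",
    "database": "sales",
    # 2. Configure access
    "auth_method": "Anonymous with Impersonation",  # the default option
    "service_account": {"username": "svc_conduit", "password": "***"},
    # 3. Select what data to make available via the connector
    "published_tables": {"orders": ["id", "customer_id", "total"]},
    # 4. Configure virtualization and caching options (defaults shown)
    "query_caching": False,
    "connector_caching": False,
    "cache_expiration_minutes": 30,
}

print(connector_config["name"])  # sales_pg
```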


Datasources

Define your connector name and connection URL.

  • Connector Name
    • Required
    • Will be used to identify published tables
    • Only lowercase letters, numbers and underscore symbols are allowed
    • Can be changed only before the connector is saved
  • Description
    • Optional field for notes about connector; visible in Conduit only
    • Can be changed at any point
  • Server Name / Database Name
    • Required. See tooltip for detailed information on expected format.
    • Server can be changed only before the connector is saved
    • Database name may be updated based on tables selection on Publish step
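
The Connector Name rule above (lowercase letters, numbers, and underscores only) can be checked up front with a simple regular expression (a sketch; the wizard performs its own validation):

```python
import re

# Lowercase letters, numbers, and underscore symbols only,
# per the Connector Name rules.
NAME_PATTERN = re.compile(r"[a-z0-9_]+")

def is_valid_connector_name(name: str) -> bool:
    """Return True if the name satisfies the Connector Name character rules."""
    return NAME_PATTERN.fullmatch(name) is not None

print(is_valid_connector_name("sales_pg_2024"))  # True
print(is_valid_connector_name("Sales-PG"))       # False (uppercase and hyphen)
```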


Click the Next button (blue right arrow) to go to the Authentication tab and continue configuring your connector.

To cancel connector creation, click the Close button.


Authentication

Define how external BI users should be authorized by Conduit to access specific data, and how Conduit connects to the datasource.

  • Select Authentication Method for external users connecting to Conduit:
    •  Anonymous with Impersonation
      • Anyone with the connector link has read access to all tables/data published through the connector
      • BI users are not required to provide any form of credentials
      • Default option
    •  Conduit Authentication with Impersonation
      • Allows Conduit Admins to configure data access only to users from specific Conduit Group(s)
      • BI users are required to provide credentials that are looked up by Conduit in its user database
    •  Active Directory with Impersonation
      • Allows Conduit Admins to configure data access only for users from specific Active Directory Group(s) for a selected User Subscription. Access to the database is made using Conduit authentication credentials.
  •  Enter the service account credentials to be used by Conduit to execute all runtime queries against the datasource

    • Username

    • Password

Click the Next button to go to the next tab and continue configuring your connector.

To cancel connector creation, click the Close button.



Publish

Select what data will be available to BI users. You can publish one or more entire tables, or specific columns only. Selection should be limited to tables from the same database.

The Publish tab provides an interface to prune tables to include only the fields required for analytics, reducing the resource load while querying and improving query times.

Use Search to find specific fields you would like to select. 

Once all the desired fields/tables are selected, you have two options:

  1. Save the connector using the default settings:
    1. Caching not enabled.
    2. Conduit SQL engine for join queries enabled.
    3. Authorization not enabled; all authenticated users will have access to the published data.
    4. Default Advanced tab settings
  2. Continue configuring the connector.

To save the connector, click the Submit button.

To continue configuring connector properties, click the Next button.

To cancel connector creation, click the Close button.


Virtualization

On the Virtualization tab you can configure the following:

  • Enable Query Caching
    • When enabled, Conduit will store query results for all queries against the connector's datasets, so that when the exact same query is issued again, the results are returned from memory
    • Result sets exceeding one page of retrieved records (10,000 for Power BI) will not be cached, to avoid out-of-memory errors
    • Recommended to enable when expensive queries are expected and/or when underlying data is not expected to change often
    • Caching expiration is 30 min by default, and can be customized for each connector's dataset as needed
  • Enable Connector Caching
    • When enabled, Conduit will create a temporary, secure Parquet store of all the connector's datasets for quick future access
    • Recommended to enable for large datasets and/or when expensive queries are expected  
    • The tables selected for the connector will be cached in the Parquet store, and all queries for this connector will be run against it
    • Caching expiration is 30 min by default, and can be customized for each connector's dataset as needed
    • When connector data is cached, query results for small/medium result sets will also be cached in memory to further enhance performance. The query cache expires with the data cache
    • Conduit SQL Engine will be used to run all queries
    • List of existing stored parquet files and their expected expiration times can be accessed on Performance>Parquet Store page

  • Enable Conduit SQL engine for hybrid join queries 
    • Enabled by default
    • When the checkbox is not checked, the reporting tool will display a message to the analyst and won't run any hybrid joins (joins with tables from a different datasource type or a different Azure SQL server instance). Running hybrid joins requires the Conduit SQL engine to be enabled.
    • Conduit SQL Engine will be used to run all queries when Connector Caching is checked
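
The query-caching behavior described above (keep results in memory, reuse them for the exact same query, skip oversized result sets, and expire after a configurable interval) can be illustrated with a minimal TTL cache. This is a conceptual sketch, not Conduit's implementation:

```python
import time

class QueryCache:
    """Minimal in-memory cache keyed by query text, with TTL expiration."""

    def __init__(self, ttl_seconds=30 * 60):  # 30-minute default, as in Conduit
        self.ttl = ttl_seconds
        self._store = {}  # query text -> (stored_at, results)

    def get(self, query):
        entry = self._store.get(query)
        if entry is None:
            return None
        stored_at, results = entry
        if time.time() - stored_at > self.ttl:  # expired: drop entry, report a miss
            del self._store[query]
            return None
        return results

    def put(self, query, results, max_rows=10_000):
        # Result sets beyond one page (10,000 rows for Power BI) are not cached.
        if len(results) <= max_rows:
            self._store[query] = (time.time(), results)

cache = QueryCache()
cache.put("SELECT id, total FROM orders", [(1, 9.99), (2, 19.99)])
print(cache.get("SELECT id, total FROM orders"))  # [(1, 9.99), (2, 19.99)]
```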

Authorization

Configure access for a selected Authentication type.

If you selected the "Conduit Authentication with Impersonation" or "Active Directory with Impersonation" authentication type on the Authentication tab, you can configure here which Conduit Group(s) or Active Directory Group(s) should be granted access to the published table(s).

  • By default Authorization is not enabled, meaning all authenticated users will have access to all published tables for a given connector.
  • To enforce Authorization, click Enable Authorization
  • From the group list you can select which group(s) should be granted access to the connector
    • Access is granted on a table level.
      • If you need some group(s) to have access to certain fields from table A, and other group(s) to have access to a different set of fields from the same table A, create two connectors with pruned versions of table A, one for each permissions case.
    • If Authorization is enabled but no groups are selected, the connector's tables will be accessible to no one.

Only Admins are allowed to view and modify the Authorization tab.

Authentication type and Authorization configuration can be changed at any time. If permissions are revoked, the data will no longer be accessible to external user(s), and connectors to restricted tables will no longer appear in the connector list in BI tools.



Advanced

Fine-tune how your selections should be published.

For each table the following can be configured:

  • Alias
    • A user-friendly table name to be used to identify published tables by external users
    • Optional; if not specified, the real table name will be used for identification
  • Cache now
    • Displayed when Connector Cache is enabled on the Virtualization tab; disabled by default
    • Conduit will initiate caching of the datasource when the connector is saved, to avoid waiting for the cache upon the initial query
  • Auto refresh
    • Displayed when Connector Cache is enabled on the Virtualization tab; enabled by default
    • Conduit will re-cache the connector in the Parquet Store when the existing data cache expires
  • Caching Expiration
    • Displayed when Query Cache or Connector Cache has been enabled on the Virtualization tab
    • Default cache expiration time is 30 minutes; it can be customized for each connector’s dataset as needed
    • Connectors to large datasets would benefit from less frequent caching
    • After expiration, the cache will be re-created either when the previous cache expires (if the Auto refresh option is enabled) or when a non-native or join query is run (if the Auto refresh option is disabled)
  • Other settings
    • Partition column
      • You can choose what partition column to use; by default no columns are selected
    • Partition Count 
      • You can select how many partitions will be used by Spark. A partition in Spark is an atomic chunk of data (a logical division of data) stored on a node in the cluster. Partitions are the basic units of parallelism in Apache Spark.
      • By default, the partition count is set to 4
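
To see what the Partition column and Partition Count settings do, it helps to picture the partition column's value range being split into equal slices, each of which can be scanned by a separate Spark task in parallel. Below is a simplified sketch of that slicing (not Conduit code; the column name and bounds are placeholders):

```python
def partition_predicates(column, lower, upper, count):
    """Split [lower, upper) on a numeric partition column into `count` slices.

    Each predicate describes one partition that a separate Spark task
    can read in parallel.
    """
    step = (upper - lower) / count
    preds = []
    for i in range(count):
        lo = lower + i * step
        hi = lower + (i + 1) * step
        if i == count - 1:
            preds.append(f"{column} >= {lo:g}")  # last slice is open-ended
        else:
            preds.append(f"{column} >= {lo:g} AND {column} < {hi:g}")
    return preds

# Default partition count is 4.
for p in partition_predicates("order_id", 0, 1_000_000, 4):
    print(p)
```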

Endpoints

This page contains the endpoints for the newly created connector that you can use to access the data from different applications:

  • JDBC/ODBC/Thrift Endpoint – to connect to dataset(s) defined on the connector from various BI and data science tools.
  • Power BI Spark Connector – to connect to dataset(s) defined on the connector from Power BI.
  • Tableau Spark Connector – to connect to dataset(s) defined on the connector from Tableau.
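
For scripted access, the JDBC/ODBC/Thrift endpoint is typically consumed as a HiveServer2-style JDBC URL. The snippet below is a hedged sketch: the host, port, and HTTP path are placeholders, and the real values should be copied from this Endpoints page.

```python
def build_thrift_jdbc_url(host, port=443, http_path="cliservice"):
    """Assemble a Spark Thrift Server JDBC URL (HTTP transport over SSL).

    host, port, and http_path are placeholders -- copy the actual values
    from the connector's Endpoints page.
    """
    return (
        f"jdbc:hive2://{host}:{port}/default;"
        f"transportMode=http;ssl=true;httpPath={http_path}"
    )

url = build_thrift_jdbc_url("conduit.example.com")
print(url)
```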