Introduction
Database migration from Oracle to PostgreSQL is one of the most complex technical initiatives enterprises undertake. Oracle licenses can cost millions annually, and moving to modern cloud-native PostgreSQL can cut database costs by as much as 90%, but DDL conversion requires a deep understanding of vendor-specific SQL dialects, data type mappings, and procedural language differences.
This comprehensive guide explores proven strategies for converting Oracle DDL to PostgreSQL, covering table structures, indexes, constraints, sequences, triggers, stored procedures, and performance optimization patterns used by Fortune 500 companies migrating thousands of database objects.
Why Organizations Migrate from Oracle to PostgreSQL
Cost Savings:
- Oracle Enterprise Edition: $47,500 per processor + 22% annual support
- Oracle licensing audits result in average $4.2M unexpected costs (Gartner)
- PostgreSQL: $0 licensing, no vendor lock-in
- Cloud-managed PostgreSQL (AWS RDS, Azure, GCP): 80-90% cost reduction
Technical Advantages:
- PostgreSQL supports modern data types (JSONB, arrays, hstore)
- Native full-text search without Oracle Text licensing
- Active open-source community with rapid innovation
- Cloud-native architecture (Aurora PostgreSQL, AlloyDB)
- Better integration with modern development stacks
Business Drivers:
- Avoid Oracle licensing complexity and audit risk
- Cloud migration initiatives (Oracle → AWS/Azure/GCP)
- Consolidate to single database platform
- Support for containerized/Kubernetes deployments
Core DDL Conversion Patterns
Data Type Mapping
| Oracle Type | PostgreSQL Type | Notes |
|---|---|---|
| NUMBER | NUMERIC or INTEGER | Map based on precision |
| NUMBER(p,s) | NUMERIC(p,s) | Direct mapping |
| VARCHAR2(n) | VARCHAR(n) | VARCHAR is the direct equivalent; PostgreSQL has no VARCHAR2 |
| CHAR(n) | CHAR(n) | Direct mapping |
| NVARCHAR2(n) | VARCHAR(n) | PostgreSQL uses UTF-8 by default |
| CLOB | TEXT | PostgreSQL TEXT has no size limit |
| BLOB | BYTEA | Binary data storage |
| DATE | TIMESTAMP | Oracle DATE includes time |
| TIMESTAMP | TIMESTAMP | Direct mapping |
| RAW(n) | BYTEA | Binary data |
| LONG | TEXT | Deprecated in Oracle, use TEXT |
| ROWID | No direct equivalent | ctid is closest, but it is not stable across updates |
NUMBER Conversion Examples:
```sql
-- Oracle
CREATE TABLE products (
    product_id NUMBER(10),    -- Integer
    price      NUMBER(10,2),  -- Currency
    quantity   NUMBER,        -- Generic number
    tax_rate   NUMBER(5,4)    -- Percentage
);

-- PostgreSQL
CREATE TABLE products (
    product_id INTEGER,       -- Or BIGINT if > 2 billion
    price      NUMERIC(10,2),
    quantity   NUMERIC,       -- Or INTEGER if whole numbers
    tax_rate   NUMERIC(5,4)
);
```
Table Structure Conversion
Oracle DDL:
```sql
CREATE TABLE employees (
    employee_id   NUMBER(10) PRIMARY KEY,
    first_name    VARCHAR2(100) NOT NULL,
    last_name     VARCHAR2(100) NOT NULL,
    email         VARCHAR2(255) UNIQUE,
    hire_date     DATE DEFAULT SYSDATE,
    salary        NUMBER(10,2),
    department_id NUMBER(10),
    manager_id    NUMBER(10),
    CONSTRAINT fk_emp_dept FOREIGN KEY (department_id)
        REFERENCES departments(department_id),
    CONSTRAINT fk_emp_mgr FOREIGN KEY (manager_id)
        REFERENCES employees(employee_id),
    CONSTRAINT chk_salary CHECK (salary > 0)
);
```
PostgreSQL DDL:
```sql
CREATE TABLE employees (
    employee_id   INTEGER PRIMARY KEY,
    first_name    VARCHAR(100) NOT NULL,
    last_name     VARCHAR(100) NOT NULL,
    email         VARCHAR(255) UNIQUE,
    hire_date     TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
    salary        NUMERIC(10,2),
    department_id INTEGER,
    manager_id    INTEGER,
    CONSTRAINT fk_emp_dept FOREIGN KEY (department_id)
        REFERENCES departments(department_id),
    CONSTRAINT fk_emp_mgr FOREIGN KEY (manager_id)
        REFERENCES employees(employee_id),
    CONSTRAINT chk_salary CHECK (salary > 0)
);
```
Key Differences:
- `SYSDATE` → `CURRENT_TIMESTAMP` or `NOW()`
- `NUMBER(10)` → `INTEGER` (for whole numbers)
- `VARCHAR2` → `VARCHAR`
Sequence Conversion
Oracle:
```sql
CREATE SEQUENCE employee_seq
    START WITH 1
    INCREMENT BY 1
    NOCACHE
    NOCYCLE;

-- Usage in INSERT
INSERT INTO employees (employee_id, first_name, last_name)
VALUES (employee_seq.NEXTVAL, 'John', 'Doe');
```
PostgreSQL (Modern Approach - IDENTITY):
```sql
CREATE TABLE employees (
    employee_id INTEGER GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
    first_name  VARCHAR(100),
    last_name   VARCHAR(100)
);

-- INSERT automatically generates employee_id
INSERT INTO employees (first_name, last_name)
VALUES ('John', 'Doe');
```
PostgreSQL (Traditional Approach - SERIAL):
```sql
CREATE TABLE employees (
    employee_id SERIAL PRIMARY KEY,
    first_name  VARCHAR(100),
    last_name   VARCHAR(100)
);

-- Or explicit sequence
CREATE SEQUENCE employee_seq START 1;

CREATE TABLE employees (
    employee_id INTEGER DEFAULT nextval('employee_seq') PRIMARY KEY,
    first_name  VARCHAR(100),
    last_name   VARCHAR(100)
);
```
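Whichever approach is used, remember to advance the sequence past the highest migrated key after loading data, or the first post-migration INSERT will collide with existing rows. A minimal sketch, assuming the employees table and employee_seq defined above:

```sql
-- Advance the explicit sequence past migrated data
SELECT setval('employee_seq',
              (SELECT COALESCE(MAX(employee_id), 1) FROM employees));

-- For SERIAL or IDENTITY columns, look up the backing sequence first
SELECT setval(pg_get_serial_sequence('employees', 'employee_id'),
              (SELECT COALESCE(MAX(employee_id), 1) FROM employees));
```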
Index Conversion
Oracle:
```sql
-- B-tree index (default)
CREATE INDEX idx_emp_dept ON employees(department_id);

-- Unique index
CREATE UNIQUE INDEX idx_emp_email ON employees(email);

-- Composite index
CREATE INDEX idx_emp_name ON employees(last_name, first_name);

-- Function-based index
CREATE INDEX idx_emp_email_upper ON employees(UPPER(email));

-- Bitmap index (Oracle only)
CREATE BITMAP INDEX idx_emp_status ON employees(status);
```
PostgreSQL:
```sql
-- B-tree index (default)
CREATE INDEX idx_emp_dept ON employees(department_id);

-- Unique index
CREATE UNIQUE INDEX idx_emp_email ON employees(email);

-- Composite index
CREATE INDEX idx_emp_name ON employees(last_name, first_name);

-- Expression index (function-based)
CREATE INDEX idx_emp_email_upper ON employees(UPPER(email));

-- No bitmap index - PostgreSQL uses bitmap scans automatically
-- Instead, create a regular B-tree index
CREATE INDEX idx_emp_status ON employees(status);

-- GIN index for full-text search
CREATE INDEX idx_emp_search ON employees
    USING GIN(to_tsvector('english', first_name || ' ' || last_name));
```
Key Differences:
- PostgreSQL has no BITMAP indexes (uses bitmap scans dynamically)
- PostgreSQL offers GIN/GiST indexes for advanced use cases
- Function-based → Expression indexes (same concept, different name)
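PostgreSQL also supports partial indexes, which Oracle can only approximate with function-based index tricks. A sketch, assuming a status column like the one indexed above:

```sql
-- Partial index: indexes only rows matching the predicate,
-- keeping the index small and cheap to maintain
CREATE INDEX idx_emp_active ON employees(department_id)
WHERE status = 'A';
```

This is worth considering wherever an Oracle bitmap index covered a low-cardinality column queried mostly for one value.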
Constraint Conversion
Oracle:
```sql
ALTER TABLE orders ADD CONSTRAINT chk_order_total
    CHECK (total_amount >= 0);

ALTER TABLE orders ADD CONSTRAINT chk_order_status
    CHECK (status IN ('PENDING', 'SHIPPED', 'DELIVERED'));

ALTER TABLE orders ADD CONSTRAINT chk_order_dates
    CHECK (shipped_date >= order_date);
```
PostgreSQL:
```sql
-- Direct mapping - same syntax
ALTER TABLE orders ADD CONSTRAINT chk_order_total
    CHECK (total_amount >= 0);

ALTER TABLE orders ADD CONSTRAINT chk_order_status
    CHECK (status IN ('PENDING', 'SHIPPED', 'DELIVERED'));

ALTER TABLE orders ADD CONSTRAINT chk_order_dates
    CHECK (shipped_date >= order_date);

-- PostgreSQL also supports ENUM types
CREATE TYPE order_status AS ENUM ('PENDING', 'SHIPPED', 'DELIVERED');

ALTER TABLE orders ALTER COLUMN status TYPE order_status
    USING status::order_status;
```
PL/SQL to PL/pgSQL Conversion
Stored Procedure Conversion
Oracle PL/SQL:
```sql
CREATE OR REPLACE PROCEDURE update_employee_salary (
    p_employee_id IN NUMBER,
    p_new_salary  IN NUMBER
) IS
    v_old_salary NUMBER;
BEGIN
    SELECT salary INTO v_old_salary
    FROM employees
    WHERE employee_id = p_employee_id;

    UPDATE employees
    SET salary = p_new_salary,
        updated_at = SYSDATE
    WHERE employee_id = p_employee_id;

    INSERT INTO salary_history (employee_id, old_salary, new_salary, changed_at)
    VALUES (p_employee_id, v_old_salary, p_new_salary, SYSDATE);

    COMMIT;
EXCEPTION
    WHEN NO_DATA_FOUND THEN
        RAISE_APPLICATION_ERROR(-20001, 'Employee not found');
    WHEN OTHERS THEN
        ROLLBACK;
        RAISE;
END;
/
```
PostgreSQL PL/pgSQL:
```sql
CREATE OR REPLACE FUNCTION update_employee_salary (
    p_employee_id INTEGER,
    p_new_salary  NUMERIC(10,2)
) RETURNS VOID AS $$
DECLARE
    v_old_salary NUMERIC(10,2);
BEGIN
    SELECT salary INTO v_old_salary
    FROM employees
    WHERE employee_id = p_employee_id;

    IF NOT FOUND THEN
        RAISE EXCEPTION 'Employee not found';
    END IF;

    UPDATE employees
    SET salary = p_new_salary,
        updated_at = CURRENT_TIMESTAMP
    WHERE employee_id = p_employee_id;

    INSERT INTO salary_history (employee_id, old_salary, new_salary, changed_at)
    VALUES (p_employee_id, v_old_salary, p_new_salary, CURRENT_TIMESTAMP);

    -- PostgreSQL manages transactions automatically;
    -- no explicit COMMIT needed
EXCEPTION
    WHEN OTHERS THEN
        RAISE;
END;
$$ LANGUAGE plpgsql;
```
Key Differences:
- `IS` → `AS $$`
- `RAISE_APPLICATION_ERROR` → `RAISE EXCEPTION`
- `SYSDATE` → `CURRENT_TIMESTAMP` or `NOW()`
- `WHEN NO_DATA_FOUND` → check `IF NOT FOUND` after `SELECT ... INTO`
- Parameter modes (`IN`, `OUT`, `INOUT`) exist in both; PostgreSQL parameters default to `IN`, and `OUT` parameters become part of the return value
- No explicit `COMMIT` in functions (manage transactions at the application level)
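To illustrate the `OUT` parameter mapping, here is a hypothetical lookup function (the function name is invented for this example; the columns come from the employees table above):

```sql
-- Oracle OUT parameters become PostgreSQL OUT parameters;
-- the function then returns them together as a single record
CREATE OR REPLACE FUNCTION get_employee_name (
    p_employee_id    INTEGER,
    OUT o_first_name VARCHAR,
    OUT o_last_name  VARCHAR
) AS $$
BEGIN
    SELECT first_name, last_name
    INTO o_first_name, o_last_name
    FROM employees
    WHERE employee_id = p_employee_id;
END;
$$ LANGUAGE plpgsql;

-- Called like an ordinary function
SELECT * FROM get_employee_name(1001);
```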
Function Conversion
Oracle:
```sql
CREATE OR REPLACE FUNCTION calculate_bonus (
    p_employee_id IN NUMBER
) RETURN NUMBER IS
    v_salary NUMBER;
    v_bonus  NUMBER;
BEGIN
    SELECT salary INTO v_salary
    FROM employees
    WHERE employee_id = p_employee_id;

    v_bonus := v_salary * 0.10;
    RETURN v_bonus;
END;
/
```
PostgreSQL:
```sql
CREATE OR REPLACE FUNCTION calculate_bonus (
    p_employee_id INTEGER
) RETURNS NUMERIC(10,2) AS $$
DECLARE
    v_salary NUMERIC(10,2);
    v_bonus  NUMERIC(10,2);
BEGIN
    SELECT salary INTO v_salary
    FROM employees
    WHERE employee_id = p_employee_id;

    v_bonus := v_salary * 0.10;
    RETURN v_bonus;
END;
$$ LANGUAGE plpgsql;
```
Trigger Conversion
Oracle:
```sql
CREATE OR REPLACE TRIGGER trg_employee_audit
AFTER UPDATE ON employees
FOR EACH ROW
BEGIN
    INSERT INTO employee_audit (
        employee_id, old_salary, new_salary, changed_by, changed_at
    ) VALUES (
        :NEW.employee_id, :OLD.salary, :NEW.salary, USER, SYSDATE
    );
END;
/
```
PostgreSQL:
```sql
CREATE OR REPLACE FUNCTION trg_employee_audit_fn()
RETURNS TRIGGER AS $$
BEGIN
    INSERT INTO employee_audit (
        employee_id, old_salary, new_salary, changed_by, changed_at
    ) VALUES (
        NEW.employee_id, OLD.salary, NEW.salary, current_user, CURRENT_TIMESTAMP
    );
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER trg_employee_audit
AFTER UPDATE ON employees
FOR EACH ROW
EXECUTE FUNCTION trg_employee_audit_fn();
```
Key Differences:
- PostgreSQL requires separate trigger function
- `:NEW`/`:OLD` → `NEW`/`OLD` (no colon)
- `USER` → `current_user`
- Row-level trigger functions must `RETURN NEW` or `RETURN OLD`
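A conditional guard such as `IF :OLD.salary != :NEW.salary` inside an Oracle trigger body can move into a `WHEN` clause on the PostgreSQL trigger itself. A sketch reusing the audit function above:

```sql
-- Fire the audit trigger only when the salary actually changes
CREATE TRIGGER trg_employee_salary_audit
AFTER UPDATE ON employees
FOR EACH ROW
WHEN (OLD.salary IS DISTINCT FROM NEW.salary)
EXECUTE FUNCTION trg_employee_audit_fn();
```

`IS DISTINCT FROM` handles NULLs correctly, unlike a plain `!=` comparison.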
SQL Function Mapping
| Oracle Function | PostgreSQL Equivalent | Notes |
|---|---|---|
| SYSDATE | CURRENT_TIMESTAMP or NOW() | Current date/time |
| TO_DATE(str, fmt) | TO_DATE(str, fmt) | Same function; use TO_TIMESTAMP if the time portion matters |
| TO_CHAR(date, fmt) | TO_CHAR(date, fmt) | Same function |
| NVL(expr1, expr2) | COALESCE(expr1, expr2) | Null handling |
| NVL2(expr, val1, val2) | CASE WHEN expr IS NOT NULL THEN val1 ELSE val2 END | Conditional |
| DECODE(expr, ...) | CASE WHEN ... END | Conditional logic |
| ROWNUM | ROW_NUMBER() OVER() or LIMIT | Row numbering / limiting |
| DUAL table | No equivalent needed | Just omit FROM clause |
| CONNECT BY | WITH RECURSIVE | Hierarchical queries |
Examples:
```sql
-- Oracle: NVL
SELECT NVL(commission, 0) FROM sales;
-- PostgreSQL: COALESCE
SELECT COALESCE(commission, 0) FROM sales;

-- Oracle: ROWNUM
SELECT * FROM employees WHERE ROWNUM <= 10;
-- PostgreSQL: LIMIT
SELECT * FROM employees LIMIT 10;

-- Oracle: DECODE
SELECT DECODE(status, 'A', 'Active', 'I', 'Inactive', 'Unknown') FROM accounts;
-- PostgreSQL: CASE
SELECT CASE status
           WHEN 'A' THEN 'Active'
           WHEN 'I' THEN 'Inactive'
           ELSE 'Unknown'
       END
FROM accounts;
```
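The CONNECT BY mapping from the table above deserves its own example, since it is the least mechanical of these conversions. A sketch using the self-referencing employees table from earlier:

```sql
-- Oracle: hierarchical query
SELECT employee_id, manager_id, LEVEL
FROM employees
START WITH manager_id IS NULL
CONNECT BY PRIOR employee_id = manager_id;

-- PostgreSQL: recursive CTE
WITH RECURSIVE emp_tree AS (
    SELECT employee_id, manager_id, 1 AS level
    FROM employees
    WHERE manager_id IS NULL
    UNION ALL
    SELECT e.employee_id, e.manager_id, t.level + 1
    FROM employees e
    JOIN emp_tree t ON e.manager_id = t.employee_id
)
SELECT employee_id, manager_id, level FROM emp_tree;
```

Oracle extras such as SYS_CONNECT_BY_PATH have no one-line equivalent and need an accumulator column in the CTE.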
Partitioning Conversion
Oracle Range Partitioning:
```sql
CREATE TABLE sales (
    sale_id   NUMBER,
    sale_date DATE,
    amount    NUMBER(10,2)
)
PARTITION BY RANGE (sale_date) (
    PARTITION sales_2023_q1 VALUES LESS THAN (TO_DATE('2023-04-01', 'YYYY-MM-DD')),
    PARTITION sales_2023_q2 VALUES LESS THAN (TO_DATE('2023-07-01', 'YYYY-MM-DD')),
    PARTITION sales_2023_q3 VALUES LESS THAN (TO_DATE('2023-10-01', 'YYYY-MM-DD')),
    PARTITION sales_2023_q4 VALUES LESS THAN (TO_DATE('2024-01-01', 'YYYY-MM-DD'))
);
```
PostgreSQL Declarative Partitioning:
```sql
CREATE TABLE sales (
    sale_id   INTEGER,
    sale_date DATE,
    amount    NUMERIC(10,2)
) PARTITION BY RANGE (sale_date);

CREATE TABLE sales_2023_q1 PARTITION OF sales
    FOR VALUES FROM ('2023-01-01') TO ('2023-04-01');
CREATE TABLE sales_2023_q2 PARTITION OF sales
    FOR VALUES FROM ('2023-04-01') TO ('2023-07-01');
CREATE TABLE sales_2023_q3 PARTITION OF sales
    FOR VALUES FROM ('2023-07-01') TO ('2023-10-01');
CREATE TABLE sales_2023_q4 PARTITION OF sales
    FOR VALUES FROM ('2023-10-01') TO ('2024-01-01');
```
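An Oracle MAXVALUE partition has a close analog in a DEFAULT partition (PostgreSQL 11+), which catches rows that fall outside every defined range:

```sql
-- Rows with sale_date outside the defined 2023 ranges land here
CREATE TABLE sales_default PARTITION OF sales DEFAULT;
```

Without it, an INSERT outside the defined ranges fails with a partition-routing error.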
Migration Tools and Automation
Using mdatool DDL Converter
The mdatool DDL Converter automates Oracle to PostgreSQL migration:
Features:
- Automatic data type mapping
- PL/SQL to PL/pgSQL conversion
- Sequence to IDENTITY column conversion
- Index syntax translation
- Constraint mapping
- Batch conversion of hundreds of tables
Example Usage:
- Paste Oracle DDL into DDL Converter
- Select "Oracle" as source, "PostgreSQL" as target
- Review automated conversions
- Download PostgreSQL DDL
ora2pg - Open Source Migration Tool
```shell
# Install ora2pg
sudo apt-get install ora2pg

# Configuration
cat > ora2pg.conf << EOF
ORACLE_DSN dbi:Oracle:host=oracle-host;sid=ORCL
ORACLE_USER scott
ORACLE_PWD tiger
SCHEMA MYSCHEMA
TYPE TABLE,SEQUENCE,VIEW,PROCEDURE,FUNCTION,TRIGGER
OUTPUT schema.sql
EOF

# Extract DDL
ora2pg -c ora2pg.conf -o schema.sql

# Review and edit the generated SQL
# Test in a PostgreSQL development environment
```
Common Conversion Pitfalls
Pitfall 1: Implicit Data Type Conversions
Oracle is very permissive with implicit conversions between strings and numbers; PostgreSQL is stricter. Untyped string literals are still coerced, but typed text values are not:

```sql
-- Oracle: works (implicit conversion)
SELECT * FROM employees WHERE employee_id = '123';

-- PostgreSQL: the untyped literal '123' above is also accepted,
-- but a typed text value fails:
SELECT * FROM employees WHERE employee_id = '123'::TEXT;
-- ERROR: operator does not exist: integer = text

-- Fix: cast explicitly, or use the correct type
SELECT * FROM employees WHERE employee_id = '123'::INTEGER;
SELECT * FROM employees WHERE employee_id = 123;
```

This bites most often with application parameters bound as text rather than with hand-written literals.
Pitfall 2: NULL Handling in Concatenation
Oracle treats NULL as an empty string in concatenation; in PostgreSQL, NULL propagates through `||`:

```sql
-- Oracle
SELECT first_name || ' ' || middle_name || ' ' || last_name FROM employees;
-- Result: "John Smith" (even if middle_name is NULL)

-- PostgreSQL
SELECT first_name || ' ' || middle_name || ' ' || last_name FROM employees;
-- Result: NULL (if middle_name is NULL)

-- PostgreSQL fix
SELECT first_name || ' ' || COALESCE(middle_name || ' ', '') || last_name
FROM employees;
```
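Where the goal is simply joining name parts, `CONCAT_WS` (separator first) is often cleaner than COALESCE wrappers, since it skips NULL arguments entirely:

```sql
-- "John Smith" when middle_name is NULL: no double space, no NULL result
SELECT CONCAT_WS(' ', first_name, middle_name, last_name) FROM employees;
```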
Pitfall 3: Transaction Management
Oracle starts a transaction implicitly and requires an explicit COMMIT. PostgreSQL clients typically run in auto-commit mode, with explicit BEGIN/COMMIT for multi-statement transactions and no COMMIT inside functions:

```sql
-- Oracle
UPDATE employees SET salary = salary * 1.1;
COMMIT;

-- PostgreSQL (in a function): transactions are managed by the caller;
-- no COMMIT inside the function

-- PostgreSQL (in the application)
BEGIN;
UPDATE employees SET salary = salary * 1.1;
COMMIT;
```
Pitfall 4: Outer Join Syntax
Oracle (+) syntax:
```sql
-- Oracle (old syntax)
SELECT e.name, d.department_name
FROM employees e, departments d
WHERE e.department_id = d.department_id(+);
```
PostgreSQL (ANSI SQL):
```sql
-- PostgreSQL (standard SQL)
SELECT e.name, d.department_name
FROM employees e
LEFT JOIN departments d ON e.department_id = d.department_id;
```
Performance Optimization After Migration
Statistics and Vacuuming
```sql
-- Analyze tables for the query planner
ANALYZE employees;

-- Or all tables
ANALYZE;

-- Vacuum to reclaim space
VACUUM employees;

-- Full vacuum (requires exclusive lock)
VACUUM FULL employees;
```

Auto-vacuum configuration (postgresql.conf):

```
autovacuum = on
autovacuum_vacuum_scale_factor = 0.1
autovacuum_analyze_scale_factor = 0.05
```
Index Optimization
```sql
-- Find tables scanned sequentially more often than by index
SELECT schemaname, tablename, seq_scan, idx_scan,
       seq_tup_read, idx_tup_fetch
FROM pg_stat_user_tables
WHERE seq_scan > idx_scan
  AND seq_tup_read > 10000
ORDER BY seq_tup_read DESC;

-- Create recommended indexes
CREATE INDEX idx_employees_dept ON employees(department_id);
```
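The reverse check is also worth running after migration: indexes carried over from Oracle that PostgreSQL never uses. A sketch against the statistics views:

```sql
-- Indexes with zero scans since statistics were last reset,
-- largest first: candidates for dropping
SELECT schemaname, relname, indexrelname, idx_scan
FROM pg_stat_user_indexes
WHERE idx_scan = 0
ORDER BY pg_relation_size(indexrelid) DESC;
```

Let statistics accumulate over a representative workload period before acting on this list.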
Real-World Case Studies
Case Study 1: Healthcare SaaS (500GB Oracle → PostgreSQL)
Challenge: Migrate 500GB Oracle database with 2,000 tables, 500 PL/SQL procedures
Approach:
- Used ora2pg for initial DDL extraction
- mdatool DDL Converter for complex object conversion
- 6-month dual-run period
- AWS DMS for data replication
Results:
- 92% automated conversion rate
- $800K annual Oracle license savings
- Zero downtime migration
- 40% query performance improvement
Case Study 2: Financial Services (Oracle RAC → Aurora PostgreSQL)
Challenge: High-availability Oracle RAC to cloud-native PostgreSQL
Approach:
- Aurora PostgreSQL for HA/failover
- PL/SQL to PL/pgSQL manual review (critical procedures)
- Application code refactoring (JDBC → PostgreSQL driver)
- Comprehensive testing (10,000+ unit tests)
Results:
- 99.99% availability maintained
- $1.2M annual cost reduction
- 3x faster read queries (Aurora read replicas)
- 18-month total migration timeline
Case Study 3: E-commerce Platform (Oracle Exadata → GCP AlloyDB)
Challenge: Migrate from Oracle Exadata to Google Cloud AlloyDB
Approach:
- AlloyDB for PostgreSQL (columnar engine for analytics)
- Batch migration with minimal downtime (4-hour cutover)
- Automated testing framework (DDL compatibility checks)
- Post-migration query optimization
Results:
- 85% infrastructure cost savings
- Sub-second query latency for 95% of queries
- Horizontal scaling capability
- 12-month ROI on migration investment
Migration Checklist
Pre-Migration:
- Inventory all database objects (tables, indexes, procedures, functions, triggers)
- Document Oracle-specific features and custom code
- Assess application dependencies on Oracle features
- Establish test environment with representative data
- Define success criteria and rollback plan
Conversion Phase:
- Convert DDL using mdatool DDL Converter or ora2pg
- Map data types (NUMBER → INTEGER/NUMERIC)
- Convert PL/SQL to PL/pgSQL
- Translate Oracle functions to PostgreSQL equivalents
- Recreate partitions using PostgreSQL syntax
- Convert indexes (handle bitmap indexes)
Testing Phase:
- Verify schema integrity
- Test all stored procedures and functions
- Validate data migration accuracy
- Performance test critical queries
- Test application integration
- Conduct user acceptance testing
Go-Live:
- Final data sync
- Switch application connection strings
- Monitor for errors and performance issues
- Rollback plan ready if needed
Post-Migration:
- Run ANALYZE on all tables
- Configure auto-vacuum settings
- Optimize slow queries identified in logs
- Set up monitoring and alerting
- Document lessons learned
Tools and Resources
mdatool Migration Tools:
- DDL Converter - Automated Oracle to PostgreSQL DDL conversion
- SQL Linter - Validate converted SQL for PostgreSQL compatibility
- Naming Auditor - Ensure consistent naming post-migration
- Data Glossary - Document Oracle → PostgreSQL type mappings
Open Source Tools:
- ora2pg: Comprehensive Oracle to PostgreSQL migration
- pgloader: Fast data loading from Oracle to PostgreSQL
- AWS Database Migration Service (DMS)
- Google Database Migration Service
PostgreSQL Resources:
- PostgreSQL Wiki: Oracle to PostgreSQL Migration Guide
- Ora2Pg Documentation
- EnterpriseDB Migration Toolkit
Conclusion
Oracle to PostgreSQL migration is achievable with proper planning, tooling, and testing. Key success factors include:
- Automated DDL conversion - Use mdatool DDL Converter for 90%+ automation
- Thorough testing - Test procedures, queries, and application integration
- Performance optimization - Tune PostgreSQL for migrated workloads
- Incremental approach - Migrate in phases, validate each step
- Team training - Ensure DBAs understand PostgreSQL differences
Organizations that invest in comprehensive migration planning achieve 95%+ automation rates, minimal downtime, and rapid ROI through Oracle license elimination.
Start your Oracle to PostgreSQL migration with mdatool's specialized conversion tools.
Try our free DDL Converter at mdatool.com
mdatool Team
Data modeling experts helping enterprises build better databases and data architectures.
Ready to improve your data architecture?
Free tools for DDL conversion, SQL analysis, naming standards, and more.