Commit 9b4aeef

Started
1 parent b2b8e7b commit 9b4aeef

34 files changed: +12537 −0 lines changed
Lines changed: 200 additions & 0 deletions
@@ -0,0 +1,200 @@
{
 "cells": [
  {
   "cell_type": "markdown",
   "id": "b583fcbc-47e7-40ff-8c0e-4049230c1788",
   "metadata": {},
   "source": [
    "## Oracle AI Data Platform v1.0\n",
    "\n",
    "Copyright © 2025, Oracle and/or its affiliates.\n",
    "\n",
    "Licensed under the Universal Permissive License v 1.0 as shown at https://oss.oracle.com/licenses/upl/\n"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "acb1f028-cba1-4bb1-905e-e597c129a9b7",
   "metadata": {
    "execution": {
     "iopub.status.busy": "2025-03-25T18:25:40.094Z"
    },
    "type": "python"
   },
   "source": [
    "# Connect Using Custom JDBC Driver\n",
    "\n",
    "**Overview**\n",
    "\n",
    "This notebook demonstrates using custom JDBC JAR files added to the compute cluster. It includes examples for:\n",
    "- SQLite\n",
    "- Snowflake\n",
    "\n",
    "You can install your own JDBC driver and follow similar steps to those below."
   ]
  },
  {
   "cell_type": "markdown",
   "id": "7e209c1e-76df-471c-8240-93187d4f7586",
   "metadata": {},
   "source": [
    "# Connect Using Custom JDBC Driver - SQLite\n",
    "\n",
    "**Prerequisites**\n",
    "\n",
    "1. For this example, we download a lightweight JDBC JAR file to demonstrate the extensibility of adding custom JDBC JAR files with minimal dependencies. Download the SQLite JDBC JAR file here - https://github.com/xerial/sqlite-jdbc/releases/download/3.46.1.3/sqlite-jdbc-3.46.1.3.jar (a download sketch follows this cell)\n",
    "2. Install the JAR file on the compute cluster, then restart the cluster so the JDBC JAR is picked up.\n",
    "\n",
    "**Overview**\n",
    "\n",
    "This section demonstrates using a new JDBC JAR file added to the compute cluster. It covers:\n",
    "\n",
    "1. Creating a DataFrame from a table represented by a SQL query\n"
   ]
  },
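  {
   "cell_type": "markdown",
   "id": "00000000-0000-4000-8000-00000000000a",
   "metadata": {},
   "source": [
    "A minimal download sketch for prerequisite 1, assuming the cluster allows outbound HTTPS and that /tmp is writable; the cell id is a placeholder. After downloading, install the JAR on the cluster and restart it as described above.\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "00000000-0000-4000-8000-00000000000b",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Assumption: outbound HTTPS is allowed and /tmp is writable on this node.\n",
    "import urllib.request\n",
    "\n",
    "jar_url = \"https://github.com/xerial/sqlite-jdbc/releases/download/3.46.1.3/sqlite-jdbc-3.46.1.3.jar\"\n",
    "urllib.request.urlretrieve(jar_url, \"/tmp/sqlite-jdbc-3.46.1.3.jar\")\n",
    "print(\"Downloaded JAR; install it on the cluster and restart before running the read below.\")"
   ]
  },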
  {
   "cell_type": "code",
   "execution_count": 1,
   "id": "82356f31-7cba-432a-85ed-9bea208105e7",
   "metadata": {
    "execution": {
     "iopub.status.busy": "2025-03-25T19:02:05.267Z"
    },
    "type": "python"
   },
   "outputs": [
    {
     "data": {
      "text/html": [
       "<pre>Reading data from db....\n",
       "+--------------------+--------------------+\n",
       "|                  c1|                  c2|\n",
       "+--------------------+--------------------+\n",
       "|1.000000000000000000|2.000000000000000000|\n",
       "+--------------------+--------------------+\n",
       "\n",
       "</pre>"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    }
   ],
   "source": [
    "# JDBC connection settings for an in-memory SQLite database\n",
    "JDBC_URL = \"jdbc:sqlite:memory:myDb\"\n",
    "DRIVER = \"org.sqlite.JDBC\"\n",
    "# A subquery used as the source \"table\"\n",
    "SRC_TABLE = \"(SELECT 1 c1, 2 c2)\"\n",
    "fetch_size = 1000\n",
    "\n",
    "print(\"Reading data from db....\")\n",
    "\n",
    "properties = {\n",
    "    \"driver\": DRIVER,\n",
    "    \"password\": \"\",\n",
    "    \"user\": \"sa\",\n",
    "    \"fetchsize\": fetch_size\n",
    "}\n",
    "\n",
    "# Read through the custom SQLite JDBC driver installed on the cluster\n",
    "src_df = spark.read.format(\"jdbc\").options(**properties).option(\"dbtable\", SRC_TABLE).option(\"url\", JDBC_URL).load()\n",
    "src_df.show()"
   ]
  },
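  {
   "cell_type": "markdown",
   "id": "00000000-0000-4000-8000-00000000000c",
   "metadata": {},
   "source": [
    "As an alternative to wrapping the SQL in a `dbtable` subquery, Spark's built-in JDBC source also accepts a `query` option. A minimal sketch that reuses the variables defined above (the cell id is a placeholder):\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "00000000-0000-4000-8000-00000000000d",
   "metadata": {},
   "outputs": [],
   "source": [
    "# \"query\" is mutually exclusive with \"dbtable\" in Spark's JDBC source\n",
    "alt_df = (spark.read.format(\"jdbc\")\n",
    "    .option(\"url\", JDBC_URL)\n",
    "    .option(\"driver\", DRIVER)\n",
    "    .option(\"query\", \"SELECT 1 c1, 2 c2\")\n",
    "    .load())\n",
    "alt_df.show()"
   ]
  },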
  {
   "cell_type": "markdown",
   "id": "4141a691-17df-4cf4-a791-78e74fe58e53",
   "metadata": {},
   "source": [
    "# Connect Using Custom JDBC Driver - Snowflake\n",
    "\n",
    "**Prerequisites**\n",
    "\n",
    "1. For this example, we download the Snowflake Spark connector and JDBC JAR files to demonstrate the extensibility of adding custom JDBC JAR files with minimal dependencies. Download the Spark connector JAR here - https://docs.snowflake.com/en/user-guide/spark-connector-install - and the JDBC driver here - https://docs.snowflake.com/en/release-notes/clients-drivers/jdbc-2025\n",
    "2. Install the Snowflake Spark connector and JDBC JAR files on the compute cluster, then restart the cluster so the JARs are picked up. This was tested with:\n",
    "- spark-snowflake_2.12-3.1.1.jar\n",
    "- snowflake-jdbc-3.19.0.jar\n",
    "\n",
    "**Overview**\n",
    "\n",
    "This section demonstrates using a new JDBC JAR file added to the compute cluster. It covers:\n",
    "\n",
    "1. Creating a DataFrame from a table represented by a SQL query\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "642d8046-9eb0-4452-9ba7-3fbacce747b0",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/html": [
       "<pre>\n",
       "+--------------------+--------------------+\n",
       "|                  c1|                  c2|\n",
       "+--------------------+--------------------+\n",
       "|1.000000000000000000|2.000000000000000000|\n",
       "+--------------------+--------------------+\n",
       "\n",
       "</pre>"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    }
   ],
   "source": [
    "# Snowflake connection properties; fill in your account URL and credentials\n",
    "snowflake_options = {\n",
    "    \"sfUrl\": \"\",\n",
    "    \"sfUser\": \"\",\n",
    "    \"sfPassword\": \"\",\n",
    "    \"sfDatabase\": \"DATAFLOW\",\n",
    "    \"sfSchema\": \"DF_SCHEMA\",\n",
    "    \"sfWarehouse\": \"COMPUTE_WH\"\n",
    "}\n",
    "\n",
    "df = spark.read \\\n",
    "    .format(\"snowflake\") \\\n",
    "    .options(**snowflake_options) \\\n",
    "    .option(\"dbtable\", \"test_1\") \\\n",
    "    .load()\n",
    "\n",
    "df.show(5)"
   ]
  }
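  ,
  {
   "cell_type": "markdown",
   "id": "00000000-0000-4000-8000-00000000000e",
   "metadata": {},
   "source": [
    "Writing back is symmetric through the standard DataFrame writer API. A minimal sketch, assuming the same `snowflake_options` as above; the target table name `test_1_copy` and the cell id are placeholders:\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "00000000-0000-4000-8000-00000000000f",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Sketch: write the DataFrame read above back to Snowflake.\n",
    "# \"test_1_copy\" is a placeholder target table.\n",
    "df.write \\\n",
    "    .format(\"snowflake\") \\\n",
    "    .options(**snowflake_options) \\\n",
    "    .option(\"dbtable\", \"test_1_copy\") \\\n",
    "    .mode(\"overwrite\") \\\n",
    "    .save()"
   ]
  }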
 ],
 "metadata": {
  "Last_Active_Cell_Index": 4,
  "kernelspec": {
   "display_name": "Python 3 (ipykernel)",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.9.12"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
Lines changed: 156 additions & 0 deletions
@@ -0,0 +1,156 @@
{
 "cells": [
  {
   "cell_type": "markdown",
   "id": "433df165-49d5-4a00-904a-103df8480d92",
   "metadata": {},
   "source": [
    "## Oracle AI Data Platform v1.0\n",
    "\n",
    "Copyright © 2025, Oracle and/or its affiliates.\n",
    "\n",
    "Licensed under the Universal Permissive License v 1.0 as shown at https://oss.oracle.com/licenses/upl/"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "2ad176b5-9ec9-4c97-80e3-751c08fb23be",
   "metadata": {},
   "source": [
    "# Execute Oracle SQL on Oracle ADW\n",
    "\n",
    "## Prerequisites\n",
    "1. Install oracledb on your cluster (create a requirements.txt and add the oracledb package; a sketch follows this cell)\n",
    "2. Upload your tnsnames.ora and ewallet.pem from your wallet into the workspace\n",
    "\n",
    "## Overview\n",
    "\n",
    "This example defines a function you can reuse; it generates a sample SQL script and executes it."
   ]
  },
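  {
   "cell_type": "markdown",
   "id": "00000000-0000-4000-8000-000000000010",
   "metadata": {},
   "source": [
    "A minimal sketch of prerequisite 1, assuming the cluster installs packages from a requirements.txt in the workspace root; the path and cell id are placeholders for your environment:\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "00000000-0000-4000-8000-000000000011",
   "metadata": {},
   "outputs": [],
   "source": [
    "%%writefile /Workspace/requirements.txt\n",
    "oracledb"
   ]
  },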
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "9ddb1eeb-ae80-4c36-acea-259361859c9b",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Define execute_oracle_sql to execute simple SQL statements on Oracle ADW\n",
    "#\n",
    "# Parameters:\n",
    "# - v_sql_file_path: path to the Oracle SQL file in the workspace or a volume\n",
    "# - v_config_dir: directory where your tnsnames.ora resides\n",
    "# - v_wallet_dir: directory where your ewallet.pem resides\n",
    "# - v_user: Oracle database user\n",
    "# - v_password: Oracle database user password\n",
    "# - v_dsn: Oracle DSN\n",
    "# - v_wallet_password: password for the wallet\n",
    "# - fail_on_error: if a statement fails, True stops the script; False continues\n",
    "#\n",
    "import oracledb\n",
    "\n",
    "def execute_oracle_sql(v_sql_file_path, v_config_dir, v_wallet_dir, v_user, v_password, v_dsn, v_wallet_password, fail_on_error=False):\n",
    "    # Read the SQL file\n",
    "    sql_script = \"\"\n",
    "    with open(v_sql_file_path, 'r') as file:\n",
    "        sql_script = file.read()\n",
    "\n",
    "    # Split the script into individual statements (naive split by semicolon)\n",
    "    statements = [stmt.strip() for stmt in sql_script.split(';') if stmt.strip()]\n",
    "    try:\n",
    "        # Connect to Oracle using the wallet\n",
    "        with oracledb.connect(\n",
    "                config_dir=v_config_dir,\n",
    "                user=v_user,\n",
    "                password=v_password,\n",
    "                dsn=v_dsn,\n",
    "                wallet_location=v_wallet_dir,\n",
    "                wallet_password=v_wallet_password) as connection:\n",
    "            with connection.cursor() as cursor:\n",
    "                for statement in statements:\n",
    "                    print(f\"Executing: {statement}\")\n",
    "                    try:\n",
    "                        cursor.execute(statement)\n",
    "                    except Exception as e:\n",
    "                        if fail_on_error:\n",
    "                            raise\n",
    "                        else:\n",
    "                            print(\"  Statement failed (but continuing):\", e)\n",
    "            connection.commit()\n",
    "        print(\"SQL script executed successfully.\")\n",
    "    except Exception as e:\n",
    "        print(\"Error:\", e)\n"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "c9b33030-4657-4fed-a7e5-18a273faa26b",
   "metadata": {},
   "source": [
    "## Generate Sample SQL Script\n",
    "End each statement with a `;` (statements can span multiple lines). No PL/SQL blocks are supported, because the parsing is a simplified split on semicolons. DDL statements, such as CREATE TABLE, also work."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "22b19f1a-76a2-4695-8d24-9b917346241e",
   "metadata": {},
   "outputs": [],
   "source": [
    "%%writefile /Workspace/my_oracle_script.sql\n",
    "select 'Connected' msg from dual;\n",
    "select 'Connected' msg from dual;"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "88013578-513b-44a7-b87a-c6bc8ff6ad00",
   "metadata": {},
   "source": [
    "## Execute Sample SQL Script\n",
    "\n",
    "Update the variables below with your values."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "c0d09e31-ac66-4d2b-bf03-64c5c815b509",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Connection settings for your ADW instance\n",
    "sql_file_path = \"/Workspace/my_oracle_script.sql\"\n",
    "config_dir = \"/Workspace/your_folder_location_for_tns_names_ora\"\n",
    "user = \"your_user\"\n",
    "password = \"\"\n",
    "dsn = \"your_tns_alias\"\n",
    "wallet_location = \"/Workspace/your_folder_location_for_wallet_pem\"\n",
    "wallet_password = \"\"\n",
    "\n",
    "execute_oracle_sql(sql_file_path, config_dir, wallet_location, user, password, dsn, wallet_password)\n"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3 (ipykernel)",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.9.12"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
6 KB
Binary file not shown.
