This is my technical area for troubleshooting and learning new programming skills, and much more. Here you will find answers for many technologies such as ASP.NET 2.0/3.5/4.0, C#, Access, MySQL, Amazon Web Services, SQL Server, JD Edwards, SAS, Salesforce, APIs, MVC, and more. Please visit and discuss.
Monday, February 27, 2023
Databricks SQL error: 'charindex'. This function is neither a registered temporary function nor a permanent function registered in the database 'test'.; line 6 pos 32
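This error usually appears when a query written for SQL Server is run on Databricks: CHARINDEX is a T-SQL function that is not registered in this Spark SQL runtime. The built-in equivalents are locate and instr. A sketch of the likely fix (the table and column names here are hypothetical; both functions are 1-based and return 0 when the substring is not found):

```sql
-- T-SQL (SQL Server):  SELECT CHARINDEX('GL', acct_name) FROM accounts
-- Spark SQL / Databricks SQL equivalents:
SELECT locate('GL', acct_name) FROM accounts;  -- locate(substr, str[, pos])
SELECT instr(acct_name, 'GL') FROM accounts;   -- instr(str, substr): note the reversed argument order
```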
Sunday, February 26, 2023
Generate Insert SQL Script from SQL Server to Databricks (Azure)
DECLARE
    @TABLENAME VARCHAR(MAX) = 'GLaccountsEIMcsv',
    @FILTER_C  VARCHAR(MAX) = ''  -- e.g. ' where SLNO = 15' (note the leading space)
DECLARE @TABLE_NAME  VARCHAR(MAX),
        @CSV_COLUMN  VARCHAR(MAX),
        @QUOTED_DATA VARCHAR(MAX),
        @SQLTEXT     VARCHAR(MAX),
        @FILTER      VARCHAR(MAX)
SET @TABLE_NAME = @TABLENAME
SELECT @FILTER = @FILTER_C

-- Comma-separated column list (identity columns excluded)
SELECT @CSV_COLUMN = STUFF
(
    (
        SELECT ',[' + NAME + ']' FROM sys.all_columns
        WHERE OBJECT_ID = OBJECT_ID(@TABLE_NAME) AND
        is_identity != 1 FOR XML PATH('')
    ), 1, 1, ''
)

-- Per-column expression: wrap each value in single quotes, double any embedded
-- quotes, and emit the literal NULL for NULL values
SELECT @QUOTED_DATA = STUFF
(
    (
        SELECT ' ISNULL(QUOTENAME('+NAME+','+QUOTENAME('''','''''')+'),'+'''NULL'''+')+'','''+'+' FROM sys.all_columns
        WHERE OBJECT_ID = OBJECT_ID(@TABLE_NAME) AND
        is_identity != 1 FOR XML PATH('')
    ), 1, 1, ''
)

-- FOR SQL SERVER (explicit column list)
--SELECT @SQLTEXT='SELECT ''INSERT INTO '+@TABLE_NAME+'('+@CSV_COLUMN+')VALUES('''+'+'+SUBSTRING(@QUOTED_DATA,1,LEN(@QUOTED_DATA)-5)+'+'+''')'''+' Insert_Scripts FROM '+@TABLE_NAME + @FILTER

-- FOR DATABRICKS SQL (no column list)
SELECT @SQLTEXT='SELECT ''INSERT INTO '+@TABLE_NAME+' VALUES('''+'+'+SUBSTRING(@QUOTED_DATA,1,LEN(@QUOTED_DATA)-5)+'+'+''')'''+' Insert_Scripts FROM '+@TABLE_NAME + @FILTER

EXECUTE (@SQLTEXT)
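Run against a hypothetical two-column table, the generated rows would look like the following (an illustration of the script's quoting rules, not actual output): embedded single quotes are doubled by QUOTENAME, and NULL values come out as the unquoted literal NULL.

```sql
INSERT INTO GLaccountsEIMcsv VALUES('1001','Payroll')
INSERT INTO GLaccountsEIMcsv VALUES('1002','O''Brien account')
INSERT INTO GLaccountsEIMcsv VALUES('1003',NULL)
```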
https://www.mytecbits.com/microsoft/sql-server/auto-generate-insert-statements
Sunday, February 12, 2023
Create a table in Databricks (select into a temp table)
%sql
CREATE TABLE tblBU_list_XX40_Desc_01 as
select * from tblBU_list_XX40_Desc
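Note that CREATE TABLE ... AS SELECT creates a permanent managed table, not a session-scoped temp table. If a true temporary object is wanted, a temporary view is the usual Databricks equivalent (a sketch using the same table names as above):

```sql
%sql
-- Session-scoped: dropped automatically when the Spark session ends
CREATE OR REPLACE TEMPORARY VIEW tblBU_list_XX40_Desc_01 AS
SELECT * FROM tblBU_list_XX40_Desc;
```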
How to read a Parquet file in Databricks
# Databricks notebook source
# Parquet files carry their own schema, so the CSV-style header/inferSchema
# options are unnecessary here; one temp view is registered per folder.
base_path = "/mnt/ABCprdadls/2_RAW/mload/XYZA_GL"
for folder, view in [
    ("GLDATA", "GLDATA"),
    ("GL_Details", "GL_Details"),
    ("GLDetails_2023", "GLDetails_2023"),
    ("tblBU_list_MF40_Desc", "tblBU_list_MF40_Desc"),
    ("BudgetData", "Budgetdata"),
    ("Tbl_PurePeriod", "Tbl_PurePeriod"),
]:
    df = spark.read.parquet(f"{base_path}/{folder}")
    df.createOrReplaceTempView(view)
spark.catalog.setCurrentDatabase("test")
# COMMAND ----------
# MAGIC %sql
# MAGIC CREATE TEMPORARY VIEW VWtempGLDetails
# MAGIC AS
# MAGIC select * from GLDetails_2023
# MAGIC union
# MAGIC select * from GL_Details -- where glfy=22 and glpn=12
# MAGIC -- UNION removes duplicate rows; use UNION ALL to keep every row
# COMMAND ----------
# MAGIC %sql
# MAGIC select * from VWtempGLDetails where glfy=23 and glpn=1 and gllt in('AA','AL','AN') and glco=2
# MAGIC limit 10
# COMMAND ----------
# MAGIC %sql
# MAGIC SELECT * FROM GLDATA WHERE fy=23 AND lt IN('AA','AL','AN') AND co=2
# COMMAND ----------
# MAGIC %sql
# MAGIC select * from Tbl_PurePeriod
Friday, February 10, 2023
Databricks: create a table from a Parquet file in Azure Data Lake
%sql
CREATE TABLE tblBU_list_MF40_Desc
USING parquet
OPTIONS (path "File Location")
-- "File Location" is the folder holding the Parquet files,
-- e.g. /mnt/Dir1/Dir2/Dir3/Fin_DB_GL/tblBU_data
Friday, February 3, 2023
How to read Parquet files in Databricks
How to read a Parquet file in Databricks and run a SELECT statement against it.
1. Write the following code in the first cell and execute it:
df_GLDATAMFDB = spark.read.option("header",True).parquet("/mnt/FileServer/Folder1/Folder2/Folder3/",inferSchema="True")
df_GLDATAMFDB.createOrReplaceTempView("MFDB_GL")
2. Then, in a second cell, run the following:
%sql
select * from MFDB_GL
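The same result is possible without Python: a temporary view can be defined directly over the Parquet location in SQL (a sketch reusing the placeholder path from step 1):

```sql
%sql
CREATE TEMPORARY VIEW MFDB_GL
USING parquet
OPTIONS (path "/mnt/FileServer/Folder1/Folder2/Folder3/");

SELECT * FROM MFDB_GL;
```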
How to Create a Next Number for Batch Processing in SQL Server
CREATE TABLE [dbo].[TBL_BatchNN](
[NN] [int] IDENTITY(1,1) NOT NULL,
[strgetdate] [datetime] NULL,
CONSTRAINT [PK_TBL_BatchNN] PRIMARY KEY CLUSTERED
(
[NN] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
GO
ALTER PROC [dbo].[SP_BatchLoadNN] AS
SET NOCOUNT ON
DECLARE @NN INT
-- the column is strgetdate (as defined above), not dateupdate
INSERT INTO TBL_BatchNN (strgetdate) SELECT GETDATE()
-- SCOPE_IDENTITY() is safe under concurrent callers; MAX(NN) could pick up
-- a row another session inserted between this INSERT and the SELECT
SELECT @NN = SCOPE_IDENTITY()
DELETE TBL_BatchNN  -- keep the table empty; the IDENTITY seed keeps advancing
SELECT @NN
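Each call hands out the next number in the sequence, because the IDENTITY seed keeps advancing even though the table is emptied every time. A usage sketch (the returned values assume a freshly created table):

```sql
EXEC dbo.SP_BatchLoadNN  -- returns 1
EXEC dbo.SP_BatchLoadNN  -- returns 2
```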