Snowflake SnowPro Specialty - Snowpark Certification Exam Syllabus

The Snowflake SPS-C01 exam preparation guide is designed to provide candidates with the necessary information about the SnowPro Specialty - Snowpark exam. It includes an exam summary, sample questions, a practice test, objectives, and guidance on interpreting the exam objectives, so candidates can assess the types of questions that may be asked during the Snowflake Certified SnowPro Specialty - Snowpark exam.

All candidates are encouraged to review the SPS-C01 objectives and sample questions provided in this preparation guide. The Snowflake SnowPro Specialty - Snowpark certification is mainly targeted at candidates who want to build their career in the Specialty domain and demonstrate their expertise. We suggest using the practice exam listed in this cert guide to become familiar with the exam environment and to identify the knowledge areas where you need more work before taking the actual Snowflake SnowPro Specialty - Snowpark exam.

Snowflake SPS-C01 Exam Summary:

Exam Name: Snowflake SnowPro Specialty - Snowpark
Exam Code: SPS-C01
Exam Price: $225 USD
Duration: 85 minutes
Number of Questions: 55
Passing Score: 750 (scaled scoring from 0 to 1000)
Recommended Training / Books:
- Snowpark DataFrame Programming Training Course
- SnowPro Specialty: Snowpark Exam Study Guide
- Level Up Snowpark Essentials Track
Schedule Exam: Pearson VUE
Sample Questions: Snowflake SPS-C01 Sample Questions
Recommended Practice: Snowflake Certified SnowPro Specialty - Snowpark Practice Test

Snowflake SnowPro Specialty - Snowpark Syllabus:

Snowpark Concepts - 15%

Outline Snowpark architecture
- Lazy evaluation
- Use of key objects
  • Snowpark DataFrames
  • User-Defined Functions (UDFs)
  • User-Defined Table Functions (UDTFs)
  • Stored procedures
  • File operations
- Types of libraries (DataFrames, Machine Learning)
  • Anaconda repository (Python packages directly into Snowflake)
  • Other third-party libraries (not managed by the Anaconda repository)
- Client-side and server-side capabilities
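
The lazy-evaluation concept above is easiest to see in code. Below is a minimal sketch (credentials are placeholders, and the shared SNOWFLAKE_SAMPLE_DATA database is assumed to be available in the account): no SQL runs while the DataFrame is built, only when an action is called.

    from snowflake.snowpark import Session
    from snowflake.snowpark.functions import col

    # Placeholder credentials; session management is covered in detail later.
    session = Session.builder.configs({
        "account": "<account_identifier>",
        "user": "<user>",
        "password": "<password>",
    }).create()

    # Building the DataFrame runs no SQL yet; transformations are recorded lazily.
    df = (
        session.table("SNOWFLAKE_SAMPLE_DATA.TPCH_SF1.CUSTOMER")
        .filter(col("C_ACCTBAL") > 1000)
        .select("C_NAME", "C_ACCTBAL")
    )

    # Only an action such as show(), collect(), or count() triggers execution,
    # pushing the whole pipeline down to Snowflake as a single query.
    df.show(5)
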
Set up Snowpark
- Installation
  • Versioning
  • Python environment
- Development environments
  • Third-party tools
  • Snowflake Notebooks
  • Jupyter Notebooks
  • Microsoft Visual Studio Code (VS Code)
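
As a quick orientation for the installation and versioning objectives, the client library is installed with pip (pip install snowflake-snowpark-python), and the installed version can be checked from Python using the standard library:

    # Verify which Snowpark client version is installed in this Python environment.
    from importlib.metadata import version

    print(version("snowflake-snowpark-python"))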

Snowpark API for Python - 30%

Create and manage user sessions
- Account identifiers
- Parameters for the CONNECT function
- Authentication methods
  • Construct a dictionary
  • Key pair authentication
  • Snowflake CLI or .env parameters
- Session creation
- SessionBuilder
- Session methods
- Session attributes
- AsyncJob
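
A minimal session-creation sketch for the objectives above, using the construct-a-dictionary authentication style (all values are placeholders; key pair authentication or Snowflake CLI/.env parameters could be substituted):

    from snowflake.snowpark import Session

    # Connection parameters as a dictionary; every value here is a placeholder.
    connection_parameters = {
        "account": "<orgname-account_name>",  # account identifier
        "user": "<username>",
        "password": "<password>",
        "role": "<role>",
        "warehouse": "<warehouse>",
        "database": "<database>",
        "schema": "<schema>",
    }

    session = Session.builder.configs(connection_parameters).create()
    print(session.get_current_warehouse())  # an example session method
    session.close()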

Use Snowpark with unstructured data
- Read files with SnowflakeFile object
- Use UDFs and UDTFs to process files
- Use stored procedures to process files
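
A sketch of the SnowflakeFile pattern for these objectives, assuming an active session and a staged file addressed by a scoped URL (the UDF name is hypothetical; the packages list makes the Snowpark dependency available server-side):

    from snowflake.snowpark.files import SnowflakeFile
    from snowflake.snowpark.functions import udf

    # Registers a UDF that opens a staged file and counts its lines.
    @udf(name="file_line_count",
         packages=["snowflake-snowpark-python"],
         replace=True)
    def file_line_count(scoped_url: str) -> int:
        with SnowflakeFile.open(scoped_url) as f:
            return sum(1 for _ in f)
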
Create Snowpark DataFrames
- Multiple methods to create Snowpark DataFrames
  • From Snowflake tables/views
  • From Python objects (list, dictionary)
  • From SQL statements
  • From files (JSON, CSV, Parquet, XML)
  • From pandas DataFrames
- Schemas (apply to DataFrames)
- Data types (for example, IntegerType, StringType, DateType)
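
The creation methods above, sketched in one place (assumes an active session; table, stage, and column names are hypothetical):

    from snowflake.snowpark.types import StructType, StructField, IntegerType, StringType

    # From a Snowflake table:
    df_table = session.table("MY_DB.MY_SCHEMA.ORDERS")

    # From a SQL statement:
    df_sql = session.sql("SELECT CURRENT_DATE() AS TODAY")

    # From Python objects, applying an explicit schema:
    schema = StructType([
        StructField("ID", IntegerType()),
        StructField("NAME", StringType()),
    ])
    df_local = session.create_dataframe([[1, "a"], [2, "b"]], schema=schema)

    # From a staged CSV file (CSV reads need a schema up front):
    df_csv = session.read.schema(schema).csv("@my_stage/data.csv")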

Operationalize UDFs and UDTFs in Snowpark
- Create UDFs from files (locally, on a stage)
- Use Python modules (packaged Python code) with UDFs
- Write Python functions to create UDFs and UDTFs
- Register UDFs and UDTFs (for example, session.udf(...), functions.udf(...))
- Secure UDFs and UDTFs
  • Use SQL to alter UDFs and UDTFs created with Snowpark
  • Grant access to UDFs and UDTFs to share code
  • Understand how to grant object permissions so other Snowflake users can see and use the UDFs and UDTFs
- Data types (type hints vs. registration API)
  • Provide the data types as parameters when creating a UDF or UDTF, either as Python type hints or by specifying them as part of the registration
- Compare scalar and vectorized operations
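
A registration sketch contrasting the two data-typing styles named above: type hints on the function versus types passed to the registration API (function and UDF names are hypothetical; an active session is assumed):

    from snowflake.snowpark.functions import udf
    from snowflake.snowpark.types import IntegerType

    # (1) Type hints carry the signature, so no explicit types are needed.
    def add_one(x: int) -> int:
        return x + 1

    add_one_udf = udf(add_one, name="add_one", replace=True)

    # (2) Types supplied at registration time instead of via hints.
    double_udf = session.udf.register(
        lambda x: x * 2,
        name="double_it",
        input_types=[IntegerType()],
        return_type=IntegerType(),
        replace=True,
    )
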
Operationalize Snowpark stored procedures
- Create stored procedures from files (locally, on stage)
- Write Python functions to power stored procedures
- Use Python modules (packaged code, Anaconda) with stored procedures
- Register stored procedures
- Make dependencies available to code
- Secure stored procedures
  • Use SQL to alter stored procedures created with Snowpark
  • Caller versus owner rights
- Use Snowpark Python stored procedures to run workloads
- Data types (type hints vs. registration API)
  • Provide the data types as parameters when creating a stored procedure, either as Python type hints or by specifying them as part of the registration
- Create Directed Acyclic Graphs (tasks) executing stored procedures
  • Python API
- Bring Python modules (packaged code) to be used with UDFs/stored procedures to enable reuse of code
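
A minimal stored-procedure registration sketch for the objectives above (table and procedure names are hypothetical; the packages list makes the Snowpark dependency available to the server-side code):

    from snowflake.snowpark import Session

    # The handler's first parameter is the session the procedure runs under.
    def row_count(session: Session, table_name: str) -> int:
        return session.table(table_name).count()

    row_count_sp = session.sproc.register(
        row_count,
        name="row_count_sp",
        packages=["snowflake-snowpark-python"],
        replace=True,
    )

    # The returned object is callable from the client.
    result = row_count_sp("MY_DB.MY_SCHEMA.ORDERS")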

Snowpark for Data Transformations - 35%

Apply operations for filtering and transforming data
- Use scalar functions and operators
- Sort and limit results
- Input/output (parameters)
- Snowpark DataFrames
- Columns
- Data type casting
- Rows and data extraction from a Rows object
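
The filtering, sorting, casting, and Row-extraction objectives above in one short sketch (table and column names are hypothetical; an active session is assumed):

    from snowflake.snowpark.functions import col, upper
    from snowflake.snowpark.types import FloatType

    df = (
        session.table("ORDERS")
        .filter(col("AMOUNT") > 100)                       # scalar operator
        .select(col("ID"),
                upper(col("STATUS")).alias("STATUS"),      # scalar function
                col("AMOUNT").cast(FloatType()).alias("AMOUNT"))  # type casting
        .sort(col("AMOUNT").desc())                        # sort results
        .limit(10)                                         # limit results
    )

    rows = df.collect()                 # a list of Row objects
    first_amount = rows[0]["AMOUNT"]    # extract a value from a Row
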
Clean and enrich data using Snowpark for Python
- Perform joins
- Handle missing values
- Sample data
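
A joins/missing-values/sampling sketch for this objective (table and column names are hypothetical):

    orders = session.table("ORDERS")
    customers = session.table("CUSTOMERS")

    # Enrich orders with customer attributes via a join.
    enriched = orders.join(customers, orders["CUSTOMER_ID"] == customers["ID"])

    # Handle missing values, then take a ~10% random sample.
    filled = enriched.na.fill({"REGION": "UNKNOWN"})
    sampled = filled.sample(frac=0.1)
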
Perform aggregate and set-based operations on DataFrames
- Functions
- Window
- Grouping
- Table functions
- UDFs
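
A grouping and window-function sketch for the aggregate objectives above (table and column names are hypothetical):

    from snowflake.snowpark import Window
    from snowflake.snowpark.functions import col, rank, sum as sum_

    sales = session.table("SALES")

    # Grouped aggregation.
    by_region = sales.group_by("REGION").agg(sum_("AMOUNT").alias("TOTAL"))

    # Window function: rank rows within each region by amount.
    w = Window.partition_by("REGION").order_by(col("AMOUNT").desc())
    ranked = sales.with_column("RANK_IN_REGION", rank().over(w))
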
Transform semi-structured data in DataFrames
- Traverse semi-structured data
- Explicitly cast values in semi-structured data
- Flatten an array of objects into rows
- Load semi-structured data into DataFrames
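
A semi-structured sketch covering traversal, explicit casting, and flattening (the EVENTS table and its VARIANT column V are hypothetical):

    from snowflake.snowpark.functions import col
    from snowflake.snowpark.types import StringType

    events = session.table("EVENTS")

    # Traverse nested fields and cast the leaf value explicitly.
    names = events.select(col("V")["user"]["name"].cast(StringType()).alias("NAME"))

    # Flatten an array of objects into one row per element.
    flattened = events.flatten(col("V")["items"])
    items = flattened.select(col("VALUE")["sku"].cast(StringType()).alias("SKU"))
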
Persist the results of Snowpark DataFrames
- Create views from DataFrames
- Save DataFrame results as Snowflake tables
- Save DataFrame results as files in a stage
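
The three persistence targets above, sketched together (view, table, and stage names are hypothetical):

    from snowflake.snowpark.functions import col

    df = session.table("ORDERS").filter(col("AMOUNT") > 100)

    df.create_or_replace_view("BIG_ORDERS_V")               # as a view
    df.write.mode("overwrite").save_as_table("BIG_ORDERS")  # as a table
    df.write.copy_into_location("@my_stage/big_orders/",    # as staged files
                                file_format_type="parquet",
                                overwrite=True)
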
Perform DML operations using Snowpark DataFrames
- Delete data
- Update data
- Insert data
- Merge data
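
A Table-object DML sketch covering the four operations (TARGET and SOURCE, along with their columns, are hypothetical):

    from snowflake.snowpark.functions import col, when_matched, when_not_matched

    target = session.table("TARGET")
    source = session.table("SOURCE")

    # Insert: append rows from a DataFrame.
    session.create_dataframe([[3, 42]], schema=["ID", "AMOUNT"]) \
           .write.mode("append").save_as_table("TARGET")

    # Delete and update with conditions.
    target.delete(col("STATUS") == "CANCELLED")
    target.update({"STATUS": "CLOSED"}, col("AMOUNT") == 0)

    # Merge: update matches, insert the rest.
    target.merge(
        source,
        target["ID"] == source["ID"],
        [when_matched().update({"AMOUNT": source["AMOUNT"]}),
         when_not_matched().insert({"ID": source["ID"], "AMOUNT": source["AMOUNT"]})],
    )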

Snowpark Performance Optimization - 20%

Configure Snowpark-optimized warehouses
- Use cases for Snowpark-optimized virtual warehouses
- Modify Snowpark-optimized virtual warehouse properties
- Billing for Snowpark-optimized virtual warehouses
- When to scale up/down virtual warehouses
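
Warehouse configuration is DDL, so a Snowpark program typically issues it through session.sql (the warehouse name and size here are illustrative):

    # Create and switch to a Snowpark-optimized warehouse.
    session.sql("""
        CREATE OR REPLACE WAREHOUSE SNOWPARK_WH
          WAREHOUSE_SIZE = 'MEDIUM'
          WAREHOUSE_TYPE = 'SNOWPARK-OPTIMIZED'
    """).collect()

    session.use_warehouse("SNOWPARK_WH")
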
Enhance performance in Snowpark applications
- Materialize results (caching)
  • Caching DataFrames (using .cache_result()) and understanding why this is useful
  • Create a temporary table
- Vectorization
  • Understanding the difference between vectorized and scalar UDFs
  • Vectorized UDFs for batching
  • Snowpark DataFrames versus pandas on Snowflake
- Synchronous versus asynchronous calls
  • Block parameter
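
A caching and asynchronous-execution sketch for the bullets above (the SALES table is hypothetical):

    df = session.table("SALES").group_by("REGION").count()

    # cache_result() materializes the result into a temporary table, so
    # later operations reuse it instead of recomputing the pipeline.
    cached = df.cache_result()
    total = cached.count()

    # The block parameter controls synchronous versus asynchronous calls:
    # block=False returns an AsyncJob immediately.
    job = df.collect(block=False)
    rows = job.result()  # block only when the result is actually needed
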
Troubleshoot common errors in Snowpark
- Event tables
- Snowpark Python local testing framework
- Writing tests (pytest)
- Query history (SQL equivalency to help identify bottlenecks)
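
A local-testing sketch for the troubleshooting objectives: the local testing framework runs DataFrame code without a live connection, which pairs naturally with pytest (the test content is illustrative):

    import pytest
    from snowflake.snowpark import Session

    @pytest.fixture
    def local_session():
        # Local testing mode: no Snowflake connection is made.
        return Session.builder.config("local_testing", True).create()

    def test_filter(local_session):
        df = local_session.create_dataframe([[1], [150]], schema=["AMOUNT"])
        assert df.filter(df["AMOUNT"] > 100).count() == 1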

 
