Flink provides a command-line interface (CLI) at bin/flink for submitting packaged jobs, monitoring executions, and managing the job lifecycle. The CLI connects to the JobManager specified in your conf/config.yaml.
Submitting a job
Use flink run to upload a JAR and start a job:
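For example, using one of the example JARs shipped with the Flink distribution (the JAR path is illustrative):

```shell
./bin/flink run ./examples/streaming/WordCount.jar
```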
The --detached flag returns immediately after submission. The output includes the assigned job ID:
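A sketch of a detached submission (JAR path and job ID are illustrative):

```shell
./bin/flink run --detached ./examples/streaming/WordCount.jar
# The client returns immediately and prints the assigned job ID, e.g.:
# Job has been submitted with JobID 5e20cb6b0f357591171dfcca2eea09de
```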
Passing configuration at submission
Pass any Flink configuration option with -D:
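For example, setting the checkpointing interval for this submission (the value is illustrative):

```shell
./bin/flink run -Dexecution.checkpointing.interval=30s ./examples/streaming/WordCount.jar
```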
On session clusters, only execution configuration parameters (those under the execution prefix) take effect at submission time. Cluster-level options like memory sizes are fixed at cluster startup.
Monitoring jobs
List all running and scheduled jobs:
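```shell
./bin/flink list
```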
Creating savepoints
A savepoint captures the full state of a running job so you can stop and resume it later. If triggering the savepoint takes a while, add --detached to avoid a client timeout:
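A sketch, assuming a job ID obtained from flink list and an illustrative target directory (the detached variant is available in recent Flink releases):

```shell
./bin/flink savepoint <jobId> /tmp/flink-savepoints
# Detached variant: the client returns a trigger ID instead of waiting:
./bin/flink savepoint <jobId> /tmp/flink-savepoints --detached
```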
Disposing a savepoint
Remove savepoint data and clean up metadata with --dispose. If the savepoint contains custom state instances, also pass the job JAR so the state classes can be loaded; otherwise disposal fails with a ClassNotFoundException:
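A sketch with illustrative paths:

```shell
./bin/flink savepoint --dispose /tmp/flink-savepoints/savepoint-xxxx
# With custom state classes, point the CLI at the job JAR:
./bin/flink savepoint --dispose /tmp/flink-savepoints/savepoint-xxxx --jarfile ./myJob.jar
```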
Creating checkpoints manually
You can trigger a checkpoint on demand:
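A sketch (the checkpoint action exists in newer Flink releases; the job ID is illustrative, and --full requests a full rather than incremental checkpoint):

```shell
./bin/flink checkpoint <jobId>
# Force a full, non-incremental checkpoint:
./bin/flink checkpoint <jobId> --full
```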
Stopping a job gracefully
The stop command sends a final checkpoint barrier from sources to sinks, creates a savepoint, then calls cancel() on each source. This is the recommended way to stop a streaming job you intend to resume:
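For example (the savepoint directory is illustrative; if omitted, the cluster's configured default savepoint directory is used):

```shell
./bin/flink stop --savepointPath /tmp/flink-savepoints <jobId>
```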
Add --drain to emit MAX_WATERMARK before stopping, flushing all event-time timers and windows:
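```shell
./bin/flink stop --drain --savepointPath /tmp/flink-savepoints <jobId>
```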
Cancelling a job
Cancel a job immediately without creating a savepoint. The job transitions from RUNNING to CANCELLED and all computation stops:
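```shell
./bin/flink cancel <jobId>
```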
Resuming from a savepoint
Restart a job from a previously created savepoint with --fromSavepoint. If the new program no longer contains an operator whose state is stored in the savepoint, skip that state with --allowNonRestoredState:
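A sketch, with illustrative savepoint and JAR paths:

```shell
./bin/flink run --fromSavepoint /tmp/flink-savepoints/savepoint-xxxx ./myJob.jar
# If the savepoint holds state for operators the new program dropped:
./bin/flink run --fromSavepoint /tmp/flink-savepoints/savepoint-xxxx --allowNonRestoredState ./myJob.jar
```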
Selecting a deployment target
Use --target to specify where and how to submit the job:
| Target | Description |
|---|---|
| remote | Submit to an already-running standalone cluster |
| local | Run in a local MiniCluster (development only) |
| yarn-session | Submit to a running Flink on YARN session |
| yarn-application | Start a new Flink cluster on YARN in Application mode |
| kubernetes-session | Submit to a running Flink on Kubernetes session |
| kubernetes-application | Start a new Flink cluster on Kubernetes in Application mode |
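For example (JAR paths are illustrative; the application-mode targets pair with the run-application action):

```shell
# Submit to a standalone session cluster:
./bin/flink run --target remote ./myJob.jar
# Start a dedicated YARN cluster for this job in Application mode:
./bin/flink run-application --target yarn-application ./myJob.jar
```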
CLI actions reference
| Action | Purpose |
|---|---|
| run | Submit and optionally execute a job |
| run-application | Submit a job in Application mode |
| info | Print the optimized execution graph without running the job |
| list | List running and scheduled jobs |
| savepoint | Create or dispose a savepoint |
| checkpoint | Trigger a manual checkpoint |
| cancel | Cancel a running job immediately |
| stop | Gracefully stop a job with a final savepoint |
Submitting PyFlink jobs
PyFlink jobs are submitted without a JAR file path. Python 3.9 or higher is required.
Python CLI options
| Option | Description |
|---|---|
| -py, --python | Python script containing the job entry point |
| -pym, --pyModule | Python module name containing the entry point (requires --pyFiles) |
| -pyfs, --pyFiles | Additional files or directories to add to PYTHONPATH |
| -pyarch, --pyArchives | Archive files (e.g., virtual environments) distributed to workers |
| -pyexec, --pyExecutable | Path to the Python interpreter used on workers |
| -pyclientexec, --pyClientExecutable | Path to the Python interpreter used on the client |
| -pyreq, --pyRequirements | requirements.txt file for third-party dependencies |
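A sketch of submitting a PyFlink job, with illustrative script and archive names:

```shell
./bin/flink run --python word_count.py
# Ship helper modules and a packed virtual environment to the workers;
# paths inside the archive are resolved relative to its extraction directory:
./bin/flink run \
  --python my_job.py \
  --pyFiles utils.py \
  --pyArchives venv.zip \
  --pyExecutable venv.zip/venv/bin/python
```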

