Does anyone have experience with Kinesis Data Analytics and Flink + Python? I'm trying to run a simple application that reads from a Kinesis stream and sinks into S3. Checking logs is a nightmare; they seem to be embedded in a JSON field.
I managed to get it working locally (well, it doesn't crash…) with the following conf:
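For anyone hitting the same logging pain: the CloudWatch entries from Kinesis Data Analytics wrap the actual Flink log line in a JSON envelope, so I've been pulling out the `message` field before reading them. A rough sketch (the envelope fields shown in the sample are just how mine look; adjust to your records):

```python
import json

def extract_messages(raw_events):
    """Pull the human-readable Flink log line out of the JSON envelope
    that Kinesis Data Analytics writes to CloudWatch."""
    messages = []
    for event in raw_events:
        try:
            body = json.loads(event)
            # the Flink line itself sits in the "message" field
            messages.append(body.get("message", ""))
        except json.JSONDecodeError:
            # not JSON (e.g. a plain stdout line) -- keep as-is
            messages.append(event)
    return messages

# example: one wrapped record as fetched from CloudWatch Logs
sample = ['{"locationInformation":"x","logger":"y","message":"Job switched to RUNNING","threadName":"flink-akka","messageType":"INFO"}']
print(extract_messages(sample))  # -> ['Job switched to RUNNING']
```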

    os.environ['AWS_PROFILE'] = 'dev'
    os.environ['AWS_DEFAULT_REGION'] = 'us-east-2'

    # only for local: overwrite the properties variable and pass in your jars delimited by a semicolon (;)
    APPLICATION_PROPERTIES_FILE_PATH = "application_properties.json"  # local

    CURRENT_DIR = os.path.dirname(os.path.realpath(__file__))
    table_env.get_config().get_configuration().set_string(
        "pipeline.jars",
        "file:///"
        + CURRENT_DIR
        + "/flink-sql-connector-kinesis_2.12-1.13.0.jar;file:///"
        + CURRENT_DIR
        + "/flink-s3-fs-hadoop-1.13.0.jar",
    )

    table_env.get_config().get_configuration().set_string("execution.checkpointing.mode", "EXACTLY_ONCE")
    table_env.get_config().get_configuration().set_string("execution.checkpointing.interval", "1min")
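For reference, my local `application_properties.json` mimics the shape the KDA runtime passes to the app (a list of property groups); I then grab the group I need like this (the group id and key names here are my own, not anything standard):

```python
import json

def get_property_group(props, group_id):
    """Return the PropertyMap of one property group, or None.
    `props` mirrors the JSON list Kinesis Data Analytics hands
    a Python Flink app at runtime."""
    for group in props:
        if group.get("PropertyGroupId") == group_id:
            return group.get("PropertyMap")
    return None

# what my local application_properties.json looks like (names are mine)
props = json.loads("""
[
  {
    "PropertyGroupId": "consumer.config.0",
    "PropertyMap": {
      "input.stream.name": "my-input-stream",
      "aws.region": "us-east-2",
      "flink.stream.initpos": "LATEST"
    }
  }
]
""")
print(get_property_group(props, "consumer.config.0")["input.stream.name"])
# -> my-input-stream
```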
In AWS I use `amazon-kinesis-connector-flink-2.0.2.jar`, and now I see no logs at all; in the Flink dashboard I don't see any running jobs, and no logs either.

I'm basically unable to even get this example working:

    org.apache.flink.table.api.ValidationException: Unable to create a source for reading table 'default_catalog.default_database.input_table'.

and tons more errors…
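From what I can tell, that `ValidationException` usually means the planner can't find a factory for the `connector` named in the DDL, i.e. the Kinesis connector jar isn't actually on the classpath, or the `WITH` option names don't match the connector version in use. The DDL I'd expect to work with the Flink 1.13 `flink-sql-connector-kinesis` jar looks roughly like this (stream name, region, and columns are placeholders for my setup; option names may differ for the `amazon-kinesis-connector-flink` jar):

```
CREATE TABLE input_table (
  ticker VARCHAR(6),
  price DOUBLE,
  event_time TIMESTAMP(3)
) WITH (
  'connector' = 'kinesis',
  'stream' = 'my-input-stream',
  'aws.region' = 'us-east-2',
  'scan.stream.initpos' = 'LATEST',
  'format' = 'json'
)
```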

Will definitely try/test the Java version as well.