Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources
Sometimes you might see the following WARN log continuously on DAS analyser nodes. The __spark_meta_table table keeps metadata about the CAR files we deploy; when that information gets corrupted, this WARN log appears in the console.
"Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources"
This is a known issue in Spark, and a PR has already been submitted to Spark.
As a workaround, we can remove the __spark_meta_table table and restart the node.
Since the __spark_meta_table table is encrypted, we cannot simply delete it from the database; we have to use the Analytics data backup and restore tool.
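A minimal sketch of the workaround, assuming the analytics-backup.sh script is located in the <DAS_HOME>/bin directory of the analyser node (that location is an assumption; note the -tables argument correction in the Comments below):

# go to the directory that contains the backup tool script (assumed location)
cd <DAS_HOME>/bin
# delete the corrupted __spark_meta_table (note: -tables, not -table; tenant ID -5000 as in the command from the Comments)
./analytics-backup.sh -deleteTables -tables "__spark_meta_table" -tenantId -5000
# restart the DAS analyser node afterwards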
"Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources"
This is a known issue in spark and PR is already given to the spark.
As a workaround we can remove the __spark_meta_table table and restart the node.
Since the __spark_meta_table table is encrypted, we just can't delete it from database, we have to use the Analytics data backup and restore tool
Comments
If we execute this shell command with the -table argument, there will be a NullPointerException.
The correct command should be:
./analytics-backup.sh -deleteTables -tables "__spark_meta_table" -tenantId -5000
The correct argument is -tables, not -table.