How the driver replicates prepared statements on a node that just came back up or joined the cluster.
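For context, here is a minimal sketch of the behavior being configured, assuming the DataStax Java driver 4.x API (which these parameters mirror): any statement prepared through the session is recorded by the driver and becomes a candidate for repreparation on a node that comes back up or joins.

```scala
import com.datastax.oss.driver.api.core.CqlSession

object PrepareSketch {
  def main(args: Array[String]): Unit = {
    // Contact points and all other settings come from the driver configuration.
    val session = CqlSession.builder().build()
    try {
      // The driver tracks this statement; when a node comes up, it can
      // reprepare it there according to the parameters described below.
      session.prepare("SELECT release_version FROM system.local")
    } finally session.close()
  }
}
```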
- Value parameters:
- checkSystemTable
Whether to check system.prepared_statements on the target node before repreparing. This table exists since CASSANDRA-8831 (merged in Cassandra 3.10). It stores the statements already prepared on the node, and preserves them across restarts. Checking the table first avoids repreparing unnecessarily, but the cost of the query is not always worth the improvement, especially if the number of statements is low. If the table does not exist, or the query fails for any other reason, the error is ignored and the driver proceeds to reprepare statements according to the other parameters.
- enabled
Whether the driver tries to prepare on new nodes at all. You might want to disable this to speed up reconnection when you believe nodes are often marked down because of temporary network issues rather than an actual crash. In that case the node still has its prepared statements in its cache when the driver reconnects, so repreparing is redundant. On the other hand, if that assumption turns out to be wrong and the node really did restart, its prepared statement cache is empty (before CASSANDRA-8831), and statements need to be reprepared on the fly the first time they are executed; this causes a performance penalty (one extra roundtrip to resend the query to prepare, and another to retry the execution). A sketch of setting these options programmatically follows this parameter list.
- maxParallelism
The maximum number of concurrent requests when repreparing.
- maxStatements
The maximum number of statements that should be reprepared. 0 or a negative value means no limit.
- timeout
The request timeout. This applies both to querying the system.prepared_statements table (if relevant) and to the prepare requests themselves.
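Taken together, and assuming these parameters mirror the advanced.prepared-statements.reprepare-on-up options of the DataStax Java driver 4.x, a sketch of setting them programmatically might look like the following (the values shown are illustrative, not recommendations):

```scala
import java.time.Duration

import com.datastax.oss.driver.api.core.CqlSession
import com.datastax.oss.driver.api.core.config.{DefaultDriverOption, DriverConfigLoader}

object ReprepareOnUpConfigSketch {
  def main(args: Array[String]): Unit = {
    // Override the reprepare-on-up settings; each option corresponds to one
    // of the parameters documented above.
    val loader = DriverConfigLoader
      .programmaticBuilder()
      .withBoolean(DefaultDriverOption.REPREPARE_ENABLED, true)
      .withBoolean(DefaultDriverOption.REPREPARE_CHECK_SYSTEM_TABLE, true)
      .withInt(DefaultDriverOption.REPREPARE_MAX_STATEMENTS, 0) // 0 = no limit
      .withInt(DefaultDriverOption.REPREPARE_MAX_PARALLELISM, 100)
      .withDuration(DefaultDriverOption.REPREPARE_TIMEOUT, Duration.ofMillis(500))
      .build()

    val session = CqlSession.builder().withConfigLoader(loader).build()
    session.close()
  }
}
```

In the Java driver, the same settings can also be supplied declaratively in the configuration file, under the advanced.prepared-statements.reprepare-on-up section.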
- Companion: object