Cannot start node if snitch's data center (dc1) differs from previous data center (datacenter1). Please fix the snitch configuration, decommission and rebootstrap this node or use the flag -Dcassandra.ignore_dc=true.

Recently, while working in a Cassandra environment, I encountered the error below, which prevented my Cassandra node from starting.

Error Details:
=========================

 Cannot start node if snitch's data center (dc1) differs from previous data center (datacenter1). Please fix the snitch configuration, decommission and rebootstrap this node or use the flag -Dcassandra.ignore_dc=true.

This error usually occurs when a node starts and sees locally stored information indicating that it was previously part of a differently named datacenter. In other words, the datacenter name was different on a prior boot and was then changed.

In my scenario, while adding a new DC to my existing setup, I had to change the value of the “endpoint_snitch” parameter in cassandra.yaml from SimpleSnitch to GossipingPropertyFileSnitch, because SimpleSnitch does not allow adding a new DC.

As per the DataStax documentation, SimpleSnitch can be used only for single-datacenter deployments:

https://docs.datastax.com/en/archived/cassandra/2.1/cassandra/architecture/architectureSnitchesAbout_c.html
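For reference, the change looks like the following (file locations vary by install; the DC and rack names here are from my setup). Note that SimpleSnitch hardcodes the datacenter name to “datacenter1”, while GossipingPropertyFileSnitch reads the DC and rack names from cassandra-rackdc.properties, which is exactly why the old and new names can disagree:

# in cassandra.yaml
endpoint_snitch: GossipingPropertyFileSnitch

# in cassandra-rackdc.properties
dc=dc1
rack=rack1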

After changing the value of the “endpoint_snitch” parameter, the node failed to start with the above-stated error.

Even though this error is quite descriptive and tells us to add the “-Dcassandra.ignore_dc=true” flag to get rid of it, the interesting part is that it nowhere mentions in which file this flag needs to be set.

Solution:
=========================

1. Add the “-Dcassandra.ignore_dc=true” flag on the last line of cassandra-env.sh, as given below. The cassandra-env.sh file is located in the Cassandra configuration directory.

JVM_OPTS="$JVM_OPTS -Dcassandra.ignore_dc=true"
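For example, to append the line (the path below is an assumption for a package install; tarball installs keep cassandra-env.sh under the conf/ directory):

echo 'JVM_OPTS="$JVM_OPTS -Dcassandra.ignore_dc=true"' >> /etc/cassandra/cassandra-env.sh

Alternatively, the flag can be passed once on the command line when starting the node, e.g. cassandra -Dcassandra.ignore_dc=true, which avoids leaving the override permanently in cassandra-env.sh.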

2. Once the node starts successfully, don’t forget to execute:
nodetool repair
nodetool cleanup
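Run these on every node that was restarted with the flag. To double-check that a node now gossips the new datacenter name, something like the following should print dc1 (nodetool gossipinfo shows a DC entry per endpoint):

nodetool gossipinfo | grep DC: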

At first, with the node still down, nodetool could not reach the local JMX port (7199):

[cassandra@oel9-server3 ~]$ nodetool status

nodetool: Failed to connect to '127.0.0.1:7199' - ConnectException: 'Connection refused (Connection refused)'.
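The “Connection refused” here just means the Cassandra process is not running yet, so nothing is listening on the JMX port. A quick way to verify, assuming ss is available on the host:

ss -ltn | grep 7199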

So I started the node:

[cassandra@server1 ~]$ cassandra

INFO  [main] 2025-02-01 08:40:42,811 Gossiper.java:2364 - No gossip backlog; proceeding

INFO  [main] 2025-02-01 08:40:42,847 CassandraDaemon.java:488 - Prewarming of auth caches is disabled

From the seed server, I then ran nodetool repair; you can find the output below.

[cassandra@server1 conf]$ nodetool repair

[2025-02-01 08:41:44,330] Replication factor is 1. No repair is needed for keyspace 'system_auth'

[2025-02-01 08:41:44,349] Replication factor is 1. No repair is needed for keyspace 'mydb'

[2025-02-01 08:41:44,476] Starting repair command #1 (43f81bf0-e04a-11ef-be77-273e06446ff4), repairing keyspace system_traces with repair options (parallelism: parallel, primary range: false, incremental: true, job threads: 1, ColumnFamilies: [], dataCenters: [], hosts: [], previewKind: NONE, # of ranges: 32, pull repair: false, force repair: false, optimise streams: false, ignore unreplicated keyspaces: false, repairPaxos: true, paxosOnly: false)

[2025-02-01 08:41:44,561] Repair command #1 failed with error Endpoint not alive: /192.168.17.137:7000

[2025-02-01 08:41:44,567] Repair command #1 finished with error

error: Repair job has failed with the error message: Repair command #1 failed with error Endpoint not alive: /192.168.17.137:7000. Check the logs on the repair participants for further details

-- StackTrace --

java.lang.RuntimeException: Repair job has failed with the error message: Repair command #1 failed with error Endpoint not alive: /192.168.17.137:7000. Check the logs on the repair participants for further details

        at org.apache.cassandra.tools.RepairRunner.progress(RepairRunner.java:137)

        at org.apache.cassandra.utils.progress.jmx.JMXNotificationProgressListener.handleNotification(JMXNotificationProgressListener.java:77)

        at com.sun.jmx.remote.internal.ClientNotifForwarder$NotifFetcher.dispatchNotification(ClientNotifForwarder.java:583)

        at com.sun.jmx.remote.internal.ClientNotifForwarder$NotifFetcher.doRun(ClientNotifForwarder.java:533)

        at com.sun.jmx.remote.internal.ClientNotifForwarder$NotifFetcher.run(ClientNotifForwarder.java:452)

        at com.sun.jmx.remote.internal.ClientNotifForwarder$LinearExecutor$1.run(ClientNotifForwarder.java:108)
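The repair failed because peer 192.168.17.137 was not reachable on the inter-node storage port (7000) at that moment, so the repair has to be rerun once every node is up and the port is open. A quick reachability check, assuming netcat is installed:

nc -zv 192.168.17.137 7000

With all endpoints alive again, nodetool status confirmed that every node was up and now reporting the new datacenter name (dc1):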


[cassandra@server1 ~]$ nodetool status
Datacenter: dc1
===============
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address         Load        Tokens  Owns (effective)  Host ID                               Rack
UN  192.168.17.142  1.64 MiB    16      59.3%             ecb093ac-2b06-44d3-b2ad-62bc0bf9720f  rack1
UN  192.168.17.137  1.86 MiB    16      76.0%             addec934-b398-4413-aa5b-4e4ae4436379  rack1
UN  192.168.17.139  204.01 KiB  16      64.7%             13cbe655-5743-45e8-8735-a57433a4b461  rack1

