Attaching to von_node3_1, von_node1_1, von_webserver_1, von_node2_1, von_node4_1
node1_1 | Ledger does not exist - Creating...
node1_1 | von_generate_transactions -n 1
node1_1 |
node1_1 |
node1_1 | ================================================================================================
node1_1 | Generating genesis transaction file:
node1_1 | nodeArg: --nodeNum 1
node1_1 | ipAddresses: 10.0.2.15,10.0.2.15,10.0.2.15,10.0.2.15
node1_1 | genesisFilePath: /home/indy/ledger/sandbox/pool_transactions_genesis
node1_1 | ------------------------------------------------------------------------------------------------
node1_1 | generate_indy_pool_transactions --nodes 4 --clients 0 --nodeNum 1 --ips 10.0.2.15,10.0.2.15,10.0.2.15,10.0.2.15
node1_1 |
node1_1 | Generating keys for provided seed b'000000000000000000000000000Node1'
node1_1 | Init local keys for client-stack
node1_1 | Public key is HXrfcFWDjWusENBoXhV8mARzq51f1npWYWaA1GzfeMDG
node1_1 | Verification key is Gw6pDLhcBcoQesN72qfotTgFa7cbuqZpkX3Xo6pLhPhv
node1_1 | Init local keys for node-stack
node1_1 | Public key is HXrfcFWDjWusENBoXhV8mARzq51f1npWYWaA1GzfeMDG
node1_1 | Verification key is Gw6pDLhcBcoQesN72qfotTgFa7cbuqZpkX3Xo6pLhPhv
node1_1 | BLS Public key is 4N8aUNHSgjQVgkpm8nhNEfDf6txHznoYREg9kirmJrkivgL4oSEimFF6nsQ6M41QvhM2Z33nves5vfSn9n1UwNFJBYtWVnHYMATn76vLuL3zU88KyeAYcHfsih3He6UHcXDxcaecHVz6jhCYz1P2UZn2bDVruL5wXpehgBfBaLKm3Ba
node1_1 | Proof of possession for BLS key is RahHYiCvoNCtPTrVtP7nMC5eTYrsUA8WjXbdhNc8debh1agE9bGiJxWBXYNFbnJXoXhWFMvyqhqhRoq737YQemH5ik9oL7R4NTTCz2LEZhkgLJzB3QRQqJyBNyv7acbdHrAT8nQ9UkLbaVL9NBpnWXBTw4LEMePaSHEw66RzPNdAX1
node1_1 | Nodes will not run locally, so writing /etc/indy/indy.env
node1_1 | This node with name Node1 will use ports 9701 and 9702 for nodestack and clientstack respectively
node1_1 | BLS Public key is 37rAPpXVoxzKhz7d9gkUe52XuXryuLXoM6P6LbWDB7LSbG62Lsb33sfG7zqS8TK1MXwuCHj1FKNzVpsnafmqLG1vXN88rt38mNFs9TENzm4QHdBzsvCuoBnPH7rpYYDo9DZNJePaDvRvqJKByCabubJz3XXKbEeshzpz4Ma5QYpJqjk
node1_1 | Proof of possession for BLS key is Qr658mWZ2YC8JXGXwMDQTzuZCWF7NK9EwxphGmcBvCh6ybUuLxbG65nsX4JvD4SPNtkJ2w9ug1yLTj6fgmuDg41TgECXjLCij3RMsV8CwewBVgVN67wsA45DFWvqvLtu4rjNnE9JbdFTc1Z4WCPA3Xan44K1HoHAq9EVeaRYs8zoF5
node1_1 | BLS Public key is 3WFpdbg7C5cnLYZwFZevJqhubkFALBfCBBok15GdrKMUhUjGsk3jV6QKj6MZgEubF7oqCafxNdkm7eswgA4sdKTRc82tLGzZBd6vNqU8dupzup6uYUf32KTHTPQbuUM8Yk4QFXjEf2Usu2TJcNkdgpyeUSX42u5LqdDDpNSWUK5deC5
node1_1 | Proof of possession for BLS key is QwDeb2CkNSx6r8QC8vGQK3GRv7Yndn84TGNijX8YXHPiagXajyfTjoR87rXUu4G4QLk2cF8NNyqWiYMus1623dELWwx57rLCFqGh7N4ZRbGDRP4fnVcaKg1BcUxQ866Ven4gw8y4N56S5HzxXNBZtLYmhGHvDtk6PFkFwCvxYrNYjh
node1_1 | BLS Public key is 2zN3bHM1m4rLz54MJHYSwvqzPchYp8jkHswveCLAEJVcX6Mm1wHQD1SkPYMzUDTZvWvhuE6VNAkK3KxVeEmsanSmvjVkReDeBEMxeDaayjcZjFGPydyey1qxBHmTvAnBKoPydvuTAqx5f7YNNRAdeLmUi99gERUU7TD8KfAa6MpQ9bw
node1_1 | Proof of possession for BLS key is RPLagxaR5xdimFzwmzYnz4ZhWtYQEj8iR5ZU53T2gitPCyCHQneUn2Huc4oeLd2B2HzkGnjAff4hWTJT6C7qHYB1Mv2wU5iHHGFWkhnTX9WsEAbunJCV2qcaXScKj4tTfvdDKfLiVuU2av6hbsMztirRze7LvYBkRHV3tGwyCptsrP
node1_1 | BLS Public key is 4N8aUNHSgjQVgkpm8nhNEfDf6txHznoYREg9kirmJrkivgL4oSEimFF6nsQ6M41QvhM2Z33nves5vfSn9n1UwNFJBYtWVnHYMATn76vLuL3zU88KyeAYcHfsih3He6UHcXDxcaecHVz6jhCYz1P2UZn2bDVruL5wXpehgBfBaLKm3Ba
node1_1 | Proof of possession for BLS key is RahHYiCvoNCtPTrVtP7nMC5eTYrsUA8WjXbdhNc8debh1agE9bGiJxWBXYNFbnJXoXhWFMvyqhqhRoq737YQemH5ik9oL7R4NTTCz2LEZhkgLJzB3QRQqJyBNyv7acbdHrAT8nQ9UkLbaVL9NBpnWXBTw4LEMePaSHEw66RzPNdAX1
node1_1 | BLS Public key is 37rAPpXVoxzKhz7d9gkUe52XuXryuLXoM6P6LbWDB7LSbG62Lsb33sfG7zqS8TK1MXwuCHj1FKNzVpsnafmqLG1vXN88rt38mNFs9TENzm4QHdBzsvCuoBnPH7rpYYDo9DZNJePaDvRvqJKByCabubJz3XXKbEeshzpz4Ma5QYpJqjk
node1_1 | Proof of possession for BLS key is Qr658mWZ2YC8JXGXwMDQTzuZCWF7NK9EwxphGmcBvCh6ybUuLxbG65nsX4JvD4SPNtkJ2w9ug1yLTj6fgmuDg41TgECXjLCij3RMsV8CwewBVgVN67wsA45DFWvqvLtu4rjNnE9JbdFTc1Z4WCPA3Xan44K1HoHAq9EVeaRYs8zoF5
node1_1 | BLS Public key is 3WFpdbg7C5cnLYZwFZevJqhubkFALBfCBBok15GdrKMUhUjGsk3jV6QKj6MZgEubF7oqCafxNdkm7eswgA4sdKTRc82tLGzZBd6vNqU8dupzup6uYUf32KTHTPQbuUM8Yk4QFXjEf2Usu2TJcNkdgpyeUSX42u5LqdDDpNSWUK5deC5
node1_1 | Proof of possession for BLS key is QwDeb2CkNSx6r8QC8vGQK3GRv7Yndn84TGNijX8YXHPiagXajyfTjoR87rXUu4G4QLk2cF8NNyqWiYMus1623dELWwx57rLCFqGh7N4ZRbGDRP4fnVcaKg1BcUxQ866Ven4gw8y4N56S5HzxXNBZtLYmhGHvDtk6PFkFwCvxYrNYjh
node1_1 | BLS Public key is 2zN3bHM1m4rLz54MJHYSwvqzPchYp8jkHswveCLAEJVcX6Mm1wHQD1SkPYMzUDTZvWvhuE6VNAkK3KxVeEmsanSmvjVkReDeBEMxeDaayjcZjFGPydyey1qxBHmTvAnBKoPydvuTAqx5f7YNNRAdeLmUi99gERUU7TD8KfAa6MpQ9bw
node1_1 | Proof of possession for BLS key is RPLagxaR5xdimFzwmzYnz4ZhWtYQEj8iR5ZU53T2gitPCyCHQneUn2Huc4oeLd2B2HzkGnjAff4hWTJT6C7qHYB1Mv2wU5iHHGFWkhnTX9WsEAbunJCV2qcaXScKj4tTfvdDKfLiVuU2av6hbsMztirRze7LvYBkRHV3tGwyCptsrP
node1_1 |
node1_1 | ------------------------------------------------------------------------------------------------
node1_1 | Generated genesis transaction file; /home/indy/ledger/sandbox/pool_transactions_genesis
node1_1 |
node1_1 | {"reqSignature":{},"txn":{"data":{"data":{"alias":"Node1","blskey":"4N8aUNHSgjQVgkpm8nhNEfDf6txHznoYREg9kirmJrkivgL4oSEimFF6nsQ6M41QvhM2Z33nves5vfSn9n1UwNFJBYtWVnHYMATn76vLuL3zU88KyeAYcHfsih3He6UHcXDxcaecHVz6jhCYz1P2UZn2bDVruL5wXpehgBfBaLKm3Ba","blskey_pop":"RahHYiCvoNCtPTrVtP7nMC5eTYrsUA8WjXbdhNc8debh1agE9bGiJxWBXYNFbnJXoXhWFMvyqhqhRoq737YQemH5ik9oL7R4NTTCz2LEZhkgLJzB3QRQqJyBNyv7acbdHrAT8nQ9UkLbaVL9NBpnWXBTw4LEMePaSHEw66RzPNdAX1","client_ip":"10.0.2.15","client_port":9702,"node_ip":"10.0.2.15","node_port":9701,"services":["VALIDATOR"]},"dest":"Gw6pDLhcBcoQesN72qfotTgFa7cbuqZpkX3Xo6pLhPhv"},"metadata":{"from":"Th7MpTaRZVRYnPiabds81Y"},"type":"0"},"txnMetadata":{"seqNo":1,"txnId":"fea82e10e894419fe2bea7d96296a6d46f50f93f9eeda954ec461b2ed2950b62"},"ver":"1"}
node1_1 | {"reqSignature":{},"txn":{"data":{"data":{"alias":"Node2","blskey":"37rAPpXVoxzKhz7d9gkUe52XuXryuLXoM6P6LbWDB7LSbG62Lsb33sfG7zqS8TK1MXwuCHj1FKNzVpsnafmqLG1vXN88rt38mNFs9TENzm4QHdBzsvCuoBnPH7rpYYDo9DZNJePaDvRvqJKByCabubJz3XXKbEeshzpz4Ma5QYpJqjk","blskey_pop":"Qr658mWZ2YC8JXGXwMDQTzuZCWF7NK9EwxphGmcBvCh6ybUuLxbG65nsX4JvD4SPNtkJ2w9ug1yLTj6fgmuDg41TgECXjLCij3RMsV8CwewBVgVN67wsA45DFWvqvLtu4rjNnE9JbdFTc1Z4WCPA3Xan44K1HoHAq9EVeaRYs8zoF5","client_ip":"10.0.2.15","client_port":9704,"node_ip":"10.0.2.15","node_port":9703,"services":["VALIDATOR"]},"dest":"8ECVSk179mjsjKRLWiQtssMLgp6EPhWXtaYyStWPSGAb"},"metadata":{"from":"EbP4aYNeTHL6q385GuVpRV"},"type":"0"},"txnMetadata":{"seqNo":2,"txnId":"1ac8aece2a18ced660fef8694b61aac3af08ba875ce3026a160acbc3a3af35fc"},"ver":"1"}
node1_1 | {"reqSignature":{},"txn":{"data":{"data":{"alias":"Node3","blskey":"3WFpdbg7C5cnLYZwFZevJqhubkFALBfCBBok15GdrKMUhUjGsk3jV6QKj6MZgEubF7oqCafxNdkm7eswgA4sdKTRc82tLGzZBd6vNqU8dupzup6uYUf32KTHTPQbuUM8Yk4QFXjEf2Usu2TJcNkdgpyeUSX42u5LqdDDpNSWUK5deC5","blskey_pop":"QwDeb2CkNSx6r8QC8vGQK3GRv7Yndn84TGNijX8YXHPiagXajyfTjoR87rXUu4G4QLk2cF8NNyqWiYMus1623dELWwx57rLCFqGh7N4ZRbGDRP4fnVcaKg1BcUxQ866Ven4gw8y4N56S5HzxXNBZtLYmhGHvDtk6PFkFwCvxYrNYjh","client_ip":"10.0.2.15","client_port":9706,"node_ip":"10.0.2.15","node_port":9705,"services":["VALIDATOR"]},"dest":"DKVxG2fXXTU8yT5N7hGEbXB3dfdAnYv1JczDUHpmDxya"},"metadata":{"from":"4cU41vWW82ArfxJxHkzXPG"},"type":"0"},"txnMetadata":{"seqNo":3,"txnId":"7e9f355dffa78ed24668f0e0e369fd8c224076571c51e2ea8be5f26479edebe4"},"ver":"1"}
node2_1 | Ledger does not exist - Creating...
node2_1 | von_generate_transactions -n 2
node2_1 |
node2_1 |
node2_1 | ================================================================================================
node2_1 | Generating genesis transaction file:
node2_1 | nodeArg: --nodeNum 2
node2_1 | ipAddresses: 10.0.2.15,10.0.2.15,10.0.2.15,10.0.2.15
node2_1 | genesisFilePath: /home/indy/ledger/sandbox/pool_transactions_genesis
node2_1 | ------------------------------------------------------------------------------------------------
node2_1 | generate_indy_pool_transactions --nodes 4 --clients 0 --nodeNum 2 --ips 10.0.2.15,10.0.2.15,10.0.2.15,10.0.2.15
node2_1 |
node2_1 | BLS Public key is 4N8aUNHSgjQVgkpm8nhNEfDf6txHznoYREg9kirmJrkivgL4oSEimFF6nsQ6M41QvhM2Z33nves5vfSn9n1UwNFJBYtWVnHYMATn76vLuL3zU88KyeAYcHfsih3He6UHcXDxcaecHVz6jhCYz1P2UZn2bDVruL5wXpehgBfBaLKm3Ba
node2_1 | Proof of possession for BLS key is RahHYiCvoNCtPTrVtP7nMC5eTYrsUA8WjXbdhNc8debh1agE9bGiJxWBXYNFbnJXoXhWFMvyqhqhRoq737YQemH5ik9oL7R4NTTCz2LEZhkgLJzB3QRQqJyBNyv7acbdHrAT8nQ9UkLbaVL9NBpnWXBTw4LEMePaSHEw66RzPNdAX1
node2_1 | Generating keys for provided seed b'000000000000000000000000000Node2'
node2_1 | Init local keys for client-stack
node2_1 | Public key is Fsp2dyt7D2B4GA53hKnEmLym5Y75ExGFz2ZBzcQMNKsB
node2_1 | Verification key is 8ECVSk179mjsjKRLWiQtssMLgp6EPhWXtaYyStWPSGAb
node2_1 | Init local keys for node-stack
node2_1 | Public key is Fsp2dyt7D2B4GA53hKnEmLym5Y75ExGFz2ZBzcQMNKsB
node2_1 | Verification key is 8ECVSk179mjsjKRLWiQtssMLgp6EPhWXtaYyStWPSGAb
node2_1 | BLS Public key is 37rAPpXVoxzKhz7d9gkUe52XuXryuLXoM6P6LbWDB7LSbG62Lsb33sfG7zqS8TK1MXwuCHj1FKNzVpsnafmqLG1vXN88rt38mNFs9TENzm4QHdBzsvCuoBnPH7rpYYDo9DZNJePaDvRvqJKByCabubJz3XXKbEeshzpz4Ma5QYpJqjk
node2_1 | Proof of possession for BLS key is Qr658mWZ2YC8JXGXwMDQTzuZCWF7NK9EwxphGmcBvCh6ybUuLxbG65nsX4JvD4SPNtkJ2w9ug1yLTj6fgmuDg41TgECXjLCij3RMsV8CwewBVgVN67wsA45DFWvqvLtu4rjNnE9JbdFTc1Z4WCPA3Xan44K1HoHAq9EVeaRYs8zoF5
node2_1 | Nodes will not run locally, so writing /etc/indy/indy.env
node2_1 | This node with name Node2 will use ports 9703 and 9704 for nodestack and clientstack respectively
node2_1 | BLS Public key is 3WFpdbg7C5cnLYZwFZevJqhubkFALBfCBBok15GdrKMUhUjGsk3jV6QKj6MZgEubF7oqCafxNdkm7eswgA4sdKTRc82tLGzZBd6vNqU8dupzup6uYUf32KTHTPQbuUM8Yk4QFXjEf2Usu2TJcNkdgpyeUSX42u5LqdDDpNSWUK5deC5
node2_1 | Proof of possession for BLS key is QwDeb2CkNSx6r8QC8vGQK3GRv7Yndn84TGNijX8YXHPiagXajyfTjoR87rXUu4G4QLk2cF8NNyqWiYMus1623dELWwx57rLCFqGh7N4ZRbGDRP4fnVcaKg1BcUxQ866Ven4gw8y4N56S5HzxXNBZtLYmhGHvDtk6PFkFwCvxYrNYjh
node2_1 | BLS Public key is 2zN3bHM1m4rLz54MJHYSwvqzPchYp8jkHswveCLAEJVcX6Mm1wHQD1SkPYMzUDTZvWvhuE6VNAkK3KxVeEmsanSmvjVkReDeBEMxeDaayjcZjFGPydyey1qxBHmTvAnBKoPydvuTAqx5f7YNNRAdeLmUi99gERUU7TD8KfAa6MpQ9bw
node2_1 | Proof of possession for BLS key is RPLagxaR5xdimFzwmzYnz4ZhWtYQEj8iR5ZU53T2gitPCyCHQneUn2Huc4oeLd2B2HzkGnjAff4hWTJT6C7qHYB1Mv2wU5iHHGFWkhnTX9WsEAbunJCV2qcaXScKj4tTfvdDKfLiVuU2av6hbsMztirRze7LvYBkRHV3tGwyCptsrP
node2_1 | BLS Public key is 4N8aUNHSgjQVgkpm8nhNEfDf6txHznoYREg9kirmJrkivgL4oSEimFF6nsQ6M41QvhM2Z33nves5vfSn9n1UwNFJBYtWVnHYMATn76vLuL3zU88KyeAYcHfsih3He6UHcXDxcaecHVz6jhCYz1P2UZn2bDVruL5wXpehgBfBaLKm3Ba
node2_1 | Proof of possession for BLS key is RahHYiCvoNCtPTrVtP7nMC5eTYrsUA8WjXbdhNc8debh1agE9bGiJxWBXYNFbnJXoXhWFMvyqhqhRoq737YQemH5ik9oL7R4NTTCz2LEZhkgLJzB3QRQqJyBNyv7acbdHrAT8nQ9UkLbaVL9NBpnWXBTw4LEMePaSHEw66RzPNdAX1
node2_1 | BLS Public key is 37rAPpXVoxzKhz7d9gkUe52XuXryuLXoM6P6LbWDB7LSbG62Lsb33sfG7zqS8TK1MXwuCHj1FKNzVpsnafmqLG1vXN88rt38mNFs9TENzm4QHdBzsvCuoBnPH7rpYYDo9DZNJePaDvRvqJKByCabubJz3XXKbEeshzpz4Ma5QYpJqjk
node2_1 | Proof of possession for BLS key is Qr658mWZ2YC8JXGXwMDQTzuZCWF7NK9EwxphGmcBvCh6ybUuLxbG65nsX4JvD4SPNtkJ2w9ug1yLTj6fgmuDg41TgECXjLCij3RMsV8CwewBVgVN67wsA45DFWvqvLtu4rjNnE9JbdFTc1Z4WCPA3Xan44K1HoHAq9EVeaRYs8zoF5
node2_1 | BLS Public key is 3WFpdbg7C5cnLYZwFZevJqhubkFALBfCBBok15GdrKMUhUjGsk3jV6QKj6MZgEubF7oqCafxNdkm7eswgA4sdKTRc82tLGzZBd6vNqU8dupzup6uYUf32KTHTPQbuUM8Yk4QFXjEf2Usu2TJcNkdgpyeUSX42u5LqdDDpNSWUK5deC5
node2_1 | Proof of possession for BLS key is QwDeb2CkNSx6r8QC8vGQK3GRv7Yndn84TGNijX8YXHPiagXajyfTjoR87rXUu4G4QLk2cF8NNyqWiYMus1623dELWwx57rLCFqGh7N4ZRbGDRP4fnVcaKg1BcUxQ866Ven4gw8y4N56S5HzxXNBZtLYmhGHvDtk6PFkFwCvxYrNYjh
node2_1 | BLS Public key is 2zN3bHM1m4rLz54MJHYSwvqzPchYp8jkHswveCLAEJVcX6Mm1wHQD1SkPYMzUDTZvWvhuE6VNAkK3KxVeEmsanSmvjVkReDeBEMxeDaayjcZjFGPydyey1qxBHmTvAnBKoPydvuTAqx5f7YNNRAdeLmUi99gERUU7TD8KfAa6MpQ9bw
node2_1 | Proof of possession for BLS key is RPLagxaR5xdimFzwmzYnz4ZhWtYQEj8iR5ZU53T2gitPCyCHQneUn2Huc4oeLd2B2HzkGnjAff4hWTJT6C7qHYB1Mv2wU5iHHGFWkhnTX9WsEAbunJCV2qcaXScKj4tTfvdDKfLiVuU2av6hbsMztirRze7LvYBkRHV3tGwyCptsrP
node2_1 |
node2_1 | ------------------------------------------------------------------------------------------------
node2_1 | Generated genesis transaction file; /home/indy/ledger/sandbox/pool_transactions_genesis
node2_1 |
node2_1 | {"reqSignature":{},"txn":{"data":{"data":{"alias":"Node1","blskey":"4N8aUNHSgjQVgkpm8nhNEfDf6txHznoYREg9kirmJrkivgL4oSEimFF6nsQ6M41QvhM2Z33nves5vfSn9n1UwNFJBYtWVnHYMATn76vLuL3zU88KyeAYcHfsih3He6UHcXDxcaecHVz6jhCYz1P2UZn2bDVruL5wXpehgBfBaLKm3Ba","blskey_pop":"RahHYiCvoNCtPTrVtP7nMC5eTYrsUA8WjXbdhNc8debh1agE9bGiJxWBXYNFbnJXoXhWFMvyqhqhRoq737YQemH5ik9oL7R4NTTCz2LEZhkgLJzB3QRQqJyBNyv7acbdHrAT8nQ9UkLbaVL9NBpnWXBTw4LEMePaSHEw66RzPNdAX1","client_ip":"10.0.2.15","client_port":9702,"node_ip":"10.0.2.15","node_port":9701,"services":["VALIDATOR"]},"dest":"Gw6pDLhcBcoQesN72qfotTgFa7cbuqZpkX3Xo6pLhPhv"},"metadata":{"from":"Th7MpTaRZVRYnPiabds81Y"},"type":"0"},"txnMetadata":{"seqNo":1,"txnId":"fea82e10e894419fe2bea7d96296a6d46f50f93f9eeda954ec461b2ed2950b62"},"ver":"1"}
node2_1 | {"reqSignature":{},"txn":{"data":{"data":{"alias":"Node2","blskey":"37rAPpXVoxzKhz7d9gkUe52XuXryuLXoM6P6LbWDB7LSbG62Lsb33sfG7zqS8TK1MXwuCHj1FKNzVpsnafmqLG1vXN88rt38mNFs9TENzm4QHdBzsvCuoBnPH7rpYYDo9DZNJePaDvRvqJKByCabubJz3XXKbEeshzpz4Ma5QYpJqjk","blskey_pop":"Qr658mWZ2YC8JXGXwMDQTzuZCWF7NK9EwxphGmcBvCh6ybUuLxbG65nsX4JvD4SPNtkJ2w9ug1yLTj6fgmuDg41TgECXjLCij3RMsV8CwewBVgVN67wsA45DFWvqvLtu4rjNnE9JbdFTc1Z4WCPA3Xan44K1HoHAq9EVeaRYs8zoF5","client_ip":"10.0.2.15","client_port":9704,"node_ip":"10.0.2.15","node_port":9703,"services":["VALIDATOR"]},"dest":"8ECVSk179mjsjKRLWiQtssMLgp6EPhWXtaYyStWPSGAb"},"metadata":{"from":"EbP4aYNeTHL6q385GuVpRV"},"type":"0"},"txnMetadata":{"seqNo":2,"txnId":"1ac8aece2a18ced660fef8694b61aac3af08ba875ce3026a160acbc3a3af35fc"},"ver":"1"}
node2_1 | {"reqSignature":{},"txn":{"data":{"data":{"alias":"Node3","blskey":"3WFpdbg7C5cnLYZwFZevJqhubkFALBfCBBok15GdrKMUhUjGsk3jV6QKj6MZgEubF7oqCafxNdkm7eswgA4sdKTRc82tLGzZBd6vNqU8dupzup6uYUf32KTHTPQbuUM8Yk4QFXjEf2Usu2TJcNkdgpyeUSX42u5LqdDDpNSWUK5deC5","blskey_pop":"QwDeb2CkNSx6r8QC8vGQK3GRv7Yndn84TGNijX8YXHPiagXajyfTjoR87rXUu4G4QLk2cF8NNyqWiYMus1623dELWwx57rLCFqGh7N4ZRbGDRP4fnVcaKg1BcUxQ866Ven4gw8y4N56S5HzxXNBZtLYmhGHvDtk6PFkFwCvxYrNYjh","client_ip":"10.0.2.15","client_port":9706,"node_ip":"10.0.2.15","node_port":9705,"services":["VALIDATOR"]},"dest":"DKVxG2fXXTU8yT5N7hGEbXB3dfdAnYv1JczDUHpmDxya"},"metadata":{"from":"4cU41vWW82ArfxJxHkzXPG"},"type":"0"},"txnMetadata":{"seqNo":3,"txnId":"7e9f355dffa78ed24668f0e0e369fd8c224076571c51e2ea8be5f26479edebe4"},"ver":"1"}
node1_1 | {"reqSignature":{},"txn":{"data":{"data":{"alias":"Node4","blskey":"2zN3bHM1m4rLz54MJHYSwvqzPchYp8jkHswveCLAEJVcX6Mm1wHQD1SkPYMzUDTZvWvhuE6VNAkK3KxVeEmsanSmvjVkReDeBEMxeDaayjcZjFGPydyey1qxBHmTvAnBKoPydvuTAqx5f7YNNRAdeLmUi99gERUU7TD8KfAa6MpQ9bw","blskey_pop":"RPLagxaR5xdimFzwmzYnz4ZhWtYQEj8iR5ZU53T2gitPCyCHQneUn2Huc4oeLd2B2HzkGnjAff4hWTJT6C7qHYB1Mv2wU5iHHGFWkhnTX9WsEAbunJCV2qcaXScKj4tTfvdDKfLiVuU2av6hbsMztirRze7LvYBkRHV3tGwyCptsrP","client_ip":"10.0.2.15","client_port":9708,"node_ip":"10.0.2.15","node_port":9707,"services":["VALIDATOR"]},"dest":"4PS3EDQ3dW1tci1Bp6543CfuuebjFrg36kLAUcskGfaA"},"metadata":{"from":"TWwCRQRZ2ZHMJFn9TzLp7W"},"type":"0"},"txnMetadata":{"seqNo":4,"txnId":"aa5e817d7cc626170eca175822029339a444eb0ee8f0bd20d3b0b76e566fb008"},"ver":"1"}
node1_1 | ================================================================================================
node1_1 |
node1_1 | start_indy_node Node1 0.0.0.0 9701 0.0.0.0 9702
node1_1 | 2020-12-15 12:35:07,812|INFO|notifier_plugin_manager.py|Found notifier plugins: []
node1_1 | 2020-12-15 12:35:09,408|INFO|notifier_plugin_manager.py|Found notifier plugins: []
node1_1 | 2020-12-15 12:35:10,358|INFO|looper.py|Starting up indy-node
node1_1 | 2020-12-15 12:35:10,687|INFO|ledger.py|Starting ledger...
node1_1 | 2020-12-15 12:35:10,767|INFO|ledger.py|Recovering tree from transaction log
node1_1 | 2020-12-15 12:35:10,902|INFO|ledger.py|Recovered tree in 0.13536232899991774 seconds
node1_1 | 2020-12-15 12:35:11,131|INFO|ledger.py|Starting ledger...
node1_1 | 2020-12-15 12:35:11,193|INFO|ledger.py|Recovering tree from transaction log
node1_1 | 2020-12-15 12:35:11,331|INFO|ledger.py|Recovered tree in 0.13853048999999373 seconds
node1_1 | 2020-12-15 12:35:11,489|INFO|ledger.py|Starting ledger...
node1_1 | 2020-12-15 12:35:11,544|INFO|ledger.py|Recovering tree from transaction log
node1_1 | 2020-12-15 12:35:11,666|INFO|ledger.py|Recovered tree in 0.1217335640001238 seconds
node1_1 | 2020-12-15 12:35:11,810|INFO|ledger.py|Starting ledger...
node1_1 | 2020-12-15 12:35:11,851|INFO|ledger.py|Recovering tree from transaction log
node1_1 | 2020-12-15 12:35:11,934|INFO|ledger.py|Recovered tree in 0.08298903400009294 seconds
node1_1 | 2020-12-15 12:35:12,237|NOTIFICATION|node_bootstrap.py|BLS: BLS Signatures will be used for Node Node1
node1_1 | 2020-12-15 12:35:12,238|INFO|pool_manager.py|Node1 sets node Node1 (Gw6pDLhcBcoQesN72qfotTgFa7cbuqZpkX3Xo6pLhPhv) order to 5
node1_1 | 2020-12-15 12:35:12,238|INFO|pool_manager.py|Node1 sets node Node2 (8ECVSk179mjsjKRLWiQtssMLgp6EPhWXtaYyStWPSGAb) order to 5
node1_1 | 2020-12-15 12:35:12,238|INFO|pool_manager.py|Node1 sets node Node3 (DKVxG2fXXTU8yT5N7hGEbXB3dfdAnYv1JczDUHpmDxya) order to 5
node1_1 | 2020-12-15 12:35:12,238|INFO|pool_manager.py|Node1 sets node Node4 (4PS3EDQ3dW1tci1Bp6543CfuuebjFrg36kLAUcskGfaA) order to 5
node1_1 | 2020-12-15 12:35:12,281|INFO|notifier_plugin_manager.py|Found notifier plugins: []
node1_1 | 2020-12-15 12:35:12,285|INFO|notifier_plugin_manager.py|Found notifier plugins: []
node1_1 | 2020-12-15 12:35:12,306|INFO|stacks.py|Node1C: clients connections tracking is enabled.
node1_1 | 2020-12-15 12:35:12,307|INFO|stacks.py|Node1C: client stack restart is enabled.
node1_1 | 2020-12-15 12:35:12,560|NOTIFICATION|plugin_loader.py|skipping plugin plugin_firebase_stats_consumer[class: typing.Dict] because it does not have a 'pluginType' attribute
node1_1 | 2020-12-15 12:35:12,560|NOTIFICATION|plugin_loader.py|skipping plugin plugin_firebase_stats_consumer[class: ] because it does not have a 'pluginType' attribute
node1_1 | 2020-12-15 12:35:12,560|NOTIFICATION|plugin_loader.py|skipping plugin plugin_firebase_stats_consumer[class: ] because it does not have a 'pluginType' attribute
node1_1 | 2020-12-15 12:35:12,560|NOTIFICATION|plugin_loader.py|skipping plugin plugin_firebase_stats_consumer[class: ] because it does not have a 'pluginType' attribute
node1_1 | 2020-12-15 12:35:12,561|NOTIFICATION|plugin_loader.py|skipping plugin plugin_firebase_stats_consumer[class: ] because it does not have a 'pluginType' attribute
node1_1 | 2020-12-15 12:35:12,561|INFO|plugin_loader.py|plugin FirebaseStatsConsumer successfully loaded from module plugin_firebase_stats_consumer
node1_1 | 2020-12-15 12:35:12,564|INFO|node.py|Node1 updated its pool parameters: f 1, totalNodes 4, allNodeNames {'Node2', 'Node3', 'Node4', 'Node1'}, requiredNumberOfInstances 2, minimumNodes 3, quorums {'n': 4, 'f': 1, 'weak': Quorum(2), 'strong': Quorum(3), 'propagate': Quorum(2), 'prepare': Quorum(2), 'commit': Quorum(3), 'reply': Quorum(2), 'view_change': Quorum(3), 'election': Quorum(3), 'view_change_ack': Quorum(2), 'view_change_done': Quorum(3), 'same_consistency_proof': Quorum(2), 'consistency_proof': Quorum(2), 'ledger_status': Quorum(2), 'ledger_status_last_3PC': Quorum(2), 'checkpoint': Quorum(2), 'timestamp': Quorum(2), 'bls_signatures': Quorum(3), 'observer_data': Quorum(2), 'backup_instance_faulty': Quorum(2)}
node1_1 | 2020-12-15 12:35:12,564|INFO|consensus_shared_data.py|Node1:0 updated validators list to ['Node1', 'Node2', 'Node3', 'Node4']
node1_1 | 2020-12-15 12:35:12,571|INFO|primary_connection_monitor_service.py|Node1:0 scheduling primary connection check in 60 sec
node1_1 | 2020-12-15 12:35:12,571|NOTIFICATION|replicas.py|Node1 added replica Node1:0 to instance 0 (master)
node1_1 | 2020-12-15 12:35:12,571|INFO|replicas.py|reset monitor due to replica addition
node1_1 | 2020-12-15 12:35:12,572|INFO|consensus_shared_data.py|Node1:1 updated validators list to ['Node1', 'Node2', 'Node3', 'Node4']
node1_1 | 2020-12-15 12:35:12,577|NOTIFICATION|replicas.py|Node1 added replica Node1:1 to instance 1 (backup)
node1_1 | 2020-12-15 12:35:12,577|INFO|replicas.py|reset monitor due to replica addition
node1_1 | 2020-12-15 12:35:12,582|INFO|consensus_shared_data.py|Node1:0 updated validators list to {'Node2', 'Node3', 'Node4', 'Node1'}
node1_1 | 2020-12-15 12:35:12,583|INFO|consensus_shared_data.py|Node1:1 updated validators list to {'Node2', 'Node3', 'Node4', 'Node1'}
node1_1 | 2020-12-15 12:35:12,583|INFO|replica.py|Node1:0 setting primaryName for view no 0 to: Node1:0
node1_1 | 2020-12-15 12:35:12,583|NOTIFICATION|primary_connection_monitor_service.py|Node1:0 restored connection to primary of master
node1_1 | 2020-12-15 12:35:12,584|NOTIFICATION|node.py|PRIMARY SELECTION: selected primary Node1:0 for instance 0 (view 0)
node1_1 | 2020-12-15 12:35:12,584|INFO|replica.py|Node1:1 setting primaryName for view no 0 to: Node2:1
node1_1 | 2020-12-15 12:35:12,584|NOTIFICATION|node.py|PRIMARY SELECTION: selected primary Node2:1 for instance 1 (view 0)
node1_1 | 2020-12-15 12:35:12,585|INFO|node.py|total plugins loaded in node: 0
node1_1 | 2020-12-15 12:35:12,588|INFO|ledgers_bootstrap.py| found state to be empty, recreating from ledger 0
node1_1 | 2020-12-15 12:35:12,592|INFO|ledgers_bootstrap.py| initialized state for ledger 0: state root 9mYGVPSfvA24aJo5vnrC1JDBNf3dVSYEAJUVwaapkjeL
node1_1 | 2020-12-15 12:35:12,592|INFO|ledgers_bootstrap.py| found state to be empty, recreating from ledger 2
node1_1 | 2020-12-15 12:35:12,596|INFO|ledgers_bootstrap.py| initialized state for ledger 2: state root DfNLmH4DAHTKv63YPFJzuRdeEtVwF5RtVnvKYHd8iLEA
node1_1 | 2020-12-15 12:35:12,596|INFO|ledgers_bootstrap.py| found state to be empty, recreating from ledger 1
node1_1 | 2020-12-15 12:35:12,602|INFO|ledgers_bootstrap.py| initialized state for ledger 1: state root GWEizEWQdGH5QFdxRpoQGvPR42zSUEJoFkQKGBHiNDbM
node1_1 | 2020-12-15 12:35:12,602|INFO|motor.py|Node1 changing status from stopped to starting
node1_1 | 2020-12-15 12:35:12,603|INFO|zstack.py|Node1 will bind its listener at 0.0.0.0:9701
node1_1 | 2020-12-15 12:35:12,604|INFO|stacks.py|CONNECTION: Node1 listening for other nodes at 0.0.0.0:9701
node1_1 | 2020-12-15 12:35:12,609|INFO|zstack.py|Node1C will bind its listener at 0.0.0.0:9702
node3_1 | Ledger does not exist - Creating...
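As the "will use ports X and Y" and "will bind its listener" lines show, every node opens two listeners: a nodestack port for validator-to-validator traffic and a clientstack port one above it. The pairing inferred from this log (a convention of the generated sandbox, not something read from the von-network source):

# Inferred from the log: NodeN listens on 9699 + 2*N (nodestack) and 9700 + 2*N (clientstack).
for n in range(1, 5):
    print(f"Node{n}: nodestack {9699 + 2 * n}, clientstack {9700 + 2 * n}")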
node3_1 | von_generate_transactions -n 3
node3_1 |
node3_1 |
node3_1 | ================================================================================================
node3_1 | Generating genesis transaction file:
node3_1 | nodeArg: --nodeNum 3
node3_1 | ipAddresses: 10.0.2.15,10.0.2.15,10.0.2.15,10.0.2.15
node3_1 | genesisFilePath: /home/indy/ledger/sandbox/pool_transactions_genesis
node3_1 | ------------------------------------------------------------------------------------------------
node3_1 | generate_indy_pool_transactions --nodes 4 --clients 0 --nodeNum 3 --ips 10.0.2.15,10.0.2.15,10.0.2.15,10.0.2.15
node3_1 |
node3_1 | BLS Public key is 4N8aUNHSgjQVgkpm8nhNEfDf6txHznoYREg9kirmJrkivgL4oSEimFF6nsQ6M41QvhM2Z33nves5vfSn9n1UwNFJBYtWVnHYMATn76vLuL3zU88KyeAYcHfsih3He6UHcXDxcaecHVz6jhCYz1P2UZn2bDVruL5wXpehgBfBaLKm3Ba
node3_1 | Proof of possession for BLS key is RahHYiCvoNCtPTrVtP7nMC5eTYrsUA8WjXbdhNc8debh1agE9bGiJxWBXYNFbnJXoXhWFMvyqhqhRoq737YQemH5ik9oL7R4NTTCz2LEZhkgLJzB3QRQqJyBNyv7acbdHrAT8nQ9UkLbaVL9NBpnWXBTw4LEMePaSHEw66RzPNdAX1
node3_1 | BLS Public key is 37rAPpXVoxzKhz7d9gkUe52XuXryuLXoM6P6LbWDB7LSbG62Lsb33sfG7zqS8TK1MXwuCHj1FKNzVpsnafmqLG1vXN88rt38mNFs9TENzm4QHdBzsvCuoBnPH7rpYYDo9DZNJePaDvRvqJKByCabubJz3XXKbEeshzpz4Ma5QYpJqjk
node3_1 | Proof of possession for BLS key is Qr658mWZ2YC8JXGXwMDQTzuZCWF7NK9EwxphGmcBvCh6ybUuLxbG65nsX4JvD4SPNtkJ2w9ug1yLTj6fgmuDg41TgECXjLCij3RMsV8CwewBVgVN67wsA45DFWvqvLtu4rjNnE9JbdFTc1Z4WCPA3Xan44K1HoHAq9EVeaRYs8zoF5
node3_1 | Generating keys for provided seed b'000000000000000000000000000Node3'
node3_1 | Init local keys for client-stack
node3_1 | Public key is 6KTs7Q9Lng5uX6oWCkVifiJ6hSpkdHiRijAsXtAunnGN
node3_1 | Verification key is DKVxG2fXXTU8yT5N7hGEbXB3dfdAnYv1JczDUHpmDxya
node3_1 | Init local keys for node-stack
node3_1 | Public key is 6KTs7Q9Lng5uX6oWCkVifiJ6hSpkdHiRijAsXtAunnGN
node3_1 | Verification key is DKVxG2fXXTU8yT5N7hGEbXB3dfdAnYv1JczDUHpmDxya
node3_1 | BLS Public key is 3WFpdbg7C5cnLYZwFZevJqhubkFALBfCBBok15GdrKMUhUjGsk3jV6QKj6MZgEubF7oqCafxNdkm7eswgA4sdKTRc82tLGzZBd6vNqU8dupzup6uYUf32KTHTPQbuUM8Yk4QFXjEf2Usu2TJcNkdgpyeUSX42u5LqdDDpNSWUK5deC5
node3_1 | Proof of possession for BLS key is QwDeb2CkNSx6r8QC8vGQK3GRv7Yndn84TGNijX8YXHPiagXajyfTjoR87rXUu4G4QLk2cF8NNyqWiYMus1623dELWwx57rLCFqGh7N4ZRbGDRP4fnVcaKg1BcUxQ866Ven4gw8y4N56S5HzxXNBZtLYmhGHvDtk6PFkFwCvxYrNYjh
node3_1 | Nodes will not run locally, so writing /etc/indy/indy.env
node3_1 | This node with name Node3 will use ports 9705 and 9706 for nodestack and clientstack respectively
node3_1 | BLS Public key is 2zN3bHM1m4rLz54MJHYSwvqzPchYp8jkHswveCLAEJVcX6Mm1wHQD1SkPYMzUDTZvWvhuE6VNAkK3KxVeEmsanSmvjVkReDeBEMxeDaayjcZjFGPydyey1qxBHmTvAnBKoPydvuTAqx5f7YNNRAdeLmUi99gERUU7TD8KfAa6MpQ9bw
node3_1 | Proof of possession for BLS key is RPLagxaR5xdimFzwmzYnz4ZhWtYQEj8iR5ZU53T2gitPCyCHQneUn2Huc4oeLd2B2HzkGnjAff4hWTJT6C7qHYB1Mv2wU5iHHGFWkhnTX9WsEAbunJCV2qcaXScKj4tTfvdDKfLiVuU2av6hbsMztirRze7LvYBkRHV3tGwyCptsrP
node3_1 | BLS Public key is 4N8aUNHSgjQVgkpm8nhNEfDf6txHznoYREg9kirmJrkivgL4oSEimFF6nsQ6M41QvhM2Z33nves5vfSn9n1UwNFJBYtWVnHYMATn76vLuL3zU88KyeAYcHfsih3He6UHcXDxcaecHVz6jhCYz1P2UZn2bDVruL5wXpehgBfBaLKm3Ba
node3_1 | Proof of possession for BLS key is RahHYiCvoNCtPTrVtP7nMC5eTYrsUA8WjXbdhNc8debh1agE9bGiJxWBXYNFbnJXoXhWFMvyqhqhRoq737YQemH5ik9oL7R4NTTCz2LEZhkgLJzB3QRQqJyBNyv7acbdHrAT8nQ9UkLbaVL9NBpnWXBTw4LEMePaSHEw66RzPNdAX1
node3_1 | BLS Public key is 37rAPpXVoxzKhz7d9gkUe52XuXryuLXoM6P6LbWDB7LSbG62Lsb33sfG7zqS8TK1MXwuCHj1FKNzVpsnafmqLG1vXN88rt38mNFs9TENzm4QHdBzsvCuoBnPH7rpYYDo9DZNJePaDvRvqJKByCabubJz3XXKbEeshzpz4Ma5QYpJqjk
node3_1 | Proof of possession for BLS key is Qr658mWZ2YC8JXGXwMDQTzuZCWF7NK9EwxphGmcBvCh6ybUuLxbG65nsX4JvD4SPNtkJ2w9ug1yLTj6fgmuDg41TgECXjLCij3RMsV8CwewBVgVN67wsA45DFWvqvLtu4rjNnE9JbdFTc1Z4WCPA3Xan44K1HoHAq9EVeaRYs8zoF5
node3_1 | BLS Public key is 3WFpdbg7C5cnLYZwFZevJqhubkFALBfCBBok15GdrKMUhUjGsk3jV6QKj6MZgEubF7oqCafxNdkm7eswgA4sdKTRc82tLGzZBd6vNqU8dupzup6uYUf32KTHTPQbuUM8Yk4QFXjEf2Usu2TJcNkdgpyeUSX42u5LqdDDpNSWUK5deC5
node3_1 | Proof of possession for BLS key is QwDeb2CkNSx6r8QC8vGQK3GRv7Yndn84TGNijX8YXHPiagXajyfTjoR87rXUu4G4QLk2cF8NNyqWiYMus1623dELWwx57rLCFqGh7N4ZRbGDRP4fnVcaKg1BcUxQ866Ven4gw8y4N56S5HzxXNBZtLYmhGHvDtk6PFkFwCvxYrNYjh
node3_1 | BLS Public key is 2zN3bHM1m4rLz54MJHYSwvqzPchYp8jkHswveCLAEJVcX6Mm1wHQD1SkPYMzUDTZvWvhuE6VNAkK3KxVeEmsanSmvjVkReDeBEMxeDaayjcZjFGPydyey1qxBHmTvAnBKoPydvuTAqx5f7YNNRAdeLmUi99gERUU7TD8KfAa6MpQ9bw
node3_1 | Proof of possession for BLS key is RPLagxaR5xdimFzwmzYnz4ZhWtYQEj8iR5ZU53T2gitPCyCHQneUn2Huc4oeLd2B2HzkGnjAff4hWTJT6C7qHYB1Mv2wU5iHHGFWkhnTX9WsEAbunJCV2qcaXScKj4tTfvdDKfLiVuU2av6hbsMztirRze7LvYBkRHV3tGwyCptsrP
node3_1 |
node3_1 | ------------------------------------------------------------------------------------------------
node3_1 | Generated genesis transaction file; /home/indy/ledger/sandbox/pool_transactions_genesis
node3_1 |
node3_1 | {"reqSignature":{},"txn":{"data":{"data":{"alias":"Node1","blskey":"4N8aUNHSgjQVgkpm8nhNEfDf6txHznoYREg9kirmJrkivgL4oSEimFF6nsQ6M41QvhM2Z33nves5vfSn9n1UwNFJBYtWVnHYMATn76vLuL3zU88KyeAYcHfsih3He6UHcXDxcaecHVz6jhCYz1P2UZn2bDVruL5wXpehgBfBaLKm3Ba","blskey_pop":"RahHYiCvoNCtPTrVtP7nMC5eTYrsUA8WjXbdhNc8debh1agE9bGiJxWBXYNFbnJXoXhWFMvyqhqhRoq737YQemH5ik9oL7R4NTTCz2LEZhkgLJzB3QRQqJyBNyv7acbdHrAT8nQ9UkLbaVL9NBpnWXBTw4LEMePaSHEw66RzPNdAX1","client_ip":"10.0.2.15","client_port":9702,"node_ip":"10.0.2.15","node_port":9701,"services":["VALIDATOR"]},"dest":"Gw6pDLhcBcoQesN72qfotTgFa7cbuqZpkX3Xo6pLhPhv"},"metadata":{"from":"Th7MpTaRZVRYnPiabds81Y"},"type":"0"},"txnMetadata":{"seqNo":1,"txnId":"fea82e10e894419fe2bea7d96296a6d46f50f93f9eeda954ec461b2ed2950b62"},"ver":"1"}
node3_1 | {"reqSignature":{},"txn":{"data":{"data":{"alias":"Node2","blskey":"37rAPpXVoxzKhz7d9gkUe52XuXryuLXoM6P6LbWDB7LSbG62Lsb33sfG7zqS8TK1MXwuCHj1FKNzVpsnafmqLG1vXN88rt38mNFs9TENzm4QHdBzsvCuoBnPH7rpYYDo9DZNJePaDvRvqJKByCabubJz3XXKbEeshzpz4Ma5QYpJqjk","blskey_pop":"Qr658mWZ2YC8JXGXwMDQTzuZCWF7NK9EwxphGmcBvCh6ybUuLxbG65nsX4JvD4SPNtkJ2w9ug1yLTj6fgmuDg41TgECXjLCij3RMsV8CwewBVgVN67wsA45DFWvqvLtu4rjNnE9JbdFTc1Z4WCPA3Xan44K1HoHAq9EVeaRYs8zoF5","client_ip":"10.0.2.15","client_port":9704,"node_ip":"10.0.2.15","node_port":9703,"services":["VALIDATOR"]},"dest":"8ECVSk179mjsjKRLWiQtssMLgp6EPhWXtaYyStWPSGAb"},"metadata":{"from":"EbP4aYNeTHL6q385GuVpRV"},"type":"0"},"txnMetadata":{"seqNo":2,"txnId":"1ac8aece2a18ced660fef8694b61aac3af08ba875ce3026a160acbc3a3af35fc"},"ver":"1"}
node3_1 | {"reqSignature":{},"txn":{"data":{"data":{"alias":"Node3","blskey":"3WFpdbg7C5cnLYZwFZevJqhubkFALBfCBBok15GdrKMUhUjGsk3jV6QKj6MZgEubF7oqCafxNdkm7eswgA4sdKTRc82tLGzZBd6vNqU8dupzup6uYUf32KTHTPQbuUM8Yk4QFXjEf2Usu2TJcNkdgpyeUSX42u5LqdDDpNSWUK5deC5","blskey_pop":"QwDeb2CkNSx6r8QC8vGQK3GRv7Yndn84TGNijX8YXHPiagXajyfTjoR87rXUu4G4QLk2cF8NNyqWiYMus1623dELWwx57rLCFqGh7N4ZRbGDRP4fnVcaKg1BcUxQ866Ven4gw8y4N56S5HzxXNBZtLYmhGHvDtk6PFkFwCvxYrNYjh","client_ip":"10.0.2.15","client_port":9706,"node_ip":"10.0.2.15","node_port":9705,"services":["VALIDATOR"]},"dest":"DKVxG2fXXTU8yT5N7hGEbXB3dfdAnYv1JczDUHpmDxya"},"metadata":{"from":"4cU41vWW82ArfxJxHkzXPG"},"type":"0"},"txnMetadata":{"seqNo":3,"txnId":"7e9f355dffa78ed24668f0e0e369fd8c224076571c51e2ea8be5f26479edebe4"},"ver":"1"}
node2_1 | {"reqSignature":{},"txn":{"data":{"data":{"alias":"Node4","blskey":"2zN3bHM1m4rLz54MJHYSwvqzPchYp8jkHswveCLAEJVcX6Mm1wHQD1SkPYMzUDTZvWvhuE6VNAkK3KxVeEmsanSmvjVkReDeBEMxeDaayjcZjFGPydyey1qxBHmTvAnBKoPydvuTAqx5f7YNNRAdeLmUi99gERUU7TD8KfAa6MpQ9bw","blskey_pop":"RPLagxaR5xdimFzwmzYnz4ZhWtYQEj8iR5ZU53T2gitPCyCHQneUn2Huc4oeLd2B2HzkGnjAff4hWTJT6C7qHYB1Mv2wU5iHHGFWkhnTX9WsEAbunJCV2qcaXScKj4tTfvdDKfLiVuU2av6hbsMztirRze7LvYBkRHV3tGwyCptsrP","client_ip":"10.0.2.15","client_port":9708,"node_ip":"10.0.2.15","node_port":9707,"services":["VALIDATOR"]},"dest":"4PS3EDQ3dW1tci1Bp6543CfuuebjFrg36kLAUcskGfaA"},"metadata":{"from":"TWwCRQRZ2ZHMJFn9TzLp7W"},"type":"0"},"txnMetadata":{"seqNo":4,"txnId":"aa5e817d7cc626170eca175822029339a444eb0ee8f0bd20d3b0b76e566fb008"},"ver":"1"}
node2_1 | ================================================================================================
node2_1 |
node2_1 | start_indy_node Node2 0.0.0.0 9703 0.0.0.0 9704
node2_1 | 2020-12-15 12:35:01,903|INFO|notifier_plugin_manager.py|Found notifier plugins: []
node2_1 | 2020-12-15 12:35:03,475|INFO|notifier_plugin_manager.py|Found notifier plugins: []
node2_1 | 2020-12-15 12:35:04,487|INFO|looper.py|Starting up indy-node
node2_1 | 2020-12-15 12:35:04,816|INFO|ledger.py|Starting ledger...
node2_1 | 2020-12-15 12:35:04,875|INFO|ledger.py|Recovering tree from transaction log
node2_1 | 2020-12-15 12:35:04,993|INFO|ledger.py|Recovered tree in 0.11756180499992297 seconds
node2_1 | 2020-12-15 12:35:05,163|INFO|ledger.py|Starting ledger...
node2_1 | 2020-12-15 12:35:05,216|INFO|ledger.py|Recovering tree from transaction log
node2_1 | 2020-12-15 12:35:05,326|INFO|ledger.py|Recovered tree in 0.10947222200002216 seconds
node2_1 | 2020-12-15 12:35:05,551|INFO|ledger.py|Starting ledger...
node2_1 | 2020-12-15 12:35:05,593|INFO|ledger.py|Recovering tree from transaction log
node2_1 | 2020-12-15 12:35:05,697|INFO|ledger.py|Recovered tree in 0.10349994700004572 seconds
node2_1 | 2020-12-15 12:35:05,874|INFO|ledger.py|Starting ledger...
node2_1 | 2020-12-15 12:35:05,942|INFO|ledger.py|Recovering tree from transaction log
node2_1 | 2020-12-15 12:35:06,045|INFO|ledger.py|Recovered tree in 0.1028115510000589 seconds
node2_1 | 2020-12-15 12:35:06,410|NOTIFICATION|node_bootstrap.py|BLS: BLS Signatures will be used for Node Node2
node2_1 | 2020-12-15 12:35:06,410|INFO|pool_manager.py|Node2 sets node Node1 (Gw6pDLhcBcoQesN72qfotTgFa7cbuqZpkX3Xo6pLhPhv) order to 5
node2_1 | 2020-12-15 12:35:06,410|INFO|pool_manager.py|Node2 sets node Node2 (8ECVSk179mjsjKRLWiQtssMLgp6EPhWXtaYyStWPSGAb) order to 5
node2_1 | 2020-12-15 12:35:06,410|INFO|pool_manager.py|Node2 sets node Node3 (DKVxG2fXXTU8yT5N7hGEbXB3dfdAnYv1JczDUHpmDxya) order to 5
node2_1 | 2020-12-15 12:35:06,410|INFO|pool_manager.py|Node2 sets node Node4 (4PS3EDQ3dW1tci1Bp6543CfuuebjFrg36kLAUcskGfaA) order to 5
node2_1 | 2020-12-15 12:35:06,501|INFO|notifier_plugin_manager.py|Found notifier plugins: []
node2_1 | 2020-12-15 12:35:06,525|INFO|notifier_plugin_manager.py|Found notifier plugins: []
node2_1 | 2020-12-15 12:35:06,581|INFO|stacks.py|Node2C: clients connections tracking is enabled.
node2_1 | 2020-12-15 12:35:06,599|INFO|stacks.py|Node2C: client stack restart is enabled.
node2_1 | 2020-12-15 12:35:07,334|NOTIFICATION|plugin_loader.py|skipping plugin plugin_firebase_stats_consumer[class: typing.Dict] because it does not have a 'pluginType' attribute
node2_1 | 2020-12-15 12:35:07,343|NOTIFICATION|plugin_loader.py|skipping plugin plugin_firebase_stats_consumer[class: ] because it does not have a 'pluginType' attribute
node2_1 | 2020-12-15 12:35:07,347|NOTIFICATION|plugin_loader.py|skipping plugin plugin_firebase_stats_consumer[class: ] because it does not have a 'pluginType' attribute
node2_1 | 2020-12-15 12:35:07,349|NOTIFICATION|plugin_loader.py|skipping plugin plugin_firebase_stats_consumer[class: ] because it does not have a 'pluginType' attribute
node2_1 | 2020-12-15 12:35:07,359|NOTIFICATION|plugin_loader.py|skipping plugin plugin_firebase_stats_consumer[class: ] because it does not have a 'pluginType' attribute
node2_1 | 2020-12-15 12:35:07,359|INFO|plugin_loader.py|plugin FirebaseStatsConsumer successfully loaded from module plugin_firebase_stats_consumer
node2_1 | 2020-12-15 12:35:07,360|INFO|node.py|Node2 updated its pool parameters: f 1, totalNodes 4, allNodeNames {'Node4', 'Node2', 'Node3', 'Node1'}, requiredNumberOfInstances 2, minimumNodes 3, quorums {'n': 4, 'f': 1, 'weak': Quorum(2), 'strong': Quorum(3), 'propagate': Quorum(2), 'prepare': Quorum(2), 'commit': Quorum(3), 'reply': Quorum(2), 'view_change': Quorum(3), 'election': Quorum(3), 'view_change_ack': Quorum(2), 'view_change_done': Quorum(3), 'same_consistency_proof': Quorum(2), 'consistency_proof': Quorum(2), 'ledger_status': Quorum(2), 'ledger_status_last_3PC': Quorum(2), 'checkpoint': Quorum(2), 'timestamp': Quorum(2), 'bls_signatures': Quorum(3), 'observer_data': Quorum(2), 'backup_instance_faulty': Quorum(2)}
node2_1 | 2020-12-15 12:35:07,361|INFO|consensus_shared_data.py|Node2:0 updated validators list to ['Node1', 'Node2', 'Node3', 'Node4']
node2_1 | 2020-12-15 12:35:07,435|INFO|primary_connection_monitor_service.py|Node2:0 scheduling primary connection check in 60 sec
node2_1 | 2020-12-15 12:35:07,436|NOTIFICATION|replicas.py|Node2 added replica Node2:0 to instance 0 (master)
node2_1 | 2020-12-15 12:35:07,436|INFO|replicas.py|reset monitor due to replica addition
node2_1 | 2020-12-15 12:35:07,438|INFO|consensus_shared_data.py|Node2:1 updated validators list to ['Node1', 'Node2', 'Node3', 'Node4']
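The "updated its pool parameters" lines record the BFT arithmetic for the pool: with n = 4 validators the pool tolerates f = 1 faulty node, a weak quorum is f + 1 = 2, and a strong quorum is n - f = 3, which matches the Quorum(2)/Quorum(3) values in the dump. A sketch of that arithmetic, assuming the standard n >= 3f + 1 relation used by Plenum-style BFT (illustrative; not the indy-plenum implementation):

def pool_quorums(n: int) -> dict:
    f = (n - 1) // 3          # most faults tolerable while keeping n >= 3f + 1
    return {"n": n, "f": f, "weak": f + 1, "strong": n - f}

print(pool_quorums(4))        # {'n': 4, 'f': 1, 'weak': 2, 'strong': 3}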
node2_1 | 2020-12-15 12:35:07,450|NOTIFICATION|replicas.py|Node2 added replica Node2:1 to instance 1 (backup)
node2_1 | 2020-12-15 12:35:07,474|INFO|replicas.py|reset monitor due to replica addition
node2_1 | 2020-12-15 12:35:07,491|INFO|consensus_shared_data.py|Node2:0 updated validators list to {'Node4', 'Node2', 'Node3', 'Node1'}
node2_1 | 2020-12-15 12:35:07,492|INFO|consensus_shared_data.py|Node2:1 updated validators list to {'Node4', 'Node2', 'Node3', 'Node1'}
node2_1 | 2020-12-15 12:35:07,492|INFO|replica.py|Node2:0 setting primaryName for view no 0 to: Node1:0
node2_1 | 2020-12-15 12:35:07,492|NOTIFICATION|primary_connection_monitor_service.py|Node2:0 lost connection to primary of master
node2_1 | 2020-12-15 12:35:07,493|INFO|primary_connection_monitor_service.py|Node2:0 scheduling primary connection check in 60 sec
node2_1 | 2020-12-15 12:35:07,493|NOTIFICATION|node.py|PRIMARY SELECTION: selected primary Node1:0 for instance 0 (view 0)
node2_1 | 2020-12-15 12:35:07,494|INFO|replica.py|Node2:1 setting primaryName for view no 0 to: Node2:1
node2_1 | 2020-12-15 12:35:07,494|NOTIFICATION|node.py|PRIMARY SELECTION: selected primary Node2:1 for instance 1 (view 0)
node2_1 | 2020-12-15 12:35:07,508|INFO|node.py|total plugins loaded in node: 0
node2_1 | 2020-12-15 12:35:07,513|INFO|ledgers_bootstrap.py| found state to be empty, recreating from ledger 0
node2_1 | 2020-12-15 12:35:07,625|INFO|ledgers_bootstrap.py| initialized state for ledger 0: state root 9mYGVPSfvA24aJo5vnrC1JDBNf3dVSYEAJUVwaapkjeL
node2_1 | 2020-12-15 12:35:07,647|INFO|ledgers_bootstrap.py| found state to be empty, recreating from ledger 2
node2_1 | 2020-12-15 12:35:07,649|INFO|ledgers_bootstrap.py| initialized state for ledger 2: state root DfNLmH4DAHTKv63YPFJzuRdeEtVwF5RtVnvKYHd8iLEA
node2_1 | 2020-12-15 12:35:07,649|INFO|ledgers_bootstrap.py| found state to be empty, recreating from ledger 1
node2_1 | 2020-12-15 12:35:07,694|INFO|ledgers_bootstrap.py| initialized state for ledger 1: state root GWEizEWQdGH5QFdxRpoQGvPR42zSUEJoFkQKGBHiNDbM
node2_1 | 2020-12-15 12:35:07,702|INFO|motor.py|Node2 changing status from stopped to starting
node2_1 | 2020-12-15 12:35:07,720|INFO|zstack.py|Node2 will bind its listener at 0.0.0.0:9703
node1_1 | 2020-12-15 12:35:12,669|INFO|node.py|Node1 first time running...
node1_1 | 2020-12-15 12:35:12,673|INFO|node.py|Node1 processed 0 Ordered batches for instance 0 before starting catch up
node1_1 | 2020-12-15 12:35:12,676|INFO|node.py|Node1 processed 0 Ordered batches for instance 1 before starting catch up
node1_1 | 2020-12-15 12:35:12,676|INFO|ordering_service.py|Node1:0 reverted 0 batches before starting catch up
node1_1 | 2020-12-15 12:35:12,676|INFO|node_leecher_service.py|Node1:NodeLeecherService starting catchup (is_initial=True)
node1_1 | 2020-12-15 12:35:12,676|INFO|node_leecher_service.py|Node1:NodeLeecherService transitioning from Idle to PreSyncingPool
node1_1 | 2020-12-15 12:35:12,677|INFO|cons_proof_service.py|Node1:ConsProofService:0 starts
node1_1 | 2020-12-15 12:35:12,679|INFO|zstack.py|Node1 not yet caught-up, stashing message from unknown remote
node1_1 | 2020-12-15 12:35:12,679|INFO|zstack.py|Node1 not yet caught-up, stashing message from unknown remote
node1_1 | 2020-12-15 12:35:12,680|INFO|zstack.py|Node1 not yet caught-up, stashing message from unknown remote
node1_1 | 2020-12-15 12:35:12,680|INFO|zstack.py|Node1 not yet caught-up, stashing message from unknown remote
node1_1 | 2020-12-15 12:35:12,680|INFO|kit_zstack.py|CONNECTION: Node1 found the following missing connections: Node2, Node3, Node4
node1_1 | 2020-12-15 12:35:12,700|INFO|zstack.py|CONNECTION: Node1 looking for Node2 at 10.0.2.15:9703
node1_1 | 2020-12-15 12:35:12,704|INFO|zstack.py|CONNECTION: Node1 looking for Node3 at 10.0.2.15:9705
node1_1 | 2020-12-15 12:35:12,706|INFO|zstack.py|CONNECTION: Node1 looking for Node4 at 10.0.2.15:9707
node1_1 | 2020-12-15 12:35:12,783|NOTIFICATION|keep_in_touch.py|Node1's connections changed from set() to {'Node3'}
node1_1 | 2020-12-15 12:35:12,794|NOTIFICATION|keep_in_touch.py|CONNECTION: Node1 now connected to Node3
node1_1 | 2020-12-15 12:35:12,795|INFO|node.py|Node1 joined nodes {'Node3'} but status is 2
node1_1 | 2020-12-15 12:35:12,863|INFO|seeder_service.py|Node1 received ledger status: LEDGER_STATUS{'ledgerId': 0, 'txnSeqNo': 4, 'viewNo': None, 'ppSeqNo': None, 'merkleRoot': 'uxUiVfSkoaFW1wi2fwd3XH1rRx7bWnA3KsM4GhFsu1Z', 'protocolVersion': 2} from Node4
node1_1 | 2020-12-15 12:35:12,871|INFO|cons_proof_service.py|Node1:ConsProofService:0 comparing its ledger 0 of size 4 with 4
node1_1 | 2020-12-15 12:35:12,872|INFO|cons_proof_service.py|Node1:ConsProofService:0 comparing its ledger 0 of size 4 with 4
node1_1 | 2020-12-15 12:35:12,875|INFO|seeder_service.py|Node1 received ledger status: LEDGER_STATUS{'ledgerId': 0, 'txnSeqNo': 4, 'viewNo': None, 'ppSeqNo': None, 'merkleRoot': 'uxUiVfSkoaFW1wi2fwd3XH1rRx7bWnA3KsM4GhFsu1Z', 'protocolVersion': 2} from Node2
node1_1 | 2020-12-15 12:35:12,876|INFO|cons_proof_service.py|Node1:ConsProofService:0 comparing its ledger 0 of size 4 with 4
node1_1 | 2020-12-15 12:35:12,877|INFO|cons_proof_service.py|Node1:ConsProofService:0 comparing its ledger 0 of size 4 with 4
node1_1 | 2020-12-15 12:35:12,879|INFO|cons_proof_service.py|Node1:ConsProofService:0 finished with consistency proof None
node1_1 | 2020-12-15 12:35:12,885|INFO|catchup_rep_service.py|Node1:CatchupRepService:0 started catching up with LedgerCatchupStart(ledger_id=0, catchup_till=None, nodes_ledger_sizes={})
node1_1 | 2020-12-15 12:35:12,886|INFO|catchup_rep_service.py|CATCH-UP: Node1:CatchupRepService:0 completed catching up ledger 0, caught up 0 in total
node1_1 | 2020-12-15 12:35:12,886|INFO|node_leecher_service.py|Node1:NodeLeecherService transitioning from PreSyncingPool to SyncingAudit
node1_1 | 2020-12-15 12:35:12,886|INFO|cons_proof_service.py|Node1:ConsProofService:3 starts
node1_1 | 2020-12-15 12:35:12,887|INFO|cons_proof_service.py|Node1:ConsProofService:3 asking for ledger status of ledger 3
node1_1 | 2020-12-15 12:35:12,903|NOTIFICATION|keep_in_touch.py|Node1's connections changed from {'Node3'} to {'Node2', 'Node3', 'Node4'}
node1_1 | 2020-12-15 12:35:12,903|NOTIFICATION|keep_in_touch.py|CONNECTION: Node1 now connected to Node2
node1_1 | 2020-12-15 12:35:12,903|NOTIFICATION|keep_in_touch.py|CONNECTION: Node1 now connected to Node4
node1_1 | 2020-12-15 12:35:12,904|INFO|motor.py|Node1 changing status from starting to started
node1_1 | 2020-12-15 12:35:12,917|INFO|seeder_service.py|Node1 received ledger status: LEDGER_STATUS{'ledgerId': 0, 'txnSeqNo': 4, 'viewNo': None, 'ppSeqNo': None, 'merkleRoot': 'uxUiVfSkoaFW1wi2fwd3XH1rRx7bWnA3KsM4GhFsu1Z', 'protocolVersion': 2} from Node3
node1_1 | 2020-12-15 12:35:12,918|INFO|cons_proof_service.py|Node1:ConsProofService:0 ignoring LEDGER_STATUS{'ledgerId': 0, 'txnSeqNo': 4, 'viewNo': None, 'ppSeqNo': None, 'merkleRoot': 'uxUiVfSkoaFW1wi2fwd3XH1rRx7bWnA3KsM4GhFsu1Z', 'protocolVersion': 2} since it is not gathering ledger statuses
node1_1 | 2020-12-15 12:35:12,918|INFO|seeder_service.py|Node1 received ledger status: LEDGER_STATUS{'ledgerId': 3, 'txnSeqNo': 0, 'viewNo': None, 'ppSeqNo': None, 'merkleRoot': 'GKot5hBsd81kMupNCXHaqbhv3huEbxAFMLnpcX2hniwn', 'protocolVersion': 2} from Node2
node1_1 | 2020-12-15 12:35:12,918|INFO|cons_proof_service.py|Node1:ConsProofService:3 comparing its ledger 3 of size 0 with 0
node1_1 | 2020-12-15 12:35:12,918|INFO|cons_proof_service.py|Node1:ConsProofService:3 comparing its ledger 3 of size 0 with 0
node1_1 | 2020-12-15 12:35:12,919|INFO|seeder_service.py|Node1 received ledger status: LEDGER_STATUS{'ledgerId': 3, 'txnSeqNo': 0, 'viewNo': None, 'ppSeqNo': None, 'merkleRoot': 'GKot5hBsd81kMupNCXHaqbhv3huEbxAFMLnpcX2hniwn', 'protocolVersion': 2} from Node4
node1_1 | 2020-12-15 12:35:12,919|INFO|cons_proof_service.py|Node1:ConsProofService:3 comparing its ledger 3 of size 0 with 0
node1_1 | 2020-12-15 12:35:12,919|INFO|cons_proof_service.py|Node1:ConsProofService:3 comparing its ledger 3 of size 0 with 0
node1_1 | 2020-12-15 12:35:12,919|INFO|cons_proof_service.py|Node1:ConsProofService:3 finished with consistency proof None
node1_1 | 2020-12-15 12:35:12,920|INFO|catchup_rep_service.py|Node1:CatchupRepService:3 started catching up with LedgerCatchupStart(ledger_id=3, catchup_till=None, nodes_ledger_sizes={})
node1_1 | 2020-12-15 12:35:12,920|INFO|catchup_rep_service.py|CATCH-UP: Node1:CatchupRepService:3 completed catching up ledger 3, caught up 0 in total
node1_1 | 2020-12-15 12:35:12,920|INFO|node_leecher_service.py|Node1:NodeLeecherService transitioning from SyncingAudit to SyncingPool
node1_1 | 2020-12-15 12:35:12,920|INFO|cons_proof_service.py|Node1:ConsProofService:0 starts
node1_1 | 2020-12-15 12:35:12,920|INFO|cons_proof_service.py|Node1:ConsProofService:0 asking for ledger status of ledger 0
node1_1 | 2020-12-15 12:35:12,924|INFO|seeder_service.py|Node1 received ledger status: LEDGER_STATUS{'ledgerId': 3, 'txnSeqNo': 0, 'viewNo': None, 'ppSeqNo': None, 'merkleRoot': 'GKot5hBsd81kMupNCXHaqbhv3huEbxAFMLnpcX2hniwn', 'protocolVersion': 2} from Node3
node1_1 | 2020-12-15 12:35:12,925|INFO|cons_proof_service.py|Node1:ConsProofService:3 ignoring LEDGER_STATUS{'ledgerId': 3, 'txnSeqNo': 0, 'viewNo': None, 'ppSeqNo': None, 'merkleRoot': 'GKot5hBsd81kMupNCXHaqbhv3huEbxAFMLnpcX2hniwn', 'protocolVersion': 2} since it is not gathering ledger statuses
node1_1 | 2020-12-15 12:35:12,951|INFO|seeder_service.py|Node1 received ledger status: LEDGER_STATUS{'ledgerId': 0, 'txnSeqNo': 4, 'viewNo': None, 'ppSeqNo': None, 'merkleRoot': 'uxUiVfSkoaFW1wi2fwd3XH1rRx7bWnA3KsM4GhFsu1Z', 'protocolVersion': 2} from Node3
node1_1 | 2020-12-15 12:35:12,954|INFO|cons_proof_service.py|Node1:ConsProofService:0 comparing its ledger 0 of size 4 with 4
node1_1 | 2020-12-15 12:35:12,955|INFO|cons_proof_service.py|Node1:ConsProofService:0 comparing its ledger 0 of size 4 with 4
node1_1 | 2020-12-15 12:35:12,956|INFO|seeder_service.py|Node1 received ledger status: LEDGER_STATUS{'ledgerId': 0, 'txnSeqNo': 4, 'viewNo': None, 'ppSeqNo': None, 'merkleRoot': 'uxUiVfSkoaFW1wi2fwd3XH1rRx7bWnA3KsM4GhFsu1Z', 'protocolVersion': 2} from Node2
node4_1 | Ledger does not exist - Creating...
node4_1 | von_generate_transactions -n 4
node4_1 |
node4_1 |
node4_1 | ================================================================================================
node4_1 | Generating genesis transaction file:
node4_1 | nodeArg: --nodeNum 4
node4_1 | ipAddresses: 10.0.2.15,10.0.2.15,10.0.2.15,10.0.2.15
node4_1 | genesisFilePath: /home/indy/ledger/sandbox/pool_transactions_genesis
node4_1 | ------------------------------------------------------------------------------------------------
node4_1 | generate_indy_pool_transactions --nodes 4 --clients 0 --nodeNum 4 --ips 10.0.2.15,10.0.2.15,10.0.2.15,10.0.2.15
node4_1 |
node4_1 | BLS Public key is 4N8aUNHSgjQVgkpm8nhNEfDf6txHznoYREg9kirmJrkivgL4oSEimFF6nsQ6M41QvhM2Z33nves5vfSn9n1UwNFJBYtWVnHYMATn76vLuL3zU88KyeAYcHfsih3He6UHcXDxcaecHVz6jhCYz1P2UZn2bDVruL5wXpehgBfBaLKm3Ba
node4_1 | Proof of possession for BLS key is RahHYiCvoNCtPTrVtP7nMC5eTYrsUA8WjXbdhNc8debh1agE9bGiJxWBXYNFbnJXoXhWFMvyqhqhRoq737YQemH5ik9oL7R4NTTCz2LEZhkgLJzB3QRQqJyBNyv7acbdHrAT8nQ9UkLbaVL9NBpnWXBTw4LEMePaSHEw66RzPNdAX1
node4_1 | BLS Public key is 37rAPpXVoxzKhz7d9gkUe52XuXryuLXoM6P6LbWDB7LSbG62Lsb33sfG7zqS8TK1MXwuCHj1FKNzVpsnafmqLG1vXN88rt38mNFs9TENzm4QHdBzsvCuoBnPH7rpYYDo9DZNJePaDvRvqJKByCabubJz3XXKbEeshzpz4Ma5QYpJqjk
node4_1 | Proof of possession for BLS key is Qr658mWZ2YC8JXGXwMDQTzuZCWF7NK9EwxphGmcBvCh6ybUuLxbG65nsX4JvD4SPNtkJ2w9ug1yLTj6fgmuDg41TgECXjLCij3RMsV8CwewBVgVN67wsA45DFWvqvLtu4rjNnE9JbdFTc1Z4WCPA3Xan44K1HoHAq9EVeaRYs8zoF5
node4_1 | BLS Public key is 3WFpdbg7C5cnLYZwFZevJqhubkFALBfCBBok15GdrKMUhUjGsk3jV6QKj6MZgEubF7oqCafxNdkm7eswgA4sdKTRc82tLGzZBd6vNqU8dupzup6uYUf32KTHTPQbuUM8Yk4QFXjEf2Usu2TJcNkdgpyeUSX42u5LqdDDpNSWUK5deC5
node4_1 | Proof of possession for BLS key is QwDeb2CkNSx6r8QC8vGQK3GRv7Yndn84TGNijX8YXHPiagXajyfTjoR87rXUu4G4QLk2cF8NNyqWiYMus1623dELWwx57rLCFqGh7N4ZRbGDRP4fnVcaKg1BcUxQ866Ven4gw8y4N56S5HzxXNBZtLYmhGHvDtk6PFkFwCvxYrNYjh
node4_1 | Generating keys for provided seed b'000000000000000000000000000Node4'
node4_1 | Init local keys for client-stack
node4_1 | Public key is ECUd5UfoYa2yUBkmxNkMbkfGKcZ8Voh5Mi3JzBwWEDpm
node4_1 | Verification key is 4PS3EDQ3dW1tci1Bp6543CfuuebjFrg36kLAUcskGfaA
node4_1 | Init local keys for node-stack
node4_1 | Public key is ECUd5UfoYa2yUBkmxNkMbkfGKcZ8Voh5Mi3JzBwWEDpm
node4_1 | Verification key is 4PS3EDQ3dW1tci1Bp6543CfuuebjFrg36kLAUcskGfaA
node4_1 | BLS Public key is 2zN3bHM1m4rLz54MJHYSwvqzPchYp8jkHswveCLAEJVcX6Mm1wHQD1SkPYMzUDTZvWvhuE6VNAkK3KxVeEmsanSmvjVkReDeBEMxeDaayjcZjFGPydyey1qxBHmTvAnBKoPydvuTAqx5f7YNNRAdeLmUi99gERUU7TD8KfAa6MpQ9bw
node4_1 | Proof of possession for BLS key is RPLagxaR5xdimFzwmzYnz4ZhWtYQEj8iR5ZU53T2gitPCyCHQneUn2Huc4oeLd2B2HzkGnjAff4hWTJT6C7qHYB1Mv2wU5iHHGFWkhnTX9WsEAbunJCV2qcaXScKj4tTfvdDKfLiVuU2av6hbsMztirRze7LvYBkRHV3tGwyCptsrP
node4_1 | Nodes will not run locally, so writing /etc/indy/indy.env
node4_1 | This node with name Node4 will use ports 9707 and 9708 for nodestack and clientstack respectively
node4_1 | BLS Public key is 4N8aUNHSgjQVgkpm8nhNEfDf6txHznoYREg9kirmJrkivgL4oSEimFF6nsQ6M41QvhM2Z33nves5vfSn9n1UwNFJBYtWVnHYMATn76vLuL3zU88KyeAYcHfsih3He6UHcXDxcaecHVz6jhCYz1P2UZn2bDVruL5wXpehgBfBaLKm3Ba
node4_1 | Proof of possession for BLS key is RahHYiCvoNCtPTrVtP7nMC5eTYrsUA8WjXbdhNc8debh1agE9bGiJxWBXYNFbnJXoXhWFMvyqhqhRoq737YQemH5ik9oL7R4NTTCz2LEZhkgLJzB3QRQqJyBNyv7acbdHrAT8nQ9UkLbaVL9NBpnWXBTw4LEMePaSHEw66RzPNdAX1
node4_1 | BLS Public key is 37rAPpXVoxzKhz7d9gkUe52XuXryuLXoM6P6LbWDB7LSbG62Lsb33sfG7zqS8TK1MXwuCHj1FKNzVpsnafmqLG1vXN88rt38mNFs9TENzm4QHdBzsvCuoBnPH7rpYYDo9DZNJePaDvRvqJKByCabubJz3XXKbEeshzpz4Ma5QYpJqjk
node4_1 | Proof of possession for BLS key is Qr658mWZ2YC8JXGXwMDQTzuZCWF7NK9EwxphGmcBvCh6ybUuLxbG65nsX4JvD4SPNtkJ2w9ug1yLTj6fgmuDg41TgECXjLCij3RMsV8CwewBVgVN67wsA45DFWvqvLtu4rjNnE9JbdFTc1Z4WCPA3Xan44K1HoHAq9EVeaRYs8zoF5
node4_1 | BLS Public key is 3WFpdbg7C5cnLYZwFZevJqhubkFALBfCBBok15GdrKMUhUjGsk3jV6QKj6MZgEubF7oqCafxNdkm7eswgA4sdKTRc82tLGzZBd6vNqU8dupzup6uYUf32KTHTPQbuUM8Yk4QFXjEf2Usu2TJcNkdgpyeUSX42u5LqdDDpNSWUK5deC5
node4_1 | Proof of possession for BLS key is QwDeb2CkNSx6r8QC8vGQK3GRv7Yndn84TGNijX8YXHPiagXajyfTjoR87rXUu4G4QLk2cF8NNyqWiYMus1623dELWwx57rLCFqGh7N4ZRbGDRP4fnVcaKg1BcUxQ866Ven4gw8y4N56S5HzxXNBZtLYmhGHvDtk6PFkFwCvxYrNYjh
node4_1 | BLS Public key is 2zN3bHM1m4rLz54MJHYSwvqzPchYp8jkHswveCLAEJVcX6Mm1wHQD1SkPYMzUDTZvWvhuE6VNAkK3KxVeEmsanSmvjVkReDeBEMxeDaayjcZjFGPydyey1qxBHmTvAnBKoPydvuTAqx5f7YNNRAdeLmUi99gERUU7TD8KfAa6MpQ9bw
node4_1 | Proof of possession for BLS key is RPLagxaR5xdimFzwmzYnz4ZhWtYQEj8iR5ZU53T2gitPCyCHQneUn2Huc4oeLd2B2HzkGnjAff4hWTJT6C7qHYB1Mv2wU5iHHGFWkhnTX9WsEAbunJCV2qcaXScKj4tTfvdDKfLiVuU2av6hbsMztirRze7LvYBkRHV3tGwyCptsrP
node4_1 |
node4_1 | ------------------------------------------------------------------------------------------------
node4_1 | Generated genesis transaction file; /home/indy/ledger/sandbox/pool_transactions_genesis
node4_1 |
node4_1 | {"reqSignature":{},"txn":{"data":{"data":{"alias":"Node1","blskey":"4N8aUNHSgjQVgkpm8nhNEfDf6txHznoYREg9kirmJrkivgL4oSEimFF6nsQ6M41QvhM2Z33nves5vfSn9n1UwNFJBYtWVnHYMATn76vLuL3zU88KyeAYcHfsih3He6UHcXDxcaecHVz6jhCYz1P2UZn2bDVruL5wXpehgBfBaLKm3Ba","blskey_pop":"RahHYiCvoNCtPTrVtP7nMC5eTYrsUA8WjXbdhNc8debh1agE9bGiJxWBXYNFbnJXoXhWFMvyqhqhRoq737YQemH5ik9oL7R4NTTCz2LEZhkgLJzB3QRQqJyBNyv7acbdHrAT8nQ9UkLbaVL9NBpnWXBTw4LEMePaSHEw66RzPNdAX1","client_ip":"10.0.2.15","client_port":9702,"node_ip":"10.0.2.15","node_port":9701,"services":["VALIDATOR"]},"dest":"Gw6pDLhcBcoQesN72qfotTgFa7cbuqZpkX3Xo6pLhPhv"},"metadata":{"from":"Th7MpTaRZVRYnPiabds81Y"},"type":"0"},"txnMetadata":{"seqNo":1,"txnId":"fea82e10e894419fe2bea7d96296a6d46f50f93f9eeda954ec461b2ed2950b62"},"ver":"1"}
node4_1 | {"reqSignature":{},"txn":{"data":{"data":{"alias":"Node2","blskey":"37rAPpXVoxzKhz7d9gkUe52XuXryuLXoM6P6LbWDB7LSbG62Lsb33sfG7zqS8TK1MXwuCHj1FKNzVpsnafmqLG1vXN88rt38mNFs9TENzm4QHdBzsvCuoBnPH7rpYYDo9DZNJePaDvRvqJKByCabubJz3XXKbEeshzpz4Ma5QYpJqjk","blskey_pop":"Qr658mWZ2YC8JXGXwMDQTzuZCWF7NK9EwxphGmcBvCh6ybUuLxbG65nsX4JvD4SPNtkJ2w9ug1yLTj6fgmuDg41TgECXjLCij3RMsV8CwewBVgVN67wsA45DFWvqvLtu4rjNnE9JbdFTc1Z4WCPA3Xan44K1HoHAq9EVeaRYs8zoF5","client_ip":"10.0.2.15","client_port":9704,"node_ip":"10.0.2.15","node_port":9703,"services":["VALIDATOR"]},"dest":"8ECVSk179mjsjKRLWiQtssMLgp6EPhWXtaYyStWPSGAb"},"metadata":{"from":"EbP4aYNeTHL6q385GuVpRV"},"type":"0"},"txnMetadata":{"seqNo":2,"txnId":"1ac8aece2a18ced660fef8694b61aac3af08ba875ce3026a160acbc3a3af35fc"},"ver":"1"}
node4_1 | {"reqSignature":{},"txn":{"data":{"data":{"alias":"Node3","blskey":"3WFpdbg7C5cnLYZwFZevJqhubkFALBfCBBok15GdrKMUhUjGsk3jV6QKj6MZgEubF7oqCafxNdkm7eswgA4sdKTRc82tLGzZBd6vNqU8dupzup6uYUf32KTHTPQbuUM8Yk4QFXjEf2Usu2TJcNkdgpyeUSX42u5LqdDDpNSWUK5deC5","blskey_pop":"QwDeb2CkNSx6r8QC8vGQK3GRv7Yndn84TGNijX8YXHPiagXajyfTjoR87rXUu4G4QLk2cF8NNyqWiYMus1623dELWwx57rLCFqGh7N4ZRbGDRP4fnVcaKg1BcUxQ866Ven4gw8y4N56S5HzxXNBZtLYmhGHvDtk6PFkFwCvxYrNYjh","client_ip":"10.0.2.15","client_port":9706,"node_ip":"10.0.2.15","node_port":9705,"services":["VALIDATOR"]},"dest":"DKVxG2fXXTU8yT5N7hGEbXB3dfdAnYv1JczDUHpmDxya"},"metadata":{"from":"4cU41vWW82ArfxJxHkzXPG"},"type":"0"},"txnMetadata":{"seqNo":3,"txnId":"7e9f355dffa78ed24668f0e0e369fd8c224076571c51e2ea8be5f26479edebe4"},"ver":"1"}
node3_1 | {"reqSignature":{},"txn":{"data":{"data":{"alias":"Node4","blskey":"2zN3bHM1m4rLz54MJHYSwvqzPchYp8jkHswveCLAEJVcX6Mm1wHQD1SkPYMzUDTZvWvhuE6VNAkK3KxVeEmsanSmvjVkReDeBEMxeDaayjcZjFGPydyey1qxBHmTvAnBKoPydvuTAqx5f7YNNRAdeLmUi99gERUU7TD8KfAa6MpQ9bw","blskey_pop":"RPLagxaR5xdimFzwmzYnz4ZhWtYQEj8iR5ZU53T2gitPCyCHQneUn2Huc4oeLd2B2HzkGnjAff4hWTJT6C7qHYB1Mv2wU5iHHGFWkhnTX9WsEAbunJCV2qcaXScKj4tTfvdDKfLiVuU2av6hbsMztirRze7LvYBkRHV3tGwyCptsrP","client_ip":"10.0.2.15","client_port":9708,"node_ip":"10.0.2.15","node_port":9707,"services":["VALIDATOR"]},"dest":"4PS3EDQ3dW1tci1Bp6543CfuuebjFrg36kLAUcskGfaA"},"metadata":{"from":"TWwCRQRZ2ZHMJFn9TzLp7W"},"type":"0"},"txnMetadata":{"seqNo":4,"txnId":"aa5e817d7cc626170eca175822029339a444eb0ee8f0bd20d3b0b76e566fb008"},"ver":"1"}
node3_1 | ================================================================================================
node3_1 |
node3_1 | start_indy_node Node3 0.0.0.0 9705 0.0.0.0 9706
node3_1 | 2020-12-15 12:35:06,851|INFO|notifier_plugin_manager.py|Found notifier plugins: []
node3_1 | 2020-12-15 12:35:08,447|INFO|notifier_plugin_manager.py|Found notifier plugins: []
node3_1 | 2020-12-15 12:35:09,579|INFO|looper.py|Starting up indy-node
node3_1 | 2020-12-15 12:35:09,990|INFO|ledger.py|Starting ledger...
node3_1 | 2020-12-15 12:35:10,061|INFO|ledger.py|Recovering tree from transaction log
node3_1 | 2020-12-15 12:35:10,212|INFO|ledger.py|Recovered tree in 0.15106985500005976 seconds
node3_1 | 2020-12-15 12:35:10,416|INFO|ledger.py|Starting ledger...
node3_1 | 2020-12-15 12:35:10,481|INFO|ledger.py|Recovering tree from transaction log
node3_1 | 2020-12-15 12:35:10,599|INFO|ledger.py|Recovered tree in 0.11806580699999358 seconds
node3_1 | 2020-12-15 12:35:10,829|INFO|ledger.py|Starting ledger...
node3_1 | 2020-12-15 12:35:10,874|INFO|ledger.py|Recovering tree from transaction log
node3_1 | 2020-12-15 12:35:11,017|INFO|ledger.py|Recovered tree in 0.14236314900017533 seconds
node3_1 | 2020-12-15 12:35:11,220|INFO|ledger.py|Starting ledger...
node3_1 | 2020-12-15 12:35:11,293|INFO|ledger.py|Recovering tree from transaction log
node3_1 | 2020-12-15 12:35:11,397|INFO|ledger.py|Recovered tree in 0.10485242300001119 seconds
node3_1 | 2020-12-15 12:35:11,770|NOTIFICATION|node_bootstrap.py|BLS: BLS Signatures will be used for Node Node3
node3_1 | 2020-12-15 12:35:11,772|INFO|pool_manager.py|Node3 sets node Node1 (Gw6pDLhcBcoQesN72qfotTgFa7cbuqZpkX3Xo6pLhPhv) order to 5
node3_1 | 2020-12-15 12:35:11,772|INFO|pool_manager.py|Node3 sets node Node2 (8ECVSk179mjsjKRLWiQtssMLgp6EPhWXtaYyStWPSGAb) order to 5
node3_1 | 2020-12-15 12:35:11,773|INFO|pool_manager.py|Node3 sets node Node3 (DKVxG2fXXTU8yT5N7hGEbXB3dfdAnYv1JczDUHpmDxya) order to 5
node3_1 | 2020-12-15 12:35:11,773|INFO|pool_manager.py|Node3 sets node Node4 (4PS3EDQ3dW1tci1Bp6543CfuuebjFrg36kLAUcskGfaA) order to 5
node3_1 | 2020-12-15 12:35:11,787|INFO|notifier_plugin_manager.py|Found notifier plugins: []
node3_1 | 2020-12-15 12:35:11,789|INFO|notifier_plugin_manager.py|Found notifier plugins: []
node3_1 | 2020-12-15 12:35:11,796|INFO|stacks.py|Node3C: clients connections tracking is enabled.
node3_1 | 2020-12-15 12:35:11,799|INFO|stacks.py|Node3C: client stack restart is enabled.
node3_1 | 2020-12-15 12:35:11,905|NOTIFICATION|plugin_loader.py|skipping plugin plugin_firebase_stats_consumer[class: typing.Dict] because it does not have a 'pluginType' attribute
node3_1 | 2020-12-15 12:35:11,906|NOTIFICATION|plugin_loader.py|skipping plugin plugin_firebase_stats_consumer[class: ] because it does not have a 'pluginType' attribute
node3_1 | 2020-12-15 12:35:11,906|NOTIFICATION|plugin_loader.py|skipping plugin plugin_firebase_stats_consumer[class: ] because it does not have a 'pluginType' attribute
node3_1 | 2020-12-15 12:35:11,906|NOTIFICATION|plugin_loader.py|skipping plugin plugin_firebase_stats_consumer[class: ] because it does not have a 'pluginType' attribute
node3_1 | 2020-12-15 12:35:11,907|NOTIFICATION|plugin_loader.py|skipping plugin plugin_firebase_stats_consumer[class: ] because it does not have a 'pluginType' attribute
node3_1 | 2020-12-15 12:35:11,911|INFO|plugin_loader.py|plugin FirebaseStatsConsumer successfully loaded from module plugin_firebase_stats_consumer
node3_1 | 2020-12-15 12:35:11,912|INFO|node.py|Node3 updated its pool parameters: f 1, totalNodes 4, allNodeNames {'Node3', 'Node4', 'Node2', 'Node1'}, requiredNumberOfInstances 2, minimumNodes 3, quorums {'n': 4, 'f': 1, 'weak': Quorum(2), 'strong': Quorum(3), 'propagate': Quorum(2), 'prepare': Quorum(2), 'commit': Quorum(3), 'reply': Quorum(2), 'view_change': Quorum(3), 'election': Quorum(3), 'view_change_ack': Quorum(2), 'view_change_done': Quorum(3), 'same_consistency_proof': Quorum(2), 'consistency_proof': Quorum(2), 'ledger_status': Quorum(2), 'ledger_status_last_3PC': Quorum(2), 'checkpoint': Quorum(2), 'timestamp': Quorum(2), 'bls_signatures': Quorum(3), 'observer_data': Quorum(2), 'backup_instance_faulty': Quorum(2)}
node3_1 | 2020-12-15 12:35:11,913|INFO|consensus_shared_data.py|Node3:0 updated validators list to ['Node1', 'Node2', 'Node3', 'Node4']
node3_1 | 2020-12-15 12:35:11,916|INFO|primary_connection_monitor_service.py|Node3:0 scheduling primary connection check in 60 sec
node3_1 | 2020-12-15 12:35:11,917|NOTIFICATION|replicas.py|Node3 added replica Node3:0 to instance 0 (master)
node3_1 | 2020-12-15 12:35:11,918|INFO|replicas.py|reset monitor due to replica addition
node3_1 | 2020-12-15 12:35:11,919|INFO|consensus_shared_data.py|Node3:1 updated validators list to ['Node1', 'Node2', 'Node3', 'Node4']
node3_1 | 2020-12-15 12:35:11,927|NOTIFICATION|replicas.py|Node3 added replica Node3:1 to instance 1 (backup)
node3_1 | 2020-12-15 12:35:11,927|INFO|replicas.py|reset monitor due to replica addition
node3_1 | 2020-12-15 12:35:11,928|INFO|consensus_shared_data.py|Node3:0 updated validators list to {'Node3', 'Node4', 'Node2', 'Node1'}
node3_1 | 2020-12-15 12:35:11,928|INFO|consensus_shared_data.py|Node3:1 updated validators list to {'Node3', 'Node4', 'Node2', 'Node1'}
node3_1 | 2020-12-15 12:35:11,928|INFO|replica.py|Node3:0 setting primaryName for view no 0 to: Node1:0
node3_1 | 2020-12-15 12:35:11,928|NOTIFICATION|primary_connection_monitor_service.py|Node3:0 lost connection to primary of master
node3_1 | 2020-12-15 12:35:11,929|INFO|primary_connection_monitor_service.py|Node3:0 scheduling primary connection check in 60 sec
node3_1 | 2020-12-15 12:35:11,929|NOTIFICATION|node.py|PRIMARY SELECTION: selected primary Node1:0 for instance 0 (view 0)
node3_1 | 2020-12-15 12:35:11,929|INFO|replica.py|Node3:1 setting primaryName for view no 0 to: Node2:1
node3_1 | 2020-12-15 12:35:11,929|NOTIFICATION|node.py|PRIMARY SELECTION: selected primary Node2:1 for instance 1 (view 0)
node3_1 | 2020-12-15 12:35:11,930|INFO|node.py|total plugins loaded in node: 0
node3_1 | 2020-12-15 12:35:11,937|INFO|ledgers_bootstrap.py| found state to be empty, recreating from ledger 0
node3_1 | 2020-12-15 12:35:11,941|INFO|ledgers_bootstrap.py| initialized state for ledger 0: state root 9mYGVPSfvA24aJo5vnrC1JDBNf3dVSYEAJUVwaapkjeL
node3_1 | 2020-12-15 12:35:11,942|INFO|ledgers_bootstrap.py| found state to be empty, recreating from ledger 2
node3_1 | 2020-12-15 12:35:11,942|INFO|ledgers_bootstrap.py| initialized state for ledger 2: state root DfNLmH4DAHTKv63YPFJzuRdeEtVwF5RtVnvKYHd8iLEA
node3_1 | 2020-12-15 12:35:11,943|INFO|ledgers_bootstrap.py| found state to be empty, recreating from ledger 1
node3_1 | 2020-12-15 12:35:11,952|INFO|ledgers_bootstrap.py| initialized state for ledger 1: state root GWEizEWQdGH5QFdxRpoQGvPR42zSUEJoFkQKGBHiNDbM
node3_1 | 2020-12-15 12:35:11,952|INFO|motor.py|Node3 changing status from stopped to starting
node3_1 | 2020-12-15 12:35:11,968|INFO|zstack.py|Node3 will bind its listener at 0.0.0.0:9705
node2_1 | 2020-12-15 12:35:07,725|INFO|stacks.py|CONNECTION: Node2 listening for other nodes at 0.0.0.0:9703
node2_1 | 2020-12-15 12:35:07,727|INFO|zstack.py|Node2C will bind its listener at 0.0.0.0:9704
node2_1 | 2020-12-15 12:35:08,128|INFO|node.py|Node2 first time running...
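[Editor's note] The "f 1, totalNodes 4" and Quorum(...) values in the pool-parameters lines above follow the usual BFT arithmetic for a pool of n validators. A minimal sketch of that arithmetic (plain Python, illustrative only, not the indy-plenum implementation):

import math

def pool_quorums(n: int) -> dict:
    # A pool of n nodes tolerates f faults when n >= 3f + 1.
    f = (n - 1) // 3
    return {
        "n": n,
        "f": f,
        "weak": f + 1,    # at least one honest node in any such set -> Quorum(2) for n=4
        "strong": n - f,  # outvotes all possible faulty nodes      -> Quorum(3) for n=4
    }

print(pool_quorums(4))  # {'n': 4, 'f': 1, 'weak': 2, 'strong': 3}, matching the log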
node2_1 | 2020-12-15 12:35:08,134|INFO|node.py|Node2 processed 0 Ordered batches for instance 0 before starting catch up
node2_1 | 2020-12-15 12:35:08,136|INFO|node.py|Node2 processed 0 Ordered batches for instance 1 before starting catch up
node2_1 | 2020-12-15 12:35:08,137|INFO|ordering_service.py|Node2:0 reverted 0 batches before starting catch up
node2_1 | 2020-12-15 12:35:08,137|INFO|node_leecher_service.py|Node2:NodeLeecherService starting catchup (is_initial=True)
node2_1 | 2020-12-15 12:35:08,140|INFO|node_leecher_service.py|Node2:NodeLeecherService transitioning from Idle to PreSyncingPool
node2_1 | 2020-12-15 12:35:08,141|INFO|cons_proof_service.py|Node2:ConsProofService:0 starts
node2_1 | 2020-12-15 12:35:08,165|INFO|kit_zstack.py|CONNECTION: Node2 found the following missing connections: Node4, Node3, Node1
node2_1 | 2020-12-15 12:35:08,184|INFO|zstack.py|CONNECTION: Node2 looking for Node4 at 10.0.2.15:9707
node2_1 | 2020-12-15 12:35:08,226|INFO|zstack.py|CONNECTION: Node2 looking for Node3 at 10.0.2.15:9705
node2_1 | 2020-12-15 12:35:08,239|INFO|zstack.py|CONNECTION: Node2 looking for Node1 at 10.0.2.15:9701
node2_1 | 2020-12-15 12:35:11,593|NOTIFICATION|keep_in_touch.py|Node2's connections changed from set() to {'Node4'}
node2_1 | 2020-12-15 12:35:11,594|NOTIFICATION|keep_in_touch.py|CONNECTION: Node2 now connected to Node4
node2_1 | 2020-12-15 12:35:11,594|INFO|node.py|Node2 joined nodes {'Node4'} but status is 2
node2_1 | 2020-12-15 12:35:11,702|INFO|seeder_service.py|Node2 received ledger status: LEDGER_STATUS{'ledgerId': 0, 'txnSeqNo': 4, 'viewNo': None, 'ppSeqNo': None, 'merkleRoot': 'uxUiVfSkoaFW1wi2fwd3XH1rRx7bWnA3KsM4GhFsu1Z', 'protocolVersion': 2} from Node4
node2_1 | 2020-12-15 12:35:11,702|INFO|cons_proof_service.py|Node2:ConsProofService:0 comparing its ledger 0 of size 4 with 4
node2_1 | 2020-12-15 12:35:11,704|INFO|cons_proof_service.py|Node2:ConsProofService:0 comparing its ledger 0 of size 4 with 4
node2_1 | 2020-12-15 12:35:12,153|NOTIFICATION|keep_in_touch.py|Node2's connections changed from {'Node4'} to {'Node4', 'Node3'}
node2_1 | 2020-12-15 12:35:12,155|NOTIFICATION|keep_in_touch.py|CONNECTION: Node2 now connected to Node3
node2_1 | 2020-12-15 12:35:12,155|INFO|motor.py|Node2 changing status from starting to started_hungry
node2_1 | 2020-12-15 12:35:12,155|INFO|primary_connection_monitor_service.py|Node2:0 scheduling primary connection check in 60 sec
node2_1 | 2020-12-15 12:35:12,300|INFO|seeder_service.py|Node2 received ledger status: LEDGER_STATUS{'ledgerId': 0, 'txnSeqNo': 4, 'viewNo': None, 'ppSeqNo': None, 'merkleRoot': 'uxUiVfSkoaFW1wi2fwd3XH1rRx7bWnA3KsM4GhFsu1Z', 'protocolVersion': 2} from Node3
node2_1 | 2020-12-15 12:35:12,300|INFO|cons_proof_service.py|Node2:ConsProofService:0 comparing its ledger 0 of size 4 with 4
node2_1 | 2020-12-15 12:35:12,300|INFO|cons_proof_service.py|Node2:ConsProofService:0 comparing its ledger 0 of size 4 with 4
node2_1 | 2020-12-15 12:35:12,303|INFO|cons_proof_service.py|Node2:ConsProofService:0 finished with consistency proof None
node2_1 | 2020-12-15 12:35:12,303|INFO|catchup_rep_service.py|Node2:CatchupRepService:0 started catching up with LedgerCatchupStart(ledger_id=0, catchup_till=None, nodes_ledger_sizes={})
node2_1 | 2020-12-15 12:35:12,306|INFO|catchup_rep_service.py|CATCH-UP: Node2:CatchupRepService:0 completed catching up ledger 0, caught up 0 in total
node2_1 | 2020-12-15 12:35:12,307|INFO|node_leecher_service.py|Node2:NodeLeecherService transitioning from PreSyncingPool to SyncingAudit
node2_1 | 2020-12-15 12:35:12,308|INFO|cons_proof_service.py|Node2:ConsProofService:3 starts
node2_1 | 2020-12-15 12:35:12,311|INFO|cons_proof_service.py|Node2:ConsProofService:3 asking for ledger status of ledger 3
node2_1 | 2020-12-15 12:35:12,354|INFO|seeder_service.py|Node2 received ledger status: LEDGER_STATUS{'ledgerId': 3, 'txnSeqNo': 0, 'viewNo': None, 'ppSeqNo': None, 'merkleRoot': 'GKot5hBsd81kMupNCXHaqbhv3huEbxAFMLnpcX2hniwn', 'protocolVersion': 2} from Node3
node2_1 | 2020-12-15 12:35:12,355|INFO|cons_proof_service.py|Node2:ConsProofService:3 comparing its ledger 3 of size 0 with 0
node2_1 | 2020-12-15 12:35:12,356|INFO|cons_proof_service.py|Node2:ConsProofService:3 comparing its ledger 3 of size 0 with 0
node2_1 | 2020-12-15 12:35:12,356|INFO|seeder_service.py|Node2 received ledger status: LEDGER_STATUS{'ledgerId': 3, 'txnSeqNo': 0, 'viewNo': None, 'ppSeqNo': None, 'merkleRoot': 'GKot5hBsd81kMupNCXHaqbhv3huEbxAFMLnpcX2hniwn', 'protocolVersion': 2} from Node4
node2_1 | 2020-12-15 12:35:12,358|INFO|cons_proof_service.py|Node2:ConsProofService:3 comparing its ledger 3 of size 0 with 0
node2_1 | 2020-12-15 12:35:12,359|INFO|cons_proof_service.py|Node2:ConsProofService:3 comparing its ledger 3 of size 0 with 0
node2_1 | 2020-12-15 12:35:12,360|INFO|cons_proof_service.py|Node2:ConsProofService:3 finished with consistency proof None
node2_1 | 2020-12-15 12:35:12,360|INFO|catchup_rep_service.py|Node2:CatchupRepService:3 started catching up with LedgerCatchupStart(ledger_id=3, catchup_till=None, nodes_ledger_sizes={})
node2_1 | 2020-12-15 12:35:12,360|INFO|catchup_rep_service.py|CATCH-UP: Node2:CatchupRepService:3 completed catching up ledger 3, caught up 0 in total
node2_1 | 2020-12-15 12:35:12,361|INFO|node_leecher_service.py|Node2:NodeLeecherService transitioning from SyncingAudit to SyncingPool
node2_1 | 2020-12-15 12:35:12,362|INFO|cons_proof_service.py|Node2:ConsProofService:0 starts
node2_1 | 2020-12-15 12:35:12,362|INFO|cons_proof_service.py|Node2:ConsProofService:0 asking for ledger status of ledger 0
node2_1 | 2020-12-15 12:35:12,384|INFO|seeder_service.py|Node2 received ledger status: LEDGER_STATUS{'ledgerId': 0, 'txnSeqNo': 4, 'viewNo': None, 'ppSeqNo': None, 'merkleRoot': 'uxUiVfSkoaFW1wi2fwd3XH1rRx7bWnA3KsM4GhFsu1Z', 'protocolVersion': 2} from Node4
node2_1 | 2020-12-15 12:35:12,388|INFO|cons_proof_service.py|Node2:ConsProofService:0 comparing its ledger 0 of size 4 with 4
node2_1 | 2020-12-15 12:35:12,388|INFO|cons_proof_service.py|Node2:ConsProofService:0 comparing its ledger 0 of size 4 with 4
node2_1 | 2020-12-15 12:35:12,401|INFO|seeder_service.py|Node2 received ledger status: LEDGER_STATUS{'ledgerId': 0, 'txnSeqNo': 4, 'viewNo': None, 'ppSeqNo': None, 'merkleRoot': 'uxUiVfSkoaFW1wi2fwd3XH1rRx7bWnA3KsM4GhFsu1Z', 'protocolVersion': 2} from Node3
node2_1 | 2020-12-15 12:35:12,401|INFO|cons_proof_service.py|Node2:ConsProofService:0 comparing its ledger 0 of size 4 with 4
node2_1 | 2020-12-15 12:35:12,402|INFO|cons_proof_service.py|Node2:ConsProofService:0 comparing its ledger 0 of size 4 with 4
node2_1 | 2020-12-15 12:35:12,402|INFO|cons_proof_service.py|Node2:ConsProofService:0 finished with consistency proof None
node2_1 | 2020-12-15 12:35:12,403|INFO|catchup_rep_service.py|Node2:CatchupRepService:0 started catching up with LedgerCatchupStart(ledger_id=0, catchup_till=None, nodes_ledger_sizes={})
node2_1 | 2020-12-15 12:35:12,404|INFO|catchup_rep_service.py|CATCH-UP: Node2:CatchupRepService:0 completed catching up ledger 0, caught up 0 in total
node2_1 | 2020-12-15 12:35:12,406|INFO|node_leecher_service.py|Node2:NodeLeecherService transitioning from SyncingPool to SyncingConfig
node2_1 | 2020-12-15 12:35:12,406|INFO|cons_proof_service.py|Node2:ConsProofService:2 starts
node2_1 | 2020-12-15 12:35:12,408|INFO|cons_proof_service.py|Node2:ConsProofService:2 asking for ledger status of ledger 2
node2_1 | 2020-12-15 12:35:12,436|INFO|seeder_service.py|Node2 received ledger status: LEDGER_STATUS{'ledgerId': 2, 'txnSeqNo': 0, 'viewNo': None, 'ppSeqNo': None, 'merkleRoot': 'GKot5hBsd81kMupNCXHaqbhv3huEbxAFMLnpcX2hniwn', 'protocolVersion': 2} from Node4
node1_1 | 2020-12-15 12:35:12,956|INFO|cons_proof_service.py|Node1:ConsProofService:0 comparing its ledger 0 of size 4 with 4
node1_1 | 2020-12-15 12:35:12,961|INFO|cons_proof_service.py|Node1:ConsProofService:0 comparing its ledger 0 of size 4 with 4
node1_1 | 2020-12-15 12:35:12,961|INFO|cons_proof_service.py|Node1:ConsProofService:0 finished with consistency proof None
node1_1 | 2020-12-15 12:35:12,961|INFO|catchup_rep_service.py|Node1:CatchupRepService:0 started catching up with LedgerCatchupStart(ledger_id=0, catchup_till=None, nodes_ledger_sizes={})
node1_1 | 2020-12-15 12:35:12,962|INFO|catchup_rep_service.py|CATCH-UP: Node1:CatchupRepService:0 completed catching up ledger 0, caught up 0 in total
node1_1 | 2020-12-15 12:35:12,962|INFO|node_leecher_service.py|Node1:NodeLeecherService transitioning from SyncingPool to SyncingConfig
node1_1 | 2020-12-15 12:35:12,962|INFO|cons_proof_service.py|Node1:ConsProofService:2 starts
node1_1 | 2020-12-15 12:35:12,963|INFO|cons_proof_service.py|Node1:ConsProofService:2 asking for ledger status of ledger 2
node1_1 | 2020-12-15 12:35:12,968|INFO|seeder_service.py|Node1 received ledger status: LEDGER_STATUS{'ledgerId': 0, 'txnSeqNo': 4, 'viewNo': None, 'ppSeqNo': None, 'merkleRoot': 'uxUiVfSkoaFW1wi2fwd3XH1rRx7bWnA3KsM4GhFsu1Z', 'protocolVersion': 2} from Node4
node1_1 | 2020-12-15 12:35:12,968|INFO|cons_proof_service.py|Node1:ConsProofService:0 ignoring LEDGER_STATUS{'ledgerId': 0, 'txnSeqNo': 4, 'viewNo': None, 'ppSeqNo': None, 'merkleRoot': 'uxUiVfSkoaFW1wi2fwd3XH1rRx7bWnA3KsM4GhFsu1Z', 'protocolVersion': 2} since it is not gathering ledger statuses
node1_1 | 2020-12-15 12:35:12,997|INFO|seeder_service.py|Node1 received ledger status: LEDGER_STATUS{'ledgerId': 2, 'txnSeqNo': 0, 'viewNo': None, 'ppSeqNo': None, 'merkleRoot': 'GKot5hBsd81kMupNCXHaqbhv3huEbxAFMLnpcX2hniwn', 'protocolVersion': 2} from Node3
node1_1 | 2020-12-15 12:35:12,999|INFO|cons_proof_service.py|Node1:ConsProofService:2 comparing its ledger 2 of size 0 with 0
node1_1 | 2020-12-15 12:35:12,999|INFO|cons_proof_service.py|Node1:ConsProofService:2 comparing its ledger 2 of size 0 with 0
node1_1 | 2020-12-15 12:35:12,999|INFO|seeder_service.py|Node1 received ledger status: LEDGER_STATUS{'ledgerId': 2, 'txnSeqNo': 0, 'viewNo': None, 'ppSeqNo': None, 'merkleRoot': 'GKot5hBsd81kMupNCXHaqbhv3huEbxAFMLnpcX2hniwn', 'protocolVersion': 2} from Node2
node1_1 | 2020-12-15 12:35:13,000|INFO|cons_proof_service.py|Node1:ConsProofService:2 comparing its ledger 2 of size 0 with 0
node1_1 | 2020-12-15 12:35:13,000|INFO|cons_proof_service.py|Node1:ConsProofService:2 comparing its ledger 2 of size 0 with 0
node1_1 | 2020-12-15 12:35:13,002|INFO|cons_proof_service.py|Node1:ConsProofService:2 finished with consistency proof None
node1_1 | 2020-12-15 12:35:13,002|INFO|catchup_rep_service.py|Node1:CatchupRepService:2 started catching up with LedgerCatchupStart(ledger_id=2, catchup_till=None, nodes_ledger_sizes={})
node1_1 | 2020-12-15 12:35:13,002|INFO|catchup_rep_service.py|CATCH-UP: Node1:CatchupRepService:2 completed catching up ledger 2, caught up 0 in total
node1_1 | 2020-12-15 12:35:13,003|INFO|node_leecher_service.py|Node1:NodeLeecherService transitioning from SyncingConfig to SyncingOthers
node1_1 | 2020-12-15 12:35:13,003|INFO|cons_proof_service.py|Node1:ConsProofService:1 starts
node1_1 | 2020-12-15 12:35:13,003|INFO|cons_proof_service.py|Node1:ConsProofService:1 asking for ledger status of ledger 1
node1_1 | 2020-12-15 12:35:13,004|INFO|seeder_service.py|Node1 received ledger status: LEDGER_STATUS{'ledgerId': 2, 'txnSeqNo': 0, 'viewNo': None, 'ppSeqNo': None, 'merkleRoot': 'GKot5hBsd81kMupNCXHaqbhv3huEbxAFMLnpcX2hniwn', 'protocolVersion': 2} from Node4
node1_1 | 2020-12-15 12:35:13,004|INFO|cons_proof_service.py|Node1:ConsProofService:2 ignoring LEDGER_STATUS{'ledgerId': 2, 'txnSeqNo': 0, 'viewNo': None, 'ppSeqNo': None, 'merkleRoot': 'GKot5hBsd81kMupNCXHaqbhv3huEbxAFMLnpcX2hniwn', 'protocolVersion': 2} since it is not gathering ledger statuses
node1_1 | 2020-12-15 12:35:13,023|INFO|seeder_service.py|Node1 received ledger status: LEDGER_STATUS{'ledgerId': 1, 'txnSeqNo': 5, 'viewNo': None, 'ppSeqNo': None, 'merkleRoot': 'DJLzEifT7n9DbiCHqQ5KWrWUPFBiZt349popapDfzo5p', 'protocolVersion': 2} from Node4
node1_1 | 2020-12-15 12:35:13,024|INFO|cons_proof_service.py|Node1:ConsProofService:1 comparing its ledger 1 of size 5 with 5
node1_1 | 2020-12-15 12:35:13,024|INFO|cons_proof_service.py|Node1:ConsProofService:1 comparing its ledger 1 of size 5 with 5
node1_1 | 2020-12-15 12:35:13,025|INFO|seeder_service.py|Node1 received ledger status: LEDGER_STATUS{'ledgerId': 1, 'txnSeqNo': 5, 'viewNo': None, 'ppSeqNo': None, 'merkleRoot': 'DJLzEifT7n9DbiCHqQ5KWrWUPFBiZt349popapDfzo5p', 'protocolVersion': 2} from Node3
node1_1 | 2020-12-15 12:35:13,025|INFO|cons_proof_service.py|Node1:ConsProofService:1 comparing its ledger 1 of size 5 with 5
node1_1 | 2020-12-15 12:35:13,025|INFO|cons_proof_service.py|Node1:ConsProofService:1 comparing its ledger 1 of size 5 with 5
node1_1 | 2020-12-15 12:35:13,026|INFO|cons_proof_service.py|Node1:ConsProofService:1 finished with consistency proof None
node1_1 | 2020-12-15 12:35:13,026|INFO|catchup_rep_service.py|Node1:CatchupRepService:1 started catching up with LedgerCatchupStart(ledger_id=1, catchup_till=None, nodes_ledger_sizes={})
node1_1 | 2020-12-15 12:35:13,026|INFO|catchup_rep_service.py|CATCH-UP: Node1:CatchupRepService:1 completed catching up ledger 1, caught up 0 in total
node1_1 | 2020-12-15 12:35:13,027|INFO|node_leecher_service.py|Node1:NodeLeecherService transitioning from SyncingOthers to Idle
node1_1 | 2020-12-15 12:35:13,033|INFO|node.py|Node1 caught up to 0 txns in the last catchup
node1_1 | 2020-12-15 12:35:13,034|INFO|node_reg_handler.py|Loaded current node registry from the ledger: ['Node1', 'Node2', 'Node3', 'Node4']
node1_1 | 2020-12-15 12:35:13,034|INFO|node_reg_handler.py|Current committed node registry for previous views: [(0, ['Node1', 'Node2', 'Node3', 'Node4'])]
node1_1 | 2020-12-15 12:35:13,035|INFO|node_reg_handler.py|Current uncommitted node registry for previous views: [(0, ['Node1', 'Node2', 'Node3', 'Node4'])]
node1_1 | 2020-12-15 12:35:13,035|INFO|node_reg_handler.py|Current active node registry: ['Node1', 'Node2', 'Node3', 'Node4']
node1_1 | 2020-12-15 12:35:13,035|INFO|node.py|CATCH-UP: Node1 caught up till (0, 0)
node1_1 | 2020-12-15 12:35:13,035|INFO|node.py|CATCH-UP: Node1 does not need any more catchups
node1_1 | 2020-12-15 12:35:13,035|INFO|replica.py|Node1:0 setting primaryName for view no 0 to: None
node1_1 | 2020-12-15 12:35:13,036|INFO|replica.py|Node1:1 setting primaryName for view no 0 to: None
node1_1 | 2020-12-15 12:35:13,036|INFO|node.py|Node1 updated its pool parameters: f 1, totalNodes 4, allNodeNames {'Node2', 'Node3', 'Node4', 'Node1'}, requiredNumberOfInstances 2, minimumNodes 3, quorums {'n': 4, 'f': 1, 'weak': Quorum(2), 'strong': Quorum(3), 'propagate': Quorum(2), 'prepare': Quorum(2), 'commit': Quorum(3), 'reply': Quorum(2), 'view_change': Quorum(3), 'election': Quorum(3), 'view_change_ack': Quorum(2), 'view_change_done': Quorum(3), 'same_consistency_proof': Quorum(2), 'consistency_proof': Quorum(2), 'ledger_status': Quorum(2), 'ledger_status_last_3PC': Quorum(2), 'checkpoint': Quorum(2), 'timestamp': Quorum(2), 'bls_signatures': Quorum(3), 'observer_data': Quorum(2), 'backup_instance_faulty': Quorum(2)}
node1_1 | 2020-12-15 12:35:13,036|INFO|consensus_shared_data.py|Node1:0 updated validators list to {'Node2', 'Node3', 'Node4', 'Node1'}
node1_1 | 2020-12-15 12:35:13,036|INFO|consensus_shared_data.py|Node1:1 updated validators list to {'Node2', 'Node3', 'Node4', 'Node1'}
node1_1 | 2020-12-15 12:35:13,037|INFO|replica.py|Node1:0 setting primaryName for view no 0 to: Node1:0
node1_1 | 2020-12-15 12:35:13,037|NOTIFICATION|primary_connection_monitor_service.py|Node1:0 restored connection to primary of master
node4_1 | {"reqSignature":{},"txn":{"data":{"data":{"alias":"Node4","blskey":"2zN3bHM1m4rLz54MJHYSwvqzPchYp8jkHswveCLAEJVcX6Mm1wHQD1SkPYMzUDTZvWvhuE6VNAkK3KxVeEmsanSmvjVkReDeBEMxeDaayjcZjFGPydyey1qxBHmTvAnBKoPydvuTAqx5f7YNNRAdeLmUi99gERUU7TD8KfAa6MpQ9bw","blskey_pop":"RPLagxaR5xdimFzwmzYnz4ZhWtYQEj8iR5ZU53T2gitPCyCHQneUn2Huc4oeLd2B2HzkGnjAff4hWTJT6C7qHYB1Mv2wU5iHHGFWkhnTX9WsEAbunJCV2qcaXScKj4tTfvdDKfLiVuU2av6hbsMztirRze7LvYBkRHV3tGwyCptsrP","client_ip":"10.0.2.15","client_port":9708,"node_ip":"10.0.2.15","node_port":9707,"services":["VALIDATOR"]},"dest":"4PS3EDQ3dW1tci1Bp6543CfuuebjFrg36kLAUcskGfaA"},"metadata":{"from":"TWwCRQRZ2ZHMJFn9TzLp7W"},"type":"0"},"txnMetadata":{"seqNo":4,"txnId":"aa5e817d7cc626170eca175822029339a444eb0ee8f0bd20d3b0b76e566fb008"},"ver":"1"}
node4_1 | ================================================================================================
node4_1 |
node4_1 | start_indy_node Node4 0.0.0.0 9707 0.0.0.0 9708
node4_1 | 2020-12-15 12:35:06,530|INFO|notifier_plugin_manager.py|Found notifier plugins: []
node4_1 | 2020-12-15 12:35:08,143|INFO|notifier_plugin_manager.py|Found notifier plugins: []
node4_1 | 2020-12-15 12:35:09,071|INFO|looper.py|Starting up indy-node
node4_1 | 2020-12-15 12:35:09,443|INFO|ledger.py|Starting ledger...
node4_1 | 2020-12-15 12:35:09,496|INFO|ledger.py|Recovering tree from transaction log
node4_1 | 2020-12-15 12:35:09,608|INFO|ledger.py|Recovered tree in 0.1122269969998797 seconds
node4_1 | 2020-12-15 12:35:09,757|INFO|ledger.py|Starting ledger...
node4_1 | 2020-12-15 12:35:09,810|INFO|ledger.py|Recovering tree from transaction log
node4_1 | 2020-12-15 12:35:09,934|INFO|ledger.py|Recovered tree in 0.1246711440001036 seconds
node4_1 | 2020-12-15 12:35:10,182|INFO|ledger.py|Starting ledger...
node4_1 | 2020-12-15 12:35:10,260|INFO|ledger.py|Recovering tree from transaction log
node4_1 | 2020-12-15 12:35:10,404|INFO|ledger.py|Recovered tree in 0.14390526499983025 seconds
node4_1 | 2020-12-15 12:35:10,596|INFO|ledger.py|Starting ledger...
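[Editor's note] Read in order, the node_leecher_service.py lines above trace one full catchup pass per node: the pool ledger is pre-synced, then the audit (3), pool (0), config (2), and domain (1) ledgers are caught up in turn before the service returns to Idle. A compact sketch of that observed state order (illustrative names, not the indy-plenum source):

from enum import Enum

class LeecherState(Enum):
    IDLE = "Idle"
    PRE_SYNCING_POOL = "PreSyncingPool"   # ledger 0, before the audit ledger is trusted
    SYNCING_AUDIT = "SyncingAudit"        # ledger 3
    SYNCING_POOL = "SyncingPool"          # ledger 0 again, audit-checked
    SYNCING_CONFIG = "SyncingConfig"      # ledger 2
    SYNCING_OTHERS = "SyncingOthers"      # ledger 1 (domain)

# The transition sequence exactly as logged for Node1, Node2 and Node3:
CATCHUP_SEQUENCE = [
    LeecherState.IDLE,
    LeecherState.PRE_SYNCING_POOL,
    LeecherState.SYNCING_AUDIT,
    LeecherState.SYNCING_POOL,
    LeecherState.SYNCING_CONFIG,
    LeecherState.SYNCING_OTHERS,
    LeecherState.IDLE,
]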
node4_1 | 2020-12-15 12:35:10,664|INFO|ledger.py|Recovering tree from transaction log
node4_1 | 2020-12-15 12:35:10,827|INFO|ledger.py|Recovered tree in 0.16301821399997607 seconds
node4_1 | 2020-12-15 12:35:11,321|NOTIFICATION|node_bootstrap.py|BLS: BLS Signatures will be used for Node Node4
node4_1 | 2020-12-15 12:35:11,321|INFO|pool_manager.py|Node4 sets node Node1 (Gw6pDLhcBcoQesN72qfotTgFa7cbuqZpkX3Xo6pLhPhv) order to 5
node4_1 | 2020-12-15 12:35:11,322|INFO|pool_manager.py|Node4 sets node Node2 (8ECVSk179mjsjKRLWiQtssMLgp6EPhWXtaYyStWPSGAb) order to 5
node4_1 | 2020-12-15 12:35:11,322|INFO|pool_manager.py|Node4 sets node Node3 (DKVxG2fXXTU8yT5N7hGEbXB3dfdAnYv1JczDUHpmDxya) order to 5
node4_1 | 2020-12-15 12:35:11,322|INFO|pool_manager.py|Node4 sets node Node4 (4PS3EDQ3dW1tci1Bp6543CfuuebjFrg36kLAUcskGfaA) order to 5
node4_1 | 2020-12-15 12:35:11,343|INFO|notifier_plugin_manager.py|Found notifier plugins: []
node4_1 | 2020-12-15 12:35:11,345|INFO|notifier_plugin_manager.py|Found notifier plugins: []
node4_1 | 2020-12-15 12:35:11,350|INFO|stacks.py|Node4C: clients connections tracking is enabled.
node4_1 | 2020-12-15 12:35:11,351|INFO|stacks.py|Node4C: client stack restart is enabled.
node4_1 | 2020-12-15 12:35:11,454|NOTIFICATION|plugin_loader.py|skipping plugin plugin_firebase_stats_consumer[class: typing.Dict] because it does not have a 'pluginType' attribute
node4_1 | 2020-12-15 12:35:11,454|NOTIFICATION|plugin_loader.py|skipping plugin plugin_firebase_stats_consumer[class: ] because it does not have a 'pluginType' attribute
node4_1 | 2020-12-15 12:35:11,455|NOTIFICATION|plugin_loader.py|skipping plugin plugin_firebase_stats_consumer[class: ] because it does not have a 'pluginType' attribute
node4_1 | 2020-12-15 12:35:11,455|NOTIFICATION|plugin_loader.py|skipping plugin plugin_firebase_stats_consumer[class: ] because it does not have a 'pluginType' attribute
node4_1 | 2020-12-15 12:35:11,456|NOTIFICATION|plugin_loader.py|skipping plugin plugin_firebase_stats_consumer[class: ] because it does not have a 'pluginType' attribute
node4_1 | 2020-12-15 12:35:11,459|INFO|plugin_loader.py|plugin FirebaseStatsConsumer successfully loaded from module plugin_firebase_stats_consumer
node4_1 | 2020-12-15 12:35:11,460|INFO|node.py|Node4 updated its pool parameters: f 1, totalNodes 4, allNodeNames {'Node3', 'Node2', 'Node1', 'Node4'}, requiredNumberOfInstances 2, minimumNodes 3, quorums {'n': 4, 'f': 1, 'weak': Quorum(2), 'strong': Quorum(3), 'propagate': Quorum(2), 'prepare': Quorum(2), 'commit': Quorum(3), 'reply': Quorum(2), 'view_change': Quorum(3), 'election': Quorum(3), 'view_change_ack': Quorum(2), 'view_change_done': Quorum(3), 'same_consistency_proof': Quorum(2), 'consistency_proof': Quorum(2), 'ledger_status': Quorum(2), 'ledger_status_last_3PC': Quorum(2), 'checkpoint': Quorum(2), 'timestamp': Quorum(2), 'bls_signatures': Quorum(3), 'observer_data': Quorum(2), 'backup_instance_faulty': Quorum(2)}
node4_1 | 2020-12-15 12:35:11,461|INFO|consensus_shared_data.py|Node4:0 updated validators list to ['Node1', 'Node2', 'Node3', 'Node4']
node4_1 | 2020-12-15 12:35:11,464|INFO|primary_connection_monitor_service.py|Node4:0 scheduling primary connection check in 60 sec
node4_1 | 2020-12-15 12:35:11,465|NOTIFICATION|replicas.py|Node4 added replica Node4:0 to instance 0 (master)
node4_1 | 2020-12-15 12:35:11,465|INFO|replicas.py|reset monitor due to replica addition
node4_1 | 2020-12-15 12:35:11,466|INFO|consensus_shared_data.py|Node4:1 updated validators list to ['Node1', 'Node2', 'Node3', 'Node4']
node4_1 | 2020-12-15 12:35:11,473|NOTIFICATION|replicas.py|Node4 added replica Node4:1 to instance 1 (backup)
node4_1 | 2020-12-15 12:35:11,473|INFO|replicas.py|reset monitor due to replica addition
node4_1 | 2020-12-15 12:35:11,474|INFO|consensus_shared_data.py|Node4:0 updated validators list to {'Node3', 'Node2', 'Node1', 'Node4'}
node4_1 | 2020-12-15 12:35:11,474|INFO|consensus_shared_data.py|Node4:1 updated validators list to {'Node3', 'Node2', 'Node1', 'Node4'}
node4_1 | 2020-12-15 12:35:11,474|INFO|replica.py|Node4:0 setting primaryName for view no 0 to: Node1:0
node4_1 | 2020-12-15 12:35:11,475|NOTIFICATION|primary_connection_monitor_service.py|Node4:0 lost connection to primary of master
node4_1 | 2020-12-15 12:35:11,475|INFO|primary_connection_monitor_service.py|Node4:0 scheduling primary connection check in 60 sec
node4_1 | 2020-12-15 12:35:11,475|NOTIFICATION|node.py|PRIMARY SELECTION: selected primary Node1:0 for instance 0 (view 0)
node4_1 | 2020-12-15 12:35:11,475|INFO|replica.py|Node4:1 setting primaryName for view no 0 to: Node2:1
node4_1 | 2020-12-15 12:35:11,476|NOTIFICATION|node.py|PRIMARY SELECTION: selected primary Node2:1 for instance 1 (view 0)
node4_1 | 2020-12-15 12:35:11,476|INFO|node.py|total plugins loaded in node: 0
node4_1 | 2020-12-15 12:35:11,478|INFO|ledgers_bootstrap.py| found state to be empty, recreating from ledger 0
node4_1 | 2020-12-15 12:35:11,491|INFO|ledgers_bootstrap.py| initialized state for ledger 0: state root 9mYGVPSfvA24aJo5vnrC1JDBNf3dVSYEAJUVwaapkjeL
node4_1 | 2020-12-15 12:35:11,492|INFO|ledgers_bootstrap.py| found state to be empty, recreating from ledger 2
node4_1 | 2020-12-15 12:35:11,492|INFO|ledgers_bootstrap.py| initialized state for ledger 2: state root DfNLmH4DAHTKv63YPFJzuRdeEtVwF5RtVnvKYHd8iLEA
node4_1 | 2020-12-15 12:35:11,492|INFO|ledgers_bootstrap.py| found state to be empty, recreating from ledger 1
node4_1 | 2020-12-15 12:35:11,500|INFO|ledgers_bootstrap.py| initialized state for ledger 1: state root GWEizEWQdGH5QFdxRpoQGvPR42zSUEJoFkQKGBHiNDbM
node4_1 | 2020-12-15 12:35:11,500|INFO|motor.py|Node4 changing status from stopped to starting
node4_1 | 2020-12-15 12:35:11,502|INFO|zstack.py|Node4 will bind its listener at 0.0.0.0:9707
node4_1 | 2020-12-15 12:35:11,502|INFO|stacks.py|CONNECTION: Node4 listening for other nodes at 0.0.0.0:9707
node4_1 | 2020-12-15 12:35:11,503|INFO|zstack.py|Node4C will bind its listener at 0.0.0.0:9708
webserver_1 | Ledger does not exist - Creating genesis data...
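[Editor's note] Every node picks the same primaries without any election traffic: the PRIMARY SELECTION lines are consistent with plenum's deterministic round-robin rule, where for view v and replica instance i the primary is the validator at index (v + i) mod n in the ordered validator list. A quick check against the selections logged above (hypothetical helper name):

# Hypothetical helper; reproduces the log's choices for view 0.
def select_primary(validators: list, view_no: int, instance_id: int) -> str:
    return validators[(view_no + instance_id) % len(validators)]

validators = ["Node1", "Node2", "Node3", "Node4"]
assert select_primary(validators, 0, 0) == "Node1"  # "selected primary Node1:0 for instance 0 (view 0)"
assert select_primary(validators, 0, 1) == "Node2"  # "selected primary Node2:1 for instance 1 (view 0)"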
webserver_1 | von_generate_transactions
webserver_1 |
webserver_1 |
webserver_1 | ================================================================================================
webserver_1 | Generating genesis transaction file:
webserver_1 | nodeArg:
webserver_1 | ipAddresses: 10.0.2.15,10.0.2.15,10.0.2.15,10.0.2.15
webserver_1 | genesisFilePath: /home/indy/ledger/sandbox/pool_transactions_genesis
webserver_1 | ------------------------------------------------------------------------------------------------
webserver_1 | generate_indy_pool_transactions --nodes 4 --clients 0 --ips 10.0.2.15,10.0.2.15,10.0.2.15,10.0.2.15
webserver_1 |
webserver_1 | BLS Public key is 4N8aUNHSgjQVgkpm8nhNEfDf6txHznoYREg9kirmJrkivgL4oSEimFF6nsQ6M41QvhM2Z33nves5vfSn9n1UwNFJBYtWVnHYMATn76vLuL3zU88KyeAYcHfsih3He6UHcXDxcaecHVz6jhCYz1P2UZn2bDVruL5wXpehgBfBaLKm3Ba
webserver_1 | Proof of possession for BLS key is RahHYiCvoNCtPTrVtP7nMC5eTYrsUA8WjXbdhNc8debh1agE9bGiJxWBXYNFbnJXoXhWFMvyqhqhRoq737YQemH5ik9oL7R4NTTCz2LEZhkgLJzB3QRQqJyBNyv7acbdHrAT8nQ9UkLbaVL9NBpnWXBTw4LEMePaSHEw66RzPNdAX1
webserver_1 | BLS Public key is 37rAPpXVoxzKhz7d9gkUe52XuXryuLXoM6P6LbWDB7LSbG62Lsb33sfG7zqS8TK1MXwuCHj1FKNzVpsnafmqLG1vXN88rt38mNFs9TENzm4QHdBzsvCuoBnPH7rpYYDo9DZNJePaDvRvqJKByCabubJz3XXKbEeshzpz4Ma5QYpJqjk
webserver_1 | Proof of possession for BLS key is Qr658mWZ2YC8JXGXwMDQTzuZCWF7NK9EwxphGmcBvCh6ybUuLxbG65nsX4JvD4SPNtkJ2w9ug1yLTj6fgmuDg41TgECXjLCij3RMsV8CwewBVgVN67wsA45DFWvqvLtu4rjNnE9JbdFTc1Z4WCPA3Xan44K1HoHAq9EVeaRYs8zoF5
webserver_1 | BLS Public key is 3WFpdbg7C5cnLYZwFZevJqhubkFALBfCBBok15GdrKMUhUjGsk3jV6QKj6MZgEubF7oqCafxNdkm7eswgA4sdKTRc82tLGzZBd6vNqU8dupzup6uYUf32KTHTPQbuUM8Yk4QFXjEf2Usu2TJcNkdgpyeUSX42u5LqdDDpNSWUK5deC5
webserver_1 | Proof of possession for BLS key is QwDeb2CkNSx6r8QC8vGQK3GRv7Yndn84TGNijX8YXHPiagXajyfTjoR87rXUu4G4QLk2cF8NNyqWiYMus1623dELWwx57rLCFqGh7N4ZRbGDRP4fnVcaKg1BcUxQ866Ven4gw8y4N56S5HzxXNBZtLYmhGHvDtk6PFkFwCvxYrNYjh
webserver_1 | BLS Public key is 2zN3bHM1m4rLz54MJHYSwvqzPchYp8jkHswveCLAEJVcX6Mm1wHQD1SkPYMzUDTZvWvhuE6VNAkK3KxVeEmsanSmvjVkReDeBEMxeDaayjcZjFGPydyey1qxBHmTvAnBKoPydvuTAqx5f7YNNRAdeLmUi99gERUU7TD8KfAa6MpQ9bw
webserver_1 | Proof of possession for BLS key is RPLagxaR5xdimFzwmzYnz4ZhWtYQEj8iR5ZU53T2gitPCyCHQneUn2Huc4oeLd2B2HzkGnjAff4hWTJT6C7qHYB1Mv2wU5iHHGFWkhnTX9WsEAbunJCV2qcaXScKj4tTfvdDKfLiVuU2av6hbsMztirRze7LvYBkRHV3tGwyCptsrP
webserver_1 |
webserver_1 | ------------------------------------------------------------------------------------------------
webserver_1 | Generated genesis transaction file; /home/indy/ledger/sandbox/pool_transactions_genesis
webserver_1 |
webserver_1 | {"reqSignature":{},"txn":{"data":{"data":{"alias":"Node1","blskey":"4N8aUNHSgjQVgkpm8nhNEfDf6txHznoYREg9kirmJrkivgL4oSEimFF6nsQ6M41QvhM2Z33nves5vfSn9n1UwNFJBYtWVnHYMATn76vLuL3zU88KyeAYcHfsih3He6UHcXDxcaecHVz6jhCYz1P2UZn2bDVruL5wXpehgBfBaLKm3Ba","blskey_pop":"RahHYiCvoNCtPTrVtP7nMC5eTYrsUA8WjXbdhNc8debh1agE9bGiJxWBXYNFbnJXoXhWFMvyqhqhRoq737YQemH5ik9oL7R4NTTCz2LEZhkgLJzB3QRQqJyBNyv7acbdHrAT8nQ9UkLbaVL9NBpnWXBTw4LEMePaSHEw66RzPNdAX1","client_ip":"10.0.2.15","client_port":9702,"node_ip":"10.0.2.15","node_port":9701,"services":["VALIDATOR"]},"dest":"Gw6pDLhcBcoQesN72qfotTgFa7cbuqZpkX3Xo6pLhPhv"},"metadata":{"from":"Th7MpTaRZVRYnPiabds81Y"},"type":"0"},"txnMetadata":{"seqNo":1,"txnId":"fea82e10e894419fe2bea7d96296a6d46f50f93f9eeda954ec461b2ed2950b62"},"ver":"1"}
webserver_1 | {"reqSignature":{},"txn":{"data":{"data":{"alias":"Node2","blskey":"37rAPpXVoxzKhz7d9gkUe52XuXryuLXoM6P6LbWDB7LSbG62Lsb33sfG7zqS8TK1MXwuCHj1FKNzVpsnafmqLG1vXN88rt38mNFs9TENzm4QHdBzsvCuoBnPH7rpYYDo9DZNJePaDvRvqJKByCabubJz3XXKbEeshzpz4Ma5QYpJqjk","blskey_pop":"Qr658mWZ2YC8JXGXwMDQTzuZCWF7NK9EwxphGmcBvCh6ybUuLxbG65nsX4JvD4SPNtkJ2w9ug1yLTj6fgmuDg41TgECXjLCij3RMsV8CwewBVgVN67wsA45DFWvqvLtu4rjNnE9JbdFTc1Z4WCPA3Xan44K1HoHAq9EVeaRYs8zoF5","client_ip":"10.0.2.15","client_port":9704,"node_ip":"10.0.2.15","node_port":9703,"services":["VALIDATOR"]},"dest":"8ECVSk179mjsjKRLWiQtssMLgp6EPhWXtaYyStWPSGAb"},"metadata":{"from":"EbP4aYNeTHL6q385GuVpRV"},"type":"0"},"txnMetadata":{"seqNo":2,"txnId":"1ac8aece2a18ced660fef8694b61aac3af08ba875ce3026a160acbc3a3af35fc"},"ver":"1"}
webserver_1 | {"reqSignature":{},"txn":{"data":{"data":{"alias":"Node3","blskey":"3WFpdbg7C5cnLYZwFZevJqhubkFALBfCBBok15GdrKMUhUjGsk3jV6QKj6MZgEubF7oqCafxNdkm7eswgA4sdKTRc82tLGzZBd6vNqU8dupzup6uYUf32KTHTPQbuUM8Yk4QFXjEf2Usu2TJcNkdgpyeUSX42u5LqdDDpNSWUK5deC5","blskey_pop":"QwDeb2CkNSx6r8QC8vGQK3GRv7Yndn84TGNijX8YXHPiagXajyfTjoR87rXUu4G4QLk2cF8NNyqWiYMus1623dELWwx57rLCFqGh7N4ZRbGDRP4fnVcaKg1BcUxQ866Ven4gw8y4N56S5HzxXNBZtLYmhGHvDtk6PFkFwCvxYrNYjh","client_ip":"10.0.2.15","client_port":9706,"node_ip":"10.0.2.15","node_port":9705,"services":["VALIDATOR"]},"dest":"DKVxG2fXXTU8yT5N7hGEbXB3dfdAnYv1JczDUHpmDxya"},"metadata":{"from":"4cU41vWW82ArfxJxHkzXPG"},"type":"0"},"txnMetadata":{"seqNo":3,"txnId":"7e9f355dffa78ed24668f0e0e369fd8c224076571c51e2ea8be5f26479edebe4"},"ver":"1"}
webserver_1 | {"reqSignature":{},"txn":{"data":{"data":{"alias":"Node4","blskey":"2zN3bHM1m4rLz54MJHYSwvqzPchYp8jkHswveCLAEJVcX6Mm1wHQD1SkPYMzUDTZvWvhuE6VNAkK3KxVeEmsanSmvjVkReDeBEMxeDaayjcZjFGPydyey1qxBHmTvAnBKoPydvuTAqx5f7YNNRAdeLmUi99gERUU7TD8KfAa6MpQ9bw","blskey_pop":"RPLagxaR5xdimFzwmzYnz4ZhWtYQEj8iR5ZU53T2gitPCyCHQneUn2Huc4oeLd2B2HzkGnjAff4hWTJT6C7qHYB1Mv2wU5iHHGFWkhnTX9WsEAbunJCV2qcaXScKj4tTfvdDKfLiVuU2av6hbsMztirRze7LvYBkRHV3tGwyCptsrP","client_ip":"10.0.2.15","client_port":9708,"node_ip":"10.0.2.15","node_port":9707,"services":["VALIDATOR"]},"dest":"4PS3EDQ3dW1tci1Bp6543CfuuebjFrg36kLAUcskGfaA"},"metadata":{"from":"TWwCRQRZ2ZHMJFn9TzLp7W"},"type":"0"},"txnMetadata":{"seqNo":4,"txnId":"aa5e817d7cc626170eca175822029339a444eb0ee8f0bd20d3b0b76e566fb008"},"ver":"1"}
webserver_1 | ================================================================================================
webserver_1 |
webserver_1 | 2020-12-15 12:34:02,403|INFO|server.py|REGISTER_NEW_DIDS is set to True
webserver_1 | 2020-12-15 12:34:02,476|INFO|server.py|LEDGER_INSTANCE_NAME is set to "localhost"
webserver_1 | 2020-12-15 12:34:02,482|INFO|server.py|Web analytics are DISABLED
webserver_1 | 2020-12-15 12:34:02,510|INFO|server.py|Running webserver...
webserver_1 | 2020-12-15 12:34:02,528|DEBUG|selector_events.py|Using selector: EpollSelector
webserver_1 | 2020-12-15 12:34:02,529|INFO|server.py|Creating trust anchor...
webserver_1 | 2020-12-15 12:34:02,530|INFO|anchor.py|Ledger cache will be stored in :memory: webserver_1 | ======== Running on http://0.0.0.0:8000 ======== webserver_1 | (Press CTRL+C to quit) webserver_1 | 2020-12-15 12:34:02,625|DEBUG|core.py|executing functools.partial(.connector at 0x7fc310530d08>) webserver_1 | 2020-12-15 12:34:02,635|DEBUG|core.py|returning webserver_1 | 2020-12-15 12:34:02,647|INFO|anchor.py|Initializing transaction database webserver_1 | 2020-12-15 12:34:02,664|DEBUG|core.py|executing functools.partial(, '\n CREATE TABLE existent (\n ledger integer PRIMARY KEY,\n seqno integer NOT NULL DEFAULT 0\n );\n CREATE TABLE latest (\n ledger integer PRIMARY KEY,\n seqno integer NOT NULL DEFAULT 0\n );\n CREATE TABLE transactions (\n ledger integer NOT NULL,\n seqno integer NOT NULL,\n txntype integer NOT NULL,\n termsid integer,\n txnid text,\n added timestamp,\n value text,\n PRIMARY KEY (ledger, seqno)\n );\n CREATE INDEX txn_id ON transactions (txnid);\n CREATE VIRTUAL TABLE terms USING\n fts3(txnid, sender, ident, alias, verkey, short_verkey, data);\n ') webserver_1 | 2020-12-15 12:34:02,676|DEBUG|core.py|returning webserver_1 | 2020-12-15 12:34:02,699|DEBUG|core.py|executing functools.partial() webserver_1 | 2020-12-15 12:34:02,700|DEBUG|core.py|returning None node3_1 | 2020-12-15 12:35:11,968|INFO|stacks.py|CONNECTION: Node3 listening for other nodes at 0.0.0.0:9705 node3_1 | 2020-12-15 12:35:11,969|INFO|zstack.py|Node3C will bind its listener at 0.0.0.0:9706 node3_1 | 2020-12-15 12:35:12,021|INFO|node.py|Node3 first time running... node3_1 | 2020-12-15 12:35:12,022|INFO|node.py|Node3 processed 0 Ordered batches for instance 0 before starting catch up node3_1 | 2020-12-15 12:35:12,022|INFO|node.py|Node3 processed 0 Ordered batches for instance 1 before starting catch up node3_1 | 2020-12-15 12:35:12,022|INFO|ordering_service.py|Node3:0 reverted 0 batches before starting catch up node3_1 | 2020-12-15 12:35:12,023|INFO|node_leecher_service.py|Node3:NodeLeecherService starting catchup (is_initial=True) node3_1 | 2020-12-15 12:35:12,028|INFO|node_leecher_service.py|Node3:NodeLeecherService transitioning from Idle to PreSyncingPool node3_1 | 2020-12-15 12:35:12,029|INFO|cons_proof_service.py|Node3:ConsProofService:0 starts node3_1 | 2020-12-15 12:35:12,031|INFO|kit_zstack.py|CONNECTION: Node3 found the following missing connections: Node4, Node2, Node1 node3_1 | 2020-12-15 12:35:12,045|INFO|zstack.py|CONNECTION: Node3 looking for Node4 at 10.0.2.15:9707 node3_1 | 2020-12-15 12:35:12,075|INFO|zstack.py|CONNECTION: Node3 looking for Node2 at 10.0.2.15:9703 node3_1 | 2020-12-15 12:35:12,081|INFO|zstack.py|CONNECTION: Node3 looking for Node1 at 10.0.2.15:9701 node3_1 | 2020-12-15 12:35:12,198|INFO|seeder_service.py|Node3 received ledger status: LEDGER_STATUS{'ledgerId': 0, 'txnSeqNo': 4, 'viewNo': None, 'ppSeqNo': None, 'merkleRoot': 'uxUiVfSkoaFW1wi2fwd3XH1rRx7bWnA3KsM4GhFsu1Z', 'protocolVersion': 2} from Node4 node3_1 | 2020-12-15 12:35:12,202|INFO|cons_proof_service.py|Node3:ConsProofService:0 comparing its ledger 0 of size 4 with 4 node3_1 | 2020-12-15 12:35:12,202|INFO|cons_proof_service.py|Node3:ConsProofService:0 comparing its ledger 0 of size 4 with 4 node3_1 | 2020-12-15 12:35:12,204|NOTIFICATION|keep_in_touch.py|Node3's connections changed from set() to {'Node4'} node3_1 | 2020-12-15 12:35:12,204|NOTIFICATION|keep_in_touch.py|CONNECTION: Node3 now connected to Node4 node3_1 | 2020-12-15 12:35:12,205|INFO|node.py|Node3 joined nodes {'Node4'} but status is 2 node3_1 | 
2020-12-15 12:35:12,234|INFO|seeder_service.py|Node3 received ledger status: LEDGER_STATUS{'ledgerId': 0, 'txnSeqNo': 4, 'viewNo': None, 'ppSeqNo': None, 'merkleRoot': 'uxUiVfSkoaFW1wi2fwd3XH1rRx7bWnA3KsM4GhFsu1Z', 'protocolVersion': 2} from Node2 node3_1 | 2020-12-15 12:35:12,235|INFO|cons_proof_service.py|Node3:ConsProofService:0 comparing its ledger 0 of size 4 with 4 node3_1 | 2020-12-15 12:35:12,236|INFO|cons_proof_service.py|Node3:ConsProofService:0 comparing its ledger 0 of size 4 with 4 node3_1 | 2020-12-15 12:35:12,245|INFO|cons_proof_service.py|Node3:ConsProofService:0 finished with consistency proof None node3_1 | 2020-12-15 12:35:12,247|INFO|catchup_rep_service.py|Node3:CatchupRepService:0 started catching up with LedgerCatchupStart(ledger_id=0, catchup_till=None, nodes_ledger_sizes={}) node3_1 | 2020-12-15 12:35:12,248|INFO|catchup_rep_service.py|CATCH-UP: Node3:CatchupRepService:0 completed catching up ledger 0, caught up 0 in total node3_1 | 2020-12-15 12:35:12,249|INFO|node_leecher_service.py|Node3:NodeLeecherService transitioning from PreSyncingPool to SyncingAudit node3_1 | 2020-12-15 12:35:12,250|INFO|cons_proof_service.py|Node3:ConsProofService:3 starts node3_1 | 2020-12-15 12:35:12,251|INFO|cons_proof_service.py|Node3:ConsProofService:3 asking for ledger status of ledger 3 node3_1 | 2020-12-15 12:35:12,260|NOTIFICATION|keep_in_touch.py|Node3's connections changed from {'Node4'} to {'Node4', 'Node2'} node3_1 | 2020-12-15 12:35:12,267|NOTIFICATION|keep_in_touch.py|CONNECTION: Node3 now connected to Node2 node3_1 | 2020-12-15 12:35:12,267|INFO|motor.py|Node3 changing status from starting to started_hungry node3_1 | 2020-12-15 12:35:12,271|INFO|primary_connection_monitor_service.py|Node3:0 scheduling primary connection check in 60 sec node3_1 | 2020-12-15 12:35:12,274|INFO|primary_connection_monitor_service.py|Node3:1 scheduling primary connection check in 60 sec node3_1 | 2020-12-15 12:35:12,288|INFO|seeder_service.py|Node3 received ledger status: LEDGER_STATUS{'ledgerId': 3, 'txnSeqNo': 0, 'viewNo': None, 'ppSeqNo': None, 'merkleRoot': 'GKot5hBsd81kMupNCXHaqbhv3huEbxAFMLnpcX2hniwn', 'protocolVersion': 2} from Node2 node3_1 | 2020-12-15 12:35:12,289|INFO|cons_proof_service.py|Node3:ConsProofService:3 comparing its ledger 3 of size 0 with 0 node3_1 | 2020-12-15 12:35:12,290|INFO|cons_proof_service.py|Node3:ConsProofService:3 comparing its ledger 3 of size 0 with 0 node3_1 | 2020-12-15 12:35:12,302|INFO|seeder_service.py|Node3 received ledger status: LEDGER_STATUS{'ledgerId': 3, 'txnSeqNo': 0, 'viewNo': None, 'ppSeqNo': None, 'merkleRoot': 'GKot5hBsd81kMupNCXHaqbhv3huEbxAFMLnpcX2hniwn', 'protocolVersion': 2} from Node4 node3_1 | 2020-12-15 12:35:12,305|INFO|cons_proof_service.py|Node3:ConsProofService:3 comparing its ledger 3 of size 0 with 0 node3_1 | 2020-12-15 12:35:12,306|INFO|cons_proof_service.py|Node3:ConsProofService:3 comparing its ledger 3 of size 0 with 0 node3_1 | 2020-12-15 12:35:12,307|INFO|cons_proof_service.py|Node3:ConsProofService:3 finished with consistency proof None node3_1 | 2020-12-15 12:35:12,307|INFO|catchup_rep_service.py|Node3:CatchupRepService:3 started catching up with LedgerCatchupStart(ledger_id=3, catchup_till=None, nodes_ledger_sizes={}) node3_1 | 2020-12-15 12:35:12,309|INFO|catchup_rep_service.py|CATCH-UP: Node3:CatchupRepService:3 completed catching up ledger 3, caught up 0 in total node3_1 | 2020-12-15 12:35:12,310|INFO|node_leecher_service.py|Node3:NodeLeecherService transitioning from SyncingAudit to SyncingPool node3_1 | 2020-12-15 
12:35:12,311|INFO|cons_proof_service.py|Node3:ConsProofService:0 starts node3_1 | 2020-12-15 12:35:12,312|INFO|cons_proof_service.py|Node3:ConsProofService:0 asking for ledger status of ledger 0 node3_1 | 2020-12-15 12:35:12,352|INFO|seeder_service.py|Node3 received ledger status: LEDGER_STATUS{'ledgerId': 0, 'txnSeqNo': 4, 'viewNo': None, 'ppSeqNo': None, 'merkleRoot': 'uxUiVfSkoaFW1wi2fwd3XH1rRx7bWnA3KsM4GhFsu1Z', 'protocolVersion': 2} from Node2 node3_1 | 2020-12-15 12:35:12,352|INFO|cons_proof_service.py|Node3:ConsProofService:0 comparing its ledger 0 of size 4 with 4 node3_1 | 2020-12-15 12:35:12,352|INFO|cons_proof_service.py|Node3:ConsProofService:0 comparing its ledger 0 of size 4 with 4 node3_1 | 2020-12-15 12:35:12,353|INFO|seeder_service.py|Node3 received ledger status: LEDGER_STATUS{'ledgerId': 0, 'txnSeqNo': 4, 'viewNo': None, 'ppSeqNo': None, 'merkleRoot': 'uxUiVfSkoaFW1wi2fwd3XH1rRx7bWnA3KsM4GhFsu1Z', 'protocolVersion': 2} from Node4 node3_1 | 2020-12-15 12:35:12,354|INFO|cons_proof_service.py|Node3:ConsProofService:0 comparing its ledger 0 of size 4 with 4 node3_1 | 2020-12-15 12:35:12,354|INFO|cons_proof_service.py|Node3:ConsProofService:0 comparing its ledger 0 of size 4 with 4 node3_1 | 2020-12-15 12:35:12,355|INFO|cons_proof_service.py|Node3:ConsProofService:0 finished with consistency proof None node3_1 | 2020-12-15 12:35:12,355|INFO|catchup_rep_service.py|Node3:CatchupRepService:0 started catching up with LedgerCatchupStart(ledger_id=0, catchup_till=None, nodes_ledger_sizes={}) node3_1 | 2020-12-15 12:35:12,356|INFO|catchup_rep_service.py|CATCH-UP: Node3:CatchupRepService:0 completed catching up ledger 0, caught up 0 in total node3_1 | 2020-12-15 12:35:12,357|INFO|node_leecher_service.py|Node3:NodeLeecherService transitioning from SyncingPool to SyncingConfig node3_1 | 2020-12-15 12:35:12,357|INFO|cons_proof_service.py|Node3:ConsProofService:2 starts node3_1 | 2020-12-15 12:35:12,358|INFO|cons_proof_service.py|Node3:ConsProofService:2 asking for ledger status of ledger 2 node2_1 | 2020-12-15 12:35:12,439|INFO|cons_proof_service.py|Node2:ConsProofService:2 comparing its ledger 2 of size 0 with 0 node2_1 | 2020-12-15 12:35:12,440|INFO|cons_proof_service.py|Node2:ConsProofService:2 comparing its ledger 2 of size 0 with 0 node2_1 | 2020-12-15 12:35:12,440|INFO|seeder_service.py|Node2 received ledger status: LEDGER_STATUS{'ledgerId': 2, 'txnSeqNo': 0, 'viewNo': None, 'ppSeqNo': None, 'merkleRoot': 'GKot5hBsd81kMupNCXHaqbhv3huEbxAFMLnpcX2hniwn', 'protocolVersion': 2} from Node3 node2_1 | 2020-12-15 12:35:12,441|INFO|cons_proof_service.py|Node2:ConsProofService:2 comparing its ledger 2 of size 0 with 0 node2_1 | 2020-12-15 12:35:12,442|INFO|cons_proof_service.py|Node2:ConsProofService:2 comparing its ledger 2 of size 0 with 0 node2_1 | 2020-12-15 12:35:12,444|INFO|cons_proof_service.py|Node2:ConsProofService:2 finished with consistency proof None node2_1 | 2020-12-15 12:35:12,445|INFO|catchup_rep_service.py|Node2:CatchupRepService:2 started catching up with LedgerCatchupStart(ledger_id=2, catchup_till=None, nodes_ledger_sizes={}) node2_1 | 2020-12-15 12:35:12,446|INFO|catchup_rep_service.py|CATCH-UP: Node2:CatchupRepService:2 completed catching up ledger 2, caught up 0 in total node2_1 | 2020-12-15 12:35:12,448|INFO|node_leecher_service.py|Node2:NodeLeecherService transitioning from SyncingConfig to SyncingOthers node2_1 | 2020-12-15 12:35:12,449|INFO|cons_proof_service.py|Node2:ConsProofService:1 starts node2_1 | 2020-12-15 
12:35:12,450|INFO|cons_proof_service.py|Node2:ConsProofService:1 asking for ledger status of ledger 1 node2_1 | 2020-12-15 12:35:12,491|INFO|seeder_service.py|Node2 received ledger status: LEDGER_STATUS{'ledgerId': 1, 'txnSeqNo': 5, 'viewNo': None, 'ppSeqNo': None, 'merkleRoot': 'DJLzEifT7n9DbiCHqQ5KWrWUPFBiZt349popapDfzo5p', 'protocolVersion': 2} from Node4 node2_1 | 2020-12-15 12:35:12,492|INFO|cons_proof_service.py|Node2:ConsProofService:1 comparing its ledger 1 of size 5 with 5 node2_1 | 2020-12-15 12:35:12,492|INFO|cons_proof_service.py|Node2:ConsProofService:1 comparing its ledger 1 of size 5 with 5 node2_1 | 2020-12-15 12:35:12,493|INFO|seeder_service.py|Node2 received ledger status: LEDGER_STATUS{'ledgerId': 1, 'txnSeqNo': 5, 'viewNo': None, 'ppSeqNo': None, 'merkleRoot': 'DJLzEifT7n9DbiCHqQ5KWrWUPFBiZt349popapDfzo5p', 'protocolVersion': 2} from Node3 node2_1 | 2020-12-15 12:35:12,493|INFO|cons_proof_service.py|Node2:ConsProofService:1 comparing its ledger 1 of size 5 with 5 node2_1 | 2020-12-15 12:35:12,494|INFO|cons_proof_service.py|Node2:ConsProofService:1 comparing its ledger 1 of size 5 with 5 node2_1 | 2020-12-15 12:35:12,495|INFO|cons_proof_service.py|Node2:ConsProofService:1 finished with consistency proof None node2_1 | 2020-12-15 12:35:12,495|INFO|catchup_rep_service.py|Node2:CatchupRepService:1 started catching up with LedgerCatchupStart(ledger_id=1, catchup_till=None, nodes_ledger_sizes={}) node2_1 | 2020-12-15 12:35:12,499|INFO|catchup_rep_service.py|CATCH-UP: Node2:CatchupRepService:1 completed catching up ledger 1, caught up 0 in total node2_1 | 2020-12-15 12:35:12,499|INFO|node_leecher_service.py|Node2:NodeLeecherService transitioning from SyncingOthers to Idle node2_1 | 2020-12-15 12:35:12,502|INFO|node.py|Node2 caught up to 0 txns in the last catchup node2_1 | 2020-12-15 12:35:12,503|INFO|node_reg_handler.py|Loaded current node registry from the ledger: ['Node1', 'Node2', 'Node3', 'Node4'] node2_1 | 2020-12-15 12:35:12,505|INFO|node_reg_handler.py|Current committed node registry for previous views: [(0, ['Node1', 'Node2', 'Node3', 'Node4'])] node2_1 | 2020-12-15 12:35:12,505|INFO|node_reg_handler.py|Current uncommitted node registry for previous views: [(0, ['Node1', 'Node2', 'Node3', 'Node4'])] node2_1 | 2020-12-15 12:35:12,505|INFO|node_reg_handler.py|Current active node registry: ['Node1', 'Node2', 'Node3', 'Node4'] node2_1 | 2020-12-15 12:35:12,505|INFO|node.py|CATCH-UP: Node2 caught up till (0, 0) node2_1 | 2020-12-15 12:35:12,506|INFO|node.py|CATCH-UP: Node2 does not need any more catchups node2_1 | 2020-12-15 12:35:12,506|INFO|replica.py|Node2:0 setting primaryName for view no 0 to: None node2_1 | 2020-12-15 12:35:12,506|INFO|replica.py|Node2:1 setting primaryName for view no 0 to: None node2_1 | 2020-12-15 12:35:12,506|INFO|node.py|Node2 updated its pool parameters: f 1, totalNodes 4, allNodeNames {'Node4', 'Node2', 'Node3', 'Node1'}, requiredNumberOfInstances 2, minimumNodes 3, quorums {'n': 4, 'f': 1, 'weak': Quorum(2), 'strong': Quorum(3), 'propagate': Quorum(2), 'prepare': Quorum(2), 'commit': Quorum(3), 'reply': Quorum(2), 'view_change': Quorum(3), 'election': Quorum(3), 'view_change_ack': Quorum(2), 'view_change_done': Quorum(3), 'same_consistency_proof': Quorum(2), 'consistency_proof': Quorum(2), 'ledger_status': Quorum(2), 'ledger_status_last_3PC': Quorum(2), 'checkpoint': Quorum(2), 'timestamp': Quorum(2), 'bls_signatures': Quorum(3), 'observer_data': Quorum(2), 'backup_instance_faulty': Quorum(2)} node2_1 | 2020-12-15 
12:35:12,506|INFO|consensus_shared_data.py|Node2:0 updated validators list to {'Node4', 'Node2', 'Node3', 'Node1'} node2_1 | 2020-12-15 12:35:12,507|INFO|consensus_shared_data.py|Node2:1 updated validators list to {'Node4', 'Node2', 'Node3', 'Node1'} node2_1 | 2020-12-15 12:35:12,507|INFO|replica.py|Node2:0 setting primaryName for view no 0 to: Node1:0 node2_1 | 2020-12-15 12:35:12,508|NOTIFICATION|primary_connection_monitor_service.py|Node2:0 lost connection to primary of master node2_1 | 2020-12-15 12:35:12,509|INFO|primary_connection_monitor_service.py|Node2:0 scheduling primary connection check in 60 sec node2_1 | 2020-12-15 12:35:12,509|NOTIFICATION|node.py|PRIMARY SELECTION: selected primary Node1:0 for instance 0 (view 0) node2_1 | 2020-12-15 12:35:12,509|INFO|replica.py|Node2:1 setting primaryName for view no 0 to: Node2:1 node2_1 | 2020-12-15 12:35:12,510|NOTIFICATION|node.py|PRIMARY SELECTION: selected primary Node2:1 for instance 1 (view 0) node2_1 | 2020-12-15 12:35:12,510|INFO|node.py|Node2 started participating node2_1 | 2020-12-15 12:35:12,510|INFO|checkpoint_service.py|update_watermark_from_3pc to (0, 0) node2_1 | 2020-12-15 12:35:12,510|INFO|checkpoint_service.py|Node2:0 - checkpoint_service set watermarks as 0 300 node2_1 | 2020-12-15 12:35:12,511|INFO|last_sent_pp_store_helper.py|Node2 erasing stored lastSentPrePrepare node2_1 | 2020-12-15 12:35:12,512|INFO|zstack.py|Processing messages from previously unknown remotes. node2_1 | 2020-12-15 12:35:12,724|NOTIFICATION|keep_in_touch.py|Node2's connections changed from {'Node4', 'Node3'} to {'Node4', 'Node3', 'Node1'} node2_1 | 2020-12-15 12:35:12,730|NOTIFICATION|keep_in_touch.py|CONNECTION: Node2 now connected to Node1 node2_1 | 2020-12-15 12:35:12,730|INFO|motor.py|Node2 changing status from started_hungry to started node2_1 | 2020-12-15 12:35:12,732|NOTIFICATION|primary_connection_monitor_service.py|Node2:0 restored connection to primary of master node2_1 | 2020-12-15 12:35:12,897|INFO|seeder_service.py|Node2 received ledger status: LEDGER_STATUS{'ledgerId': 3, 'txnSeqNo': 0, 'viewNo': None, 'ppSeqNo': None, 'merkleRoot': 'GKot5hBsd81kMupNCXHaqbhv3huEbxAFMLnpcX2hniwn', 'protocolVersion': 2} from Node1 node2_1 | 2020-12-15 12:35:12,898|INFO|cons_proof_service.py|Node2:ConsProofService:3 ignoring LEDGER_STATUS{'ledgerId': 3, 'txnSeqNo': 0, 'viewNo': None, 'ppSeqNo': None, 'merkleRoot': 'GKot5hBsd81kMupNCXHaqbhv3huEbxAFMLnpcX2hniwn', 'protocolVersion': 2} since it is not gathering ledger statuses node2_1 | 2020-12-15 12:35:12,898|INFO|seeder_service.py|Node2 received ledger status: LEDGER_STATUS{'ledgerId': 0, 'txnSeqNo': 4, 'viewNo': None, 'ppSeqNo': None, 'merkleRoot': 'uxUiVfSkoaFW1wi2fwd3XH1rRx7bWnA3KsM4GhFsu1Z', 'protocolVersion': 2} from Node1 webserver_1 | 2020-12-15 12:34:02,708|DEBUG|pool.py|set_protocol_version: >>> protocol_version: 2 webserver_1 | 2020-12-15 12:34:02,721|DEBUG|pool.py|set_protocol_version: Creating callback webserver_1 | 2020-12-15 12:34:02,817|DEBUG|pool.py|set_protocol_version: <<< res: None webserver_1 | 2020-12-15 12:34:02,818|DEBUG|pool.py|list_pools: >>> webserver_1 | 2020-12-15 12:34:02,818|DEBUG|pool.py|list_pools: Creating callback webserver_1 | 2020-12-15 12:34:02,824|DEBUG|pool.py|list_pools: <<< res: [] webserver_1 | 2020-12-15 12:34:02,860|INFO|anchor.py|Genesis file already exists: /home/indy/ledger/sandbox/pool_transactions_genesis webserver_1 | 2020-12-15 12:34:02,868|DEBUG|pool.py|create_pool_ledger_config: >>> config_name: 'nodepool', config: '{"genesis_txn": 
"/home/indy/ledger/sandbox/pool_transactions_genesis"}' webserver_1 | 2020-12-15 12:34:02,868|DEBUG|pool.py|create_pool_ledger_config: Creating callback webserver_1 | 2020-12-15 12:34:02,886|DEBUG|pool.py|create_pool_ledger_config: <<< res: None webserver_1 | 2020-12-15 12:34:02,906|DEBUG|pool.py|open_pool_ledger: >>> config_name: 'nodepool', config: '{}' webserver_1 | 2020-12-15 12:34:02,906|DEBUG|pool.py|open_pool_ledger: Creating callback webserver_1 | 2020-12-15 12:34:09,359|INFO|web_log.py|172.19.0.1 [15/Dec/2020:12:34:09 +0000] "GET / HTTP/1.1" 200 10386 "-" "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:79.0) Gecko/20100101 Firefox/79.0" webserver_1 | 2020-12-15 12:34:09,646|INFO|web_log.py|172.19.0.1 [15/Dec/2020:12:34:09 +0000] "GET /include/css/index.css HTTP/1.1" 304 140 "http://localhost:9000/" "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:79.0) Gecko/20100101 Firefox/79.0" webserver_1 | 2020-12-15 12:34:09,688|INFO|web_log.py|172.19.0.1 [15/Dec/2020:12:34:09 +0000] "GET /include/js/common.js HTTP/1.1" 304 140 "http://localhost:9000/" "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:79.0) Gecko/20100101 Firefox/79.0" webserver_1 | 2020-12-15 12:34:09,718|INFO|web_log.py|172.19.0.1 [15/Dec/2020:12:34:09 +0000] "GET /include/js/index.js HTTP/1.1" 304 140 "http://localhost:9000/" "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:79.0) Gecko/20100101 Firefox/79.0" webserver_1 | 2020-12-15 12:34:10,321|INFO|web_log.py|172.19.0.1 [15/Dec/2020:12:34:10 +0000] "GET /status?validators= HTTP/1.1" 200 303 "http://localhost:9000/" "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:79.0) Gecko/20100101 Firefox/79.0" webserver_1 | 2020-12-15 12:34:20,489|INFO|web_log.py|172.19.0.1 [15/Dec/2020:12:34:20 +0000] "GET /status?validators= HTTP/1.1" 200 303 "http://localhost:9000/" "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:79.0) Gecko/20100101 Firefox/79.0" webserver_1 | 2020-12-15 12:34:30,507|INFO|web_log.py|172.19.0.1 [15/Dec/2020:12:34:30 +0000] "GET /status?validators= HTTP/1.1" 200 303 "http://localhost:9000/" "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:79.0) Gecko/20100101 Firefox/79.0" webserver_1 | 2020-12-15 12:34:40,520|INFO|web_log.py|172.19.0.1 [15/Dec/2020:12:34:40 +0000] "GET /status?validators= HTTP/1.1" 200 303 "http://localhost:9000/" "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:79.0) Gecko/20100101 Firefox/79.0" webserver_1 | 2020-12-15 12:34:50,544|INFO|web_log.py|172.19.0.1 [15/Dec/2020:12:34:50 +0000] "GET /status?validators= HTTP/1.1" 200 303 "http://localhost:9000/" "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:79.0) Gecko/20100101 Firefox/79.0" webserver_1 | 2020-12-15 12:35:00,557|INFO|web_log.py|172.19.0.1 [15/Dec/2020:12:35:00 +0000] "GET /status?validators= HTTP/1.1" 200 303 "http://localhost:9000/" "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:79.0) Gecko/20100101 Firefox/79.0" webserver_1 | 2020-12-15 12:35:03,076|WARNING|libindy.py|_indy_loop_callback: Function returned error webserver_1 | 2020-12-15 12:35:03,078|ERROR|anchor.py|Initialization error: webserver_1 | Traceback (most recent call last): webserver_1 | File "/home/indy/server/anchor.py", line 221, in _open_pool webserver_1 | self._pool = await pool.open_pool_ledger(pool_name, json.dumps(pool_cfg)) webserver_1 | File "/home/indy/.pyenv/versions/3.6.9/lib/python3.6/site-packages/indy/pool.py", line 88, in open_pool_ledger webserver_1 | open_pool_ledger.cb) webserver_1 | indy.error.PoolLedgerTimeout webserver_1 | webserver_1 | The above exception was the direct cause of the following exception: webserver_1 | webserver_1 | Traceback 
(most recent call last): webserver_1 | File "/home/indy/server/anchor.py", line 317, in open webserver_1 | await self._open_pool() webserver_1 | File "/home/indy/server/anchor.py", line 223, in _open_pool webserver_1 | raise AnchorException("Error opening pool ledger connection") from e webserver_1 | server.anchor.AnchorException: Error opening pool ledger connection webserver_1 | 2020-12-15 12:35:03,080|INFO|server.py|--- Trust anchor initialized --- webserver_1 | 2020-12-15 12:35:10,566|INFO|web_log.py|172.19.0.1 [15/Dec/2020:12:35:10 +0000] "GET /status?validators= HTTP/1.1" 200 331 "http://localhost:9000/" "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:79.0) Gecko/20100101 Firefox/79.0" webserver_1 | 2020-12-15 12:35:20,579|INFO|web_log.py|172.19.0.1 [15/Dec/2020:12:35:20 +0000] "GET /status?validators= HTTP/1.1" 200 331 "http://localhost:9000/" "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:79.0) Gecko/20100101 Firefox/79.0" webserver_1 | 2020-12-15 12:35:30,590|INFO|web_log.py|172.19.0.1 [15/Dec/2020:12:35:30 +0000] "GET /status?validators= HTTP/1.1" 200 331 "http://localhost:9000/" "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:79.0) Gecko/20100101 Firefox/79.0" webserver_1 | 2020-12-15 12:35:40,598|INFO|web_log.py|172.19.0.1 [15/Dec/2020:12:35:40 +0000] "GET /status?validators= HTTP/1.1" 200 331 "http://localhost:9000/" "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:79.0) Gecko/20100101 Firefox/79.0" webserver_1 | 2020-12-15 12:35:50,622|INFO|web_log.py|172.19.0.1 [15/Dec/2020:12:35:50 +0000] "GET /status?validators= HTTP/1.1" 200 331 "http://localhost:9000/" "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:79.0) Gecko/20100101 Firefox/79.0" webserver_1 | 2020-12-15 12:36:00,640|INFO|web_log.py|172.19.0.1 [15/Dec/2020:12:36:00 +0000] "GET /status?validators= HTTP/1.1" 200 331 "http://localhost:9000/" "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:79.0) Gecko/20100101 Firefox/79.0" webserver_1 | 2020-12-15 12:36:10,649|INFO|web_log.py|172.19.0.1 [15/Dec/2020:12:36:10 +0000] "GET /status?validators= HTTP/1.1" 200 331 "http://localhost:9000/" "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:79.0) Gecko/20100101 Firefox/79.0" webserver_1 | 2020-12-15 12:36:20,659|INFO|web_log.py|172.19.0.1 [15/Dec/2020:12:36:20 +0000] "GET /status?validators= HTTP/1.1" 200 331 "http://localhost:9000/" "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:79.0) Gecko/20100101 Firefox/79.0" webserver_1 | 2020-12-15 12:36:30,691|INFO|web_log.py|172.19.0.1 [15/Dec/2020:12:36:30 +0000] "GET /status?validators= HTTP/1.1" 200 331 "http://localhost:9000/" "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:79.0) Gecko/20100101 Firefox/79.0" webserver_1 | 2020-12-15 12:36:40,704|INFO|web_log.py|172.19.0.1 [15/Dec/2020:12:36:40 +0000] "GET /status?validators= HTTP/1.1" 200 331 "http://localhost:9000/" "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:79.0) Gecko/20100101 Firefox/79.0" webserver_1 | 2020-12-15 12:36:50,715|INFO|web_log.py|172.19.0.1 [15/Dec/2020:12:36:50 +0000] "GET /status?validators= HTTP/1.1" 200 331 "http://localhost:9000/" "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:79.0) Gecko/20100101 Firefox/79.0" webserver_1 | 2020-12-15 12:37:00,725|INFO|web_log.py|172.19.0.1 [15/Dec/2020:12:37:00 +0000] "GET /status?validators= HTTP/1.1" 200 331 "http://localhost:9000/" "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:79.0) Gecko/20100101 Firefox/79.0" webserver_1 | 2020-12-15 12:37:10,735|INFO|web_log.py|172.19.0.1 [15/Dec/2020:12:37:10 +0000] "GET /status?validators= HTTP/1.1" 200 331 "http://localhost:9000/" "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; 
webserver_1 | 2020-12-15 12:37:20,763|INFO|web_log.py|172.19.0.1 [15/Dec/2020:12:37:20 +0000] "GET /status?validators= HTTP/1.1" 200 331 "http://localhost:9000/" "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:79.0) Gecko/20100101 Firefox/79.0"
node1_1 | 2020-12-15 12:35:13,037|NOTIFICATION|node.py|PRIMARY SELECTION: selected primary Node1:0 for instance 0 (view 0)
node1_1 | 2020-12-15 12:35:13,038|INFO|replica.py|Node1:1 setting primaryName for view no 0 to: Node2:1
node1_1 | 2020-12-15 12:35:13,038|NOTIFICATION|node.py|PRIMARY SELECTION: selected primary Node2:1 for instance 1 (view 0)
node1_1 | 2020-12-15 12:35:13,038|INFO|node.py|Node1 started participating
node1_1 | 2020-12-15 12:35:13,038|INFO|checkpoint_service.py|update_watermark_from_3pc to (0, 0)
node1_1 | 2020-12-15 12:35:13,039|INFO|checkpoint_service.py|Node1:0 - checkpoint_service set watermarks as 0 300
node1_1 | 2020-12-15 12:35:13,039|INFO|checkpoint_service.py|Node1:1 - checkpoint_service set watermarks as 0 9223372036854775807
node1_1 | 2020-12-15 12:35:13,040|INFO|last_sent_pp_store_helper.py|Node1 erasing stored lastSentPrePrepare
node1_1 | 2020-12-15 12:35:13,041|INFO|zstack.py|Processing messages from previously unknown remotes.
node1_1 | 2020-12-15 12:35:13,048|INFO|seeder_service.py|Node1 received ledger status: LEDGER_STATUS{'ledgerId': 1, 'txnSeqNo': 5, 'viewNo': None, 'ppSeqNo': None, 'merkleRoot': 'DJLzEifT7n9DbiCHqQ5KWrWUPFBiZt349popapDfzo5p', 'protocolVersion': 2} from Node2
node1_1 | 2020-12-15 12:35:13,049|INFO|cons_proof_service.py|Node1:ConsProofService:1 ignoring LEDGER_STATUS{'ledgerId': 1, 'txnSeqNo': 5, 'viewNo': None, 'ppSeqNo': None, 'merkleRoot': 'DJLzEifT7n9DbiCHqQ5KWrWUPFBiZt349popapDfzo5p', 'protocolVersion': 2} since it is not gathering ledger statuses
node1_1 | 2020-12-15 12:40:13,008|INFO|ordering_service.py|Ledger 0 is not updated for 301 seconds, so its freshness state is going to be updated now
node1_1 | 2020-12-15 12:40:13,023|INFO|ordering_service.py|Ledger 1 is not updated for 301 seconds, so its freshness state is going to be updated now
node1_1 | 2020-12-15 12:40:13,046|INFO|ordering_service.py|Ledger 2 is not updated for 301 seconds, so its freshness state is going to be updated now
node1_1 | 2020-12-15 12:40:13,585|INFO|ordering_service.py|Node1:0 set last ordered as (0, 1)
node1_1 | 2020-12-15 12:40:13,626|NOTIFICATION|looper.py|Looper shutting down now...
node1_1 | 2020-12-15 12:40:13,650|INFO|motor.py|Node1 changing status from started to stopping
node1_1 | 2020-12-15 12:40:13,653|INFO|node.py|node Node1 current stats
node1_1 | --------------------------------------------------------
node1_1 | node inbox size : 0
node1_1 | client inbox size : 0
node1_1 | age (seconds) : 303.27827429771423
node1_1 | next check for reconnect: -0.22468537800000377
node1_1 | node connections : {'Node2', 'Node3', 'Node4'}
node1_1 | f : 1
node1_1 | master instance : 0
node1_1 | replicas : 2
node1_1 | view no : 0
node1_1 | rank : 0
node1_1 | msgs to replicas : 7
node1_1 | action queue : 0 139841397266224
node1_1 | action queue stash : 5 139841397266328
node1_1 | 2020-12-15 12:40:13,658|INFO|node.py|Node1 reseting...
node1_1 | 2020-12-15 12:40:13,708|NOTIFICATION|zstack.py|stack Node1 closing its listener
node1_1 | 2020-12-15 12:40:13,710|NOTIFICATION|zstack.py|stack Node1 stopped
node1_1 | 2020-12-15 12:40:13,710|NOTIFICATION|zstack.py|stack Node1C closing its listener
node1_1 | 2020-12-15 12:40:13,710|NOTIFICATION|zstack.py|stack Node1C stopped
node1_1 | 2020-12-15 12:40:13,711|INFO|node.py|Node1 closing key-value storages
node1_1 | 2020-12-15 12:40:13,713|INFO|motor.py|Node1 changing status from stopping to stopped
node1_1 | 2020-12-15 12:40:13,714|NOTIFICATION|looper.py|Looper shut down in 0.064 seconds.
node1_1 | Traceback (most recent call last):
node1_1 | File "/home/indy/.pyenv/versions/3.6.9/bin/start_indy_node", line 19, in <module>
node1_1 | client_ip=sys.argv[4], client_port=int(sys.argv[5]))
node1_1 | File "/home/indy/.pyenv/versions/3.6.9/lib/python3.6/site-packages/indy_node/utils/node_runner.py", line 53, in run_node
node1_1 | looper.run()
node1_1 | File "/home/indy/.pyenv/versions/3.6.9/lib/python3.6/site-packages/stp_core/loop/looper.py", line 263, in run
node1_1 | return self.loop.run_until_complete(what)
node1_1 | File "/home/indy/.pyenv/versions/3.6.9/lib/python3.6/asyncio/base_events.py", line 484, in run_until_complete
node1_1 | return future.result()
node1_1 | File "/home/indy/.pyenv/versions/3.6.9/lib/python3.6/site-packages/stp_core/loop/looper.py", line 227, in runForever
node1_1 | await self.runOnceNicely()
node1_1 | File "/home/indy/.pyenv/versions/3.6.9/lib/python3.6/site-packages/stp_core/loop/looper.py", line 210, in runOnceNicely
node1_1 | msgsProcessed = await self.prodAllOnce()
node1_1 | File "/home/indy/.pyenv/versions/3.6.9/lib/python3.6/site-packages/stp_core/loop/looper.py", line 152, in prodAllOnce
node1_1 | s += await n.prod(limit)
node1_1 | File "/home/indy/.pyenv/versions/3.6.9/lib/python3.6/site-packages/indy_node/server/node.py", line 232, in prod
node1_1 | c = await super().prod(limit)
node1_1 | File "/home/indy/.pyenv/versions/3.6.9/lib/python3.6/site-packages/plenum/common/metrics_collector.py", line 373, in wrapper
node1_1 | return await f(self, *args, **kwargs)
node1_1 | File "/home/indy/.pyenv/versions/3.6.9/lib/python3.6/site-packages/plenum/server/node.py", line 1056, in prod
node1_1 | c += await self.serviceReplicas(limit)
node1_1 | File "/home/indy/.pyenv/versions/3.6.9/lib/python3.6/site-packages/plenum/common/metrics_collector.py", line 373, in wrapper
node1_1 | return await f(self, *args, **kwargs)
node1_1 | File "/home/indy/.pyenv/versions/3.6.9/lib/python3.6/site-packages/plenum/server/node.py", line 1087, in serviceReplicas
node1_1 | return self._process_replica_messages(limit)
node1_1 | File "/home/indy/.pyenv/versions/3.6.9/lib/python3.6/site-packages/plenum/server/node.py", line 1090, in _process_replica_messages
node1_1 | inbox_processed = self.replicas.service_inboxes(limit)
node1_1 | File "/home/indy/.pyenv/versions/3.6.9/lib/python3.6/site-packages/plenum/server/replicas.py", line 102, in service_inboxes
node1_1 | sum(replica.serviceQueues(limit) for replica in self._replicas.values())
node1_1 | File "/home/indy/.pyenv/versions/3.6.9/lib/python3.6/site-packages/plenum/server/replicas.py", line 102, in <genexpr>
node1_1 | sum(replica.serviceQueues(limit) for replica in self._replicas.values())
node1_1 | File "/home/indy/.pyenv/versions/3.6.9/lib/python3.6/site-packages/plenum/server/replica.py", line 74, in wrapper
node1_1 | return f(self, *args, **kwargs)
node1_1 | File "/home/indy/.pyenv/versions/3.6.9/lib/python3.6/site-packages/plenum/server/replica.py", line 527, in serviceQueues
node1_1 | r += self._handle_external_messages(self.inBox, limit)
node1_1 | File "/home/indy/.pyenv/versions/3.6.9/lib/python3.6/site-packages/plenum/server/replica.py", line 549, in _handle_external_messages
node1_1 | self._external_bus.process_incoming(external_msg, sender)
node1_1 | File "/home/indy/.pyenv/versions/3.6.9/lib/python3.6/site-packages/plenum/common/event_bus.py", line 33, in process_incoming
node1_1 | self._route(message, frm)
node1_1 | File "/home/indy/.pyenv/versions/3.6.9/lib/python3.6/site-packages/plenum/common/router.py", line 49, in _route
node1_1 | return [handler(message, *args) for handler in handlers]
node1_1 | File "/home/indy/.pyenv/versions/3.6.9/lib/python3.6/site-packages/plenum/common/router.py", line 49, in <listcomp>
node1_1 | return [handler(message, *args) for handler in handlers]
node1_1 | File "/home/indy/.pyenv/versions/3.6.9/lib/python3.6/site-packages/plenum/common/stashing_router.py", line 172, in _process
node1_1 | result = handler(message, *args)
node4_1 | 2020-12-15 12:35:11,540|INFO|node.py|Node4 first time running...
node4_1 | 2020-12-15 12:35:11,540|INFO|node.py|Node4 processed 0 Ordered batches for instance 0 before starting catch up
node4_1 | 2020-12-15 12:35:11,541|INFO|node.py|Node4 processed 0 Ordered batches for instance 1 before starting catch up
node4_1 | 2020-12-15 12:35:11,541|INFO|ordering_service.py|Node4:0 reverted 0 batches before starting catch up
node4_1 | 2020-12-15 12:35:11,542|INFO|node_leecher_service.py|Node4:NodeLeecherService starting catchup (is_initial=True)
node4_1 | 2020-12-15 12:35:11,542|INFO|node_leecher_service.py|Node4:NodeLeecherService transitioning from Idle to PreSyncingPool
node4_1 | 2020-12-15 12:35:11,543|INFO|cons_proof_service.py|Node4:ConsProofService:0 starts
node4_1 | 2020-12-15 12:35:11,550|INFO|kit_zstack.py|CONNECTION: Node4 found the following missing connections: Node3, Node2, Node1
node4_1 | 2020-12-15 12:35:11,552|INFO|zstack.py|CONNECTION: Node4 looking for Node3 at 10.0.2.15:9705
node4_1 | 2020-12-15 12:35:11,567|INFO|zstack.py|CONNECTION: Node4 looking for Node2 at 10.0.2.15:9703
node4_1 | 2020-12-15 12:35:11,568|INFO|zstack.py|CONNECTION: Node4 looking for Node1 at 10.0.2.15:9701
node4_1 | 2020-12-15 12:35:11,688|INFO|seeder_service.py|Node4 received ledger status: LEDGER_STATUS{'ledgerId': 0, 'txnSeqNo': 4, 'viewNo': None, 'ppSeqNo': None, 'merkleRoot': 'uxUiVfSkoaFW1wi2fwd3XH1rRx7bWnA3KsM4GhFsu1Z', 'protocolVersion': 2} from Node2
node4_1 | 2020-12-15 12:35:11,689|INFO|cons_proof_service.py|Node4:ConsProofService:0 comparing its ledger 0 of size 4 with 4
node4_1 | 2020-12-15 12:35:11,690|INFO|cons_proof_service.py|Node4:ConsProofService:0 comparing its ledger 0 of size 4 with 4
node4_1 | 2020-12-15 12:35:11,690|NOTIFICATION|keep_in_touch.py|Node4's connections changed from set() to {'Node2'}
node4_1 | 2020-12-15 12:35:11,691|NOTIFICATION|keep_in_touch.py|CONNECTION: Node4 now connected to Node2
node4_1 | 2020-12-15 12:35:11,691|INFO|node.py|Node4 joined nodes {'Node2'} but status is 2
node4_1 | 2020-12-15 12:35:12,095|NOTIFICATION|keep_in_touch.py|Node4's connections changed from {'Node2'} to {'Node3', 'Node2'}
node4_1 | 2020-12-15 12:35:12,096|NOTIFICATION|keep_in_touch.py|CONNECTION: Node4 now connected to Node3
node4_1 | 2020-12-15 12:35:12,096|INFO|motor.py|Node4 changing status from starting to started_hungry
node4_1 | 2020-12-15 12:35:12,096|INFO|primary_connection_monitor_service.py|Node4:0 scheduling primary connection check in 60 sec
node4_1 | 2020-12-15 12:35:12,225|INFO|seeder_service.py|Node4 received ledger status: LEDGER_STATUS{'ledgerId': 0, 'txnSeqNo': 4, 'viewNo': None, 'ppSeqNo': None, 'merkleRoot': 'uxUiVfSkoaFW1wi2fwd3XH1rRx7bWnA3KsM4GhFsu1Z', 'protocolVersion': 2} from Node3
node4_1 | 2020-12-15 12:35:12,226|INFO|cons_proof_service.py|Node4:ConsProofService:0 comparing its ledger 0 of size 4 with 4
node4_1 | 2020-12-15 12:35:12,226|INFO|cons_proof_service.py|Node4:ConsProofService:0 comparing its ledger 0 of size 4 with 4
node4_1 | 2020-12-15 12:35:12,232|INFO|cons_proof_service.py|Node4:ConsProofService:0 finished with consistency proof None
node4_1 | 2020-12-15 12:35:12,233|INFO|catchup_rep_service.py|Node4:CatchupRepService:0 started catching up with LedgerCatchupStart(ledger_id=0, catchup_till=None, nodes_ledger_sizes={})
node4_1 | 2020-12-15 12:35:12,233|INFO|catchup_rep_service.py|CATCH-UP: Node4:CatchupRepService:0 completed catching up ledger 0, caught up 0 in total
node4_1 | 2020-12-15 12:35:12,235|INFO|node_leecher_service.py|Node4:NodeLeecherService transitioning from PreSyncingPool to SyncingAudit
node4_1 | 2020-12-15 12:35:12,235|INFO|cons_proof_service.py|Node4:ConsProofService:3 starts
node4_1 | 2020-12-15 12:35:12,236|INFO|cons_proof_service.py|Node4:ConsProofService:3 asking for ledger status of ledger 3
node4_1 | 2020-12-15 12:35:12,266|INFO|seeder_service.py|Node4 received ledger status: LEDGER_STATUS{'ledgerId': 3, 'txnSeqNo': 0, 'viewNo': None, 'ppSeqNo': None, 'merkleRoot': 'GKot5hBsd81kMupNCXHaqbhv3huEbxAFMLnpcX2hniwn', 'protocolVersion': 2} from Node2
node4_1 | 2020-12-15 12:35:12,275|INFO|cons_proof_service.py|Node4:ConsProofService:3 comparing its ledger 3 of size 0 with 0
node4_1 | 2020-12-15 12:35:12,281|INFO|cons_proof_service.py|Node4:ConsProofService:3 comparing its ledger 3 of size 0 with 0
node4_1 | 2020-12-15 12:35:12,296|INFO|seeder_service.py|Node4 received ledger status: LEDGER_STATUS{'ledgerId': 3, 'txnSeqNo': 0, 'viewNo': None, 'ppSeqNo': None, 'merkleRoot': 'GKot5hBsd81kMupNCXHaqbhv3huEbxAFMLnpcX2hniwn', 'protocolVersion': 2} from Node3
node4_1 | 2020-12-15 12:35:12,296|INFO|cons_proof_service.py|Node4:ConsProofService:3 comparing its ledger 3 of size 0 with 0
node4_1 | 2020-12-15 12:35:12,296|INFO|cons_proof_service.py|Node4:ConsProofService:3 comparing its ledger 3 of size 0 with 0
node4_1 | 2020-12-15 12:35:12,297|INFO|cons_proof_service.py|Node4:ConsProofService:3 finished with consistency proof None
node4_1 | 2020-12-15 12:35:12,298|INFO|catchup_rep_service.py|Node4:CatchupRepService:3 started catching up with LedgerCatchupStart(ledger_id=3, catchup_till=None, nodes_ledger_sizes={})
node4_1 | 2020-12-15 12:35:12,301|INFO|catchup_rep_service.py|CATCH-UP: Node4:CatchupRepService:3 completed catching up ledger 3, caught up 0 in total
node4_1 | 2020-12-15 12:35:12,302|INFO|node_leecher_service.py|Node4:NodeLeecherService transitioning from SyncingAudit to SyncingPool
node4_1 | 2020-12-15 12:35:12,303|INFO|cons_proof_service.py|Node4:ConsProofService:0 starts
node4_1 | 2020-12-15 12:35:12,304|INFO|cons_proof_service.py|Node4:ConsProofService:0 asking for ledger status of ledger 0
node4_1 | 2020-12-15 12:35:12,336|INFO|seeder_service.py|Node4 received ledger status: LEDGER_STATUS{'ledgerId': 0, 'txnSeqNo': 4, 'viewNo': None, 'ppSeqNo': None, 'merkleRoot': 'uxUiVfSkoaFW1wi2fwd3XH1rRx7bWnA3KsM4GhFsu1Z', 'protocolVersion': 2} from Node3
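Each catch-up round in these entries is driven by LEDGER_STATUS gossip between the validators. The logged fields map onto a small record; a hedged sketch of that payload as it appears here (the ledger-id meanings follow indy-plenum's constants; the class itself is illustrative, not the project's actual type):

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class LedgerStatus:
        ledgerId: int          # 0 = pool, 1 = domain, 2 = config, 3 = audit
        txnSeqNo: int          # size of the sender's copy of that ledger
        viewNo: Optional[int]  # None while no view is active
        ppSeqNo: Optional[int]
        merkleRoot: str        # root hash that peers compare during catch-up
        protocolVersion: int

Because every comparison below reads "comparing its ledger N of size X with X", the ConsProofService instances finish with consistency proof None and the CatchupRepService rounds complete with 0 transactions fetched: the nodes already agree on every ledger.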
node4_1 | 2020-12-15 12:35:12,337|INFO|cons_proof_service.py|Node4:ConsProofService:0 comparing its ledger 0 of size 4 with 4
node4_1 | 2020-12-15 12:35:12,337|INFO|cons_proof_service.py|Node4:ConsProofService:0 comparing its ledger 0 of size 4 with 4
node4_1 | 2020-12-15 12:35:12,337|INFO|seeder_service.py|Node4 received ledger status: LEDGER_STATUS{'ledgerId': 0, 'txnSeqNo': 4, 'viewNo': None, 'ppSeqNo': None, 'merkleRoot': 'uxUiVfSkoaFW1wi2fwd3XH1rRx7bWnA3KsM4GhFsu1Z', 'protocolVersion': 2} from Node2
node4_1 | 2020-12-15 12:35:12,338|INFO|cons_proof_service.py|Node4:ConsProofService:0 comparing its ledger 0 of size 4 with 4
node4_1 | 2020-12-15 12:35:12,338|INFO|cons_proof_service.py|Node4:ConsProofService:0 comparing its ledger 0 of size 4 with 4
node4_1 | 2020-12-15 12:35:12,339|INFO|cons_proof_service.py|Node4:ConsProofService:0 finished with consistency proof None
node4_1 | 2020-12-15 12:35:12,339|INFO|catchup_rep_service.py|Node4:CatchupRepService:0 started catching up with LedgerCatchupStart(ledger_id=0, catchup_till=None, nodes_ledger_sizes={})
node4_1 | 2020-12-15 12:35:12,340|INFO|catchup_rep_service.py|CATCH-UP: Node4:CatchupRepService:0 completed catching up ledger 0, caught up 0 in total
node4_1 | 2020-12-15 12:35:12,341|INFO|node_leecher_service.py|Node4:NodeLeecherService transitioning from SyncingPool to SyncingConfig
node4_1 | 2020-12-15 12:35:12,342|INFO|cons_proof_service.py|Node4:ConsProofService:2 starts
node4_1 | 2020-12-15 12:35:12,342|INFO|cons_proof_service.py|Node4:ConsProofService:2 asking for ledger status of ledger 2
node4_1 | 2020-12-15 12:35:12,371|INFO|seeder_service.py|Node4 received ledger status: LEDGER_STATUS{'ledgerId': 2, 'txnSeqNo': 0, 'viewNo': None, 'ppSeqNo': None, 'merkleRoot': 'GKot5hBsd81kMupNCXHaqbhv3huEbxAFMLnpcX2hniwn', 'protocolVersion': 2} from Node3
node4_1 | 2020-12-15 12:35:12,372|INFO|cons_proof_service.py|Node4:ConsProofService:2 comparing its ledger 2 of size 0 with 0
node3_1 | 2020-12-15 12:35:12,378|INFO|seeder_service.py|Node3 received ledger status: LEDGER_STATUS{'ledgerId': 2, 'txnSeqNo': 0, 'viewNo': None, 'ppSeqNo': None, 'merkleRoot': 'GKot5hBsd81kMupNCXHaqbhv3huEbxAFMLnpcX2hniwn', 'protocolVersion': 2} from Node4
node3_1 | 2020-12-15 12:35:12,381|INFO|cons_proof_service.py|Node3:ConsProofService:2 comparing its ledger 2 of size 0 with 0
node3_1 | 2020-12-15 12:35:12,382|INFO|cons_proof_service.py|Node3:ConsProofService:2 comparing its ledger 2 of size 0 with 0
node3_1 | 2020-12-15 12:35:12,386|INFO|seeder_service.py|Node3 received ledger status: LEDGER_STATUS{'ledgerId': 2, 'txnSeqNo': 0, 'viewNo': None, 'ppSeqNo': None, 'merkleRoot': 'GKot5hBsd81kMupNCXHaqbhv3huEbxAFMLnpcX2hniwn', 'protocolVersion': 2} from Node2
node3_1 | 2020-12-15 12:35:12,389|INFO|cons_proof_service.py|Node3:ConsProofService:2 comparing its ledger 2 of size 0 with 0
node3_1 | 2020-12-15 12:35:12,395|INFO|cons_proof_service.py|Node3:ConsProofService:2 comparing its ledger 2 of size 0 with 0
node3_1 | 2020-12-15 12:35:12,396|INFO|cons_proof_service.py|Node3:ConsProofService:2 finished with consistency proof None
node3_1 | 2020-12-15 12:35:12,397|INFO|catchup_rep_service.py|Node3:CatchupRepService:2 started catching up with LedgerCatchupStart(ledger_id=2, catchup_till=None, nodes_ledger_sizes={})
node3_1 | 2020-12-15 12:35:12,399|INFO|catchup_rep_service.py|CATCH-UP: Node3:CatchupRepService:2 completed catching up ledger 2, caught up 0 in total
node3_1 | 2020-12-15 12:35:12,401|INFO|node_leecher_service.py|Node3:NodeLeecherService transitioning from SyncingConfig to SyncingOthers
node3_1 | 2020-12-15 12:35:12,402|INFO|cons_proof_service.py|Node3:ConsProofService:1 starts
node3_1 | 2020-12-15 12:35:12,402|INFO|cons_proof_service.py|Node3:ConsProofService:1 asking for ledger status of ledger 1
node3_1 | 2020-12-15 12:35:12,429|INFO|seeder_service.py|Node3 received ledger status: LEDGER_STATUS{'ledgerId': 1, 'txnSeqNo': 5, 'viewNo': None, 'ppSeqNo': None, 'merkleRoot': 'DJLzEifT7n9DbiCHqQ5KWrWUPFBiZt349popapDfzo5p', 'protocolVersion': 2} from Node4
node3_1 | 2020-12-15 12:35:12,430|INFO|cons_proof_service.py|Node3:ConsProofService:1 comparing its ledger 1 of size 5 with 5
node3_1 | 2020-12-15 12:35:12,431|INFO|cons_proof_service.py|Node3:ConsProofService:1 comparing its ledger 1 of size 5 with 5
node3_1 | 2020-12-15 12:35:12,438|INFO|seeder_service.py|Node3 received ledger status: LEDGER_STATUS{'ledgerId': 1, 'txnSeqNo': 5, 'viewNo': None, 'ppSeqNo': None, 'merkleRoot': 'DJLzEifT7n9DbiCHqQ5KWrWUPFBiZt349popapDfzo5p', 'protocolVersion': 2} from Node2
node3_1 | 2020-12-15 12:35:12,441|INFO|cons_proof_service.py|Node3:ConsProofService:1 comparing its ledger 1 of size 5 with 5
node3_1 | 2020-12-15 12:35:12,442|INFO|cons_proof_service.py|Node3:ConsProofService:1 comparing its ledger 1 of size 5 with 5
node3_1 | 2020-12-15 12:35:12,444|INFO|cons_proof_service.py|Node3:ConsProofService:1 finished with consistency proof None
node3_1 | 2020-12-15 12:35:12,445|INFO|catchup_rep_service.py|Node3:CatchupRepService:1 started catching up with LedgerCatchupStart(ledger_id=1, catchup_till=None, nodes_ledger_sizes={})
node3_1 | 2020-12-15 12:35:12,446|INFO|catchup_rep_service.py|CATCH-UP: Node3:CatchupRepService:1 completed catching up ledger 1, caught up 0 in total
node3_1 | 2020-12-15 12:35:12,447|INFO|node_leecher_service.py|Node3:NodeLeecherService transitioning from SyncingOthers to Idle
node3_1 | 2020-12-15 12:35:12,448|INFO|node.py|Node3 caught up to 0 txns in the last catchup
node3_1 | 2020-12-15 12:35:12,449|INFO|node_reg_handler.py|Loaded current node registry from the ledger: ['Node1', 'Node2', 'Node3', 'Node4']
node3_1 | 2020-12-15 12:35:12,452|INFO|node_reg_handler.py|Current committed node registry for previous views: [(0, ['Node1', 'Node2', 'Node3', 'Node4'])]
node3_1 | 2020-12-15 12:35:12,453|INFO|node_reg_handler.py|Current uncommitted node registry for previous views: [(0, ['Node1', 'Node2', 'Node3', 'Node4'])]
node3_1 | 2020-12-15 12:35:12,453|INFO|node_reg_handler.py|Current active node registry: ['Node1', 'Node2', 'Node3', 'Node4']
node3_1 | 2020-12-15 12:35:12,454|INFO|node.py|CATCH-UP: Node3 caught up till (0, 0)
node3_1 | 2020-12-15 12:35:12,455|INFO|node.py|CATCH-UP: Node3 does not need any more catchups
node3_1 | 2020-12-15 12:35:12,459|INFO|replica.py|Node3:0 setting primaryName for view no 0 to: None
node3_1 | 2020-12-15 12:35:12,460|INFO|replica.py|Node3:1 setting primaryName for view no 0 to: None
node3_1 | 2020-12-15 12:35:12,461|INFO|node.py|Node3 updated its pool parameters: f 1, totalNodes 4, allNodeNames {'Node3', 'Node4', 'Node2', 'Node1'}, requiredNumberOfInstances 2, minimumNodes 3, quorums {'n': 4, 'f': 1, 'weak': Quorum(2), 'strong': Quorum(3), 'propagate': Quorum(2), 'prepare': Quorum(2), 'commit': Quorum(3), 'reply': Quorum(2), 'view_change': Quorum(3), 'election': Quorum(3), 'view_change_ack': Quorum(2), 'view_change_done': Quorum(3), 'same_consistency_proof': Quorum(2), 'consistency_proof': Quorum(2), 'ledger_status': Quorum(2), 'ledger_status_last_3PC': Quorum(2), 'checkpoint': Quorum(2), 'timestamp': Quorum(2), 'bls_signatures': Quorum(3), 'observer_data': Quorum(2), 'backup_instance_faulty': Quorum(2)}
node3_1 | 2020-12-15 12:35:12,464|INFO|consensus_shared_data.py|Node3:0 updated validators list to {'Node3', 'Node4', 'Node2', 'Node1'}
node3_1 | 2020-12-15 12:35:12,465|INFO|consensus_shared_data.py|Node3:1 updated validators list to {'Node3', 'Node4', 'Node2', 'Node1'}
node3_1 | 2020-12-15 12:35:12,465|INFO|replica.py|Node3:0 setting primaryName for view no 0 to: Node1:0
node3_1 | 2020-12-15 12:35:12,466|NOTIFICATION|primary_connection_monitor_service.py|Node3:0 lost connection to primary of master
node3_1 | 2020-12-15 12:35:12,467|INFO|primary_connection_monitor_service.py|Node3:0 scheduling primary connection check in 60 sec
node3_1 | 2020-12-15 12:35:12,467|NOTIFICATION|node.py|PRIMARY SELECTION: selected primary Node1:0 for instance 0 (view 0)
node3_1 | 2020-12-15 12:35:12,468|INFO|replica.py|Node3:1 setting primaryName for view no 0 to: Node2:1
node3_1 | 2020-12-15 12:35:12,468|NOTIFICATION|node.py|PRIMARY SELECTION: selected primary Node2:1 for instance 1 (view 0)
node3_1 | 2020-12-15 12:35:12,470|INFO|node.py|Node3 started participating
node3_1 | 2020-12-15 12:35:12,471|INFO|checkpoint_service.py|update_watermark_from_3pc to (0, 0)
node3_1 | 2020-12-15 12:35:12,471|INFO|checkpoint_service.py|Node3:0 - checkpoint_service set watermarks as 0 300
node3_1 | 2020-12-15 12:35:12,476|INFO|checkpoint_service.py|Node3:1 - checkpoint_service set watermarks as 0 9223372036854775807
node3_1 | 2020-12-15 12:35:12,478|INFO|last_sent_pp_store_helper.py|Node3 erasing stored lastSentPrePrepare
node3_1 | 2020-12-15 12:35:12,479|INFO|zstack.py|Processing messages from previously unknown remotes.
node3_1 | 2020-12-15 12:35:12,859|INFO|seeder_service.py|Node3 received ledger status: LEDGER_STATUS{'ledgerId': 3, 'txnSeqNo': 0, 'viewNo': None, 'ppSeqNo': None, 'merkleRoot': 'GKot5hBsd81kMupNCXHaqbhv3huEbxAFMLnpcX2hniwn', 'protocolVersion': 2} from Node1
node3_1 | 2020-12-15 12:35:12,860|INFO|cons_proof_service.py|Node3:ConsProofService:3 ignoring LEDGER_STATUS{'ledgerId': 3, 'txnSeqNo': 0, 'viewNo': None, 'ppSeqNo': None, 'merkleRoot': 'GKot5hBsd81kMupNCXHaqbhv3huEbxAFMLnpcX2hniwn', 'protocolVersion': 2} since it is not gathering ledger statuses
node3_1 | 2020-12-15 12:35:12,864|INFO|seeder_service.py|Node3 received ledger status: LEDGER_STATUS{'ledgerId': 0, 'txnSeqNo': 4, 'viewNo': None, 'ppSeqNo': None, 'merkleRoot': 'uxUiVfSkoaFW1wi2fwd3XH1rRx7bWnA3KsM4GhFsu1Z', 'protocolVersion': 2} from Node1
node3_1 | 2020-12-15 12:35:12,864|INFO|cons_proof_service.py|Node3:ConsProofService:0 ignoring LEDGER_STATUS{'ledgerId': 0, 'txnSeqNo': 4, 'viewNo': None, 'ppSeqNo': None, 'merkleRoot': 'uxUiVfSkoaFW1wi2fwd3XH1rRx7bWnA3KsM4GhFsu1Z', 'protocolVersion': 2} since it is not gathering ledger statuses
node2_1 | 2020-12-15 12:35:12,899|INFO|cons_proof_service.py|Node2:ConsProofService:0 ignoring LEDGER_STATUS{'ledgerId': 0, 'txnSeqNo': 4, 'viewNo': None, 'ppSeqNo': None, 'merkleRoot': 'uxUiVfSkoaFW1wi2fwd3XH1rRx7bWnA3KsM4GhFsu1Z', 'protocolVersion': 2} since it is not gathering ledger statuses
node2_1 | 2020-12-15 12:35:12,899|INFO|seeder_service.py|Node2 received ledger status: LEDGER_STATUS{'ledgerId': 2, 'txnSeqNo': 0, 'viewNo': None, 'ppSeqNo': None, 'merkleRoot': 'GKot5hBsd81kMupNCXHaqbhv3huEbxAFMLnpcX2hniwn', 'protocolVersion': 2} from Node1
node2_1 | 2020-12-15 12:35:12,900|INFO|cons_proof_service.py|Node2:ConsProofService:2 ignoring LEDGER_STATUS{'ledgerId': 2, 'txnSeqNo': 0, 'viewNo': None, 'ppSeqNo': None, 'merkleRoot': 'GKot5hBsd81kMupNCXHaqbhv3huEbxAFMLnpcX2hniwn', 'protocolVersion': 2} since it is not gathering ledger statuses
node2_1 | 2020-12-15 12:35:12,900|INFO|seeder_service.py|Node2 received ledger status: LEDGER_STATUS{'ledgerId': 1, 'txnSeqNo': 5, 'viewNo': None, 'ppSeqNo': None, 'merkleRoot': 'DJLzEifT7n9DbiCHqQ5KWrWUPFBiZt349popapDfzo5p', 'protocolVersion': 2} from Node1
node2_1 | 2020-12-15 12:35:12,901|INFO|cons_proof_service.py|Node2:ConsProofService:1 ignoring LEDGER_STATUS{'ledgerId': 1, 'txnSeqNo': 5, 'viewNo': None, 'ppSeqNo': None, 'merkleRoot': 'DJLzEifT7n9DbiCHqQ5KWrWUPFBiZt349popapDfzo5p', 'protocolVersion': 2} since it is not gathering ledger statuses
node2_1 | 2020-12-15 12:35:12,936|INFO|seeder_service.py|Node2 received ledger status: LEDGER_STATUS{'ledgerId': 0, 'txnSeqNo': 4, 'viewNo': None, 'ppSeqNo': None, 'merkleRoot': 'uxUiVfSkoaFW1wi2fwd3XH1rRx7bWnA3KsM4GhFsu1Z', 'protocolVersion': 2} from Node1
node2_1 | 2020-12-15 12:35:12,937|INFO|cons_proof_service.py|Node2:ConsProofService:0 ignoring LEDGER_STATUS{'ledgerId': 0, 'txnSeqNo': 4, 'viewNo': None, 'ppSeqNo': None, 'merkleRoot': 'uxUiVfSkoaFW1wi2fwd3XH1rRx7bWnA3KsM4GhFsu1Z', 'protocolVersion': 2} since it is not gathering ledger statuses
node2_1 | 2020-12-15 12:40:13,564|INFO|ordering_service.py|Node2:0 set last ordered as (0, 1)
node2_1 | 2020-12-15 12:40:13,623|NOTIFICATION|looper.py|Looper shutting down now...
node2_1 | 2020-12-15 12:40:13,623|INFO|motor.py|Node2 changing status from started to stopping
node2_1 | 2020-12-15 12:40:13,632|INFO|node.py|node Node2 current stats
node2_1 | --------------------------------------------------------
node2_1 | node inbox size : 0
node2_1 | client inbox size : 0
node2_1 | age (seconds) : 309.1442642211914
node2_1 | next check for reconnect: 0.13020716699998047
node2_1 | node connections : {'Node4', 'Node3', 'Node1'}
node2_1 | f : 1
node2_1 | master instance : 0
node2_1 | replicas : 2
node2_1 | view no : 0
node2_1 | rank : 1
node2_1 | msgs to replicas : 2
node2_1 | action queue : 0 140514239817520
node2_1 | action queue stash : 5 140514239817624
node2_1 | 2020-12-15 12:40:13,640|INFO|node.py|Node2 reseting...
node2_1 | 2020-12-15 12:40:13,669|NOTIFICATION|zstack.py|stack Node2 closing its listener
node2_1 | 2020-12-15 12:40:13,672|NOTIFICATION|zstack.py|stack Node2 stopped
node2_1 | 2020-12-15 12:40:13,672|NOTIFICATION|zstack.py|stack Node2C closing its listener
node2_1 | 2020-12-15 12:40:13,674|NOTIFICATION|zstack.py|stack Node2C stopped
node2_1 | 2020-12-15 12:40:13,674|INFO|node.py|Node2 closing key-value storages
node2_1 | 2020-12-15 12:40:13,681|INFO|motor.py|Node2 changing status from stopping to stopped
node2_1 | 2020-12-15 12:40:13,681|NOTIFICATION|looper.py|Looper shut down in 0.058 seconds.
node2_1 | Traceback (most recent call last):
node2_1 | File "/home/indy/.pyenv/versions/3.6.9/bin/start_indy_node", line 19, in <module>
node2_1 | client_ip=sys.argv[4], client_port=int(sys.argv[5]))
node2_1 | File "/home/indy/.pyenv/versions/3.6.9/lib/python3.6/site-packages/indy_node/utils/node_runner.py", line 53, in run_node
node2_1 | looper.run()
node2_1 | File "/home/indy/.pyenv/versions/3.6.9/lib/python3.6/site-packages/stp_core/loop/looper.py", line 263, in run
node2_1 | return self.loop.run_until_complete(what)
node2_1 | File "/home/indy/.pyenv/versions/3.6.9/lib/python3.6/asyncio/base_events.py", line 484, in run_until_complete
node2_1 | return future.result()
node2_1 | File "/home/indy/.pyenv/versions/3.6.9/lib/python3.6/site-packages/stp_core/loop/looper.py", line 227, in runForever
node2_1 | await self.runOnceNicely()
node2_1 | File "/home/indy/.pyenv/versions/3.6.9/lib/python3.6/site-packages/stp_core/loop/looper.py", line 210, in runOnceNicely
node2_1 | msgsProcessed = await self.prodAllOnce()
node2_1 | File "/home/indy/.pyenv/versions/3.6.9/lib/python3.6/site-packages/stp_core/loop/looper.py", line 152, in prodAllOnce
node2_1 | s += await n.prod(limit)
node2_1 | File "/home/indy/.pyenv/versions/3.6.9/lib/python3.6/site-packages/indy_node/server/node.py", line 232, in prod
node2_1 | c = await super().prod(limit)
node2_1 | File "/home/indy/.pyenv/versions/3.6.9/lib/python3.6/site-packages/plenum/common/metrics_collector.py", line 373, in wrapper
node2_1 | return await f(self, *args, **kwargs)
node2_1 | File "/home/indy/.pyenv/versions/3.6.9/lib/python3.6/site-packages/plenum/server/node.py", line 1056, in prod
node2_1 | c += await self.serviceReplicas(limit)
node2_1 | File "/home/indy/.pyenv/versions/3.6.9/lib/python3.6/site-packages/plenum/common/metrics_collector.py", line 373, in wrapper
node2_1 | return await f(self, *args, **kwargs)
node2_1 | File "/home/indy/.pyenv/versions/3.6.9/lib/python3.6/site-packages/plenum/server/node.py", line 1087, in serviceReplicas
node2_1 | return self._process_replica_messages(limit)
node2_1 | File "/home/indy/.pyenv/versions/3.6.9/lib/python3.6/site-packages/plenum/server/node.py", line 1090, in _process_replica_messages
node2_1 | inbox_processed = self.replicas.service_inboxes(limit)
node2_1 | File "/home/indy/.pyenv/versions/3.6.9/lib/python3.6/site-packages/plenum/server/replicas.py", line 102, in service_inboxes
node2_1 | sum(replica.serviceQueues(limit) for replica in self._replicas.values())
node2_1 | File "/home/indy/.pyenv/versions/3.6.9/lib/python3.6/site-packages/plenum/server/replicas.py", line 102, in <genexpr>
node2_1 | sum(replica.serviceQueues(limit) for replica in self._replicas.values())
node2_1 | File "/home/indy/.pyenv/versions/3.6.9/lib/python3.6/site-packages/plenum/server/replica.py", line 74, in wrapper
node2_1 | return f(self, *args, **kwargs)
node2_1 | File "/home/indy/.pyenv/versions/3.6.9/lib/python3.6/site-packages/plenum/server/replica.py", line 527, in serviceQueues
node2_1 | r += self._handle_external_messages(self.inBox, limit)
node2_1 | File "/home/indy/.pyenv/versions/3.6.9/lib/python3.6/site-packages/plenum/server/replica.py", line 549, in _handle_external_messages
node2_1 | self._external_bus.process_incoming(external_msg, sender)
node2_1 | File "/home/indy/.pyenv/versions/3.6.9/lib/python3.6/site-packages/plenum/common/event_bus.py", line 33, in process_incoming
node2_1 | self._route(message, frm)
node2_1 | File "/home/indy/.pyenv/versions/3.6.9/lib/python3.6/site-packages/plenum/common/router.py", line 49, in _route
node2_1 | return [handler(message, *args) for handler in handlers]
node2_1 | File "/home/indy/.pyenv/versions/3.6.9/lib/python3.6/site-packages/plenum/common/router.py", line 49, in <listcomp>
node2_1 | return [handler(message, *args) for handler in handlers]
node2_1 | File "/home/indy/.pyenv/versions/3.6.9/lib/python3.6/site-packages/plenum/common/stashing_router.py", line 172, in _process
node2_1 | result = handler(message, *args)
node2_1 | File "/home/indy/.pyenv/versions/3.6.9/lib/python3.6/site-packages/plenum/server/consensus/metrics_decorator.py", line 13, in wrapper
node2_1 | return f(self, *args, **kwargs)
node2_1 | File "/home/indy/.pyenv/versions/3.6.9/lib/python3.6/site-packages/plenum/server/consensus/ordering_service.py", line 451, in process_commit
webserver_1 | 2020-12-15 12:37:30,769|INFO|web_log.py|172.19.0.1 [15/Dec/2020:12:37:30 +0000] "GET /status?validators= HTTP/1.1" 200 331 "http://localhost:9000/" "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:79.0) Gecko/20100101 Firefox/79.0"
webserver_1 | 2020-12-15 12:37:40,802|INFO|web_log.py|172.19.0.1 [15/Dec/2020:12:37:40 +0000] "GET /status?validators= HTTP/1.1" 200 331 "http://localhost:9000/" "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:79.0) Gecko/20100101 Firefox/79.0"
webserver_1 | 2020-12-15 12:37:50,806|INFO|web_log.py|172.19.0.1 [15/Dec/2020:12:37:50 +0000] "GET /status?validators= HTTP/1.1" 200 331 "http://localhost:9000/" "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:79.0) Gecko/20100101 Firefox/79.0"
webserver_1 | 2020-12-15 12:38:00,824|INFO|web_log.py|172.19.0.1 [15/Dec/2020:12:38:00 +0000] "GET /status?validators= HTTP/1.1" 200 331 "http://localhost:9000/" "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:79.0) Gecko/20100101 Firefox/79.0"
webserver_1 | 2020-12-15 12:38:10,838|INFO|web_log.py|172.19.0.1 [15/Dec/2020:12:38:10 +0000] "GET /status?validators= HTTP/1.1" 200 331 "http://localhost:9000/" "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:79.0) Gecko/20100101 Firefox/79.0"
webserver_1 | 2020-12-15 12:38:20,845|INFO|web_log.py|172.19.0.1 [15/Dec/2020:12:38:20 +0000] "GET /status?validators= HTTP/1.1" 200 331 "http://localhost:9000/" "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:79.0) Gecko/20100101 Firefox/79.0"
webserver_1 | 2020-12-15 12:38:30,851|INFO|web_log.py|172.19.0.1 [15/Dec/2020:12:38:30 +0000] "GET /status?validators= HTTP/1.1" 200 331 "http://localhost:9000/" "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:79.0) Gecko/20100101 Firefox/79.0"
webserver_1 | 2020-12-15 12:38:40,860|INFO|web_log.py|172.19.0.1 [15/Dec/2020:12:38:40 +0000] "GET /status?validators= HTTP/1.1" 200 331 "http://localhost:9000/" "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:79.0) Gecko/20100101 Firefox/79.0"
webserver_1 | 2020-12-15 12:38:50,865|INFO|web_log.py|172.19.0.1 [15/Dec/2020:12:38:50 +0000] "GET /status?validators= HTTP/1.1" 200 331 "http://localhost:9000/" "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:79.0) Gecko/20100101 Firefox/79.0"
webserver_1 | 2020-12-15 12:39:00,877|INFO|web_log.py|172.19.0.1 [15/Dec/2020:12:39:00 +0000] "GET /status?validators= HTTP/1.1" 200 331 "http://localhost:9000/" "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:79.0) Gecko/20100101 Firefox/79.0"
webserver_1 | 2020-12-15 12:39:10,878|INFO|web_log.py|172.19.0.1 [15/Dec/2020:12:39:10 +0000] "GET /status?validators= HTTP/1.1" 200 331 "http://localhost:9000/" "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:79.0) Gecko/20100101 Firefox/79.0"
webserver_1 | 2020-12-15 12:39:20,884|INFO|web_log.py|172.19.0.1 [15/Dec/2020:12:39:20 +0000] "GET /status?validators= HTTP/1.1" 200 331 "http://localhost:9000/" "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:79.0) Gecko/20100101 Firefox/79.0"
webserver_1 | 2020-12-15 12:39:30,892|INFO|web_log.py|172.19.0.1 [15/Dec/2020:12:39:30 +0000] "GET /status?validators= HTTP/1.1" 200 331 "http://localhost:9000/" "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:79.0) Gecko/20100101 Firefox/79.0"
webserver_1 | 2020-12-15 12:39:40,897|INFO|web_log.py|172.19.0.1 [15/Dec/2020:12:39:40 +0000] "GET /status?validators= HTTP/1.1" 200 331 "http://localhost:9000/" "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:79.0) Gecko/20100101 Firefox/79.0"
webserver_1 | 2020-12-15 12:39:50,909|INFO|web_log.py|172.19.0.1 [15/Dec/2020:12:39:50 +0000] "GET /status?validators= HTTP/1.1" 200 331 "http://localhost:9000/" "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:79.0) Gecko/20100101 Firefox/79.0"
webserver_1 | 2020-12-15 12:40:00,913|INFO|web_log.py|172.19.0.1 [15/Dec/2020:12:40:00 +0000] "GET /status?validators= HTTP/1.1" 200 331 "http://localhost:9000/" "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:79.0) Gecko/20100101 Firefox/79.0"
webserver_1 | 2020-12-15 12:40:10,920|INFO|web_log.py|172.19.0.1 [15/Dec/2020:12:40:10 +0000] "GET /status?validators= HTTP/1.1" 200 331 "http://localhost:9000/" "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:79.0) Gecko/20100101 Firefox/79.0"
webserver_1 | 2020-12-15 12:40:20,924|INFO|web_log.py|172.19.0.1 [15/Dec/2020:12:40:20 +0000] "GET /status?validators= HTTP/1.1" 200 331 "http://localhost:9000/" "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:79.0) Gecko/20100101 Firefox/79.0"
webserver_1 | 2020-12-15 12:40:30,931|INFO|web_log.py|172.19.0.1 [15/Dec/2020:12:40:30 +0000] "GET /status?validators= HTTP/1.1" 200 331 "http://localhost:9000/" "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:79.0) Gecko/20100101 Firefox/79.0"
webserver_1 | 2020-12-15 12:40:40,936|INFO|web_log.py|172.19.0.1 [15/Dec/2020:12:40:40 +0000] "GET /status?validators= HTTP/1.1" 200 331 "http://localhost:9000/" "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:79.0) Gecko/20100101 Firefox/79.0"
webserver_1 | 2020-12-15 12:40:50,948|INFO|web_log.py|172.19.0.1 [15/Dec/2020:12:40:50 +0000] "GET /status?validators= HTTP/1.1" 200 331 "http://localhost:9000/" "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:79.0) Gecko/20100101 Firefox/79.0"
webserver_1 | 2020-12-15 12:41:00,955|INFO|web_log.py|172.19.0.1 [15/Dec/2020:12:41:00 +0000] "GET /status?validators= HTTP/1.1" 200 331 "http://localhost:9000/" "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:79.0) Gecko/20100101 Firefox/79.0"
webserver_1 | 2020-12-15 12:41:10,961|INFO|web_log.py|172.19.0.1 [15/Dec/2020:12:41:10 +0000] "GET /status?validators= HTTP/1.1" 200 331 "http://localhost:9000/" "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:79.0) Gecko/20100101 Firefox/79.0"
webserver_1 | 2020-12-15 12:41:20,964|INFO|web_log.py|172.19.0.1 [15/Dec/2020:12:41:20 +0000] "GET /status?validators= HTTP/1.1" 200 331 "http://localhost:9000/" "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:79.0) Gecko/20100101 Firefox/79.0"
webserver_1 | 2020-12-15 12:41:30,969|INFO|web_log.py|172.19.0.1 [15/Dec/2020:12:41:30 +0000] "GET /status?validators= HTTP/1.1" 200 331 "http://localhost:9000/" "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:79.0) Gecko/20100101 Firefox/79.0"
webserver_1 | 2020-12-15 12:41:40,973|INFO|web_log.py|172.19.0.1 [15/Dec/2020:12:41:40 +0000] "GET /status?validators= HTTP/1.1" 200 331 "http://localhost:9000/" "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:79.0) Gecko/20100101 Firefox/79.0"
node4_1 | 2020-12-15 12:35:12,372|INFO|cons_proof_service.py|Node4:ConsProofService:2 comparing its ledger 2 of size 0 with 0
node4_1 | 2020-12-15 12:35:12,391|INFO|seeder_service.py|Node4 received ledger status: LEDGER_STATUS{'ledgerId': 2, 'txnSeqNo': 0, 'viewNo': None, 'ppSeqNo': None, 'merkleRoot': 'GKot5hBsd81kMupNCXHaqbhv3huEbxAFMLnpcX2hniwn', 'protocolVersion': 2} from Node2
node4_1 | 2020-12-15 12:35:12,392|INFO|cons_proof_service.py|Node4:ConsProofService:2 comparing its ledger 2 of size 0 with 0
node4_1 | 2020-12-15 12:35:12,392|INFO|cons_proof_service.py|Node4:ConsProofService:2 comparing its ledger 2 of size 0 with 0
node4_1 | 2020-12-15 12:35:12,393|INFO|cons_proof_service.py|Node4:ConsProofService:2 finished with consistency proof None
node4_1 | 2020-12-15 12:35:12,396|INFO|catchup_rep_service.py|Node4:CatchupRepService:2 started catching up with LedgerCatchupStart(ledger_id=2, catchup_till=None, nodes_ledger_sizes={})
node4_1 | 2020-12-15 12:35:12,397|INFO|catchup_rep_service.py|CATCH-UP: Node4:CatchupRepService:2 completed catching up ledger 2, caught up 0 in total
node4_1 | 2020-12-15 12:35:12,398|INFO|node_leecher_service.py|Node4:NodeLeecherService transitioning from SyncingConfig to SyncingOthers
node4_1 | 2020-12-15 12:35:12,399|INFO|cons_proof_service.py|Node4:ConsProofService:1 starts
node4_1 | 2020-12-15 12:35:12,400|INFO|cons_proof_service.py|Node4:ConsProofService:1 asking for ledger status of ledger 1
node4_1 | 2020-12-15 12:35:12,433|INFO|seeder_service.py|Node4 received ledger status: LEDGER_STATUS{'ledgerId': 1, 'txnSeqNo': 5, 'viewNo': None, 'ppSeqNo': None, 'merkleRoot': 'DJLzEifT7n9DbiCHqQ5KWrWUPFBiZt349popapDfzo5p', 'protocolVersion': 2} from Node3
node4_1 | 2020-12-15 12:35:12,438|INFO|cons_proof_service.py|Node4:ConsProofService:1 comparing its ledger 1 of size 5 with 5
node4_1 | 2020-12-15 12:35:12,439|INFO|cons_proof_service.py|Node4:ConsProofService:1 comparing its ledger 1 of size 5 with 5
node4_1 | 2020-12-15 12:35:12,439|INFO|seeder_service.py|Node4 received ledger status: LEDGER_STATUS{'ledgerId': 1, 'txnSeqNo': 5, 'viewNo': None, 'ppSeqNo': None, 'merkleRoot': 'DJLzEifT7n9DbiCHqQ5KWrWUPFBiZt349popapDfzo5p', 'protocolVersion': 2} from Node2
node4_1 | 2020-12-15 12:35:12,440|INFO|cons_proof_service.py|Node4:ConsProofService:1 comparing its ledger 1 of size 5 with 5
node4_1 | 2020-12-15 12:35:12,441|INFO|cons_proof_service.py|Node4:ConsProofService:1 comparing its ledger 1 of size 5 with 5
node4_1 | 2020-12-15 12:35:12,442|INFO|cons_proof_service.py|Node4:ConsProofService:1 finished with consistency proof None
node4_1 | 2020-12-15 12:35:12,443|INFO|catchup_rep_service.py|Node4:CatchupRepService:1 started catching up with LedgerCatchupStart(ledger_id=1, catchup_till=None, nodes_ledger_sizes={})
node4_1 | 2020-12-15 12:35:12,444|INFO|catchup_rep_service.py|CATCH-UP: Node4:CatchupRepService:1 completed catching up ledger 1, caught up 0 in total
node4_1 | 2020-12-15 12:35:12,445|INFO|node_leecher_service.py|Node4:NodeLeecherService transitioning from SyncingOthers to Idle
node4_1 | 2020-12-15 12:35:12,445|INFO|node.py|Node4 caught up to 0 txns in the last catchup
node4_1 | 2020-12-15 12:35:12,447|INFO|node_reg_handler.py|Loaded current node registry from the ledger: ['Node1', 'Node2', 'Node3', 'Node4']
node4_1 | 2020-12-15 12:35:12,450|INFO|node_reg_handler.py|Current committed node registry for previous views: [(0, ['Node1', 'Node2', 'Node3', 'Node4'])]
node4_1 | 2020-12-15 12:35:12,450|INFO|node_reg_handler.py|Current uncommitted node registry for previous views: [(0, ['Node1', 'Node2', 'Node3', 'Node4'])]
node4_1 | 2020-12-15 12:35:12,452|INFO|node_reg_handler.py|Current active node registry: ['Node1', 'Node2', 'Node3', 'Node4']
node4_1 | 2020-12-15 12:35:12,453|INFO|node.py|CATCH-UP: Node4 caught up till (0, 0)
node4_1 | 2020-12-15 12:35:12,454|INFO|node.py|CATCH-UP: Node4 does not need any more catchups
node4_1 | 2020-12-15 12:35:12,455|INFO|replica.py|Node4:0 setting primaryName for view no 0 to: None
node4_1 | 2020-12-15 12:35:12,455|INFO|replica.py|Node4:1 setting primaryName for view no 0 to: None
node4_1 | 2020-12-15 12:35:12,460|INFO|node.py|Node4 updated its pool parameters: f 1, totalNodes 4, allNodeNames {'Node3', 'Node2', 'Node1', 'Node4'}, requiredNumberOfInstances 2, minimumNodes 3, quorums {'n': 4, 'f': 1, 'weak': Quorum(2), 'strong': Quorum(3), 'propagate': Quorum(2), 'prepare': Quorum(2), 'commit': Quorum(3), 'reply': Quorum(2), 'view_change': Quorum(3), 'election': Quorum(3), 'view_change_ack': Quorum(2), 'view_change_done': Quorum(3), 'same_consistency_proof': Quorum(2), 'consistency_proof': Quorum(2), 'ledger_status': Quorum(2), 'ledger_status_last_3PC': Quorum(2), 'checkpoint': Quorum(2), 'timestamp': Quorum(2), 'bls_signatures': Quorum(3), 'observer_data': Quorum(2), 'backup_instance_faulty': Quorum(2)}
node4_1 | 2020-12-15 12:35:12,461|INFO|consensus_shared_data.py|Node4:0 updated validators list to {'Node3', 'Node2', 'Node1', 'Node4'}
node4_1 | 2020-12-15 12:35:12,464|INFO|consensus_shared_data.py|Node4:1 updated validators list to {'Node3', 'Node2', 'Node1', 'Node4'}
node4_1 | 2020-12-15 12:35:12,464|INFO|replica.py|Node4:0 setting primaryName for view no 0 to: Node1:0
node4_1 | 2020-12-15 12:35:12,465|NOTIFICATION|primary_connection_monitor_service.py|Node4:0 lost connection to primary of master
node4_1 | 2020-12-15 12:35:12,466|INFO|primary_connection_monitor_service.py|Node4:0 scheduling primary connection check in 60 sec
node4_1 | 2020-12-15 12:35:12,466|NOTIFICATION|node.py|PRIMARY SELECTION: selected primary Node1:0 for instance 0 (view 0)
node4_1 | 2020-12-15 12:35:12,467|INFO|replica.py|Node4:1 setting primaryName for view no 0 to: Node2:1
node4_1 | 2020-12-15 12:35:12,467|NOTIFICATION|node.py|PRIMARY SELECTION: selected primary Node2:1 for instance 1 (view 0)
node4_1 | 2020-12-15 12:35:12,469|INFO|node.py|Node4 started participating
node4_1 | 2020-12-15 12:35:12,470|INFO|checkpoint_service.py|update_watermark_from_3pc to (0, 0)
node4_1 | 2020-12-15 12:35:12,471|INFO|checkpoint_service.py|Node4:0 - checkpoint_service set watermarks as 0 300
node4_1 | 2020-12-15 12:35:12,471|INFO|checkpoint_service.py|Node4:1 - checkpoint_service set watermarks as 0 9223372036854775807
node4_1 | 2020-12-15 12:35:12,472|INFO|last_sent_pp_store_helper.py|Node4 erasing stored lastSentPrePrepare
node4_1 | 2020-12-15 12:35:12,478|INFO|zstack.py|Processing messages from previously unknown remotes.
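The pool parameters that Node3 and Node4 log are all derived from the node count n = 4: f = (n - 1) // 3 = 1 faulty node tolerated, a weak quorum needs f + 1 = 2 matching replies, and a strong quorum (commit, view_change, election) needs n - f = 3. A small sketch of that arithmetic (standard BFT formulas; the function name is illustrative):

    def pool_quorums(n: int) -> dict:
        f = (n - 1) // 3          # maximum tolerated faulty nodes
        return {
            "n": n,
            "f": f,
            "weak": f + 1,        # guarantees at least one honest reply
            "strong": n - f,      # required for commit / view_change / election
        }

    print(pool_quorums(4))        # {'n': 4, 'f': 1, 'weak': 2, 'strong': 3}

The same figures explain requiredNumberOfInstances 2 (f + 1 protocol instances) and minimumNodes 3 (n - f) in the entries above.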
node4_1 | 2020-12-15 12:35:12,839|NOTIFICATION|keep_in_touch.py|Node4's connections changed from {'Node3', 'Node2'} to {'Node3', 'Node2', 'Node1'}
node4_1 | 2020-12-15 12:35:12,839|NOTIFICATION|keep_in_touch.py|CONNECTION: Node4 now connected to Node1
node4_1 | 2020-12-15 12:35:12,840|INFO|motor.py|Node4 changing status from started_hungry to started
node4_1 | 2020-12-15 12:35:12,840|NOTIFICATION|primary_connection_monitor_service.py|Node4:0 restored connection to primary of master
node4_1 | 2020-12-15 12:35:12,937|INFO|seeder_service.py|Node4 received ledger status: LEDGER_STATUS{'ledgerId': 0, 'txnSeqNo': 4, 'viewNo': None, 'ppSeqNo': None, 'merkleRoot': 'uxUiVfSkoaFW1wi2fwd3XH1rRx7bWnA3KsM4GhFsu1Z', 'protocolVersion': 2} from Node1
node4_1 | 2020-12-15 12:35:12,938|INFO|cons_proof_service.py|Node4:ConsProofService:0 ignoring LEDGER_STATUS{'ledgerId': 0, 'txnSeqNo': 4, 'viewNo': None, 'ppSeqNo': None, 'merkleRoot': 'uxUiVfSkoaFW1wi2fwd3XH1rRx7bWnA3KsM4GhFsu1Z', 'protocolVersion': 2} since it is not gathering ledger statuses
node4_1 | 2020-12-15 12:35:13,062|INFO|seeder_service.py|Node4 received ledger status: LEDGER_STATUS{'ledgerId': 3, 'txnSeqNo': 0, 'viewNo': None, 'ppSeqNo': None, 'merkleRoot': 'GKot5hBsd81kMupNCXHaqbhv3huEbxAFMLnpcX2hniwn', 'protocolVersion': 2} from Node1
node4_1 | 2020-12-15 12:35:13,062|INFO|cons_proof_service.py|Node4:ConsProofService:3 ignoring LEDGER_STATUS{'ledgerId': 3, 'txnSeqNo': 0, 'viewNo': None, 'ppSeqNo': None, 'merkleRoot': 'GKot5hBsd81kMupNCXHaqbhv3huEbxAFMLnpcX2hniwn', 'protocolVersion': 2} since it is not gathering ledger statuses
node3_1 | 2020-12-15 12:35:12,864|INFO|seeder_service.py|Node3 received ledger status: LEDGER_STATUS{'ledgerId': 2, 'txnSeqNo': 0, 'viewNo': None, 'ppSeqNo': None, 'merkleRoot': 'GKot5hBsd81kMupNCXHaqbhv3huEbxAFMLnpcX2hniwn', 'protocolVersion': 2} from Node1
node3_1 | 2020-12-15 12:35:12,864|INFO|cons_proof_service.py|Node3:ConsProofService:2 ignoring LEDGER_STATUS{'ledgerId': 2, 'txnSeqNo': 0, 'viewNo': None, 'ppSeqNo': None, 'merkleRoot': 'GKot5hBsd81kMupNCXHaqbhv3huEbxAFMLnpcX2hniwn', 'protocolVersion': 2} since it is not gathering ledger statuses
node3_1 | 2020-12-15 12:35:12,865|INFO|seeder_service.py|Node3 received ledger status: LEDGER_STATUS{'ledgerId': 1, 'txnSeqNo': 5, 'viewNo': None, 'ppSeqNo': None, 'merkleRoot': 'DJLzEifT7n9DbiCHqQ5KWrWUPFBiZt349popapDfzo5p', 'protocolVersion': 2} from Node1
node3_1 | 2020-12-15 12:35:12,871|INFO|cons_proof_service.py|Node3:ConsProofService:1 ignoring LEDGER_STATUS{'ledgerId': 1, 'txnSeqNo': 5, 'viewNo': None, 'ppSeqNo': None, 'merkleRoot': 'DJLzEifT7n9DbiCHqQ5KWrWUPFBiZt349popapDfzo5p', 'protocolVersion': 2} since it is not gathering ledger statuses
node3_1 | 2020-12-15 12:35:12,872|INFO|seeder_service.py|Node3 received ledger status: LEDGER_STATUS{'ledgerId': 0, 'txnSeqNo': 4, 'viewNo': None, 'ppSeqNo': None, 'merkleRoot': 'uxUiVfSkoaFW1wi2fwd3XH1rRx7bWnA3KsM4GhFsu1Z', 'protocolVersion': 2} from Node1
node3_1 | 2020-12-15 12:35:12,872|INFO|cons_proof_service.py|Node3:ConsProofService:0 ignoring LEDGER_STATUS{'ledgerId': 0, 'txnSeqNo': 4, 'viewNo': None, 'ppSeqNo': None, 'merkleRoot': 'uxUiVfSkoaFW1wi2fwd3XH1rRx7bWnA3KsM4GhFsu1Z', 'protocolVersion': 2} since it is not gathering ledger statuses
node3_1 | 2020-12-15 12:35:12,874|NOTIFICATION|keep_in_touch.py|Node3's connections changed from {'Node4', 'Node2'} to {'Node4', 'Node2', 'Node1'}
node3_1 | 2020-12-15 12:35:12,876|NOTIFICATION|keep_in_touch.py|CONNECTION: Node3 now connected to Node1
node3_1 | 2020-12-15 12:35:12,877|INFO|motor.py|Node3 changing status from started_hungry to started
node3_1 | 2020-12-15 12:35:12,879|NOTIFICATION|primary_connection_monitor_service.py|Node3:0 restored connection to primary of master
node3_1 | 2020-12-15 12:36:12,286|INFO|primary_connection_monitor_service.py|Node3:1 The primary is already connected so view change will not be proposed
node3_1 | 2020-12-15 12:40:13,572|INFO|ordering_service.py|Node3:0 set last ordered as (0, 1)
node3_1 | 2020-12-15 12:40:13,620|NOTIFICATION|looper.py|Looper shutting down now...
node3_1 | 2020-12-15 12:40:13,622|INFO|motor.py|Node3 changing status from started to stopping
node3_1 | 2020-12-15 12:40:13,632|INFO|node.py|node Node3 current stats
node3_1 | --------------------------------------------------------
node3_1 | node inbox size : 0
node3_1 | client inbox size : 0
node3_1 | age (seconds) : 304.0285816192627
node3_1 | next check for reconnect: -1.6498002470000301
node3_1 | node connections : {'Node4', 'Node2', 'Node1'}
node3_1 | f : 1
node3_1 | master instance : 0
node3_1 | replicas : 2
node3_1 | view no : 0
node3_1 | rank : 2
node3_1 | msgs to replicas : 7
node3_1 | action queue : 0 140300636616496
node3_1 | action queue stash : 5 140300636616600
node3_1 | 2020-12-15 12:40:13,640|INFO|node.py|Node3 reseting...
node3_1 | --- Logging error ---
node3_1 | Traceback (most recent call last):
node3_1 | File "/home/indy/.pyenv/versions/3.6.9/lib/python3.6/site-packages/indy_node/utils/node_runner.py", line 53, in run_node
node3_1 | looper.run()
node3_1 | File "/home/indy/.pyenv/versions/3.6.9/lib/python3.6/site-packages/stp_core/loop/looper.py", line 263, in run
node3_1 | return self.loop.run_until_complete(what)
node3_1 | File "/home/indy/.pyenv/versions/3.6.9/lib/python3.6/asyncio/base_events.py", line 484, in run_until_complete
node3_1 | return future.result()
node3_1 | File "/home/indy/.pyenv/versions/3.6.9/lib/python3.6/site-packages/stp_core/loop/looper.py", line 227, in runForever
node3_1 | await self.runOnceNicely()
node3_1 | File "/home/indy/.pyenv/versions/3.6.9/lib/python3.6/site-packages/stp_core/loop/looper.py", line 210, in runOnceNicely
node3_1 | msgsProcessed = await self.prodAllOnce()
node3_1 | File "/home/indy/.pyenv/versions/3.6.9/lib/python3.6/site-packages/stp_core/loop/looper.py", line 152, in prodAllOnce
node3_1 | s += await n.prod(limit)
node3_1 | File "/home/indy/.pyenv/versions/3.6.9/lib/python3.6/site-packages/indy_node/server/node.py", line 232, in prod
node3_1 | c = await super().prod(limit)
node3_1 | File "/home/indy/.pyenv/versions/3.6.9/lib/python3.6/site-packages/plenum/common/metrics_collector.py", line 373, in wrapper
node3_1 | return await f(self, *args, **kwargs)
node3_1 | File "/home/indy/.pyenv/versions/3.6.9/lib/python3.6/site-packages/plenum/server/node.py", line 1056, in prod
node3_1 | c += await self.serviceReplicas(limit)
node3_1 | File "/home/indy/.pyenv/versions/3.6.9/lib/python3.6/site-packages/plenum/common/metrics_collector.py", line 373, in wrapper
node3_1 | return await f(self, *args, **kwargs)
node3_1 | File "/home/indy/.pyenv/versions/3.6.9/lib/python3.6/site-packages/plenum/server/node.py", line 1087, in serviceReplicas
node3_1 | return self._process_replica_messages(limit)
node3_1 | File "/home/indy/.pyenv/versions/3.6.9/lib/python3.6/site-packages/plenum/server/node.py", line 1090, in _process_replica_messages
node3_1 | inbox_processed = self.replicas.service_inboxes(limit)
node3_1 | File "/home/indy/.pyenv/versions/3.6.9/lib/python3.6/site-packages/plenum/server/replicas.py", line 102, in service_inboxes
node3_1 | sum(replica.serviceQueues(limit) for replica in self._replicas.values())
node3_1 | File "/home/indy/.pyenv/versions/3.6.9/lib/python3.6/site-packages/plenum/server/replicas.py", line 102, in <genexpr>
node3_1 | sum(replica.serviceQueues(limit) for replica in self._replicas.values())
node3_1 | File "/home/indy/.pyenv/versions/3.6.9/lib/python3.6/site-packages/plenum/server/replica.py", line 74, in wrapper
node3_1 | return f(self, *args, **kwargs)
node3_1 | File "/home/indy/.pyenv/versions/3.6.9/lib/python3.6/site-packages/plenum/server/replica.py", line 527, in serviceQueues
node3_1 | r += self._handle_external_messages(self.inBox, limit)
node3_1 | File "/home/indy/.pyenv/versions/3.6.9/lib/python3.6/site-packages/plenum/server/replica.py", line 549, in _handle_external_messages
node3_1 | self._external_bus.process_incoming(external_msg, sender)
node3_1 | File "/home/indy/.pyenv/versions/3.6.9/lib/python3.6/site-packages/plenum/common/event_bus.py", line 33, in process_incoming
node3_1 | self._route(message, frm)
node3_1 | File "/home/indy/.pyenv/versions/3.6.9/lib/python3.6/site-packages/plenum/common/router.py", line 49, in _route
node3_1 | return [handler(message, *args) for handler in handlers]
node3_1 | File "/home/indy/.pyenv/versions/3.6.9/lib/python3.6/site-packages/plenum/common/router.py", line 49, in <listcomp>
node3_1 | return [handler(message, *args) for handler in handlers]
node3_1 | File "/home/indy/.pyenv/versions/3.6.9/lib/python3.6/site-packages/plenum/common/stashing_router.py", line 172, in _process
node3_1 | result = handler(message, *args)
node3_1 | File "/home/indy/.pyenv/versions/3.6.9/lib/python3.6/site-packages/plenum/server/consensus/metrics_decorator.py", line 13, in wrapper
node3_1 | return f(self, *args, **kwargs)
node3_1 | File "/home/indy/.pyenv/versions/3.6.9/lib/python3.6/site-packages/plenum/server/consensus/ordering_service.py", line 451, in process_commit
node3_1 | self._add_to_commits(commit, sender)
node3_1 | File "/home/indy/.pyenv/versions/3.6.9/lib/python3.6/site-packages/plenum/server/consensus/ordering_service.py", line 1451, in _add_to_commits
node3_1 | self._try_order(commit)
node1_1 | File "/home/indy/.pyenv/versions/3.6.9/lib/python3.6/site-packages/plenum/server/consensus/metrics_decorator.py", line 13, in wrapper
node1_1 | return f(self, *args, **kwargs)
node1_1 | File "/home/indy/.pyenv/versions/3.6.9/lib/python3.6/site-packages/plenum/server/consensus/ordering_service.py", line 451, in process_commit
node1_1 | self._add_to_commits(commit, sender)
node1_1 | File "/home/indy/.pyenv/versions/3.6.9/lib/python3.6/site-packages/plenum/server/consensus/ordering_service.py", line 1451, in _add_to_commits
node1_1 | self._try_order(commit)
node1_1 | File "/home/indy/.pyenv/versions/3.6.9/lib/python3.6/site-packages/plenum/server/consensus/ordering_service.py", line 1463, in _try_order
node1_1 | self._do_order(commit)
node1_1 | File "/home/indy/.pyenv/versions/3.6.9/lib/python3.6/site-packages/plenum/server/consensus/ordering_service.py", line 1477, in _do_order
node1_1 | return self._order_3pc_key(key)
node1_1 | File "/home/indy/.pyenv/versions/3.6.9/lib/python3.6/site-packages/plenum/server/consensus/metrics_decorator.py", line 13, in wrapper
node1_1 | return f(self, *args, **kwargs)
node1_1 | File "/home/indy/.pyenv/versions/3.6.9/lib/python3.6/site-packages/plenum/server/consensus/ordering_service.py", line 1524, in _order_3pc_key
node1_1 | self.l_bls_bft_replica.process_order(key, self._data.quorums, pp)
node1_1 | File "/home/indy/.pyenv/versions/3.6.9/lib/python3.6/site-packages/plenum/bls/bls_bft_replica_plenum.py", line 170, in process_order
node1_1 | self._save_multi_sig_local(bls_multi_sig)
node1_1 | File "/home/indy/.pyenv/versions/3.6.9/lib/python3.6/site-packages/plenum/bls/bls_bft_replica_plenum.py", line 301, in _save_multi_sig_local
node1_1 | self._bls_bft.bls_store.put(multi_sig)
node1_1 | File "/home/indy/.pyenv/versions/3.6.9/lib/python3.6/site-packages/plenum/bls/bls_store.py", line 27, in put
node1_1 | self._kvs.put(state_root_hash, serialized_multi_sign)
node1_1 | File "/home/indy/.pyenv/versions/3.6.9/lib/python3.6/site-packages/storage/kv_store_rocksdb.py", line 108, in put
node1_1 | self._db.put(key, value)
node1_1 | File "rocksdb/_rocksdb.pyx", line 1472, in rocksdb._rocksdb.DB.put
node1_1 | File "rocksdb/_rocksdb.pyx", line 84, in rocksdb._rocksdb.check_status
node1_1 | rocksdb.errors.RocksIOError: b'IO error: No space left on deviceWhile appending to file: /home/indy/ledger/sandbox/data/Node1/state_signature/000003.log: No space left on device'
node2_1 | self._add_to_commits(commit, sender)
node2_1 | File "/home/indy/.pyenv/versions/3.6.9/lib/python3.6/site-packages/plenum/server/consensus/ordering_service.py", line 1451, in _add_to_commits
node2_1 | self._try_order(commit)
node2_1 | File "/home/indy/.pyenv/versions/3.6.9/lib/python3.6/site-packages/plenum/server/consensus/ordering_service.py", line 1463, in _try_order
node2_1 | self._do_order(commit)
node2_1 | File "/home/indy/.pyenv/versions/3.6.9/lib/python3.6/site-packages/plenum/server/consensus/ordering_service.py", line 1477, in _do_order
node2_1 | return self._order_3pc_key(key)
node2_1 | File "/home/indy/.pyenv/versions/3.6.9/lib/python3.6/site-packages/plenum/server/consensus/metrics_decorator.py", line 13, in wrapper
node2_1 | return f(self, *args, **kwargs)
node2_1 | File "/home/indy/.pyenv/versions/3.6.9/lib/python3.6/site-packages/plenum/server/consensus/ordering_service.py", line 1524, in _order_3pc_key
node2_1 | self.l_bls_bft_replica.process_order(key, self._data.quorums, pp)
node2_1 | File "/home/indy/.pyenv/versions/3.6.9/lib/python3.6/site-packages/plenum/bls/bls_bft_replica_plenum.py", line 170, in process_order
node2_1 | self._save_multi_sig_local(bls_multi_sig)
node2_1 | File "/home/indy/.pyenv/versions/3.6.9/lib/python3.6/site-packages/plenum/bls/bls_bft_replica_plenum.py", line 301, in _save_multi_sig_local
node2_1 | self._bls_bft.bls_store.put(multi_sig)
node2_1 | File "/home/indy/.pyenv/versions/3.6.9/lib/python3.6/site-packages/plenum/bls/bls_store.py", line 27, in put
node2_1 | self._kvs.put(state_root_hash, serialized_multi_sign)
node2_1 | File "/home/indy/.pyenv/versions/3.6.9/lib/python3.6/site-packages/storage/kv_store_rocksdb.py", line 108, in put
node2_1 | self._db.put(key, value)
node2_1 | File "rocksdb/_rocksdb.pyx", line 1472, in rocksdb._rocksdb.DB.put
node2_1 | File "rocksdb/_rocksdb.pyx", line 84, in rocksdb._rocksdb.check_status
node2_1 | rocksdb.errors.RocksIOError: b'IO error: No space left on deviceWhile appending to file: /home/indy/ledger/sandbox/data/Node2/state_signature/000003.log: No space left on device'
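Both node crashes bottom out in the same root cause: RocksDB cannot append to the state_signature/000003.log files because the volume behind /home/indy/ledger is full, and the webserver's PoolLedgerTimeout a few minutes earlier is consistent with a pool that never became fully healthy. A hedged pre-flight check that would surface this before the nodes start (path and threshold are illustrative):

    import shutil

    def check_ledger_space(path: str = "/home/indy/ledger", min_free_gb: float = 1.0) -> float:
        usage = shutil.disk_usage(path)   # named tuple: total, used, free (bytes)
        free_gb = usage.free / 1024 ** 3
        if free_gb < min_free_gb:
            raise RuntimeError(
                f"only {free_gb:.2f} GiB free under {path}; RocksDB writes will "
                "fail with 'No space left on device'"
            )
        return free_gb

Freeing space on the Docker host (for example, removing unused images and volumes) and restarting the containers is the usual remedy for this failure mode.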
node4_1 | 2020-12-15 12:35:13,063|INFO|cons_proof_service.py|Node4:ConsProofService:0 ignoring LEDGER_STATUS{'ledgerId': 0, 'txnSeqNo': 4, 'viewNo': None, 'ppSeqNo': None, 'merkleRoot': 'uxUiVfSkoaFW1wi2fwd3XH1rRx7bWnA3KsM4GhFsu1Z', 'protocolVersion': 2} since it is not gathering ledger statuses
node4_1 | 2020-12-15 12:35:13,063|INFO|seeder_service.py|Node4 received ledger status: LEDGER_STATUS{'ledgerId': 2, 'txnSeqNo': 0, 'viewNo': None, 'ppSeqNo': None, 'merkleRoot': 'GKot5hBsd81kMupNCXHaqbhv3huEbxAFMLnpcX2hniwn', 'protocolVersion': 2} from Node1
node4_1 | 2020-12-15 12:35:13,063|INFO|cons_proof_service.py|Node4:ConsProofService:2 ignoring LEDGER_STATUS{'ledgerId': 2, 'txnSeqNo': 0, 'viewNo': None, 'ppSeqNo': None, 'merkleRoot': 'GKot5hBsd81kMupNCXHaqbhv3huEbxAFMLnpcX2hniwn', 'protocolVersion': 2} since it is not gathering ledger statuses
node4_1 | 2020-12-15 12:35:13,064|INFO|seeder_service.py|Node4 received ledger status: LEDGER_STATUS{'ledgerId': 1, 'txnSeqNo': 5, 'viewNo': None, 'ppSeqNo': None, 'merkleRoot': 'DJLzEifT7n9DbiCHqQ5KWrWUPFBiZt349popapDfzo5p', 'protocolVersion': 2} from Node1
node4_1 | 2020-12-15 12:35:13,064|INFO|cons_proof_service.py|Node4:ConsProofService:1 ignoring LEDGER_STATUS{'ledgerId': 1, 'txnSeqNo': 5, 'viewNo': None, 'ppSeqNo': None, 'merkleRoot': 'DJLzEifT7n9DbiCHqQ5KWrWUPFBiZt349popapDfzo5p', 'protocolVersion': 2} since it is not gathering ledger statuses
node4_1 | 2020-12-15 12:40:13,523|INFO|ordering_service.py|Node4:0 set last ordered as (0, 1)
node4_1 | 2020-12-15 12:40:13,631|INFO|ordering_service.py|Node4:0 ordered batch request, view no 0, ppSeqNo 1, ledger 0, state root 9mYGVPSfvA24aJo5vnrC1JDBNf3dVSYEAJUVwaapkjeL, txn root uxUiVfSkoaFW1wi2fwd3XH1rRx7bWnA3KsM4GhFsu1Z, audit root 2QcSokfSXdUM5PQPud1tMMyTLKLk85M1AGh9Ptn9o3bD, requests ordered 0, discarded 0
node4_1 | --- Logging error ---
node4_1 | 2020-12-15 12:40:13,636|INFO|node.py|Node4 ordered the first batch in the new view 0. Going to adjust replicas.
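The "ignoring LEDGER_STATUS ... since it is not gathering ledger statuses" lines above are benign: statuses are only consumed while catch-up is actively collecting them. A rough sketch of that gating pattern (our own toy, not plenum's actual ConsProofService):

```python
# Illustrative pattern only: a catch-up service accepts LEDGER_STATUS messages
# solely during its gathering phase; anything arriving outside it is logged
# and dropped rather than treated as an error.
class ConsProofServiceSketch:
    def __init__(self) -> None:
        self._gathering = False  # flipped True while catch-up collects statuses

    def process_ledger_status(self, status: dict, frm: str) -> None:
        if not self._gathering:
            print(f"ignoring {status} since it is not gathering ledger statuses")
            return
        # a real implementation would compare merkleRoot / txnSeqNo here
        ...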
node4_1 | 2020-12-15 12:40:13,637|INFO|node.py|Node4 updated its pool parameters: f 1, totalNodes 4, allNodeNames {'Node3', 'Node2', 'Node1', 'Node4'}, requiredNumberOfInstances 2, minimumNodes 3, quorums {'n': 4, 'f': 1, 'weak': Quorum(2), 'strong': Quorum(3), 'propagate': Quorum(2), 'prepare': Quorum(2), 'commit': Quorum(3), 'reply': Quorum(2), 'view_change': Quorum(3), 'election': Quorum(3), 'view_change_ack': Quorum(2), 'view_change_done': Quorum(3), 'same_consistency_proof': Quorum(2), 'consistency_proof': Quorum(2), 'ledger_status': Quorum(2), 'ledger_status_last_3PC': Quorum(2), 'checkpoint': Quorum(2), 'timestamp': Quorum(2), 'bls_signatures': Quorum(3), 'observer_data': Quorum(2), 'backup_instance_faulty': Quorum(2)}
node4_1 | Traceback (most recent call last):
node4_1 |   File "/home/indy/.pyenv/versions/3.6.9/lib/python3.6/logging/__init__.py", line 998, in emit
node4_1 |     self.flush()
node4_1 |   File "/home/indy/.pyenv/versions/3.6.9/lib/python3.6/logging/__init__.py", line 978, in flush
node4_1 |     self.stream.flush()
node4_1 | OSError: [Errno 28] No space left on device
node4_1 | Call stack:
node4_1 |   File "/home/indy/.pyenv/versions/3.6.9/bin/start_indy_node", line 19, in <module>
node4_1 |     client_ip=sys.argv[4], client_port=int(sys.argv[5]))
node4_1 |   File "/home/indy/.pyenv/versions/3.6.9/lib/python3.6/site-packages/indy_node/utils/node_runner.py", line 53, in run_node
node4_1 |     looper.run()
node4_1 |   File "/home/indy/.pyenv/versions/3.6.9/lib/python3.6/site-packages/stp_core/loop/looper.py", line 263, in run
node4_1 |     return self.loop.run_until_complete(what)
node4_1 |   File "/home/indy/.pyenv/versions/3.6.9/lib/python3.6/asyncio/base_events.py", line 471, in run_until_complete
node4_1 |     self.run_forever()
node4_1 |   File "/home/indy/.pyenv/versions/3.6.9/lib/python3.6/asyncio/base_events.py", line 438, in run_forever
node4_1 |     self._run_once()
node4_1 |   File "/home/indy/.pyenv/versions/3.6.9/lib/python3.6/asyncio/base_events.py", line 1451, in _run_once
node4_1 |     handle._run()
node4_1 |   File "/home/indy/.pyenv/versions/3.6.9/lib/python3.6/asyncio/events.py", line 145, in _run
node4_1 |     self._callback(*self._args)
node4_1 |   File "/home/indy/.pyenv/versions/3.6.9/lib/python3.6/site-packages/stp_core/loop/looper.py", line 227, in runForever
node4_1 |     await self.runOnceNicely()
node4_1 |   File "/home/indy/.pyenv/versions/3.6.9/lib/python3.6/site-packages/stp_core/loop/looper.py", line 210, in runOnceNicely
node4_1 |     msgsProcessed = await self.prodAllOnce()
node4_1 |   File "/home/indy/.pyenv/versions/3.6.9/lib/python3.6/site-packages/stp_core/loop/looper.py", line 152, in prodAllOnce
node4_1 |     s += await n.prod(limit)
node4_1 |   File "/home/indy/.pyenv/versions/3.6.9/lib/python3.6/site-packages/indy_node/server/node.py", line 232, in prod
node4_1 |     c = await super().prod(limit)
node4_1 |   File "/home/indy/.pyenv/versions/3.6.9/lib/python3.6/site-packages/plenum/common/metrics_collector.py", line 373, in wrapper
node4_1 |     return await f(self, *args, **kwargs)
node4_1 |   File "/home/indy/.pyenv/versions/3.6.9/lib/python3.6/site-packages/plenum/server/node.py", line 1056, in prod
node4_1 |     c += await self.serviceReplicas(limit)
node4_1 |   File "/home/indy/.pyenv/versions/3.6.9/lib/python3.6/site-packages/plenum/common/metrics_collector.py", line 373, in wrapper
node4_1 |     return await f(self, *args, **kwargs)
node4_1 |   File "/home/indy/.pyenv/versions/3.6.9/lib/python3.6/site-packages/plenum/server/node.py", line 1087, in serviceReplicas
node4_1 |     return self._process_replica_messages(limit)
File "/home/indy/.pyenv/versions/3.6.9/lib/python3.6/site-packages/plenum/server/node.py", line 1090, in _process_replica_messages node4_1 | inbox_processed = self.replicas.service_inboxes(limit) node4_1 | File "/home/indy/.pyenv/versions/3.6.9/lib/python3.6/site-packages/plenum/server/replicas.py", line 102, in service_inboxes node4_1 | sum(replica.serviceQueues(limit) for replica in self._replicas.values()) node4_1 | File "/home/indy/.pyenv/versions/3.6.9/lib/python3.6/site-packages/plenum/server/replicas.py", line 102, in node4_1 | sum(replica.serviceQueues(limit) for replica in self._replicas.values()) node4_1 | File "/home/indy/.pyenv/versions/3.6.9/lib/python3.6/site-packages/plenum/server/replica.py", line 74, in wrapper node4_1 | return f(self, *args, **kwargs) node4_1 | File "/home/indy/.pyenv/versions/3.6.9/lib/python3.6/site-packages/plenum/server/replica.py", line 527, in serviceQueues node4_1 | r += self._handle_external_messages(self.inBox, limit) node4_1 | File "/home/indy/.pyenv/versions/3.6.9/lib/python3.6/site-packages/plenum/server/replica.py", line 549, in _handle_external_messages node4_1 | self._external_bus.process_incoming(external_msg, sender) node4_1 | File "/home/indy/.pyenv/versions/3.6.9/lib/python3.6/site-packages/plenum/common/event_bus.py", line 33, in process_incoming node4_1 | self._route(message, frm) node4_1 | File "/home/indy/.pyenv/versions/3.6.9/lib/python3.6/site-packages/plenum/common/router.py", line 49, in _route node4_1 | return [handler(message, *args) for handler in handlers] node4_1 | File "/home/indy/.pyenv/versions/3.6.9/lib/python3.6/site-packages/plenum/common/router.py", line 49, in node4_1 | return [handler(message, *args) for handler in handlers] node4_1 | File "/home/indy/.pyenv/versions/3.6.9/lib/python3.6/site-packages/plenum/common/stashing_router.py", line 172, in _process node4_1 | result = handler(message, *args) node3_1 | File "/home/indy/.pyenv/versions/3.6.9/lib/python3.6/site-packages/plenum/server/consensus/ordering_service.py", line 1463, in _try_order node3_1 | self._do_order(commit) node3_1 | File "/home/indy/.pyenv/versions/3.6.9/lib/python3.6/site-packages/plenum/server/consensus/ordering_service.py", line 1477, in _do_order node3_1 | return self._order_3pc_key(key) node3_1 | File "/home/indy/.pyenv/versions/3.6.9/lib/python3.6/site-packages/plenum/server/consensus/metrics_decorator.py", line 13, in wrapper node3_1 | return f(self, *args, **kwargs) node3_1 | File "/home/indy/.pyenv/versions/3.6.9/lib/python3.6/site-packages/plenum/server/consensus/ordering_service.py", line 1524, in _order_3pc_key node3_1 | self.l_bls_bft_replica.process_order(key, self._data.quorums, pp) node3_1 | File "/home/indy/.pyenv/versions/3.6.9/lib/python3.6/site-packages/plenum/bls/bls_bft_replica_plenum.py", line 170, in process_order node3_1 | self._save_multi_sig_local(bls_multi_sig) node3_1 | File "/home/indy/.pyenv/versions/3.6.9/lib/python3.6/site-packages/plenum/bls/bls_bft_replica_plenum.py", line 301, in _save_multi_sig_local node3_1 | self._bls_bft.bls_store.put(multi_sig) node3_1 | File "/home/indy/.pyenv/versions/3.6.9/lib/python3.6/site-packages/plenum/bls/bls_store.py", line 27, in put node3_1 | self._kvs.put(state_root_hash, serialized_multi_sign) node3_1 | File "/home/indy/.pyenv/versions/3.6.9/lib/python3.6/site-packages/storage/kv_store_rocksdb.py", line 108, in put node3_1 | self._db.put(key, value) node3_1 | File "rocksdb/_rocksdb.pyx", line 1472, in rocksdb._rocksdb.DB.put node3_1 | File "rocksdb/_rocksdb.pyx", line 84, in 
rocksdb._rocksdb.check_status node3_1 | rocksdb.errors.RocksIOError: b'IO error: No space left on deviceWhile appending to file: /home/indy/ledger/sandbox/data/Node3/state_signature/000003.log: No space left on device' node3_1 | node3_1 | During handling of the above exception, another exception occurred: node3_1 | node3_1 | Traceback (most recent call last): node3_1 | File "/home/indy/.pyenv/versions/3.6.9/lib/python3.6/logging/__init__.py", line 998, in emit node3_1 | self.flush() node3_1 | File "/home/indy/.pyenv/versions/3.6.9/lib/python3.6/logging/__init__.py", line 978, in flush node3_1 | self.stream.flush() node3_1 | OSError: [Errno 28] No space left on device node3_1 | Call stack: node3_1 | File "/home/indy/.pyenv/versions/3.6.9/bin/start_indy_node", line 19, in node3_1 | client_ip=sys.argv[4], client_port=int(sys.argv[5])) node3_1 | File "/home/indy/.pyenv/versions/3.6.9/lib/python3.6/site-packages/indy_node/utils/node_runner.py", line 53, in run_node node3_1 | looper.run() node3_1 | File "/home/indy/.pyenv/versions/3.6.9/lib/python3.6/site-packages/stp_core/loop/looper.py", line 297, in __exit__ node3_1 | self.shutdownSync() node3_1 | File "/home/indy/.pyenv/versions/3.6.9/lib/python3.6/site-packages/stp_core/loop/looper.py", line 293, in shutdownSync node3_1 | self.loop.run_until_complete(self.shutdown()) node3_1 | File "/home/indy/.pyenv/versions/3.6.9/lib/python3.6/asyncio/base_events.py", line 471, in run_until_complete node3_1 | self.run_forever() node3_1 | File "/home/indy/.pyenv/versions/3.6.9/lib/python3.6/asyncio/base_events.py", line 438, in run_forever node3_1 | self._run_once() node3_1 | File "/home/indy/.pyenv/versions/3.6.9/lib/python3.6/asyncio/base_events.py", line 1451, in _run_once node3_1 | handle._run() node3_1 | File "/home/indy/.pyenv/versions/3.6.9/lib/python3.6/asyncio/events.py", line 145, in _run node3_1 | self._callback(*self._args) node3_1 | File "/home/indy/.pyenv/versions/3.6.9/lib/python3.6/site-packages/stp_core/loop/looper.py", line 280, in shutdown node3_1 | self.stopall() node3_1 | File "/home/indy/.pyenv/versions/3.6.9/lib/python3.6/site-packages/stp_core/loop/looper.py", line 318, in stopall node3_1 | n.stop() node3_1 | File "/home/indy/.pyenv/versions/3.6.9/lib/python3.6/site-packages/plenum/common/motor.py", line 65, in stop node3_1 | self.onStopping(*args, **kwargs) node3_1 | File "/home/indy/.pyenv/versions/3.6.9/lib/python3.6/site-packages/plenum/server/node.py", line 1001, in onStopping node3_1 | self.reset() node3_1 | File "/home/indy/.pyenv/versions/3.6.9/lib/python3.6/site-packages/plenum/server/node.py", line 1024, in reset node3_1 | logger.info("{} reseting...".format(self), extra={"cli": False}) node3_1 | Message: 'Node3 reseting...' node3_1 | Arguments: () node3_1 | 2020-12-15 12:40:13,814|NOTIFICATION|zstack.py|stack Node3 closing its listener node3_1 | 2020-12-15 12:40:13,815|NOTIFICATION|zstack.py|stack Node3 stopped node3_1 | 2020-12-15 12:40:13,815|NOTIFICATION|zstack.py|stack Node3C closing its listener node3_1 | 2020-12-15 12:40:13,815|NOTIFICATION|zstack.py|stack Node3C stopped node3_1 | 2020-12-15 12:40:13,815|INFO|node.py|Node3 closing key-value storages node3_1 | 2020-12-15 12:40:13,825|INFO|motor.py|Node3 changing status from stopping to stopped node3_1 | 2020-12-15 12:40:13,852|NOTIFICATION|looper.py|Looper shut down in 0.230 seconds. 
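The "--- Logging error ---" blocks and the `OSError: [Errno 28]` tracebacks are secondary failures: once the disk is full, the logging handler itself cannot flush, so the `logging` module prints the handler's traceback plus the call stack of the original log call (hence `Message: 'Node3 reseting...'` with no emitted record) and carries on. A stdlib knob can suppress those secondary reports; this is damage control only and does nothing about the full disk:

```python
# Illustrative only: logging.Handler.handleError() prints "--- Logging error ---"
# reports when a handler's emit()/flush() raises. Disabling raiseExceptions
# silences the reports; the underlying ENOSPC condition remains.
import logging

logging.raiseExceptions = False  # drop failing records quietly instead
```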
node3_1 | Traceback (most recent call last):
node3_1 |   File "/home/indy/.pyenv/versions/3.6.9/bin/start_indy_node", line 19, in <module>
node3_1 |     client_ip=sys.argv[4], client_port=int(sys.argv[5]))
node3_1 |   File "/home/indy/.pyenv/versions/3.6.9/lib/python3.6/site-packages/indy_node/utils/node_runner.py", line 53, in run_node
node3_1 |     looper.run()
node3_1 |   File "/home/indy/.pyenv/versions/3.6.9/lib/python3.6/site-packages/stp_core/loop/looper.py", line 263, in run
node3_1 |     return self.loop.run_until_complete(what)
node3_1 |   File "/home/indy/.pyenv/versions/3.6.9/lib/python3.6/asyncio/base_events.py", line 484, in run_until_complete
node3_1 |     return future.result()
node3_1 |   File "/home/indy/.pyenv/versions/3.6.9/lib/python3.6/site-packages/stp_core/loop/looper.py", line 227, in runForever
node3_1 |     await self.runOnceNicely()
node3_1 |   File "/home/indy/.pyenv/versions/3.6.9/lib/python3.6/site-packages/stp_core/loop/looper.py", line 210, in runOnceNicely
node3_1 |     msgsProcessed = await self.prodAllOnce()
node3_1 |   File "/home/indy/.pyenv/versions/3.6.9/lib/python3.6/site-packages/stp_core/loop/looper.py", line 152, in prodAllOnce
node3_1 |     s += await n.prod(limit)
node3_1 |   File "/home/indy/.pyenv/versions/3.6.9/lib/python3.6/site-packages/indy_node/server/node.py", line 232, in prod
node3_1 |     c = await super().prod(limit)
node3_1 |   File "/home/indy/.pyenv/versions/3.6.9/lib/python3.6/site-packages/plenum/common/metrics_collector.py", line 373, in wrapper
node3_1 |     return await f(self, *args, **kwargs)
node3_1 |   File "/home/indy/.pyenv/versions/3.6.9/lib/python3.6/site-packages/plenum/server/node.py", line 1056, in prod
node3_1 |     c += await self.serviceReplicas(limit)
node3_1 |   File "/home/indy/.pyenv/versions/3.6.9/lib/python3.6/site-packages/plenum/common/metrics_collector.py", line 373, in wrapper
node3_1 |     return await f(self, *args, **kwargs)
node3_1 |   File "/home/indy/.pyenv/versions/3.6.9/lib/python3.6/site-packages/plenum/server/node.py", line 1087, in serviceReplicas
node3_1 |     return self._process_replica_messages(limit)
node3_1 |   File "/home/indy/.pyenv/versions/3.6.9/lib/python3.6/site-packages/plenum/server/node.py", line 1090, in _process_replica_messages
node3_1 |     inbox_processed = self.replicas.service_inboxes(limit)
node3_1 |   File "/home/indy/.pyenv/versions/3.6.9/lib/python3.6/site-packages/plenum/server/replicas.py", line 102, in service_inboxes
node3_1 |     sum(replica.serviceQueues(limit) for replica in self._replicas.values())
node3_1 |   File "/home/indy/.pyenv/versions/3.6.9/lib/python3.6/site-packages/plenum/server/replicas.py", line 102, in <genexpr>
von_node2_1 exited with code 1
von_node1_1 exited with code 1
node4_1 |   File "/home/indy/.pyenv/versions/3.6.9/lib/python3.6/site-packages/plenum/server/consensus/metrics_decorator.py", line 13, in wrapper
node4_1 |     return f(self, *args, **kwargs)
node4_1 |   File "/home/indy/.pyenv/versions/3.6.9/lib/python3.6/site-packages/plenum/server/consensus/ordering_service.py", line 451, in process_commit
node4_1 |     self._add_to_commits(commit, sender)
node4_1 |   File "/home/indy/.pyenv/versions/3.6.9/lib/python3.6/site-packages/plenum/server/consensus/ordering_service.py", line 1451, in _add_to_commits
node4_1 |     self._try_order(commit)
node4_1 |   File "/home/indy/.pyenv/versions/3.6.9/lib/python3.6/site-packages/plenum/server/consensus/ordering_service.py", line 1463, in _try_order
node4_1 |     self._do_order(commit)
node4_1 |   File "/home/indy/.pyenv/versions/3.6.9/lib/python3.6/site-packages/plenum/server/consensus/ordering_service.py", line 1477, in _do_order
node4_1 |     return self._order_3pc_key(key)
node4_1 |   File "/home/indy/.pyenv/versions/3.6.9/lib/python3.6/site-packages/plenum/server/consensus/metrics_decorator.py", line 13, in wrapper
node4_1 |     return f(self, *args, **kwargs)
node4_1 |   File "/home/indy/.pyenv/versions/3.6.9/lib/python3.6/site-packages/plenum/server/consensus/ordering_service.py", line 1544, in _order_3pc_key
node4_1 |     self._try_finish_reordering_after_vc(key[1])
node4_1 |   File "/home/indy/.pyenv/versions/3.6.9/lib/python3.6/site-packages/plenum/server/consensus/ordering_service.py", line 1471, in _try_finish_reordering_after_vc
node4_1 |     self._bus.send(MasterReorderedAfterVC())
node4_1 |   File "/home/indy/.pyenv/versions/3.6.9/lib/python3.6/site-packages/plenum/common/event_bus.py", line 8, in send
node4_1 |     self._route(message, *args)
node4_1 |   File "/home/indy/.pyenv/versions/3.6.9/lib/python3.6/site-packages/plenum/common/router.py", line 49, in _route
node4_1 |     return [handler(message, *args) for handler in handlers]
node4_1 |   File "/home/indy/.pyenv/versions/3.6.9/lib/python3.6/site-packages/plenum/common/router.py", line 49, in <listcomp>
node4_1 |     return [handler(message, *args) for handler in handlers]
node4_1 |   File "/home/indy/.pyenv/versions/3.6.9/lib/python3.6/site-packages/plenum/server/node.py", line 3182, in _process_master_reordered
node4_1 |     self.on_first_batch_in_view_committed()
node4_1 |   File "/home/indy/.pyenv/versions/3.6.9/lib/python3.6/site-packages/plenum/server/node.py", line 2746, in on_first_batch_in_view_committed
node4_1 |     self.setPoolParams()
node4_1 |   File "/home/indy/.pyenv/versions/3.6.9/lib/python3.6/site-packages/plenum/server/node.py", line 758, in setPoolParams
node4_1 |     self.minimumNodes, self.quorums))
node4_1 | Message: "Node4 updated its pool parameters: f 1, totalNodes 4, allNodeNames {'Node3', 'Node2', 'Node1', 'Node4'}, requiredNumberOfInstances 2, minimumNodes 3, quorums {'n': 4, 'f': 1, 'weak': Quorum(2), 'strong': Quorum(3), 'propagate': Quorum(2), 'prepare': Quorum(2), 'commit': Quorum(3), 'reply': Quorum(2), 'view_change': Quorum(3), 'election': Quorum(3), 'view_change_ack': Quorum(2), 'view_change_done': Quorum(3), 'same_consistency_proof': Quorum(2), 'consistency_proof': Quorum(2), 'ledger_status': Quorum(2), 'ledger_status_last_3PC': Quorum(2), 'checkpoint': Quorum(2), 'timestamp': Quorum(2), 'bls_signatures': Quorum(3), 'observer_data': Quorum(2), 'backup_instance_faulty': Quorum(2)}"
node4_1 | Arguments: ()
node4_1 | 2020-12-15 12:40:13,733|INFO|consensus_shared_data.py|Node4:0 updated validators list to {'Node3', 'Node2', 'Node1', 'Node4'}
node4_1 | 2020-12-15 12:40:13,734|INFO|consensus_shared_data.py|Node4:1 updated validators list to {'Node3', 'Node2', 'Node1', 'Node4'}
node4_1 | 2020-12-15 12:40:13,829|INFO|ordering_service.py|Node4:0 set last ordered as (0, 2)
node4_1 | 2020-12-15 12:40:13,902|INFO|ordering_service.py|Node4:0 ordered batch request, view no 0, ppSeqNo 2, ledger 1, state root GWEizEWQdGH5QFdxRpoQGvPR42zSUEJoFkQKGBHiNDbM, txn root DJLzEifT7n9DbiCHqQ5KWrWUPFBiZt349popapDfzo5p, audit root 2C7f9zHRc6guUchrEE6FxWJsB2FPsiGvntT7ct8qyThb, requests ordered 0, discarded 0
node4_1 | 2020-12-15 12:40:13,977|INFO|ordering_service.py|Node4:0 set last ordered as (0, 3)
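The pool-parameters message the logger was trying to write follows the standard BFT quorum arithmetic. A small sketch (our helper, not plenum's API) reproduces the logged numbers:

```python
# With n validators a BFT pool tolerates f = (n - 1) // 3 faults; a "weak"
# quorum (f + 1) guarantees at least one honest node, a "strong" quorum
# (n - f) is enough to commit despite f faults.
def bft_quorums(n: int) -> dict:
    f = (n - 1) // 3
    return {"n": n, "f": f, "weak": f + 1, "strong": n - f}

print(bft_quorums(4))  # {'n': 4, 'f': 1, 'weak': 2, 'strong': 3}
                       # matches: f 1, weak Quorum(2), strong/commit Quorum(3)
```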
node4_1 | 2020-12-15 12:40:13,997|INFO|ordering_service.py|Node4:0 ordered batch request, view no 0, ppSeqNo 3, ledger 2, state root DfNLmH4DAHTKv63YPFJzuRdeEtVwF5RtVnvKYHd8iLEA, txn root GKot5hBsd81kMupNCXHaqbhv3huEbxAFMLnpcX2hniwn, audit root 72SeU4CNRp4DxADfzJmvVSmyxxVVez5B3kkoYe6iDu61, requests ordered 0, discarded 0
node4_1 | 2020-12-15 12:40:14,044|NOTIFICATION|keep_in_touch.py|Node4's connections changed from {'Node3', 'Node2', 'Node1'} to set()
node4_1 | 2020-12-15 12:40:14,044|NOTIFICATION|keep_in_touch.py|CONNECTION: Node4 disconnected from Node3
node4_1 | 2020-12-15 12:40:14,044|NOTIFICATION|keep_in_touch.py|CONNECTION: Node4 disconnected from Node2
node4_1 | 2020-12-15 12:40:14,044|NOTIFICATION|keep_in_touch.py|CONNECTION: Node4 disconnected from Node1
node4_1 | 2020-12-15 12:40:14,044|INFO|motor.py|Node4 changing status from started to starting
node4_1 | 2020-12-15 12:40:14,044|NOTIFICATION|primary_connection_monitor_service.py|Node4:0 lost connection to primary of master
node4_1 | 2020-12-15 12:40:14,045|INFO|primary_connection_monitor_service.py|Node4:0 scheduling primary connection check in 60 sec
node4_1 | 2020-12-15 12:40:14,045|INFO|node.py|Node4 scheduling replica removal for instance 1 in 180 sec
node4_1 | 2020-12-15 12:40:14,045|INFO|node.py|Node4 joined nodes set() but status is 2
node4_1 | 2020-12-15 12:40:14,045|WARNING|node.py|Suspecting inconsistent 3PC state, going to restart in 90 seconds
node4_1 | 2020-12-15 12:40:14,045|INFO|restarter.py|Node 'Node4' schedules restart
node4_1 | 2020-12-15 12:40:14,045|INFO|restarter.py|4PS3EDQ3dW1tci1Bp6543CfuuebjFrg36kLAUcskGfaA's restarter processing restart
node4_1 | 2020-12-15 12:40:14,045|INFO|restarter.py|Restart of node 'Node4' has been scheduled on 2020-12-15 12:41:44.045485+00:00
node4_1 | 2020-12-15 12:41:14,047|NOTIFICATION|primary_connection_monitor_service.py|Node4:0 primary has been disconnected for too long
node4_1 | 2020-12-15 12:41:14,047|INFO|primary_connection_monitor_service.py|Node4:0 The node is not ready yet so view change will not be proposed now, but re-scheduled.
node4_1 | 2020-12-15 12:41:14,047|INFO|primary_connection_monitor_service.py|Node4:0 scheduling primary connection check in 60 sec
node3_1 |     sum(replica.serviceQueues(limit) for replica in self._replicas.values())
node3_1 |   File "/home/indy/.pyenv/versions/3.6.9/lib/python3.6/site-packages/plenum/server/replica.py", line 74, in wrapper
node3_1 |     return f(self, *args, **kwargs)
node3_1 |   File "/home/indy/.pyenv/versions/3.6.9/lib/python3.6/site-packages/plenum/server/replica.py", line 527, in serviceQueues
node3_1 |     r += self._handle_external_messages(self.inBox, limit)
node3_1 |   File "/home/indy/.pyenv/versions/3.6.9/lib/python3.6/site-packages/plenum/server/replica.py", line 549, in _handle_external_messages
node3_1 |     self._external_bus.process_incoming(external_msg, sender)
node3_1 |   File "/home/indy/.pyenv/versions/3.6.9/lib/python3.6/site-packages/plenum/common/event_bus.py", line 33, in process_incoming
node3_1 |     self._route(message, frm)
node3_1 |   File "/home/indy/.pyenv/versions/3.6.9/lib/python3.6/site-packages/plenum/common/router.py", line 49, in _route
node3_1 |     return [handler(message, *args) for handler in handlers]
node3_1 |   File "/home/indy/.pyenv/versions/3.6.9/lib/python3.6/site-packages/plenum/common/router.py", line 49, in <listcomp>
node3_1 |     return [handler(message, *args) for handler in handlers]
node3_1 |   File "/home/indy/.pyenv/versions/3.6.9/lib/python3.6/site-packages/plenum/common/stashing_router.py", line 172, in _process
node3_1 |     result = handler(message, *args)
node3_1 |   File "/home/indy/.pyenv/versions/3.6.9/lib/python3.6/site-packages/plenum/server/consensus/metrics_decorator.py", line 13, in wrapper
node3_1 |     return f(self, *args, **kwargs)
node3_1 |   File "/home/indy/.pyenv/versions/3.6.9/lib/python3.6/site-packages/plenum/server/consensus/ordering_service.py", line 451, in process_commit
node3_1 |     self._add_to_commits(commit, sender)
node3_1 |   File "/home/indy/.pyenv/versions/3.6.9/lib/python3.6/site-packages/plenum/server/consensus/ordering_service.py", line 1451, in _add_to_commits
node3_1 |     self._try_order(commit)
node3_1 |   File "/home/indy/.pyenv/versions/3.6.9/lib/python3.6/site-packages/plenum/server/consensus/ordering_service.py", line 1463, in _try_order
node3_1 |     self._do_order(commit)
node3_1 |   File "/home/indy/.pyenv/versions/3.6.9/lib/python3.6/site-packages/plenum/server/consensus/ordering_service.py", line 1477, in _do_order
node3_1 |     return self._order_3pc_key(key)
node3_1 |   File "/home/indy/.pyenv/versions/3.6.9/lib/python3.6/site-packages/plenum/server/consensus/metrics_decorator.py", line 13, in wrapper
node3_1 |     return f(self, *args, **kwargs)
node3_1 |   File "/home/indy/.pyenv/versions/3.6.9/lib/python3.6/site-packages/plenum/server/consensus/ordering_service.py", line 1524, in _order_3pc_key
node3_1 |     self.l_bls_bft_replica.process_order(key, self._data.quorums, pp)
node3_1 |   File "/home/indy/.pyenv/versions/3.6.9/lib/python3.6/site-packages/plenum/bls/bls_bft_replica_plenum.py", line 170, in process_order
node3_1 |     self._save_multi_sig_local(bls_multi_sig)
node3_1 |   File "/home/indy/.pyenv/versions/3.6.9/lib/python3.6/site-packages/plenum/bls/bls_bft_replica_plenum.py", line 301, in _save_multi_sig_local
node3_1 |     self._bls_bft.bls_store.put(multi_sig)
node3_1 |   File "/home/indy/.pyenv/versions/3.6.9/lib/python3.6/site-packages/plenum/bls/bls_store.py", line 27, in put
node3_1 |     self._kvs.put(state_root_hash, serialized_multi_sign)
node3_1 |   File "/home/indy/.pyenv/versions/3.6.9/lib/python3.6/site-packages/storage/kv_store_rocksdb.py", line 108, in put
node3_1 |     self._db.put(key, value)
node3_1 |   File "rocksdb/_rocksdb.pyx", line 1472, in rocksdb._rocksdb.DB.put
node3_1 |   File "rocksdb/_rocksdb.pyx", line 84, in rocksdb._rocksdb.check_status
node3_1 | rocksdb.errors.RocksIOError: b'IO error: No space left on deviceWhile appending to file: /home/indy/ledger/sandbox/data/Node3/state_signature/000003.log: No space left on device'
von_node3_1 exited with code 1
node4_1 | 2020-12-15 12:41:44,051|INFO|restarter.py|4PS3EDQ3dW1tci1Bp6543CfuuebjFrg36kLAUcskGfaA's restart calling agent for restart
node4_1 | 2020-12-15 12:41:44,053|INFO|restarter.py|Sending message to control tool: {"message_type": "restart"}
node4_1 | 2020-12-15 12:41:44,053|WARNING|restarter.py|Failed to communicate to control tool: [Errno 111] Connect call failed ('127.0.0.1', 30003)
node4_1 | 2020-12-15 12:41:44,053|INFO|restarter.py|Sending message to control tool: {"message_type": "restart"}
node4_1 | 2020-12-15 12:41:44,054|WARNING|restarter.py|Failed to communicate to control tool: [Errno 111] Connect call failed ('127.0.0.1', 30003)
node4_1 | 2020-12-15 12:41:44,054|INFO|restarter.py|Sending message to control tool: {"message_type": "restart"}
node4_1 | 2020-12-15 12:41:44,054|WARNING|restarter.py|Failed to communicate to control tool: [Errno 111] Connect call failed ('127.0.0.1', 30003)
node4_1 | 2020-12-15 12:41:44,054|ERROR|base_events.py|Task exception was never retrieved
node4_1 | future: exception=TypeError('_action_failed() takes 1 positional argument but 2 positional arguments (and 1 keyword-only argument) were given',)>
node4_1 | Traceback (most recent call last):
node4_1 |   File "/home/indy/.pyenv/versions/3.6.9/lib/python3.6/site-packages/indy_node/server/restarter.py", line 203, in _sendUpdateRequest
node4_1 |     reason="problems in communication with "
node4_1 | TypeError: _action_failed() takes 1 positional argument but 2 positional arguments (and 1 keyword-only argument) were given
webserver_1 | 2020-12-15 12:41:50,976|INFO|web_log.py|172.19.0.1 [15/Dec/2020:12:41:50 +0000] "GET /status?validators= HTTP/1.1" 200 331 "http://localhost:9000/" "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:79.0) Gecko/20100101 Firefox/79.0"
webserver_1 | 2020-12-15 12:42:00,986|INFO|web_log.py|172.19.0.1 [15/Dec/2020:12:42:00 +0000] "GET /status?validators= HTTP/1.1" 200 331 "http://localhost:9000/" "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:79.0) Gecko/20100101 Firefox/79.0"
webserver_1 | 2020-12-15 12:42:10,992|INFO|web_log.py|172.19.0.1 [15/Dec/2020:12:42:10 +0000] "GET /status?validators= HTTP/1.1" 200 331 "http://localhost:9000/" "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:79.0) Gecko/20100101 Firefox/79.0"
node4_1 | 2020-12-15 12:42:14,048|NOTIFICATION|primary_connection_monitor_service.py|Node4:0 primary has been disconnected for too long
node4_1 | 2020-12-15 12:42:14,049|INFO|primary_connection_monitor_service.py|Node4:0 The node is not ready yet so view change will not be proposed now, but re-scheduled.
node4_1 | 2020-12-15 12:42:14,049|INFO|primary_connection_monitor_service.py|Node4:0 scheduling primary connection check in 60 sec
webserver_1 | 2020-12-15 12:42:21,000|INFO|web_log.py|172.19.0.1 [15/Dec/2020:12:42:20 +0000] "GET /status?validators= HTTP/1.1" 200 331 "http://localhost:9000/" "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:79.0) Gecko/20100101 Firefox/79.0"
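Separately from the disk problem, Node4's failed restart surfaces a genuine bug in restarter.py's error path: the `TypeError` above is exactly what Python 3.6 raises when a method whose extra parameters are keyword-only receives one of them positionally. A minimal repro with a toy class of our own (not indy-node's actual code):

```python
# Reproduces the class of bug behind the restarter TypeError: the parameters
# after the bare * are keyword-only, so passing one positionally fails.
class Restarter:
    def _action_failed(self, *, reason: str) -> None:
        print("restart failed:", reason)

r = Restarter()
try:
    # one extra positional arg (plus self) + one keyword-only arg supplied
    r._action_failed("control tool", reason="unreachable")
except TypeError as e:
    print(e)
    # _action_failed() takes 1 positional argument but 2 positional
    # arguments (and 1 keyword-only argument) were given
```

The fix is to pass every such argument by keyword at the call site (or drop the stray positional); because the exception escaped inside an asyncio task, it also produced the "Task exception was never retrieved" error seen above.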