markparz (Thu, 02 Feb 2017 23:33:41 GMT):
Discussion regarding consensus protocols
markparz (Thu, 02 Feb 2017 23:33:56 GMT):
Discussion regarding orderers and consensus protocols
donjohnny (Thu, 02 Feb 2017 23:39:28 GMT):
Has joined the channel.
C0rWin (Fri, 03 Feb 2017 00:10:47 GMT):
Has joined the channel.
kostas (Fri, 03 Feb 2017 00:16:16 GMT):
Has joined the channel.
mastersingh24 (Fri, 03 Feb 2017 00:49:41 GMT):
Has joined the channel.
salmanbaset (Fri, 03 Feb 2017 01:19:19 GMT):
Has joined the channel.
tuand (Fri, 03 Feb 2017 02:09:05 GMT):
Has joined the channel.
grapebaba (Fri, 03 Feb 2017 02:24:19 GMT):
Has joined the channel.
timblankers (Fri, 03 Feb 2017 02:43:17 GMT):
Has joined the channel.
SriramaSharma (Fri, 03 Feb 2017 03:23:19 GMT):
Has joined the channel.
didnotgetagoodname (Fri, 03 Feb 2017 03:57:04 GMT):
Has joined the channel.
frbrkoala (Fri, 03 Feb 2017 07:01:58 GMT):
Has joined the channel.
nvlasov (Fri, 03 Feb 2017 07:13:08 GMT):
Has joined the channel.
cca88 (Fri, 03 Feb 2017 08:19:17 GMT):
Has joined the channel.
jdockter (Fri, 03 Feb 2017 11:38:10 GMT):
Has joined the channel.
dante (Fri, 03 Feb 2017 13:10:54 GMT):
Has joined the channel.
silliman (Fri, 03 Feb 2017 13:56:09 GMT):
Has joined the channel.
gormand (Fri, 03 Feb 2017 14:25:15 GMT):
Has joined the channel.
scottz (Fri, 03 Feb 2017 14:48:41 GMT):
Has joined the channel.
jyellick (Fri, 03 Feb 2017 14:51:53 GMT):
Has joined the channel.
kletkeman (Fri, 03 Feb 2017 15:19:15 GMT):
Has joined the channel.
yacovm (Fri, 03 Feb 2017 15:25:53 GMT):
Has joined the channel.
ssaddem (Fri, 03 Feb 2017 15:54:10 GMT):
Has joined the channel.
latitiah (Fri, 03 Feb 2017 15:54:45 GMT):
Has joined the channel.
Nishi (Fri, 03 Feb 2017 18:31:05 GMT):
Has joined the channel.
rickr (Fri, 03 Feb 2017 19:01:36 GMT):
Has joined the channel.
fz (Fri, 03 Feb 2017 19:27:32 GMT):
Has joined the channel.
karkal (Fri, 03 Feb 2017 19:51:14 GMT):
Has joined the channel.
stanliberman (Fri, 03 Feb 2017 20:44:41 GMT):
Has joined the channel.
zmanian (Fri, 03 Feb 2017 22:04:47 GMT):
Has joined the channel.
genggjh (Sat, 04 Feb 2017 03:43:47 GMT):
Has joined the channel.
bfuentes@fr.ibm.com (Sat, 04 Feb 2017 09:19:47 GMT):
Has joined the channel.
MadhavaReddy (Sat, 04 Feb 2017 18:12:55 GMT):
Has joined the channel.
jeffgarratt (Sat, 04 Feb 2017 18:44:32 GMT):
Has joined the channel.
patchpon (Sun, 05 Feb 2017 10:07:09 GMT):
Has joined the channel.
mandler (Sun, 05 Feb 2017 10:46:56 GMT):
Has joined the channel.
Honglei (Sun, 05 Feb 2017 23:56:34 GMT):
Has joined the channel.
bryanhuang (Mon, 06 Feb 2017 04:32:07 GMT):
Has joined the channel.
rascal-3 (Mon, 06 Feb 2017 05:40:39 GMT):
Has joined the channel.
gennadyl (Mon, 06 Feb 2017 08:19:27 GMT):
Has joined the channel.
david.peyronnin (Mon, 06 Feb 2017 09:48:19 GMT):
Has joined the channel.
aarenw (Mon, 06 Feb 2017 10:01:49 GMT):
Has joined the channel.
harrijk (Mon, 06 Feb 2017 15:13:44 GMT):
Has joined the channel.
DennisM330 (Mon, 06 Feb 2017 15:37:39 GMT):
Has joined the channel.
yuki-kon (Mon, 06 Feb 2017 16:52:15 GMT):
Has joined the channel.
yuki-kon (Mon, 06 Feb 2017 16:52:30 GMT):
Has left the channel.
joshhus (Mon, 06 Feb 2017 17:29:52 GMT):
Has joined the channel.
karan-bharadwaj (Mon, 06 Feb 2017 17:55:39 GMT):
Has joined the channel.
weeds (Mon, 06 Feb 2017 20:23:46 GMT):
Has joined the channel.
mastersingh24 (Mon, 06 Feb 2017 21:48:23 GMT):
@jyellick - https://gerrit.hyperledger.org/r/#/c/4997/ - had a comment on it, but overall looks good. is this something you want to get in soon?
mastersingh24 (Mon, 06 Feb 2017 21:48:53 GMT):
seems like we'll need this to complete the items I tried to lay out in #fabric-maintainers for an alpha?
crazybit (Tue, 07 Feb 2017 05:31:05 GMT):
Has joined the channel.
bur (Tue, 07 Feb 2017 07:32:33 GMT):
Has joined the channel.
xixuejia (Tue, 07 Feb 2017 07:51:12 GMT):
Has joined the channel.
zlliu (Tue, 07 Feb 2017 08:17:37 GMT):
Has joined the channel.
vitaly.ilinykh (Tue, 07 Feb 2017 08:40:06 GMT):
Has joined the channel.
brianeno (Tue, 07 Feb 2017 10:52:51 GMT):
Has joined the channel.
vukolic (Tue, 07 Feb 2017 11:03:05 GMT):
Has joined the channel.
jyellick (Tue, 07 Feb 2017 13:58:41 GMT):
@mastersingh24 That CR will likely be ultimately abandoned, as things get pushed in incrementally. It was for feedback on the overall design. You should start seeing the incremental CRs soon.
mastersingh24 (Tue, 07 Feb 2017 14:04:33 GMT):
ok - in order to drive the list I posted in #fabric-maintainers , we really need to have a near final config block structure
reachk.raj (Tue, 07 Feb 2017 14:15:25 GMT):
Has joined the channel.
jyellick (Tue, 07 Feb 2017 14:34:01 GMT):
Totally agree, will try to turn this around ASAP
jyellick (Tue, 07 Feb 2017 14:36:16 GMT):
@mastersingh24 But there are 10+ CRs already stacked up out there which could use some review if you are looking to help
mastersingh24 (Tue, 07 Feb 2017 14:36:53 GMT):
all from you? I think +2 a bunch from you, but will go back through
jyellick (Tue, 07 Feb 2017 14:37:04 GMT):
All from me
mastersingh24 (Tue, 07 Feb 2017 14:37:11 GMT):
well that's easy then ;)
jyellick (Tue, 07 Feb 2017 14:37:11 GMT):
You may have +2-ed in the past, but, merge conflicts and rebasing
cbf (Tue, 07 Feb 2017 15:13:58 GMT):
Has joined the channel.
Tetiana (Tue, 07 Feb 2017 15:17:29 GMT):
Has joined the channel.
adc (Tue, 07 Feb 2017 15:18:21 GMT):
Has joined the channel.
mihaig (Tue, 07 Feb 2017 15:20:32 GMT):
Has joined the channel.
mpage (Tue, 07 Feb 2017 15:30:25 GMT):
Has joined the channel.
jansony1 (Tue, 07 Feb 2017 15:57:44 GMT):
Has joined the channel.
sukrit.handa@gmail.com (Tue, 07 Feb 2017 15:58:02 GMT):
Has joined the channel.
umasuthan (Tue, 07 Feb 2017 16:03:58 GMT):
Has joined the channel.
gdinhof (Tue, 07 Feb 2017 16:08:05 GMT):
Has joined the channel.
troyronda (Tue, 07 Feb 2017 16:33:17 GMT):
Has joined the channel.
elli-androulaki (Tue, 07 Feb 2017 20:35:28 GMT):
Has joined the channel.
rahulhegde (Tue, 07 Feb 2017 20:44:32 GMT):
Has joined the channel.
beauson45 (Tue, 07 Feb 2017 20:56:12 GMT):
Has joined the channel.
SivaKannan (Tue, 07 Feb 2017 21:06:16 GMT):
Has joined the channel.
Asara (Tue, 07 Feb 2017 21:35:59 GMT):
Has joined the channel.
Asara (Tue, 07 Feb 2017 21:36:52 GMT):
Hey guys, trying to understand the consensus/consenting service. This is just a service that runs on peers correct? It isn't a type of server, but rather a service running on the peers that allows peers to subscribe to different channels
tuand (Tue, 07 Feb 2017 21:42:25 GMT):
@Asara , for V0.6, consensus ran as part of the peer process, i.e. all peers participated in consensus then ran the chaincode ... for v1.0, it is now an ordering service running separately from the peer
Asara (Tue, 07 Feb 2017 21:43:17 GMT):
so are there dedicated 'consensus servers'?
tuand (Tue, 07 Feb 2017 21:43:43 GMT):
note that consensus is about agreeing on the order of the transactions, not on the result of a transaction
tuand (Tue, 07 Feb 2017 21:44:16 GMT):
dedicated orderers, yes
Asara (Tue, 07 Feb 2017 21:45:00 GMT):
https://camo.githubusercontent.com/59163c231574dc2a82b7f6a11a956780229225f9/687474703a2f2f76756b6f6c69632e636f6d2f68797065726c65646765722f666c6f772d322e706e67
Asara (Tue, 07 Feb 2017 21:46:00 GMT):
so looking at this, the consenters are a cluster of servers that will achieve a consensus on the order of transactions, and then send that order to all peers listening on that channel?
tuand (Tue, 07 Feb 2017 21:47:07 GMT):
right
Asara (Tue, 07 Feb 2017 21:47:42 GMT):
are these consenters also subscribed to a channel? or are they agnostic to who/what is making these transactions
tuand (Tue, 07 Feb 2017 21:49:19 GMT):
orderers can handle multiple channels
tuand (Tue, 07 Feb 2017 21:51:04 GMT):
btw, orderers don't care about what's in a transaction; it's just a blob that needs to have a sequence number assigned to it
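The sequencing behavior tuand describes — opaque blobs receiving sequence numbers in arrival order — can be sketched roughly as follows (hypothetical simplified types; the real fabric protos and batching logic differ):

```go
package main

import "fmt"

// Envelope stands in for the opaque transaction blob the orderer handles.
type Envelope []byte

// Block pairs a sequence number with a batch of blobs.
type Block struct {
	Number uint64
	Txs    []Envelope
}

// cut batches blobs into blocks of at most batchSize, assigning block
// numbers in arrival order -- the orderer never inspects the payloads.
func cut(txs []Envelope, batchSize int) []Block {
	var blocks []Block
	for len(txs) > 0 {
		n := batchSize
		if len(txs) < n {
			n = len(txs)
		}
		blocks = append(blocks, Block{Number: uint64(len(blocks)), Txs: txs[:n]})
		txs = txs[n:]
	}
	return blocks
}

func main() {
	txs := []Envelope{[]byte("a"), []byte("b"), []byte("c")}
	for _, b := range cut(txs, 2) {
		fmt.Printf("block %d: %d txs\n", b.Number, len(b.Txs))
	}
}
```

The point of the sketch is only that ordering operates on byte blobs and produces a total order; validation and execution happen elsewhere, on the peers.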
Asara (Tue, 07 Feb 2017 21:51:25 GMT):
can you direct me towards some documentation for orderers/consenters. Trying to design an architecture and I'm still a little confused about them
Asara (Tue, 07 Feb 2017 21:52:02 GMT):
actually never mind, found https://github.com/hyperledger/fabric/tree/master/orderer
tuand (Tue, 07 Feb 2017 21:53:03 GMT):
the fabric design docs on the wiki https://wiki.hyperledger.org/community/fabric-design-docs
Asara (Tue, 07 Feb 2017 21:53:30 GMT):
@tuand thanks
mgk (Tue, 07 Feb 2017 22:07:52 GMT):
Has joined the channel.
jiangyaoguo (Wed, 08 Feb 2017 02:18:15 GMT):
Has joined the channel.
bobbiejc (Wed, 08 Feb 2017 02:26:06 GMT):
Has joined the channel.
johnm (Wed, 08 Feb 2017 02:26:18 GMT):
Has joined the channel.
py (Wed, 08 Feb 2017 03:25:48 GMT):
Has joined the channel.
rjkuro (Wed, 08 Feb 2017 03:30:28 GMT):
Has joined the channel.
haidong (Wed, 08 Feb 2017 04:00:22 GMT):
Has joined the channel.
vigneswaran.r (Wed, 08 Feb 2017 04:25:26 GMT):
Has joined the channel.
niteshsolanki (Wed, 08 Feb 2017 08:54:46 GMT):
Has joined the channel.
bravera (Wed, 08 Feb 2017 10:43:50 GMT):
Has joined the channel.
RistoAlas (Wed, 08 Feb 2017 10:58:01 GMT):
Has joined the channel.
pipor (Wed, 08 Feb 2017 13:47:00 GMT):
Has joined the channel.
vbortnik (Wed, 08 Feb 2017 15:56:40 GMT):
Has joined the channel.
jkirke (Wed, 08 Feb 2017 16:33:56 GMT):
Has joined the channel.
kelly_ (Wed, 08 Feb 2017 19:21:20 GMT):
Has joined the channel.
tirumaha (Wed, 08 Feb 2017 23:41:35 GMT):
Has joined the channel.
warong (Thu, 09 Feb 2017 02:51:35 GMT):
Has joined the channel.
zhoupeiwen (Thu, 09 Feb 2017 03:23:48 GMT):
Has joined the channel.
chenxl (Thu, 09 Feb 2017 03:30:07 GMT):
Has joined the channel.
mbaizan (Thu, 09 Feb 2017 07:28:30 GMT):
Has joined the channel.
mathiasb303 (Thu, 09 Feb 2017 10:02:14 GMT):
Has joined the channel.
eragnoli (Thu, 09 Feb 2017 10:54:13 GMT):
Has joined the channel.
jonreid (Thu, 09 Feb 2017 11:42:48 GMT):
Has joined the channel.
kenzhang (Fri, 10 Feb 2017 01:10:19 GMT):
Has joined the channel.
scottz (Fri, 10 Feb 2017 01:57:25 GMT):
Could I get a developer on the consensus squad *with a mac* to do a quick clone and test of this Orderer Traffic Engine (OTE) tool that we developed? @jyellick @kostas @sanchezl I am submitting it to gerrit as https://gerrit.hyperledger.org/r/#/c/5821/ , and you could consider this as part of the work to inspect/validate. Our Quality Assurance squad only has x86 Linux laptops, so this testing on a mac would help confirm it can be used on the mac platform - and would introduce you to how it works so you can quickly rerun it as you solve the bugs we created! OTE incorporates much of the broadcast and deliver clients code into one application and drives traffic and compares all the counters. It is not the prettiest code ever written, but it is stable and works well for what we need. Since it is not approved and merged yet, I don't know gerrit enough to know if you can actually get the code from its new location in bddtests/regression/go/ote/; if not then "git clone https://github.com/suryalnvs/otd/tree/development"
sanchezl (Fri, 10 Feb 2017 01:57:25 GMT):
Has joined the channel.
jyellick (Fri, 10 Feb 2017 01:57:49 GMT):
No mac here, sorry
rnsastry (Fri, 10 Feb 2017 09:17:07 GMT):
Has joined the channel.
kostas (Fri, 10 Feb 2017 09:58:53 GMT):
@scottz On it once I come back from vacation on Monday.
tsnyder (Fri, 10 Feb 2017 13:37:04 GMT):
Has joined the channel.
sanchezl (Fri, 10 Feb 2017 14:34:22 GMT):
[ ](https://chat.hyperledger.org/channel/fabric-consensus?msg=HH5aACaYq67i6NmvA) @scottz I'll take a look next week also.
scottz (Fri, 10 Feb 2017 14:54:27 GMT):
Thanks guys. And I just noticed your update 5803, @jyellick , that moved the orderer genesis items out of orderer.yaml into the new config file genesis.yaml. I will have to update this code to accommodate that change, the same way you updated the sample_clients code files.
jyellick (Fri, 10 Feb 2017 14:55:23 GMT):
@scottz be aware that more changes to genesis are in the pipeline; note that the `genesis.yaml` file is moving
scottz (Fri, 10 Feb 2017 15:00:23 GMT):
thx. I will keep my eyes open. I guess whenever the sample_clients broadcast client or deliver client is touched, we will probably have to modify the OTE tool too.
dave.enyeart (Fri, 10 Feb 2017 15:25:19 GMT):
Has joined the channel.
dave.enyeart (Fri, 10 Feb 2017 15:26:28 GMT):
@jyellick @kostas @sanchezl I received this question from solutions team. Can you help with a reply?
dave.enyeart (Fri, 10 Feb 2017 15:26:31 GMT):
Q. Even if there are multiple private chains, the Orderers can see all the transactions, which means they are the single point of trust. This is not acceptable in some use cases. Is there any plans to make orderers decentralized?
jyellick (Fri, 10 Feb 2017 15:33:31 GMT):
@dave.enyeart
> Even if there are multiple private chains, the Orderers can see all the transactions, which means they are the single point of trust.
This is generally true. The orderers do receive a higher degree of trust, and they must necessarily see all channels and their membership. I would point out, however, that the orderers only see the information which passes through them. Certain pieces of the transaction, like the proposal, can be configured not to go through ordering. Similarly, any data which is referenced by hash, or encrypted, would be opaque to the orderers.
> This is not acceptable in some use cases. Is there any plans to make orderers decentralized?
I'm not sure I understand this portion of the question. There are other ordering protocols in the works, like SBFT, which allow further decentralization by allowing more parties to take part thanks to Byzantine fault tolerance. But this does not really solve the confidentiality problem; in fact, distributing it makes it worse, as there are more potential information leakage points.
I would point out that ordering does not necessarily need to be centralized. A single peer could participate with many different ordering networks. (This is not necessarily targeted for v1, but the architecture does explicitly intend this)
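The hash-reference point above — that data referenced by hash is opaque to the orderers — can be illustrated with a minimal sketch (the `digest` helper is illustrative, not a fabric API):

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

// digest computes the hex-encoded SHA-256 of a payload. A client that wants
// to keep a payload confidential can send only this digest through ordering;
// parties holding the payload can later recompute it to verify.
func digest(payload []byte) string {
	sum := sha256.Sum256(payload)
	return hex.EncodeToString(sum[:])
}

func main() {
	secret := []byte("private contract terms")
	d := digest(secret)
	// The ordering service sees (and sequences) only d, a fixed-size digest,
	// which reveals nothing about the payload itself.
	fmt.Println(len(d), digest(secret) == d)
}
```

The orderer can still totally order such transactions, because ordering needs only the blob, not its meaning.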
dave.enyeart (Fri, 10 Feb 2017 15:35:31 GMT):
thanks, makes sense, I'll convey back
idiaz5 (Fri, 10 Feb 2017 17:33:24 GMT):
Has joined the channel.
lehors (Fri, 10 Feb 2017 19:47:04 GMT):
Has joined the channel.
lehors (Fri, 10 Feb 2017 19:47:11 GMT):
hi there
lehors (Fri, 10 Feb 2017 19:47:21 GMT):
are you guys familiar with Algorand?
lehors (Fri, 10 Feb 2017 19:47:40 GMT):
I just learned about it and it looks pretty interesting
lehors (Fri, 10 Feb 2017 19:47:55 GMT):
I was wondering whether this something that has been discussed
lehors (Fri, 10 Feb 2017 19:48:04 GMT):
http://www.the-blockchain.com/2017/01/05/move-bitcoin-mit-cryptographer-silvio-micali-public-ledger-algorand-future-blockchain/
lehors (Fri, 10 Feb 2017 20:18:06 GMT):
the explanation in the video really starts at 38 minute
jyellick (Sat, 11 Feb 2017 03:53:26 GMT):
@lehors Thanks for the link, it's an interesting approach. For now, our emphasis has been on consensus algorithms which leverage permissioned networks (like PBFT), but hyperledger fabric has been designed explicitly to be able to swap the consensus mechanism (with an eye towards proof of work as a non-permissioned example), so perhaps this is something we could look at in the future
UshKrish (Sat, 11 Feb 2017 04:41:31 GMT):
Has joined the channel.
frankylu (Sun, 12 Feb 2017 00:07:39 GMT):
Has joined the channel.
t-watana (Mon, 13 Feb 2017 04:33:13 GMT):
Has joined the channel.
kostas (Mon, 13 Feb 2017 15:02:43 GMT):
@jyellick Is there any document that captures the high-level overview of the recent config work? (I'm back and looking to catch up.)
yacovm (Mon, 13 Feb 2017 15:03:19 GMT):
@kostas https://gerrit.hyperledger.org/r/#/c/4997/
jyellick (Mon, 13 Feb 2017 15:03:51 GMT):
Things have deviated a bit from 4997, but yes, that was the original proposal and is still largely correct
jyellick (Mon, 13 Feb 2017 15:04:06 GMT):
I also have a little slide deck with some pictures, but rocketchat won't let me post it
kostas (Mon, 13 Feb 2017 15:04:07 GMT):
Ah, so this basically provides all the necessary pointers? Cool, will check and get back with questions.
kostas (Mon, 13 Feb 2017 15:04:18 GMT):
Can you send via Slack?
jyellick (Mon, 13 Feb 2017 15:04:24 GMT):
Let me try to convert to pdf, otherwise will slack
jyellick (Mon, 13 Feb 2017 15:04:45 GMT):
Message Attachments
s.narayanan (Mon, 13 Feb 2017 15:56:25 GMT):
Has joined the channel.
kostas (Mon, 13 Feb 2017 20:22:33 GMT):
I get why the current/old schema is problematic for, say, MSP or AnchorPeer definitions.
kostas (Mon, 13 Feb 2017 20:22:47 GMT):
But this bit is not clear to me yet:
kostas (Mon, 13 Feb 2017 20:22:51 GMT):
> The only way for orderer to reference peer policies, is by name, requiring that the peer creates them.
kostas (Mon, 13 Feb 2017 20:23:18 GMT):
Rather, it's not clear to me what the problem is and how the config work is addressing it.
kostas (Mon, 13 Feb 2017 20:23:23 GMT):
Is there an example I can refer to?
kostas (Mon, 13 Feb 2017 20:38:38 GMT):
Slide 7 is titled "Cross Channel Common Configuration". Are we sure about the "cross" word there? It seems that we're describing config items that we expect to be the same on a per-channel basis.
jyellick (Mon, 13 Feb 2017 22:45:24 GMT):
@kostas Sorry for the delay:
> Rather, it's not clear to me what the problem is and how the config work is addressing it.
The orderer needs to enforce who can broadcast/deliver. It's important that the orderer orgs' permissions in this respect never change without signatures from the orderers.
However, the application side needs to specify what subset of them can broadcast/deliver, and be able to modify this.
So the question is: how do we make it such that the application can add/remove rights to broadcast/deliver without risk of adding/removing the orderer orgs' permissions?
jyellick (Mon, 13 Feb 2017 22:45:48 GMT):
The way we solve this in the old style configuration is to specify a list of policy names for the broadcast/deliver authorization
jyellick (Mon, 13 Feb 2017 22:46:56 GMT):
But this is very clunky, because the peer side has to know the name of this policy to create and maintain it, and the linkage isn't obvious without inspecting the orderer configuration.
jyellick (Mon, 13 Feb 2017 22:47:36 GMT):
But, with the notion of hierarchy, the top-level policies may safely refer to sub-policies. So the channel broadcast policy can be set to the union of the Application and Orderer broadcast policies.
mastersingh24 (Tue, 14 Feb 2017 11:03:02 GMT):
[@qb - moving the conversation to #fabric-consensus - yes - this is the case. depending on how far behind the peer is, it will either transfer and process blocks from another peer or it will do state transfer with another peer where it gets the blocks but then just updates its local state without having to process the transactions. When you bring the peer back up, it will take some rounds of the protocol before the peer will actually initiate state transfer. I honestly forget the exact settings which govern this, but I believe that with the defaults this is somewhere between 20-60 blocks](https://chat.hyperledger.org/channel/general?msg=2h2DovyzSHHJnDfAe) @qb
qb (Tue, 14 Feb 2017 11:03:02 GMT):
Has joined the channel.
qb (Tue, 14 Feb 2017 12:04:19 GMT):
@mastersingh24 Thanks. Actually, just one transaction happened during the offline period. But when I bring the offline peer up, the replication is not executed; I waited for more than 2 hours and the local query result is still old. I also tried an invoke via that peer (still in the old state): the transaction was written to the other peers but not to this one.
mastersingh24 (Tue, 14 Feb 2017 12:05:00 GMT):
yeah - you'll likely need to submit 20-40 transactions before sync will start
qb (Tue, 14 Feb 2017 12:06:28 GMT):
@mastersingh24 Oh? Can I change a setting to make replication sync immediately when the peer comes online?
qb (Tue, 14 Feb 2017 12:07:13 GMT):
@mastersingh24 is it done by changing the transaction count per block?
qb (Tue, 14 Feb 2017 12:08:23 GMT):
@mastersingh24 I found that when I shut down one of the other peers, the sync was forced to execute.
mastersingh24 (Tue, 14 Feb 2017 12:10:55 GMT):
right - the network requires 3 peers to be running - so if you shut down 1 peer, restart it, then shut down another peer, the peer you restarted must now become part of the consensus process and will sync
mastersingh24 (Tue, 14 Feb 2017 12:14:32 GMT):
@qb https://developer.ibm.com/answers/questions/336783/when-does-the-blocks-across-the-peers-get-synchron.html#answer-338477
mastersingh24 (Tue, 14 Feb 2017 12:14:39 GMT):
pretty good explanation of what happens
qb (Tue, 14 Feb 2017 12:17:05 GMT):
@mastersingh24 Thank you very much. Could you please confirm how to change the sync batch transaction count? Is it possible to make the sync execute immediately when the peer comes online?
mastersingh24 (Tue, 14 Feb 2017 12:32:07 GMT):
I don't think that is possible - there is a setting https://github.com/hyperledger/fabric/blob/v0.6/consensus/pbft/config.yaml#L60 which will enable null requests which should help force a sync in the absence of new transactions
qb (Tue, 14 Feb 2017 12:37:27 GMT):
@mastersingh24 thank you very much. Let me try.:grin:
kostas (Tue, 14 Feb 2017 17:45:03 GMT):
[ ](https://chat.hyperledger.org/channel/fabric-consensus?msg=7mcbp8M777QCBwoRa) @jyellick Not sure I follow. To begin with: why would there be risk of adding/removing orderer orgs permissions?
jyellick (Tue, 14 Feb 2017 17:45:41 GMT):
Imagine you had a single policy which specified who could read from a channel
jyellick (Tue, 14 Feb 2017 17:46:16 GMT):
Then who has rights to modify that policy?
kostas (Tue, 14 Feb 2017 17:47:45 GMT):
I get the problem there. Is it the peer orgs, the ordering orgs, etc.
kostas (Tue, 14 Feb 2017 17:48:05 GMT):
To go with your example - where is the risk of adding orderer orgs permissions?
jyellick (Tue, 14 Feb 2017 17:50:52 GMT):
I suppose adding is only an attack in very novel circumstances. Let's focus instead on removing.
kostas (Tue, 14 Feb 2017 17:52:05 GMT):
OK. https://chat.hyperledger.org/channel/fabric-consensus?msg=udrw7A3ZyR2sj9smw
jyellick (Tue, 14 Feb 2017 17:54:00 GMT):
I think I just gave one? Having a single policy which represents read permission is problematic (because it's not obvious how to allow the peer orgs to add members without risk of allowing them to remove orderer members). The old config mechanism works around this in the hacky way of enumerating read policies as a list, which the orderers control, but it's unintuitive, hard to reason about, and non-obvious from structure. The new config supports this by allowing policy inheritance through hierarchy, which IMO fixes all those problems with the old way.
kostas (Tue, 14 Feb 2017 17:56:10 GMT):
It's still not clear to me where the risk of removing orgs permissions comes in.
kostas (Tue, 14 Feb 2017 17:56:22 GMT):
Perhaps because removing orgs permissions is not clear to begin with.
jyellick (Tue, 14 Feb 2017 17:56:26 GMT):
Ah, okay
jyellick (Tue, 14 Feb 2017 17:56:30 GMT):
So, back to our example
jyellick (Tue, 14 Feb 2017 17:56:42 GMT):
We have a single policy which expresses channel readers.
jyellick (Tue, 14 Feb 2017 17:56:55 GMT):
We want to allow the application admins to add (and remove) members to this list of readers.
jyellick (Tue, 14 Feb 2017 17:57:23 GMT):
But, we do not want to allow the application admins to remove orderers which are already in this list of readers.
kostas (Tue, 14 Feb 2017 17:59:04 GMT):
And how was this worked around again under the old scheme?
jyellick (Tue, 14 Feb 2017 18:00:20 GMT):
We created a named configuration item called `IngressPolicyNames`, which was a list of strings. The orderer, in special-case code, knew that when validating a `Deliver` call it should look up a Policy for each name in `IngressPolicyNames` and evaluate the `Deliver` request against it. If any of them passed, the Deliver would succeed.
jyellick (Tue, 14 Feb 2017 18:01:06 GMT):
Then we created an `OrdererReadersPolicy` and a `PeerReadersPolicy`, put those two names into `IngressPolicyNames`, and set the modification policies of those to the orderer admins and the peer admins respectively.
jyellick (Tue, 14 Feb 2017 18:01:45 GMT):
In this way, the `PeerReadersPolicy` could be modified without intervention from the ordering admins, and, without risk of modifying the `OrdererReadersPolicy`.
jyellick (Tue, 14 Feb 2017 18:04:10 GMT):
Personally, I find this to be very hacky. One obvious reason is that `IngressPolicyNames` is not a policy, but it acts like one. The names inside it are strings, but there's no guarantee that the corresponding policies even exist. We had looked at implementing a union policy type, but this proved to be very problematic because of dependency cycles. This scheme worked, but not elegantly.
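The old `IngressPolicyNames` scheme described above amounts to this (a hypothetical simplification; the real fabric policy interface takes signature sets, not a signer string):

```go
package main

import "fmt"

// Policy evaluates a request; true means the requester satisfies it.
type Policy func(signer string) bool

// evaluateIngress mimics the old scheme: look up each name listed in
// IngressPolicyNames and admit the request if any referenced policy passes.
// A name with no corresponding policy is silently skipped -- one of the
// pitfalls of stringly-typed references that the hierarchy work removes.
func evaluateIngress(names []string, policies map[string]Policy, signer string) bool {
	for _, name := range names {
		if p, ok := policies[name]; ok && p(signer) {
			return true
		}
	}
	return false
}

func main() {
	policies := map[string]Policy{
		"OrdererReadersPolicy": func(s string) bool { return s == "oOrg1" },
		"PeerReadersPolicy":    func(s string) bool { return s == "pOrg1" || s == "pOrg2" },
	}
	names := []string{"OrdererReadersPolicy", "PeerReadersPolicy", "NoSuchPolicy"}
	fmt.Println(evaluateIngress(names, policies, "pOrg2"))   // admitted via PeerReadersPolicy
	fmt.Println(evaluateIngress(names, policies, "mallory")) // no policy matches
}
```

Note how `"NoSuchPolicy"` causes no error at all, which is exactly the "no guarantee that the corresponding policies even exist" complaint.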
kostas (Tue, 14 Feb 2017 18:05:07 GMT):
So this was basically a hacky way of implementing a union of a policy controlled by the orderers, and one controlled by the peers.
jyellick (Tue, 14 Feb 2017 18:07:30 GMT):
Correct
jyellick (Tue, 14 Feb 2017 18:07:40 GMT):
Consider instead the following:
```
Channel:
  Policies: (Readers: any of Groups.Readers)
  Groups:
    Application:
      Policies: (Readers: any of Groups.Readers)
      Groups:
        pOrg1:
          Policies: (Readers: member)
        pOrg2:
          Policies: (Readers: member)
    Orderer:
      Policies: (Readers: any of Groups.Readers)
      Groups:
        oOrg1:
          Policies: (Readers: member)
```
jyellick (Tue, 14 Feb 2017 18:08:07 GMT):
In this case, if you remove `pOrg1`, or add `pOrg3`, the top level set of readers is automatically updated
jyellick (Tue, 14 Feb 2017 18:08:27 GMT):
And to me, it's very intuitive to track the policy down to see ultimately who is a member of it, rather than through arbitrary named references
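The "any of Groups.Readers" inheritance in the hierarchy sketch can be modeled with a short recursive walk (hypothetical types; the real config structures are protobuf messages):

```go
package main

import "fmt"

// Group is a node in the config hierarchy; Readers are the members the
// group directly authorizes to read.
type Group struct {
	Readers []string
	Groups  map[string]*Group
}

// readers computes a group's effective readers: its own plus those of
// every subgroup. Adding or removing an org group automatically updates
// the channel-level set, with no named cross-references to maintain.
func readers(g *Group) []string {
	out := append([]string{}, g.Readers...)
	for _, sub := range g.Groups {
		out = append(out, readers(sub)...)
	}
	return out
}

func main() {
	channel := &Group{Groups: map[string]*Group{
		"Application": {Groups: map[string]*Group{
			"pOrg1": {Readers: []string{"pOrg1"}},
			"pOrg2": {Readers: []string{"pOrg2"}},
		}},
		"Orderer": {Groups: map[string]*Group{
			"oOrg1": {Readers: []string{"oOrg1"}},
		}},
	}}
	fmt.Println(len(readers(channel))) // three effective readers
	delete(channel.Groups["Application"].Groups, "pOrg1")
	fmt.Println(len(readers(channel))) // two, after removing pOrg1
}
```

This is the contrast with `IngressPolicyNames`: the union falls out of the structure itself rather than from a separately maintained list of strings.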
kostas (Tue, 14 Feb 2017 18:09:25 GMT):
Parsing `new_configuration.proto` still, will get back with questions.
jyellick (Tue, 14 Feb 2017 18:10:04 GMT):
Sounds good. FYI, biggest deviation from `new_configuration.proto` is that MSPs are now stored at the org level, rather than the top level
jyellick (Tue, 14 Feb 2017 18:10:13 GMT):
Just something to keep in mind
kostas (Tue, 14 Feb 2017 19:00:38 GMT):
```
channel_config: (Type ChannelConfig)
// version: 0
// mod_policy: RejectAll
// policies: [ "RejectAll" -> "1 out of 0", "Readers" -> groups.Readers, "Writers" -> groups.Writers ]
// groups: [ "Orderer" -> defined_below, "Peer" -> defined_below ]
// values: [ "HashingAlgorithm" -> defined_below, "BlockHashingDataStructure" -> defined_below, "OrdererAddresses" -> defined_below ]
// identities: [ "pOrg1" -> defined_below, "pOrg2" -> defined_below, "oOrg1" -> defined_below ]
```
kostas (Tue, 14 Feb 2017 19:00:50 GMT):
(This is from `new_configuration.proto`.)
kostas (Tue, 14 Feb 2017 19:02:02 GMT):
`channel_config.policies["Readers"]` points to `groups.Readers` which doesn't seem to exist? Is there a typo there (which is OK, cause I understand this is just an example), or am I parsing this wrong?
kostas (Tue, 14 Feb 2017 19:02:42 GMT):
To begin with, is the `groups` prefix in `groups.Readers` there a reference to `channel_config.groups`?
kostas (Tue, 14 Feb 2017 19:07:02 GMT):
(My guess is that it's supposed to be `group.Readers`, a reference to the type.)
jyellick (Tue, 14 Feb 2017 19:08:03 GMT):
Ah, so, this is a stylistic shorthand I was using, where `groups.Readers` means effectively:
```
var groupReaders []Policy
for _, group := range groups {
	groupReaders = append(groupReaders, group.Policies["Readers"])
}
return groupReaders
```
jyellick (Tue, 14 Feb 2017 19:10:11 GMT):
We can have a policy type which actually implicitly does this for us, or, we can require that the groups be listed explicitly, but `groups.Readers` was simply meant to imply a reference to the set of `.Policies["Readers"]` for all groups
kostas (Tue, 14 Feb 2017 19:10:41 GMT):
Cool, that makes sense now.
kostas (Tue, 14 Feb 2017 19:18:57 GMT):
(Is it just me or would you also agree that the way to think of config item updates is to basically start from the key that you want to update (line 210) and then work your way upwards (line 203, etc.)?)
kostas (Tue, 14 Feb 2017 19:24:10 GMT):
In the particular example update to add `oOrg2` (lines 239-onwards): do you agree that this particular update would fail? The `mod_policy` is `RejectAll`.
jyellick (Tue, 14 Feb 2017 19:25:11 GMT):
Ah, so the policy is only evaluated if the contents of the element are modified.
jyellick (Tue, 14 Feb 2017 19:25:35 GMT):
For groups, we consider only the _keys_ in the maps, not their values
kostas (Tue, 14 Feb 2017 19:25:51 GMT):
You are adding an element to the map though.
kostas (Tue, 14 Feb 2017 19:26:10 GMT):
And you define the `mod_policy` as follows: "The policy required to add or remove elements to the below maps"
kostas (Tue, 14 Feb 2017 19:27:03 GMT):
Ah, gimme a sec.
jyellick (Tue, 14 Feb 2017 19:27:14 GMT):
Line 281
kostas (Tue, 14 Feb 2017 19:28:10 GMT):
I was about to say... doesn't line 245 support my point?
kostas (Tue, 14 Feb 2017 19:28:14 GMT):
But I'll check line 281 now.
kostas (Tue, 14 Feb 2017 19:29:45 GMT):
OK, line 281 aside (which I get), I think in line 245 there's this going: https://chat.hyperledger.org/channel/fabric-consensus?msg=btjLb6HzJMD7DTmBE
kostas (Tue, 14 Feb 2017 19:29:53 GMT):
Perhaps I'm missing something?
jyellick (Tue, 14 Feb 2017 19:37:51 GMT):
Ah, yes, please ignore line 245. As I indicated previously, the top level identities structure (of MSPs) is gone and that is now embedded at the lower level (with one MSP per org).
jyellick (Tue, 14 Feb 2017 19:38:08 GMT):
That top level identity structure had special semantics
kostas (Tue, 14 Feb 2017 19:40:52 GMT):
Alright. So here's the thought process behind adding `oOrg2`:
kostas (Tue, 14 Feb 2017 19:45:35 GMT):
Before I go any further, I would argue that the `RejectAll` policy there would _still_ give you trouble because you're (implicitly) modifying the value of the `OrdererAddresses` key of the `channel_config.values` map, no?
jyellick (Tue, 14 Feb 2017 19:50:44 GMT):
No. We only care about keys, not values. Since `OrdererAddresses` already exists, we evaluate its modification against the policy for that config value, not the group policy
jyellick (Tue, 14 Feb 2017 19:51:12 GMT):
If we were to add a new values key, a new policy, a new group, (or remove any of those), then that is where `RejectAll` would give us problems
kostas (Tue, 14 Feb 2017 19:55:43 GMT):
Ah, you wrote: https://chat.hyperledger.org/channel/fabric-consensus?msg=9yjRpQx5bPFZWTS4K
kostas (Tue, 14 Feb 2017 19:56:05 GMT):
So I assumed that this restriction applied only to the `groups` map, which seemed kind of arbitrary to begin with. I see what the logic is now.
kostas (Tue, 14 Feb 2017 20:03:58 GMT):
[ ](https://chat.hyperledger.org/channel/fabric-consensus?msg=Azh3L6TnqQxDCRLDu) So going back to this:
kostas (Tue, 14 Feb 2017 20:10:40 GMT):
1. Create a `GroupConfig` object that holds the "oOrg2" specific values.
2. For `channel_definition.groups["Orderer"]`:
2a. Add an "oOrg2" key to its `groups` map. Set its value to object instantiated at Step 1.
2b. Increment the `version` by 1.
kostas (Tue, 14 Feb 2017 20:17:04 GMT):
The phrasing of the next steps (2c-2d) seems a bit tricky.
kostas (Tue, 14 Feb 2017 20:18:34 GMT):
2c. For all the non-map types: leave the same values.
kostas (Tue, 14 Feb 2017 20:19:04 GMT):
2d. For the map types: leave the keys intact. For their values, use the old `version`; all other fields should remain unset/nil.
kostas (Tue, 14 Feb 2017 20:21:09 GMT):
3. Proceed in the same fashion for the `channel_definition` object.
kostas (Tue, 14 Feb 2017 20:21:27 GMT):
(4. Profit. :moneybag:)
kostas (Tue, 14 Feb 2017 20:22:26 GMT):
Once we've clarified that (esp. 2c-2d), I got one more workflow-related question and then I think I'm fully caught up.
jyellick (Tue, 14 Feb 2017 20:49:57 GMT):
I agree, 2c-2d is tricky, and, hasn't been implemented yet
jyellick (Tue, 14 Feb 2017 20:50:24 GMT):
Essentially, today, the config has been converted to be hierarchical, and to use the new structures to prepare for 2c-2d
jyellick (Tue, 14 Feb 2017 20:50:38 GMT):
But we are still operating under the assumption that updates contain the entire config.
kostas (Tue, 14 Feb 2017 20:56:23 GMT):
You mean we operate _currently_ under this assumption, or is this the thinking in the example in `new_configuration.proto`? (Because it doesn't seem to be.)
jyellick (Tue, 14 Feb 2017 20:57:08 GMT):
Correct. The goal is to operate as described in `new_configuration.proto`, but this is being implemented incrementally and is not complete yet.
kostas (Tue, 14 Feb 2017 20:57:47 GMT):
Cool.
kostas (Tue, 14 Feb 2017 21:05:53 GMT):
Alright, so then two questions to wrap this up:
kostas (Tue, 14 Feb 2017 21:10:08 GMT):
(Looking at `bootstrap.feature` for inspiration first.)
suryalanka (Tue, 14 Feb 2017 21:11:30 GMT):
Has joined the channel.
scottz (Tue, 14 Feb 2017 21:11:51 GMT):
what name should we use to set the environment variables that were moved to the genesis.yaml file from orderer.yaml? I rebased my code and seem to be able to read in the genesis config file. For example, should we be using the likes of the following? `GENESIS_ORDERER_ORDERERTYPE=kafka GENESIS_ORDERER_BATCHSIZE_MAXMESSAGECOUNT=1000`
scottz (Tue, 14 Feb 2017 21:13:18 GMT):
@jyellick ^^^ by the way, we tried those values and were unable to override the default values, so maybe there is a bug?
jyellick (Tue, 14 Feb 2017 21:14:13 GMT):
Use the prefix `CONFIGTX` instead of `GENESIS`, ie `CONFIGTX_ORDERER_ORDERERTYPE`
jyellick (Tue, 14 Feb 2017 21:14:35 GMT):
This is probably confusing
jyellick (Tue, 14 Feb 2017 21:15:11 GMT):
I moved the `genesis.yaml` to become the starting point for the configtx (and genesis) tool work, but did not rename it to `configtx.yaml`
jyellick (Tue, 14 Feb 2017 21:15:18 GMT):
The prefix however was set to `CONFIGTX`
scottz (Tue, 14 Feb 2017 21:19:26 GMT):
are there plans to change that to genesis, or not?
jyellick (Tue, 14 Feb 2017 21:19:54 GMT):
I would say no.
jyellick (Tue, 14 Feb 2017 21:20:09 GMT):
The tool is still pending, and so those decisions have not been made.
jyellick (Tue, 14 Feb 2017 21:20:18 GMT):
But remember, these are bootstrapping parameters.
jyellick (Tue, 14 Feb 2017 21:20:48 GMT):
It is convenient to set them through environment for testing and automation
jyellick (Tue, 14 Feb 2017 21:21:28 GMT):
But if you have an ordering service which is already stood up, and you want to modify the batch size, these variables do nothing for you. An appropriately signed configtx which reconfigures the desired properties must be submitted to the ordering service.
scottz (Tue, 14 Feb 2017 22:22:54 GMT):
ok. While I am working with genesis.yaml, can anyone provide a definition of CONFIGTX_ORDERER_ADDRESSES? There are no comments in the genesis.yaml file. I believe it is a list of addresses at which the orderer receives broadcast msgs and also delivers batches. Is there anything else?
scottz (Tue, 14 Feb 2017 22:32:33 GMT):
Hmmm. I may have it wrong. That may be the use of OrdererGeneralListenAddress:OrdererGeneralListenPort. So is CONFIGTX_ORDERER_ADDRESSES really the list of the orderers' ListenAddress:ListenPort values for all orderers in the ordering service, which is of course needed in the config block?
jyellick (Tue, 14 Feb 2017 23:31:14 GMT):
@scottz it's not currently used, but the intended use is to enumerate the addresses and ports of all orderers available for that channel
jyellick (Tue, 14 Feb 2017 23:31:39 GMT):
I.e., places to connect to for broadcast and deliver
scottz (Tue, 14 Feb 2017 23:33:36 GMT):
thx
yuki-kon (Wed, 15 Feb 2017 16:49:34 GMT):
Has joined the channel.
varadatibm (Wed, 15 Feb 2017 18:15:59 GMT):
Has joined the channel.
scottz (Wed, 15 Feb 2017 19:15:33 GMT):
@kostas Using 1 or more orderers, with 3 kafka brokers, we can stop 1 or 2 KBrokers and still function. When we stop the final KB, we see an orderer log about not being able to reach any kbrokers. And we also see a SERVICE_UNAVAILABLE Nack msg from the orderer sent back to the broadcaster client that sends any transactions. That is all good. And, can we expect the orderer to maintain indefinitely the communication path with the broadcast client? And if we restart one or more KBrokers then the orderers should reconnect with the kbrokers and the orderer service should resume, right?
kostas (Wed, 15 Feb 2017 20:15:02 GMT):
@scottz:
> can we expect the orderer to maintain indefinitely the communication path with the broadcast client?
In the scenario that you describe, yes. There are no errors in the gRPC client-to-server communication so the stream connection holds.
> And if we restart one or more KBrokers then the orderers should reconnect with kbrokers and the orderer service should resume, right?
Yes -- as it should. (I expect your testing to reveal paths that we need to harden. Let me know if you see erratic behavior.)
kostas (Wed, 15 Feb 2017 20:34:44 GMT):
@hgabre: Looking at `fabric/orderer/main.go`. Just curious -- is there any particular reason behind the decision to put `makeSbftConsensusConfig` and `makeSbftStackConfig` in that file, instead of somewhere in the `sbft` package? (Introduced in commit `fff6ed66d`.)
hgabre (Wed, 15 Feb 2017 20:34:44 GMT):
Has joined the channel.
scottz (Wed, 15 Feb 2017 20:37:54 GMT):
@kostas yes, will do. Surya is collecting logs now for a new bug, where an orderer panics when restarting one KB (after all were stopped). We also saw some other interesting things but are still trying to understand if it is a problem, and to reproduce and isolate scenarios.
kostas (Wed, 15 Feb 2017 20:38:44 GMT):
@scottz: Perfect, thank you.
kostas (Wed, 15 Feb 2017 20:41:11 GMT):
@jyellick: Is it madness to try to pick up things from where the master actually is (w/r/t your config work), or should I cut straight to the tip of your currently pushed/pending branch?
kostas (Wed, 15 Feb 2017 20:41:46 GMT):
(I am doing the former BTW, and picked up a few `XXX temporary hack` references, hence my question.)
jyellick (Wed, 15 Feb 2017 20:42:01 GMT):
You will only see 1 or 2 CRs still out there, master is relatively in sync with the work that is done
kostas (Wed, 15 Feb 2017 20:42:19 GMT):
So the `master` approach is good.
jyellick (Wed, 15 Feb 2017 20:42:21 GMT):
yes
scottz (Wed, 15 Feb 2017 22:21:53 GMT):
@kostas https://jira.hyperledger.org/browse/FAB-2259 has been created in component fabric-consensus
dave.enyeart (Thu, 16 Feb 2017 02:31:33 GMT):
@jyellick Hi, I'm up to date on master, and can't start orderer. Get this error:
dave.enyeart (Thu, 16 Feb 2017 02:31:46 GMT):
```
vagrant@hyperledger-devenv:v0.3.0-36bbeb6:/opt/gopath/src/github.com/hyperledger/fabric/orderer$ go build
vagrant@hyperledger-devenv:v0.3.0-36bbeb6:/opt/gopath/src/github.com/hyperledger/fabric/orderer$ ORDERER_GENERAL_LOGLEVEL=debug ./orderer
2017-02-16 01:31:29.407 UTC [orderer/config] Load -> INFO 001 No orderer cfg path set, assuming development environment, deriving from go path
2017-02-16 01:31:29.408 UTC [orderer/config] Load -> INFO 002 Setting ORDERER_CFG_PATH to: /opt/gopath/src/github.com/hyperledger/fabric/orderer
2017-02-16 01:31:29.412 UTC [viperutil] EnhancedExactUnmarshal -> INFO 003 map[sbftlocal:map[PeerCommAddr::6101 CertFile:sbft/testdata/cert1.pem KeyFile:sbft/testdata/key.pem DataDir:/tmp] genesis:map[DeprecatedBatchSize:99 MB SbftShared:map[N:1 F:0 RequestTimeoutNsec:1000000000 Peers:map[:6101:sbft/testdata/cert1.pem]] DeprecatedBatchTimeout:10s] general:map[GenesisFile:./genesisblock LocalMSPID:DEFAULT ListenAddress:127.0.0.1 LocalMSPDir:../msp/ TLS:map[RootCAs:
jyellick (Thu, 16 Feb 2017 02:32:18 GMT):
https://gerrit.hyperledger.org/r/#/c/6075/
dave.enyeart (Thu, 16 Feb 2017 02:32:23 GMT):
ok thx
jyellick (Thu, 16 Feb 2017 02:33:19 GMT):
Sorry for the breakage, was a regression from a fix of the orderer docker image
dave.enyeart (Thu, 16 Feb 2017 02:35:36 GMT):
ok, so now i can start orderer, but `peer channel create -c myc1` panics orderer
dave.enyeart (Thu, 16 Feb 2017 02:36:16 GMT):
```
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x8 pc=0x8751f1]
goroutine 27 [running]:
panic(0x9842e0, 0xc42000c0e0)
/opt/go/src/runtime/panic.go:500 +0x1a1
github.com/hyperledger/fabric/common/configtx/handlers/application.(*ApplicationOrgConfig).ProposeConfig(0xc420411ca0, 0xc42038fb70, 0xb, 0xc4203750b0, 0xf0a520, 0xc420411ca0)
/opt/gopath/src/github.com/hyperledger/fabric/common/configtx/handlers/application/organization.go:100 +0x2b1
github.com/hyperledger/fabric/common/configtx.(*configManager).proposeConfig(0xc4203b4de0, 0xc420375a10, 0xc4203fe200, 0x7fbd10344d08)
/opt/gopath/src/github.com/hyperledger/fabric/common/configtx/manager.go:177 +0x221
github.com/hyperledger/fabric/common/configtx.NewManagerImpl(0xc42038fa10, 0xf18e60, 0xc4203fe200, 0x0, 0x0, 0x0, 0xc420375290, 0x6c67a7, 0xc4203f0000, 0x3e7d)
/opt/gopath/src/github.com/hyperledger/fabric/common/configtx/manager.go:141 +0x22d
```
dave.enyeart (Thu, 16 Feb 2017 02:37:00 GMT):
@jyellick ^^^^^^
jyellick (Thu, 16 Feb 2017 02:37:20 GMT):
Yes
jyellick (Thu, 16 Feb 2017 02:37:23 GMT):
Sorry about that
jyellick (Thu, 16 Feb 2017 02:37:33 GMT):
I am aware of this, and working to resolve
dave.enyeart (Thu, 16 Feb 2017 02:37:37 GMT):
are you getting tired of saying that :)
jyellick (Thu, 16 Feb 2017 02:38:05 GMT):
Honestly a little surprised it's only been you and @muralisr who have brought this to me
muralisr (Thu, 16 Feb 2017 02:38:05 GMT):
Has joined the channel.
dave.enyeart (Thu, 16 Feb 2017 02:38:29 GMT):
now the world knows :)
jyellick (Thu, 16 Feb 2017 02:39:08 GMT):
Was hoping to push a quick fix, but the underlying problem has some unexpected side effects, so, trying to come up with a more elegant fix
dave.enyeart (Thu, 16 Feb 2017 02:39:31 GMT):
no worries, impressive progress the last few days, i'll leave you alone
jyellick (Thu, 16 Feb 2017 02:40:03 GMT):
No problem, I'll post here and tag you when I push a CR to fix
dave.enyeart (Thu, 16 Feb 2017 02:40:07 GMT):
thx
jyellick (Thu, 16 Feb 2017 06:48:30 GMT):
@dave.enyeart @muralisr https://gerrit.hyperledger.org/r/#/c/6081/ fixes the problem you were seeing with channel creation
wutongtree (Thu, 16 Feb 2017 09:50:38 GMT):
Has joined the channel.
murrekatt (Thu, 16 Feb 2017 10:11:55 GMT):
Has joined the channel.
fabianpo (Thu, 16 Feb 2017 12:16:30 GMT):
Has joined the channel.
muralisr (Thu, 16 Feb 2017 13:10:30 GMT):
@jyellick thanks, looking
MadhavaReddy (Thu, 16 Feb 2017 14:28:30 GMT):
Hi All, what is the recommended consensus model to be used in v1.0 for a production environment?
jyellick (Thu, 16 Feb 2017 14:32:41 GMT):
@MadhavaReddy Kafka is the recommended production mode for v1.0
MadhavaReddy (Thu, 16 Feb 2017 14:37:57 GMT):
Thanks @jyellick. How many nodes, at minimum, are required for Kafka? I know for Solo it's one.
jyellick (Thu, 16 Feb 2017 14:39:49 GMT):
There is minimally one orderer process, one kafka broker, and one zookeeper node (and they could all be colocated). However, for crash fault tolerance, we suggest more.
MadhavaReddy (Thu, 16 Feb 2017 14:41:01 GMT):
thank you @jyellick
MadhavaReddy (Thu, 16 Feb 2017 14:43:36 GMT):
[ ](https://chat.hyperledger.org/channel/fabric-consensus?msg=qm6dGSSgCeADyrasR) @jyellick any reason why Kafka is recommended over PBFT?
jyellick (Thu, 16 Feb 2017 14:52:22 GMT):
The architecture for v1.0 has a strong emphasis on channels for confidentiality. In the BFT model of PBFT, the confidentiality proposition does not make as much sense. So, our emphasis this release has been focused on the CFT case with Kafka.
MadhavaReddy (Thu, 16 Feb 2017 14:57:39 GMT):
OK, thank you. Will there be any change in the architecture going forward? The v1.0 architecture is completely different from 0.6, so I'm wondering when we will have a production-ready Fabric version.
jeffgarratt (Thu, 16 Feb 2017 14:58:20 GMT):
@jyellick can I get some of your time after scrum?
jyellick (Thu, 16 Feb 2017 14:59:50 GMT):
@jeffgarratt Sure
jeffgarratt (Thu, 16 Feb 2017 14:59:56 GMT):
k
kostas (Thu, 16 Feb 2017 15:02:38 GMT):
[ ](https://chat.hyperledger.org/channel/fabric-consensus?msg=qc7XYAPZoZbTqEKGj) @scottz: On it.
jyellick (Thu, 16 Feb 2017 15:03:19 GMT):
@MadhavaReddy Yes, the v1 arch is significantly different than v0.6, but I think the v1 arch should be fairly stable going forward.
MadhavaReddy (Thu, 16 Feb 2017 15:04:15 GMT):
Thank you @jyellick
jyellick (Thu, 16 Feb 2017 15:39:23 GMT):
Attention @kostas @sanchezl after https://gerrit.hyperledger.org/r/#/c/6075/ executing `./orderer` inside the orderer directory will be broken indefinitely without overriding ORDERER_GENERAL_LOCALMSPDIR. Orderer must now be executed from `fabric/`
MadhavaReddy (Thu, 16 Feb 2017 16:11:11 GMT):
Hi, what would happen when the application SDK submits a proposal and two peers send different responses to the SDK? Consider that the chaincode on one of the peers has been changed. Please clarify.
kostas (Thu, 16 Feb 2017 16:16:58 GMT):
@MadhavaReddy: Maybe #fabric or #fabric-sdk is a better venue for this question?
MadhavaReddy (Thu, 16 Feb 2017 16:17:17 GMT):
thank you @kostas
tiennv (Thu, 16 Feb 2017 16:28:31 GMT):
Has joined the channel.
kostas (Thu, 16 Feb 2017 20:07:08 GMT):
@sanchezl: Can you confirm that even with the 6075 fix, the environments in `bddtests/environments` crash? I suspect that the paths there need some tweaking still.
kostas (Thu, 16 Feb 2017 20:08:00 GMT):
There == `orderer.yaml` OR `docker-compose.yml`
latitiah (Thu, 16 Feb 2017 20:40:33 GMT):
Hi guys, I'm seeing some issues (commit a5714ce017a00da10762bc057ee4e44574d584f0) where the `ORDERER_KAFKA_BROKERS` is always set to `[127.0.0.1:9092]` regardless of what I set it to. Any pointers as to how to override this? Has there been a change for this env var as well?
latitiah (Thu, 16 Feb 2017 20:43:37 GMT):
@kostas @jyellick @sanchezl : ^^
latitiah (Thu, 16 Feb 2017 20:46:52 GMT):
To clarify: while the env var is set to the value I entered in the compose file, the orderer uses the localhost value, such that the orderer logs show the following when connecting to the kafka brokers:
```
[orderer/kafka] newProducer -> DEBU 047 Connecting to Kafka cluster: [127.0.0.1:9092]
```
kostas (Thu, 16 Feb 2017 20:53:20 GMT):
@latitiah: There was some shifting around of config files while I was gone, and still catching up to this but for now -- are you using the `CONFIGTX` prefix for this environment variable?
kostas (Thu, 16 Feb 2017 20:54:23 GMT):
So that would be: `CONFIGTX_ORDERER_KAFKA_BROKERS`
latitiah (Thu, 16 Feb 2017 20:55:08 GMT):
Ah. cool. Thx! Is that pretty much across the board for all of the env vars?
sanchezl (Thu, 16 Feb 2017 20:55:13 GMT):
@jyellick, @kostas, With the move of genesis config to common, did the key names change? e.g. `ORDERER_GENESIS_
sanchezl (Thu, 16 Feb 2017 20:55:19 GMT):
beat me to it.
kostas (Thu, 16 Feb 2017 20:56:11 GMT):
@latitiah: For those variables that you see here: https://github.com/hyperledger/fabric/blob/master/common/configtx/tool/genesis.yaml
latitiah (Thu, 16 Feb 2017 20:56:25 GMT):
cool beans! Thx!
ylsGit (Fri, 17 Feb 2017 02:27:47 GMT):
Has joined the channel.
wwendy (Fri, 17 Feb 2017 10:35:27 GMT):
Has joined the channel.
dave.enyeart (Fri, 17 Feb 2017 15:20:54 GMT):
@jyellick I pulled the latest this morning. I can't start orderer again:
dave.enyeart (Fri, 17 Feb 2017 15:21:03 GMT):
```2017-02-17 15:17:13.199 UTC [orderer/config] completeInitialization -> INFO 004 Validated configuration to: &{General:{LedgerType:ram QueueSize:10 MaxWindowSize:1000 ListenAddress:127.0.0.1 ListenPort:7050 TLS:{Enabled:false PrivateKey: Certificate: RootCAs:[] ClientAuthEnabled:false ClientRootCAs:[]} GenesisMethod:provisional GenesisFile:./genesisblock Profile:{Enabled:false Address:0.0.0.0:6060} LogLevel:debug LocalMSPDir:msp/sampleconfig LocalMSPID:DEFAULT} RAMLedger:{HistorySize:1000} FileLedger:{Location: Prefix:hyperledger-fabric-ordererledger} Kafka:{Retry:{Period:3s Stop:1m0s} Verbose:false Version:{version:[0 9 0 1]} TLS:{Enabled:false PrivateKey: Certificate: RootCAs:[] ClientAuthEnabled:false ClientRootCAs:[]}} Genesis:{DeprecatedBatchTimeout:10s DeprecatedBatchSize:103809024 SbftShared:{N:1 F:0 RequestTimeoutNsec:1000000000 Peers:map[:6101:sbft/testdata/cert1.pem]}} SbftLocal:{PeerCommAddr::6101 CertFile:sbft/testdata/cert1.pem KeyFile:sbft/testdata/key.pem DataDir:/tmp}}
2017-02-17 15:17:13.201 UTC [msp] getPemMaterialFromDir -> DEBU 005 Reading directory msp/sampleconfig/cacerts
panic: Failed initializing crypto [Could not load a valid ca certificate from directory msp/sampleconfig/cacerts, err Could not read directory open msp/sampleconfig/cacerts: no such file or directory, err msp/sampleconfig/cacerts]
goroutine 1 [running]:
panic(0x973fa0, 0xc4202280d0)
/opt/go/src/runtime/panic.go:500 +0x1a1
main.main()
/opt/gopath/src/github.com/hyperledger/fabric/orderer/main.go:90 +0x16f8```
wlahti (Fri, 17 Feb 2017 15:23:48 GMT):
Has joined the channel.
jyellick (Fri, 17 Feb 2017 15:24:24 GMT):
[ ](https://chat.hyperledger.org/channel/fabric-consensus?msg=g6WMzt4RgXiKBuWis)
jyellick (Fri, 17 Feb 2017 15:24:30 GMT):
^ @dave.enyeart
dave.enyeart (Fri, 17 Feb 2017 15:26:21 GMT):
confirmed - `make orderer` and start `orderer` work from /fabric directory
farhan3 (Fri, 17 Feb 2017 16:18:33 GMT):
Has joined the channel.
farhan3 (Fri, 17 Feb 2017 16:18:50 GMT):
Hi - from the peer yaml file, I can see that the state is stored in a database, e.g. CouchDB. Is the ledger also stored in the same database?
kostas (Fri, 17 Feb 2017 16:19:45 GMT):
@farhan3: #fabric or #fabric-ledger are your best bets.
farhan3 (Fri, 17 Feb 2017 16:20:13 GMT):
In #general you told me to ask here :D
kostas (Fri, 17 Feb 2017 16:20:24 GMT):
Oh damn it.
kostas (Fri, 17 Feb 2017 16:20:47 GMT):
Sorry for this, typo. Long day already. :upside_down:
farhan3 (Fri, 17 Feb 2017 16:20:59 GMT):
Haha
chris.elder (Fri, 17 Feb 2017 16:52:46 GMT):
Has joined the channel.
mwall (Fri, 17 Feb 2017 17:00:39 GMT):
Has joined the channel.
jsong1230 (Sat, 18 Feb 2017 12:42:49 GMT):
Has joined the channel.
jyellick (Sat, 18 Feb 2017 17:31:01 GMT):
[ ](https://chat.hyperledger.org/channel/fabric?msg=Z8MwpDk2RN6suSKhk)
jyellick (Sat, 18 Feb 2017 17:33:01 GMT):
[ ](https://chat.hyperledger.org/channel/fabric?msg=6vSosoJWmEJEtdv9y) [ ](https://chat.hyperledger.org/channel/fabric?msg=Z8MwpDk2RN6suSKhk) [ ](https://chat.hyperledger.org/channel/fabric?msg=nBSXqzC7Rv9J9Ma62)
From #fabric, current master orderer image is broken
kostas (Sun, 19 Feb 2017 03:24:00 GMT):
@scottz, @suryalanka: Still working on FAB-2259. This comes down to the consumer returning a nil message, but nailing down the exact cause of this is trickier than I thought. Writing some code to figure out what's going on behind the scenes. Will post here and on the JIRA issue when I have updates.
jimthematrix (Sun, 19 Feb 2017 16:00:31 GMT):
Has joined the channel.
jimthematrix (Sun, 19 Feb 2017 16:03:01 GMT):
@jyellick I just pulled the latest fabric (and fabric-ca) which has 6171 fix, but the orderer still exits after docker-compose up, here's the content i'm using: https://github.com/hyperledger/fabric-sdk-node/blob/master/test/fixtures/docker-compose.yml
jimthematrix (Sun, 19 Feb 2017 16:03:11 GMT):
@cdaughtr see above ^^^
cdaughtr (Sun, 19 Feb 2017 16:03:11 GMT):
Has joined the channel.
jyellick (Sun, 19 Feb 2017 16:03:34 GMT):
I am aware, I gave @cdaughtr a workaround for this yesterday
jyellick (Sun, 19 Feb 2017 16:03:49 GMT):
[ ](https://chat.hyperledger.org/channel/fabric?msg=Nav9FZT5M2NrGeEeS)
jyellick (Sun, 19 Feb 2017 16:03:55 GMT):
I will be pushing this to gerrit very soon
jyellick (Sun, 19 Feb 2017 16:04:14 GMT):
@jimthematrix ^
jimthematrix (Sun, 19 Feb 2017 16:04:51 GMT):
ah got it, will try that, thanks Jason!
rameshthoomu (Sun, 19 Feb 2017 16:05:11 GMT):
Has joined the channel.
jimthematrix (Sun, 19 Feb 2017 16:05:53 GMT):
working better now... :thumbsup:
rickr (Sun, 19 Feb 2017 16:50:04 GMT):
@jyellick for us hard core that don't use docker compose is that an issue ?
jyellick (Sun, 19 Feb 2017 16:50:40 GMT):
It is not
kostas (Sun, 19 Feb 2017 16:51:04 GMT):
Not sure where we stand with the profiles work, and until this work settles down, I'm backing off, but for anyone working at earlier commit levels, please note this: https://gerrit.hyperledger.org/r/#/c/6187/
kostas (Sun, 19 Feb 2017 16:51:39 GMT):
I noticed the use of the incorrect ENV var in @jimthematrix's Docker Compose file. And @dbshah was just commenting on the fact that they run solo when they set the ENV var to Kafka.
dbshah (Sun, 19 Feb 2017 16:51:39 GMT):
Has joined the channel.
jyellick (Sun, 19 Feb 2017 16:52:08 GMT):
That has a fix pushed
jyellick (Sun, 19 Feb 2017 16:52:15 GMT):
https://gerrit.hyperledger.org/r/#/c/6159/
jyellick (Sun, 19 Feb 2017 16:52:33 GMT):
Apparently the way viper does parsing
jyellick (Sun, 19 Feb 2017 16:52:45 GMT):
It generates the yaml tree before applying the overrides (not surprising)
jyellick (Sun, 19 Feb 2017 16:53:09 GMT):
So, to use env to override the orderer type, you needed to include the full profile path, but this requirement is removed in 6159 above
kostas (Sun, 19 Feb 2017 16:53:14 GMT):
As I said, this is a quick ~fix~/FYI/whatever for anyone working at earlier commit levels.
kostas (Sun, 19 Feb 2017 16:54:39 GMT):
I'm not pushing for merging this (which is why I am posting this here and not in the pr-review channel). In fact, I will ~update the title~ abandon the changeset to reflect that.
kostas (Sun, 19 Feb 2017 17:02:40 GMT):
https://gerrit.hyperledger.org/r/#/c/6187/
jimthematrix (Sun, 19 Feb 2017 20:12:01 GMT):
@jyellick @kostas @muralisr I'm seeing the following error when starting the orderer with docker-compose:
```
2017-02-19 20:09:46.282 UTC [deliveryClient] NewDeliverService -> INFO 1d1 Creating delivery service to get blocks from the ordering service, orderer:7050
2017-02-19 20:09:49.283 UTC [deliveryClient] NewDeliverService -> ERRO 1d2 Cannot dial to orderer:7050, because of grpc: timed out when dialing
2017-02-19 20:09:49.284 UTC [gossip/service] InitializeChannel -> WARN 1d3 Cannot create delivery client, due to grpc: timed out when dialing
2017-02-19 20:09:49.284 UTC [gossip/service] InitializeChannel -> WARN 1d4 Delivery client is down won't be able to pull blocks for chain testchainid
```
jimthematrix (Sun, 19 Feb 2017 20:12:25 GMT):
i've tried to start just the orderer container first and then the rest
jyellick (Sun, 19 Feb 2017 20:12:29 GMT):
That sounds like the orderer process is not running
jimthematrix (Sun, 19 Feb 2017 20:12:31 GMT):
so it's not a timing issue
jyellick (Sun, 19 Feb 2017 20:12:42 GMT):
Do you see anything in the logs of the orderer?
jimthematrix (Sun, 19 Feb 2017 20:12:42 GMT):
but the orderer process is running fine
jimthematrix (Sun, 19 Feb 2017 20:12:54 GMT):
and port-wise are mapped correctly
jyellick (Sun, 19 Feb 2017 20:13:30 GMT):
Can you paste me the tail of the orderer logs?
jyellick (Sun, 19 Feb 2017 20:13:54 GMT):
(Nothing around the gRPC code has changed recently to my knowledge, so this is very odd)
jimthematrix (Sun, 19 Feb 2017 20:15:20 GMT):
Message Attachments
jimthematrix (Sun, 19 Feb 2017 20:16:51 GMT):
again the docker-compose file is at https://github.com/hyperledger/fabric-sdk-node/blob/master/test/fixtures/docker-compose.yml
jyellick (Sun, 19 Feb 2017 20:16:53 GMT):
There is a message:
```
logger.Debugf("Starting new Deliver handler")
```
Which should be printed whenever a gRPC invocation of `Deliver` occurs
jyellick (Sun, 19 Feb 2017 20:17:16 GMT):
This is extremely early in the stack, before any filtering, unmarshaling etc.
jyellick (Sun, 19 Feb 2017 20:17:21 GMT):
And I am not seeing that
jyellick (Sun, 19 Feb 2017 20:17:51 GMT):
It makes me believe it is a problem with the network connection between the containers
jimthematrix (Sun, 19 Feb 2017 20:22:31 GMT):
found this in the log:
```
2017-02-19 20:09:27.084 UTC [viperutil] getKeysRecursively -> DEBU 01f Found real value for orderer.Addresses setting to []interface {} [127.0.0.1:7050]
```
jyellick (Sun, 19 Feb 2017 20:22:36 GMT):
(As does the timeout message in the original logs you pasted)
jyellick (Sun, 19 Feb 2017 20:22:42 GMT):
That is a red herring, please ignore it
jimthematrix (Sun, 19 Feb 2017 20:22:47 GMT):
so it'll not take requests from outside of the container
jyellick (Sun, 19 Feb 2017 20:23:07 GMT):
That is the orderer address which is encoded inside the genesis block
jyellick (Sun, 19 Feb 2017 20:23:30 GMT):
The idea is that the peer can use this address to connect to the orderer, but the peer does not currently consume this variable, nor is it set correctly in our images
jimthematrix (Sun, 19 Feb 2017 20:23:42 GMT):
ok
jyellick (Sun, 19 Feb 2017 20:25:24 GMT):
I see in the printing of the orderer log that: `ListenPort:7050 LocalMSPID:DEFAULT MaxWindowSize:1000 ListenAddress:0.0.0.0`
jyellick (Sun, 19 Feb 2017 20:25:34 GMT):
So your configuration appears to be set fine
jyellick (Sun, 19 Feb 2017 20:26:49 GMT):
All seems well from the orderer log, so I would again assert that this is very likely some sort of Docker networking error
jimthematrix (Sun, 19 Feb 2017 20:33:57 GMT):
just cleared up everything and removed all instances, started over (again orderer first then the rest), now seeing the following error:
```
orderer_1 | 2017/02/19 20:31:45 grpc: Server.Serve failed to create ServerTransport: connection error: desc = "transport: write tcp 172.20.0.2:7050->172.20.0.5:47040: write: broken pipe"
orderer_1 | 2017/02/19 20:31:50 grpc: Server.Serve failed to create ServerTransport: connection error: desc = "transport: write tcp 172.20.0.2:7050->172.20.0.4:43966: write: connection reset by peer"
```
jyellick (Sun, 19 Feb 2017 20:39:59 GMT):
Are you certain everyone is at the same level of protos?
jyellick (Sun, 19 Feb 2017 20:40:26 GMT):
(If you are not seeing that deliver log statement in the orderer log, that means something is going wrong lower in the stack)
jimthematrix (Sun, 19 Feb 2017 20:44:21 GMT):
ok I just updated native docker and restarted it, now seems to be working fine. your assertion was spot on Jason ;-)
jimthematrix (Sun, 19 Feb 2017 20:47:54 GMT):
now with https://gerrit.hyperledger.org/r/#/c/6213/ the install-instantiate-invoke-query cycle works again with the latest fabric + https://gerrit.hyperledger.org/r/#/c/6159
jimthematrix (Sun, 19 Feb 2017 20:48:09 GMT):
so i'll merge 6159
jimthematrix (Sun, 19 Feb 2017 20:49:36 GMT):
merged
jimthematrix (Sun, 19 Feb 2017 20:49:47 GMT):
thanks Jason for the fix
levinkwong (Mon, 20 Feb 2017 01:58:57 GMT):
Has joined the channel.
redpanda (Mon, 20 Feb 2017 09:21:27 GMT):
Has joined the channel.
vinayakkumar (Mon, 20 Feb 2017 12:45:17 GMT):
Has joined the channel.
warm3snow (Tue, 21 Feb 2017 05:18:20 GMT):
Has joined the channel.
jessilb (Tue, 21 Feb 2017 11:39:33 GMT):
Has joined the channel.
agaragiola (Tue, 21 Feb 2017 20:59:55 GMT):
Has joined the channel.
danielleekc (Wed, 22 Feb 2017 13:57:22 GMT):
Has joined the channel.
danielleekc (Wed, 22 Feb 2017 14:02:24 GMT):
Hi everyone, I am using Hyperledger 0.6. I have a question on PBFT. Say node A shuts down accidentally and other nodes are doing transactions after that. What will node A do when it restarts? Will it copy all incremental transactions from other nodes?
tuand (Wed, 22 Feb 2017 14:04:46 GMT):
[ ](https://chat.hyperledger.org/channel/fabric-consensus?msg=NpQiLubygQr6XDLFG) @danielleekc eventually, node A will see that it is behind and will issue state transfer requests to the other peers
danielleekc (Wed, 22 Feb 2017 14:07:39 GMT):
@tuand How long would this action take? Will node A stop serving client requests because of the inconsistency? Or, at the moment node A has just restarted, will the client get an incorrect world state?
gomsb143 (Wed, 22 Feb 2017 15:07:33 GMT):
Has joined the channel.
psa (Wed, 22 Feb 2017 15:55:30 GMT):
Has joined the channel.
kletkeman (Wed, 22 Feb 2017 19:02:12 GMT):
@tuand In v0.6, the peer that falls behind will wait until it is approximately 60 blocks behind and sometime shortly after that will initiate the "catch up" ... except that it never quite catches up. It is my understanding that the protocol is not quite designed to ensure that the chain heights match, but rather to ensure that the chain heights do not differ by more than approximately 60. I don't know of a way to provoke a peer that is only a few blocks behind to catch up. This is no doubt going to be completely different under v1, so I'm not sure that it is worth exploring too deeply on v0.6. The peer that is behind can receive and process requests while it is behind.
tuand (Wed, 22 Feb 2017 19:06:45 GMT):
thanks @kletkeman ! I got busy and forgot to reply to @danielleekc ... a V0.6 peer knows it's behind when it receives a checkpoint message from the other peers ... when the checkpoint is generated is based on the batchsize and the log size, details are in the Castro&Liskov PBFT paper
tuand (Wed, 22 Feb 2017 19:08:48 GMT):
I think there was a fix late in V0.6 that handled the chain heights not quite matching, but as you mentioned, we're all beating up on V1.0 now ... @danielleekc, you might want to look at the sbft orderer in V1.0
bretharrison (Wed, 22 Feb 2017 19:45:00 GMT):
Has joined the channel.
bretharrison (Wed, 22 Feb 2017 20:04:49 GMT):
Where would I find how a configuration block is constructed? How would I get the latest configuration block?
kletkeman (Wed, 22 Feb 2017 20:19:05 GMT):
@tuand I don't see any branches other than v0.6 and releases past v0.6.1. Is there a v0.6.x version somewhere else that can catch up?
tuand (Wed, 22 Feb 2017 20:34:13 GMT):
[ ](https://chat.hyperledger.org/channel/fabric-consensus?msg=88ABgn6zxwMZnER4u) @kletkeman check with Matt Lishok
tuand (Wed, 22 Feb 2017 20:35:13 GMT):
[ ](https://chat.hyperledger.org/channel/fabric-consensus?msg=zuxXyj2qvhQ2ZZMNj) @bretharrison @jyellick @jeffgarratt
jyellick (Wed, 22 Feb 2017 20:44:42 GMT):
@bretharrison
> Where would I find how a configuration block is constructed. How would I get the latest configuration block.
Every block has a metadata field which encodes the index of the last configuration block. It is signed with signatures matching the block validation policy.
jyellick (Wed, 22 Feb 2017 20:45:28 GMT):
See `fabric/protos/common/common.proto`
```
// This enum enlists indexes of the block metadata array
enum BlockMetadataIndex {
    SIGNATURES = 0;          // Block metadata array position for block signatures
    LAST_CONFIG = 1;         // Block metadata array position to store last configuration block sequence number
    TRANSACTIONS_FILTER = 2; // Block metadata array position to store serialized bit array filter of invalid transactions
    ORDERER = 3;             // Block metadata array position to store operational metadata for orderers
                             // e.g. For Kafka, this is where we store the last offset written to the local ledger.
}

// LastConfig is the encoded value for the Metadata message which is encoded in the LAST_CONFIGURATION block metadata index
message LastConfig {
    uint64 index = 1;
}

// Metadata is a common structure to be used to encode block metadata
message Metadata {
    bytes value = 1;
    repeated MetadataSignature signatures = 2;
}

message MetadataSignature {
    bytes signature_header = 1; // An encoded SignatureHeader
    bytes signature = 2;        // The signature over the concatenation of the Metadata value bytes, signatureHeader, and block header
}
```
jyellick (Wed, 22 Feb 2017 20:46:08 GMT):
You can retrieve the `Metadata` which is encoded at index `LAST_CONFIG` as marshaled bytes. Then decode the `value` to `LastConfig` which will give you the block index of the last configuration block.
jyellick (Wed, 22 Feb 2017 20:46:21 GMT):
Then simply call `Deliver` specifying that block number as start and finish, and you will have the last config block.
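As a rough, self-contained sketch of that lookup: the structs below are toy stand-ins for the real `common.Block` and `LastConfig` protos, and the index is stored as a plain big-endian uint64 rather than marshaled protobuf.

```
package main

import (
	"encoding/binary"
	"fmt"
)

// Positions in the block metadata array, mirroring BlockMetadataIndex
// in fabric/protos/common/common.proto.
const (
	Signatures = iota
	LastConfig
	TransactionsFilter
	Orderer
)

// Block is a toy stand-in for common.Block; only the metadata array
// matters for this lookup. The real field holds a marshaled Metadata
// proto whose value decodes to a LastConfig message.
type Block struct {
	Number   uint64
	Metadata [][]byte
}

func encodeLastConfig(index uint64) []byte {
	buf := make([]byte, 8)
	binary.BigEndian.PutUint64(buf, index)
	return buf
}

// lastConfigIndex recovers the sequence number of the most recent
// configuration block; calling Deliver with this number as both start
// and stop then returns the config block itself.
func lastConfigIndex(b *Block) uint64 {
	return binary.BigEndian.Uint64(b.Metadata[LastConfig])
}

func main() {
	// Pretend block 7 points back at config block 2.
	b := &Block{Number: 7, Metadata: make([][]byte, 4)}
	b.Metadata[LastConfig] = encodeLastConfig(2)
	fmt.Println(lastConfigIndex(b)) // prints 2
}
```

With the real protos, the bytes at index `LAST_CONFIG` unmarshal to a `Metadata` message whose `value` field unmarshals to `LastConfig`.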
bretharrison (Wed, 22 Feb 2017 20:47:04 GMT):
@jyellick thanks
bartcant (Thu, 23 Feb 2017 02:18:23 GMT):
Has joined the channel.
Anh-Dung (Thu, 23 Feb 2017 10:10:57 GMT):
Has joined the channel.
ianj_mitchell@uk.ibm.com (Thu, 23 Feb 2017 11:15:42 GMT):
Has joined the channel.
srm (Thu, 23 Feb 2017 13:30:41 GMT):
Has joined the channel.
aambati (Thu, 23 Feb 2017 15:16:45 GMT):
Has joined the channel.
kdj (Thu, 23 Feb 2017 16:50:12 GMT):
Has joined the channel.
CarlXK (Fri, 24 Feb 2017 02:02:12 GMT):
Has joined the channel.
ryokawajp (Fri, 24 Feb 2017 11:00:10 GMT):
Has joined the channel.
tom.appleyard (Fri, 24 Feb 2017 11:52:27 GMT):
Has joined the channel.
rickr (Fri, 24 Feb 2017 14:28:01 GMT):
@muralisr @binhn FYI https://jira.hyperledger.org/browse/FAB-2463
binhn (Fri, 24 Feb 2017 14:28:01 GMT):
Has joined the channel.
joacomoreno (Fri, 24 Feb 2017 17:53:49 GMT):
Has joined the channel.
gdhh (Fri, 24 Feb 2017 18:03:26 GMT):
Has joined the channel.
rickr (Fri, 24 Feb 2017 19:45:47 GMT):
I think I produced the above with both the node and Java SDKs
rickr (Fri, 24 Feb 2017 19:47:00 GMT):
@here blocker r Mel
jyellick (Fri, 24 Feb 2017 19:48:03 GMT):
@muralisr I suspect this all stems from not signing with an admin cert
jyellick (Fri, 24 Feb 2017 19:48:35 GMT):
Can you confirm? I do not know the different peer policy paths for the assorted lifecycle invocation commands
muralisr (Fri, 24 Feb 2017 19:56:09 GMT):
@jyellick it's possible... but it depends on the error we get back... @rickr what do you get back from fabric ?
rickr (Fri, 24 Feb 2017 20:53:22 GMT):
@muralisr `The creator's signature over the proposal is not valid, err The signature is invalid, cause=null`
```
commit 709d87b30060937c26b434db8baf9796b529449a
Merge: 187f36a 8021182
```
jyellick (Fri, 24 Feb 2017 21:29:23 GMT):
```
mspObj := mspmgmt.GetIdentityDeserializer(ChainID)
if mspObj == nil {
	return fmt.Errorf("could not get msp for chain [%s]", ChainID)
}

// get the identity of the creator
creator, err := mspObj.DeserializeIdentity(creatorBytes)
if err != nil {
	return fmt.Errorf("Failed to deserialize creator identity, err %s", err)
}
putilsLogger.Infof("checkSignatureFromCreator info: creator is %s", creator.GetIdentifier())

// ensure that creator is a valid certificate
err = creator.Validate()
if err != nil {
	return fmt.Errorf("The creator certificate is not valid, err %s", err)
}
putilsLogger.Infof("checkSignatureFromCreator info: creator is valid")

// validate the signature
err = creator.Verify(msg, sig)
if err != nil {
	return fmt.Errorf("The creator's signature over the proposal is not valid, err %s", err)
}
```
@muralisr this looks unrelated to the policy work to me
jyellick (Fri, 24 Feb 2017 21:29:39 GMT):
Perhaps this has something to do with the BCCSP stuff? I don't know
cdaughtr (Fri, 24 Feb 2017 21:34:49 GMT):
@muralisr @jyellick About the invalid signature, I did a test with Jim's help where we took our admin cert and put it in the fabric/msp/sampleconfig/admin and rebuilt the dockers. Unfortunately, this did not resolve the problem.
jyellick (Fri, 24 Feb 2017 21:36:03 GMT):
Thanks @cdaughtr per the error pasted by @rickr above, this seems unrelated to policy or role, instead, the signature for the provided cert (regardless of the cert's permissions) is not valid.
jyellick (Fri, 24 Feb 2017 21:36:53 GMT):
This should probably be moved to #fabric as it is definitely not a #fabric-consensus related problem
vpaprots (Fri, 24 Feb 2017 22:15:41 GMT):
Has joined the channel.
rrader (Sat, 25 Feb 2017 14:48:04 GMT):
Has joined the channel.
kostas (Sat, 25 Feb 2017 23:16:54 GMT):
I pushed a fix for FAB-2259. Please let me know if there are any questions.
kostas (Sat, 25 Feb 2017 23:16:58 GMT):
I'll take over FAB-2001 now.
kostas (Sat, 25 Feb 2017 23:17:04 GMT):
Let me know if FAB-2137 and FAB-2242 persist (I've left comments on the JIRA items) and I'll look into these next.
kostas (Sat, 25 Feb 2017 23:17:13 GMT):
^^ @scottz @suryalanka
kostas (Sat, 25 Feb 2017 23:19:26 GMT):
In general, unless otherwise noted, whatever orderer-related bugs you find, feel free to assign them to me. (Label them with `fabric-consensus` as usual.)
ray (Mon, 27 Feb 2017 05:46:32 GMT):
Has joined the channel.
hgabre (Mon, 27 Feb 2017 13:51:10 GMT):
do we still need this? https://gerrit.hyperledger.org/r/#/c/5109/
hgabre (Mon, 27 Feb 2017 13:51:36 GMT):
@sanchezl
mayerwa (Mon, 27 Feb 2017 13:58:54 GMT):
Has joined the channel.
sanchezl (Mon, 27 Feb 2017 14:28:19 GMT):
@hgabre , I'm going to abandon for now.
kostas (Mon, 27 Feb 2017 18:39:24 GMT):
@hgabre: When you find some time, chime in on this one if you have thoughts: https://jira.hyperledger.org/browse/FAB-2489
kostas (Mon, 27 Feb 2017 18:42:10 GMT):
Related to that, what is your plan for moving `sbft_test.go` and `network_test.go` to the `sbft` package? We'll have to get the structure right before we cut for a release. As I understand it, these files were moved to the root of the orderer package only as a temporary measure.
steigensonne (Tue, 28 Feb 2017 02:08:37 GMT):
Has joined the channel.
steigensonne (Tue, 28 Feb 2017 02:08:54 GMT):
Can I ask general questions on fabric consensus?
Should SBFT work on/with Kafka?
Or can we just choose one of them (solo/kafka/sbft)?
I am just wondering whether SBFT works over Kafka.
I hope someone could elaborate on this. Please correct me if I have wrong ideas.
tuand (Tue, 28 Feb 2017 03:22:38 GMT):
You choose one of solo/kafka/solo
steigensonne (Tue, 28 Feb 2017 04:11:13 GMT):
solo/kafka/solo? last one would be "sbft"?
steigensonne (Tue, 28 Feb 2017 04:13:46 GMT):
Can we say that each channel can have a different consensus type? Channel A (Solo), Channel B (SBFT), Channel C (Kafka).
steigensonne (Tue, 28 Feb 2017 04:19:55 GMT):
Is it possible to change the consensus type in the middle of service time? Say it is first set to SBFT, and then we want to change from SBFT to Kafka. What do you think of this scenario? Thanks in advance.
vukolic (Tue, 28 Feb 2017 13:10:25 GMT):
@steigensonne good question - separate consensus protocols on separate channels would certainly be enabled down the road
vukolic (Tue, 28 Feb 2017 13:10:29 GMT):
but we are not there yet
s.narayanan (Tue, 28 Feb 2017 13:30:38 GMT):
I have a few questions on the consensus service. I'd appreciate it if someone could help me understand these:
1. As per the Kafka design doc (https://docs.google.com/document/d/1vNMaM7XhOlu9tB_10dKnlrhy5d7b1u8lSY8a-kVjCO4/edit?usp=sharing), the orderer node maintains a local log of blocks. Is this what is referred to as the orderer ledger in the documentation? I presume this ledger contains blocks with transactions that have not been validated yet by committers. How is this log maintained by orderer nodes (file, in memory, etc.)?
2. For a Kafka-based ordering service, how many orderer nodes should be deployed from a fault-tolerance perspective? Should this at a minimum be based on the number of in-sync replicas deployed for a partition within Kafka? For instance, a Kafka partition with 3 ISRs can tolerate up to two failures. Does this mean one has to deploy 3 orderer nodes?
3. What would the deployment topology for OSNs look like in a scenario where peers are deployed in multiple data centers (say within participant premises)? Would the ordering service be offered as a centralized service hosted by one of the participants (or a central authority such as a regulator)? The concern specifically with the Kafka implementation is that if clusters are located within multiple data centers, how can you ensure total order of messages produced across clusters?
tuand (Tue, 28 Feb 2017 14:04:49 GMT):
[ ](https://chat.hyperledger.org/channel/fabric-consensus?msg=DwZXwmMinFxrp6vRe) @steigensonne yes, my typo
Dan (Tue, 28 Feb 2017 14:18:04 GMT):
Has joined the channel.
dave.enyeart (Tue, 28 Feb 2017 14:40:22 GMT):
@jyellick @muralisr I'm seeing a new issue today:
dave.enyeart (Tue, 28 Feb 2017 14:40:27 GMT):
```vagrant@hyperledger-devenv:v0.3.0-6daec3f:/opt/gopath/src/github.com/hyperledger/fabric$ peer channel create -c myc1
panic: Fatal error when setting up MSP from directory msp/sampleconfig: err KeyMaterial not found in SigningIdentityInfo```
muralisr (Tue, 28 Feb 2017 14:40:45 GMT):
@dave.enyeart thanks
muralisr (Tue, 28 Feb 2017 14:40:49 GMT):
will check it out
dave.enyeart (Tue, 28 Feb 2017 14:41:14 GMT):
does this impact the 'real' end to end, or only people using the sampleconfig?
muralisr (Tue, 28 Feb 2017 14:55:17 GMT):
@dave.enyeart good question... hope to know soon
kostas (Tue, 28 Feb 2017 15:18:38 GMT):
@s.narayanan -- good questions. See below for answers:
1. This is indeed the orderer ledger. It contains transactions not yet validated by committers. It is maintained in a file. (There is a "ram" ledger option available but that is just for quick testing.)
2. There is no one-to-one mapping between Kafka brokers and OSNs. So, no this does not mean that one needs to deploy 3 OSNs.
3. Why would there be a total order problem if you were running across multiple data centers? At any point in time, whether you're running on 1 data center or 5, there is one partition leader, and this is the broker deciding on the total order. (Realistically however, I'd expect all the brokers to be collocated on the same datacenter and a MirrorMaker setup replicating to another datacenter for redundancy.)
vpaprots (Tue, 28 Feb 2017 15:23:25 GMT):
@dave.enyeart Is there a backtrace to go with that panic?
dave.enyeart (Tue, 28 Feb 2017 15:23:56 GMT):
```goroutine 1 [running]:
panic(0xb03120, 0xc42021ea70)
/opt/go/src/runtime/panic.go:500 +0x1a1
main.main()
/opt/gopath/src/github.com/hyperledger/fabric/peer/main.go:107 +0x67a```
dave.enyeart (Tue, 28 Feb 2017 15:34:11 GMT):
@muralisr @jyellick @vpaprots False alarm... `make peer` cleaned up this issue
vpaprots (Tue, 28 Feb 2017 15:34:34 GMT):
whew!
vpaprots (Tue, 28 Feb 2017 15:34:56 GMT):
I was typing you a long PM..
vpaprots (Tue, 28 Feb 2017 15:35:06 GMT):
```
logging.SetLevel(logging.DEBUG, "BCCSP_SW")
logging.SetLevel(logging.DEBUG, "BCCSP_FACTORY")
```
vpaprots (Tue, 28 Feb 2017 15:35:27 GMT):
we haven't added a way to control those yet :/
s.narayanan (Tue, 28 Feb 2017 16:20:55 GMT):
@kostas thanks. I am not clear on #3. If peers are deployed in different data centers (say data center A and B), and the orderer node in each data center is configured to produce messages to its local Kafka cluster, MirrorMaker can be set up to replicate, presumably to the cluster in one of the data centers, say data center B (or a 3rd data center, which we can ignore for this example). In this event, when it comes to delivery of messages, the orderer nodes have to deliver based on messages (i.e. blocks) from the Kafka cluster in data center B, since this is the cluster that can provide a total order of messages (across both data centers)?
s.narayanan (Tue, 28 Feb 2017 16:21:52 GMT):
@kostas sorry for the unintended smiley due to typo, I meant data center B
kostas (Tue, 28 Feb 2017 16:23:30 GMT):
No worries. So assume you have your set of Kafka brokers in DC A.
kostas (Tue, 28 Feb 2017 16:23:44 GMT):
This is your "primary" set.
kostas (Tue, 28 Feb 2017 16:24:06 GMT):
For redundancy you set up another cluster in DC B. Via MirrorMaker it replicates what happens in DC A.
kostas (Tue, 28 Feb 2017 16:25:35 GMT):
Note that the OSNs do not necessarily have to be collocated with the brokers.
kostas (Tue, 28 Feb 2017 16:26:08 GMT):
So, to take this example to an extreme, assume that you have an OSN in DC C.
kostas (Tue, 28 Feb 2017 16:26:49 GMT):
When this OSN is to serve a broadcast request, it will communicate with the Kafka brokers in DC A.
kostas (Tue, 28 Feb 2017 16:27:16 GMT):
When this OSN needs to update its local ledger (to serve deliver requests), it reaches out to DC A as well.
kostas (Tue, 28 Feb 2017 16:27:56 GMT):
Is there anything that concerns you in this setup?
s.narayanan (Tue, 28 Feb 2017 16:32:45 GMT):
@kostas this setup is good since with this the OSN serves broadcast and deliver from the same cluster. What I am unclear on is how this works when peers are distributed across DCs. In this case, do you designate one cluster (within a DC) as the cluster for serving broadcast and deliver requests?
kostas (Tue, 28 Feb 2017 16:34:08 GMT):
When you refer to peers, do you mean OSNs?
s.narayanan (Tue, 28 Feb 2017 16:34:42 GMT):
yes
kostas (Tue, 28 Feb 2017 16:35:02 GMT):
The answer is exactly the same.
kostas (Tue, 28 Feb 2017 16:35:23 GMT):
You can have an OSN in DC C and another one in DC D. Each does the same thing:
kostas (Tue, 28 Feb 2017 16:35:27 GMT):
https://chat.hyperledger.org/channel/fabric-consensus?msg=BBdFNDJzYy79n2aJd
kostas (Tue, 28 Feb 2017 16:35:32 GMT):
https://chat.hyperledger.org/channel/fabric-consensus?msg=D7HyXTv7hspPJSMJp
s.narayanan (Tue, 28 Feb 2017 16:40:04 GMT):
@kostas so to make sure I understand ... if we have OSNs deployed in multiple DCs, then they need to use a Kafka cluster within a specific DC for broadcast/deliver.
kostas (Tue, 28 Feb 2017 16:41:45 GMT):
No. Nothing prevents you from distributing OSNs across various datacenters, and doing the same for the brokers of a cluster. (As I wrote above, you probably want to keep the cluster contained within the same DC for practical reasons, but nothing stops you from going the opposite way.)
s.narayanan (Tue, 28 Feb 2017 16:45:09 GMT):
@kostas thanks, your responses have been very helpful. I will give this some more thought and follow up
kostas (Tue, 28 Feb 2017 16:45:40 GMT):
@s.narayanan: Any time. You're welcome.
crmiles (Tue, 28 Feb 2017 17:17:35 GMT):
Has joined the channel.
bretharrison (Tue, 28 Feb 2017 18:14:38 GMT):
Has left the channel.
levinkwong (Wed, 01 Mar 2017 04:29:18 GMT):
Hi all, in fabric, the block order is provided by orderer (using sbft/kafka), does it mean each block will not & cannot have information from the previous block?
kostas (Wed, 01 Mar 2017 05:49:42 GMT):
@levinkwong: Block n contains a hash of block n-1, and that's it.
levinkwong (Wed, 01 Mar 2017 05:55:55 GMT):
@kostas Thanks!
levinkwong (Wed, 01 Mar 2017 05:58:25 GMT):
Do we have any paper/article to reference for our sbft algorithm?
kostas (Wed, 01 Mar 2017 07:16:03 GMT):
It's PBFT plus these changes: https://jira.hyperledger.org/browse/FAB-378
levinkwong (Wed, 01 Mar 2017 07:23:33 GMT):
Thanks
levinkwong (Wed, 01 Mar 2017 07:39:04 GMT):
Is Kafka built-in/auto-created when I start a peer with the orderer role? My understanding is that it is not; we need to specify the Kafka broker addresses in the config
kostas (Wed, 01 Mar 2017 07:42:39 GMT):
Your understanding is correct.
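For reference, pointing the orderer at a Kafka cluster is just configuration. A sketch of the relevant fragment (key names approximate the sample orderer.yaml of this period and may differ across versions; the broker addresses are placeholders):

```
# Sketch only: key names approximate the sample orderer.yaml and
# may differ across versions; addresses are placeholders.
General:
  OrdererType: kafka
Kafka:
  Brokers:
    - kafka0:9092
    - kafka1:9092
    - kafka2:9092
```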
levinkwong (Wed, 01 Mar 2017 07:44:40 GMT):
@kostas Thanks. Then my concern will be: how is data transferred to and stored in Kafka secured?
kostas (Wed, 01 Mar 2017 07:45:16 GMT):
Not sure where "securing" comes in? Do you mean "happen"?
levinkwong (Wed, 01 Mar 2017 07:47:31 GMT):
I mean the security (sorry for my poor English). Is data (transactions) stored in Kafka encrypted? And is the data transferred through a secured channel?
kostas (Wed, 01 Mar 2017 07:48:46 GMT):
We support TLS connections between the orderers and the Kafka cluster, so the answer is yes.
levinkwong (Wed, 01 Mar 2017 07:49:37 GMT):
And I am also concerned that if I choose sbft, no Kafka cluster is set up; then how is the 'fabric channel' concept achieved? Am I mixing concepts up?
kostas (Wed, 01 Mar 2017 07:53:18 GMT):
No support for channels in SBFT yet. (And when it does come, some asterisks will apply w/r/t trust model.)
kostas (Wed, 01 Mar 2017 07:53:30 GMT):
You do _not_ need a Kafka cluster for the sbft orderer.
levinkwong (Wed, 01 Mar 2017 07:54:23 GMT):
So multi-channel is only supported with Kafka consensus
kostas (Wed, 01 Mar 2017 07:55:24 GMT):
The only production-ready solution for the next few months is going to be Kafka. SBFT to follow up.
DannyWong (Wed, 01 Mar 2017 07:55:58 GMT):
Has joined the channel.
levinkwong (Wed, 01 Mar 2017 07:57:11 GMT):
Thank you!
kostas (Wed, 01 Mar 2017 07:57:32 GMT):
Any time, you're welcome.
levinkwong (Wed, 01 Mar 2017 08:32:24 GMT):
@kostas
Consider I have 3 endorsers endorsed with same transaction, BUT all of them return a different writeset (for example, the chaincode update a field with current time).
How does the orderer fail this transaction?
kostas (Wed, 01 Mar 2017 09:15:25 GMT):
It won't — that's not the orderer's job. This will fail on the committing peers when the validation chaincode execution kicks in.
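In miniature, that commit-time validation is MVCC over the read sets. A toy sketch under those assumptions (the real committing peer also runs the endorsement-policy check, and records the verdicts in the block's TRANSACTIONS_FILTER metadata):

```
package main

import "fmt"

// Version is a toy MVCC version: the height at which a key was last written.
type Version struct{ BlockNum, TxNum uint64 }

// Tx is a toy endorsed transaction: the versions it read at endorsement
// time, and the keys it writes.
type Tx struct {
	ReadSet  map[string]Version
	WriteSet map[string]string
}

// commit validates txs in block order: a tx is valid only if every key
// it read is still at the version it was endorsed against. Returns one
// validity flag per tx.
func commit(state map[string]Version, txs []Tx, blockNum uint64) []bool {
	valid := make([]bool, len(txs))
	for i, tx := range txs {
		ok := true
		for key, readVer := range tx.ReadSet {
			if state[key] != readVer {
				ok = false // stale read: this key was written since endorsement
				break
			}
		}
		valid[i] = ok
		if ok {
			for key := range tx.WriteSet {
				state[key] = Version{BlockNum: blockNum, TxNum: uint64(i)}
			}
		}
	}
	return valid
}

func main() {
	state := map[string]Version{"balance": {1, 0}}
	txs := []Tx{
		{ReadSet: map[string]Version{"balance": {1, 0}}, WriteSet: map[string]string{"balance": "90"}},
		// Endorsed against the same version, which is now stale: flagged invalid.
		{ReadSet: map[string]Version{"balance": {1, 0}}, WriteSet: map[string]string{"balance": "80"}},
	}
	fmt.Println(commit(state, txs, 2)) // prints [true false]
}
```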
levinkwong (Wed, 01 Mar 2017 09:25:35 GMT):
So orderer ledger would contain invalid transactions
antitoine (Wed, 01 Mar 2017 10:41:31 GMT):
Has joined the channel.
kostas (Wed, 01 Mar 2017 10:47:53 GMT):
Correct.
silliman (Wed, 01 Mar 2017 17:29:01 GMT):
@kostas @levinkwong Wouldn't it usually be the case that if 3 endorsers all returned a different write set, that a well thought-out application would decide something is wrong and not even bother sending an endorsement to the orderer?
kostas (Wed, 01 Mar 2017 17:46:47 GMT):
@silliman: This would be the case indeed. (This is not what the original question asked though.)
silliman (Wed, 01 Mar 2017 18:07:27 GMT):
@kostas understood re the original question... just checking on my understanding of the whole process :-)
kostas (Wed, 01 Mar 2017 18:10:00 GMT):
Gotcha. You are absolutely right :v:
Donald Liu (Thu, 02 Mar 2017 01:23:19 GMT):
Has joined the channel.
baohua (Thu, 02 Mar 2017 01:54:01 GMT):
Has joined the channel.
baohua (Thu, 02 Mar 2017 01:54:32 GMT):
hi, anyone know if we have a workable example with SBFT enabled now? Only saw solo and kafka demo yet. thanks!
DannyWong (Thu, 02 Mar 2017 02:48:01 GMT):
@baohua just scroll up a little bit; we just learned that SBFT is likely not going to be ready for the upcoming few months
baohua (Thu, 02 Mar 2017 02:48:34 GMT):
thanks @DannyWong :)
Liew.SC (Thu, 02 Mar 2017 02:57:28 GMT):
Has joined the channel.
WeiHu (Thu, 02 Mar 2017 07:29:41 GMT):
Has joined the channel.
icordoba (Thu, 02 Mar 2017 12:05:34 GMT):
Has joined the channel.
tom.appleyard (Thu, 02 Mar 2017 12:43:33 GMT):
Is this still an accurate description of how consensus works:
https://github.com/hyperledger/fabric/blob/master/proposals/r1/Next-Consensus-Architecture-Proposal.md
kostas (Thu, 02 Mar 2017 13:53:06 GMT):
@tom.appleyard: Yes.
DG0011 (Thu, 02 Mar 2017 17:07:49 GMT):
Has joined the channel.
DG0011 (Thu, 02 Mar 2017 17:09:08 GMT):
Can anyone let me know what consensus algorithms are supported in current state of fabric 1.0? Solo, kafka and PBFT? Also, I assume consensus is pluggable?
kostas (Thu, 02 Mar 2017 17:13:29 GMT):
@DG0011: Consensus is most certainly pluggable. Solo is good to go but is meant for development and end-to-end testing; it is not a consensus algorithm per se. Kafka will be the production-ready solution (though there are still defects I need to fix). SBFT, which is like PBFT with a few modifications, will follow suit later in the year (the logic is there; it's just the proper integration that needs to happen).
DG0011 (Thu, 02 Mar 2017 17:15:05 GMT):
thank you. When will production-ready Kafka be available? Is there a doc I can read in the meanwhile?
weeds (Fri, 03 Mar 2017 03:14:45 GMT):
@DG0011 you can go look at the following https://docs.google.com/document/d/1vNMaM7XhOlu9tB_10dKnlrhy5d7b1u8lSY8a-kVjCO4/edit
s.narayanan (Fri, 03 Mar 2017 12:01:42 GMT):
Within a blockchain network, assuming you have multiple chains, can you use different ordering service implementations across chains (e.g. Kafka for one chain, SBFT for another), or is the choice of implementation common to all chains? I presume it is the latter, since the ordering service is used to define channels for all chains (i.e. in some sense a centralized entity) and thus the choice of underlying implementation can only be made once for a network
weeds (Fri, 03 Mar 2017 13:27:33 GMT):
@s.narayanan For version 1.0, I believe they are just now trying to get in the code to be able to have multiple ordering service support. I believe they are trying to get the code in at the gossip layer still and of course will have to be tested. I think they are tracking this in 6757 in gerrit, but @yacovm would know
yacovm (Fri, 03 Mar 2017 13:28:46 GMT):
https://gerrit.hyperledger.org/r/#/c/6757/ indeed addresses this issue, implemented by Artem.
weeds (Fri, 03 Mar 2017 13:28:48 GMT):
@DG0011 I should have also pointed you to the very top level description also for consensus in the publications.- http://hyperledger-fabric.readthedocs.io/en/latest/fabric_model.html#consensus
DG0011 (Fri, 03 Mar 2017 16:06:17 GMT):
Thanks @weeds
kelly_ (Fri, 03 Mar 2017 17:27:15 GMT):
trying this here - I have a question regarding the ordering service on Fabric. What 'validity' checks do the orderers do? I know that endorsers sign transactions first; do orderers check these signatures? How do the orderers know what signatures to check?
kelly_ (Fri, 03 Mar 2017 17:29:48 GMT):
or do they just order opaque transaction identifiers with zero checks?
yacovm (Fri, 03 Mar 2017 17:41:08 GMT):
@kelly_ , I'm not from the consensus team, but I think they have policy checks on the broadcast messages sent to the ordering service, and some of them are signature policies. I might be wrong; @kostas \ @jyellick can correct me if so
jyellick (Fri, 03 Mar 2017 17:43:29 GMT):
@kelly_ For each channel, there is a policy which defines who may submit transactions to ordering, and who may receive blocks from ordering. These may be set to very specific rules, but, by default, any submitter who meets the Writers policy for any organization is allowed to submit, and any requester which meets the Readers policy for any organization is allowed to receive blocks.
jyellick (Fri, 03 Mar 2017 17:43:51 GMT):
This is entirely configurable however as needed by specific use cases.
szaidi (Fri, 03 Mar 2017 17:48:14 GMT):
Has joined the channel.
kelly_ (Fri, 03 Mar 2017 18:23:16 GMT):
thanks @jyellick @yacovm
s.narayanan (Fri, 03 Mar 2017 20:05:13 GMT):
@weeds my question was more around whether one can have multiple implementations of the ordering service by channel, for instance a set of peers on one channel uses a Kafka-based orderer while another (or the same) set of peers on another channel uses SBFT, etc.
kostas (Fri, 03 Mar 2017 20:21:31 GMT):
@s.narayanan: The answer to this, at least for now, is no.
s.narayanan (Fri, 03 Mar 2017 20:23:30 GMT):
@kostas thanks.
toddinpal (Fri, 03 Mar 2017 22:12:38 GMT):
Has joined the channel.
rrader (Sat, 04 Mar 2017 14:24:43 GMT):
Correct me if I am wrong, or confirm, is consensus synchronizing history between peers?
weeds (Sat, 04 Mar 2017 15:04:02 GMT):
@rrader I think you may be asking about master branch- version 1.0- http://hyperledger-fabric.readthedocs.io/en/latest/txflow.html This describes the transaction flow.
yacovm (Sat, 04 Mar 2017 15:05:17 GMT):
@rrader The gossip layer is in charge of keeping peers in sync
weeds (Sat, 04 Mar 2017 15:07:14 GMT):
@nickgaski Correct, Yacov- which means we should update the doc, as I don't see it in there anywhere now that I look at it
nickgaski (Sat, 04 Mar 2017 15:07:14 GMT):
Has joined the channel.
rrader (Sat, 04 Mar 2017 15:09:55 GMT):
sorry, I forgot to mention, I am asking about v0.6
yacovm (Sat, 04 Mar 2017 15:14:30 GMT):
So for v0.6 there is some state synchronization between peers. It is not using consensus from what I know
weeds (Sat, 04 Mar 2017 15:22:58 GMT):
For 1.0- Material on gossip- https://jira.hyperledger.org/browse/FAB-170?jql=text%20~%20%22gossip%22
weeds (Sat, 04 Mar 2017 15:24:44 GMT):
In version 0.6- they utilize PBFT-
weeds (Sat, 04 Mar 2017 15:25:28 GMT):
Practical Byzantine Fault Tolerance (PBFT) is one flavor of consensus protocol. The function of a consensus protocol is to maintain the order of transactions on a blockchain network, despite threats to this order. One such threat is the arbitrary concurrent failure (a type of Byzantine fault) of multiple network nodes. Using PBFT, a blockchain network of N nodes can withstand f Byzantine nodes, where f = (N-1)/3. In other words, PBFT ensures that a minimum of 2*f + 1 nodes reach consensus on the order of transactions before appending them to the shared ledger. Either formula reflects the rule that a PBFT network guarantees data consistency and integrity despite Byzantine faults on fewer than one-third of all network nodes.
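To make the arithmetic concrete, here is a small illustrative Go snippet (not Fabric code) computing the tolerated fault count and quorum size for a few network sizes:

```go
package main

import "fmt"

// maxFaulty returns the maximum number of Byzantine nodes f that a
// PBFT network of n nodes can tolerate: f = floor((n-1)/3).
func maxFaulty(n int) int { return (n - 1) / 3 }

// quorum returns the minimum number of matching votes (2f+1) needed
// for the network to agree on transaction order.
func quorum(n int) int { return 2*maxFaulty(n) + 1 }

func main() {
	for _, n := range []int{4, 5, 7, 10} {
		fmt.Printf("n=%d f=%d quorum=%d\n", n, maxFaulty(n), quorum(n))
	}
}
```

Note that n=4 and n=5 both tolerate only f=1, which is why a 5-peer network fails when two peers are stopped.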
jyellick (Sat, 04 Mar 2017 19:55:24 GMT):
@rrader In v0.6, PBFT is used to select state transfer targets. These targets are a block number and hash. Consensus assures that the state transfer target is valid (because the consensus protocol agrees it is valid). Then, code outside of consensus can take a known block hash, and perform the state transfer safely without having to actually consent on each piece of state transferred, because of the hash-chain nature of the system.
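The hash-chain argument above can be sketched in a few lines of Go; the types and functions here are hypothetical stand-ins, not Fabric's actual state-transfer code. Given a target hash agreed upon by consensus, every fetched block can be verified against the chain without re-running consensus per block:

```go
package main

import (
	"crypto/sha256"
	"fmt"
)

// block is a toy stand-in for a ledger block.
type block struct {
	number   uint64
	prevHash [32]byte
	data     string
}

// hashBlock chains each block to its predecessor's hash.
func hashBlock(b block) [32]byte {
	return sha256.Sum256(append(b.prevHash[:], []byte(b.data)...))
}

// verifyChain checks that the blocks link up and end at the trusted
// target hash that consensus agreed upon.
func verifyChain(blocks []block, target [32]byte) bool {
	var prev [32]byte
	for _, b := range blocks {
		if b.prevHash != prev {
			return false
		}
		prev = hashBlock(b)
	}
	return prev == target
}

func main() {
	b0 := block{0, [32]byte{}, "genesis"}
	b1 := block{1, hashBlock(b0), "tx batch 1"}
	target := hashBlock(b1) // the state transfer target from consensus
	fmt.Println(verifyChain([]block{b0, b1}, target))
}
```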
rrader (Sat, 04 Mar 2017 20:45:56 GMT):
@yacovm @jyellick I want to understand where are stored peers, from this picture https://www.altoros.com/blog/wp-content/uploads/2017/02/Hyperledger-Webinar-Thomas-Marckx-architecture.jpg
I understand that peers are stored in Hyperledger Network, am I correct?
yacovm (Sat, 04 Mar 2017 20:47:02 GMT):
A peer is just a server as well as a p2p node
yacovm (Sat, 04 Mar 2017 20:47:21 GMT):
It can run wherever you want it to
yacovm (Sat, 04 Mar 2017 20:47:44 GMT):
the hyperledger network is defined by the peers, not the other way around
rrader (Sat, 04 Mar 2017 21:02:05 GMT):
I thought they all are stored on one machine, but if they all are distributed in NET a transaction could take some time if more than 50% will be offline
as I understand, to validate a transaction must be 50%+1 online and approve the transaction
yacovm (Sat, 04 Mar 2017 21:04:23 GMT):
if the peers are configured to use pbft (v0.6) then if you have `f` peers that are down, you need `f+1` peers alive
yacovm (Sat, 04 Mar 2017 21:05:31 GMT):
depends
yacovm (Sat, 04 Mar 2017 21:06:37 GMT):
if you have `n` peers then you can have `(n-1) / 3` of them be "dead" if you use pbft in v0.6
jyellick (Sat, 04 Mar 2017 21:11:42 GMT):
@rrader For v0.6, the guarantees are per the PBFT consensus algorithm. The PBFT consensus algorithm is designed to tolerate up to `f` failures. The minimal number of peers required to tolerate `f` failures is `3f+1`. This is where @yacovm is getting the `(n-1)/3` number, and for most configurations, this will be true though in some novel cases, it might not be. For details on PBFT, you can see this paper: http://research.microsoft.com/en-us/um/people/mcastro/publications/p398-castro-bft-tocs.pdf
jyellick (Sat, 04 Mar 2017 21:15:32 GMT):
PBFT uses the notion of quorum certs, which requires `2f+1` votes, and weak certs, which require `f+1` votes. In general, for the network to process transactions, there must be enough non-byzantine peers to reach a quorum cert of `2f+1`, however, for the network to recover after a failure, we only require a weak checkpoint certificate which is `f+1`.
jyellick (Sat, 04 Mar 2017 21:17:02 GMT):
So, to put this more concretely, in a `4` peer network, so long as `3` are good, the network will continue to make progress. So long as `2` are good, the network can always catch up a peer which is rejoining (enabling it to become that third good member, and have the network begin to make progress again).
rrader (Sat, 04 Mar 2017 21:25:35 GMT):
thanks
So Hyperledger Network knows how many peers are in network, then a peer must to register in the network?
jyellick (Sat, 04 Mar 2017 21:27:35 GMT):
@rrader The hyperledger fabric is targeting permissioned networks first, and the PBFT consensus algorithm requires that the number of nodes and identities of those nodes be known ahead of time, but our consensus design is intended to be pluggable and at some point in the future we may support other non-permissioned consensus types.
jyellick (Sat, 04 Mar 2017 21:28:43 GMT):
In the case of v0.6, peers all participate in consensus. In v1, only dedicated ordering nodes participate in consensus, so that the number of peers can scale independent of the consensus algorithm. (And therefore, the number of peers, and the need for a peer to register in a network becomes much more flexible).
rrader (Sat, 04 Mar 2017 22:11:00 GMT):
@jyellick thanks
By identities you mean users that can add transactions to peers?
jyellick (Sun, 05 Mar 2017 16:00:12 GMT):
Assuming you are referring to identities here:
> the PBFT consensus algorithm requires that the number of nodes and identities of those nodes be known ahead of time
Then the intent was the cryptographic identity (public key) of each node to participate in the PBFT consensus. I meant this to imply that public key distribution must be handled out of band to ensure that the system is secure and that some other node cannot impersonate one of the PBFT nodes.
MadhavaReddy (Mon, 06 Mar 2017 17:55:28 GMT):
Hi All, am referring to 0.6, i created a network with 5 peers with PBFT, the consensus is working when i stop one peer however it's not working when i stop two peers, but ideally it should work right because n-1/3 ( 5-2/3 ), could you please clarify why it's failing
kostas (Mon, 06 Mar 2017 18:08:08 GMT):
To tolerate a failure of two (f=2) peers you need a minimum of 3*f+1 = 7, not 5 peers.
tbrooke (Tue, 07 Mar 2017 01:40:20 GMT):
Has joined the channel.
MadhavaReddy (Tue, 07 Mar 2017 04:40:01 GMT):
Thanks @kostas
padmaja (Tue, 07 Mar 2017 06:00:01 GMT):
Has joined the channel.
MadhavaReddy (Tue, 07 Mar 2017 09:10:23 GMT):
Hi All, am seeing in all PBFT examples with peers as docker containers, would like to know the setup/steps with different nodes
MadhavaReddy (Tue, 07 Mar 2017 10:26:29 GMT):
also, as i tested PBFT with 4 peers and vp0 being the master, when i stop vp0 the network is not working though there are still three nodes running; in PBFT, is the master node fixed?
jyellick (Tue, 07 Mar 2017 14:04:40 GMT):
@MadhavaReddy How is the network not working? Are you certain you are sending requests to one of the remaining three nodes? The PBFT network should elect a new leader on failure.
s.narayanan (Tue, 07 Mar 2017 14:59:23 GMT):
Is there any documentation on SBFT and how it differs from PBFT?
kostas (Tue, 07 Mar 2017 15:36:46 GMT):
@s.narayanan: https://jira.hyperledger.org/browse/FAB-378
mlishok (Tue, 07 Mar 2017 17:11:17 GMT):
Has joined the channel.
MadhavaReddy (Tue, 07 Mar 2017 18:04:09 GMT):
@jyellick in an invoke or query function call we don't provide a node name right, we just provide the chaincode id, function name and args
jyellick (Tue, 07 Mar 2017 18:05:25 GMT):
@MadhavaReddy That invoke/query must be being routed to some particular TCP host, I suspect that ultimately, it is attempting to connect to the peer which is down.
MadhavaReddy (Tue, 07 Mar 2017 18:08:48 GMT):
@jyellick in the docker-compose file what i see is each peer is linked to one peer which is vp0, is that the reason why it's trying to connect to vp0 though it's down
MadhavaReddy (Tue, 07 Mar 2017 18:11:56 GMT):
Message Attachments
MadhavaReddy (Tue, 07 Mar 2017 18:12:24 GMT):
@jyellick attached is the docker-compose yml file am using to run the network
divyank (Tue, 07 Mar 2017 20:21:00 GMT):
Has joined the channel.
divyank (Tue, 07 Mar 2017 20:22:17 GMT):
I noticed that the batch size and timeout configs in orderer.yaml have been deprecated. How is this configured?
kostas (Tue, 07 Mar 2017 21:15:40 GMT):
@divyank: These are settings that need to be common for all ordering service nodes and thus shared in the genesis block. Modify them in `configtx.yaml`: https://github.com/hyperledger/fabric/blob/master/common/configtx/tool/configtx.yaml
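For reference, the relevant section of `configtx.yaml` looks roughly like the following (field values are illustrative; check the file shipped with your Fabric version):

```yaml
Orderer:
    # BatchTimeout: how long to wait after the first transaction
    # arrives before cutting a block, if the batch is not yet full.
    BatchTimeout: 2s
    BatchSize:
        # Cut a block once this many messages have been collected.
        MaxMessageCount: 10
        # Hard cap on the serialized size of a block.
        AbsoluteMaxBytes: 10 MB
        # Soft target size for a block.
        PreferredMaxBytes: 512 KB
```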
divyank (Tue, 07 Mar 2017 21:20:57 GMT):
Thank you
s.narayanan (Tue, 07 Mar 2017 21:47:05 GMT):
@kostas - I reviewed the description in https://jira.hyperledger.org/browse/FAB-378. I am not fully clear on the underlying differences. One clarification on quorum size though: it appears that with SBFT the quorum is reduced to f+1 instead of 2f+1 (as needed in PBFT). Is this correct?
grapebaba (Wed, 08 Mar 2017 00:53:05 GMT):
@kostas did we have a multiple orderers example docker compose file?
kostas (Wed, 08 Mar 2017 02:31:51 GMT):
@s.narayanan: @vukolic is the expert on SBFT and can shed more light on it. Note that the f+1 quorum refers to _signed_ messages and a fourth phase.
kostas (Wed, 08 Mar 2017 02:42:08 GMT):
@grapebaba: We do. Look for `orderer-n-kafka-n` in the `bddtests/environments` directory and follow the instructions in the README file. I see an obvious mistake in the `docker-compose.yml` file that was added via a changeset I was unaware of. I'll fix it first thing tomorrow morning. For now, remove the `container_name: orderer` line.
grapebaba (Wed, 08 Mar 2017 02:45:45 GMT):
thanks, will look
grapebaba (Wed, 08 Mar 2017 02:46:31 GMT):
also did we have an example docker compose for multiple channels ?
kostas (Wed, 08 Mar 2017 03:17:11 GMT):
Not that I know of. Maybe @scottz does?
MadhavaReddy (Wed, 08 Mar 2017 05:56:37 GMT):
Hi All, am seeing in all PBFT examples the peers as docker containers in one VM; are there any instructions on how to do the same setup when peers are running on different VMs ( 4 VMs and each peer running on one VM )
danielleekc (Wed, 08 Mar 2017 06:44:57 GMT):
Hi all, I currently setup PBFT network of 5 Peers (hyperledger 0.6).
Test scenario:
For Nodes A,B,C,D,E.
Node A was shutdown and restarted.
There is no transaction on other Nodes during A's down time.
The example chaincode02 is being used in this case.
Assumed that we have a=100 , b=100 at this point.
(i)
Then, I send an Invoke request (a gives 10 to b) to Node C.
(ii)
I send query requests to Nodes B and C, they return a=110.
Then, I send a query request to Node A, it returns 100.
After waiting for a while, I send a query request to Node A again.
An error message is returned:
```
{"jsonrpc":"2.0","error":{"code":-32003,"message":"Query failure","data":"Error when querying chaincode: Error: state may be inconsistent, cannot query"},"id":5}
```
(iii)
After about 1~2 minutes,
Node A returns the correct value a=110.
And then I try to repeat (i) and (ii),(iii) happened again.
What is this situation and how can I fix this delay or inconsistent state?
berserkr (Wed, 08 Mar 2017 08:14:05 GMT):
Has joined the channel.
Suma (Wed, 08 Mar 2017 15:42:45 GMT):
Has joined the channel.
mdozturk (Wed, 08 Mar 2017 17:18:40 GMT):
Has joined the channel.
mdozturk (Wed, 08 Mar 2017 17:23:28 GMT):
Say we have a hyperledger network with 50 nodes that represents libraries in a city (say New York). Mary and Paul both check out the same book at approximately the same time. Mary makes the request to one node, and Paul to another. 25 nodes think Mary got the book and 25 other nodes think Paul did. How does the PBFT algorithm deal with this case?
jyellick (Wed, 08 Mar 2017 17:24:02 GMT):
@danielleekc please see https://jira.hyperledger.org/browse/FAB-707 and https://github.com/hyperledger-archives/fabric/issues/1120
jyellick (Wed, 08 Mar 2017 17:24:35 GMT):
Essentially, the PBFT protocol is designed to allow some nodes to fall behind, because it is very difficult to differentiate between slow and byzantine nodes.
jyellick (Wed, 08 Mar 2017 17:24:53 GMT):
So, if you wish to ensure you are getting the most current value, you need to do a strong read across at least f+1 nodes.
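The f+1 strong-read idea can be sketched as follows (illustrative Go, not a Fabric client API): a value is trusted only once f+1 peers report it identically, since at most f peers can be faulty or stale:

```go
package main

import "fmt"

// reply is a hypothetical peer response: a value plus the block
// height the peer claims to have committed up to.
type reply struct {
	value  string
	height uint64
}

// strongRead returns a value only once f+1 peers report the same
// (value, height) pair, so at least one honest, up-to-date peer
// must be among them.
func strongRead(replies []reply, f int) (string, bool) {
	counts := map[reply]int{}
	for _, r := range replies {
		counts[r]++
		if counts[r] >= f+1 {
			return r.value, true
		}
	}
	return "", false
}

func main() {
	// Node A is lagging (a=100 at height 6); B and C agree at height 7.
	replies := []reply{{"a=110", 7}, {"a=100", 6}, {"a=110", 7}}
	v, ok := strongRead(replies, 1) // f = 1
	fmt.Println(v, ok)
}
```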
mdozturk (Wed, 08 Mar 2017 17:25:07 GMT):
but this is a write issue
mdozturk (Wed, 08 Mar 2017 17:25:33 GMT):
I assume Paul's request and Mary's requests trickles down to the other nodes
mdozturk (Wed, 08 Mar 2017 17:25:38 GMT):
at an equal rate
jyellick (Wed, 08 Mar 2017 17:25:51 GMT):
@mdozturk This is the purpose of consensus, it provides a total ordering for transactions to ensure that all nodes have the same view of the order of transactions, and can therefore deterministically compute the state
mdozturk (Wed, 08 Mar 2017 17:26:19 GMT):
who orders transactions?
jyellick (Wed, 08 Mar 2017 17:26:50 GMT):
All nodes participating in consensus
jyellick (Wed, 08 Mar 2017 17:27:13 GMT):
In 0.6, this is every peer. In v1, this is the ordering nodes
mdozturk (Wed, 08 Mar 2017 17:27:26 GMT):
50% of the nodes think the order is Mary, Paul, the remaining think its Paul, Mary. How does this case get resolved?
jyellick (Wed, 08 Mar 2017 17:28:48 GMT):
That is not the case. You can think of it as "50% of the nodes hear the transactions in the order Mary, Paul, 50% in the order Paul, Mary. They then perform a consensus algorithm to determine whether the order will be Mary, Paul, or Paul, Mary. The outcome of the consensus algorithm determines who actually gets the book"
mdozturk (Wed, 08 Mar 2017 17:29:29 GMT):
The PBFT assumes that only 1/3 of the nodes will be wrong though
jyellick (Wed, 08 Mar 2017 17:29:43 GMT):
PBFT assumes only f nodes are byzantine
mdozturk (Wed, 08 Mar 2017 17:29:46 GMT):
so it seems like it cannot handle the 1/2 wrong case
jyellick (Wed, 08 Mar 2017 17:30:07 GMT):
Byzantine and 'wrong' as being used here are different notions.
jyellick (Wed, 08 Mar 2017 17:30:24 GMT):
When a node hears "Mary wants the book", the node submits that transaction for ordering by consensus
jyellick (Wed, 08 Mar 2017 17:30:37 GMT):
When a node hears "Paul wants the book", the node submits that transaction for ordering by consensus
jyellick (Wed, 08 Mar 2017 17:31:03 GMT):
The PBFT consensus ensures that all honest nodes in the network will agree on the resulting order, either Paul, Mary, or Mary, Paul
mdozturk (Wed, 08 Mar 2017 17:31:19 GMT):
Let's assume everyone is honest
jyellick (Wed, 08 Mar 2017 17:31:27 GMT):
This guarantee holds true so long as there are no more than f byzantine members of the network, trying to split the decision.
jyellick (Wed, 08 Mar 2017 17:31:37 GMT):
In the case that everyone is honest, then all nodes will agree on the same order.
mdozturk (Wed, 08 Mar 2017 17:31:45 GMT):
how do they reach a consensus when 50% think Paul ordered first and 50% think Mary orrdered first
jyellick (Wed, 08 Mar 2017 17:32:04 GMT):
This is the nature of a consensus protocol
jyellick (Wed, 08 Mar 2017 17:32:23 GMT):
If you would like details, you may read http://research.microsoft.com/en-us/um/people/mcastro/publications/p398-castro-bft-tocs.pdf
mdozturk (Wed, 08 Mar 2017 17:32:51 GMT):
I was reading though that and was surprised to hear 1/3 is the cutoff
jyellick (Wed, 08 Mar 2017 17:33:01 GMT):
There are other similar leader based protocols, like https://en.wikipedia.org/wiki/Paxos_(computer_science)
mdozturk (Wed, 08 Mar 2017 17:33:17 GMT):
where there seems there are practical cases where half of the network will disagree with the other half
jyellick (Wed, 08 Mar 2017 17:33:33 GMT):
PBFT is about tolerating byzantine members. IE members who will lie to try to convince some members the order is one way, and other members the order is another way
jyellick (Wed, 08 Mar 2017 17:33:49 GMT):
Disagreement is fine, the consensus algorithm resolves it.
mdozturk (Wed, 08 Mar 2017 17:34:00 GMT):
it doesn't seem to in the case I mentioned
jyellick (Wed, 08 Mar 2017 17:34:42 GMT):
You may find these articles helpful https://en.wikipedia.org/wiki/Atomic_broadcast https://en.wikipedia.org/wiki/Consensus_(computer_science)
mdozturk (Wed, 08 Mar 2017 17:34:54 GMT):
Thanks, I appreciate it
MadhavaReddy (Wed, 08 Mar 2017 17:48:37 GMT):
@jyellick in 0.6 it's PBFT, however in 1.0 it's SBFT; will SBFT work the same as PBFT, i.e. tolerating byzantine members?
jyellick (Wed, 08 Mar 2017 18:08:30 GMT):
@MadhavaReddy correct, SBFT is simply a PBFT variant that makes some simplifying assumptions based on things like FIFO links which the original Castro paper does not
jyellick (Wed, 08 Mar 2017 18:09:19 GMT):
It is still a byzantine fault tolerant consensus protocol
MadhavaReddy (Wed, 08 Mar 2017 18:15:09 GMT):
ok, does it verify all the endorsers' responses after simulation? i mean, does it verify the world state of a transaction from all endorsers before it submits to peers for commit
MadhavaReddy (Wed, 08 Mar 2017 18:16:35 GMT):
apart from the ordering of the transactions
jyellick (Wed, 08 Mar 2017 18:19:14 GMT):
@MadhavaReddy The ordering service verifies that the transaction submitters are authorized to write transactions onto the channel. The actual endorsement checking and MVCC conflict resolution etc. is handled at the committing peer.
jorgedr (Thu, 09 Mar 2017 00:03:42 GMT):
Has joined the channel.
xiangyw (Thu, 09 Mar 2017 01:35:08 GMT):
Has joined the channel.
MadhavaReddy (Thu, 09 Mar 2017 03:21:13 GMT):
@jyellick as per the v1.0 arch the client submits transactions to endorsement nodes first; after simulation the response gets submitted to the orderer. so why does the endorser even need to execute the transaction, knowing that the orderer will verify the authorization to write the transaction onto the channel ( does the orderer reject transactions if submitters are not authorized to write transactions? )
jyellick (Thu, 09 Mar 2017 03:22:53 GMT):
@MadhavaReddy The endorsement process is about getting the result of the transaction attested to by an authorized peer (or peers). The orderer is only verifying that the submitter is someone that is authorized to transact on the channel, nothing about the result of that transaction
MadhavaReddy (Thu, 09 Mar 2017 03:26:48 GMT):
ok thank you, still one small clarification, between endorsement process & orderer which one gets executed first?
MadhavaReddy (Thu, 09 Mar 2017 03:28:46 GMT):
as per 1.0 arch endorser peer/peers gets the transactions and do the simulation
jyellick (Thu, 09 Mar 2017 03:29:37 GMT):
@MadhavaReddy
1. The client creates a proposal
2. The client sends the proposal to 1 or more endorsers, which execute the chaincode and produce proposal results
3. The client combines the proposal results to form a transaction which is sent to ordering
4. The orderer network validates that the transaction is authorized for a given channel, then consents upon the order of that transaction relative to others
5. The peer network receives a block which contains the transaction
6. The peer network validates that the transaction is valid, and if so, applies its result to the peer state
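The six steps above can be sketched as a toy Go program; every type and function here is a hypothetical stand-in, not a Fabric API:

```go
package main

import "fmt"

type proposal struct{ chaincode, fn string }

type endorsement struct {
	result string
	signer string
}

type transaction struct{ endorsements []endorsement }

// endorse models step 2: the endorser simulates the chaincode and
// signs the proposal result.
func endorse(p proposal, peer string) endorsement {
	return endorsement{result: p.fn + "-result", signer: peer}
}

// order models steps 4-5: the ordering service batches submitted
// transactions into blocks (here, one block holding everything).
func order(txs []transaction) [][]transaction {
	return [][]transaction{txs}
}

// validate models step 6: committing peers check the endorsement
// policy (here, simply "at least `policy` endorsements").
func validate(t transaction, policy int) bool {
	return len(t.endorsements) >= policy
}

func main() {
	p := proposal{"mycc", "invoke"}                // step 1
	tx := transaction{[]endorsement{               // steps 2-3
		endorse(p, "peer0"), endorse(p, "peer1"),
	}}
	for _, blk := range order([]transaction{tx}) { // steps 4-5
		for _, t := range blk {
			fmt.Println("valid:", validate(t, 2)) // step 6
		}
	}
}
```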
MadhavaReddy (Thu, 09 Mar 2017 03:31:51 GMT):
knowing that (referring to the 4th point), why do endorsers even need to execute the transaction? what will the orderer do if authorization fails
jyellick (Thu, 09 Mar 2017 03:32:56 GMT):
In (4) the orderer network is validating that the submitter is authorized to transact on the channel, not whether the transaction is valid or not.
jyellick (Thu, 09 Mar 2017 03:33:31 GMT):
If the submitter is not authorized to transact on the channel, the orderer will reject it, and not consent on its order (and it will never appear in the blockchain)
jyellick (Thu, 09 Mar 2017 03:35:13 GMT):
You can imagine that a user might be authorized to transact on a network, but, might for instance submit a transaction which is not valid after ordering
MadhavaReddy (Thu, 09 Mar 2017 03:35:27 GMT):
why can't this be done as a first step? why can't the orderer verify the authorization first and then submit the transaction to the endorser for execution, rather than rejecting transactions after the first 3 steps have executed
jyellick (Thu, 09 Mar 2017 03:38:21 GMT):
This is one of the features of the hyperledger fabric. By executing before ordering, and utilizing deterministic transactions via MVCC+postimage, we can achieve determinism without requiring that the chaincode is deterministic. This is in contrast to other blockchain technologies which require that the logic be written in special purpose deterministic languages. By doing endorsement, then ordering, then committing, developers can write their code in conventional non-deterministic languages like golang or java without worrying about non-deterministic results.
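A minimal sketch of the MVCC check described here, reusing the Mary/Paul book example from earlier (illustrative Go, not Fabric's actual validation code):

```go
package main

import "fmt"

// tx carries the read set (key versions observed at endorsement)
// and the write set (the postimage) of a transaction.
type tx struct {
	readSet  map[string]uint64
	writeSet map[string]string
}

// state is a toy versioned key-value store.
type state struct {
	values   map[string]string
	versions map[string]uint64
}

// commit applies the write set only if every read-set version still
// matches the current state; otherwise the tx is an MVCC conflict.
func (s *state) commit(t tx) bool {
	for k, v := range t.readSet {
		if s.versions[k] != v {
			return false // stale read: invalidated at commit time
		}
	}
	for k, val := range t.writeSet {
		s.values[k] = val
		s.versions[k]++
	}
	return true
}

func main() {
	s := &state{
		values:   map[string]string{"book": "free"},
		versions: map[string]uint64{"book": 1},
	}
	mary := tx{map[string]uint64{"book": 1}, map[string]string{"book": "mary"}}
	paul := tx{map[string]uint64{"book": 1}, map[string]string{"book": "paul"}}
	fmt.Println(s.commit(mary)) // whichever tx is ordered first commits
	fmt.Println(s.commit(paul)) // the second reads a stale version and fails
}
```

Whichever transaction consensus orders first wins the book; the other is deterministically invalidated on every peer.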
MadhavaReddy (Thu, 09 Mar 2017 03:40:07 GMT):
Thanks @jyellick let me understand this need more time :-) i mean your last comments
jeffchi (Thu, 09 Mar 2017 04:25:06 GMT):
Has joined the channel.
berserkr (Thu, 09 Mar 2017 05:02:30 GMT):
for # 4 here, the orderers append the tx to the channel block right?
berserkr (Thu, 09 Mar 2017 05:02:35 GMT):
one thing that is not clear is
berserkr (Thu, 09 Mar 2017 05:02:51 GMT):
do orderers wait until the block is created with some predetermined # of tx?
berserkr (Thu, 09 Mar 2017 05:03:23 GMT):
then send the block to the peer network, which validates the tx and if valid applies the write set
berserkr (Thu, 09 Mar 2017 05:03:27 GMT):
is that correct?
berserkr (Thu, 09 Mar 2017 05:03:35 GMT):
there is no notion of dropped blocks right?
berserkr (Thu, 09 Mar 2017 05:03:55 GMT):
if a tx is appended to a block, can it be invalidated upon peer verification?
xiangyw (Thu, 09 Mar 2017 05:50:17 GMT):
[ ](https://chat.hyperledger.org/channel/fabric-consensus?msg=4oYLoLoxpRE2baJx8) @jyellick hi, where and when does consensus happen?
berserkr (Thu, 09 Mar 2017 05:51:05 GMT):
step 3, at the ordering service level
berserkr (Thu, 09 Mar 2017 05:51:22 GMT):
sorry 3/4, when the order is agreed upon
xiangyw (Thu, 09 Mar 2017 05:52:36 GMT):
if the client sent the proposal to 5 endorsers at step 2, and there are 5 endorsed Txs back at the client, how many messages will be sent to ordering ?
berserkr (Thu, 09 Mar 2017 05:54:03 GMT):
one as far as I am concerned, the client is responsible for that
berserkr (Thu, 09 Mar 2017 05:54:13 GMT):
when it gets back enough messages (endorsements)
berserkr (Thu, 09 Mar 2017 05:54:41 GMT):
https://camo.githubusercontent.com/0abf3d4b04ed9751850fbb45feb2b3b0f55d8d95/687474703a2f2f76756b6f6c69632e636f6d2f68797065726c65646765722f666c6f772d342e706e67
xiangyw (Thu, 09 Mar 2017 05:57:38 GMT):
where, and who, makes sure the 5 endorsed Txs (including the peers' signatures and read sets, write sets) are the same? i mean maybe one endorser is evil
berserkr (Thu, 09 Mar 2017 05:59:05 GMT):
For transaction with a valid endorsement, we now start using the ordering service. The submitting client invokes ordering service using the broadcast(blob), where blob=endorsement. If the client does not have capability of invoking ordering service directly, it may proxy its broadcast through some peer of its choice. Such a peer must be trusted by the client not to remove any message from the endorsement or otherwise the transaction may be deemed invalid. Notice that, however, a proxy peer may not fabricate a valid endorsement.
berserkr (Thu, 09 Mar 2017 06:00:40 GMT):
The collection of signed TRANSACTION-ENDORSED messages from endorsing peers which establish that a transaction is endorsed is called an endorsement and denoted by endorsement
berserkr (Thu, 09 Mar 2017 06:01:06 GMT):
so based on my understanding, when you get endorsements, if they satisfy the policy at hand, it will collect all responses and use the ordering service
berserkr (Thu, 09 Mar 2017 06:01:15 GMT):
which is then responsible for broadcasting to the peers
xiangyw (Thu, 09 Mar 2017 06:50:55 GMT):
@berserkr thanks. as we know, there are noops and pbft in v0.6, but there are solo and kafka in v1.0. solo sounds like the same as noops (one node); what does kafka mean? i didn't get useful info from the fabric main doc
kostas (Thu, 09 Mar 2017 06:56:00 GMT):
@berserkr:
> do orderers wait until the block is created with some predetermined # of tx?
Yes. They wait until a certain number of transactions has been collected (batch size), or a certain amount of time after the first transaction in the new batch/block has elapsed (batch timeout) — whichever comes first.
> if a tx is appended to a block, can it be invalidated upon peer verification?
It can, by the committing peers. See: https://github.com/hyperledger/fabric/blob/master/protos/common/common.proto#L50
kostas (Thu, 09 Mar 2017 07:00:46 GMT):
@xiangyw: @berserkr is correct in that consensus happens (or rather: is reached) during step 4, and in that one message is sent to ordering by a client.
> Where and who makes sure the 5 endorsed Tx (including each peer's signature, read set, and write set) are the same? I mean, maybe one of the endorsers is evil
It is the submitting client's responsibility to attach the right+necessary endorsements to its proposal. (If an endorser is evil, and its endorsement is necessary for the proposal to be validated, then the submitting client is blocked from executing - as they should.)
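The client-side check discussed here can be sketched as follows. The `endorsement` type and `readyToOrder` helper are purely illustrative (the real Fabric protos are far richer): the client collects responses, verifies that all endorsers produced the same read/write set, and only then submits to ordering.

```go
package main

import "fmt"

// endorsement is a hypothetical, simplified endorsement response:
// which peer signed, and the simulated read/write set it produced
// (in practice a hash over the proposal response payload).
type endorsement struct {
	peer  string
	rwSet string
}

// readyToOrder mimics the submitting client's responsibility: enough
// endorsements must be collected, and all of them must agree on the
// read/write set, before the bundle is handed to the ordering service.
func readyToOrder(resps []endorsement, required int) bool {
	if len(resps) < required {
		return false
	}
	for _, r := range resps[1:] {
		if r.rwSet != resps[0].rwSet {
			return false // an endorser diverged (possibly malicious)
		}
	}
	return true
}

func main() {
	resps := []endorsement{
		{"peer0", "abc"}, {"peer1", "abc"}, {"peer2", "xyz"}, // peer2 diverges
	}
	fmt.Println(readyToOrder(resps, 3))    // prints false
	fmt.Println(readyToOrder(resps[:2], 2)) // prints true
}
```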
kostas (Thu, 09 Mar 2017 07:02:41 GMT):
@xiangyw: For the Kafka-based ordering service, you may want to have a look at this doc for now: https://docs.google.com/document/d/1vNMaM7XhOlu9tB_10dKnlrhy5d7b1u8lSY8a-kVjCO4/edit (especially the diagrams). Let me know if there are follow-up questions.
xiangyw (Thu, 09 Mar 2017 07:03:24 GMT):
thanks
xiangyw (Thu, 09 Mar 2017 07:04:48 GMT):
oh, I am behind the Great Wall and can't access it :dizzy_face:
kostas (Thu, 09 Mar 2017 07:05:17 GMT):
Ah, let me paste a PDF doc in here. Just a sec.
kostas (Thu, 09 Mar 2017 07:06:07 GMT):
Message Attachments
kostas (Thu, 09 Mar 2017 07:06:10 GMT):
@xiangyw ^^
xiangyw (Thu, 09 Mar 2017 07:06:31 GMT):
thank you very much
kostas (Thu, 09 Mar 2017 07:06:40 GMT):
Any time, you're welcome.
danielleekc (Thu, 09 Mar 2017 07:09:20 GMT):
@jyellick
I tried another case:
For a PBFT network, N: 5, f: 1, K: 10
(1) Node A,B,C,D are connected and commit transactions successfully.
(2) a new Node E connect to this network.
(3) Node E is connected and has caught up with the world state
assume that a=100, b=100 in the world state right now.
(4) Send invoke request to Node A (a gives 10 to b)
(5) Send query request to Node A,B,C,D, they return a=90
(6) Send query request to Node E, E returns a=100. (The same as the previous case I posted before)
Did I configure correctly?
I wonder whether we can add a new validating node to an existing PBFT network without this problem?
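As a side note, the N and f in that configuration follow the standard PBFT bound N >= 3f + 1, so N:5 still only tolerates f=1. A tiny sketch of that arithmetic (the helper name `maxFaults` is illustrative):

```go
package main

import "fmt"

// maxFaults returns the number of Byzantine faults f that a PBFT network
// of n replicas can tolerate, derived from the requirement n >= 3f + 1.
func maxFaults(n int) int {
	return (n - 1) / 3
}

func main() {
	fmt.Println(maxFaults(4)) // prints 1
	fmt.Println(maxFaults(5)) // prints 1: a fifth node adds no extra tolerance
	fmt.Println(maxFaults(7)) // prints 2
}
```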
xiangyw (Thu, 09 Mar 2017 07:17:00 GMT):
@kostas please paste another doc for me https://docs.google.com/document/d/1GuVNHZ5Jqq-gTVKflnZ1YiJfEoozvugqenC6QEQFQj4/edit?usp=sharing
xiangyw (Thu, 09 Mar 2017 07:17:22 GMT):
about explorer
aberfou (Thu, 09 Mar 2017 13:47:36 GMT):
Has joined the channel.
jyellick (Thu, 09 Mar 2017 14:29:16 GMT):
@danielleekc For v0.6, there is no support for dynamically adding nodes to the network
berserkr (Thu, 09 Mar 2017 15:22:20 GMT):
@kostas @xiangyw , thank you both. Yes, I started reading the ordering service doc, it has the information I was looking for
samirsadeghi (Thu, 09 Mar 2017 18:41:15 GMT):
Has joined the channel.
kelly_ (Thu, 09 Mar 2017 19:12:59 GMT):
@berserkr - is this Danny?
berserkr (Thu, 09 Mar 2017 19:13:29 GMT):
@kelly_ yes :D
kelly_ (Thu, 09 Mar 2017 19:13:48 GMT):
Thought I recognized the Avatar! We should catch up when I'm in Santa Clara next
kelly_ (Thu, 09 Mar 2017 19:14:28 GMT):
hope everything is going well :)
berserkr (Thu, 09 Mar 2017 19:14:50 GMT):
yeah man sounds good!
berserkr (Thu, 09 Mar 2017 19:14:59 GMT):
just let me know when you are around :)
kelly_ (Thu, 09 Mar 2017 19:15:55 GMT):
will you DM me your e-mail?
weeds (Thu, 09 Mar 2017 19:27:20 GMT):
@danielleekc just to be clear, version 1.0 does allow you to dynamically add nodes to the network. We tested this back in December, adding nodes all over the world, hooking them up and running; it was pretty cool seeing it work
berserkr (Thu, 09 Mar 2017 19:29:31 GMT):
is this for the ordering service?
dave.enyeart (Thu, 09 Mar 2017 22:26:58 GMT):
@jyellick @kostas How to configure block cutting default from 10s to something nicer like 2s?
jyellick (Thu, 09 Mar 2017 22:27:26 GMT):
@dave.enyeart How are you bootstrapping your orderer?
dave.enyeart (Thu, 09 Mar 2017 22:27:52 GMT):
`peer channel create -o 127.0.0.1:7050 -c myc1`
dave.enyeart (Thu, 09 Mar 2017 22:28:16 GMT):
default/sample msp
jyellick (Thu, 09 Mar 2017 22:29:04 GMT):
You can try setting:
```
CONFIGTX_ORDERER_BATCHTIMEOUT=2s
```
jyellick (Thu, 09 Mar 2017 22:29:17 GMT):
I believe if you have the most recent level of the peer, this should work for you
dave.enyeart (Thu, 09 Mar 2017 22:29:37 GMT):
ok, so is the info here outdated now?
dave.enyeart (Thu, 09 Mar 2017 22:29:39 GMT):
https://jira.hyperledger.org/browse/FAB-1919
jyellick (Thu, 09 Mar 2017 22:29:39 GMT):
https://gerrit.hyperledger.org/r/#/c/7031/ in particular
jyellick (Thu, 09 Mar 2017 22:30:10 GMT):
Yes, FAB-1919 is indeed quite outdated
jyellick (Thu, 09 Mar 2017 22:31:21 GMT):
Hmmm, actually 7031 may not do it
jyellick (Thu, 09 Mar 2017 22:31:29 GMT):
It may be in another as of yet pending CR
jyellick (Thu, 09 Mar 2017 22:31:33 GMT):
Let me take a look at the code quickly
jyellick (Thu, 09 Mar 2017 22:33:39 GMT):
Okay, yes, it looks like if you set that env variable, or edit `common/configtx/tool/configtx.yaml` to set the appropriate timeout
jyellick (Thu, 09 Mar 2017 22:33:41 GMT):
You should be set
jyellick (Thu, 09 Mar 2017 22:34:09 GMT):
(^ @dave.enyeart )
dave.enyeart (Thu, 09 Mar 2017 22:35:04 GMT):
set the env variable on orderer or upon `peer channel create`?
dave.enyeart (Thu, 09 Mar 2017 22:35:52 GMT):
btw, i'm helping the guys with e2e docker sample
dave.enyeart (Thu, 09 Mar 2017 22:43:04 GMT):
@jyellick I just pulled the latest master and updated configtx.yaml, and now I get an error upon join:
dave.enyeart (Thu, 09 Mar 2017 22:43:10 GMT):
```vagrant@hyperledger-devenv:v0.3.0-1a1467d:/opt/gopath/src/github.com/hyperledger/fabric$ peer channel join -b myc1.block
Error: proposal failed (err: rpc error: code = 2 desc = Error deserializing key IngressPolicyNames for group /Channel/Orderer: Unexpected key IngressPolicyNames)```
jyellick (Thu, 09 Mar 2017 22:43:27 GMT):
Sounds like you have a stale genesis block somewhere
jyellick (Thu, 09 Mar 2017 22:43:57 GMT):
My guess is `myc1.block` was not regenerated?
AdnanC (Thu, 09 Mar 2017 22:44:57 GMT):
Has joined the channel.
dave.enyeart (Thu, 09 Mar 2017 22:46:10 GMT):
@jyellick hmmmm, i did the same thing i've always done: `peer channel create -o 127.0.0.1:7050 -c myc1.block`
dave.enyeart (Thu, 09 Mar 2017 22:46:19 GMT):
and this time it created myc1.block.block
dave.enyeart (Thu, 09 Mar 2017 22:46:37 GMT):
so the join with myc1.block picked up my old genesis block
dave.enyeart (Thu, 09 Mar 2017 22:51:37 GMT):
and even after joining with myc1.block.block, i get an error when i try to use the channel (instantiate cc):
dave.enyeart (Thu, 09 Mar 2017 22:51:41 GMT):
`Error: Error endorsing chaincode: rpc error: code = 2 desc = Failed to deserialize creator identity, err MSP DEFAULT is unknown`
jyellick (Thu, 09 Mar 2017 22:54:00 GMT):
~The default config that is picked up via the path you described contains no MSPs~ The profile this path loads has no MSPs, then, one is added manually to it in another section of code
jyellick (Thu, 09 Mar 2017 22:54:35 GMT):
~Which would certainly explain the error~
jyellick (Thu, 09 Mar 2017 22:55:12 GMT):
It sounds to me like something was created requiring a signature from MSP DEFAULT, but because there are no MSPs defined, we have no way to validate it.
jyellick (Thu, 09 Mar 2017 22:56:10 GMT):
I had worked with @aso around that CR 7031 who verified that the default chain was working again
aso (Thu, 09 Mar 2017 22:56:10 GMT):
Has joined the channel.
dave.enyeart (Thu, 09 Mar 2017 22:57:30 GMT):
btw this has been generally working until today
jyellick (Thu, 09 Mar 2017 22:58:31 GMT):
Okay, so here is my theory:
jyellick (Thu, 09 Mar 2017 22:58:46 GMT):
The LSCC was recently changed to compute a default policy if none was specified
jyellick (Thu, 09 Mar 2017 22:59:24 GMT):
It does this by looking through the defined MSPs, and creating a policy which allows any member of any org to invoke.
dave.enyeart (Thu, 09 Mar 2017 22:59:29 GMT):
ok, so are there new instructions available for people that want a simple path?
dave.enyeart (Thu, 09 Mar 2017 23:00:02 GMT):
or we need a fix here?
jyellick (Thu, 09 Mar 2017 23:04:36 GMT):
To my mind, the simple path has always been a little hacky and broken. Looking again at how this transaction is put together, it looks like that MSP should be there, so I am wondering why you are getting that error
jyellick (Thu, 09 Mar 2017 23:06:29 GMT):
@dave.enyeart What commit are you at?
jyellick (Thu, 09 Mar 2017 23:10:32 GMT):
```
.. code:: bash
Error: Error endorsing chaincode: rpc error: code = 2 desc = Error installing chaincode code mycc:1.0(chaincode /var/hyperledger/production/chaincodes/mycc.1.0 exits)
You likely have chaincode images (e.g. ``peer0-peer0-mycc-1.0`` or
``peer1-peer0-mycc1-1.0``) from prior runs. Remove them and try
again.
```
I see the above in @muralisr's doc; that is the only reference to the string `Error endorsing chaincode` in the entire codebase, so it makes me think something is stale in your env
jyellick (Thu, 09 Mar 2017 23:12:24 GMT):
Ah, looks like it might be coming from elsewhere, such as:
```
proposalResponse, err := cf.EndorserClient.ProcessProposal(context.Background(), signedProp)
if err != nil {
	return nil, fmt.Errorf("Error endorsing %s: %s", chainFuncName, err)
}
```
jyellick (Thu, 09 Mar 2017 23:12:46 GMT):
(in `peer/chaincode/instantiate.go`)
dave.enyeart (Thu, 09 Mar 2017 23:13:28 GMT):
@jyellick latest is working for me now
jyellick (Thu, 09 Mar 2017 23:13:49 GMT):
Any clue what it was? Or just cleaned and it works?
dave.enyeart (Thu, 09 Mar 2017 23:13:53 GMT):
peer channel create -o 127.0.0.1:7050 -c myc8
dave.enyeart (Thu, 09 Mar 2017 23:14:11 GMT):
peer channel join -b myc8.block
dave.enyeart (Thu, 09 Mar 2017 23:14:29 GMT):
the addition of the extra ".block" was screwing me up
dave.enyeart (Thu, 09 Mar 2017 23:14:42 GMT):
my channel was called myc1.block
dave.enyeart (Thu, 09 Mar 2017 23:14:58 GMT):
the above commands work now
danielleekc (Fri, 10 Mar 2017 01:24:58 GMT):
@jyellick @weeds Thanks both of you. To be clear, does PBFT in v1.0 support dynamically adding nodes? Or we have another consensus that supports this? Thanks!
weeds (Fri, 10 Mar 2017 01:39:13 GMT):
@danielleekc To summarize what is in version 1.0 at this point in time (I'm specifically talking about the ordering service): different configuration options for the ordering service exist for the 1.0 code. Version 1.0 today includes solo (a single node, for development), and the other option is kafka/zookeeper (1:n nodes providing crash fault tolerance). SBFT is the next option in line.
weeds (Fri, 10 Mar 2017 01:39:55 GMT):
for Dynamically adding nodes along with utilizing channels- I would suggest using Kafka/Zookeeper
kuangchao (Fri, 10 Mar 2017 02:14:01 GMT):
Has joined the channel.
AlanLee (Fri, 10 Mar 2017 04:14:46 GMT):
Has joined the channel.
AlanLee (Fri, 10 Mar 2017 04:16:33 GMT):
@weeds Does this mean that the default PBFT consensus algorithm does not support the "Adding Node" feature? Or is it supported but the controlling logic has not yet been implemented? Thank you.
kostas (Fri, 10 Mar 2017 04:24:37 GMT):
I'm pretty sure there is an extension of the PBFT protocol that describes a process with which we can add more nodes. As far as I recall, there's no such reference in the original Castro paper that we based our 0.6 (and the 1.0 for sBFT) work on.
kostas (Fri, 10 Mar 2017 04:24:38 GMT):
We do not support this yet for sure for the BFT case, and even when SBFT gets tweaked so that it's good to go, the ability to add nodes dynamically won't be there initially. It's not impossible, but it's a tricky and complicated matter.
AlanLee (Fri, 10 Mar 2017 04:33:45 GMT):
The fact is, we tried to kill a node and bring it up again, and with PBFT it does not catch up with the ledger. Any suggestion on what we should do? Should we use another consensus, or is someone working on this feature? Thank you.
kostas (Fri, 10 Mar 2017 04:36:59 GMT):
Why would the restarted node not catch up?
kostas (Fri, 10 Mar 2017 04:37:23 GMT):
It should once enough transactions have circulated on the network and your restarted node catches wind of the checkpoints that are being exchanged.
kostas (Fri, 10 Mar 2017 04:37:48 GMT):
(Nobody is working on this feature that I know of.)
berserkr (Fri, 10 Mar 2017 06:38:29 GMT):
Hi guys, so as far as I understand, v1 is kafka-based, and there is no consensus, right? There are plans to do pbft at some point, but how will that work? Will the kafka implementation go away?
jyellick (Fri, 10 Mar 2017 06:43:41 GMT):
@berserkr I would say more accurately, the consensus supported by v1 is Kafka based. Kafka does perform crash fault tolerant consensus, but is not byzantine fault tolerant. SBFT will be supported as an alternative form of consensus shortly after v1, which is a byzantine fault tolerant version of consensus.
jyellick (Fri, 10 Mar 2017 06:44:50 GMT):
In general, if an application can be viably supported via CFT consensus, then using a CFT consensus model is most likely the superior option, as there is lower overhead, and throughput is likely to be superior.
jyellick (Fri, 10 Mar 2017 06:45:06 GMT):
Of course SBFT will ultimately be an option for those applications in which CFT consensus is not sufficient.
jyellick (Fri, 10 Mar 2017 06:46:05 GMT):
And of course, the hyperledger fabric consensus model is designed to be pluggable, so that other flavors of consensus may be added in the future.
berserkr (Fri, 10 Mar 2017 06:47:03 GMT):
i saw a paper at osdi describing cft/paxos
berserkr (Fri, 10 Mar 2017 06:47:38 GMT):
that would also be a good alternative
berserkr (Fri, 10 Mar 2017 06:48:28 GMT):
are there performance numbers on the throughput we can get from v1?
berserkr (Fri, 10 Mar 2017 06:48:50 GMT):
I remember also seeing a study a while back that showed old codebase had scalability issues
jyellick (Fri, 10 Mar 2017 06:49:33 GMT):
Kafka/Zookeeper actually bears some resemblance to Paxos based consensus, you can read some on that here: https://www.confluent.io/blog/distributed-consensus-reloaded-apache-zookeeper-and-replication-in-kafka/
jyellick (Fri, 10 Mar 2017 06:50:15 GMT):
Certainly, adding a raft/etcd or other consensus plugin could be in the future
berserkr (Fri, 10 Mar 2017 06:50:33 GMT):
oh, good read, will take a look, thank you @jyellick !
cca88 (Fri, 10 Mar 2017 10:13:20 GMT):
@berserkr - Kafka implements consensus/atomic broadcast and tolerates crashes, relying on Zookeeper internally for consensus consistency, as explained there. Its architecture and fault model are more complex to describe than standard Paxos/Raft, but this makes it more scalable; that is, using ZK/Raft/etcd directly would incur more overhead than Kafka.
About OSDI: XFT here? https://www.usenix.org/system/files/conference/osdi16/osdi16-liu.pdf :) that's indeed the plan.
passkit (Fri, 10 Mar 2017 13:10:40 GMT):
Has joined the channel.
passkit (Fri, 10 Mar 2017 13:12:27 GMT):
I am having some problems with the orderer with TLS
passkit (Fri, 10 Mar 2017 13:13:46 GMT):
Firstly it wouldn't connect because renegotiation wasn't supported - so I modified the tls.Config in kafka/utils.go
passkit (Fri, 10 Mar 2017 13:13:52 GMT):
Now it will not handshake
passkit (Fri, 10 Mar 2017 13:14:07 GMT):
Receiving this message when trying to connect to the Kafka broker `tls: received unexpected handshake message of type *tls.serverHelloMsg when waiting for *tls.finishedMsg`
passkit (Fri, 10 Mar 2017 13:14:37 GMT):
I can connect successfully with TLS using the Kafka producer and consumer scripts
passkit (Fri, 10 Mar 2017 13:14:49 GMT):
And OpenSSL also checks out ok
kostas (Fri, 10 Mar 2017 13:59:42 GMT):
@sanchezl Do you happen to have any pointers for @passkit? (If not, I can look into it.)
dragosh (Fri, 10 Mar 2017 14:09:29 GMT):
Has joined the channel.
sanchezl (Fri, 10 Mar 2017 14:19:05 GMT):
@kostas, @passkit. I'm not familiar with that error.
kostas (Fri, 10 Mar 2017 15:14:06 GMT):
No problem. @passkit are you familiar with [JIRA](https://jira.hyperledger.org/secure/Dashboard.jspa)? If you can create an issue and assign it to me, that'd be great. Otherwise, you can send me a private message here. At any rate, I'll need a detailed description of what you've tried, incl. any modifications in our files, so that I can reproduce the issue you're having.
dragosh (Fri, 10 Mar 2017 15:40:14 GMT):
Has left the channel.
JatinderBali (Fri, 10 Mar 2017 15:43:14 GMT):
Has joined the channel.
passkit (Fri, 10 Mar 2017 16:48:44 GMT):
@kostas - thanks for your response. There is no issue. Turns out, first time, my ciphers were not supported by go. Then after regenerating new roots and certs, I forgot to update the trust keystore which ultimately was causing the renegotiation and handshake problems.
kostas (Fri, 10 Mar 2017 18:01:53 GMT):
@passkit: Sure thing. Thanks for the update.
AlanLee (Fri, 10 Mar 2017 23:29:38 GMT):
Thanks @kostas. So, if we need node fault tolerance (if a node goes down and restarts, it can catch up), PBFT does not support this, right? (We are still reading the PBFT source code on view change.) Which consensus is more ready, or which would you suggest? Thank you very much.
kostas (Fri, 10 Mar 2017 23:37:58 GMT):
@AlanLee: The PBFT mode in 0.6 most definitely supports the crash fault case that you refer to. For 1.0, Kafka is the only production-level option at the moment, and it also supports the scenario that interests you, so that's what I would go with.
AlanLee (Fri, 10 Mar 2017 23:46:04 GMT):
Dear @kostas, we are new in using v1.0. Can you give me some pointers on how to use Kafka ? Thank you.
kostas (Fri, 10 Mar 2017 23:49:12 GMT):
Easiest way for now is to pick the `SampleInsecureKafka` profile when using the [configtxgen tool](http://hyperledger-fabric.readthedocs.io/en/latest/configtxgen.html). From there on, you proceed as usual with 1.0.
passkit (Sun, 12 Mar 2017 03:23:30 GMT):
When setting up TLS on the orderer GRPC server I assumed that the PrivateKey and Certificate should be file paths. However, they require the certificate data as a string in order to work.
passkit (Sun, 12 Mar 2017 03:24:16 GMT):
Line 140 in server.go is passing the values directly to `tls.X509KeyPair`
passkit (Sun, 12 Mar 2017 03:24:20 GMT):
`cert, err := tls.X509KeyPair(secureConfig.ServerCertificate, secureConfig.ServerKey)`
passkit (Sun, 12 Mar 2017 03:29:08 GMT):
Is this by design, or an oversight?
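For illustration, the workaround on the caller's side is to read the files first and hand the raw PEM bytes to `tls.X509KeyPair`, since that function expects certificate data, not paths. The helper name `loadKeyPair` and the file names are made up:

```go
package main

import (
	"crypto/tls"
	"fmt"
	"io/ioutil"
)

// loadKeyPair reads PEM files from disk and builds a tls.Certificate.
// tls.X509KeyPair expects raw PEM bytes, not file paths, so the file
// contents must be read first. (Hypothetical helper for illustration.)
func loadKeyPair(certPath, keyPath string) (tls.Certificate, error) {
	certPEM, err := ioutil.ReadFile(certPath)
	if err != nil {
		return tls.Certificate{}, fmt.Errorf("reading cert: %s", err)
	}
	keyPEM, err := ioutil.ReadFile(keyPath)
	if err != nil {
		return tls.Certificate{}, fmt.Errorf("reading key: %s", err)
	}
	return tls.X509KeyPair(certPEM, keyPEM)
}

func main() {
	// With no such files present, this reports the underlying read error.
	if _, err := loadKeyPair("server.cert.pem", "server.key.pem"); err != nil {
		fmt.Println("load failed:", err)
	}
}
```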
jyellick (Sun, 12 Mar 2017 03:48:59 GMT):
@passkit You may specify the cert data or a file path. To specify a file, put a subtag `file:` under the element. Or, include `_FILE` at the end of the associated variable.
jyellick (Sun, 12 Mar 2017 03:50:10 GMT):
[ ](https://chat.hyperledger.org/channel/fabric-maintainers?msg=mT4GZer2jyAi8KfaS)
passkit (Sun, 12 Mar 2017 04:02:36 GMT):
I have tried amending orderer.yaml as follows:
passkit (Sun, 12 Mar 2017 04:02:41 GMT):
```TLS:
    Enabled: true
    PrivateKey:
        File: /var/hyperledger/orderer/ssl/server.key.pem
    Certificate:
        File: /var/hyperledger/orderer/ssl/server.cert.pem
    RootCAs:
        - File: /var/hyperledger/orderer/ssl/ca.cert.pem
    ClientAuthEnabled: false
    ClientRootCAs:
        - File: /var/hyperledger/orderer/ssl/ca-chain.cert.pem
        - File: /var/hyperledger/orderer/ssl/bc-ca-chain.cert.pem
        - File: /var/hyperledger/orderer/ssl/ll-ca-chain.cert.pem```
passkit (Sun, 12 Mar 2017 04:04:10 GMT):
But this results in a panic
passkit (Sun, 12 Mar 2017 04:05:14 GMT):
`panic: interface conversion: interface {} is map[interface {}]interface {}, not map[string]interface {}`
passkit (Sun, 12 Mar 2017 04:05:34 GMT):
```github.com/hyperledger/fabric/common/viperutil.EnhancedExactUnmarshal(0xc4201878c0, 0x4646fc0, 0xc4201f6d80, 0xc4201dc9f0, 0x0)
	/Users/Nick/Documents/go/src/github.com/hyperledger/fabric/common/viperutil/config_util.go:280 +0x2ed
github.com/hyperledger/fabric/orderer/localconfig.Load(0x4016945)
	/Users/Nick/Documents/go/src/github.com/hyperledger/fabric/orderer/localconfig/config.go:295 +0x6fd
main.main()
	/Users/Nick/Documents/go/src/github.com/hyperledger/fabric/orderer/main.go:50 +0x37```
jyellick (Sun, 12 Mar 2017 04:07:23 GMT):
What commit are you at?
passkit (Sun, 12 Mar 2017 04:10:15 GMT):
3d1c71a6bbbd200139108932d748b35defb7bdbf
passkit (Sun, 12 Mar 2017 04:19:36 GMT):
Looks like the problem is that the client root CAs are chains (intermediate + root). If I add them individually then it works ok
jyellick (Sun, 12 Mar 2017 04:23:12 GMT):
The TLS variable parsing is a recent addition; if you wouldn't mind opening a bug with the details of how you produced that panic, we'd appreciate it
passkit (Sun, 12 Mar 2017 04:25:46 GMT):
I'm not sure it is a bug, as long as it is clear that each file should only contain one PEM block.
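If you do need to start from a chained file, the standard library can split it into single-block PEMs. `splitPEMBlocks` is a hypothetical helper, not part of Fabric:

```go
package main

import (
	"encoding/pem"
	"fmt"
)

// splitPEMBlocks splits a chained PEM file (e.g. intermediate + root)
// into individual PEM-encoded blocks, one per certificate, by decoding
// blocks off the front of the data until none remain.
func splitPEMBlocks(data []byte) [][]byte {
	var blocks [][]byte
	for {
		block, rest := pem.Decode(data)
		if block == nil {
			break
		}
		blocks = append(blocks, pem.EncodeToMemory(block))
		data = rest
	}
	return blocks
}

func main() {
	chain := []byte(`-----BEGIN CERTIFICATE-----
QUJD
-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----
REVG
-----END CERTIFICATE-----
`)
	fmt.Println(len(splitPEMBlocks(chain))) // prints 2
}
```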
jyellick (Sun, 12 Mar 2017 04:40:42 GMT):
It might be a lower-severity bug, but a crash without a helpful message is something we should probably improve
passkit (Sun, 12 Mar 2017 04:43:34 GMT):
Ok - will post and add the reference here
passkit (Sun, 12 Mar 2017 04:45:10 GMT):
Another question on bootstrapping: If I create a genesis block with a channel `configtxgen -channelID myChannel` - does that channel have to exist?
passkit (Sun, 12 Mar 2017 04:46:01 GMT):
If I try for a new channel I get the following critical error when starting the orderer
passkit (Sun, 12 Mar 2017 04:46:14 GMT):
```2017-03-12 12:42:52.920 HKT [orderer/kafka] Send -> INFO 0ed Failed to send message to chain partition 706b74657374696e67/0 on the Kafka cluster: Unknown error, how did this happen? Error code = 38
2017-03-12 12:42:52.920 HKT [orderer/kafka] Start -> CRIT 0ee Couldn't post CONNECT message to 706b74657374696e67/0: Unknown error, how did this happen? Error code = 38```
passkit (Sun, 12 Mar 2017 04:47:53 GMT):
Error 38 is INVALID_REPLICATION_FACTOR. I have 3 brokers currently available.
passkit (Sun, 12 Mar 2017 05:03:41 GMT):
https://jira.hyperledger.org/browse/FAB-2749
mastersingh24 (Sun, 12 Mar 2017 15:02:43 GMT):
@passkit - you should not have `FILE: ` in the entry
mastersingh24 (Sun, 12 Mar 2017 15:02:49 GMT):
it should just be the path
mastersingh24 (Sun, 12 Mar 2017 15:02:58 GMT):
but there's another error you'll get as well
mastersingh24 (Sun, 12 Mar 2017 15:03:26 GMT):
(basically the orderer is not reading the files properly)
mastersingh24 (Sun, 12 Mar 2017 15:03:58 GMT):
I just fixed that. Also, the peer will not be able to communicate with the orderer when TLS is enabled without some hacks
passkit (Sun, 12 Mar 2017 15:04:02 GMT):
It cannot be the path - check the source reference above - those values are sent directly as byte arrays that should contain a PEM-encoded cert / key.
mastersingh24 (Sun, 12 Mar 2017 15:04:21 GMT):
I'm fixing that right now
mastersingh24 (Sun, 12 Mar 2017 15:04:36 GMT):
I just fixed up the source
passkit (Sun, 12 Mar 2017 15:04:54 GMT):
Ok - has it been pushed yet?
mastersingh24 (Sun, 12 Mar 2017 15:05:34 GMT):
no - still need to get TLS working between the peer and the orderer - working on that right now
mastersingh24 (Sun, 12 Mar 2017 15:06:05 GMT):
https://gerrit.hyperledger.org/r/#/c/7141/ is the work in progress
mastersingh24 (Sun, 12 Mar 2017 15:06:35 GMT):
```
// secure server config
secureConfig := comm.SecureServerConfig{
	UseTLS:            conf.General.TLS.Enabled,
	RequireClientCert: conf.General.TLS.ClientAuthEnabled,
}
// check to see if TLS is enabled
if secureConfig.UseTLS {
	logger.Info("Starting orderer with TLS enabled")
	// load crypto material from files
	serverCertificate, err := ioutil.ReadFile(conf.General.TLS.Certificate)
	if err != nil {
		logger.Fatalf("Failed to load ServerCertificate file '%s' (%s)",
			conf.General.TLS.Certificate, err)
	}
	serverKey, err := ioutil.ReadFile(conf.General.TLS.PrivateKey)
	if err != nil {
		logger.Fatalf("Failed to load PrivateKey file '%s' (%s)",
			conf.General.TLS.PrivateKey, err)
	}
	var serverRootCAs, clientRootCAs [][]byte
	for _, serverRoot := range conf.General.TLS.RootCAs {
		root, err := ioutil.ReadFile(serverRoot)
		if err != nil {
			logger.Fatalf("Failed to load ServerRootCAs file '%s' (%s)",
				serverRoot, err)
		}
		serverRootCAs = append(serverRootCAs, root)
	}
	if secureConfig.RequireClientCert {
		for _, clientRoot := range conf.General.TLS.ClientRootCAs {
			root, err := ioutil.ReadFile(clientRoot)
			if err != nil {
				logger.Fatalf("Failed to load ClientRootCAs file '%s' (%s)",
					clientRoot, err)
			}
			clientRootCAs = append(clientRootCAs, root)
		}
	}
	secureConfig.ServerKey = serverKey
	secureConfig.ServerCertificate = serverCertificate
	secureConfig.ServerRootCAs = serverRootCAs
	secureConfig.ClientRootCAs = clientRootCAs
}
```
mastersingh24 (Sun, 12 Mar 2017 15:06:51 GMT):
^^^ that's the updated code for the orderer
passkit (Sun, 12 Mar 2017 15:08:10 GMT):
Cool - I'm working on my orderer image right now, so I will include that and rebuild. Won't start on the peer until tomorrow.
jyellick (Sun, 12 Mar 2017 15:56:48 GMT):
@mastersingh24 I'm all in favor of bringing parity between the orderer and peer yamls for the TLS config. Though I wonder if it shouldn't be the other way around. Being able to specify both cert literals or files via the yaml seems like a nice option. You have more experience with real deployments, so maybe putting cert literals in the config is something that's actually useless, but it is feature we'll be losing.
passkit (Sun, 12 Mar 2017 16:27:16 GMT):
Either is fine. Literals solves the chaining issue and may be more intuitive to those not familiar with yaml (difference between indented `file:` and indented `- file:` is not immediately obvious). But your patch above clutters up the main function somewhat.
passkit (Sun, 12 Mar 2017 16:28:12 GMT):
Unchaining certs is not a big deal, so consistency with the peer may be better.
mastersingh24 (Sun, 12 Mar 2017 18:40:07 GMT):
@passkit - https://gerrit.hyperledger.org/r/#/c/7141/ - this has TLS working across the entire system now
mastersingh24 (Sun, 12 Mar 2017 18:40:24 GMT):
and yes - agree that I could clean up the orderer main a bit
Willson (Mon, 13 Mar 2017 07:20:39 GMT):
Has joined the channel.
mastersingh24 (Mon, 13 Mar 2017 12:18:15 GMT):
[@jyellick @kostas - any ideas? My initial thought is that they might be bootstrapping the orderer with a block generated with a prior version of the tool ](https://chat.hyperledger.org/channel/fabric-ci?msg=98nphK8kuiGvuZMkt) @rahulhegde
kostas (Mon, 13 Mar 2017 14:08:17 GMT):
So, Ingress/Egress policy names are no longer used as per this commit of @jyellick: https://gerrit.hyperledger.org/r/#/c/6771/
This was merged on the 7th, and the images @rahulhegde uses are from the 10th.
A non up-to-date block from the configtxgen tool is my guess as well. If that doesn't work, @rahulhegde please post back with details.
jyellick (Mon, 13 Mar 2017 14:22:10 GMT):
+1 In general, that 'Unexpected key' error implies that the configuration was generated with a level of the config tools that does not match the level of the orderer.
jyellick (Mon, 13 Mar 2017 14:31:10 GMT):
And, just as a heads up to all, there will likely be one additional non-backwards compatible config change before GA. The channel creation is wide open right now. https://gerrit.hyperledger.org/r/#/c/7105/ fixes that, and https://gerrit.hyperledger.org/r/#/c/7107/ removes the older unused channel creation policy stuff.
rahulhegde (Mon, 13 Mar 2017 15:59:02 GMT):
[ ](https://chat.hyperledger.org/channel/fabric-consensus?msg=XwN7ZriQHsFTy5EGh) @kostas @mastersingh24
Thanks, this is resolved. It was configtxgen being generated from old fabric code; I had to do a hard reset on the changeset.
scottz (Mon, 13 Mar 2017 18:24:19 GMT):
@jyellick We can run the configtxgen tool to create a genesis block for the orderer service, and channel-creation blocks. We do not see an option to create a genesis-RECONFIGURATION block for a scenario where we want to add another organization to an existing blockchain. Similarly, of course, we will want to create new channel reconfiguration transactions and use them to add new peers from the new org. Does the fabric support these actions yet? Does any SDK or the CLI support this? How about the configtxgen tool? Are there Jira issues opened that list these activities to do? (copying @dbshah FYI)
jyellick (Mon, 13 Mar 2017 18:26:48 GMT):
@scottz Not to nit-pick, but the tool creates a "genesis block" for the orderer system channel, and "channel creation transactions" (not blocks). Similarly, to reconfigure the ordering system channel would be a "configuration transaction" (not block). The server code supports this reconfiguration, but the tooling to generate these reconfiguration transactions is not there yet.
scottz (Mon, 13 Mar 2017 18:31:17 GMT):
your rewording seems more accurate, yes, thanks. Is the tooling work on someone's list to do for v1.0? Or would you like us to create a Jira issue or two so these things can be more visible and prioritized appropriately?
jyellick (Mon, 13 Mar 2017 18:34:15 GMT):
The question really becomes what tooling needs to be created and where. Is it `configtxgen`? Is it the SDKs? Is it some management application that sits on top of the SDKs? For doing reconfiguration in general, this requires a workflow of collecting signatures, and knowing the existing configuration, so the answer is not obvious. I'm presently working on enhancing `configtxgen` to support some limited reconfiguration scenarios, but the path forward I had generally seen was that the `configtxgen` tool should support minimal configurations, while pushing more advanced reconfiguration into the use case specific management tools.
grapebaba (Tue, 14 Mar 2017 02:13:53 GMT):
@kostas @jyellick @scottz is there any plan to support dynamic resizing of the kafka/sbft orderer cluster?
kostas (Tue, 14 Mar 2017 03:36:57 GMT):
@grapebaba:
For Kafka --
Assuming you refer to the ordering service nodes, you can add or remove nodes from the cluster and update the "addresses" list of the "orderer" section via config updates to let the clients know about the new ensemble. The caveat though is that --to my knowledge-- we have not yet looked into having the ordering service clients drop a connection if a config update removes an ordering service node from the set. @mastersingh24 will correct me if I'm wrong. (There's also the issue of preventing that node from pumping data into the Kafka partitions, so that requires some concurrent updating of the ACLs on the Kafka broker cluster front, but we're getting into the weeds here. Let's assume that an ordering service node is removed not because it is malicious --this is outside the attack vector in the Kafka case anyway-- but because it's now permanently offline.)
Assuming you refer to the Kafka brokers themselves, there is an undocumented feature of the Kafka partition reassignment tool that allows you to modify the replication factor for a partition (it includes modifying and submitting a JSON file, but these are low-level details), so resizing is also possible on that front.
So the TL;DR answer for Kafka is that dynamic cluster resizing is possible, and should be covered end-to-end with little work.
For SBFT --
Refer to the set of messages [starting from here](https://chat.hyperledger.org/channel/fabric-consensus?msg=oFz8cd7ZiWspkz8tm). Short answer: not right now, and it's something that probably won't come any moment now either.
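The reassignment JSON kostas alludes to looks roughly like this (the topic name and broker IDs are illustrative, not from this deployment): listing more broker IDs in `replicas` than the partition currently has, and submitting the file via Kafka's partition reassignment tool, raises the replication factor for that partition.

```json
{
  "version": 1,
  "partitions": [
    {"topic": "mychannel", "partition": 0, "replicas": [1, 2, 3]}
  ]
}
```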
grapebaba (Tue, 14 Mar 2017 04:23:11 GMT):
got it. thanks @kostas, hope a kafka example gets added quickly
mastersingh24 (Tue, 14 Mar 2017 10:47:27 GMT):
@kostas @grapebaba - `we have not yet looked into having the ordering service clients drop a connection if a config update removes an ordering service node from the set` - correct - we don't currently drop an active connection to an ordering node if a config update were to change the list of orderer addresses.
ruslan.kryukov (Tue, 14 Mar 2017 15:51:28 GMT):
Has joined the channel.
lignyxg (Tue, 14 Mar 2017 17:34:12 GMT):
Has joined the channel.
dave.enyeart (Tue, 14 Mar 2017 21:09:01 GMT):
@jyellick I'm seeing error during `peer channel create` after the last merge:
dave.enyeart (Tue, 14 Mar 2017 21:09:13 GMT):
```panic: Error unmarshaling into structure: 4 error(s) decoding:
* 'Orderer' has invalid keys: MaxChannels
* 'Profiles[SampleInsecureKafka].Orderer' has invalid keys: MaxChannels
* 'Profiles[SampleInsecureSolo].Orderer' has invalid keys: MaxChannels
* 'Profiles[SampleSingleMSPSolo].Orderer' has invalid keys: MaxChannels```
jyellick (Tue, 14 Mar 2017 21:09:46 GMT):
@dave.enyeart This would imply to me that your orderer image or binary has not been rebuilt
jyellick (Tue, 14 Mar 2017 21:10:25 GMT):
Actually, let me look again, that's an odd log statement
dave.enyeart (Tue, 14 Mar 2017 21:10:32 GMT):
I did `make orderer`
jyellick (Tue, 14 Mar 2017 21:11:51 GMT):
Ah, did you run `make configtxgen`?
jyellick (Tue, 14 Mar 2017 21:12:14 GMT):
And `make peer`?
dave.enyeart (Tue, 14 Mar 2017 21:13:09 GMT):
potentially `make peer` didnt finish yet, will try again
dave.enyeart (Tue, 14 Mar 2017 21:14:32 GMT):
yep, `make peer` hadnt finished. works now.
jyellick (Tue, 14 Mar 2017 21:14:43 GMT):
Great!
geminatea (Wed, 15 Mar 2017 04:42:10 GMT):
Has joined the channel.
mychewcents (Wed, 15 Mar 2017 11:44:42 GMT):
Has joined the channel.
balashevich (Wed, 15 Mar 2017 12:36:20 GMT):
Has joined the channel.
mwall (Wed, 15 Mar 2017 15:56:52 GMT):
Hello! Is there any way to reject transactions? I can handle wrong transactions in chaincode and not put anything on the ledger, but the transaction will still be created, right?
jyellick (Wed, 15 Mar 2017 16:10:49 GMT):
@mwall Transactions which have a valid submitter will generally make it onto the blockchain. However, a submitter who is not authorized to transact on a channel will have the transaction rejected outright.
mwall (Wed, 15 Mar 2017 16:11:37 GMT):
@jyellick okey, thank you!
snakejerusalem (Wed, 15 Mar 2017 18:24:46 GMT):
Has joined the channel.
snakejerusalem (Wed, 15 Mar 2017 18:28:40 GMT):
Hello. I am João Sousa, from the University of Lisbon. I am trying to launch the ordering service with the sample clients included in its package, but so far I only managed to execute the broadcast_config client. The other 3 clients are giving all sorts of errors
snakejerusalem (Wed, 15 Mar 2017 18:30:10 GMT):
Broadcast_timestamp outputs "BAD_REQUEST", Deliver_stdout outputs "&{FORBIDDEN}", and single_client_tx throws the following goroutine exception:
snakejerusalem (Wed, 15 Mar 2017 18:31:22 GMT):
```
{Update Receiver} Creating a ledger update delivery stream.
panic: proto: oneof field has nil value
goroutine 53 [running]:
github.com/hyperledger/fabric/protos/utils.MarshalOrPanic(0xdcaee0, 0xc420148a60, 0xe124c8, 0x0, 0x0)
/home/joao/gocode/src/github.com/hyperledger/fabric/protos/utils/commonutils.go:37 +0x80
main.updateReceiver(0xc42020e1e0, 0xc42020e240, 0xdc7da0, 0xc420142140)
/home/joao/gocode/src/github.com/hyperledger/fabric/orderer/sample_clients/single_tx_client/single_tx_client.go:114 +0x450
created by main.main
/home/joao/gocode/src/github.com/hyperledger/fabric/orderer/sample_clients/single_tx_client/single_tx_client.go:56 +0x3c8
```
jyellick (Wed, 15 Mar 2017 18:51:37 GMT):
@snakejerusalem I saw your post on the mailing list, but it's a good idea to have brought it here, I think you're more likely to get a quick response this way
jyellick (Wed, 15 Mar 2017 18:52:01 GMT):
Those sample clients do not currently do any message signing
jyellick (Wed, 15 Mar 2017 18:52:36 GMT):
Which means that if you start your orderer with crypto material defined, then those sample clients will return those errors which you indicate
jyellick (Wed, 15 Mar 2017 18:53:38 GMT):
You may try editing the `orderer/orderer.yaml` file to change the `GenesisProfile` to `SampleInsecureSolo` if you wish for these sample client to work
jyellick (Wed, 15 Mar 2017 18:54:03 GMT):
Alternatively, you may specify `ORDERER_GENERAL_GENESISPROFILE=SampleInsecureSolo` in the environment when starting the orderer
snakejerusalem (Wed, 15 Mar 2017 18:55:23 GMT):
@jyellick thank you, I will try it out now
snakejerusalem (Wed, 15 Mar 2017 18:58:48 GMT):
ok, I think it is working with the broadcast_timestamp client
jyellick (Wed, 15 Mar 2017 18:59:06 GMT):
The `deliver_stdout`, `broadcast_timestamp` and `broadcast_config` clients should all work
jyellick (Wed, 15 Mar 2017 18:59:25 GMT):
The `single_tx_client` is an artifact of the `sbft` package and may be in an odd state
snakejerusalem (Wed, 15 Mar 2017 18:59:37 GMT):
the debugger outputs the following: [orderer/ramledger] appendBlock -> DEBU 11e Sending signal that block 2 has a successor
snakejerusalem (Wed, 15 Mar 2017 19:00:09 GMT):
This indicates that it is working properly, right?
jyellick (Wed, 15 Mar 2017 19:00:12 GMT):
Correct
jyellick (Wed, 15 Mar 2017 19:00:27 GMT):
You may wish to run `deliver_stdout` to see your transactions encoded in blocks coming out of the orderer
snakejerusalem (Wed, 15 Mar 2017 19:00:40 GMT):
I also noticed that a timer associated with the batch has expired, is that also expected?
jyellick (Wed, 15 Mar 2017 19:00:55 GMT):
You may also do `./broadcast_timestamp -messages 10000` to produce many blocks
jyellick (Wed, 15 Mar 2017 19:01:22 GMT):
Yes, there is a batch timeout which starts when the first message is received. If the batch size is not reached by the time the batch timer expires, a block is cut
snakejerusalem (Wed, 15 Mar 2017 19:03:50 GMT):
ok, I just launched the ordering service, the deliver_stdout client, and the broadcast_timestamp client. They are all working.
snakejerusalem (Wed, 15 Mar 2017 19:04:14 GMT):
@jyellick thank you very much for your help!
jyellick (Wed, 15 Mar 2017 19:04:20 GMT):
Great, happy to help!
snakejerusalem (Wed, 15 Mar 2017 19:08:34 GMT):
Btw, one question about the orderer executable. This orderer executable is supposed to be a frontend to both kafka and pbft, correct?
snakejerusalem (Wed, 15 Mar 2017 19:09:37 GMT):
as in, the clients will connect to orderer, and orderer connects to kafka/pbft.
tuand (Wed, 15 Mar 2017 19:12:48 GMT):
correct, there is one orderer api, solo/kafka/sbft/other consensus is set in configuration
snakejerusalem (Wed, 15 Mar 2017 19:14:09 GMT):
ok, and there can/should be more than one orderer api, right?
tuand (Wed, 15 Mar 2017 19:19:12 GMT):
you can have multiple orderers
snakejerusalem (Wed, 15 Mar 2017 19:19:53 GMT):
ok, thank you very much again! I think thats it for now
mychewcents (Wed, 15 Mar 2017 20:00:06 GMT):
I had a question in mind. I had my default batch timeout set to 10 secs; while the orderer was listening for more transactions, I could fire the same command again within the batch timeout and it would still create the block. This will lead to double spending, won't it? How do we deal with this in fabric v1.0?
silliman (Wed, 15 Mar 2017 20:14:41 GMT):
@mychewcents It doesn't matter if these two transactions are in the same block or are in different blocks....the second transaction will be invalidated because the committers will detect that its read/write set is invalid due to the updates applied by the first transaction.
yacovm (Wed, 15 Mar 2017 20:15:56 GMT):
And, just to add to @silliman's correct comment- all peers would see the same order of transactions, so all would invalidate the "double spent" transaction
mychewcents (Wed, 15 Mar 2017 20:18:18 GMT):
Well, I totally agree with the points you have made, and that is what the documentation also states, but my application was giving the exact opposite response; I was able to double spend my entity. Is the thing you mentioned just now configurable, or is it built in? I guess it is built in, but my response shows otherwise.
tuand (Wed, 15 Mar 2017 20:21:43 GMT):
@mychewcents if you haven't done so already pls create a JIRA issue and include logs and whatever details needed to reproduce
mychewcents (Wed, 15 Mar 2017 20:23:08 GMT):
Sure @tuand will do that definitely. Thanks
jeffgarratt (Wed, 15 Mar 2017 21:15:03 GMT):
@snakejerusalem https://gerrit.hyperledger.org/r/#/c/7227/ adds kafka single node to bootstrap scenarios
snakejerusalem (Wed, 15 Mar 2017 21:17:24 GMT):
thanks for the tip, @jeffgarratt !
jeffgarratt (Wed, 15 Mar 2017 21:17:33 GMT):
yw
yacovm (Wed, 15 Mar 2017 21:18:34 GMT):
@mychewcents also please specify the type of orderer you used, number of peers, how the app works, etc.
Vadim (Thu, 16 Mar 2017 12:50:19 GMT):
Has joined the channel.
mwall (Thu, 16 Mar 2017 13:13:45 GMT):
Hello! Can you explain to me how peers behave after they reach consensus? What happens to faulty peers? Will transactions from them be refused? How do peers synchronize after reaching consensus? I read on StackOverflow that "it will commit the new block to its local copy of the ledger." How do these local copies synchronize with each other? And does every VP have its own RocksDB instance, or is there a common RocksDB shared between all nodes?
jansony1 (Thu, 16 Mar 2017 13:33:21 GMT):
Hi all:
I think the genesis block (created at channel build-up time) of a channel defines the ACL and other things, and all the chaincode deployed on this channel builds on this genesis block as the first block. From the ACL of the genesis block, we can know who may join this channel. If a new peer wants to join this channel, given that the ACL list is already defined in the first genesis block and cannot be modified, the way to open the gate is to send an update configuration transaction to this channel.
Any input on that understanding?
kostas (Thu, 16 Mar 2017 15:04:32 GMT):
@jansony1: I'd say this sounds about right. The ACL being the union of the `MSP`s of the participating orgs (as listed in the `Application` ConfigGroup of the `CONFIG_UPDATE` message that kicks off the creation of a channel), minus what's listed [in the revocation lists](https://github.com/hyperledger/fabric/blob/bf3f1decb0c10eab9c3788d29628c0a212e49d87/protos/msp/mspconfig.proto#L69) (*) of those MSPs. If a new _org_ wishes to join the channel, then one of the existing members need to send configuration update that modifies the `Application` ConfigGroup. That ConfigGroup has a modification policy attached to it, which dictates which and/or how many of the existing orgs need to sign off on it.
(* Not sure if the revocation path is already active in the code though. @elli-androulaki is it?)
dave.enyeart (Thu, 16 Mar 2017 15:17:18 GMT):
@jyellick Quick question from the e2e instructions:
```The configtxgen tool is used to create two artifacts: - orderer bootstrap block - fabric channel configuration transaction```
Is the orderer bootstrap block channel agnostic?
jyellick (Thu, 16 Mar 2017 15:17:56 GMT):
I'm not sure what the question means?
dave.enyeart (Thu, 16 Mar 2017 15:18:19 GMT):
when i think of a block it is usually scoped to some channel. but we dont have a channel yet at this point.
jyellick (Thu, 16 Mar 2017 15:18:19 GMT):
The orderer bootstrap block has a channel ID, and defines the ordering system channel.
dave.enyeart (Thu, 16 Mar 2017 15:18:38 GMT):
ok, so there is a system channel. that's the piece i was missing.
jyellick (Thu, 16 Mar 2017 15:18:56 GMT):
Yes, it is a privileged channel which only the ordering service should have access to
dave.enyeart (Thu, 16 Mar 2017 15:19:16 GMT):
and it has a chain like any other channel?
jyellick (Thu, 16 Mar 2017 15:19:21 GMT):
Correct
jyellick (Thu, 16 Mar 2017 15:19:25 GMT):
It is where, for instance, channel creation requests are serialized
dave.enyeart (Thu, 16 Mar 2017 15:19:36 GMT):
got it
bkvellanki (Thu, 16 Mar 2017 15:22:10 GMT):
Has joined the channel.
jansony1 (Fri, 17 Mar 2017 01:44:21 GMT):
@kostas, I am a little confused about an org joining a channel vs. a peer joining a channel. Does the ConfigGroup contain both 1) which peers may join the channel and 2) which orgs may join the channel (meaning an org may have the right to query or invoke one of the peers in this channel), or just 1)?
kostas (Fri, 17 Mar 2017 01:51:46 GMT):
2 is closer to the truth. The ConfigGroup contains MSP definitions, one per org. Roughly –and technically not 100% accurate I'm sure– you can think of the MSP concept in this context as a way of saying: "every node that presents a cert that links back to the cert specified in this MSP can participate in this channel as a member of this org". For more MSP-related questions, #fabric-crypto is your best bet.
RahulBagaria (Fri, 17 Mar 2017 04:00:52 GMT):
Has joined the channel.
steigensonne (Fri, 17 Mar 2017 05:17:00 GMT):
I have a quick question on the abbreviation "MVCC + postimage". What do MVCC and postimage mean here? Does MVCC mean "multiversion concurrency control"? Thanks.
jyellick (Fri, 17 Mar 2017 05:23:23 GMT):
@steigensonne MVCC+postimage indicates that the transaction format includes the readset of keys (and their versions), as well as the writeset + postimage (the keys and their new values). As you indicate, the MVCC is used to allow safe concurrent updates to the ledger (allowing multiple uncommitted state updates to be 'in flight' while still all committing successfully), and the postimage provides determinism with respect to the output of not necessarily deterministic code.
steigensonne (Fri, 17 Mar 2017 05:25:00 GMT):
Appreciate your quick response!! : )
jyellick (Fri, 17 Mar 2017 05:25:02 GMT):
You can imagine that in the case that a single asset exists on the blockchain, that two transactions could be constructed which modified ownership of this asset. Until these transactions are ordered, both would be considered valid, but after ordering, the first transaction ordered would modify the read version of the asset, invalidating the second in flight transaction before it would commit.
jyellick (Fri, 17 Mar 2017 05:26:17 GMT):
Of course on the other hand, if the two transactions had been about two different assets, the read version would not have changed, and both could be executed successfully (allowing concurrent state updates to be in flight).
mychewcents (Fri, 17 Mar 2017 06:05:15 GMT):
Hi, I just ran my application against the double spending problem that I had written a few days back here. The blockchain is able to write the block with the double spending transaction, but when I query the blockchain for the value, it gives only the first entity for which I fired the query first. So, it gives an error for the second transaction, which it definitely should. But I've a problem that the transaction shouldn't have been written on the chain in the first place. Can anyone guide me on that?
yacovm (Fri, 17 Mar 2017 06:46:13 GMT):
The 2nd transaction is invalid because it is a double spend
mychewcents (Fri, 17 Mar 2017 06:50:44 GMT):
@yacovm I know that, but it is added to the block anyway. If it is invalid, why does it get added to the block? Shouldn't that transaction report a double-spending error to the client?
zhouer (Fri, 17 Mar 2017 06:53:05 GMT):
Has joined the channel.
yacovm (Fri, 17 Mar 2017 08:53:32 GMT):
@mychewcents well, it is added to the block because the ordering service doesn't validate transactions, but it does sign the block.
If you removed it from the block, the signature on the block would no longer verify.
7sigma (Fri, 17 Mar 2017 15:39:54 GMT):
Has joined the channel.
7sigma (Fri, 17 Mar 2017 15:41:16 GMT):
Hi everyone. In one of my networks based on 0.6, I am facing an issue where no invoke transactions are accepted. How do I analyze the issue and attempt to fix it?
7sigma (Fri, 17 Mar 2017 15:41:24 GMT):
My network on 0.6 has gone into a state where it doesn't accept any new transactions. The log says `Replica 0 invalid p entry in view-change: vc(v:4 h:600)` and `WARN 141 Replica 0 found view-change message incorrect`. Please guide me on what I should try to recover the chain
jyellick (Fri, 17 Mar 2017 17:06:01 GMT):
@7sigma Have you tried submitting transactions to other peers other than peer0?
rangak (Fri, 17 Mar 2017 21:18:40 GMT):
Has joined the channel.
samwood (Sat, 18 Mar 2017 01:13:50 GMT):
Has joined the channel.
7sigma (Sat, 18 Mar 2017 08:28:13 GMT):
Hi @jyellick. Yeah, I have tried submitting on other peers. Getting the same result
7sigma (Sat, 18 Mar 2017 08:29:31 GMT):
@jyellick also I added an NVP and submitted using it; the same results were received at random VPs
ruslan.kryukov (Sat, 18 Mar 2017 13:06:46 GMT):
Hello, how do I add a new organisation to an existing channel? Is there some way to update the genesis block?
bh4rtp (Sat, 18 Mar 2017 13:59:36 GMT):
Has joined the channel.
jyellick (Sat, 18 Mar 2017 15:45:00 GMT):
@7sigma And you are certain you are querying across nodes? In general, the PBFT network should heal itself in the situation you described.
jyellick (Sat, 18 Mar 2017 15:45:35 GMT):
@ruslan.kryukov There is support for this in the server code, but the tooling does not support it yet.
bh4rtp (Sat, 18 Mar 2017 15:45:43 GMT):
hi, what is validation system chaincode (VSCC)? is this chaincode preinstalled on all peers? how is it associated with transaction chaincode?
jyellick (Sat, 18 Mar 2017 15:46:52 GMT):
@bh4rtp The VSCC is a type of system chaincode, and all system chaincodes are currently compiled into the peer binary. It is used to validate the transaction before committing it to the state database.
bh4rtp (Sat, 18 Mar 2017 15:53:18 GMT):
@jyellick thanks. so why is this binary VSCC called system 'chaincode', not system module? is this because of its conformance with the chaincode interfaces and called using grpc protocol?
jyellick (Sat, 18 Mar 2017 15:56:47 GMT):
@bh4rtp Yes, it is written using the same chaincode interfaces. It could have been called something like "system module" and written differently, but system chaincodes are traditionally the place where customizable functionality lives. Although the VSCC which comes with the peer should be appropriate for nearly all use cases, it is a point where someone could customize the system with different verification logic. I would encourage anyone who wants to do this to think very hard about accomplishing their goals in other ways, as an incorrectly implemented VSCC could cause many problems for the system.
bh4rtp (Sat, 18 Mar 2017 16:07:49 GMT):
@jyellick that's very clear. thank you very much.
abutler (Sat, 18 Mar 2017 18:51:45 GMT):
Has joined the channel.
GeorgeSamman (Mon, 20 Mar 2017 03:49:38 GMT):
Has joined the channel.
7sigma (Mon, 20 Mar 2017 04:11:11 GMT):
hi @jyellick .. I will try querying each node and let you know. Thanks for the assistance
JaemanHong (Mon, 20 Mar 2017 05:24:41 GMT):
Has joined the channel.
ruslan.kryukov (Mon, 20 Mar 2017 06:34:38 GMT):
What does the Application field in configtx mean?
ruslan.kryukov (Mon, 20 Mar 2017 06:34:41 GMT):
Hello
Xiao (Mon, 20 Mar 2017 07:10:12 GMT):
Has joined the channel.
Vadim (Mon, 20 Mar 2017 07:54:43 GMT):
@ruslan.kryukov it seems that it is for listing the organizations (i.e. their MSP configs) that will later be used to create channels
ruslan.kryukov (Mon, 20 Mar 2017 07:58:22 GMT):
okay, fine, thanks
ruslan.kryukov (Mon, 20 Mar 2017 07:59:57 GMT):
I am just trying to use two profiles in configtx for two orderers, and figuring out which fields make sense and which can be removed
dorrakhribi (Mon, 20 Mar 2017 10:23:46 GMT):
Has joined the channel.
pd93 (Mon, 20 Mar 2017 10:28:26 GMT):
Has joined the channel.
pd93 (Mon, 20 Mar 2017 10:31:13 GMT):
I'm trying to create and join my own channels on my network, but I get this from my orderer at the end of every `docker-compose up` command
`INFO 0d1 Starting with system channel: testchainid and orderer type solo`.
My `/production/ledgersData/chains/` folder also contains the `testchainid` folder
Is there any way to disable the testchainid channel?
jyellick (Mon, 20 Mar 2017 13:38:32 GMT):
@ruslan.kryukov The Application section in configtx is for information related to the peers and client SDKs, ie, the 'application logic' portion of the fabric. It's separated explicitly from the orderer related configuration, because the organization(s) running ordering may not be authorized to transact on a channel.
jyellick (Mon, 20 Mar 2017 13:39:36 GMT):
You may wish to review this document: https://gerrit.hyperledger.org/r/#/c/7115/ It is not merged because some of the information in it will not be correct until the previous CRs merge, but it is a good place to get started
ruslan.kryukov (Mon, 20 Mar 2017 13:40:51 GMT):
Interesting document, thanks!
jyellick (Mon, 20 Mar 2017 13:49:03 GMT):
@pd93 You are starting the ordering service with the provisional bootstrapping method. This dynamically invokes the configtxgen tool and creates an ordering system channel called `testchainid`. You may instead use the `file` bootstrapping method by setting `ORDERER_GENERAL_GENESISMETHOD=file` and supplying a genesis block via `ORDERER_GENERAL_GENESISFILE=/path/to/your.block` for the channel ID of your choice for the ordering system channel.
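A minimal sketch of that `file` bootstrapping method (the profile name, channel ID, and paths here are hypothetical; the profile must exist in your configtx.yaml):

```shell
# Generate a genesis block for the ordering system channel:
configtxgen -profile OrdererGenesis -channelID sysch -outputBlock /etc/hyperledger/genesis.block

# Point the orderer at it instead of the provisional bootstrapper:
export ORDERER_GENERAL_GENESISMETHOD=file
export ORDERER_GENERAL_GENESISFILE=/etc/hyperledger/genesis.block
```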
jyellick (Mon, 20 Mar 2017 13:50:35 GMT):
Note that for the ordering service to work, it requires this ordering system channel in order to coordinate and serialize channel creation requests, so starting an ordering service always implies creating one channel.
jyellick (Mon, 20 Mar 2017 13:51:07 GMT):
This channel should never be used for non orderer system channel transactions (ie, no application logic should be transmitted here)
pd93 (Mon, 20 Mar 2017 14:15:32 GMT):
@jyellick Thanks. Just to make sure I understand this, I'm currently doing the following:
`configtxgen -profile Orgs -outputBlock orderer.block`
`configtxgen -profile Orgs -channelID mych -outputCreateChannelTx channel.tx`
Then I'm bringing up a network with a single peer (for now), a CA and an orderer (with the `GENESISMETHOD` set to file)
My scripts then automatically run a channel create command: `peer channel create -o orderer:7050 -c mych -f channel.tx`
... and a channel join command: `peer channel join -b mych.block`
The channel join command is currently failing with `Block number should have been 1 but was 0`
Am I still doing something wrong? and should `mych` *not* be used for app logic?
jyellick (Mon, 20 Mar 2017 14:21:53 GMT):
@pd93 A quick note first, you should specify `-channelID`
jyellick (Mon, 20 Mar 2017 14:23:46 GMT):
`-peer-defaultchain false` I think should do it
jyellick (Mon, 20 Mar 2017 14:24:06 GMT):
(This is on the `peer node start` command)
pd93 (Mon, 20 Mar 2017 14:29:08 GMT):
@jyellick ohh, so when generating the genesis block, I should be giving it a channel name that is *not* mych (eg. sysch)
The orderer will then use sysch
and my app logic and everything else can use mych?
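Putting the exchange above together, a corrected version of the earlier commands might look like this (`sysch` is a hypothetical name; the other names come from the thread):

```shell
# Ordering system channel: any ID *except* the application channel's:
configtxgen -profile Orgs -channelID sysch -outputBlock orderer.block

# Application channel creation tx, with its own ID:
configtxgen -profile Orgs -channelID mych -outputCreateChannelTx channel.tx

# Boot the orderer from orderer.block (GENESISMETHOD=file), then:
peer channel create -o orderer:7050 -c mych -f channel.tx
peer channel join -b mych.block
```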
kostas (Mon, 20 Mar 2017 14:30:09 GMT):
As a side note, why is the channelID needed in both the second `configtxgen` command and in the `peer channel create` call? It seems that this info should be encoded in `channel.tx`.
pd93 (Mon, 20 Mar 2017 14:33:46 GMT):
@kostas The information already is encoded in `channel.tx` (as if you give the wrong name, it tells you that they don't match), but I agree. It's a bit weird having to specify it twice
jyellick (Mon, 20 Mar 2017 14:39:25 GMT):
> ohh, so when generating the genesis block, I should be giving it a channel name that is *not* mych (eg. sysch)
@pd93 Exactly
jyellick (Mon, 20 Mar 2017 14:40:18 GMT):
@kostas I don't believe it is actually necessary to specify `-c` to the `peer channel create` command.
jyellick (Mon, 20 Mar 2017 14:41:57 GMT):
There are essentially two invocation styles for `peer channel create`. One takes the output of `configtxgen` which is the generally more powerful and extensible way. The other creates the channel creation tx for you, and always specifies exactly the sample org and there is no way to tweak it.
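For reference, the two styles sketched side by side (endpoint and names borrowed from the thread; exact flags may vary by version):

```shell
# Style 1: supply a configtxgen-built creation tx (more powerful):
peer channel create -o orderer:7050 -f channel.tx

# Style 2: let the peer build the creation tx itself; it always uses
# the sample org and cannot be tweaked:
peer channel create -o orderer:7050 -c mych
```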
7sigma (Mon, 20 Mar 2017 14:42:01 GMT):
@jyellick I tried all combinations multiple times. All queries work but invoke doesn't. Can you suggest something I can change in docker compose?
jyellick (Mon, 20 Mar 2017 14:42:38 GMT):
Were it up to me, we would remove the `-c` method and only use the `-f` method. But some people seem to think the `-c` method is sufficiently simpler that it's worth preserving
jyellick (Mon, 20 Mar 2017 14:43:54 GMT):
@7sigma Presumably you are using recent v0.6 images? And you have probably also tried starting and stopping the network? (bringing the peers down and back up)
pd93 (Mon, 20 Mar 2017 14:44:18 GMT):
@jyellick It's working now :) Many many thanks
@yacovm Thanks for pointing me here too
Been stuck on this for days
7sigma (Mon, 20 Mar 2017 14:44:39 GMT):
@jyellick yeah I tried. Using v0.6 images. One of the peers is lagging; the other three peers are at the same height
jyellick (Mon, 20 Mar 2017 14:44:59 GMT):
Those three which are at the correct height should be able to accept transactions though, but they are not?
7sigma (Mon, 20 Mar 2017 14:45:15 GMT):
@jyellick yeah
jyellick (Mon, 20 Mar 2017 14:46:18 GMT):
Can you check the block info (in particular the hash of the most recent block) from the three with matching block heights?
7sigma (Mon, 20 Mar 2017 14:46:58 GMT):
@jyellick ok. will check now
7sigma (Mon, 20 Mar 2017 14:49:56 GMT):
@jyellick Yes stateHash & previousBlockHash are different in all 3
jyellick (Mon, 20 Mar 2017 14:50:14 GMT):
@7sigma There is your problem
jyellick (Mon, 20 Mar 2017 14:50:29 GMT):
That would indicate that your chaincode has behaved in a non-deterministic way.
jyellick (Mon, 20 Mar 2017 14:50:40 GMT):
So the network cannot agree on the output of the chaincode, and it has deadlocked.
7sigma (Mon, 20 Mar 2017 14:50:47 GMT):
@jyellick oh ok,
jyellick (Mon, 20 Mar 2017 14:50:58 GMT):
(Note, this situation is eliminated in v1 by using MVCC+postimage for transactions)
7sigma (Mon, 20 Mar 2017 14:51:34 GMT):
@jyellick. So I need to check the chaincode... maybe the json objects could be the issue. Is there any way to roll back the last few blocks?
jyellick (Mon, 20 Mar 2017 14:53:20 GMT):
In general, in a situation like this, we would recommend that you revert to a backup to get back to a consistent state
7sigma (Mon, 20 Mar 2017 14:53:28 GMT):
@jyellick is there a way I can test removing the last few blocks?
7sigma (Mon, 20 Mar 2017 14:57:25 GMT):
@jyellick The previous hash is also different. Does it imply the problem started in earlier blocks?
jyellick (Mon, 20 Mar 2017 15:01:46 GMT):
Correct
jyellick (Mon, 20 Mar 2017 15:02:02 GMT):
You would need to retrieve blocks until you find blocks with a matching state hash
jyellick (Mon, 20 Mar 2017 15:02:39 GMT):
If you are running with the default configuration, I believe this will be no more than 40 blocks ago, probably 20 blocks ago
7sigma (Mon, 20 Mar 2017 15:04:22 GMT):
@jyellick Thanks. I will check and find the block where the issue started and the type of transaction. How do I change the configuration to ensure that blocks do not get added after the error occurs?
jyellick (Mon, 20 Mar 2017 15:05:41 GMT):
You may reduce your checkpoint interval `K` in the PBFT from 10 to 1, which will make you detect the situation 10x faster.
jyellick (Mon, 20 Mar 2017 15:06:41 GMT):
But, ultimately, if this occurs, you will need to revert to a stable state to recover.
jyellick (Mon, 20 Mar 2017 15:06:54 GMT):
The best solution is to fix the chaincode to be deterministic
7sigma (Mon, 20 Mar 2017 15:07:13 GMT):
Thanks! It really helped. I will work on it
7sigma (Mon, 20 Mar 2017 15:07:22 GMT):
will let you know my findings
jyellick (Mon, 20 Mar 2017 15:07:53 GMT):
Good luck @7sigma !
yacovm (Mon, 20 Mar 2017 16:15:29 GMT):
@jyellick @kostas I know this might be a bit too late to ask, but- why are `Broadcast` and `Deliver` both under the same *gRPC Service* ( `AtomicBroadcast` )?
The actors in the system that call them usually differ- peers call deliver and clients call broadcast. (The corner case is `peer cli` but that's not important)
yacovm (Mon, 20 Mar 2017 16:32:41 GMT):
Then why are they not separated?
kostas (Mon, 20 Mar 2017 16:35:39 GMT):
This separation, based on who invokes the RPC, had simply not occurred to me.
kostas (Mon, 20 Mar 2017 16:36:28 GMT):
The criterion for grouping them this way was simply that these are the two calls that the ordering service offers.
kostas (Mon, 20 Mar 2017 16:37:56 GMT):
For my edification, would splitting these into two separate services offer any concrete benefits?
kostas (Mon, 20 Mar 2017 16:38:36 GMT):
(Modeling wise you may be right that this is the right way to go, though honestly, I'm not entirely sold on this just yet. I'll need to think about it some more.)
yacovm (Mon, 20 Mar 2017 17:08:19 GMT):
Mock wise yes
yacovm (Mon, 20 Mar 2017 17:08:25 GMT):
It's less code to mock
yacovm (Mon, 20 Mar 2017 17:08:53 GMT):
And it's a separate service
yacovm (Mon, 20 Mar 2017 17:08:58 GMT):
I mean
jyellick (Mon, 20 Mar 2017 17:09:02 GMT):
You could define your own golang interface which is only the `Deliver` subset?
jyellick (Mon, 20 Mar 2017 17:09:14 GMT):
https://en.wikipedia.org/wiki/Atomic_broadcast
yacovm (Mon, 20 Mar 2017 17:09:17 GMT):
Think of something: if you have some gateway in the future
yacovm (Mon, 20 Mar 2017 17:09:32 GMT):
That allows only 1 of them
jyellick (Mon, 20 Mar 2017 17:09:41 GMT):
Since `Deliver` is defined in terms of `Broadcast` it seemed like it made sense to group them together. They seem to be usually discussed as a single context in distributed systems
jyellick (Mon, 20 Mar 2017 17:11:52 GMT):
Maybe it would have made more sense to split them into two gRPC services. Would have to reflect on it more, though I suspect that ship has sailed at this point
kostas (Mon, 20 Mar 2017 17:12:57 GMT):
FWIW I think Yacov is talking about two separate _gRPC_ services indeed.
kostas (Mon, 20 Mar 2017 17:13:35 GMT):
(Agree about the last bit though, w/ the ship sailing.)
yacovm (Mon, 20 Mar 2017 17:18:16 GMT):
@jyellick so you mean I need to define my own interface that has only one of them (the one I need, i.e. `Deliver`),
and also implement:
```
func NewAtomicBroadcastClient(cc *grpc.ClientConn) AtomicBroadcastClient {
return &atomicBroadcastClient{cc}
}
```
Which returns my own interface, right?
yacovm (Mon, 20 Mar 2017 17:18:55 GMT):
(instead of this `AtomicBroadcastClient` which makes everyone implement both methods)
jyellick (Mon, 20 Mar 2017 17:19:41 GMT):
That would be one approach, not sure that it's simpler than simply mocking out the full interface with some unimplemented methods though
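For illustration, that consumer-defined subset interface might look like this in Go (all types here are hypothetical stand-ins for the generated gRPC code, not the real protos):

```go
package main

import "fmt"

// Hypothetical stand-ins for the generated gRPC types; the real
// AtomicBroadcastClient bundles both calls into one interface.
type Envelope struct{ Payload string }
type Block struct{ Number uint64 }

type AtomicBroadcastClient interface {
	Broadcast(env Envelope) error
	Deliver(channelID string) (Block, error)
}

// A consumer that only reads blocks can declare its own narrower
// interface; any AtomicBroadcastClient satisfies it automatically.
type DeliverClient interface {
	Deliver(channelID string) (Block, error)
}

// A test mock then only has to implement the one method it uses.
type mockDeliver struct{}

func (mockDeliver) Deliver(channelID string) (Block, error) {
	return Block{Number: 1}, nil
}

func fetchLatest(c DeliverClient, ch string) (uint64, error) {
	b, err := c.Deliver(ch)
	return b.Number, err
}

func main() {
	n, _ := fetchLatest(mockDeliver{}, "mych")
	fmt.Println(n) // prints 1
}
```

Because Go interfaces are satisfied structurally, the narrowing happens entirely on the consumer side with no change to the generated code.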
yacovm (Mon, 20 Mar 2017 17:22:00 GMT):
That's what I ended up doing (mocking the full).
I'm of course- not trying to criticize or say something should be changed, I only inquired about the reasoning.
Regarding being "related to the same atomic-broadcast notion" - yeah you're right, but now this prohibits you from having different ports to clients and peers, right?
(If you want different endpoints you now need to register 2 instances but somehow make 1 of each not-service on that port or something)
jyellick (Mon, 20 Mar 2017 17:22:52 GMT):
One thing to keep in mind, the SDKs do also invoke `Deliver`
yacovm (Mon, 20 Mar 2017 17:23:11 GMT):
Only at channel creation though right?
jyellick (Mon, 20 Mar 2017 17:23:14 GMT):
Right
jyellick (Mon, 20 Mar 2017 17:24:00 GMT):
Having multiple ports would certainly make coarse filtering to differentiate between types of clients easier, but it also increases the deployment complexity.
yacovm (Mon, 20 Mar 2017 17:24:54 GMT):
Well I think the deployment now is so complex that making different ports doesn't change much if you look at it graphically ;)
jyellick (Mon, 20 Mar 2017 17:26:04 GMT):
> I'm of course- not trying to criticize or say something should be changed, I only inquired about the reasoning.
Not taken as criticism at all. The simple answer is, this was defined _very_ early, and on reflection, I believe 'submitting peer' was still a concept, so peers would have been the only consumer of both interfaces. After definition, we never ran into a reason to consider splitting it into two services, so there it is today.
yacovm (Mon, 20 Mar 2017 17:26:43 GMT):
aha! hyperledger lore.
antoniovassell (Tue, 21 Mar 2017 01:41:46 GMT):
Has joined the channel.
jansony1 (Tue, 21 Mar 2017 02:51:00 GMT):
I wonder if there is a doc that describes in detail what items orderer.block includes, and what the difference is between orderer.block and channel.tx.
I think reader and writer policies are defined somewhere in orderer.block or channel.tx. I have not found them in the configtx.yaml file, but I found some default settings in orderer.block using the inspect command. Where do they come from, and where can I customize them?
honghongbuqi (Tue, 21 Mar 2017 07:35:02 GMT):
Has joined the channel.
zwsyjj (Tue, 21 Mar 2017 11:54:27 GMT):
Has joined the channel.
jyellick (Tue, 21 Mar 2017 13:56:41 GMT):
@jansony1 Please see https://gerrit.hyperledger.org/r/#/c/7115/ https://github.com/jyellick/fabric-gerrit/blob/master/docs/source/configtxgen.rst and https://github.com/jyellick/fabric-gerrit/blob/master/docs/source/policies.rst
thojest (Tue, 21 Mar 2017 15:44:33 GMT):
Has joined the channel.
thojest (Tue, 21 Mar 2017 15:44:45 GMT):
hey guys and especially @jyellick
thojest (Tue, 21 Mar 2017 15:44:53 GMT):
i have exactly this problem https://jira.hyperledger.org/browse/FAB-1871
thojest (Tue, 21 Mar 2017 15:45:20 GMT):
isn't it possible to deploy chaincode when your peer network isn't created using docker-compose?
thojest (Tue, 21 Mar 2017 15:45:31 GMT):
i get this warning and it stops my chaincode from being deployed
jyellick (Tue, 21 Mar 2017 15:46:24 GMT):
@thojest Please try deploying your chaincode to a peer that is not vp0
thojest (Tue, 21 Mar 2017 15:46:50 GMT):
just changing the id ?
jyellick (Tue, 21 Mar 2017 15:49:18 GMT):
You should have brought up at least 4 peers for a PBFT network
jyellick (Tue, 21 Mar 2017 15:49:28 GMT):
The names of which should be vp0, vp1, vp2, vp3
thojest (Tue, 21 Mar 2017 15:49:45 GMT):
:D didnt know that
jyellick (Tue, 21 Mar 2017 15:49:45 GMT):
The snippet in FAB-1871 is from vp0, which indicates it is currently in view-change, awaiting the network to move to view 1
jyellick (Tue, 21 Mar 2017 15:50:15 GMT):
Ah, yes, if you do not have 3f+1 nodes in your network, things are not going to work
thojest (Tue, 21 Mar 2017 15:50:24 GMT):
:D i feel dumb now
jyellick (Tue, 21 Mar 2017 15:50:25 GMT):
(where f is the number of faults you have configured your network to tolerate)
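The arithmetic behind the 4-peer minimum, as a tiny sketch:

```go
package main

import "fmt"

// PBFT sizing: tolerating f Byzantine faults requires at least 3f+1
// replicas, with quorums of 2f+1.
func minReplicas(f int) int { return 3*f + 1 }
func quorum(f int) int      { return 2*f + 1 }

func main() {
	// The default v0.6 setup: f = 1, hence the 4-peer minimum.
	fmt.Println(minReplicas(1), quorum(1)) // prints 4 3
}
```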
jyellick (Tue, 21 Mar 2017 15:50:34 GMT):
No problem, good luck!
thojest (Tue, 21 Mar 2017 15:51:22 GMT):
@jyellick is it possible to locally run several peers on the same laptop if you carefully handle the ports?
jyellick (Tue, 21 Mar 2017 15:51:54 GMT):
Yes, it is possible, I recommend you see the `busywork` package for examples of how this is done
thojest (Tue, 21 Mar 2017 15:52:09 GMT):
@jyellick thx a lot
jyellick (Tue, 21 Mar 2017 15:52:11 GMT):
[And for context of anyone reading the above, this is in reference to the v0.6 architecture, not the v1]
kostas (Tue, 21 Mar 2017 15:52:17 GMT):
(@thojest if this takes care of your issue, can you please update the bug accordingly by closing it?)
thojest (Tue, 21 Mar 2017 15:54:28 GMT):
@kostas if it works i ll do it
thojest (Tue, 21 Mar 2017 16:11:40 GMT):
@jyellick how do i overwrite the peer event address with a variable?
thojest (Tue, 21 Mar 2017 16:11:49 GMT):
is it CORE_PEER_EVENT_ADDRESS ?
thojest (Tue, 21 Mar 2017 16:14:02 GMT):
got it working :)
thojest (Tue, 21 Mar 2017 16:44:20 GMT):
@jyellick now have a network of 4 peer nodes seeing each other but still get this error
thojest (Tue, 21 Mar 2017 16:44:48 GMT):
and deployed to vp1
jyellick (Tue, 21 Mar 2017 17:12:46 GMT):
@thojest Can you paste the output of all 4 peer's logs to the issue?
thojest (Tue, 21 Mar 2017 17:14:38 GMT):
wait a second
thojest (Tue, 21 Mar 2017 17:20:03 GMT):
hmm is it possible
thojest (Tue, 21 Mar 2017 17:20:11 GMT):
that i have to set the installpath of the chaincode
thojest (Tue, 21 Mar 2017 17:20:14 GMT):
i think i missed that
thojest (Tue, 21 Mar 2017 17:20:54 GMT):
CORE_PEER_CHAINCODE_INSTALLPATH
thojest (Tue, 21 Mar 2017 17:21:36 GMT):
naah but this refers to the docker-container only i think
jyellick (Tue, 21 Mar 2017 19:45:51 GMT):
Took a look at the logs you sent me. It appears to me that vp0 has advanced its view, but vp1-vp3 are talking and doing fine.
jyellick (Tue, 21 Mar 2017 19:46:30 GMT):
You'll notice that if you shut down one of the other peers, say vp3, and then send a transaction to vp2, this will cause vp0 to recover and begin participating again
jyellick (Tue, 21 Mar 2017 19:47:43 GMT):
Note that for the PBFT protocol, the guarantee is only that the network makes progress, not that every node makes progress, so vp0 being out of sync is "working as designed". This is a common question. See https://jira.hyperledger.org/browse/FAB-707 or https://github.com/hyperledger-archives/fabric/issues/1120
Shadow-Hawk (Wed, 22 Mar 2017 03:39:32 GMT):
Has joined the channel.
Shadow-Hawk (Wed, 22 Mar 2017 03:45:54 GMT):
Hi guys, in fabric 0.6, what does the consensus module reach consensus on? Does it only agree on the order of transactions, or does it also agree on the transaction execution result to stay deterministic? How is transaction determinism guaranteed? Thank you!
kostas (Wed, 22 Mar 2017 04:51:29 GMT):
@Shadow-Hawk: In PBFT which is what 0.6 runs on by default, you consent on the order of transactions.
kostas (Wed, 22 Mar 2017 04:52:30 GMT):
Every K blocks (where K is configurable, look at your YAML file), the nodes exchange a so-called checkpoint which is a hash of their state thus far. This allows us to detect whether the nodes have diverged.
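A toy sketch of that checkpoint comparison (hypothetical code, not the actual PBFT implementation):

```go
package main

import "fmt"

// Every K blocks each replica announces a hash of its state; mismatched
// hashes at a checkpoint mean the replicas have diverged.
func diverged(checkpointHashes []string) bool {
	for _, h := range checkpointHashes[1:] {
		if h != checkpointHashes[0] {
			return true
		}
	}
	return false
}

func main() {
	fmt.Println(diverged([]string{"ab12", "ab12", "ab12", "ab12"})) // prints false
	fmt.Println(diverged([]string{"ab12", "ab12", "ff00", "ab12"})) // prints true
}
```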
kostas (Wed, 22 Mar 2017 04:52:47 GMT):
PBFT does *not* guarantee that the transaction is deterministic.
kostas (Wed, 22 Mar 2017 04:55:03 GMT):
Sieve, which used to be an option in the 0.5 version, was a protocol designed to filter out non-deterministic transactions. You can read more about it here: https://arxiv.org/abs/1603.07351
Shadow-Hawk (Wed, 22 Mar 2017 06:19:18 GMT):
Thank you @kostas ! According to the paper, it seems that Fabric 0.6 follows the order-then-execute approach, so the VPs only reach an agreement on the inputs, i.e. the ordering, which means it does not guarantee the outputs are the same, while Sieve follows the execute-then-order approach to agree on the output. However, I saw this post http://stackoverflow.com/questions/41710738/pbft-algorithm-in-hyperledger saying VPs check that the hash of the execution result is the same, which seems to mean a block can only be formed if the execution results agree (>=2/3). Looks like it contradicts what you were saying. Or did I miss something?
kostas (Wed, 22 Mar 2017 06:36:58 GMT):
Yeah, that StackOverflow post is BS.
magg (Wed, 22 Mar 2017 09:23:52 GMT):
Has joined the channel.
magg (Wed, 22 Mar 2017 09:25:31 GMT):
will fabric 1.0 remove consensus protocols entirely?
Vadim (Wed, 22 Mar 2017 09:26:03 GMT):
@magg I would say it does not make sense, why do you think so?
magg (Wed, 22 Mar 2017 09:27:08 GMT):
@Vadim i saw they removed pbft and replaced it with kafka and zookeeper in v1.0
Vadim (Wed, 22 Mar 2017 09:27:32 GMT):
there will be SBFT implemented
Vadim (Wed, 22 Mar 2017 09:28:04 GMT):
also, using Kafka does not mean there is no consensus, I guess
magg (Wed, 22 Mar 2017 09:28:29 GMT):
oh ok
Vadim (Wed, 22 Mar 2017 09:29:11 GMT):
I think PBFT did not really work for the new architecture very well and as a first step Kafka was used, with the aim to implement SBFT later
thojest (Wed, 22 Mar 2017 09:31:49 GMT):
hey guys im posting my error log here again because it seems to have something to do with consensus.
thojest (Wed, 22 Mar 2017 09:32:14 GMT):
using v0.6 im trying to set-up a network of 4 peer nodes but not using docker instead running peers locally
thojest (Wed, 22 Mar 2017 09:32:22 GMT):
1 peer node as root node on laptop 1
thojest (Wed, 22 Mar 2017 09:32:43 GMT):
3 peer nodes on laptop 2 connecting to root node on laptop 1
thojest (Wed, 22 Mar 2017 09:32:56 GMT):
then trying to deploy chaincode using hfc sdk
thojest (Wed, 22 Mar 2017 09:33:28 GMT):
before deployment everything seems fine, but upon deployment i get the following warning, which results in my chaincode not being deployed
thojest (Wed, 22 Mar 2017 09:33:50 GMT):
sry for the long post
thojest (Wed, 22 Mar 2017 09:34:11 GMT):
19:02:22.901 [peer] func1 -> INFO 001 Auto detected peer address: 192.168.2.151:7051
19:02:22.902 [peer] func1 -> INFO 002 Auto detected peer address: 192.168.2.151:7051
19:02:22.904 [nodeCmd] serve -> INFO 003 Security enabled status: true
19:02:22.904 [eventhub_producer] start -> INFO 004 event processor started
19:02:22.905 [nodeCmd] serve -> INFO 005 Privacy enabled status: false
19:02:22.906 [db] open -> INFO 006 Setting rocksdb maxLogFileSize to 10485760
19:02:22.906 [db] open -> INFO 007 Setting rocksdb keepLogFileNum to 10
19:02:22.919 [crypto] RegisterValidator -> INFO 008 Registering validator [test_vp1] with name [test_vp1]...
19:02:23.129 [crypto] RegisterValidator -> INFO 009 Registering validator [test_vp1] with name [test_vp1]...done!
19:02:23.129 [crypto] InitValidator -> INFO 00a Initializing validator [test_vp1]...
19:02:23.138 [crypto] InitValidator -> INFO 00b Initializing validator [test_vp1]...done!
19:02:23.139 [chaincode] NewChaincodeSupport -> INFO 00c Chaincode support using peerAddress: 192.168.2.151:7051
19:02:23.139 [sysccapi] RegisterSysCC -> WARN 00d Currently system chaincode does support security(noop,github.com/hyperledger/fabric/bddtests/syschaincode/noop)
19:02:23.140 [state] loadConfig -> INFO 00e Loading configurations...
19:02:23.140 [state] loadConfig -> INFO 00f Configurations loaded. stateImplName=[buckettree], stateImplConfigs=map[numBuckets:%!s(int=1000003) maxGroupingAtEachLevel:%!s(int=5) bucketCacheSize:%!s(int=100)], deltaHistorySize=[500]
19:02:23.140 [state] NewState -> INFO 010 Initializing state implementation [buckettree]
19:02:23.140 [buckettree] initConfig -> INFO 011 configs passed during initialization = map[string]interface {}{"numBuckets":1000003, "maxGroupingAtEachLevel":5, "bucketCacheSize":100}
19:02:23.140 [buckettree] initConfig -> INFO 012 Initializing bucket tree state implemetation with configurations &{maxGroupingAtEachLevel:5 lowestLevel:9 levelToNumBucketsMap:map[7:40001 4:321 3:65 2:13 1:3 9:1000003 8:200001 6:8001 5:1601 0:1] hashFunc:0x9af1b0}
19:02:23.140 [buckettree] newBucketCache -> INFO 013 Constructing bucket-cache with max bucket cache size = [100] MBs
19:02:23.141 [buckettree] loadAllBucketNodesFromDB -> INFO 014 Loaded buckets data in cache. Total buckets in DB = [0]. Total cache size:=0
19:02:23.141 [genesis] func1 -> INFO 015 Creating genesis block.
19:02:23.144 [consensus/controller] NewConsenter -> INFO 016 Creating consensus plugin pbft
19:02:23.145 [consensus/pbft] newPbftCore -> INFO 017 PBFT type = *pbft.obcBatch
19:02:23.145 [consensus/pbft] newPbftCore -> INFO 018 PBFT Max number of validating peers (N) = 4
19:02:23.145 [consensus/pbft] newPbftCore -> INFO 019 PBFT Max number of failing peers (f) = 1
19:02:23.146 [consensus/pbft] newPbftCore -> INFO 01a PBFT byzantine flag = false
19:02:23.146 [consensus/pbft] newPbftCore -> INFO 01b PBFT request timeout = 3s
19:02:23.146 [consensus/pbft] newPbftCore -> INFO 01c PBFT view change timeout = 2s
19:02:23.146 [consensus/pbft] newPbftCore -> INFO 01d PBFT Checkpoint period (K) = 10
19:02:23.146 [consensus/pbft] newPbftCore -> INFO 01e PBFT broadcast timeout = 1s
19:02:23.146 [consensus/pbft] newPbftCore -> INFO 01f PBFT Log multiplier = 4
19:02:23.146 [consensus/pbft] newPbftCore -> INFO 020 PBFT log size (L) = 40
19:02:23.146 [consensus/pbft] newPbftCore -> INFO 021 PBFT null requests disabled
19:02:23.146 [consensus/pbft] newPbftCore -> INFO 022 PBFT automatic view change disabled
19:02:23.146 [consensus/pbft] restoreLastSeqNo -> INFO 023 Replica 1 restored lastExec: 0
19:02:23.146 [consensus/pbft] restoreState -> INFO 024 Replica 1 restored state: view: 0, seqNo: 0, pset: 0, qset: 0, reqBatches: 0, chkpts: 1 h: 0
19:02:23.147 [consensus/pbft] newObcBatch -> INFO 025 PBFT Batch size = 10
19:02:23.147 [consensus/pbft] newObcBatch -> INFO 026 PBFT Batch timeout = 1s
thojest (Wed, 22 Mar 2017 09:34:36 GMT):
19:02:23.147 [consensus/statetransfer] SyncToTarget -> INFO 027 Syncing to target 46b9dd2b0ba88d13233b3feb743eeb243fcd52ea62b81b82b50c27646ed5762fd75dc4ddd8c0f200cb05019d67b592f6fc821c49479ab48640292eacb3b7c4be for block number 0 with peers []
19:02:23.148 [nodeCmd] serve -> INFO 029 Starting peer with ID=name:"vp1" , network ID=dev, address=192.168.2.151:7051, rootnodes=192.168.2.150:7051, validator=true
19:02:23.148 [rest] StartOpenchainRESTServer -> INFO 02a Initializing the REST service on 192.168.2.151:7050, TLS is disabled.
19:02:23.148 [consensus/statetransfer] blockThread -> INFO 028 Validated blockchain to the genesis block
19:02:23.150 [consensus/statetransfer] blockThread -> INFO 02b Validated blockchain to the genesis block
19:02:23.150 [consensus/pbft] ProcessEvent -> INFO 02c Replica 1 application caught up via state transfer, lastExec now 0
19:03:21.933 [consensus/pbft] ProcessEvent -> INFO 02d Replica 1 view change timer expired, sending view change: Batch outstanding requests
19:03:21.934 [consensus/pbft] sendViewChange -> INFO 02e Replica 1 sending view-change, v:1, h:0, |C|:1, |P|:0, |Q|:0
19:03:21.934 [consensus/pbft] recvViewChange -> INFO 02f Replica 1 received view-change from replica 1, v:1, h:0, |C|:1, |P|:0, |Q|:0
19:03:22.015 [consensus/pbft] recvViewChange -> INFO 030 Replica 1 received view-change from replica 2, v:1, h:0, |C|:1, |P|:0, |Q|:0
19:03:22.017 [consensus/pbft] recvViewChange -> INFO 031 Replica 1 received view-change from replica 3, v:1, h:0, |C|:1, |P|:0, |Q|:0
19:03:22.018 [consensus/pbft] sendNewView -> INFO 032 Replica 1 is new primary, sending new-view, v:1, X:map[1:]
19:03:22.018 [consensus/pbft] processNewView2 -> INFO 033 Replica 1 accepting new-view to view 1
19:03:22.126 [consensus/pbft] executeOne -> INFO 034 Replica 1 executing/committing null request for view=1/seqNo=1
19:03:22.126 [consensus/pbft] execDoneSync -> INFO 035 Replica 1 finished execution 1, trying next
19:03:23.126 [consensus/pbft] ProcessEvent -> INFO 036 Replica 1 batch timer expired
19:03:23.126 [consensus/pbft] sendBatch -> INFO 037 Creating batch with 1 requests
19:03:23.409 [consensus/pbft] executeOne -> INFO 038 Replica 1 executing/committing request batch for view=1/seqNo=2 and digest Oy28ZjgNIX9X9Z7vbutAfrfybMONNHWlhWI1P/KTU2ibHJVGS/aIU2jKHZnJNQveDyj8lqRuYCo3LaDbvJBG8w==
19:03:31.406 [consensus/pbft] recvPrePrepare -> WARN 039 Pre-prepare from other than primary: got 0, should be 1
19:03:31.406 [consensus/pbft] recvViewChange -> INFO 03a Replica 1 received view-change from replica 0, v:1, h:0, |C|:1, |P|:0, |Q|:1
19:03:35.820 [consensus/pbft] recvViewChange -> INFO 03b Replica 1 received view-change from replica 0, v:2, h:0, |C|:1, |P|:1, |Q|:2
magg (Wed, 22 Mar 2017 09:35:27 GMT):
got it, thanks @Vadim
thojest (Wed, 22 Mar 2017 09:42:21 GMT):
is it possible that i run into problems because of the parameters i used for pbft?
thojest (Wed, 22 Mar 2017 09:42:28 GMT):
what is the view change timer?
K Sai Anirudh (Wed, 22 Mar 2017 10:13:22 GMT):
Has joined the channel.
thojest (Wed, 22 Mar 2017 14:44:00 GMT):
ok i got it working now
thojest (Wed, 22 Mar 2017 14:44:21 GMT):
can somebody please explain in detail what this means
thojest (Wed, 22 Mar 2017 14:45:57 GMT):
`[recvViewChange -> WARN 126 Replica 0 already has a view-change, v:2, h:0, |C|:1 |P|:1, |Q|:1]`
thojest (Wed, 22 Mar 2017 14:48:52 GMT):
especially what is PBFT automatic view change?
kostas (Wed, 22 Mar 2017 14:54:39 GMT):
View = 2, lower bound is seqNo 0, number of messages in C-, P-, Q- sets is all 1. There is no one-liner explanation for those sets. If you care about this level of detail, you really need to dive into the Castro paper.
kostas (Wed, 22 Mar 2017 14:57:53 GMT):
@magg: This is more-or-less correct. https://chat.hyperledger.org/channel/fabric-consensus?msg=knhShy76jtvYWaZnj
thojest (Wed, 22 Mar 2017 14:58:33 GMT):
the thing is i have setup a network of local working peers no docker
kostas (Wed, 22 Mar 2017 14:58:46 GMT):
I wouldn't say PBFT didn't work very well, though it's certainly more complex and there was no direct mapping of the channel concept as you have with Kafka.
thojest (Wed, 22 Mar 2017 14:58:46 GMT):
and im permanently getting this warning in the logs of all peer nodes
kostas (Wed, 22 Mar 2017 14:59:10 GMT):
@thojest: This can be ignored.
magg (Wed, 22 Mar 2017 14:59:52 GMT):
oh @kostas... any reason on why choose sbft over pbft?
kostas (Wed, 22 Mar 2017 15:00:19 GMT):
@thojest: It tells you the node receives a view-change request for a view that is not new to it, that's all.
kostas (Wed, 22 Mar 2017 15:00:52 GMT):
@magg: I use the acronyms PBFT and SBFT interchangeably. SBFT is essentially PBFT with a few simplifying but super reasonable assumptions.
thojest (Wed, 22 Mar 2017 15:01:13 GMT):
@kostas ahh thx; what i also get is on the node where i deployed the chaincode, when i perform invoke or query is
magg (Wed, 22 Mar 2017 15:01:40 GMT):
got it @kostas, any eta on when sbft will land in 1.0?
thojest (Wed, 22 Mar 2017 15:01:47 GMT):
`transport: http2Server.HandleStreams failed to receive the preface from client: EOF`
thojest (Wed, 22 Mar 2017 15:01:54 GMT):
everytime i invoke or query something
thojest (Wed, 22 Mar 2017 15:02:26 GMT):
but only on the peer node to which the chaincode was deployed
kostas (Wed, 22 Mar 2017 15:02:50 GMT):
@magg: I am very bad at predictions. End of year? Don't quote me though.
magg (Wed, 22 Mar 2017 15:02:58 GMT):
ok
kostas (Wed, 22 Mar 2017 15:03:35 GMT):
@thojest: Unfortunately chaincodes are not my area of expertise. A better question for the folks in #fabric
thojest (Wed, 22 Mar 2017 15:04:04 GMT):
@kostas thx anyway
thojest (Wed, 22 Mar 2017 15:06:22 GMT):
last question @kostas: when I enable automatic view change, will the warning disappear?
kostas (Wed, 22 Mar 2017 15:06:36 GMT):
@tjo
kostas (Wed, 22 Mar 2017 15:07:13 GMT):
@thojest: It won't. You'll still get it.
thojest (Wed, 22 Mar 2017 15:07:19 GMT):
k thx
nickmelis (Wed, 22 Mar 2017 16:28:55 GMT):
Has joined the channel.
nickmelis (Wed, 22 Mar 2017 16:32:21 GMT):
Is it true that PBFT (fabric v0.6) will not work with less than 4 peers? I'm having a problem with 2 peers running
nickmelis (Wed, 22 Mar 2017 16:32:28 GMT):
trying to run a 2-validating node cluster PBFT with Hyperledger v0.6 using docker compose and standard docker images. I'm currently unable to deploy my Java chaincode. Although no error message is in the logs, I keep seeing this:
> vp0_1 | 16:23:20.969 [consensus/pbft] recvViewChange -> WARN 11a Replica 0 already has a view change message for view 1 from replica 0
it kind of enters into an infinite loop and the chaincode never gets deployed. Any idea what that means??
kostas (Wed, 22 Mar 2017 16:58:42 GMT):
@nickmelis Yes, 4 is the minimum as required by the protocol.
kostas (Wed, 22 Mar 2017 17:02:09 GMT):
As a result, the error that you're seeing makes perfect sense.
nickmelis (Wed, 22 Mar 2017 17:21:13 GMT):
@kostas yes now it makes perfect sense, and in fact I can deploy chaincode when running with 4 peers
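As background for the 4-peer minimum: PBFT tolerates f Byzantine faults with N = 3f + 1 replicas, so any network of fewer than 4 peers gives f = 0 and the protocol cannot make progress around even one faulty or stalled node. A quick illustrative sketch of the arithmetic (not fabric code):

```python
def pbft_bounds(n: int) -> tuple[int, int]:
    """For n PBFT replicas, return (f, quorum): f is the maximum number of
    Byzantine replicas tolerated, and quorum = 2f + 1 is the number of
    matching replies needed to make progress."""
    f = (n - 1) // 3
    return f, 2 * f + 1

assert pbft_bounds(4) == (1, 3)    # the minimum useful network: 4 peers, 1 fault
assert pbft_bounds(2) == (0, 1)    # 2 peers tolerate no faults at all
assert pbft_bounds(16) == (5, 11)  # a 16-peer setup tolerates 5 faulty peers
```

This is why the 2-peer cluster above loops on view changes forever: with f = 0, a single unreachable or confused replica is already beyond what the protocol can mask.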
qingdu (Wed, 22 Mar 2017 19:03:37 GMT):
Has joined the channel.
qingdu (Wed, 22 Mar 2017 19:07:14 GMT):
hey guys. I am confused about consensus in fabric 1.0. Has fabric 1.0 dropped PBFT? So is consensus handled by the endorsement policy and the ordering service in 1.0? Can anyone answer me?
kostas (Wed, 22 Mar 2017 19:10:06 GMT):
@qingdu http://hyperledger-fabric.readthedocs.io/en/latest/fabric_model.html?highlight=consensus#consensus
qingdu (Wed, 22 Mar 2017 19:11:31 GMT):
[ ](https://chat.hyperledger.org/channel/fabric-consensus?msg=EjStfRnvKEe2kjBd3) @kostas thx
jansony1 (Thu, 23 Mar 2017 02:29:42 GMT):
Hi,
there are two peers, peer0 belongs to Org0, peer3 belongs to Org1.
After both peers join bluemixChannel and install the chaincode bluemixcc, I instantiate it on peer0 with the policy below.
peer chaincode instantiate -o orderer0:7050 --tls $CORE_PEER_TLS_ENABLED --cafile /opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/orderer/localMspConfig/cacerts/ordererOrg0.pem -C bluemixChannel -n bluemixcc -v 1.0 -p github.com/hyperledger/fabric/examples/chaincode/go/chaincode_example02 -c '{"Args":["init","a", "100", "b","200"]}' -P "AND ('Org1MSP.member')"
The strange thing is that the invoke transaction I did on peer0 has no effect, while the one on peer3 did. Why?
Halminhu (Thu, 23 Mar 2017 02:40:25 GMT):
Has joined the channel.
qingdu (Thu, 23 Mar 2017 03:18:22 GMT):
[ ](https://chat.hyperledger.org/channel/fabric-consensus?msg=EjStfRnvKEe2kjBd3) @kostas so I think fabric 1.0 will not support PBFT, and instead spreads consensus across the whole transaction flow?
kostas (Thu, 23 Mar 2017 03:37:22 GMT):
@qingdu: I don't follow.
jinyu18 (Thu, 23 Mar 2017 06:06:45 GMT):
Has joined the channel.
gomsb143 (Thu, 23 Mar 2017 08:20:58 GMT):
Hello! I'm trying to create a compose file with 4 validating peers and 1 non-validating peer, but my non-validating peer is not syncing with the validating peers
```nvp0:
  image: hyperledger/fabric-peer
  restart: unless-stopped
  ports:
    - "12050:7050"
    - "12051:7051"
    - "12053:7053"
  environment:
    - CORE_PEER_DISCOVERY_ROOTNODE=vp0:7051
    - CORE_PEER_ID=nvp0
    - CORE_SECURITY_ENROLLID=test_nvp0
    - CORE_SECURITY_ENROLLSECRET=iywrPBDEPl0K
    - CORE_PEER_DISCOVERY_PERIOD=60s
    - CORE_PEER_DISCOVERY_TOUCHPERIOD=61s
    - CORE_PEER_ADDRESSAUTODETECT=true
    - CORE_VM_ENDPOINT=unix:///var/run/docker.sock
    - CORE_LOGGING_LEVEL=DEBUG
    - CORE_PEER_PKI_ECA_PADDR=membersrvc:7054
    - CORE_PEER_PKI_TCA_PADDR=membersrvc:7054
    - CORE_PEER_PKI_TLSCA_PADDR=membersrvc:7054
    - CORE_SECURITY_ENABLED=false
    - CORE_PEER_VALIDATOR_CONSENSUS_PLUGIN=pbft
    - CORE_PBFT_GENERAL_MODE=batch
    - CORE_PBFT_GENERAL_N=4
    - CORE_PEER_VALIDATOR_ENABLED=false
    - CORE_PBFT_GENERAL_BATCHSIZE=1
  volumes:
    - /var/run/docker.sock:/var/run/docker.sock
  command: sh -c "sleep 10; peer node start"```
```gomathi@gomathi:~/nvp$ curl http://localhost:10050/chain
{"height":7,"currentBlockHash":"EAFeqPpJdEa6UrtZTEf5WfKZMY+gWbU6ernRnZssQ9EBdYtI3ziy1vGN73R/mJOe79JwqI81r/D1GClmaoUiXQ==","previousBlockHash":"QzgBRyf0k2o+QCYloUFJ0+wu/E2sKPGuKSv8TGGl328cu57l9aWgOkZWrP25mqn+KUPHhVsDMvdyAEThw9mBDA=="}
gomathi@gomathi:~/nvp$ curl http://localhost:12050/chain
{"height":1,"currentBlockHash":"RrndKwuojRMjOz/rdD7rJD/NUupiuBuCtQwnZG7Vdi/XXcTd2MDyAMsFAZ1ntZL2/IIcSUeatIZAKS6ss7fEvg=="}```
Shadow-Hawk (Thu, 23 Mar 2017 08:37:02 GMT):
Hi @kostas and all, while studying the 0.6 source code I didn't find transactions within a block being hashed into a Merkle tree; it seems the entire block with all its transactions gets persisted as-is. Is that true?
xuzhao103389 (Thu, 23 Mar 2017 10:37:20 GMT):
Has joined the channel.
matanyahu (Thu, 23 Mar 2017 10:54:12 GMT):
Has joined the channel.
nickmelis (Thu, 23 Mar 2017 13:16:29 GMT):
> Mar 23 13:15:30 docker-compose[2422]: vp0_1 | 13:15:30.495 [consensus/pbft] recvViewChange -> INFO 9ed Replica 0 received view-change from replica 2, v:3, h:480, |C|:1, |P|:0, |Q|:1
> Mar 23 13:15:30 docker-compose[2422]: vp0_1 | 13:15:30.495 [consensus/pbft] recvViewChange -> WARN 9ee Replica 0 already has a view change message for view 3 from replica 2
Do these messages indicate a sync problem?
nickmelis (Thu, 23 Mar 2017 13:17:45 GMT):
right now I'm getting loads of them
kostas (Thu, 23 Mar 2017 13:50:28 GMT):
@Shadow-Hawk: I don't understand the question.
kostas (Thu, 23 Mar 2017 13:51:00 GMT):
@nickmelis: Replica 2 is requesting a view-change and is not participating in ordering.
nickmelis (Thu, 23 Mar 2017 14:00:06 GMT):
@kostas is it something I need to worry about? Is anything going wrong?
kostas (Thu, 23 Mar 2017 14:41:17 GMT):
@nickmelis: There's a million ways to answer this, w/r/t whether you should worry about it or not. It depends on why replica 2 got out of sync. But if the rest of the network switches to view 3 while replica 2 is still in that state, then they'll all be part of a quorum again.
nickmelis (Thu, 23 Mar 2017 14:42:33 GMT):
so let me just understand. What's a replica here?
kostas (Thu, 23 Mar 2017 15:05:17 GMT):
A validating peer.
nickmelis (Thu, 23 Mar 2017 15:41:01 GMT):
ok so in practice one of the peers is receiving a view change from another peer, but it's dropping it because it already received the same view change from the same peer?
kostas (Thu, 23 Mar 2017 15:45:19 GMT):
Yes.
kostas (Thu, 23 Mar 2017 15:45:53 GMT):
It's the replica's passive aggressive way of saying to the other replica "I heard you the first time".
nickmelis (Thu, 23 Mar 2017 15:49:46 GMT):
lol ok :) so it may or may not indicate the two peers are out of sync
nickmelis (Thu, 23 Mar 2017 15:50:19 GMT):
still trying to understand what's happening. Scenario is: running 4 peers with pbft, and sending 1000 transactions all in one go to peer 1.
nickmelis (Thu, 23 Mar 2017 15:50:59 GMT):
all is fine when I run on my laptop (docker compose). When trying to run on a linux machine (docker compose as well) something goes wrong and the majority of the transactions get dropped
kostas (Thu, 23 Mar 2017 16:19:51 GMT):
Replica 2 is requesting a view change. If no other replica is requesting that same view change, and if transactions keep coming in, replica 2 *will* get out of sync.
kostas (Thu, 23 Mar 2017 16:20:47 GMT):
Due to the algorithm it will sync up periodically, but there will be periods where it is out of sync, and at all times it is not participating in the ordering. Until the network switches to view 3.
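To make the "already has a view change message" warning concrete: a replica keeps at most one view-change message per (view, sender) pair and drops repeats. A minimal illustrative sketch of that bookkeeping (hypothetical code, not the actual fabric implementation):

```python
class ViewChangeLog:
    """Tracks view-change messages keyed by (view, sender replica id)."""

    def __init__(self):
        self._seen = {}  # (view, sender) -> message

    def recv_view_change(self, view: int, sender: int, msg: str) -> bool:
        """Store the message; return False (and warn) on a duplicate."""
        key = (view, sender)
        if key in self._seen:
            print(f"WARN: already has a view change message for view {view} "
                  f"from replica {sender}")
            return False
        self._seen[key] = msg
        return True

log = ViewChangeLog()
assert log.recv_view_change(3, 2, "vc") is True   # first copy is stored
assert log.recv_view_change(3, 2, "vc") is False  # repeat is dropped with a warning
```

The warning is therefore informational: the sender keeps retransmitting its view-change until the view actually changes, and the receiver just acknowledges that it already has it.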
yyyyyyy9 (Fri, 24 Mar 2017 06:31:09 GMT):
Has joined the channel.
sushilsingh94 (Fri, 24 Mar 2017 13:14:54 GMT):
Has joined the channel.
sushilsingh94 (Fri, 24 Mar 2017 13:20:47 GMT):
Hi, I'm trying to create 4 peers using docker compose, but while running docker-compose up the peer is not connected and I am getting this issue: vp1_1 | 2017/03/24 12:55:52 grpc: addrConn.resetTransport failed to create client transport: connection error: desc = "transport: dial tcp 172.17.0.3:7051: getsockopt: connection refused"; Reconnecting to {"vp0:7051"
sushilsingh94 (Fri, 24 Mar 2017 13:25:10 GMT):
my docker-compose.yml:
```membersrvc:
  image: hyperledger/fabric-membersrvc
  ports:
    - "7054:7054"
  command: membersrvc
vp0:
  # container_name: vp0
  image: hyperledger/fabric-peer
  ports:
    - "7050:7050"
    - "7051:7051"
    - "7052:7052"
  environment:
    - CORE_PEER_ADDRESSAUTODETECT=true
    - CORE_VM_ENDPOINT=http://localhost:2375
    - CORE_LOGGING_LEVEL=WARN
    - CORE_PEER_ID=vp0
    - CORE_PEER_VALIDATOR_CONSENSUS_PLUGIN=noops
  command: sh -c "sleep 5; peer node start --peer-chaincodedev"
vp1:
  # container_name: vp1
  image: hyperledger/fabric-peer
  environment:
    - CORE_VM_ENDPOINT=http://localhost:2375
    - CORE_LOGGING_LEVEL=WARN
    - CORE_PEER_ID=vp1
    - CORE_PEER_DISCOVERY_ROOTNODE=vp0:7051
    - CORE_PEER_VALIDATOR_CONSENSUS_PLUGIN=noops
  command: peer node start
  links:
    - vp0```
joe-alewine (Fri, 24 Mar 2017 13:57:47 GMT):
Has joined the channel.
xuzhao103389 (Fri, 24 Mar 2017 14:28:15 GMT):
Hi I got an error when doing the fabric tests
xuzhao103389 (Fri, 24 Mar 2017 14:28:32 GMT):
I do make behave
xuzhao103389 (Fri, 24 Mar 2017 14:28:59 GMT):
And we compose "docker-compose-next-4.yml" # steps/bootstrap_impl.py:312 8.357s
And I wait "5" seconds # steps/bootstrap_impl.py:319 5.004s
And the ordererBootstrapAdmin runs the channel template tool to create the orderer configuration template "template1" for application developers using orderer "orderer0" # steps/bootstrap_impl.py:118 0.000s
And the ordererBootstrapAdmin distributes orderer configuration template "template1" and chain creation policy name "chainCreatePolicy1" # steps/bootstrap_impl.py:123 0.000s
And the following application developers are defined for peer organizations and each saves their cert as alias # steps/bootstrap_impl.py:172 0.452s
| Developer | ChainCreationPolicyName | Organization | AliasSavedUnder |
| dev0Org0 | chainCreatePolicy1 | peerOrg0 | dev0Org0App1 |
| dev0Org1 | chainCreatePolicy1 | peerOrg1 | dev0Org1App1 |
And the user "dev0Org0" creates a peer template "template1" with chaincode deployment policy using chain creation policy name "chainCreatePolicy1" and peer organizations # steps/bootstrap_impl.py:128 0.000s
| Organization |
| peerOrg0 |
| peerOrg1 |
And the user "dev0Org0" creates an peer anchor set "anchors1" for channel "com.acme.blockchain.jdoe.Channel1" for orgs # steps/bootstrap_impl.py:305 0.002s
| User | Peer | Organization |
| peer0Signer | peer0 | peerOrg0 |
| peer2Signer | peer2 | peerOrg1 |
And the user "dev0Org0" creates a ConfigUpdateEnvelope "createChannelConfigUpdate1" # steps/bootstrap_impl.py:136 0.003s
| ChannelID | Template | Chain Creation Policy Name | Anchors |
| com.acme.blockchain.jdoe.Channel1 | template1 | chainCreatePolicy1 | anchors1 |
And the user "dev0Org0" collects signatures for ConfigUpdateEnvelope "createChannelConfigUpdate1" from peer orgs # steps/bootstrap_impl.py:182 0.100s
| Organization |
| peerOrg0 |
| peerOrg1 |
And the user "dev0Org0" creates a ConfigUpdate Tx "configUpdateTx1" using cert alias "dev0Org0App1" using signed ConfigUpdateEnvelope "createChannelConfigUpdate1" # steps/bootstrap_impl.py:194 0.050s
And the user "dev0Org0" using cert alias "dev0Org0App1" broadcasts ConfigUpdate Tx "configUpdateTx1" to orderer "orderer0" to create channel "com.acme.blockchain.jdoe.Channel1" # steps/bootstrap_impl.py:209 2.005s
Assertion Failed: counter = 0, expected 1
Captured stdout:
Will copy gensisiBlock over at this point
ipAddress in getABStubForComposeService == 172.26.0.2
Returning GRPC for address: 172.26.0.2
xuzhao103389 (Fri, 24 Mar 2017 14:29:17 GMT):
Then user "binhn" should get a delivery from "orderer0" of "3" blocks with "20" messages within "10" seconds # None
context.failed = True
Failing scenarios:
features/bootstrap.feature:281 Bootstrap a development network with 4 peers (2 orgs) and 1 orderer (1 org), each having a single independent root of trust (No fabric-ca, just openssl) -- @1.1 Orderer Options
0 features passed, 1 failed, 2 skipped
0 scenarios passed, 1 failed, 24 skipped
23 steps passed, 1 failed, 249 skipped, 0 undefined
Took 0m20.194s
Makefile:138: recipe for target 'behave' failed
make: *** [behave] Error 1
kostas (Fri, 24 Mar 2017 15:05:42 GMT):
@sushilsingh94 @xuzhao103389 This is not the right channel for this. Post to #fabric and please don't paste your logs directly here (or there). Use gist.github.com or Pastebin and instead paste a link to your logs.
nhrishi (Mon, 27 Mar 2017 13:06:43 GMT):
Has joined the channel.
Ismailk (Mon, 27 Mar 2017 14:59:30 GMT):
Has joined the channel.
Ismailk (Mon, 27 Mar 2017 15:50:17 GMT):
Hi, where can I find more information on sBFT? (I was told in #fabric that this will be the future byzantine fault tolerant consensus protocol)
Ismailk (Mon, 27 Mar 2017 15:50:56 GMT):
Also, are there any recommendations on the batch size for batched PBFT?
Ismailk (Mon, 27 Mar 2017 15:51:34 GMT):
We want to do simple measurements: number of transactions (with increasing number of peers)
Ismailk (Mon, 27 Mar 2017 15:53:57 GMT):
the transactions would contain about the same information as vanilla bitcoin transactions
tuand (Mon, 27 Mar 2017 17:01:43 GMT):
@vukolic @hgabre ^^^
kostas (Mon, 27 Mar 2017 17:19:02 GMT):
@Ismailk: SBFT is PBFT with certain reasonable and simplifying assumptions -- for instance FIFO links. See here for more: https://jira.hyperledger.org/browse/FAB-378
No recommendations for the batch size for batched PBFT.
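For anyone tuning this experimentally, these are the 0.6-era batch-PBFT knobs that appear in the compose files pasted earlier in this channel (the batch size value here is purely illustrative; as noted, there is no recommended default):

```environment:
  - CORE_PEER_VALIDATOR_CONSENSUS_PLUGIN=pbft
  - CORE_PBFT_GENERAL_MODE=batch
  # total validating peers; must satisfy N >= 3f+1
  - CORE_PBFT_GENERAL_N=4
  # requests per batch -- illustrative value, tune for your workload
  - CORE_PBFT_GENERAL_BATCHSIZE=10```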
Ismailk (Mon, 27 Mar 2017 18:34:49 GMT):
@kostas thanks!
Ismailk (Mon, 27 Mar 2017 19:34:32 GMT):
Another short question: What was the largest number of peers/consenters you or someone has successfully tested with SBFT?
scottz (Mon, 27 Mar 2017 20:14:29 GMT):
[ ](https://chat.hyperledger.org/channel/fabric-consensus?msg=DFFSqeARaanweHWZ9) @Ismailk In v0.6 our group set up a network with 16 peers using PBFT, all in docker containers. But it was slow due to resource limitations on the laptop, which prevented us from doing much with it. Using a 10-peer network allowed us to run some traffic. If others tried using multiple hosts, one for each peer, I have no doubt it would function, but I have no confirmation of any attempts like that.
TimskiiTim (Tue, 28 Mar 2017 03:42:50 GMT):
Has joined the channel.
Ismailk (Tue, 28 Mar 2017 07:09:31 GMT):
Are there recent (1.0.0 alpha) docker images using the SBFT/batched pbft consensus by default?
Willson (Tue, 28 Mar 2017 07:24:52 GMT):
Hello guys, can someone answer my question?
There are two endorsers in my network. After I send a proposal to those two peers via the SDK, I receive two responses, but they have different payloads (i.e. their ReadSets and WriteSets differ), and I send both responses to the orderer. My question is: how does the orderer distinguish which proposal response is legal?
Ismailk (Tue, 28 Mar 2017 09:23:50 GMT):
Another consensus related question: What is the difference between orderers and validating peers? Both seem to use a consensus protocol
Ismailk (Tue, 28 Mar 2017 09:24:08 GMT):
If there is a difference: What do both roles consent on?
ineiti (Tue, 28 Mar 2017 09:24:17 GMT):
Has joined the channel.
ineiti (Tue, 28 Mar 2017 09:28:26 GMT):
Related question (I'm working with @ismailk on setting up 100s of peers with hyperledger): is `CORE_PEER_VALIDATOR_CONSENSUS_PLUGIN` still used? I see it in docker-yaml-files, but I don't find any reference in the `fabric`-code.
Vadim (Tue, 28 Mar 2017 09:41:15 GMT):
@Ismailk https://github.com/hyperledger/fabric/blob/master/docs/source/architecture.rst
Ismailk (Tue, 28 Mar 2017 09:55:56 GMT):
@Vadim Thanks. Skimmed through this yesterday but I'm still not sure what the different roles are. OK: Orderers consent on the order ... And validating peers? They "Do consensus" according to: https://hyperledger-fabric.readthedocs.io/en/latest/protocol-spec.html#multiple-validating-peers
Ismailk (Tue, 28 Mar 2017 09:56:01 GMT):
on what?
Vadim (Tue, 28 Mar 2017 09:56:18 GMT):
@Ismailk there are no validating peers in v1, they were only in v0.6
Ismailk (Tue, 28 Mar 2017 09:56:29 GMT):
OK. Thanks a lot!
ineiti (Tue, 28 Mar 2017 13:14:18 GMT):
is there a running example for sbft-consensus? I see it asks for a certificate if I enable it for an orderer (`Error when trying to connect: open sbft/testdata/cert1.pem: no such file or directory`).
kostas (Tue, 28 Mar 2017 13:29:52 GMT):
@ineiti: The `sbft` module hasn't been brought up to date with how the other implementations work. (Which is understandable given that our focus was shifted on Kafka.) I see that you and @Ismailk are working on setting up a network but are looking into PBFT/SBFT. This is too early for 1.0. (And I would advise against spending any resources on 0.6.)
kostas (Tue, 28 Mar 2017 13:31:22 GMT):
@Willson: The ordering service will route both of them to the committing peers w/o inspecting the _content_ of the proposals for validity — that's not its job. This validation will happen at the committing peer level.
ineiti (Tue, 28 Mar 2017 15:29:13 GMT):
@kostas - thanks for the tip. We'll look into it at a later time, then.
Willson (Tue, 28 Mar 2017 15:45:55 GMT):
@kostas thanks
dorrakhribi (Tue, 28 Mar 2017 19:40:49 GMT):
hello, I'm working on a project based on the hyperledger blockchain and our teacher asked us to find out whether there's a possibility to override the noops/pbft consensus
dorrakhribi (Tue, 28 Mar 2017 19:41:14 GMT):
i appreciate any help because i really need a response
kostas (Tue, 28 Mar 2017 19:57:27 GMT):
There is.
dorrakhribi (Tue, 28 Mar 2017 20:01:04 GMT):
@kostas is it possible or not ?
kostas (Tue, 28 Mar 2017 20:01:46 GMT):
If you're talking 0.6 it's possible in theory, as the amount of coupling between components is substantial. If you're talking 1.0 it's much much easier.
dorrakhribi (Tue, 28 Mar 2017 20:07:52 GMT):
can you please explain how the coupling between components is substantial ? thanks in advance. In fact, i'm developing an application with fabric 0.6
kostas (Tue, 28 Mar 2017 20:17:01 GMT):
Not sure how one can quantify the "substantial" claim. If you were to write your own consensus plugin with 0.6 you would quickly realize that you have to interact with a bunch of components across the codebase -- that is not ideal if you want to keep things modular. This is your starting cue if you want to dive deeper. (And you probably should, given that it's your school project.)
dorrakhribi (Tue, 28 Mar 2017 20:35:46 GMT):
alright. thank you @kostas for the answer
geminatea (Tue, 28 Mar 2017 21:56:30 GMT):
Hi, I'm fairly new to HL. I'm trying to understand where the VSCC code is actually run. The transaction flow doc (http://hyperledger-fabric.readthedocs.io/en/latest/txflow.html) specifies step 3 as a separate step from endorsement and ordering. Is this step run on any peer in the network?
vdods (Tue, 28 Mar 2017 22:18:07 GMT):
Has joined the channel.
geminatea (Tue, 28 Mar 2017 22:23:26 GMT):
Actually, now I'm really confused because it looks like the VSCC is part of step 5 as well.
vdods (Tue, 28 Mar 2017 22:24:23 GMT):
@here Yeah, @geminatea is right, it's very unclear when and on which peers VSCC is run. Also, VSCC is the implementation of the endorsement policy, correct?
kostas (Wed, 29 Mar 2017 00:08:36 GMT):
@geminatea @vdods VSCC does not have anything to do with ordering. Please post to #fabric-questions
Rymd (Thu, 30 Mar 2017 07:18:57 GMT):
Has joined the channel.
Ismailk (Thu, 30 Mar 2017 10:21:15 GMT):
@kostas: @ineiti tried running fabric (0.6) with pbft and it timed out when running with more than 28 peers (resources aren't the problem). Do you have any idea where we can force the peers to wait longer? Waiting for 1.0 isn't an option as we need to do these measurements this week...
Ismailk (Thu, 30 Mar 2017 10:22:18 GMT):
cc @Vadim
Vadim (Thu, 30 Mar 2017 10:38:14 GMT):
@Ismailk Kafka consensus does not work for you? From my point of view, making measurements in an outdated abandoned v0.6 does not make sense. You will have different measurements in v1 when the sbft consensus is released.
Ismailk (Thu, 30 Mar 2017 10:39:16 GMT):
We need something byzantine fault tolerant. My understanding is that kafka is only "crash" tolerant.
kostas (Thu, 30 Mar 2017 13:24:42 GMT):
@Ismailk Kafka is indeed CFT. @Vadim is correct. It's unfortunate that waiting for 1.0 is not an option, but I would advise against making measurements in an outdated codebase. Without closely inspecting the logs, I have no idea which timeout needs to be relaxed or setting needs to be adjusted.
Ismailk (Thu, 30 Mar 2017 14:15:49 GMT):
@kostas Thanks anyways!
Ismailk (Thu, 30 Mar 2017 14:16:29 GMT):
We see the error "Duplicate Handler Error" starting from around 20 nodes
Ismailk (Thu, 30 Mar 2017 14:17:34 GMT):
With regard to the timeout: Maybe @ineiti can say more? He is doing the measurements.
kostas (Thu, 30 Mar 2017 14:22:39 GMT):
IIRC, the "duplicate handler error" is nothing to be concerned about. (And not a consensus issue at any rate.)
xuzhao103389 (Thu, 30 Mar 2017 14:23:54 GMT):
what is the eventhub() for
Vadim (Thu, 30 Mar 2017 14:24:51 GMT):
@xuzhao103389 for events monitoring
aybekbuka (Fri, 31 Mar 2017 04:19:43 GMT):
Has joined the channel.
bh4rtp (Fri, 31 Mar 2017 09:39:08 GMT):
@Vadim does 1.0.0 release support pbft consensus?
nnao (Fri, 31 Mar 2017 18:58:53 GMT):
Has joined the channel.
forgeRW (Sat, 01 Apr 2017 02:46:38 GMT):
Has joined the channel.
berserkr (Sat, 01 Apr 2017 06:30:45 GMT):
right now, it is kafka based
berserkr (Sat, 01 Apr 2017 06:30:54 GMT):
pbft is planned at some later point
magg (Sat, 01 Apr 2017 08:33:47 GMT):
how can I enable kafka based consensus in a fabric v1.0 docker network? do i need to add more orderers?
kostas (Sat, 01 Apr 2017 16:18:18 GMT):
@magg: You need to bootstrap a network that uses the Kafka-based ordering service. Which means you need to encode this info in the genesis block that you'll use to bootstrap the network. Which means you'll need to create a profile in `configtx.yaml` that is similar to the sample Kafka one I have in there, and then use the `configtxgen` tool with the appropriate flags so as to choose your own profile and have it output a genesis block (the tool can also output channel creation requests).
kostas (Sat, 01 Apr 2017 16:19:40 GMT):
(And to answer some obvious questions: Is this complicated? Yes it is. Is there room for improvement? Yes there is. But this is unfortunately the price of working with something that's in alpha.)
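The steps above, sketched as shell commands. The profile name and paths here are illustrative; check your alpha's `configtx.yaml` for the actual sample Kafka profile name:

```# Point configtxgen at the directory containing your configtx.yaml,
# then emit a genesis block for your Kafka-backed profile.
export FABRIC_CFG_PATH=$PWD
configtxgen -profile MyKafkaProfile -outputBlock genesis.block

# The same tool can also emit a channel creation request:
configtxgen -profile MyKafkaProfile -outputCreateChannelTx channel.tx -channelID mychannel```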
bh4rtp (Sun, 02 Apr 2017 11:31:54 GMT):
@kostas hi, to use kafka, can i define the environment variables but not create profile in configtx.yaml, such as CONFIGTX_ORDERER_ORDERERTYPE=kafka ORDERER_KAFKA_BROKERS=[kafka0:9092]?
choas (Sun, 02 Apr 2017 18:11:10 GMT):
Has joined the channel.
rjkuro (Mon, 03 Apr 2017 05:11:03 GMT):
Hi, I know Kafka is crash fault tolerant, but I am not sure if orderers are. I have read the kafka orderer design doc (https://goo.gl/Ir85ds), but it's not clear about the case when orderers crash. Even if a subset of the orderers crashes and stops working, do the remaining orderers keep running and complete the ordering? If so, how many orderer crashes can we tolerate? Is there any recommendation for the number of Kafka orderers? Or can we simply say that N orderers can tolerate N-1 orderer crashes?
yacovm (Mon, 03 Apr 2017 12:26:14 GMT):
@kostas
https://github.com/hyperledger/fabric/blob/master/orderer/orderer.yaml#L17
This is RAM by default, so if someone runs a docker group of the ordering service with a couple of peers, and the docker containers are shut down for some reason, the ledger of the peers is persisted but the ledger of the orderer isn't.
maybe it's worth changing it to `file`?
yacovm (Mon, 03 Apr 2017 12:41:37 GMT):
In any case it doesn't make sense to have a non-persistent ledger in the OS and a persistent one in the peers. It creates discrepancies
kostas (Mon, 03 Apr 2017 14:59:21 GMT):
@bh4rtp: Have you tried this? Did it fail?
kostas (Mon, 03 Apr 2017 15:03:26 GMT):
@rjkuro:
> Even if subset of the orderers crash and stop working, do the remaining orderers keep running and complete the ordering?
Yes.
> If so, how many of orderers we can tolerate?
N-1 ordering service nodes may crash and you can still have a functioning network. (Provided that the SDK or peers reach out to the OSN that's still up.)
> Is there any recommendation for the number of Kafka orderers?
If by Kafka orderers you refer to Kafka brokers, there is no recommendation yet. It's going to be a function of your application's characteristics: size of transactions, tolerance for latency, etc. In a few weeks from now, I should have a document out there with some sensible, minimum defaults.
kostas (Mon, 03 Apr 2017 15:05:28 GMT):
@yacovm: Am aware of the discrepancy, but thanks for the heads up. This, along with the `file` ledger defaulting to `/tmp` for storage will be taken care of. (Focusing on the consortiums config changes now.)
rjkuro (Mon, 03 Apr 2017 15:36:53 GMT):
@kostas For the 3rd part, I actually wanted to know how users can decide on the number of orderers (OSNs). But anyway, I see it's somewhat left to users or applications. Another related question: orderers can run on different nodes than the Kafka brokers, and the number of orderers and the number of Kafka brokers are configured independently. Is that understanding correct?
kostas (Mon, 03 Apr 2017 15:37:29 GMT):
@rjkuro:
> how users can decide on the number of orderers (OSNs)
By users, you refer to those who deploy the network?
kostas (Mon, 03 Apr 2017 15:38:09 GMT):
> orderers can run on different node than Kafka brokers, and the number of orderers and the number of Kafka brokers are configured independently. Is that understanding correct?
Correct.
rjkuro (Mon, 03 Apr 2017 15:39:09 GMT):
yes, whoever administers all fabric nodes and the network.
kostas (Mon, 03 Apr 2017 15:44:44 GMT):
Then yes, it's up to them to decide.
rjkuro (Mon, 03 Apr 2017 15:45:38 GMT):
ok, I see. Thank you very much.
bh4rtp (Mon, 03 Apr 2017 16:20:23 GMT):
@kostas yes, I tried those two orderer environment variables, it is ok. But how do I confirm kafka is working?
kostas (Mon, 03 Apr 2017 16:26:46 GMT):
@bh4rtp: Easiest way to double-check for now is to bring up the orderer container with the logging level set to DEBUG (look into `orderer.yaml` and keep in mind that this can also be controlled via an ENV var). Then, when transactions come in, you see whether the output comes from the `solo` or the `kafka` package. (We'll make this easier soon.)
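For compose-based setups, the usual pattern is an environment override of the `orderer.yaml` setting. The variable name below follows the ORDERER_&lt;section&gt;_&lt;key&gt; convention; double-check it against your alpha's `orderer.yaml`:

```orderer:
  image: hyperledger/fabric-orderer
  environment:
    - ORDERER_GENERAL_LOGLEVEL=debug```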
bh4rtp (Mon, 03 Apr 2017 16:28:26 GMT):
@kostas ok. let me have a try later, now i am compiling the latest fabric.
bh4rtp (Mon, 03 Apr 2017 16:44:26 GMT):
@kostas it turns out to be solo, kafka is in fact not working. Those ENV vars have no effect.
kostas (Mon, 03 Apr 2017 16:46:36 GMT):
@bh4rtp: So now you know that you do in fact need a profile.
bh4rtp (Mon, 03 Apr 2017 16:50:10 GMT):
@kostas yes. thanks.
nnao (Mon, 03 Apr 2017 16:54:35 GMT):
I tried the example/e2e_cli test with a kafka-based orderer, but I get an error.
...
Channel name : mychannel
...
2017-03-31 20:55:15.122 UTC [msp] Sign -> DEBU 008 Sign: digest: 742857C36EDC3C6A5474F576F87167E7DE15D3CC031BBE954A876EE6A9FC249B
Got status &{NOT_FOUND}
Error receiving: EOF
Error: EOF
Usage:
peer channel create [flags]
...
I added kafka and zookeeper in docker-compose.yaml
Is there any other place to change the setting?
kostas (Mon, 03 Apr 2017 18:35:57 GMT):
@nnao: I have barely looked into the E2E CLI test so I don't have a quick answer for you there. Look for the `bootstrap.feature` though, and see how it allows you to play around with Kafka in an end-to-end manner.
divyank (Mon, 03 Apr 2017 18:55:09 GMT):
What does the 'orderer organization' in configtx.yaml mean?
nnao (Mon, 03 Apr 2017 19:26:21 GMT):
[ ](https://chat.hyperledger.org/channel/fabric-consensus?msg=BEMR7h6dweZBLWF4e) @kostas Thank you for the reply.
I looked over 'bootstrap.feature'. The kafka parts are currently commented out. Bootstrap of the Kafka orderer worked fine, but the error shows that the channel couldn't be created when e2e_cli executed 'peer channel create'. Do you have any example of a configtx.yaml that executes 'peer channel create' successfully with the kafka orderer?
kostas (Mon, 03 Apr 2017 19:27:19 GMT):
The sample Kafka profile in `configtx.yaml` should work.
nnao (Tue, 04 Apr 2017 00:48:47 GMT):
@kostas I will try. thanks.
bh4rtp (Tue, 04 Apr 2017 08:56:47 GMT):
@kostas is configtx.yaml used by configtxgen for bootstrap.feature encapsulated in genesis block and channel.tx files?
kostas (Tue, 04 Apr 2017 13:27:51 GMT):
It is not. `bootstrap.feature` uses its own code to generate the required blocks and config TXs.
vjuge (Tue, 04 Apr 2017 13:57:18 GMT):
Has joined the channel.
snakejerusalem (Wed, 05 Apr 2017 14:38:45 GMT):
Greetings everyone. I am João Sousa, a PhD student from the University of Lisbon. I've been looking at the code of the ordering service, and I've noticed that it is generating two signatures for each block.
snakejerusalem (Wed, 05 Apr 2017 14:39:31 GMT):
one is a signature for the block itself, the other one is for something that is referred to in the code as "last config"
snakejerusalem (Wed, 05 Apr 2017 14:40:35 GMT):
Why is the second signature necessary?
rahulhegde (Wed, 05 Apr 2017 14:43:23 GMT):
CC: @Ratnakar
I too tried the E2E CLI with orderer (count 1), kafka (count 2), zookeeper (count 1) in the docker-compose file as provided by Ratnakar. This fails during channel creation - same error as reported by @nnao
```
2017-04-05 14:33:42.175 UTC [msp] Sign -> DEBU 009 Sign: digest: AFB146C7697318B8F59E6F3BBFA78BE31A591726F792242DAE48C353C88D7D41
Got status &{NOT_FOUND}
Error receiving: EOF
Error: EOF
Usage:
peer channel create [flags]
```
Modified the configtx.yaml
```
Profiles:
    TwoOrgs:
        Orderer:
            <<: *OrdererDefaults
            OrdererType: kafka
            Organizations:
                - *OrdererOrg
        Application:
            <<: *ApplicationDefaults
            Organizations:
                - *Org0
                - *Org1
```
```
Orderer: &OrdererDefaults
    ...
    Kafka:
        # Brokers: A list of Kafka brokers to which the orderer connects
        # NOTE: Use IP:port notation
        Brokers:
            - kafka0:9092
            - kafka1:9092
```
Can you tell us what `bootstrap.feature` is and how to use it?
Ratnakar (Wed, 05 Apr 2017 14:43:23 GMT):
Has joined the channel.
kostas (Wed, 05 Apr 2017 17:31:11 GMT):
@snakejerusalem I know what last config is, but I want a link to the specific file/line in Github before we go further.
kostas (Wed, 05 Apr 2017 17:32:11 GMT):
@rahulhegde: `bootstrap.feature` is not orderer-specific. For more details, I suggest #fabric
rahulhegde (Wed, 05 Apr 2017 17:33:43 GMT):
[ ](https://chat.hyperledger.org/channel/fabric-consensus?msg=6tqqqXT63mrQTYBRR) @kostas If you are pointing to creating an orderer-genesis block using configtxgen, then the above bootstrapping block is created using that utility. Did you mean the same?
snakejerusalem (Wed, 05 Apr 2017 17:35:26 GMT):
@kostas it's in `orderer/multichain/chainsupport.go`, method `addLastConfigSignature`, line 235
snakejerusalem (Wed, 05 Apr 2017 17:35:59 GMT):
which in turn is used in the WriteBlock method in the same file
snakejerusalem (Wed, 05 Apr 2017 17:36:29 GMT):
writeblock generates the normal block signature and then generates the lastconfig sig
snakejerusalem (Wed, 05 Apr 2017 17:37:46 GMT):
https://github.com/hyperledger/fabric/blob/master/orderer/multichain/chainsupport.go#L222
snakejerusalem (Wed, 05 Apr 2017 17:37:59 GMT):
https://github.com/hyperledger/fabric/blob/master/orderer/multichain/chainsupport.go#L245
snakejerusalem (Wed, 05 Apr 2017 17:38:40 GMT):
(line 222, not 235, my mistake)
kostas (Wed, 05 Apr 2017 17:48:29 GMT):
@snakejerusalem First signature is over the block's header (which includes a hash of the block's `Data` field). The last config info is part of the block's `Metadata` field, which is not referenced in the header. Therefore, it has to be signed separately.
snakejerusalem (Wed, 05 Apr 2017 17:53:16 GMT):
yeah, but the plaintext supplied to the last-config signature also includes the serialized block data, just like the block signature does. It also includes the same signature header
snakejerusalem (Wed, 05 Apr 2017 17:53:44 GMT):
the only difference is that last config has one more byte array concatenated to the plaintext
snakejerusalem (Wed, 05 Apr 2017 18:03:04 GMT):
I am sorry, something came up and I need to disconnect. I'll check your response later.
kostas (Wed, 05 Apr 2017 18:14:42 GMT):
Correct.
kostas (Wed, 05 Apr 2017 18:59:48 GMT):
If you're asking: then why don't you just do the second signature instead of the first one? The answer is that --as far as the orderers are concerned-- the first signature is technically all you need for the protocol. Gossip wants a reference to last config, and this reference --if not signed-- is completely useless. So that's why we added the second signature and tucked it into the metadata field. It's an extra.
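To make the split concrete, here is a self-contained Go sketch (simplified stand-in structs, not the actual fabric protos) showing why a signature over the header covers the block data but cannot cover the metadata:

```go
package main

import (
	"crypto/sha256"
	"encoding/binary"
	"fmt"
)

// Simplified stand-ins for Fabric's common.Block structures; the field
// names are illustrative, not the real protobuf definitions.
type BlockHeader struct {
	Number       uint64
	PreviousHash []byte
	DataHash     []byte // hash of the Data field only; Metadata is NOT referenced
}

type Block struct {
	Header   BlockHeader
	Data     [][]byte // serialized transactions
	Metadata [][]byte // e.g. slot 0: block signature, slot 1: last-config value
}

// headerBytes deterministically serializes the header, which is what the
// block signature actually covers.
func headerBytes(h BlockHeader) []byte {
	buf := make([]byte, 8)
	binary.BigEndian.PutUint64(buf, h.Number)
	buf = append(buf, h.PreviousHash...)
	return append(buf, h.DataHash...)
}

func main() {
	b := Block{Data: [][]byte{[]byte("tx1"), []byte("tx2")}}
	sum := sha256.New()
	for _, tx := range b.Data {
		sum.Write(tx)
	}
	b.Header.DataHash = sum.Sum(nil)

	// Signing headerBytes(b.Header) covers Data (via DataHash) but not
	// Metadata: mutating Metadata leaves the signed bytes unchanged, hence
	// the separate signature over the last-config metadata entry.
	before := fmt.Sprintf("%x", headerBytes(b.Header))
	b.Metadata = [][]byte{[]byte("tampered")}
	after := fmt.Sprintf("%x", headerBytes(b.Header))
	fmt.Println(before == after) // prints true: metadata change is invisible to the block signature
}
```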
gbolo (Wed, 05 Apr 2017 19:05:32 GMT):
Has joined the channel.
gbolo (Wed, 05 Apr 2017 19:09:08 GMT):
hi @kostas I am also getting this issue - Got status &{NOT_FOUND}
gbolo (Wed, 05 Apr 2017 19:09:11 GMT):
when using kafka
gbolo (Wed, 05 Apr 2017 19:09:26 GMT):
does 1.0-alpha properly work with kafka?
kostas (Wed, 05 Apr 2017 19:10:16 GMT):
@gbolo: When are you getting this response exactly?
kostas (Wed, 05 Apr 2017 19:11:04 GMT):
Kafka works fine, just a matter of proper settings and our lack of good documentation. We really need to fix the latter.
gbolo (Wed, 05 Apr 2017 19:51:47 GMT):
hi @kostas, sorry I was dragged into a meeting
gbolo (Wed, 05 Apr 2017 19:52:25 GMT):
i get this when i attempt to create a channel
gbolo (Wed, 05 Apr 2017 19:53:07 GMT):
```
2017-04-05 15:03:50.582 EDT [logging] InitFromViper -> DEBU 001 Setting default logging level to DEBUG for command 'channel'
2017-04-05 15:03:50.582 EDT [msp] GetLocalMSP -> DEBU 002 Returning existing local MSP
2017-04-05 15:03:50.582 EDT [msp] GetDefaultSigningIdentity -> DEBU 003 Obtaining default signing identity
2017-04-05 15:03:52.620 EDT [msp] GetLocalMSP -> DEBU 004 Returning existing local MSP
2017-04-05 15:03:52.620 EDT [msp] GetDefaultSigningIdentity -> DEBU 005 Obtaining default signing identity
2017-04-05 15:03:52.620 EDT [msp] GetLocalMSP -> DEBU 006 Returning existing local MSP
2017-04-05 15:03:52.620 EDT [msp] GetDefaultSigningIdentity -> DEBU 007 Obtaining default signing identity
2017-04-05 15:03:52.620 EDT [msp] Sign -> DEBU 008 Sign: plaintext: 0AE0070A1108021A060898FF94C70522...00120D1A0B08FFFFFFFFFFFFFFFFFF01
2017-04-05 15:03:52.620 EDT [msp] Sign -> DEBU 009 Sign: digest: CC5894C3BC3431B606D722D969B7D16E1683FB2A7126D869635009D35EBF9ADC
Got status &{NOT_FOUND}
```
gbolo (Wed, 05 Apr 2017 19:54:19 GMT):
on the orderer logs i get:
```
Apr 05 15:03:19 ymr-sbvme-ord1 orderer[29977]: 2017-04-05 15:03:19.448 EDT [orderer/main] main -> INFO 001 Starting orderer with TLS enabled
Apr 05 15:03:22 ymr-sbvme-ord1 orderer[29977]: 2017-04-05 15:03:22.493 EDT [orderer/multichain] NewManagerImpl -> INFO 002 Starting with system channel: testchainid and orderer type kafka
Apr 05 15:03:22 ymr-sbvme-ord1 orderer[29977]: 2017-04-05 15:03:22.500 EDT [orderer/main] NewServer -> INFO 003 Starting orderer
Apr 05 15:03:22 ymr-sbvme-ord1 orderer[29977]: 2017-04-05 15:03:22.501 EDT [orderer/main] main -> INFO 004 Beginning to serve requests
Apr 05 15:03:22 ymr-sbvme-ord1 orderer[29977]: 2017-04-05 15:03:22.502 EDT [msp] Validate -> INFO 005 MSP OrdererMSP validating identity
Apr 05 15:03:25 ymr-sbvme-ord1 orderer[29977]: 2017-04-05 15:03:25.513 EDT [orderer/multichain] newChain -> INFO 006 Created and starting new chain consortium
Apr 05 15:03:50 ymr-sbvme-ord1 orderer[29977]: 2017-04-05 15:03:50.621 EDT [msp] Validate -> INFO 007 MSP OrdererMSP validating identity
Apr 05 15:03:50 ymr-sbvme-ord1 orderer[29977]: 2017-04-05 15:03:50.633 EDT [msp] Validate -> INFO 008 MSP OrdererMSP validating identity
Apr 05 15:03:53 ymr-sbvme-ord1 orderer[29977]: 2017-04-05 15:03:53.650 EDT [orderer/multichain] newChain -> INFO 009 Created and starting new chain gbolo
```
gbolo (Wed, 05 Apr 2017 19:54:58 GMT):
looks like on the orderer it's fine? i didn't have debug enabled on the orderer
snakejerusalem (Wed, 05 Apr 2017 21:19:58 GMT):
@kostas ok, thank you!
snakejerusalem (Wed, 05 Apr 2017 21:22:05 GMT):
one other thing: what is this last config supposed to be? Is it supposed to be the set of ordering nodes?
snakejerusalem (Wed, 05 Apr 2017 21:23:18 GMT):
i.e., the ID of the membership?
scottz (Wed, 05 Apr 2017 22:00:16 GMT):
In the configtx.yaml, in an organization, is it true that the Name and the ID must be the same? If so, then why?
scottz (Wed, 05 Apr 2017 22:00:19 GMT):
```
Profiles:
    # SampleSingleMSPSolo defines a configuration which contains a single MSP
    # definition (the MSP sampleconfig).
    testOrg:
        Orderer:
            <<: *OrdererDefaults
            Organizations:
                - *OrdererOrg1
        Application:
            <<: *ApplicationDefaults
            Organizations:
                - *PeerOrg1
                - *PeerOrg2

Organizations:
    - &PeerOrg1
        Name: PeerOrg1
        ID: Peer1MSP
```
scottz (Wed, 05 Apr 2017 22:02:46 GMT):
(In this example ^^^ , the Name and ID of PeerOrg1 are not the same. This is taken from fabric/common/configtx/tool/configtx.yaml.)
yacovm (Wed, 05 Apr 2017 22:11:14 GMT):
This JIRA item explains what @scottz wants to say, I believe
https://jira.hyperledger.org/browse/FAB-2499
scottz (Wed, 05 Apr 2017 23:02:38 GMT):
yes, and do we agree we should combine the two config parameters into one, or should we expand the API to provide an option to select either the name or the ID of the organization? (Or, if the capability already exists, the gossip folks should be informed so they can use it correctly, i.e. in gossip/service/gossip_service.go, and we can drop this "requirement" to eliminate one of these parameters.)
scottz (Wed, 05 Apr 2017 23:03:56 GMT):
For an operator/user, it is ridiculous to be told to "set two config variables the same, or else it won't work". And by the way, nowhere is this written down (not even in the comments in configtx.yaml)
AmberZhang (Thu, 06 Apr 2017 07:15:05 GMT):
Has joined the channel.
ameen (Thu, 06 Apr 2017 09:25:05 GMT):
Has joined the channel.
ranjan008 (Thu, 06 Apr 2017 09:28:48 GMT):
Has joined the channel.
lizhih (Thu, 06 Apr 2017 09:47:04 GMT):
Has joined the channel.
DannyWong (Thu, 06 Apr 2017 10:52:26 GMT):
http://hyperledger-fabric.readthedocs.io/en/latest/arch-deep-dive.html?highlight=Anchor
DannyWong (Thu, 06 Apr 2017 10:52:55 GMT):
so right now, I don't need to have my client (SDK) send endorsement requests manually to every endorser???
Vadim (Thu, 06 Apr 2017 10:53:37 GMT):
@DannyWong the node-sdk client sends endorsements to all peers within the channel, you don't need to handle it yourself
Vadim (Thu, 06 Apr 2017 10:54:25 GMT):
but you need to add those peers manually upon startup
DannyWong (Thu, 06 Apr 2017 10:54:47 GMT):
add those peers during startup of what? SDK?
DannyWong (Thu, 06 Apr 2017 10:55:12 GMT):
the app embedding the SDKi mean
DannyWong (Thu, 06 Apr 2017 10:55:43 GMT):
I understand I need to config peer to join the channel manually
Vadim (Thu, 06 Apr 2017 10:56:55 GMT):
sdk
DannyWong (Thu, 06 Apr 2017 10:57:05 GMT):
ic..
DannyWong (Thu, 06 Apr 2017 10:58:31 GMT):
um... do you think the current design is quite weird?
Vadim (Thu, 06 Apr 2017 10:59:48 GMT):
why?
DannyWong (Thu, 06 Apr 2017 11:00:02 GMT):
it relies so much on client side (register peers beforehand, checking whether endorsement policy is fulfilled)
DannyWong (Thu, 06 Apr 2017 11:00:28 GMT):
When we deploy the chaincode, we specify the endorsement policy as well
Vadim (Thu, 06 Apr 2017 11:00:48 GMT):
well in fabric, the endorsing peers set is a subset of all peers, so the client determines which peers need to endorse
DannyWong (Thu, 06 Apr 2017 11:00:49 GMT):
I expect the client (say in OrgA) just need to talk with the peer in OrgA
DannyWong (Thu, 06 Apr 2017 11:01:24 GMT):
then it will forward the endorsement request for the client, collect the responses, and just respond to the app client with a green/red signal
DannyWong (Thu, 06 Apr 2017 11:01:38 GMT):
yes... but we specify that in the endorsement policy
DannyWong (Thu, 06 Apr 2017 11:01:46 GMT):
during instantiation?
Vadim (Thu, 06 Apr 2017 11:01:57 GMT):
i.e. for example, if my app is a bank transfer app, and I make a transfer from bank A to bank B, then I probably need to ask the bank A peer and bank B peer to endorse, but not bank C peer which is not the part of this transaction
DannyWong (Thu, 06 Apr 2017 11:04:45 GMT):
agreed that only bank A and bank B (and not bank C), but the endorsement policy should have already stated that this chaincode requires Bank A and Bank B only...
DannyWong (Thu, 06 Apr 2017 11:05:10 GMT):
why is a client required to orchestrate this itself... which is not secure...
Vadim (Thu, 06 Apr 2017 11:05:17 GMT):
and what if I want to transfer from bank A to bank C?
Vadim (Thu, 06 Apr 2017 11:05:35 GMT):
another chaincode and policy?
DannyWong (Thu, 06 Apr 2017 11:05:38 GMT):
the same chaincode logic should just be instantiated on a bank A and bank C channel?
DannyWong (Thu, 06 Apr 2017 11:05:54 GMT):
install the same chaincode and instantiate it on another channel
DannyWong (Thu, 06 Apr 2017 11:05:59 GMT):
as we need data isolation as well
Vadim (Thu, 06 Apr 2017 11:06:33 GMT):
so then we have a channel for every possible combination of banks?
Vadim (Thu, 06 Apr 2017 11:06:47 GMT):
I'm not saying this is not the case, just seems a lot
Vadim (Thu, 06 Apr 2017 11:07:03 GMT):
and in this case the client also needs to make sure it is allowed to join that channel
Vadim (Thu, 06 Apr 2017 11:07:36 GMT):
so instead of endorsing peer management problem, you have channel management problem
DannyWong (Thu, 06 Apr 2017 11:11:09 GMT):
u are right
DannyWong (Thu, 06 Apr 2017 11:11:40 GMT):
but i'm still feeling a bit puzzled about whether it can be optimized in future :P
snakejerusalem (Thu, 06 Apr 2017 13:00:01 GMT):
Greetings. In the context of a block's metadata, what does the LAST_CONFIG index stand for? https://github.com/hyperledger/fabric/blob/master/orderer/multichain/chainsupport.go#L237
snakejerusalem (Thu, 06 Apr 2017 13:00:30 GMT):
Is it the membership Id of the processes associated with the ordering service?
kostas (Thu, 06 Apr 2017 13:45:27 GMT):
@snakejerusalem No need for the double post, saw the original question last night. The `LAST_CONFIG` *index* is simply a constant (=1) that dictates the slot in the metadata array where the last config value will be stored. The last config *value* carries the sequence number of the most recent configuration.
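A minimal Go sketch of that layout (the index values mirror the 1.0-era `BlockMetadataIndex` enum, SIGNATURES=0 and LAST_CONFIG=1; the raw uint64 encoding here is illustrative, the real value is wrapped in a signed `Metadata` proto):

```go
package main

import (
	"encoding/binary"
	"fmt"
)

// Index constants mirroring Fabric's common.BlockMetadataIndex enum values;
// treat the exact byte encoding below as illustrative only.
const (
	BlockMetadataIndex_SIGNATURES  = 0
	BlockMetadataIndex_LAST_CONFIG = 1
)

// setLastConfig stores the sequence number of the most recent CONFIG block
// in the LAST_CONFIG slot of the block metadata array.
func setLastConfig(metadata [][]byte, seq uint64) {
	buf := make([]byte, 8)
	binary.BigEndian.PutUint64(buf, seq)
	metadata[BlockMetadataIndex_LAST_CONFIG] = buf
}

// lastConfig reads the stored sequence number back.
func lastConfig(metadata [][]byte) uint64 {
	return binary.BigEndian.Uint64(metadata[BlockMetadataIndex_LAST_CONFIG])
}

func main() {
	metadata := make([][]byte, 2)
	metadata[BlockMetadataIndex_SIGNATURES] = []byte("block signature(s)")
	setLastConfig(metadata, 7) // most recent CONFIG block was block 7
	fmt.Println(lastConfig(metadata)) // prints 7
}
```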
kostas (Thu, 06 Apr 2017 13:46:33 GMT):
@gbolo: It does seem like things are OK on the orderer side. You are still not providing me with details about your setup though. How do you bring this network up?
snakejerusalem (Thu, 06 Apr 2017 13:48:49 GMT):
@kostas Sorry about the double post. I did infer that the index was a constant for the slot in the metadata. But I need to know what the most recent configuration is.
kostas (Thu, 06 Apr 2017 13:49:15 GMT):
The most recent CONFIG block on the channel.
snakejerusalem (Thu, 06 Apr 2017 13:50:24 GMT):
the sample_clients folder does have a client that seems to be dedicated to sending configs to the blockchain. From what I could tell, it tells the ordering service to create a new channel.
kostas (Thu, 06 Apr 2017 13:52:17 GMT):
The sample clients are just that, samples to showcase _some_ of the things that you can do with the ordering service. Look into `configtxgen`.
kostas (Thu, 06 Apr 2017 13:52:56 GMT):
In short, channel members can send a CONFIG_UPDATE envelope to the channel to modify the existing configuration.
kostas (Thu, 06 Apr 2017 13:53:43 GMT):
Whenever the configuration changes, its sequence number is incremented by one. Look into `Config` message type (proto).
snakejerusalem (Thu, 06 Apr 2017 13:54:45 GMT):
ok, got it. Now I would like to ask something that I reckon is probably an entry-level question in the context of hyperledger, and I apologise in advance for it, but... what exactly are channels supposed to be?
kostas (Thu, 06 Apr 2017 13:58:02 GMT):
The closest analogy I can think of is private channels in Slack. There may be 1000 people signed up for a Slack team, but you set up a private channel with 4 other folks because you want to communicate privately.
kostas (Thu, 06 Apr 2017 13:58:56 GMT):
Essentially a way to establish a blockchain network with only a select number of participants and the ordering service.
kostas (Thu, 06 Apr 2017 13:59:36 GMT):
I know our documentation is still a mess and WIP, but did you have a look here https://hyperledger-fabric.readthedocs.io?
snakejerusalem (Thu, 06 Apr 2017 14:02:09 GMT):
yes I did, but I spent considerably more time looking at how to compile the code.
snakejerusalem (Thu, 06 Apr 2017 14:02:38 GMT):
and I am also heavily focused on the ordering service
snakejerusalem (Thu, 06 Apr 2017 14:02:52 GMT):
so I actually got a little bit of tunnel vision once I managed to compile.
snakejerusalem (Thu, 06 Apr 2017 14:08:17 GMT):
btw, speaking of the documentation, I do have a few suggestions to add to it
snakejerusalem (Thu, 06 Apr 2017 14:08:37 GMT):
in particular, concerning the compilation of fabric
kostas (Thu, 06 Apr 2017 14:12:30 GMT):
#fabric may be a better venue for this
snakejerusalem (Thu, 06 Apr 2017 14:12:45 GMT):
ok
gbolo (Thu, 06 Apr 2017 18:14:55 GMT):
hi @kostas
gbolo (Thu, 06 Apr 2017 18:15:25 GMT):
so despite getting that error, I can confirm that the channels actually get created and work
joekozhaya (Thu, 06 Apr 2017 18:22:40 GMT):
Has joined the channel.
daijianw (Fri, 07 Apr 2017 02:39:03 GMT):
Has joined the channel.
hanhzf (Mon, 10 Apr 2017 07:06:03 GMT):
Has joined the channel.
hanhzf (Mon, 10 Apr 2017 07:26:09 GMT):
@gbolo I am also getting this issue - Got status &{NOT_FOUND}. If the "peer channel create" succeeds, it will create a mychannel.block file in the same folder where you run this command.
hanhzf (Mon, 10 Apr 2017 07:27:52 GMT):
But when the command failed with "Got status &{NOT_FOUND}", this block file was not generated, thus I cannot go ahead. @gbolo I see you said "so despite getting that error, I can confirm that the channels actually get created and work", which makes me confused.
ksung (Mon, 10 Apr 2017 11:40:34 GMT):
Has joined the channel.
zian (Mon, 10 Apr 2017 13:58:02 GMT):
Has joined the channel.
gbolo (Mon, 10 Apr 2017 15:12:01 GMT):
hi @hanhzf
gbolo (Mon, 10 Apr 2017 15:13:31 GMT):
yes you are right.
gbolo (Mon, 10 Apr 2017 15:13:38 GMT):
we need a fix for this
gbolo (Mon, 10 Apr 2017 15:15:20 GMT):
I am submitting a block that does not contain the kafka info (for channels). However, the genesis block for the orderer does contain the kafka info and I can confirm that it still uses kafka.
gbolo (Mon, 10 Apr 2017 15:16:03 GMT):
@kostas any update on how we can avoid this error? it seems like anyone who uses kafka is getting this error
jeroiraz (Mon, 10 Apr 2017 18:40:02 GMT):
Has joined the channel.
nage (Mon, 10 Apr 2017 20:41:47 GMT):
Has joined the channel.
nage (Mon, 10 Apr 2017 20:42:16 GMT):
Has left the channel.
hanhzf (Tue, 11 Apr 2017 01:20:05 GMT):
@gbolo there is a defect opened in jira to track this issue: https://jira.hyperledger.org/browse/FAB-2982
no update on it yet
gbolo (Tue, 11 Apr 2017 14:06:41 GMT):
@hanhzf thanks for update
Lin-YiTang (Tue, 11 Apr 2017 21:19:06 GMT):
Has joined the channel.
mychewcents (Wed, 12 Apr 2017 05:53:29 GMT):
Has left the channel.
rahulhegde (Wed, 12 Apr 2017 13:32:59 GMT):
GM - I created a channel with an underscore in its name and it gave me an invalid character error in the orderer logs. Can you give the list of valid characters for a channel name?
kostas (Wed, 12 Apr 2017 14:17:35 GMT):
@rahulhegde: https://github.com/hyperledger/fabric/blob/master/common/configtx/manager.go#L56
kostas (Wed, 12 Apr 2017 14:18:59 GMT):
Folks above reporting issues with the E2E flow + Kafka: noted, I hope I get on it before the end of next week if nobody else beats me to it. Will post updates here.
rahulhegde (Wed, 12 Apr 2017 14:22:07 GMT):
[ ](https://chat.hyperledger.org/channel/fabric-consensus?msg=M7T2RwnGAZ2fCQX9t) @kostas can you please explain the last part?
Are the strings "." or ".." not allowed?
kostas (Wed, 12 Apr 2017 14:35:40 GMT):
@rahulhegde: If you attempt to create channel "." (w/o quotes) or "..", the request will be rejected.
kostas (Wed, 12 Apr 2017 14:36:07 GMT):
I had to write this as an extra rule since it overrides the first rule in that list.
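The linked rules can be sketched as follows (a hypothetical re-implementation for illustration; the exact allowed character set and length limit should be checked against `manager.go` for your commit):

```go
package main

import (
	"fmt"
	"regexp"
)

// maxLength and allowedChars approximate the 1.0-era channel-ID rules
// (alphanumerics, '.' and '-'; underscores are rejected).
const maxLength = 249

var allowedChars = regexp.MustCompile("^[a-zA-Z0-9.-]+$")

// validateChannelID mimics the checks described above: "." and ".." are
// rejected outright as an extra rule, then length and character checks apply.
func validateChannelID(id string) error {
	if id == "." || id == ".." {
		return fmt.Errorf("channel ID cannot be '.' or '..'")
	}
	if len(id) == 0 || len(id) > maxLength {
		return fmt.Errorf("channel ID must be between 1 and %d characters", maxLength)
	}
	if !allowedChars.MatchString(id) {
		return fmt.Errorf("channel ID %q contains illegal characters", id)
	}
	return nil
}

func main() {
	fmt.Println(validateChannelID("mychannel"))  // accepted
	fmt.Println(validateChannelID("my_channel")) // rejected: underscore
	fmt.Println(validateChannelID(".."))         // rejected: extra rule
}
```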
pmcosta1 (Wed, 12 Apr 2017 16:36:39 GMT):
Has joined the channel.
msoumeit (Wed, 12 Apr 2017 17:22:17 GMT):
Has joined the channel.
scottz (Wed, 12 Apr 2017 21:34:14 GMT):
@kostas Is anyone able to run the broadcast_timestamp and the deliver_stdout clients on recent commits? We haven't tried it since before all the security code was implemented; now we are seeing an error, for example, when we try to start the deliver client and it tries to connect to the orderer: "Received unauthorized deliver request for channel xyz". Might I be correct if I guessed that it is because there is no identity cert for the deliver client which is masquerading as a peer?
kostas (Wed, 12 Apr 2017 21:38:02 GMT):
@scottz That would be correct. These sample clients do not do any message signing and will work only with a profile such as SampleInsecureSolo (or Kafka).
Jay (Fri, 14 Apr 2017 05:18:48 GMT):
Has joined the channel.
kostas (Sat, 15 Apr 2017 05:11:35 GMT):
@gbolo @hanhzf @rahulhegde and anyone else seeing an issue with the Kafka orderer in the E2E CLI tests: This is (thankfully) not an orderer bug. It is related to the peer expecting the channel creation to happen a bit too fast. @nnao has submitted a patch -- updates on : https://jira.hyperledger.org/browse/FAB-2982
adeelqureshi (Sun, 16 Apr 2017 17:19:33 GMT):
Has joined the channel.
passkit (Mon, 17 Apr 2017 10:40:01 GMT):
Having some trouble instantiating CC with the latest build and Kafka orderer
```
2017-04-17 10:36:38.333 UTC [orderer/common/broadcast] Handle -> DEBU d81 Broadcast is filtering message of type 3 for channel testing
2017-04-17 10:36:38.341 UTC [common/policies] GetPolicy -> DEBU d82 Returning policy Writers for evaluation
2017-04-17 10:36:38.341 UTC [cauthdsl] func1 -> DEBU d83 Gate evaluation starts: (&{n:1 policies:
```
kostas (Mon, 17 Apr 2017 11:39:15 GMT):
@passkit: Interesting. I would expect that if the message was too large, the [sizefilter](https://github.com/hyperledger/fabric/blob/master/orderer/common/sizefilter/sizefilter.go#L36) based on [the relevant settings](https://github.com/hyperledger/fabric/blob/master/common/configtx/tool/configtx.yaml#L116) in `configtx.yaml` on batch size would have caught it during [the filtering stage](https://github.com/hyperledger/fabric/blob/master/orderer/common/broadcast/broadcast.go#L149).
kostas (Mon, 17 Apr 2017 11:40:23 GMT):
Is this an issue that you didn't have with _the same_ chaincode and previous builds?
kostas (Mon, 17 Apr 2017 11:41:46 GMT):
Can you open up a JIRA bug w/ all the step-by-step details that will allow me to reproduce this? File it under the "fabric-consensus" component (and label "kafka") so that I don't miss it.
passkit (Mon, 17 Apr 2017 12:06:02 GMT):
This is with example02
passkit (Mon, 17 Apr 2017 12:07:05 GMT):
It was previously working and there have been no changes. The CC container logs show the init and opening balances correctly
passkit (Mon, 17 Apr 2017 12:36:41 GMT):
@kostas - are you able to successfully run the e2e example on the latest build with a Kafka orderer? If you can then I will investigate other issues with my setup before filing a bug
gbolo (Mon, 17 Apr 2017 13:35:30 GMT):
@kostas Thank you!!
kostas (Mon, 17 Apr 2017 18:21:50 GMT):
@passkit: See the same error in `bootstrap.feature`. Working on it.
passkit (Mon, 17 Apr 2017 18:22:19 GMT):
Great - helps to know I am not insane!
kostas (Mon, 17 Apr 2017 20:25:35 GMT):
@passkit: Nick, I know what's up.
kostas (Mon, 17 Apr 2017 20:30:40 GMT):
A recent change (that I need to identify -- I'm asking around...) is causing the instantiate proposal transaction to balloon in size.
kostas (Mon, 17 Apr 2017 20:30:55 GMT):
The orderer now gets a 16M transaction, versus 4K up until a few days ago -- thanks to @jeffgarratt for helping me get those numbers.
kostas (Mon, 17 Apr 2017 20:31:41 GMT):
So thanks to the generous settings in configtx.yaml your orderer is receiving these big transactions and is trying to send them, but Kafka (the broker) is still using the default (1MB).
kostas (Mon, 17 Apr 2017 20:32:47 GMT):
If you want to work around this and you're using the `bddtests/environments/kafka` image, the `KAFKA_MESSAGE_MAX_BYTES` and `KAFKA_REPLICA_FETCH_MAX_BYTES` ENV vars will do the trick.
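A hedged compose-file fragment along those lines (the image name and the 20 MB value are assumptions for illustration; the values must be at least as large as the orderer's `AbsoluteMaxBytes` batch-size setting in `configtx.yaml`):
```
kafka0:
  image: hyperledger/fabric-kafka
  environment:
    # Broker-side message size limits; Kafka's defaults are ~1MB, which is
    # why oversized transactions from the orderer were being dropped.
    - KAFKA_MESSAGE_MAX_BYTES=20971520        # 20 MB
    - KAFKA_REPLICA_FETCH_MAX_BYTES=20971520  # keep >= message.max.bytes
```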
kostas (Mon, 17 Apr 2017 20:33:25 GMT):
Here's what we need to do better:
kostas (Mon, 17 Apr 2017 20:33:37 GMT):
1. Figure out why the instantiate proposal increased so much
kostas (Mon, 17 Apr 2017 20:34:08 GMT):
2. From my side: add a note to configtx.yaml that the Kafka broker settings need to be updated as well
kostas (Mon, 17 Apr 2017 20:34:19 GMT):
If you're still running into issues, let me know.
passkit (Tue, 18 Apr 2017 00:10:37 GMT):
@kostas Thanks, upping to 20M has fixed it.
kostas (Tue, 18 Apr 2017 00:16:05 GMT):
@passkit: Excellent, thanks for the update. I just pushed this WIP changeset so that you would have a point of reference, but looks like it's not needed anymore. https://gerrit.hyperledger.org/r/#/c/8127/
passkit (Tue, 18 Apr 2017 01:27:30 GMT):
@kostas are you saying it's not needed as there is also a fix to the instantiate proposal in pipeline? I couldn't see anything in Gerrit.
rmohta (Tue, 18 Apr 2017 04:29:17 GMT):
Has joined the channel.
kostas (Tue, 18 Apr 2017 12:49:29 GMT):
@passkit: I'm saying that if you use the WIP changeset and do `behave -k -D cache-deployment-spec features/bootstrap.feature` (per the instructions [here](https://github.com/hyperledger/fabric/blob/master/bddtests/README.md)) the end-to-end flow should now work without issues because of the modified Kafka settings (see `orderer-1-kafka-3/docker-compose.yml`)
smallant (Tue, 18 Apr 2017 14:43:17 GMT):
Has joined the channel.
scottz (Tue, 18 Apr 2017 19:12:57 GMT):
Hi @kostas, would the orderer object with any errors if two different peers used the same cert information? Assume they each use their own IP:Port info when connecting to the orderer. (I am fairly certain that Gossip would get confused, with two peers claiming to be the same identity... but ignore that for this question.)
scottz (Tue, 18 Apr 2017 19:14:35 GMT):
Or how about if the same peer created two simultaneous connections to call Deliver ?
kostas (Tue, 18 Apr 2017 19:44:53 GMT):
@scottz: Both _should_ be OK unless the underlying gRPC implementation has any objections. Why would two peers use the same cert?
scottz (Tue, 18 Apr 2017 20:21:18 GMT):
We are trying to modify the OTE test process in Go, which has multiple threads (each behaving like a peer/consumer for a different channel). To get them each to connect, for simplicity, it would be easier if they all use the same cert info for a single peer (using the same CORE_PEER_MSPCONFIGPATH), also because they are not in separate containers; I think they should also use the same CORE_PEER_ADDRESS ip address and port.
yacovm (Tue, 18 Apr 2017 21:05:44 GMT):
The orderer doesn't care about that, and the gRPC implementation can't know about this, because there is no mutual TLS between peers and ordering service from what I remember.
yacovm (Tue, 18 Apr 2017 21:06:50 GMT):
so the peer doesn't send its certificate. Also from what I remember when reading the code, the orderer allocates a handler per connection and there is no notion of mapping between identities of peers to sessions
kostas (Tue, 18 Apr 2017 21:11:22 GMT):
(Your memory serves you right.)
scottz (Tue, 18 Apr 2017 22:25:34 GMT):
ok, so that means we should be able to reuse certs. But first, of course, we need to figure out how to enhance the deliver client to use certs. I think we've set the correct MSP Name, ID, and Dir, and as far as I know we did successfully "join channel", but it is still failing when we invoke Deliver. If anyone has any thoughts on this, please do share.
yacovm (Tue, 18 Apr 2017 22:39:40 GMT):
it can already use certs
yacovm (Tue, 18 Apr 2017 22:40:13 GMT):
otherwise- e2e won't work...
kostas (Wed, 19 Apr 2017 02:00:42 GMT):
@scottz: I assume that by deliver client you refer to the `deliver_stdout` sample client. If that is the case, then this client does not support certs indeed. Otherwise, what @yacovm says is correct - one _can_ submit signed Deliver requests to the orderer. This is how the E2E tests work. Jeff's `bootstrap.feature` also demonstrates this. And https://gerrit.hyperledger.org/r/#/c/8127/ shows it in action for several Kafka setups (1/3-brokers + 1/3 orderers).
lignyxg (Wed, 19 Apr 2017 06:44:47 GMT):
what protocol do orderers use to communicate with peers? gRPC?
yacovm (Wed, 19 Apr 2017 07:19:44 GMT):
yes
SotirisAlfonsos (Wed, 19 Apr 2017 10:37:18 GMT):
Has joined the channel.
JohnWhitton (Wed, 19 Apr 2017 19:09:04 GMT):
Has joined the channel.
toddinpal (Thu, 20 Apr 2017 12:59:53 GMT):
What does a peer do if it receives different RWsets from endorsers? This could clearly happen in the case of concurrent updates. I realize that MVCC will catch some concurrency issues, but not clear from the documentation what the peer is to do in this case.
kostas (Thu, 20 Apr 2017 13:55:33 GMT):
@toddinpal: The peer should not go ahead with submitting a proposal to the ordering service in this case.
toddinpal (Thu, 20 Apr 2017 14:21:49 GMT):
@kostas Is that because there are no endorsement policies in Fabric 1.0 that support this case as the architecture document seems to imply an endorsement policy could define what should happen in this case.
kostas (Thu, 20 Apr 2017 14:53:39 GMT):
@toddinpal: Let me rephrase: assume there are 4 endorsers and the endorsement policy says that you need 3 endorsements. If you get 3 endorsements with identical RWsets -- _even_ if the 4th endorsement comes with a different RWset -- you're good to go. At any case though, you _need_ identical RWsets in order to proceed. This is the whole point of version checking and avoiding conflicts. There will be no endorsement policy that allows you to go through with different RWsets.
toddinpal (Thu, 20 Apr 2017 14:59:36 GMT):
@kostas In section 2.4 of the consensus architecture document, it states: "If all these checks pass, the transaction is deemed valid or committed. In this case, the peer marks the transaction with 1 in the bitmask of the PeerLedger, applies blob.endorsement.tran-proposal.writeset to blockchain state (if tran-proposals are the same, otherwise endorsement policy logic defines the function that takes blob.endorsement)." which seems to imply there could be different RWsets... or am I interpreting that incorrectly
kostas (Thu, 20 Apr 2017 15:03:31 GMT):
Ah, I can see why the phrasing would have you think that. Not sure what the intent behind it is, let me check. @dave.enyeart Not sure if you wrote this bit above?
dave.enyeart (Thu, 20 Apr 2017 15:29:50 GMT):
@kostas @toddinpal The architecture document was an early document. The implementation has evolved. In the implementation of the transaction there is one proposalResponsePayload (with one RWSet) and N endorsement signatures. The client picks one of the returned proposalResponsePayloads (with one RWSet included) and includes the N endorsement signatures in the submitted transaction. At least X of those signatures must be valid against the proposalResponsePayload, where X is the endorsement policy.
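The client-side check described above can be sketched as follows. This is illustrative only: `countMatching` and the payload shapes are made up for the example and are not Fabric SDK APIs, but the logic matches the rule that endorsements must carry byte-identical proposal response payloads (and thus identical RWsets) to count toward the policy.

```go
// Sketch of the client-side endorsement check: pick one proposal
// response payload and count how many endorsers returned an identical
// one. Differing payloads mean differing RWsets, so the transaction
// must not be submitted unless enough endorsements match.
package main

import (
	"bytes"
	"fmt"
)

// countMatching returns how many payloads are byte-identical to the chosen one.
func countMatching(chosen []byte, payloads [][]byte) int {
	n := 0
	for _, p := range payloads {
		if bytes.Equal(chosen, p) {
			n++
		}
	}
	return n
}

func main() {
	responses := [][]byte{
		[]byte(`{"writes":[["k1","v1"]]}`),
		[]byte(`{"writes":[["k1","v1"]]}`),
		[]byte(`{"writes":[["k1","v1"]]}`),
		[]byte(`{"writes":[["k1","OTHER"]]}`), // one conflicting endorser
	}
	required := 3 // e.g. an endorsement policy requiring 3 endorsements
	got := countMatching(responses[0], responses)
	fmt.Printf("matching endorsements: %d, required: %d, ok: %v\n",
		got, required, got >= required)
}
```

As kostas notes above, a 4th conflicting endorsement is harmless as long as the required number of identical ones is reached.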
kostas (Thu, 20 Apr 2017 15:37:48 GMT):
@dave.enyeart: Thanks. So @toddinpal what I wrote above applies.
toddinpal (Thu, 20 Apr 2017 16:39:05 GMT):
@kostas @dave.enyeart OK, thanks, that makes sense... Any chance the architecture document is going to catch up with the implementation in the near future? :-)
toddinpal (Thu, 20 Apr 2017 16:45:48 GMT):
@kostas @dave.enyeart One related question. If as kostas mentions the endorsement policy requires 3 endorsements, and it for whatever reason gets back 6 endorsements, 3 with one RWset, and the other 3 with a different RWset, I'm guessing it is up to the client to decide? I suppose this is all handled by the SDK and it probably just goes with whatever RWset first gets the required number of endorsements. Make sense?
dave.enyeart (Thu, 20 Apr 2017 16:47:20 GMT):
yes, client needs to decide which proposal response to submit. you would need to check on the fabric-sdk channels and ask if SDK has any features to help with this decision
toddinpal (Thu, 20 Apr 2017 16:49:41 GMT):
@dave.enyeart Thanks!
jimthematrix (Fri, 21 Apr 2017 03:03:18 GMT):
@kostas @jyellick @jeffgarratt I'm getting the following error when submitting create channel, using today's (around noon time) master:
```2017-04-21 00:19:41.338 UTC [orderer/common/broadcast] Handle -> WARN 1a7 Rejecting CONFIG_UPDATE because: Error validating DeltaSet: Attempt to set key [Policy] /Channel/Orderer/Admins to version 0, but key is at version 0
```
jimthematrix (Fri, 21 Apr 2017 03:03:54 GMT):
the channel.tx is freshly generated from configtxgen built from the same source level
jeffgarratt (Fri, 21 Apr 2017 03:04:03 GMT):
this is master?
jimthematrix (Fri, 21 Apr 2017 03:04:14 GMT):
correct
jeffgarratt (Fri, 21 Apr 2017 03:04:17 GMT):
hmmm
jeffgarratt (Fri, 21 Apr 2017 03:04:27 GMT):
Kostas and I are working (as we speak :( )
jeffgarratt (Fri, 21 Apr 2017 03:04:41 GMT):
on the hopefully un-merged config branch from jyellick
jimthematrix (Fri, 21 Apr 2017 03:05:39 GMT):
these are the top commits in my environment:
```commit 37f411c1da79175886b210b3c91c70a327bc5f77
Merge: d50e1bd2 2ec150d4
Author: Jonathan Levi (HACERA)
```
kostas (Fri, 21 Apr 2017 03:06:18 GMT):
@jimthematrix: How do you generate the create channel TX?
jeffgarratt (Fri, 21 Apr 2017 03:06:29 GMT):
by hand with protos
jeffgarratt (Fri, 21 Apr 2017 03:06:50 GMT):
sorry, that was for jim :)
jimthematrix (Fri, 21 Apr 2017 03:07:26 GMT):
`./build/bin/configtxgen -outputBlock ./twoorgs.genesis.block -profile TwoOrgs`
`./build/bin/configtxgen -outputCreateChannelTx ./mychannel.tx -profile TwoOrgs`
kostas (Fri, 21 Apr 2017 03:07:56 GMT):
Roger. Will look into it first thing tomorrow morning and get back to you.
jimthematrix (Fri, 21 Apr 2017 03:08:15 GMT):
have the protos been updated lately, like since two days ago?
jimthematrix (Fri, 21 Apr 2017 03:08:30 GMT):
if not I can try the SDK's json-based channel create
jimthematrix (Fri, 21 Apr 2017 03:08:39 GMT):
it was merged yesterday
jimthematrix (Fri, 21 Apr 2017 03:12:53 GMT):
ha! the SDK works fine ;-) `node test/integration/new-channel.js` from https://gerrit.hyperledger.org/r/#/c/8011
jimthematrix (Fri, 21 Apr 2017 03:13:30 GMT):
so that works and BDD works, looks like the problem is in configtxgen tool
jimthematrix (Fri, 21 Apr 2017 03:16:05 GMT):
https://jira.hyperledger.org/browse/FAB-3285
kostas (Fri, 21 Apr 2017 03:31:53 GMT):
Got it, will check to see which dependency got modified. Will fix and update the JIRA issue.
ioctl (Sat, 22 Apr 2017 16:33:56 GMT):
Has joined the channel.
yacovm (Sun, 23 Apr 2017 09:44:29 GMT):
https://gerrit.hyperledger.org/r/#/c/8059/9/orderer/common/deliver/deliver.go@110 @kostas @mastersingh24 your opinion about this is also welcome, as I might be wrong/too conservative.
kostas (Sun, 23 Apr 2017 19:53:48 GMT):
@jimthematrix RE: FAB-3285, I am unable to reproduce locally, and in fact everything works w/o issues on my end. I updated the issue accordingly, and even posted a temp commit as an additional means to check. Let me know if I'm missing anything.
nhrishi (Sun, 23 Apr 2017 21:44:54 GMT):
Hi, can someone pls help clarify endorsement policy and how end users can approve/reject a transaction. Let's say there are 2 peers for 2 different banks (Bank A and B). Bank A initiates a transaction on chaincode X using Bank A's client app. It has an endorsement policy that says Bank B.Member must sign. The transaction hits the endorsing peer of B, which creates the R/W set, validates the transaction and the signature of peer A. It then forwards an endorsement to the client app, which waits for a Bank B member to sign the transaction. Once the Bank B user signs the transaction (by approve/reject on a UI), it is forwarded back to Bank B's peer, which in turn sends an endorsement to Bank A's client app. Client A's app collects the endorsements and submits to the orderer. The orderer orders and creates a block and sends it to the committing peers to update the state and ledger. Can someone pls confirm this is the correct flow?
toddinpal (Mon, 24 Apr 2017 12:29:42 GMT):
@nhrishi Although that is probably technically feasible, I'm not sure it is practical. You'll be holding up the transaction for an extended period of time if you are waiting for a user to respond. Any concurrent updates to the related key/value pairs in the world state will likely cause the transaction to eventually be marked invalid. I would think a much better model would be to place some information in the world state indicating something needs to be approved. So the first transaction would have Bank A create a key/value pair indicating what it wants to do, but not actually changing the world state outside this pending transaction.
Bank B's user would then query for that key/value pair, decide to approve or not, and then either directly perform the original transaction, or reject it. That's sort of a rough description of how it might be handled.
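The two-phase pattern suggested above can be sketched roughly as follows. The in-memory map stands in for chaincode state (real chaincode would use the shim's PutState/GetState/DelState), and all names here are illustrative, not Fabric APIs:

```go
// Sketch of the pending-approval pattern: tx 1 records what Bank A
// wants to do without touching the target key; tx 2 (issued after a
// human decision) either applies or discards the pending write.
package main

import "fmt"

// pending describes a proposed write awaiting approval.
type pending struct{ key, val string }

// ledger stands in for chaincode world state.
type ledger struct {
	kv      map[string]string
	pending map[string]pending
}

func newLedger() *ledger {
	return &ledger{kv: map[string]string{}, pending: map[string]pending{}}
}

// propose: tx 1 — record intent only; the target key is unchanged.
func (l *ledger) propose(id, key, val string) { l.pending[id] = pending{key, val} }

// approve: tx 2 — apply the pending write and remove the pending entry.
func (l *ledger) approve(id string) bool {
	p, ok := l.pending[id]
	if !ok {
		return false
	}
	l.kv[p.key] = p.val
	delete(l.pending, id)
	return true
}

// reject: tx 2 alternative — drop the proposal without applying it.
func (l *ledger) reject(id string) { delete(l.pending, id) }

func main() {
	l := newLedger()
	l.propose("tx42", "balance/acct1", "900")
	fmt.Println("before approval:", l.kv["balance/acct1"])
	l.approve("tx42")
	fmt.Println("after approval:", l.kv["balance/acct1"])
}
```

Because each phase is a short-lived transaction, nothing is held open while waiting for the human approver, which avoids the MVCC conflicts toddinpal describes.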
kostas (Mon, 24 Apr 2017 14:30:21 GMT):
@nhrishi #fabric-peer-endorser-committer is a better place for this question.
gbolo (Mon, 24 Apr 2017 14:50:52 GMT):
hey @kostas - what is the significance of the metadata info we add with -ldflags when we build the fabric?
gbolo (Mon, 24 Apr 2017 14:51:00 GMT):
what is the consequence, if we don't do it?
kostas (Mon, 24 Apr 2017 14:51:24 GMT):
Not sure I follow?
gbolo (Mon, 24 Apr 2017 14:52:25 GMT):
@kostas when we build the fabric using the makefile, it adds various ldflags during the go build command like: -X github.com/hyperledger/fabric/common/metadata.Version
gbolo (Mon, 24 Apr 2017 14:52:53 GMT):
some metadata
kostas (Mon, 24 Apr 2017 14:53:12 GMT):
Ah, I don't know the answer to this one, but I'd love to know. @mastersingh24?
mastersingh24 (Mon, 24 Apr 2017 14:56:32 GMT):
@gbolo @kostas - it's some magic which @greg.haskins built in, in order to basically dynamically set the build versions in the code every time you build it - you'll notice in dev, make produces snapshot releases based on the latest git commit level.
Go's compiler allows you to pass in build flags with the -X which will actually set variables in the code - https://github.com/hyperledger/fabric/blob/master/common/metadata/metadata.go
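A minimal demo of the `-X` mechanism described above: a package-level string that the linker can overwrite at build time. The variable name `main.Version` and its default value are illustrative; Fabric's real variables live in `github.com/hyperledger/fabric/common/metadata`.

```go
// Demo of Go linker-set variables.
package main

import "fmt"

// Version is replaced at link time with, e.g.:
//   go build -ldflags "-X main.Version=1.0.0-snapshot-abc1234"
// If the flag is omitted, the compiled-in default below is used --
// which is the "consequence" of skipping the Makefile's ldflags.
var Version = "development build"

func main() {
	fmt.Println("version:", Version)
}
```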
greg.haskins (Mon, 24 Apr 2017 14:56:32 GMT):
Has joined the channel.
gbolo (Mon, 24 Apr 2017 14:57:32 GMT):
@mastersingh24 thanks, but whats the consequence for not setting them?
mastersingh24 (Mon, 24 Apr 2017 14:57:37 GMT):
http://stackoverflow.com/questions/11354518/golang-application-auto-build-versioning
mastersingh24 (Mon, 24 Apr 2017 14:57:49 GMT):
the main issue is with chaincode
gbolo (Mon, 24 Apr 2017 14:57:58 GMT):
@mastersingh24 nice icon btw :)
mastersingh24 (Mon, 24 Apr 2017 14:58:07 GMT):
because it tries to match up the versions of the base containers to use for chaincode building
gbolo (Mon, 24 Apr 2017 14:59:28 GMT):
@mastersingh24 but we can set this like: runtime: hyperledger/fabric-baseos:$(ARCH)-$(BASE_VERSION)
gbolo (Mon, 24 Apr 2017 14:59:48 GMT):
in the core.yaml or env
gbolo (Mon, 24 Apr 2017 15:00:13 GMT):
in builder container too - hyperledger/fabric-ccenv:$(ARCH)-$(PROJECT_VERSION)
greg.haskins (Mon, 24 Apr 2017 15:00:26 GMT):
@gbolo as @mastersingh24, the main implication is chaincode may not work
greg.haskins (Mon, 24 Apr 2017 15:01:19 GMT):
those metadata tags are ultimately what informs the runtime about what variable substitution to use for things like $(BASE_VERSION)
gbolo (Mon, 24 Apr 2017 15:01:26 GMT):
@greg.haskins in what sense? you mean from a shim perspective? the peer can be explicitly told what tag to spin up for baseos and ccenv
gbolo (Mon, 24 Apr 2017 15:01:44 GMT):
what docker image tag to use
greg.haskins (Mon, 24 Apr 2017 15:01:57 GMT):
if you change the yaml so that it doesnt use variable substitution, it _may_ work
greg.haskins (Mon, 24 Apr 2017 15:02:06 GMT):
but we dont test for what happens when you do this
greg.haskins (Mon, 24 Apr 2017 15:02:10 GMT):
so you'd be on your own
gbolo (Mon, 24 Apr 2017 15:02:37 GMT):
@greg.haskins I understand that, but if thats the only issue, meaning what tag to use. thats not much at all
gbolo (Mon, 24 Apr 2017 15:02:43 GMT):
easily overridable with env var
greg.haskins (Mon, 24 Apr 2017 15:03:00 GMT):
probably, but like I said, we dont test it
gbolo (Mon, 24 Apr 2017 15:03:19 GMT):
aslong as there is nothing in the code base which may reference this somehow
greg.haskins (Mon, 24 Apr 2017 15:03:22 GMT):
the other implication is things like version self reporting wont work
gbolo (Mon, 24 Apr 2017 15:03:43 GMT):
@greg.haskins yes I thought that too.
gbolo (Mon, 24 Apr 2017 15:03:45 GMT):
question
greg.haskins (Mon, 24 Apr 2017 15:03:49 GMT):
e.g. "peer version" uses it
mastersingh24 (Mon, 24 Apr 2017 15:03:54 GMT):
right
greg.haskins (Mon, 24 Apr 2017 15:04:14 GMT):
I am not sure if that is included on the wire anywhere
gbolo (Mon, 24 Apr 2017 15:04:17 GMT):
do you guys recommend running the peer natively? or do you only recommend using the docker containers?
mastersingh24 (Mon, 24 Apr 2017 15:04:42 GMT):
I use Docker since you need Docker for the chaincode anyway
gbolo (Mon, 24 Apr 2017 15:04:46 GMT):
or it's completely up to the user
greg.haskins (Mon, 24 Apr 2017 15:04:51 GMT):
for now, i would recommend the containers only because its the most tested...but I would like to have a series of native packages (rpm/deb/brew) for 1.1
greg.haskins (Mon, 24 Apr 2017 15:05:17 GMT):
and yes, to @mastersingh24s point, you need docker anyway
gbolo (Mon, 24 Apr 2017 15:05:45 GMT):
not a big fan of ubuntu docker containers
gbolo (Mon, 24 Apr 2017 15:06:00 GMT):
I only run lean containers
greg.haskins (Mon, 24 Apr 2017 15:06:09 GMT):
note that the OS is almost completely stripped out
gbolo (Mon, 24 Apr 2017 15:06:16 GMT):
that way they are not full of vulnerabilities
greg.haskins (Mon, 24 Apr 2017 15:06:27 GMT):
in fact, the only reason there is an OS at all is because of a snag with glibc/nss
greg.haskins (Mon, 24 Apr 2017 15:06:43 GMT):
at one point a few months back, it was scratch/busybox based
gbolo (Mon, 24 Apr 2017 15:07:00 GMT):
@greg.haskins you have more details on this snag?
greg.haskins (Mon, 24 Apr 2017 15:07:16 GMT):
yes, you know how NSS works?
gbolo (Mon, 24 Apr 2017 15:07:36 GMT):
yeah, it should be from scratch ideally: statically linked go bin and root-ca pems in the container
greg.haskins (Mon, 24 Apr 2017 15:08:01 GMT):
note that it _is_ a static binary
greg.haskins (Mon, 24 Apr 2017 15:08:24 GMT):
however, glibc is a little weird in that even if you link -static, it still wants to dlopen the NSS stuff at runtime
gbolo (Mon, 24 Apr 2017 15:08:27 GMT):
@greg.haskins no, not too familiar with nss
greg.haskins (Mon, 24 Apr 2017 15:08:47 GMT):
NSS basically lets you define the execution environment for name resolution
greg.haskins (Mon, 24 Apr 2017 15:08:55 GMT):
dns, files, nis, etc
gbolo (Mon, 24 Apr 2017 15:08:55 GMT):
@greg.haskins would the pkcs11 stuff compile with musl?
greg.haskins (Mon, 24 Apr 2017 15:09:12 GMT):
short answer: no
greg.haskins (Mon, 24 Apr 2017 15:09:15 GMT):
but NSS would
gbolo (Mon, 24 Apr 2017 15:09:16 GMT):
:(
greg.haskins (Mon, 24 Apr 2017 15:09:27 GMT):
or more precisely, we wouldnt need NSS concerns with musl
greg.haskins (Mon, 24 Apr 2017 15:09:48 GMT):
however, the snag there is our multi-arch stuff
gbolo (Mon, 24 Apr 2017 15:09:54 GMT):
@greg.haskins doesn't golang have the native go resolver that we can use?
greg.haskins (Mon, 24 Apr 2017 15:10:15 GMT):
@gbolo dont know, not that I am aware of
greg.haskins (Mon, 24 Apr 2017 15:10:20 GMT):
all I can tell you is as follows
greg.haskins (Mon, 24 Apr 2017 15:10:29 GMT):
the current binary relies on gethostbyname()
gbolo (Mon, 24 Apr 2017 15:10:47 GMT):
I was looking at this the other day - https://golang.org/pkg/net/#hdr-Name_Resolution
greg.haskins (Mon, 24 Apr 2017 15:10:48 GMT):
if you link with glibc, you pick up glibc's gethostbyname() which relies on NSS
greg.haskins (Mon, 24 Apr 2017 15:11:06 GMT):
if you link with musl, you pick up musl's gethostbyname()
greg.haskins (Mon, 24 Apr 2017 15:11:36 GMT):
musl is a good solution (for NSS) but we need a good multi-arch provider (X/P/Z)
greg.haskins (Mon, 24 Apr 2017 15:11:45 GMT):
alpine is great for X, but not for P/Z
greg.haskins (Mon, 24 Apr 2017 15:12:15 GMT):
and musl wont solve pkcs11 at all, because the HSM vendors are compiling/distributing binaries
gbolo (Mon, 24 Apr 2017 15:12:58 GMT):
@greg.haskins I statically compiled with glibc and threw it in an alpine container, but I also set the /etc/nsswitch.conf
greg.haskins (Mon, 24 Apr 2017 15:13:00 GMT):
based on all this, the path of least resistance (for now) was to revert the busybox/musl/alpine stuff and go for a minimal ubuntu image
gbolo (Mon, 24 Apr 2017 15:13:25 GMT):
'hosts: files dns' > /etc/nsswitch.conf
greg.haskins (Mon, 24 Apr 2017 15:13:28 GMT):
but bottom line, the OS isnt really used _at all_ other than to provide NSS substrate
greg.haskins (Mon, 24 Apr 2017 15:13:52 GMT):
right, and I am pretty sure you are broken ;)
greg.haskins (Mon, 24 Apr 2017 15:14:31 GMT):
if you link glibc -static and then try to dns lookup something, it will fail
greg.haskins (Mon, 24 Apr 2017 15:14:47 GMT):
if you strace that, its because glibc is still trying to dlopen the nss provider
greg.haskins (Mon, 24 Apr 2017 15:15:06 GMT):
it wont crash, just the DNS lookup will fail
greg.haskins (Mon, 24 Apr 2017 15:15:18 GMT):
its pretty subtle..so subtle we didnt notice the problem right away
gbolo (Mon, 24 Apr 2017 15:15:57 GMT):
@greg.haskins I will take a look. Seems like we have the ability to use the native go resolver as well:
gbolo (Mon, 24 Apr 2017 15:16:03 GMT):
```
It can use a pure Go resolver that sends DNS requests directly to the servers listed in /etc/resolv.conf, or it can use a cgo-based resolver that calls C library routines such as getaddrinfo and getnameinfo.
```
gbolo (Mon, 24 Apr 2017 15:16:14 GMT):
https://golang.org/pkg/net/#hdr-Name_Resolution
greg.haskins (Mon, 24 Apr 2017 15:16:39 GMT):
ok, that would be awesome from my perspective
greg.haskins (Mon, 24 Apr 2017 15:16:48 GMT):
but note a few things
greg.haskins (Mon, 24 Apr 2017 15:17:21 GMT):
1) the OS in the container already is stripped and the only thing we run is the static peer binary...so the argument is exclusively one of size not really security
greg.haskins (Mon, 24 Apr 2017 15:17:44 GMT):
2) even if we solve the NSS problem, we dont solve the PKCS11 problem
greg.haskins (Mon, 24 Apr 2017 15:17:58 GMT):
3) we still need to figure out what the container environment would be
greg.haskins (Mon, 24 Apr 2017 15:18:15 GMT):
3a) scratch isnt really viable, as its super annoying to not even have a shell
gbolo (Mon, 24 Apr 2017 15:18:30 GMT):
@greg.haskins I'm thinking in terms of a purely self-hosted production grade deployment
greg.haskins (Mon, 24 Apr 2017 15:18:44 GMT):
3b) we could go back to busybox, which is suitably multi-arch but still lacking some mid-level utilities
gbolo (Mon, 24 Apr 2017 15:18:45 GMT):
I understand our usecases might be different
greg.haskins (Mon, 24 Apr 2017 15:19:14 GMT):
3c) we could go with alpine to pick up a wealth of utilities (like JVM) but then we have a problem with multi-arch
gbolo (Mon, 24 Apr 2017 15:19:57 GMT):
@greg.haskins personally I think alpine makes the most sense, as long as we can avoid any technicalities with musl
greg.haskins (Mon, 24 Apr 2017 15:20:15 GMT):
yes, but the problem is how do we support P/Z
gbolo (Mon, 24 Apr 2017 15:20:15 GMT):
the peer image would only be 20mb
greg.haskins (Mon, 24 Apr 2017 15:20:43 GMT):
yes, we had that for a while (until NSS problem uncovered) and it was awesome
greg.haskins (Mon, 24 Apr 2017 15:21:09 GMT):
I should also mention 4) no matter what we do to the peer, we still need a ccenv
greg.haskins (Mon, 24 Apr 2017 15:21:16 GMT):
and the ccenv has high requirements
gbolo (Mon, 24 Apr 2017 15:21:55 GMT):
I guess it goes back to the standard model here. I run the peer natively, and build out the ccenv according to spec (the ubuntu makefile), along with baseos
gbolo (Mon, 24 Apr 2017 15:22:07 GMT):
the peer is not compiling anything anyways
gbolo (Mon, 24 Apr 2017 15:23:23 GMT):
I suppose it could be fine to have the peer either run natively or in a very lean container, as long as the ccenv and baseos remain untouched
greg.haskins (Mon, 24 Apr 2017 15:23:40 GMT):
@gbolo https://github.com/hyperledger-archives/fabric/commit/4301e41b94679a5ecf95ed373130ac1f1d4a2cb2
greg.haskins (Mon, 24 Apr 2017 15:24:17 GMT):
prior to that merge, it was a stripped down 25M container
gbolo (Mon, 24 Apr 2017 15:25:38 GMT):
@greg.haskins nice
gbolo (Mon, 24 Apr 2017 16:31:03 GMT):
hey @kostas - when i get this:
```
Apr 24 12:28:45 devqa-ord1 orderer[23242]: 2017-04-24 12:28:45.057 EDT [msp] Validate -> INFO 017 MSP OrdererMSP validating identity
Apr 24 12:28:45 devqa-ord1 orderer[23242]: 2017-04-24 12:28:45.058 EDT [orderer/common/broadcast] Handle -> WARN 018 Rejecting broadcast message because of filter error: Rejected by rule: *sigfilter.sigFilter
Apr 24 12:28:45 devqa-ord1 orderer[23242]: 2017-04-24 12:28:45.060 EDT [orderer/common/deliver] Handle -> WARN 019 Error reading from stream: stream error: code = 1 desc = "context canceled"
```
gbolo (Mon, 24 Apr 2017 16:31:26 GMT):
does this suggest a problem with my msp certificates?
kostas (Mon, 24 Apr 2017 16:31:51 GMT):
Yes, it means you're not signing your broadcast request appropriately.
kostas (Mon, 24 Apr 2017 16:31:59 GMT):
So: yes
gbolo (Mon, 24 Apr 2017 16:32:32 GMT):
I'm using the peer binary to create this channel request
gbolo (Mon, 24 Apr 2017 16:32:41 GMT):
So i should look at my msp path env var
gbolo (Mon, 24 Apr 2017 16:32:46 GMT):
ok thanks
gbolo (Mon, 24 Apr 2017 16:46:19 GMT):
@kostas found the issue, thanks buddy!
david_dornseifer (Mon, 24 Apr 2017 21:20:34 GMT):
Has joined the channel.
MikeMayori (Mon, 24 Apr 2017 21:26:15 GMT):
Has joined the channel.
haiderny (Tue, 25 Apr 2017 15:16:10 GMT):
Has joined the channel.
nhrishi (Wed, 26 Apr 2017 19:17:10 GMT):
@toddinpal Thanks for your response. I thought approval/rejection of transactions was one of the primary reasons behind endorsement policies. When an endorsement policy says Org1.member must sign, it must be signed after the user's approval, correct? I might be missing something here.
LoveshHarchandani (Thu, 27 Apr 2017 06:44:42 GMT):
Has joined the channel.
yacovm (Thu, 27 Apr 2017 06:58:45 GMT):
@kostas it seems to me like the YAML file isn't updated anymore to run the end-to-end
yacovm (Thu, 27 Apr 2017 06:58:53 GMT):
2017-04-27 06:54:31.623 UTC [orderer/multichain] newLedgerResources -> CRIT 041 Error creating configtx manager and handlers: Error creating group Consortiums: Disallowed channel group: Consortiums
yacovm (Thu, 27 Apr 2017 06:59:01 GMT):
This is from the ordering service log when e2e is started
bh4rtp (Thu, 27 Apr 2017 07:10:54 GMT):
can anyone tell me why the latest e2e_cli changed CORE_PEER_MSPCONFIGPATH and CORE_PEER_LOCALMSPID definitions from orderer to peer0 in the createChannel function of scripts/script.sh?
HubertYoung (Thu, 27 Apr 2017 07:17:26 GMT):
Has joined the channel.
mihaig (Thu, 27 Apr 2017 08:21:53 GMT):
@kostas: Is there any documentation describing how the consensus mechanism works as a whole in v1, with an indication of each node's role in ordering and validation per configured consensus mechanism?
For example, when the configured consensus mechanism is Kafka, are ordering and validation performed entirely at the orderer level, or is the peer also involved?
HubertYoung (Thu, 27 Apr 2017 14:58:56 GMT):
Is the PBFT consensus mechanism available? It doesn't work following the doc.
HubertYoung (Thu, 27 Apr 2017 14:58:56 GMT):
Using a Consensus Plugin
A consensus plugin might require some specific configuration that you need to set up. For example, to use the Practical Byzantine Fault Tolerant (PBFT) consensus plugin provided as part of the fabric, perform the following configuration:
In core.yaml, set the peer.validator.consensus value to pbft
In core.yaml, make sure the peer.id is set sequentially as vpN where N is an integer that starts from 0 and goes to N-1. For example, with 4 validating peers, set the peer.id to vp0, vp1, vp2, vp3.
In consensus/pbft/config.yaml, set the general.mode value to batch and the general.N value to the number of validating peers on the network, also set general.batchsize to the number of transactions per batch.
In consensus/pbft/config.yaml, optionally set timer values for the batch period (general.timeout.batch), the acceptable delay between request and execution (general.timeout.request), and for view-change (general.timeout.viewchange)
See core.yaml and consensus/pbft/config.yaml for more detail.
All of these settings may be overridden via the command line environment variables, e.g. CORE_PEER_VALIDATOR_CONSENSUS_PLUGIN=pbft or CORE_PBFT_GENERAL_MODE=batch
kostas (Thu, 27 Apr 2017 16:16:55 GMT):
@mihaig: This may answer some questions and give you a decent overview: https://docs.google.com/document/u/1/d/1vNMaM7XhOlu9tB_10dKnlrhy5d7b1u8lSY8a-kVjCO4/edit
kostas (Thu, 27 Apr 2017 16:17:32 GMT):
If there are still questions, please ask here until we get a proper document.
kostas (Thu, 27 Apr 2017 16:18:35 GMT):
For instance, to your question: the orderers only check that the incoming transaction is signed appropriately (by a client who's authorized to broadcast to the channel), and do not inspect the payload for validity. That is always up to the committing peers.
kostas (Thu, 27 Apr 2017 16:19:26 GMT):
@HubertYoung: This is a remnant of the old documentation for v0.6 and needs to be removed. If you don't mind, please open up a JIRA issue for it and file it under: component = fabric-consensus, fix version = v1.0.0.
kostas (Thu, 27 Apr 2017 16:20:15 GMT):
PBFT not available for v1.0. Sometime during the 1.x cycle I predict we'll have SBFT (which is PBFT w/ some modifications) ready to roll.
kostas (Thu, 27 Apr 2017 16:20:24 GMT):
@yacovm: Thanks for letting me know, looking into it now.
yacovm (Thu, 27 Apr 2017 16:20:31 GMT):
Sure
HubertYoung (Thu, 27 Apr 2017 16:42:24 GMT):
@kostas Thanks. What consensus algorithm is the Kafka-based ordering service using?
kostas (Thu, 27 Apr 2017 16:43:01 GMT):
@HubertYoung https://kafka.apache.org/documentation/#design_replicatedlog
HubertYoung (Thu, 27 Apr 2017 16:47:05 GMT):
I'll check it later. Another question: how do you set up a fabric cluster on different servers?
kostas (Thu, 27 Apr 2017 16:47:25 GMT):
This is not related to #fabric-consensus is it?
HubertYoung (Thu, 27 Apr 2017 16:51:03 GMT):
I see. Thanks!
jimthematrix (Thu, 27 Apr 2017 19:33:13 GMT):
@kostas @binhn @adc I need help interpreting the following log messages from the orderer when I tried to send a signed config tx to the orderer to create channel:
```orderer0 | 2017-04-27 19:11:36.467 UTC [policies] GetPolicy -> DEBU 1ad Returning policy Admins for evaluation
orderer0 | 2017-04-27 19:11:36.467 UTC [cauthdsl] func1 -> DEBU 1ae Gate evaluation starts: (&{n:1 policies:
```
jimthematrix (Thu, 27 Apr 2017 19:37:12 GMT):
never mind, just realized we needed the admin user from each org to sign the config_update bytes
jimthematrix (Thu, 27 Apr 2017 19:38:53 GMT):
did that, and now getting a different error:
```Rejecting CONFIG_UPDATE because: Error authorizing update: Error validating DeltaSet: Attempted to set key [Policy] /Channel/Orderer/OrdererMSP/Writers to version 1, but key does not exist
```
yacovm (Thu, 27 Apr 2017 19:41:50 GMT):
@kostas is there news regarding the e2e?
kostas (Thu, 27 Apr 2017 19:43:11 GMT):
@yacovm As I've noted in the maintainers channel, it seems to pass over here just fine: https://chat.hyperledger.org/channel/fabric-maintainers?msg=2YS3rz5kakHefb2GP
kostas (Thu, 27 Apr 2017 19:43:27 GMT):
Waiting for someone else to let us know if they see what I see, or what you see.
yacovm (Thu, 27 Apr 2017 19:44:24 GMT):
ok I'm running a check
kostas (Thu, 27 Apr 2017 19:44:46 GMT):
@jimthematrix The only way to figure out exactly and without the slightest doubt what the config update should look like, how it should be signed, etc. is to study the artifacts and the report produced by @jeffgarratt's `bootstrap.feature`.
kostas (Thu, 27 Apr 2017 19:46:06 GMT):
This is why Jeff went all in on this, so that we have a proper guide on how to do things.
jimthematrix (Thu, 27 Apr 2017 19:48:26 GMT):
@kostas @jeffgarratt fixed another problem (it seems configupdate's mspid needs to match ordere's name in configtx.yaml) and now getting this error:
```orderer0 | 2017-04-27 19:44:59.824 UTC [orderer/common/broadcast] Handle -> WARN 1c7 Rejecting CONFIG_UPDATE because: Error authorizing update: Error validating DeltaSet: Policy for [Values] /Channel/OrdererAddresses not satisfied: Failed to reach implicit threshold of 2 sub-policies, required 1 remaining
```
jeffgarratt (Thu, 27 Apr 2017 19:50:44 GMT):
@jimthematrix did both consortium orgs sign the config update ?
jeffgarratt (Thu, 27 Apr 2017 19:51:01 GMT):
meaning, did a cert signed by those orgs sign the update ?
kostas (Thu, 27 Apr 2017 19:51:17 GMT):
The error message says that one signature/policy matched, and there's one more policy remaining to be satisfied (so another signature is missing).
jimthematrix (Thu, 27 Apr 2017 19:51:32 GMT):
yes we have the config update bytes signed by both consortium orgs, plus the orderer org
jeffgarratt (Thu, 27 Apr 2017 19:52:00 GMT):
hmm, I only have seen this when only 1 of the orgs members signed
jeffgarratt (Thu, 27 Apr 2017 19:52:28 GMT):
do you need the orderer to sign?
jimthematrix (Thu, 27 Apr 2017 19:52:42 GMT):
i have a theory for that (may have to do with a potential bug in node SDK)
jimthematrix (Thu, 27 Apr 2017 19:52:47 GMT):
let me try something real quick
jimthematrix (Thu, 27 Apr 2017 19:56:14 GMT):
don't believe the orderer admin to have to sign, just we threw it in there anyway
jeffgarratt (Thu, 27 Apr 2017 19:58:04 GMT):
k
jeffgarratt (Thu, 27 Apr 2017 20:26:55 GMT):
who created peer0?
jimthematrix (Thu, 27 Apr 2017 20:30:47 GMT):
@jeffgarratt @adc @elli-androulaki @yacovm do you know what this in the genesis block means?
```"identities": [
{
"principalClassification": "ROLE",
"principal": "CgdPcmcxTVNQ"
}
]
```
jimthematrix (Thu, 27 Apr 2017 20:31:19 GMT):
this used to be something like:
``` "identities":[
{
"principal_classification":"ROLE",
"msp_identifier":"OrdererMSP",
"Role":"MEMBER"
}
]
```
jeffgarratt (Thu, 27 Apr 2017 20:31:23 GMT):
that is generally the entry that an N-out-of will point to
jimthematrix (Thu, 27 Apr 2017 20:31:35 GMT):
i know that
jimthematrix (Thu, 27 Apr 2017 20:31:50 GMT):
but what does that value for the `principal` property mean?
jeffgarratt (Thu, 27 Apr 2017 20:31:58 GMT):
2 secs
jeffgarratt (Thu, 27 Apr 2017 20:34:02 GMT):
principal is a serialized msp_principal_pb2.MSPRole
jeffgarratt (Thu, 27 Apr 2017 20:34:20 GMT):
behave code...
jeffgarratt (Thu, 27 Apr 2017 20:34:21 GMT):
```
def getMspPrincipalAsRole(self, mspRoleTypeAsString):
    mspRole = msp_principal_pb2.MSPRole(msp_identifier=self.name, role=msp_principal_pb2.MSPRole.MSPRoleType.Value(mspRoleTypeAsString))
    mspPrincipal = msp_principal_pb2.MSPPrincipal(
        principal_classification=msp_principal_pb2.MSPPrincipal.Classification.Value('ROLE'),
        principal=mspRole.SerializeToString())
    return mspPrincipal
```
jeffgarratt (Thu, 27 Apr 2017 20:35:15 GMT):
@jimthematrix ^^
jimthematrix (Thu, 27 Apr 2017 20:43:49 GMT):
guess the difference in the above printouts is just a change in how the tool prints things. I double-checked: we've always been creating MSPRoles and setting them on the `principal` field as a byte array
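(The `CgdPcmcxTVNQ` value above is just the base64 encoding of that serialized MSPRole, so the MSP identifier can be made visible with the standard library. A quick check, assuming standard protobuf wire encoding, where tag byte `0x0a` marks field 1, `msp_identifier`, as length-delimited:)

```python
import base64

raw = base64.b64decode("CgdPcmcxTVNQ")
# 0x0a = field 1 (msp_identifier), wire type 2; 0x07 = length; then the string
print(raw)  # b'\n\x07Org1MSP'
```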
jimthematrix (Thu, 27 Apr 2017 20:44:05 GMT):
but thanks anyway @jeffgarratt
jimthematrix (Thu, 27 Apr 2017 20:45:03 GMT):
so we are still looking for reasons why we are getting this error:
```[orderer/common/broadcast] Handle -> WARN 873 Rejecting CONFIG_UPDATE because: Error authorizing update: Error validating DeltaSet: Policy for [Values] /Channel/OrdererAddresses not satisfied: Failed to reach implicit threshold of 2 sub-policies, required 2 remaining
```
kostas (Thu, 27 Apr 2017 21:00:37 GMT):
Have you compared your artifacts to the ones that the BDD is generating?
kostas (Thu, 27 Apr 2017 23:27:56 GMT):
@jimthematrix: I believe Bret was dealing with a similar issue earlier.
kostas (Thu, 27 Apr 2017 23:29:21 GMT):
So first of all what does this error mean:
kostas (Thu, 27 Apr 2017 23:29:45 GMT):
You are attempting to modify the `/Channel/OrdererAddresses` value.
kostas (Thu, 27 Apr 2017 23:30:59 GMT):
In its current form (not sure if it's a stopgap measure, or something that will persist -- do note that I haven't been involved in the design of this, but I can tell you what the code does), whenever you request the creation of a channel, your new channel inherits all the top-level `/Channel` values from the system chain.
kostas (Thu, 27 Apr 2017 23:31:15 GMT):
Which means it also inherits the associated modification policies.
kostas (Thu, 27 Apr 2017 23:31:47 GMT):
If you look at the block that `configtxgen` outputs, it assigns the "admins" modification policy to this value.
kostas (Thu, 27 Apr 2017 23:32:19 GMT):
And the "admins" policy, as defined in the "/Channel" level is an implicit meta policy with a rule of MAJORITY and a subpolicy of "admins".
kostas (Thu, 27 Apr 2017 23:33:38 GMT):
Concretely, this means that the signature set that accompanies your config_update envelope, needs to satisfy the majority of the following admin policies: `/Channel/Orderer/Admins`, `/Channel/Application/Admins`
kostas (Thu, 27 Apr 2017 23:34:13 GMT):
(So in this case majority means both, which is why you get the reference to an implicit threshold of 2 sub-policies that you fail to satisfy.)
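(To make the arithmetic concrete, here is a toy sketch of how an ImplicitMetaPolicy combines its sub-policy results -- illustrative only, not the orderer's actual code:)

```python
def implicit_meta_satisfied(rule, subpolicy_results):
    """Toy ImplicitMetaPolicy evaluation: `rule` is ANY, ALL, or MAJORITY;
    `subpolicy_results` holds the boolean outcome of each sub-policy
    (e.g. /Channel/Orderer/Admins, /Channel/Application/Admins)."""
    n, satisfied = len(subpolicy_results), sum(subpolicy_results)
    if rule == "ANY":
        return satisfied >= 1
    if rule == "ALL":
        return satisfied == n
    if rule == "MAJORITY":
        return satisfied > n // 2  # strict majority: 2 of 2, 2 of 3, 3 of 5
    raise ValueError(rule)

# With two sub-policies, a MAJORITY rule needs both of them, hence the
# "implicit threshold of 2 sub-policies" in the error message.
print(implicit_meta_satisfied("MAJORITY", [True, False]))  # False
print(implicit_meta_satisfied("MAJORITY", [True, True]))   # True
```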
kostas (Thu, 27 Apr 2017 23:34:49 GMT):
That aside, in its current form at least, your channel creation request should only touch the `/Channel/Application` config group.
kostas (Thu, 27 Apr 2017 23:35:19 GMT):
You can do a `configtxgen -inspectChannelCreateTx channel.tx` to test this -- pay attention to the "delta set" that is printed at the bottom.
kostas (Thu, 27 Apr 2017 23:36:18 GMT):
This is not something that can be inferred from the `configtx.rst` document (I think, anyway), and something that Jeff and I figured out through trial and error, and by combing through the code.
kostas (Thu, 27 Apr 2017 23:36:57 GMT):
Which is why I suggest that your output match what @jeffgarratt produces with his `bootstrap.feature` to a T.
jimthematrix (Fri, 28 Apr 2017 02:19:22 GMT):
@kostas thanks for the detailed explanation, it matches our understanding as well, which is why we were trying to include the 3 signatures in the envelope: org1msp admin, org2msp admin and orderermsp admin. see the big blob I pasted earlier at 3:33pm
jimthematrix (Fri, 28 Apr 2017 02:21:39 GMT):
what's weird is that, as you can see in the big blob, we get these errors from the 3 included signatures:
```does not satisfy principal: The identity is a member of a different MSP (expected OrdererMSP, got Org1MSP)```
```does not satisfy principal: The identity is a member of a different MSP (expected OrdererMSP, got Org2MSP)```
jimthematrix (Fri, 28 Apr 2017 02:24:40 GMT):
it's as though the orderer expects all signatures to be from the OrdererMSP, which doesn't seem to match the policy as described above
kostas (Fri, 28 Apr 2017 03:22:41 GMT):
@jimthematrix: Hmmm, why wouldn't it match the policy as described above though? It says you're to satisfy both `/Channel/Application/Admins` and `/Channel/Orderer/Admins` so I can see why it's asking for `OrdererMSP`.
jimthematrix (Fri, 28 Apr 2017 03:24:24 GMT):
what i meant is it should expect application orgs (org1msp, org2msp), but from the error messages only the OrdererMSP identity produces a match
kostas (Fri, 28 Apr 2017 03:33:54 GMT):
Can you please file a JIRA with the exact genesis block that you used, the exact channel creation transaction that you pushed, and who signs what?
kostas (Fri, 28 Apr 2017 03:34:11 GMT):
It's somewhat difficult to debug w/o this info.
kostas (Fri, 28 Apr 2017 03:35:54 GMT):
I will also repeat that you should not be trying to update any value in the top-level `/Channel` config group. The only elements in your delta set should belong to `/Channel/Application`.
jimthematrix (Fri, 28 Apr 2017 03:46:16 GMT):
finally able to make some progress by decoding the "config_update" out of the file written by `configtxgen -outputCreateChannelTx`, adding signatures, wrapping it in a ConfigUpdateEnvelope, signing it, and sending it to the orderer to create the channel
jimthematrix (Fri, 28 Apr 2017 03:46:47 GMT):
it seems to have succeeded, will proceed to attempt to use it to make sure
jimthematrix (Fri, 28 Apr 2017 03:51:15 GMT):
can join channels, install chaincode, instantiate chaincode
jimthematrix (Fri, 28 Apr 2017 04:01:10 GMT):
but got an error when trying to get the last config block as part of invoking chaincode transactions: looks like `last_config.index` from the newest block somehow points to an ENDORSER_TRANSACTION block instead of a CONFIG block
jimthematrix (Fri, 28 Apr 2017 04:01:24 GMT):
this has always worked before. has anything changed in this area?
jimthematrix (Fri, 28 Apr 2017 04:01:45 GMT):
@bretharrison ^^^ see above comments
bretharrison (Fri, 28 Apr 2017 04:01:46 GMT):
Has joined the channel.
kostas (Fri, 28 Apr 2017 05:06:17 GMT):
@jimthematrix: I added debugging statements to inspect what's going on in the E2E CLI test and I cannot reproduce the error that you are reporting. This is what I am getting:
kostas (Fri, 28 Apr 2017 05:06:23 GMT):
```2017-04-28 05:01:35.148 UTC [orderer/multichain] addLastConfigSignature -> INFO a69 >>> [channel: mychannel] Last config updated to 1
2017-04-28 05:01:35.148 UTC [orderer/multichain] addLastConfigSignature -> INFO a6c >>> [channel: mychannel] About to write block 1 as last config block
2017-04-28 05:01:35.150 UTC [orderer/multichain] WriteBlock -> INFO a73 >>> [channel: mychannel] Just wrote block 1 whose first transaction is of type CONFIG
2017-04-28 05:01:37.272 UTC [orderer/multichain] addLastConfigSignature -> INFO cbc >>> [channel: mychannel] Last config updated to 2
2017-04-28 05:01:37.272 UTC [orderer/multichain] addLastConfigSignature -> INFO cbf >>> [channel: mychannel] About to write block 2 as last config block
2017-04-28 05:01:37.273 UTC [orderer/multichain] WriteBlock -> INFO cc7 >>> [channel: mychannel] Just wrote block 2 whose first transaction is of type CONFIG
2017-04-28 05:01:55.395 UTC [orderer/multichain] addLastConfigSignature -> INFO d45 >>> [channel: mychannel] About to write block 2 as last config block
2017-04-28 05:01:55.395 UTC [orderer/multichain] WriteBlock -> INFO d4c >>> [channel: mychannel] Just wrote block 3 whose first transaction is of type ENDORSER_TRANSACTION
2017-04-28 05:02:12.469 UTC [orderer/multichain] addLastConfigSignature -> INFO d86 >>> [channel: mychannel] About to write block 2 as last config block
2017-04-28 05:02:12.478 UTC [orderer/multichain] WriteBlock -> INFO d91 >>> [channel: mychannel] Just wrote block 4 whose first transaction is of type ENDORSER_TRANSACTION```
kostas (Fri, 28 Apr 2017 05:07:30 GMT):
As you can see the blocks that are being written to as "last config blocks" (blocks 1 and 2) include an envelope of type CONFIG as expected.
kostas (Fri, 28 Apr 2017 05:08:18 GMT):
The above was tested on the master branch, at this commit: `f4a76319 - (origin/master, origin/HEAD, master) Merge "[FAB-3452] peer/gossip test-coverage" (6 hours ago)`
kostas (Fri, 28 Apr 2017 05:08:47 GMT):
If you continue to see odd behavior, can you please open up a JIRA item w/ details to reproduce and assign it to me?
muralisr (Fri, 28 Apr 2017 05:24:05 GMT):
@kostas with latest master I get the following after a clean build
muralisr (Fri, 28 Apr 2017 05:24:22 GMT):
Message Attachments
muralisr (Fri, 28 Apr 2017 05:24:30 GMT):
user error ?
kostas (Fri, 28 Apr 2017 05:25:17 GMT):
Heh, I think so. Did you `make docker && make configtxgen` before running the E2E CLI test?
kostas (Fri, 28 Apr 2017 05:25:29 GMT):
It looks like you're running on old binaries.
HubertYoung (Fri, 28 Apr 2017 05:41:42 GMT):
I submit 20 transactions at the same time, but only one transaction commits successfully. Any ideas? Here is the log:
```
2017-04-28 03:08:16.005 UTC [kvledger] Commit -> INFO 19f Channel [foo]: Created block [29] with 10 transaction(s)
2017-04-28 03:08:18.271 UTC [statevalidator] ValidateAndPrepareBatch -> WARN 1a0 Block [30] Transaction index [1] TxId [d38ae316767f61502417ea2681c2c7d7fbf3898897bac14edcab2cd5df28362f] marked as invalid by state validator. Reason code [11]
2017-04-28 03:08:18.272 UTC [statevalidator] ValidateAndPrepareBatch -> WARN 1a1 Block [30] Transaction index [2] TxId [8aad508c2a92a093fc471ef9424a27dee8055eb72947a40296e03a31c0d262e6] marked as invalid by state validator. Reason code [11]
2017-04-28 03:08:18.272 UTC [statevalidator] ValidateAndPrepareBatch -> WARN 1a2 Block [30] Transaction index [3] TxId [aa0382a5a8021d35f871a7367ccc67e4771665066843f8a5eaa7365155ace1c3] marked as invalid by state validator. Reason code [11]
```
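(Reason code 11 most likely corresponds to `MVCC_READ_CONFLICT` in Fabric's `TxValidationCode` enum: transactions in a block that read a key version already updated by an earlier transaction in the same block get invalidated. A toy sketch of that rule, not Fabric's actual validator:)

```python
def validate_block(txs, versions):
    """Toy MVCC validation: each tx carries a read set {key: version} and
    a write set [keys]. A tx is valid only if every version it read is
    still current; valid txs bump the versions of the keys they write."""
    results = []
    for tx in txs:
        ok = all(versions.get(k) == v for k, v in tx["reads"].items())
        results.append(ok)
        if ok:
            for k in tx["writes"]:
                versions[k] = versions.get(k, 0) + 1
    return results

# 20 concurrent invokes that all read key "a" at version 1: only the
# first one in the block survives, the rest are marked invalid.
txs = [{"reads": {"a": 1}, "writes": ["a"]} for _ in range(20)]
print(sum(validate_block(txs, {"a": 1})))  # 1
```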
kostas (Fri, 28 Apr 2017 05:49:46 GMT):
@HubertYoung Seems like there's an issue with the validation of these transactions. #fabric-peer-endorser-committer seems like a better venue. Though honestly the best way to address this is to create a JIRA item w/ details and post the link to that channel maybe.
muralisr (Fri, 28 Apr 2017 05:51:23 GMT):
@kostas indeed...I *was* using old configtxgen. Thanks
kostas (Fri, 28 Apr 2017 05:51:42 GMT):
@muralisr Any time.
kostas (Fri, 28 Apr 2017 05:53:20 GMT):
@jimthematrix, RE: https://chat.hyperledger.org/channel/fabric-consensus?msg=egY3HcBNxFH9ZvJRa
kostas (Fri, 28 Apr 2017 05:57:25 GMT):
Cool that you got it working, and I believe this is closer to the flow Gari is suggesting anyway (where the SDKs don't reinvent everything from scratch), but just so that we're clear on what the ConfigUpdate should look like:
1. This is the readset: https://github.com/hyperledger/fabric/blob/master/bddtests/steps/bootstrap_util.py#L725...L730
2. This is the writeset: https://github.com/hyperledger/fabric/blob/master/bddtests/steps/bootstrap_util.py#L568...L576
Both of them invoked during this step: https://github.com/hyperledger/fabric/blob/master/bddtests/steps/bootstrap_impl.py#L127
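(Conceptually -- and this is only an illustrative sketch, not the `configtx` code -- the delta set that `-inspectChannelCreateTx` prints is the set of keys whose entry in the write set differs from, or is absent in, the read set:)

```python
def delta_set(readset, writeset):
    """Toy delta-set computation: the keys the config update actually
    changes relative to the current config."""
    return {k: v for k, v in writeset.items() if readset.get(k) != v}

# A well-formed channel-creation tx should only bump things under
# /Channel/Application, leaving top-level /Channel values untouched.
reads = {"/Channel/OrdererAddresses": 0, "/Channel/Application": 0}
writes = {"/Channel/OrdererAddresses": 0, "/Channel/Application": 1}
print(delta_set(reads, writes))  # {'/Channel/Application': 1}
```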
HubertYoung (Fri, 28 Apr 2017 05:57:28 GMT):
Thanks.
kostas (Fri, 28 Apr 2017 05:58:26 GMT):
In the report that `bootstrap.feature` generates, in `bddtests/tmp/UUID-goes-here/report.html`, you'll look at something like this:
kostas (Fri, 28 Apr 2017 05:58:32 GMT):
Message Attachments
kostas (Fri, 28 Apr 2017 05:59:37 GMT):
You can copy and paste the bit that starts from "from..." into your Python REPL and inspect the ConfigUpdateEnvelope further if you wish.
wsh_bob (Fri, 28 Apr 2017 08:09:58 GMT):
Has joined the channel.
vugranam (Fri, 28 Apr 2017 12:11:21 GMT):
Has joined the channel.
jimthematrix (Fri, 28 Apr 2017 13:04:38 GMT):
thanks @kostas will give bootstrap.feature a try as we try to revive sdk's create/update channel from JSON
jimthematrix (Fri, 28 Apr 2017 13:28:10 GMT):
@kostas @jeffgarratt I wonder if we are checking the right property:
jimthematrix (Fri, 28 Apr 2017 13:28:15 GMT):
Message Attachments
jimthematrix (Fri, 28 Apr 2017 13:28:49 GMT):
this is the decoded object `ChannelHeader` from the config block, so we did get the right block
jimthematrix (Fri, 28 Apr 2017 13:29:43 GMT):
however, as you can see the `type` property on that block's channel header is set to `3` (ENDORSER_TRANSACTION = 3;), while we expected it to be `1` (CONFIG)
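(For reference, a toy sketch of the check the SDK is effectively doing, with dicts standing in for the protobuf block structures; HeaderType CONFIG = 1, ENDORSER_TRANSACTION = 3:)

```python
CONFIG, ENDORSER_TRANSACTION = 1, 3

def last_config_block(chain):
    """Follow the last-config index in the newest block's metadata and
    return the block it points at."""
    return chain[chain[-1]["last_config_index"]]

chain = [
    {"last_config_index": 0, "first_tx_type": CONFIG},               # genesis/config
    {"last_config_index": 0, "first_tx_type": ENDORSER_TRANSACTION}, # invoke
]
blk = last_config_block(chain)
print(blk["first_tx_type"] == CONFIG)  # True: the pointer must land on a CONFIG block
```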
vugranam (Fri, 28 Apr 2017 13:55:14 GMT):
is there an implementation of Raft https://raft.github.io/ for hyperledger fabric ?
yacovm (Fri, 28 Apr 2017 14:05:08 GMT):
@vugranam no
vugranam (Fri, 28 Apr 2017 14:06:31 GMT):
@yacovm Thanks I am wondering if it is worth implementing? I am thinking of implementing it
yacovm (Fri, 28 Apr 2017 14:06:55 GMT):
To implement raft or to plug in an existing raft module into HLF?
vugranam (Fri, 28 Apr 2017 14:07:15 GMT):
not sure which way I will go
binhn (Fri, 28 Apr 2017 14:11:16 GMT):
@vugranam kafka is a raft implementation
vugranam (Fri, 28 Apr 2017 14:11:47 GMT):
Interesting
jimthematrix (Fri, 28 Apr 2017 14:12:09 GMT):
@binhn @jeffgarratt https://hangouts.google.com/hangouts/_/bqyxlr6u25epjpn4iayws5kksue
jkirke (Fri, 28 Apr 2017 15:04:26 GMT):
Has left the channel.
jimthematrix (Fri, 28 Apr 2017 15:24:22 GMT):
@kostas @binhn @jeffgarratt https://jira.hyperledger.org/browse/FAB-3493
kostas (Sat, 29 Apr 2017 03:42:50 GMT):
@jimthematrix: This will _not_ be the final fix, but can you try it out and let me know if it works OK? https://gerrit.hyperledger.org/r/#/c/8749/
kostas (Sat, 29 Apr 2017 03:42:57 GMT):
It seems to work fine here.
jimthematrix (Sat, 29 Apr 2017 03:43:21 GMT):
sure i'll pull it down
jimthematrix (Sat, 29 Apr 2017 04:02:44 GMT):
@kostas just tested with block #1 being a transaction block (used to fail before the fix), now works great!
kostas (Sat, 29 Apr 2017 04:03:05 GMT):
Sweet, thank you for letting me know.
jimthematrix (Sat, 29 Apr 2017 04:03:16 GMT):
subsequent tx also works fine
jimthematrix (Sat, 29 Apr 2017 04:03:21 GMT):
thanks for the fix!
jimthematrix (Sat, 29 Apr 2017 04:03:52 GMT):
please let me know once this gets merged, so I can remove the temporary workaround in node SDK
kostas (Sat, 29 Apr 2017 04:05:57 GMT):
Sure thing. So, the fix that I pointed you to right now is a bit too hacky. I'll post an updated and less hacky fix sometime during the weekend and will ping you.
kostas (Sat, 29 Apr 2017 04:41:53 GMT):
@jimthematrix: FWIW I updated https://jira.hyperledger.org/browse/FAB-3493 with a detailed explanation of the underlying issue, as well as the fix that I will be applying.
npnjuguna (Sat, 29 Apr 2017 05:02:13 GMT):
Has joined the channel.
jimthematrix (Sun, 30 Apr 2017 20:28:26 GMT):
@kostas is https://gerrit.hyperledger.org/r/#/c/8749/ ready to merge? still marked as [WIP]
scottz (Mon, 01 May 2017 15:35:34 GMT):
where can someone find a description of consortiums, their relationship with profiles, channel creation blocks, and orderer blocks, and how they are intended to be used?
kostas (Mon, 01 May 2017 15:37:56 GMT):
@scottz: `source/configtx.rst` has some of these answers but most likely not all. I'm here for any follow-up questions you may have.
kostas (Mon, 01 May 2017 16:05:19 GMT):
@jimthematrix Was working on it last night again, still some leftover work w/ unit tests. Will update you when ready.
jimthematrix (Mon, 01 May 2017 16:05:41 GMT):
thanks Kostas
jimthematrix (Mon, 01 May 2017 16:07:00 GMT):
as far as i know this is the last road block to a successful node SDK build again, but no pressure :smiling_imp:
scottz (Mon, 01 May 2017 17:35:45 GMT):
[ ](https://chat.hyperledger.org/channel/fabric-consensus?msg=Awcn9hfm2XMnNQ4dB) @kostas That was very detailed information, quite useful for a Fabric developer. But I was asking more from the perspective of an Administrator, trying to architect a network of profiles and channels. I am looking for something that is consumable by a User, not a Fabric developer. We need to make it easier for people to understand how to set up their networks. Say things like the following. (I know this is incomplete, but is this accurate so far?): "File configtx.yaml shall contain a Profile for an orderer service, which defines Consortiums of organizations which can create channels among themselves. It shall contain one or more other Profiles, which identifies the Consortium of Organizations that participate in any channels using this profile. For example ... "
aybekbuka (Mon, 01 May 2017 17:46:06 GMT):
Hello, I have several questions about the release and performance of hyperledger/fabric. I started working on v0.6 in october 2016, now migrating the project to v1.0-alpha. 1) when fabric v1.0 will be stable enough to use in production mode ? 2) Should I start a project on v1-alpha and upgrade it after or wait till stable v1.0 is released ? 3) What is the number of maximum peers that fabric can handle ? Thanks
kostas (Mon, 01 May 2017 18:59:44 GMT):
generik12
kostas (Mon, 01 May 2017 19:19:06 GMT):
@scottz: You are right. This is on our radar as well, but we're dealing with an ever-growing TODO list. Regarding the specifics of your example, something along the lines of: "File configtx.yaml shall contain a Profile which encapsulates two definitions: that of the ordering service (via the `Orderer` key) and that of the consortiums, i.e. the sets of orgs that are allowed to create channels w/ each other, via the `Consortiums` key."
kostas (Mon, 01 May 2017 19:19:19 GMT):
@aybekbuka: This is not relevant to #fabric-consensus.
jimthematrix (Mon, 01 May 2017 20:40:41 GMT):
@kostas just FYI, we are able to use a hack to workaround the problem that you are working on (`last_update` index in block 1) so you can take the time necessary to build the right solution
jimthematrix (Mon, 01 May 2017 20:40:57 GMT):
(https://gerrit.hyperledger.org/r/#/c/8389 - finally a green build)
kostas (Tue, 02 May 2017 04:12:26 GMT):
@jimthematrix: Just pushed the fix for FAB-3493 in https://gerrit.hyperledger.org/r/#/c/8825/
kostas (Tue, 02 May 2017 04:12:50 GMT):
Passes the E2E CLI test, Jeff's Behave test, and all unit tests in Vagrant.
jimthematrix (Tue, 02 May 2017 04:12:55 GMT):
thanks I'll take a look
Lakshmipadmaja (Tue, 02 May 2017 04:59:49 GMT):
Has joined the channel.
yongkook (Tue, 02 May 2017 21:09:57 GMT):
Has joined the channel.
rahulhegde (Tue, 02 May 2017 21:53:28 GMT):
Question:
1. If I want to persist file-system of the orderer across restart; do I need to volume mount only /var/hyperledger/production?
2. I am using e2e cli from fabric repository and even after setting ` ORDERER_LEDGER_TYPE=file ` in the docker composer file, I see print of ramledger. How can I set use fileledger?
I don't see anything in configtx.yaml that can set the ledger-type, though orderer.yaml is set to ram. Does it mean that since I don't set it in the genesis block, it takes the default from orderer.yaml and the docker environment variable is not referred to (just a guess)?
kostas (Wed, 03 May 2017 02:14:48 GMT):
@rahulhegde:
1. You should persist whatever the location in `FileLedger.Location` points to, in `orderer.yaml`.
2. The way to overwrite the default ledger type is `ORDERER_GENERAL_LEDGERTYPE=file`. In general, you prefix the `ORDERER` keyword in front of every key you want to edit in `orderer.yaml`.
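(That mapping can be sketched mechanically -- illustrative only: join the `orderer.yaml` key path with underscores, upper-case it, and prefix `ORDERER_`:)

```python
def orderer_env_var(*key_path):
    """Build the env var that overrides a key in orderer.yaml,
    e.g. General.LedgerType -> ORDERER_GENERAL_LEDGERTYPE."""
    return "_".join(("ORDERER",) + tuple(k.upper() for k in key_path))

print(orderer_env_var("General", "LedgerType"))   # ORDERER_GENERAL_LEDGERTYPE
print(orderer_env_var("FileLedger", "Location"))  # ORDERER_FILELEDGER_LOCATION
```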
kostas (Wed, 03 May 2017 02:17:55 GMT):
Also note that the `docker_compose.yml` file in the E2E CLI test does set the file-based ledger via the appropriate ENV var (see: https://github.com/hyperledger/fabric/blob/5f4b99a894826f0e8f2ea1ddcfe27099da5e7760/examples/e2e_cli/docker-compose.yaml#L11) and it is indeed being used by the orderer, as can be witnessed by this line in the debugging logs: `Ledger dir: /var/hyperledger/production/orderer`
kouohhashi (Wed, 03 May 2017 04:55:22 GMT):
Has joined the channel.
tom.appleyard (Wed, 03 May 2017 14:14:35 GMT):
Does anyone know how you can see the contents of the blockchain?
tom.appleyard (Wed, 03 May 2017 14:14:39 GMT):
as in for a peer?
rahulhegde (Wed, 03 May 2017 14:25:06 GMT):
405502
kostas (Wed, 03 May 2017 14:39:09 GMT):
@tom.appleyard #fabric is a better channel for this question
tom.appleyard (Wed, 03 May 2017 16:15:58 GMT):
Hey All, I've been playing a bit with Fabric 1.0 and have a question about how blockchains are maintained when it comes to installing new chaincode:
I have 4 peers, all on the same channel, and 2 chaincodes. Peers 1 and 2 have Chaincode A, Peers 3 and 4 have Chaincode B. Both A and B are copies of example02.
I install and instantiate the A and B on their respective peers then invoke them, subtracting 30 from 1&2 and 10 from 3&4. Now I install chaincode A on Peer 3 and chaincode B on Peer 1. Issuing queries on 3 and 1 for their new chaincodes gets me the updated values.
However I now have two questions:
If we inspect the blockchains, won’t they be different – both peers were unaware of the other chaincode until after they had made changes to their worldstates? Does installing chaincode just mean you are able to endorse it and the deltas from all chaincode invocations on the channel are shared?
Both of these chaincodes work by retrieving the value with key ‘a’ from the world state so shouldn’t the value of ‘a’ actually be 60 (not 70 and 90)? How would you access the same data from two different chaincodes?
kostas (Wed, 03 May 2017 16:25:07 GMT):
@tom.appleyard Hello. I think #fabric is a better channel for this question
tom.appleyard (Wed, 03 May 2017 16:27:49 GMT):
Sure - thanks
scottz (Wed, 03 May 2017 16:36:16 GMT):
Is "multiple zookeepers" a possible thing we could/shoud configure? I know we should use multiple orderers and kafka brokers in a resilient network. I have not seen any code or scripts examples or conversations about it.
scottz (Wed, 03 May 2017 17:23:32 GMT):
@kostas ^^^
kostas (Wed, 03 May 2017 17:34:03 GMT):
@scottz: All instances of a Kafka-based ordering service should have an ensemble of 3 (or 5) ZK nodes. By the end of next week this should be codified in sample configurations as well; I'm working on the BDD path for Kafka as we speak.
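(The 3-or-5 recommendation falls out of ZooKeeper's majority-quorum requirement; a quick illustration:)

```python
def zk_tolerated_failures(n):
    """A ZooKeeper ensemble of n nodes needs a majority quorum of
    n//2 + 1, so it tolerates n - (n//2 + 1) failed nodes."""
    return n - (n // 2 + 1)

for n in (3, 4, 5):
    # 3 -> 1, 4 -> 1, 5 -> 2: an even-sized ensemble adds no extra tolerance
    print(n, zk_tolerated_failures(n))
```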
scottz (Wed, 03 May 2017 19:38:58 GMT):
excellent.
scottz (Wed, 03 May 2017 19:57:10 GMT):
Is this a known issue? or fixed? In a network with multiple orderers, we should be able to send a "peer channel create" to any of them. On the alpha load, it only works if we send the transaction to the last orderer in our ordererOrg list. It looks like the identity cert of the last orderer is included in the channel.tx.
rahulhegde (Wed, 03 May 2017 21:13:45 GMT):
[ ](https://chat.hyperledger.org/channel/fabric-consensus?msg=gSB8gY3jQZqPaQ8qz) @kostas
Sorry, just got to try this out once again.
I re-ran the Peer CLI E2E using https://chat.hyperledger.org/channel/fabric-ci?msg=7jxNech7WzkP9ngi3 but I see the following in the orderer logs: ` 2017-05-03 20:32:57.260 UTC [orderer/ramledger] `, which as per my understanding means it is not using the fileledger.
At the same time, following is the orderer.yaml data
```
FileLedger:
# Location: The directory to store the blocks in.
# NOTE: If this is unset, a temporary location will be chosen using
# the prefix specified by Prefix.
Location:
# The prefix to use when generating a ledger directory in temporary space.
# Otherwise, this value is ignored.
Prefix: hyperledger-fabric-ordererledger
```
I don't see any file created via ` find / -name 'hyperledger-fabric-ordererledger*' ` (Ran inside docker container of orderer)
Next: the following link https://github.com/hyperledger/fabric/blob/master/orderer/README.md indicates using ` ORDERER_LEDGER_TYPE=file `, which differs from ` ORDERER_GENERAL_GENESISMETHOD=file `, and it's true the latter is already defined in the Peer CLI e2e test.
I have tried specifying both environment variables however I don't see anything.
kostas (Wed, 03 May 2017 21:58:07 GMT):
@rahulhegde: I suspect that you are using an earlier Docker image. Are you doing a `make docker` before running the E2E CLI test?
kostas (Wed, 03 May 2017 21:58:45 GMT):
@scottz: https://chat.hyperledger.org/channel/fabric-consensus?msg=TKP3BSHnqGsNNBwLh
kostas (Wed, 03 May 2017 21:59:43 GMT):
Interesting. Can you provide some more details that will allow me to reproduce this in a JIRA item? (Component = fabric-consensus, fix = v1.0.0)
scottz (Wed, 03 May 2017 22:14:30 GMT):
yes, surya will do that.
rahulhegde (Thu, 04 May 2017 00:20:38 GMT):
[ ](https://chat.hyperledger.org/channel/fabric-consensus?msg=Srkvk2Q3qfj8Gf85e) @kostas I am using the fabric-ci published 19th April fabric-images. Was there a fix added post these images were published?
veegas (Thu, 04 May 2017 01:30:53 GMT):
Has joined the channel.
kostas (Thu, 04 May 2017 04:26:52 GMT):
@rahulhegde: Yes, fixes come in on an almost daily basis (incl. this one: https://gerrit.hyperledger.org/r/#/c/8357/), and there are some features that were added right before the freeze for 1.0 as well that you've missed if you're using those images. (For example: the consortium-related changes.) So I would advise fast-forwarding to the latest changes on master.
William.Z (Thu, 04 May 2017 13:04:39 GMT):
Has joined the channel.
arnuschky (Thu, 04 May 2017 15:06:31 GMT):
Has joined the channel.
arnuschky (Thu, 04 May 2017 17:36:43 GMT):
Under the new consensus architecture of 1.0, how can chaincodes interact? More specifically, how can they *trust* each other? If I understand correctly, each chaincode can have a totally different set of validators. I guess chaincodes can only trust each other's results if their validator set is overlapping or equal, no?
kostas (Thu, 04 May 2017 18:04:49 GMT):
I would guess so.
Willson (Fri, 05 May 2017 06:46:25 GMT):
hi everybody, these days I studied the consensus process of Kafka by following the source code. However, there are some doubts in my mind; hope someone can answer them for me, thanks.
In the official documents, the Kafka consumer gets the transaction messages from Kafka, packs them into a block, and then sends the block to the committer. But in the source code, the consumer packs the messages into a block and commits it to the file ledger directly, without the version check and endorsement policy check. Is there any problem with my understanding?
Willson (Fri, 05 May 2017 06:47:05 GMT):
```
case *ab.KafkaMessage_TimeToCut:
	ttcNumber = msg.GetTimeToCut().BlockNumber
	logger.Debugf("[channel: %s] It's a time-to-cut message for block %d", ch.support.ChainID(), ttcNumber)
	if ttcNumber == ch.lastCutBlock+1 {
		timer = nil
		logger.Debugf("[channel: %s] Nil'd the timer", ch.support.ChainID())
		batch, committers := ch.support.BlockCutter().Cut()
		if len(batch) == 0 {
			logger.Warningf("[channel: %s] Got right time-to-cut message (for block %d),"+
				" no pending requests though; this might indicate a bug", ch.support.ChainID(), ch.lastCutBlock)
			logger.Infof("[channel: %s] Consenter for channel exiting", ch.support.ChainID())
			return
		}
		block := ch.support.CreateNextBlock(batch)
		encodedLastOffsetPersisted = utils.MarshalOrPanic(&ab.KafkaMetadata{LastOffsetPersisted: in.Offset})
		ch.support.WriteBlock(block, committers, encodedLastOffsetPersisted)
		ch.lastCutBlock++
		logger.Debugf("[channel: %s] Proper time-to-cut received, just cut block %d",
			ch.support.ChainID(), ch.lastCutBlock)
		continue
	} else if ttcNumber > ch.lastCutBlock+1 {
		logger.Warningf("[channel: %s] Got larger time-to-cut message (%d) than allowed (%d)"+
			" - this might indicate a bug", ch.support.ChainID(), ttcNumber, ch.lastCutBlock+1)
		logger.Infof("[channel: %s] Consenter for channel exiting", ch.support.ChainID())
		return
	}
	logger.Debugf("[channel: %s] Ignoring stale time-to-cut-message for block %d", ch.support.ChainID(), ch.lastCutBlock)
case *ab.KafkaMessage_Regular:
	env := new(cb.Envelope)
	if err := proto.Unmarshal(msg.GetRegular().Payload, env); err != nil {
		// This shouldn't happen, it should be filtered at ingress
		logger.Criticalf("[channel: %s] Unable to unmarshal consumed regular message:", ch.support.ChainID(), err)
		continue
	}
	batches, committers, ok := ch.support.BlockCutter().Ordered(env)
	logger.Debugf("[channel: %s] Ordering results: items in batch = %v, ok = %v", ch.support.ChainID(), batches, ok)
	if ok && len(batches) == 0 && timer == nil {
		timer = time.After(ch.batchTimeout)
		logger.Debugf("[channel: %s] Just began %s batch timer", ch.support.ChainID(), ch.batchTimeout.String())
		continue
	}
	// If !ok, batches == nil, so this will be skipped
	for i, batch := range batches {
		block := ch.support.CreateNextBlock(batch)
		encodedLastOffsetPersisted = utils.MarshalOrPanic(&ab.KafkaMetadata{LastOffsetPersisted: in.Offset})
		ch.support.WriteBlock(block, committers[i], encodedLastOffsetPersisted)
		ch.lastCutBlock++
		logger.Debugf("[channel: %s] Batch filled, just cut block %d", ch.support.ChainID(), ch.lastCutBlock)
	}
	if len(batches) > 0 {
		timer = nil
	}
```
yacovm (Fri, 05 May 2017 08:10:30 GMT):
> the consumer packs the message into a block and commit it to the file ledger directly without the version check and endorsement policy check.
Well @Willson here is the thing - the ordering service doesn't care about endorsement policies.
Its only goal is to set a total order among the transactions in the same block.
The file ledger of the ordering service is only for storing the blocks and *not* for clients to read state from.
The ones that do the policy checking are the peers, before they commit the block.
Willson (Fri, 05 May 2017 08:14:33 GMT):
thanks @yacovm, do you mean that the orderer also has a ledger that differs from the committer's?
yacovm (Fri, 05 May 2017 08:25:25 GMT):
I'd say that its role is different
yacovm (Fri, 05 May 2017 08:26:36 GMT):
the blocks are the same blocks though, as all peers also keep the blocks in their raw form as given to them by the ordering service
Willson (Fri, 05 May 2017 08:54:48 GMT):
Sorry, I do not quite understand, after the block is added to the ledger of ordering service, how and when it will be sent to peers(commiters?) ?
yacovm (Fri, 05 May 2017 08:58:16 GMT):
When they reach out to it
yacovm (Fri, 05 May 2017 08:58:22 GMT):
And ask for it
Willson (Fri, 05 May 2017 09:04:32 GMT):
thanks for your answers @yacovm
bh4rtp (Fri, 05 May 2017 12:01:38 GMT):
hi, does fabric have kafka orderer example?
kusmarius (Fri, 05 May 2017 12:43:16 GMT):
Has joined the channel.
nitingaur (Fri, 05 May 2017 22:19:55 GMT):
Has joined the channel.
vijaygopal (Sat, 06 May 2017 17:15:47 GMT):
Has joined the channel.
vijaygopal (Sat, 06 May 2017 17:15:51 GMT):
Error: proposal failed (err: rpc error: code = 2 desc = Block number should have been 3 but was 0)
Has anyone faced this issue ?
I am trying to setup Hyperledger Fabric network using this tutorial http://hyperledger-fabric.readthedocs.io/en/latest/getting_started.html?cm_mc_uid=10430795090714934358737&cm_mc_sid_50200000=1494087517
richard.holzeis (Sun, 07 May 2017 17:24:48 GMT):
Has joined the channel.
kostas (Mon, 08 May 2017 14:20:38 GMT):
@bh4rtp: I can point you to the `orderer-M-kafka-N` environments in https://github.com/hyperledger/fabric/tree/master/bddtests/environments, where you can do a `docker-compose up` and be good to go, but I want to point you to an end-to-end example of using Kafka. Please track https://jira.hyperledger.org/browse/FAB-3289 which may even be marked as DONE today.
bh4rtp (Tue, 09 May 2017 00:19:53 GMT):
@kostas excellent! i have been waiting for a long time. :grinning:
amber-zhang (Tue, 09 May 2017 00:52:30 GMT):
Has joined the channel.
michele (Tue, 09 May 2017 09:05:43 GMT):
Has joined the channel.
tom.appleyard (Tue, 09 May 2017 11:20:57 GMT):
What is in orderer.block exactly?
tom.appleyard (Tue, 09 May 2017 11:21:41 GMT):
As in what information does it hold and what exactly is done with it when you stand up the network?
kostas (Tue, 09 May 2017 13:07:57 GMT):
@tom.appleyard: This is the genesis block. It encodes information about the ordering service (what are the addresses of the ordering nodes? what is the hashing algorithm?) and the consortiums that are available when the network is being bootstrapped (groups of orgs that can create channel w/ each other, along with the policy that should apply when such a channel is created). The ordering nodes need this block in their (system channel) ledger so that they can be bootstrapped. They read it and operate based on what's written there.
tom.appleyard (Tue, 09 May 2017 14:58:51 GMT):
@kostas Thanks! Couple of questions:
How would you add new ordering nodes to it?
What do you mean by hashing algorithm? (presumably this is pluggable, as such what is available in hyperledger and what are the respective advantages?)
Could you expand on what you mean by "groups of orgs that can create channel w/ each other" - am I to infer from this that only certain orgs can create channels with each other? Are the orgs grouped in any formal way?
What do you mean by "the policy that should apply when such a channel is created"?
What is kept in the ordering nodes' system channel? (I assume this is a channel in the sense of how it is with peers)
kostas (Wed, 10 May 2017 00:51:51 GMT):
@tom.appleyard:
> How would you add new ordering nodes to it?
An ordering service administrator would have to submit a configuration update transaction to the system channel, updating the `OrdererAddresses` value of the system channel, and adding new ordering orgs if necessary.
> What do you mean by hashing algorithm? (presumably this is pluggable, as such what is available in hyperledger and what are the respective advantages?)
Wherever you need a hash (whether for building the Merkle tree or for hashing the previous block), you would use the hashing function encoded in the genesis block. Note that even though we encode this in the genesis block, no component actually reads that value for its hashing operations. (Thanks to @jyellick for noting this.) So this was perhaps not the best example I could come up with in the original response.
> Could you expand on what you mean by "groups of orgs that can create channel w/ each other" - am I to infer from this that only certain orgs can create channels with each other? Are the orgs grouped in any formal way?
See `source/configtx.rst`. (Yes, and yes, via the definition of consortiums under the "Consortiums" config group in the system channel.)
> What do you mean by "the policy that should apply when such a channel is created"?
See `source/policies.rst` (Look for "ChannelCreationPolicy" per consortium.)
> What is kept in the ordering nodes' system channel? (I assume this is a channel in the sense of how it is with peers)
See `source/configtx.rst`
kostas (Wed, 10 May 2017 00:52:01 GMT):
If there are follow-up questions, let me know.
jojialex2 (Wed, 10 May 2017 03:56:54 GMT):
Has joined the channel.
lenin.mehedy (Wed, 10 May 2017 04:35:18 GMT):
Has joined the channel.
yacovm (Wed, 10 May 2017 06:44:42 GMT):
https://chat.hyperledger.org/channel/fabric?msg=846tyXmXqggsQ5ZMt
nickmelis (Wed, 10 May 2017 10:34:08 GMT):
^^ Same error here! Any idea what's causing it?
kostas (Wed, 10 May 2017 12:18:30 GMT):
@nickmelis @LordGoodman Can you please attach the Docker Compose YAML and the scripts you used to get there?
LordGoodman (Wed, 10 May 2017 12:18:30 GMT):
Has joined the channel.
nickmelis (Wed, 10 May 2017 13:10:44 GMT):
```
version: '2'

services:

  orderer.example.com:
    container_name: orderer.example.com
    image: hyperledger/fabric-orderer:${ARCH_TAG}-1.0.0-alpha
    environment:
      - ORDERER_GENERAL_LOGLEVEL=debug
      - ORDERER_GENERAL_LISTENADDRESS=0.0.0.0
      - ORDERER_GENERAL_GENESISMETHOD=file
      - ORDERER_GENERAL_GENESISFILE=/var/hyperledger/orderer/orderer.block
      - ORDERER_GENERAL_LOCALMSPID=OrdererMSP
      - ORDERER_GENERAL_LOCALMSPDIR=/var/hyperledger/orderer/msp
    working_dir: /opt/gopath/src/github.com/hyperledger/fabric
    command: orderer
    volumes:
      - ./orderer.block:/var/hyperledger/orderer/orderer.block
      - ./crypto-config/ordererOrganizations/example.com/orderers/orderer.example.com:/var/hyperledger/orderer
    ports:
      - 7050:7050

  peer0.org1.example.com:
    container_name: peer0.org1.example.com
    extends:
      file: peer-base/peer-base-no-tls.yaml
      service: peer-base
    environment:
      - CORE_PEER_ID=peer0.org1.example.com
      - CORE_PEER_LOCALMSPID=Org0MSP
    volumes:
      - /var/run/:/host/var/run/
      - ./crypto-config/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/msp:/etc/hyperledger/fabric/msp/sampleconfig
    ports:
      - 7051:7051
      - 7053:7053
    depends_on:
      - orderer.example.com

  peer1.org1.example.com:
    container_name: peer1.org1.example.com
    extends:
      file: peer-base/peer-base-no-tls.yaml
      service: peer-base
    environment:
      - CORE_PEER_ID=peer1.org1.example.com
      - CORE_PEER_GOSSIP_BOOTSTRAP=peer0:7051
      - CORE_PEER_LOCALMSPID=Org0MSP
    volumes:
      - /var/run/:/host/var/run/
      - ./crypto-config/peerOrganizations/org1.example.com/peers/peer1.org1.example.com/msp:/etc/hyperledger/fabric/msp/sampleconfig
    ports:
      - 8051:7051
      - 8053:7053
    depends_on:
      - orderer.example.com
      - peer0.org1.example.com

  peer0.org2.example.com:
    container_name: peer0.org2.example.com
    extends:
      file: peer-base/peer-base-no-tls.yaml
      service: peer-base
    environment:
      - CORE_PEER_ID=peer0.org2.example.com
      #- CORE_PEER_GOSSIP_BOOTSTRAP=peer2:7051
      - CORE_PEER_LOCALMSPID=Org1MSP
    volumes:
      - /var/run/:/host/var/run/
      - ./crypto-config/peerOrganizations/org2.example.com/peers/peer0.org2.example.com/msp:/etc/hyperledger/fabric/msp/sampleconfig
    ports:
      - 9051:7051
      - 9053:7053
    depends_on:
      - orderer.example.com
      - peer0.org1.example.com
      - peer1.org1.example.com
      # - couchdb2

  peer1.org2.example.com:
    container_name: peer1.org2.example.com
    extends:
      file: peer-base/peer-base-no-tls.yaml
      service: peer-base
    environment:
      - CORE_PEER_ID=peer1.org2.example.com
      - CORE_PEER_GOSSIP_BOOTSTRAP=peer0.org2.example.com:7051
      - CORE_PEER_LOCALMSPID=Org1MSP
    volumes:
      - /var/run/:/host/var/run/
      - ./crypto-config/peerOrganizations/org2.example.com/peers/peer1.org2.example.com/msp:/etc/hyperledger/fabric/msp/sampleconfig
    ports:
      - 10051:7051
      - 10053:7053
    depends_on:
      - orderer.example.com
      - peer0.org1.example.com
      - peer1.org1.example.com
      - peer0.org2.example.com

  cli:
    container_name: cli
    image: hyperledger/fabric-peer:${ARCH_TAG}-1.0.0-alpha
    tty: true
    environment:
      - GOPATH=/opt/gopath
      - CORE_PEER_ADDRESSAUTODETECT=true
      - CORE_VM_ENDPOINT=unix:///host/var/run/docker.sock
      - CORE_LOGGING_LEVEL=DEBUG
      - CORE_PEER_ID=cli
      - CORE_PEER_ENDORSER_ENABLED=true
      - CORE_PEER_ADDRESS=peer0.org1.example.com:7051
      - CORE_PEER_GOSSIP_IGNORESECURITY=true
      - CORE_PEER_LOCALMSPID=Org0MSP
    working_dir: /opt/gopath/src/github.com/hyperledger/fabric/peer
    # command: /bin/bash -c './scripts/script.sh ${CHANNEL_NAME}; '
    volumes:
      - /var/run/:/host/var/run/
      - ./chaincodes:/opt/gopath/src/github.com/hyperledger/fabric/examples/chaincode
      - ./crypto-config:/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/
      - ./scripts:/opt/gopath/src/github.com/hyperledger/fabric/peer/scripts/
      - ./channel.tx:/opt/gopath/src/github.com/hyperledger/fabric/peer/channel.tx
    depends_on:
      - orderer.example.com
      - peer0.org1.example.com
      - peer1.org1.example.com
      - peer0.org2.example.com
      - peer1.org2.example.com
```
nickmelis (Wed, 10 May 2017 13:11:13 GMT):
sorry I can't find how to attach a snippet (like on Slack)
nickmelis (Wed, 10 May 2017 13:11:54 GMT):
btw, I just followed the steps in http://hyperledger-fabric.readthedocs.io/en/latest/getting_started.html
kostas (Wed, 10 May 2017 14:15:35 GMT):
@nickmelis Thanks for following up. Let me investigate what's going on. Will update you.
nickmelis (Wed, 10 May 2017 14:16:15 GMT):
thanks @kostas. It might as well be an error on my side, but just by looking at the code where the error is thrown, I can't quite understand the reasons
nickmelis (Wed, 10 May 2017 14:16:28 GMT):
let me know if you need me to do some more tests
jyellick (Wed, 10 May 2017 14:35:42 GMT):
@nickmelis What commit is this at? A change was recently merged https://gerrit.hyperledger.org/r/#/c/9111/ which fixes how the orderer recognizes the ordering system channel.
jyellick (Wed, 10 May 2017 14:36:15 GMT):
If you are using a genesis block for the ordering system channel which was generated with older tooling, you will see this crash at startup.
nickmelis (Wed, 10 May 2017 14:42:42 GMT):
@jyellick I just donwloaded the whole package from https://logs.hyperledger.org/sandbox/vex-yul-hyp-jenkins-2/fabric-binaries-x86_64/release.tar.gz
nickmelis (Wed, 10 May 2017 14:42:54 GMT):
as suggested in the guide
kostas (Wed, 10 May 2017 14:46:12 GMT):
This file definitely doesn't have the fix, hmm.
nickmelis (Wed, 10 May 2017 14:48:08 GMT):
is there another way to get everything up and running with little hassle? And perhaps with the latest code?
nickmelis (Wed, 10 May 2017 14:48:18 GMT):
are docker images rebuilt every day?
LordGoodman (Wed, 10 May 2017 14:49:40 GMT):
Docker images aren't rebuilt every day.
kostas (Wed, 10 May 2017 14:49:41 GMT):
@nickmelis: Yes. Assuming you have `git clone'd` the project, `make docker` from the project directory, then cd to the `e2e_cli` dir, do `./network_setup.sh restart`.
nickmelis (Wed, 10 May 2017 14:50:45 GMT):
@kostas brilliant. I haven't cloned, but that's an easy one. Let me try and get back to you
nickmelis (Wed, 10 May 2017 14:58:23 GMT):
```can't load package: package github.com/hyperledger/fabric/core/chaincode/shim: cannot find package "github.com/hyperledger/fabric/core/chaincode/shim" in any of:
/usr/local/Cellar/go/1.7.4_1/libexec/src/github.com/hyperledger/fabric/core/chaincode/shim (from $GOROOT)
($GOPATH not set)
find: /src/github.com/hyperledger/fabric/core/chaincode/shim: No such file or directory
```
nickmelis (Wed, 10 May 2017 14:58:38 GMT):
I guess this is because $GOPATH is not set right? What value should I set it to?
nickmelis (Wed, 10 May 2017 14:58:50 GMT):
the main repo?
LordGoodman (Wed, 10 May 2017 14:59:14 GMT):
@nickmelis Did you successfully run `make release` at $GOPATH/src/github.com/hyperledger/fabric ?
bkvellanki (Wed, 10 May 2017 14:59:40 GMT):
@kostas How can we define the blocksize, blockheight, for a peer - From Any SDK, Or Interface in the Shim......Also, is there any example that shows sample policies (SignedPolicy,Implicit) and how and where to configure.. Is there a way to create our own Policy and how to do implement it..Any document or sample
LordGoodman (Wed, 10 May 2017 14:59:43 GMT):
GOPATH=/opt/gopath
nickmelis (Wed, 10 May 2017 14:59:52 GMT):
@LordGoodman I cloned the fabric repo, then ran `make docker` as suggested by @kostas
LordGoodman (Wed, 10 May 2017 15:00:35 GMT):
@nickmelis you are not in vagrant environment, right ?
nickmelis (Wed, 10 May 2017 15:00:51 GMT):
I'd like to run the whole thing with docker compose
nickmelis (Wed, 10 May 2017 15:01:02 GMT):
and I'm trying to build on my mac
kostas (Wed, 10 May 2017 15:01:59 GMT):
Ah, I had taken it as a given that you had Go installed and your workspace set up properly. I shouldn't have.
kostas (Wed, 10 May 2017 15:02:11 GMT):
Do you have Go installed?
kostas (Wed, 10 May 2017 15:02:50 GMT):
(I will note that you're better off running this from within the vagrant environment.)
nickmelis (Wed, 10 May 2017 15:02:59 GMT):
should have installed a while ago via Homebrew, but let me check
nickmelis (Wed, 10 May 2017 15:03:14 GMT):
yup, just updating it
kostas (Wed, 10 May 2017 15:04:36 GMT):
Does your `$PATH` include the `go/bin` folder?
kostas (Wed, 10 May 2017 15:04:44 GMT):
`echo $PATH` and check
nickmelis (Wed, 10 May 2017 15:05:05 GMT):
`/usr/local/opt/go/libexec/bin` check
nickmelis (Wed, 10 May 2017 15:05:12 GMT):
it's in the $PATH
nickmelis (Wed, 10 May 2017 15:05:28 GMT):
also $GOPATH is set to `/opt/gopath`
nickmelis (Wed, 10 May 2017 15:05:38 GMT):
as per @LordGoodman 's suggestion
kostas (Wed, 10 May 2017 15:05:40 GMT):
Cool, do a `go version` as well to be sure you like the output?
nickmelis (Wed, 10 May 2017 15:05:53 GMT):
`go version go1.8.1 darwin/amd64`
kostas (Wed, 10 May 2017 15:05:57 GMT):
Nice
nickmelis (Wed, 10 May 2017 15:06:12 GMT):
however I still get
```can't load package: package github.com/hyperledger/fabric/core/chaincode/shim: cannot find package "github.com/hyperledger/fabric/core/chaincode/shim" in any of:
/usr/local/Cellar/go/1.8.1/libexec/src/github.com/hyperledger/fabric/core/chaincode/shim (from $GOROOT)
/Users/thinkitconsulting/go/src/github.com/hyperledger/fabric/core/chaincode/shim (from $GOPATH)
find: /src/github.com/hyperledger/fabric/core/chaincode/shim: No such file or directory```
nickmelis (Wed, 10 May 2017 15:06:17 GMT):
when doing `make docker`
kostas (Wed, 10 May 2017 15:06:24 GMT):
The `$GOPATH = /opt/gopath` makes sense for the Vagrant environment, but not necessarily so for your local env
kostas (Wed, 10 May 2017 15:06:28 GMT):
Hold on, not done yet
nickmelis (Wed, 10 May 2017 15:06:37 GMT):
ok what should I set it to?
nickmelis (Wed, 10 May 2017 15:06:43 GMT):
is it the path to the main fabric repo?
LordGoodman (Wed, 10 May 2017 15:07:15 GMT):
fabric should be under that path
kostas (Wed, 10 May 2017 15:07:34 GMT):
https://golang.org/doc/code.html#GOPATH
kostas (Wed, 10 May 2017 15:08:19 GMT):
Whatever you set it to, make sure that that your cloned repo ends up residing in: `$GOPATH/src/github.com/hyperledger/fabric`
kostas (Wed, 10 May 2017 15:09:15 GMT):
So, if you set `$GOPATH` to `/Users/nick/go` (which I think is the default actually), then do:
kostas (Wed, 10 May 2017 15:09:18 GMT):
`cd $GOPATH`
kostas (Wed, 10 May 2017 15:09:35 GMT):
`mkdir -p src/github.com/hyperledger/`
kostas (Wed, 10 May 2017 15:09:43 GMT):
`cd` to that folder
kostas (Wed, 10 May 2017 15:09:47 GMT):
and `git clone` from there
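Collected into one place, the workspace setup described above looks like the following sketch (the `$HOME/go` default is an example; any directory works as long as the repo ends up under `src/github.com/hyperledger/fabric`):

```
# Set up a Go workspace so the clone lands where `make docker` expects it.
export GOPATH="${GOPATH:-$HOME/go}"          # default GOPATH; any dir works
FABRIC_PARENT="$GOPATH/src/github.com/hyperledger"
mkdir -p "$FABRIC_PARENT"
cd "$FABRIC_PARENT"
# Clone and build (printed rather than executed, to keep this sketch cheap):
echo "git clone https://github.com/hyperledger/fabric.git"
echo "cd fabric && make docker"
```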
nickmelis (Wed, 10 May 2017 15:10:12 GMT):
exactly, I think that's the problem. It's looking for something inside `src/github.com/hyperledger/fabric/core/chaincode/shim (from $GOPATH)`
nickmelis (Wed, 10 May 2017 15:10:18 GMT):
but I cloned into a different folder
kostas (Wed, 10 May 2017 15:10:46 GMT):
And you tried now w/ the instructions I gave you and it still fails?
nickmelis (Wed, 10 May 2017 15:11:01 GMT):
cloning to the right folder...brb
LordGoodman (Wed, 10 May 2017 15:12:49 GMT):
No way to generate the new configtxgen
nickmelis (Wed, 10 May 2017 15:13:10 GMT):
ok that looks better already
```$ make docker
Building build/docker/bin/peer
```
nickmelis (Wed, 10 May 2017 15:13:51 GMT):
whoops, talked too soon :(
```Step 3 : ADD payload/goshim.tar.bz2 $GOPATH/src/
Error processing tar file(bzip2 data invalid: bad magic value in continuation file):
make: *** [build/image/ccenv/.dummy-x86_64-1.0.0-snapshot-132817bd] Error 1
```
kostas (Wed, 10 May 2017 15:13:51 GMT):
@LordGoodman: `make configtxgen`?
kostas (Wed, 10 May 2017 15:14:11 GMT):
@nickmelis: That's a known bug in macOS, let me get you the fix.
nickmelis (Wed, 10 May 2017 15:14:18 GMT):
thanks!
LordGoodman (Wed, 10 May 2017 15:14:28 GMT):
@kostas yes, it failed
kostas (Wed, 10 May 2017 15:14:39 GMT):
@nickmelis: http://stackoverflow.com/questions/41465720/error-building-peer-bzip2-data-invalid-in-goshim-tar-bz2
kostas (Wed, 10 May 2017 15:14:55 GMT):
@LordGoodman: Can't help you without a stack trace and details of your environment.
LordGoodman (Wed, 10 May 2017 15:16:06 GMT):
@kostas like this: `*** No rule to make target 'release/linux-amd64/bin/configtxgen', needed by 'release/linux-amd64'. Stop.`
kostas (Wed, 10 May 2017 15:16:28 GMT):
@LordGoodman: Which env are you running this on?
LordGoodman (Wed, 10 May 2017 15:16:37 GMT):
ubuntu 16.0.1 LTS, no vagrant
kostas (Wed, 10 May 2017 15:17:37 GMT):
Can't help you since I'm on macOS (and I work from within vagrant most of the time). Ask in #fabric maybe?
LordGoodman (Wed, 10 May 2017 15:18:00 GMT):
I am home now, I will put more details tomorrow
LordGoodman (Wed, 10 May 2017 15:18:08 GMT):
@kostas Ok thanks
kostas (Wed, 10 May 2017 15:18:51 GMT):
@bkvellanki: https://chat.hyperledger.org/channel/fabric-consensus?msg=4sBDn9sLxiyAGMenj
In order to define the blocksize, you need to make sure you write the desired values in the genesis block of the ordering service. For instance, if you are using `configtxgen` to generate that block, you would need to edit this section https://github.com/hyperledger/fabric/blob/master/sampleconfig/configtx.yaml#L145..L161 and point `configtxgen` to a profile that uses these values.
kostas (Wed, 10 May 2017 15:19:12 GMT):
For example, SampleSingleMSPSolo https://github.com/hyperledger/fabric/blob/master/sampleconfig/configtx.yaml#L42 uses that section https://github.com/hyperledger/fabric/blob/master/sampleconfig/configtx.yaml#L44
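For reference, the shape of the section in question defines the orderer's batching knobs; values here are illustrative — check `sampleconfig/configtx.yaml` in your own tree for the actual defaults:

```
Orderer: &OrdererDefaults
    BatchTimeout: 2s              # how long to wait before cutting a block
    BatchSize:
        MaxMessageCount: 10       # max transactions per block
        AbsoluteMaxBytes: 99 MB   # hard cap on serialized block size
        PreferredMaxBytes: 512 KB # soft target for block size
```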
LordGoodman (Wed, 10 May 2017 15:19:43 GMT):
can I join the Hyperledger Slack?
kostas (Wed, 10 May 2017 15:20:22 GMT):
For blockheight, this is not something you define. The block height is the index of a block in the chain. The more blocks you add to the chain, the bigger the height.
kostas (Wed, 10 May 2017 15:20:34 GMT):
@LordGoodman: We are no longer using Slack.
kostas (Wed, 10 May 2017 15:20:52 GMT):
@bkvellanki: Not sure what "interface in the shim" means.
kostas (Wed, 10 May 2017 15:21:10 GMT):
For policies, see `source/policies.rst`
nickmelis (Wed, 10 May 2017 15:23:00 GMT):
`It is worth noting that if you've gotten the error, then you need to nuke the fabric directory, and pull it again from the repo. Just running 'make peer' again won't work (the messed up bz2 file is still there).`
nickmelis (Wed, 10 May 2017 15:23:14 GMT):
does it mean delete and clone from scratch?
kostas (Wed, 10 May 2017 15:23:26 GMT):
@nickmelis: Which error?
nickmelis (Wed, 10 May 2017 15:23:52 GMT):
I've installed gnu-tar as suggested in the StackOverflow link above, but still getting the same error
nickmelis (Wed, 10 May 2017 15:24:14 GMT):
and in one of the comments it says you need to nuke the fabric dir
kostas (Wed, 10 May 2017 15:24:23 GMT):
@nickmelis: Try `make dist-clean` first?
kostas (Wed, 10 May 2017 15:24:30 GMT):
Nuking the dir seems a bit excessive to me.
kostas (Wed, 10 May 2017 15:26:56 GMT):
@bkvellanki: As best as I can tell, there is no easy way to modify policies. You can modify the default policies `configtxgen` encodes in the genesis block by messing around with its source code, but that is probably not what you're after. Keep track of https://jira.hyperledger.org/browse/FAB-1678 which is tangentially related. @jyellick are you aware of any plans for 1.0 to make creation of policies easier?
nickmelis (Wed, 10 May 2017 15:27:00 GMT):
```Step 3 : ADD payload/goshim.tar.bz2 $GOPATH/src/
Error processing tar file(bzip2 data invalid: bad magic value in continuation file):
make: *** [build/image/ccenv/.dummy-x86_64-1.0.0-snapshot-132817bd] Error 1```
nickmelis (Wed, 10 May 2017 15:27:02 GMT):
Same error I'm afraid
kostas (Wed, 10 May 2017 15:27:39 GMT):
@nickmelis: Let me ping @greg.haskins in case he can help.
nickmelis (Wed, 10 May 2017 15:27:50 GMT):
thanks a lot @kostas
kostas (Wed, 10 May 2017 15:28:01 GMT):
Greg, I pointed Nick to https://chat.hyperledger.org/channel/fabric-consensus?msg=2wT4PFnTNb4NcyywK and he's still getting this error. Any ideas?
greg.haskins (Wed, 10 May 2017 15:28:22 GMT):
_looks_
greg.haskins (Wed, 10 May 2017 15:30:38 GMT):
@nickmelis @kostas a "make clean" should be performed after installing the updated gnu-tar
greg.haskins (Wed, 10 May 2017 15:30:40 GMT):
was this done?
greg.haskins (Wed, 10 May 2017 15:30:52 GMT):
i need to update the instructions, as that was omitted
kostas (Wed, 10 May 2017 15:30:59 GMT):
I suggested `make dist-clean`: https://chat.hyperledger.org/channel/fabric-consensus?msg=WnwW6uKTMZKp7nXpy
kostas (Wed, 10 May 2017 15:31:05 GMT):
Wouldn't that be enough?
greg.haskins (Wed, 10 May 2017 15:31:07 GMT):
ok, thats a superset
kostas (Wed, 10 May 2017 15:31:10 GMT):
Right
greg.haskins (Wed, 10 May 2017 15:31:18 GMT):
yeah, dist-clean is superset of clean, so that should work
kostas (Wed, 10 May 2017 15:31:19 GMT):
Still same issue though, I'll let Nick chime in.
greg.haskins (Wed, 10 May 2017 15:31:30 GMT):
next thing to check is whether the gnu-tar actually is available
greg.haskins (Wed, 10 May 2017 15:31:37 GMT):
e.g. "tar --version"
nickmelis (Wed, 10 May 2017 15:31:41 GMT):
@greg.haskins what instructions are you talking about? I'm following this doc: http://hyperledger-fabric.readthedocs.io/en/latest/getting_started.html
kostas (Wed, 10 May 2017 15:31:58 GMT):
(The S/O ones, I guess.)
nickmelis (Wed, 10 May 2017 15:32:04 GMT):
mmh..interesting:
```$ tar --version
bsdtar 2.8.3 - libarchive 2.8.3```
greg.haskins (Wed, 10 May 2017 15:32:13 GMT):
@nickmelis sorry, the SO instructions: http://stackoverflow.com/questions/41465720/error-building-peer-bzip2-data-invalid-in-goshim-tar-bz2
greg.haskins (Wed, 10 May 2017 15:32:45 GMT):
so that's your basic problem - until that returns GNU tar, you'll have the problem
nickmelis (Wed, 10 May 2017 15:33:00 GMT):
@greg.haskins I just did `brew install gnu-tar --with-default-names` but I'm still using bsdtar apparently
greg.haskins (Wed, 10 May 2017 15:33:01 GMT):
check for aliases
greg.haskins (Wed, 10 May 2017 15:33:17 GMT):
sometimes people have an alias set, or a weird $PATH override
greg.haskins (Wed, 10 May 2017 15:33:37 GMT):
so the usual suspects: "which tar", "alias | grep tar"
greg.haskins (Wed, 10 May 2017 15:33:51 GMT):
I think the homebrew one installs in /usr/local/bin
greg.haskins (Wed, 10 May 2017 15:34:02 GMT):
you just need to figure out why that doesnt resolve to "tar"
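A quick check for the situation described here — the detection logic below is illustrative; GNU tar identifies itself as `tar (GNU tar) X.Y`, while macOS's default reports `bsdtar`:

```
# Detect which tar the shell resolves to (diagnostic sketch).
if tar --version 2>/dev/null | grep -qi 'GNU tar'; then
  echo "GNU tar found; the goshim.tar.bz2 build step should work"
else
  echo "bsdtar (or no tar) detected; install gnu-tar, put it first in PATH,"
  echo "then run 'make dist-clean' to clear the corrupt goshim.tar.bz2"
fi
```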
nickmelis (Wed, 10 May 2017 15:34:27 GMT):
```$ which tar
/usr/bin/tar```
nickmelis (Wed, 10 May 2017 15:34:49 GMT):
`lrwxr-xr-x 1 root wheel 6B Nov 5 2016 tar -> bsdtar`
nickmelis (Wed, 10 May 2017 15:34:53 GMT):
there we go
greg.haskins (Wed, 10 May 2017 15:35:49 GMT):
bingo
nickmelis (Wed, 10 May 2017 15:35:56 GMT):
guess I need to replace it with `/usr/local/Cellar/gnu-tar/1.29_1/bin/`, right?
greg.haskins (Wed, 10 May 2017 15:36:13 GMT):
well, look in /usr/local/bin first
greg.haskins (Wed, 10 May 2017 15:36:32 GMT):
i suspect you have /usr/local/bin/tar from the brew install
nickmelis (Wed, 10 May 2017 15:36:51 GMT):
I haven't
nickmelis (Wed, 10 May 2017 15:36:59 GMT):
should I have it there?
nickmelis (Wed, 10 May 2017 15:46:33 GMT):
looks like I can't delete the symlink to bsdtar I have in /usr/bin, and creating a symlink to gnu-tar in /usr/local/bin doesn't have any effect
nickmelis (Wed, 10 May 2017 15:46:58 GMT):
how can I tell `make` to use gnu-tar instead of the other one?
HansDeLeenheer (Wed, 10 May 2017 15:52:36 GMT):
Has joined the channel.
nickmelis (Wed, 10 May 2017 16:03:56 GMT):
@greg.haskins I set `alias tar=/usr/local/bin/tar`, and now `tar --version` returns `tar (GNU tar) 1.29`
nickmelis (Wed, 10 May 2017 16:04:13 GMT):
however the build still fails...should I `make clean` first?
nickmelis (Wed, 10 May 2017 16:07:06 GMT):
it doesn't fix the error. Any clue what else may be causing it?
greg.haskins (Wed, 10 May 2017 16:17:53 GMT):
@nickmelis you still see the bzip2 error?
greg.haskins (Wed, 10 May 2017 16:17:58 GMT):
or is it a different error?
greg.haskins (Wed, 10 May 2017 16:18:08 GMT):
(and yes, the clean should have fixed it
greg.haskins (Wed, 10 May 2017 16:18:34 GMT):
i suppose its possible that the alias is not taking effect in the build somehow
greg.haskins (Wed, 10 May 2017 16:18:47 GMT):
can you address with a $PATH update rather than an alias?
greg.haskins (Wed, 10 May 2017 16:19:05 GMT):
e.g. export PATH=/usr/local/bin:$PATH
greg.haskins (Wed, 10 May 2017 16:20:25 GMT):
@nickmelis also suggest running "brew update && brew doctor"
greg.haskins (Wed, 10 May 2017 16:20:34 GMT):
to ensure your environment is healthy
greg.haskins (Wed, 10 May 2017 16:20:56 GMT):
it seems weird to me that the /usr/local/bin path wasn't updated
nickmelis (Wed, 10 May 2017 16:23:27 GMT):
ok I fixed a few issues with brew. I'll try to reinstall gnu-tar now
nickmelis (Wed, 10 May 2017 16:26:41 GMT):
fixed brew, unlinked-and-relinked tar, removed alias. Fingers crossed
jyellick (Wed, 10 May 2017 16:41:44 GMT):
[ ](https://chat.hyperledger.org/channel/fabric-consensus?msg=oDAfxA52wrHhzQDTa)
The work being done around https://jira.hyperledger.org/browse/FAB-1678 should make policy definitions easier, but improving the ease of policy editing is a longer term goal
kostas (Wed, 10 May 2017 17:40:28 GMT):
@nickmelis @LordGoodman Modified the E2E CLI test to check whether there are any issues w/ Kafka, and I see none.
kostas (Wed, 10 May 2017 17:40:37 GMT):
For the record, these were the modifications I did: https://github.com/kchristidis/fabric/commit/7f83f40fdd8c5cd0a837a820f728f3b864e1cce2
kostas (Wed, 10 May 2017 17:41:53 GMT):
(The timeout on the peer side is certainly generous, this should ultimately be patched like so: https://jira.hyperledger.org/browse/FAB-2982)
lenin.mehedy (Thu, 11 May 2017 00:25:50 GMT):
@kostas @nickmelis I have got e2e_cli working with kafka, but I had to modify the script.sh a bit to avoid that timing issue. Here is what my script.sh looks like for channel creation:
```
createChannel() {
  setGlobals 0
  if [ -z "$CORE_PEER_TLS_ENABLED" -o "$CORE_PEER_TLS_ENABLED" = "false" ]; then
    peer channel create -o orderer.example.com:7050 -c $CHANNEL_NAME -f ./channel-artifacts/channel.tx >&log.txt
  else
    peer channel create -o orderer.example.com:7050 -c $CHANNEL_NAME -f ./channel-artifacts/channel.tx --tls $CORE_PEER_TLS_ENABLED --cafile $ORDERER_CA >&log.txt
  fi
  sleep 3
  fetchChannelConfig
}

fetchChannelConfig() {
  setGlobals 0
  if [ -z "$CORE_PEER_TLS_ENABLED" -o "$CORE_PEER_TLS_ENABLED" = "false" ]; then
    peer channel fetch -o orderer.example.com:7050 -c $CHANNEL_NAME -f ./channel-artifacts/channel.tx >&log.txt
  else
    peer channel fetch -o orderer.example.com:7050 -c $CHANNEL_NAME -f ./channel-artifacts/channel.tx --tls $CORE_PEER_TLS_ENABLED --cafile $ORDERER_CA >&log.txt
  fi
  res=$?
  cat log.txt
  verifyResult $res "Channel creation failed"
  echo "===================== Channel \"$CHANNEL_NAME\" is created successfully ===================== "
  echo
}
```
lenin.mehedy (Thu, 11 May 2017 00:26:44 GMT):
and my docker-compose-cli.yaml is like below:
```
version: '2'

services:
  orderer.example.com:
    extends:
      file: base/docker-compose-base.yaml
      service: orderer.example.com
    container_name: orderer.example.com
    depends_on:
      - zookeeper.example.com
      - kafka.example.com

  peer0.org1.example.com:
    container_name: peer0.org1.example.com
    extends:
      file: base/docker-compose-base.yaml
      service: peer0.org1.example.com

  peer1.org1.example.com:
    container_name: peer1.org1.example.com
    extends:
      file: base/docker-compose-base.yaml
      service: peer1.org1.example.com

  peer0.org2.example.com:
    container_name: peer0.org2.example.com
    extends:
      file: base/docker-compose-base.yaml
      service: peer0.org2.example.com

  peer1.org2.example.com:
    container_name: peer1.org2.example.com
    extends:
      file: base/docker-compose-base.yaml
      service: peer1.org2.example.com

  zookeeper.example.com:
    image: hyperledger/fabric-zookeeper
    container_name: zookeeper.example.com

  kafka.example.com:
    image: hyperledger/fabric-kafka
    container_name: kafka.example.com
    environment:
      KAFKA_UNCLEAN_LEADER_ELECTION_ENABLE: "false"
      KAFKA_ZOOKEEPER_CONNECT: zookeeper.example.com:2181
    ports:
      - 9092:9092
    depends_on:
      - zookeeper.example.com

  cli:
    container_name: cli
    image: hyperledger/fabric-testenv
    tty: true
    environment:
      - GOPATH=/opt/gopath
      - CORE_VM_ENDPOINT=unix:///host/var/run/docker.sock
      - CORE_LOGGING_LEVEL=DEBUG
      - CORE_PEER_ID=cli
      - CORE_PEER_ADDRESS=peer0.org1.example.com:7051
      - CORE_PEER_LOCALMSPID=Org1MSP
      - CORE_PEER_TLS_ENABLED=true
      - CORE_PEER_TLS_CERT_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/tls/server.crt
      - CORE_PEER_TLS_KEY_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/tls/server.key
      - CORE_PEER_TLS_ROOTCERT_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/tls/ca.crt
      - CORE_PEER_MSPCONFIGPATH=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1.example.com/users/Admin@org1.example.com/msp
    working_dir: /opt/gopath/src/github.com/hyperledger/fabric/peer
    # command: /bin/bash -c './scripts/script.sh ${CHANNEL_NAME}; sleep $TIMEOUT'
    volumes:
      - /var/run/:/host/var/run/
      - ./examples:/opt/gopath/src/github.com/hyperledger/fabric/examples
      - ./crypto-config:/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/
      - ./scripts:/opt/gopath/src/github.com/hyperledger/fabric/peer/scripts/
      - ./channel-artifacts:/opt/gopath/src/github.com/hyperledger/fabric/peer/channel-artifacts
    depends_on:
      - orderer.example.com
      - peer0.org1.example.com
      - peer1.org1.example.com
      - peer0.org2.example.com
      - peer1.org2.example.com
      - zookeeper.example.com
      - kafka.example.com
```
kostas (Thu, 11 May 2017 00:31:36 GMT):
@lenin.mehedy: Ah right, that would work as well. Thanks for sharing the code!
lenin.mehedy (Thu, 11 May 2017 00:34:18 GMT):
@kostas Thanks. I am actually creating a getting started guide/tutorial deck for our team here. The goal is to be able to start up a network with various optional components (e.g. kafka, couchdb, fabric-ca) and test using cli/SDK. I shall keep you posted if I can finalize the tutorial pack.
kostas (Thu, 11 May 2017 00:34:45 GMT):
That'll be awesome - looking forward to it.
lenin.mehedy (Thu, 11 May 2017 00:36:28 GMT):
Thanks @kostas. Can you give me a recommendation on the SDKs? Are the Go or Java SDKs stable enough?
kostas (Thu, 11 May 2017 00:41:00 GMT):
I'll let the domain experts comment on their stability -- @rickr is working on the Java SDK, and @jimthematrix on the Node.js one.
bh4rtp (Thu, 11 May 2017 00:48:26 GMT):
@lenin.mehedy in fact, your kafka does not take part in the chain. it only starts up and runs separately, without performing any consensus function.
lenin.mehedy (Thu, 11 May 2017 00:51:22 GMT):
Yeah, I understand that it runs separately. I think the idea of using Kafka is to order transactions in a crash fault tolerant manner, not a Byzantine fault tolerant one (https://docs.google.com/document/d/1vNMaM7XhOlu9tB_10dKnlrhy5d7b1u8lSY8a-kVjCO4/edit). I am still trying to understand more about the HL Fabric 1.0 architecture, though.
bh4rtp (Thu, 11 May 2017 00:58:26 GMT):
a month ago, i did the same as you mentioned above, but in fact it was far from a functioning kafka consensus. still, i felt as glad as you do. :grinning:
kostas (Thu, 11 May 2017 00:59:36 GMT):
@bh4rtp: What exactly does "functioning of kafka consensus" mean?
bh4rtp (Thu, 11 May 2017 01:03:57 GMT):
@kostas work in the network mentioned above between orderer.example.com.
kostas (Thu, 11 May 2017 01:04:22 GMT):
I do not follow.
kostas (Thu, 11 May 2017 01:06:13 GMT):
As I have pointed out in the example I posted here earlier today, if you create a genesis block that sets the orderer type to kafka and the broker addresses to those of the kafka brokers on your network, then the ordering service nodes in your system will use Kafka to agree on the ordering of messages.
kostas (Thu, 11 May 2017 01:06:15 GMT):
https://github.com/kchristidis/fabric/commit/7f83f40fdd8c5cd0a837a820f728f3b864e1cce2
kostas (Thu, 11 May 2017 01:06:26 GMT):
See the modification to the `e2e_cli/configtx.yaml` file.
kostas (Thu, 11 May 2017 01:06:58 GMT):
If you were to then run `./generateArtifacts.sh` (as is the case with the E2E CLI test), you'd end up with a genesis block that fits my description above.
kostas (Thu, 11 May 2017 01:07:34 GMT):
Basically this gives you a network that works exactly as described here: https://docs.google.com/document/d/1vNMaM7XhOlu9tB_10dKnlrhy5d7b1u8lSY8a-kVjCO4/edit
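The change kostas describes lives in the `Orderer` section of `e2e_cli/configtx.yaml`; a rough sketch of the relevant stanza (field names follow Fabric v1.0's configtx.yaml, values are illustrative and mirror the compose file above):

```yaml
Orderer: &OrdererDefaults
    OrdererType: kafka        # instead of the default "solo"
    Addresses:
        - orderer.example.com:7050
    BatchTimeout: 2s
    Kafka:
        Brokers:              # must point at the brokers in your network
            - kafka.example.com:9092
```

Running `./generateArtifacts.sh` then bakes these values into the genesis block, which is how the ordering service nodes learn to use Kafka.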
bh4rtp (Thu, 11 May 2017 01:08:39 GMT):
@kostas that's a good news. let me test it.
bh4rtp (Thu, 11 May 2017 01:56:10 GMT):
@kostas i have modified configtx.yaml, docker-compose-cli.yaml and peer/channel/create.go, but e2e_cli cannot pass.
```2017-05-11 09:49:21.113 CST [msp] GetDefaultSigningIdentity -> DEBU 005 Obtaining default signing identity
2017-05-11 09:49:41.114 CST [grpc] Printf -> DEBU 006 grpc: addrConn.resetTransport failed to create client transport: connection error: desc = "transport: credentials: Dial timed out"; Reconnecting to {"orderer.example.com:7050"
```
kostas (Thu, 11 May 2017 01:57:15 GMT):
Once you modifed `peer/channel/create.go`, did you `git add` the file, then did you do `make docker`, _before_ re-running the E2E test?
kostas (Thu, 11 May 2017 01:57:38 GMT):
Modifying the *.go files alone won't do anything, unless you create a new image.
bh4rtp (Thu, 11 May 2017 01:57:54 GMT):
yes, i have done `make native docker`.
kostas (Thu, 11 May 2017 01:58:19 GMT):
Did you `git add` the modified `create.go` file before running `make docker`?
kostas (Thu, 11 May 2017 01:59:35 GMT):
The error you are getting is indicative of a non-modified `create.go`, so I'm skeptical as to whether the process was followed.
bh4rtp (Thu, 11 May 2017 02:00:06 GMT):
not yet. why is `git add` needed? i only want to make a local modification.
kostas (Thu, 11 May 2017 02:00:49 GMT):
Because `make docker` uses the results of `git ls-files` to figure out whether it needs to build a new image.
kostas (Thu, 11 May 2017 02:02:35 GMT):
If you still run into issues after that, please follow up with a JIRA item. (File it under the "fabric-consensus" component and feel free to assign it to me. This way we create a point of reference for others who have the same issue.) But the process I outlined above should work w/o issues.
bh4rtp (Thu, 11 May 2017 02:02:56 GMT):
ok. thanks @kostas
bh4rtp (Thu, 11 May 2017 02:31:08 GMT):
@kostas i followed your instructions. now kafka ordering really does work. :v:
ermyas (Thu, 11 May 2017 06:39:52 GMT):
Has joined the channel.
nickmelis (Thu, 11 May 2017 08:54:50 GMT):
@kostas @greg.haskins I managed to run `make docker` yesterday with no errors. Is there an easy way to start up the cluster without having to generate all the certificates myself? What's the preferred way?
nickmelis (Thu, 11 May 2017 09:19:40 GMT):
trying to start e2e_cli/docker-compose-cli.yaml but getting several errors with signcerts...checking now
greg.haskins (Thu, 11 May 2017 11:01:33 GMT):
@nickmelis Check out examples/cluster
greg.haskins (Thu, 11 May 2017 11:02:16 GMT):
Do "make compose-up" and it will generate all the certs and launch a composition
nickmelis (Thu, 11 May 2017 11:02:28 GMT):
Hi @greg.haskins . Just posted on the main #fabric channel. Thought it was a good idea to run `make unit-test` first, and I got a failure
nickmelis (Thu, 11 May 2017 11:02:44 GMT):
see https://chat.hyperledger.org/channel/fabric?msg=qeHEPCHzX36Sa47cN
nickmelis (Thu, 11 May 2017 11:03:30 GMT):
is it a bad sign?
nickmelis (Thu, 11 May 2017 11:09:54 GMT):
also:
```$ make compose-up
GOBIN=/Users/nick/src/github.com/hyperledger/fabric/examples/cluster/build/bin go install github.com/hyperledger/fabric/common/tools/cryptogen
# github.com/hyperledger/fabric/vendor/github.com/miekg/pkcs11
../../vendor/github.com/miekg/pkcs11/pkcs11.go:29:10: fatal error: 'ltdl.h' file not found
#include
```
Vadim (Thu, 11 May 2017 11:14:35 GMT):
[ ](https://chat.hyperledger.org/channel/fabric-consensus?msg=hK84YW7uYD9nBTrmP) `yum install libtool-ltdl` or `apt-get install libltdl3-dev`
nickmelis (Thu, 11 May 2017 11:15:00 GMT):
I'm running on osx
dhuseby (Thu, 11 May 2017 14:18:13 GMT):
Has joined the channel.
toddinpal (Thu, 11 May 2017 17:27:34 GMT):
Can there be more than one ordering service in a fabric network?
kostas (Thu, 11 May 2017 17:42:35 GMT):
@toddinpal: Let's see -- what would be the point of this?
toddinpal (Thu, 11 May 2017 17:42:54 GMT):
Trying to join two existing networks into one
toddinpal (Thu, 11 May 2017 17:43:06 GMT):
my guess is no
kostas (Thu, 11 May 2017 17:43:09 GMT):
Not sure I follow?
toddinpal (Thu, 11 May 2017 17:43:16 GMT):
I can't see any way it could work...
kostas (Thu, 11 May 2017 17:43:59 GMT):
I'm still not sure what your motivation is.
toddinpal (Thu, 11 May 2017 17:45:37 GMT):
If I have two fabric networks already configured and running... want to join them into a single network
toddinpal (Thu, 11 May 2017 17:45:51 GMT):
but I can't see any way to make that work
kostas (Thu, 11 May 2017 17:46:38 GMT):
In the end you'll need to switch one network over to the other.
toddinpal (Thu, 11 May 2017 17:46:58 GMT):
right
kostas (Thu, 11 May 2017 17:47:56 GMT):
At any rate, I guess you _could_ have existing orgs use ordering service B if you wanted to, but (a) then you're really dealing with a different logical network (orgs + ord. serv. A is network 1, orgs + ord. serv. B is network 2), and (b) if you are comfortable with the trust assumptions placed in the ordering service, channels are meant to make it so that you can have multiple networks/consortiums use the same ordering service.
bh4rtp (Fri, 12 May 2017 03:15:34 GMT):
@kostas i am studying the google doc - a kafka-based ordering service for fabric - and wonder which solution was finally implemented in fabric. in other words, have all the described solutions been realized, have new options been substituted for them, or are they only partially implemented?
kostas (Fri, 12 May 2017 03:17:48 GMT):
The document concludes with one solution being better than all the others. This is exactly what's implemented in Fabric.
bh4rtp (Fri, 12 May 2017 03:21:53 GMT):
@kostas yes. could you tell me which one is implemented?
bh4rtp (Fri, 12 May 2017 03:52:41 GMT):
to be frank, i think the kafka ordering is somewhat complex. as we know, the bitcoin network creates a new block through miners competing on computing power. it turns the need for consensus into a mathematical problem. that is the point. in my logic, simplicity is eternal. just my opinion. :grinning:
toddinpal (Fri, 12 May 2017 04:36:12 GMT):
@bh4rtp But electricity is not. Proof of work makes little sense for real world applications.
bh4rtp (Fri, 12 May 2017 06:11:04 GMT):
@toddinpal is electricity not simple or eternal?
toddinpal (Fri, 12 May 2017 06:11:55 GMT):
@bh4rtp electricity is ephemeral, energy is eternal
bh4rtp (Fri, 12 May 2017 06:14:59 GMT):
@toddinpal your understanding is wrong. electricity is simple and is still in use today, not discarded. what we are calling eternal is not the thing itself, but its application.
toddinpal (Fri, 12 May 2017 06:17:53 GMT):
@bh4rtp Actually electricity gets converted into other forms of energy, typically heat or light, so it is ephemeral. The associated energy, by the first law of thermodynamics, is not.
bh4rtp (Fri, 12 May 2017 06:32:47 GMT):
@toddinpal interesting! is the statement that electricity is not eternal just your own, or commonly said? as i said, simplicity is about a thing's own property, while eternal is about its usage by people.
bh4rtp (Fri, 12 May 2017 06:47:30 GMT):
@toddinpal pow has been used successfully for several years in the bitcoin network (https://blockchain.info/). i read an ieee paper titled _security and privacy in decentralized energy trading through multi-signatures, blockchain and anonymous messaging streams_ describing how the bitcoin blockchain is applied in a new energy trading system.
kostas (Fri, 12 May 2017 12:52:38 GMT):
@bh4rtp RE: https://chat.hyperledger.org/channel/fabric-consensus?msg=Bj2MfG5MPfcB6u7jA -- The last one. 5b/6d. The label in Figure 8 also identifies this as the proposed solution.
bh4rtp (Fri, 12 May 2017 12:59:16 GMT):
@kostas thanks!
kostas (Fri, 12 May 2017 13:06:03 GMT):
As for the rest of the conversation:
(a) there is nothing complex about Kafka ordering,
(b) the comparison to PoW is apples & oranges as the trust assumptions between the two systems are radically different, and
(c) even if we talk strictly about BFT solutions, PoW for our applications is rather nonsensical as it solves a problem that we do not have in permissioned systems (Sybil attacks)
bh4rtp (Fri, 12 May 2017 13:19:10 GMT):
@kostas i have two questions to ask.
1) kafka is centralized middleware to fabric, even though it can be structured as a cluster. does kafka make fabric a non-decentralized blockchain?
2) kafka is a pure information-technology solution. can i think of it as solving the ordering problem while introducing other problems such as centralization, network latency, storage, and lower efficiency?
toddinpal (Fri, 12 May 2017 13:39:51 GMT):
@bh4rtp How is Kafka any more centralized than any other service? The ordering service is central to the operation of the network, but the implementation of an ordering service doesn't have to be "centralized". Kafka is no more centralized than Fabric itself.
kostas (Fri, 12 May 2017 13:42:02 GMT):
@bh4rtp:
1. Kafka introduces a centralization element to the network, and as such it certainly goes against the "decentralize all the things" mantra of blockchain systems as we know them. What you need to keep in mind is that: (a) it still solves a lot of use cases effectively, esp. for small, dictator-like networks where the ordering should only be restricted to one org, and (b) we don't stop at the Kafka option. The PBFT module, in line with what we did in v0.6, is next.
2. What does "pure information technological solution" mean? Anyway, on the problems that you point out: I've addressed the centralization concern in (1) above. For everything else the answer is either no, or the comparison is invalid. Yes, you have one roundtrip between the shim and the Kafka cluster, but wouldn't you have multiple roundtrips in a BFT solution? Yes, you have another set of machines with disks on them, but (a) the disk space needed can be capped once pruning support gets in, (b) it's not 1970, storage is inexpensive, and (c) if I may, since when is storage a concern when we're talking about blockchains, where by default you want to keep a history of everything? Esp. since this storage concern here only affects a subset of the system (the ordering service nodes). As for "lower efficiency": what do you mean here? How do you define it?
toddinpal (Fri, 12 May 2017 13:43:25 GMT):
The Kafka orderer does introduce another "problem": it is non-BFT. Yes, what it is ordering is encrypted, so some types of Byzantine behaviors aren't very likely, but nonetheless a Byzantine Kafka service could cause havoc.
bh4rtp (Fri, 12 May 2017 23:06:07 GMT):
@kostas answer 1 is acceptable.
when it comes to answer 2, "pure it solution" means kafka is an existing messaging middleware, not a self-contained fabric solution. do you agree on this point? storage is not expensive, but fabric depends seriously on kafka. you cannot assume that there are no hardware or software failures in kafka. the lower efficiency is due to kafka forwarding; this makes fabric non-p2p networking (unfortunately, i have not seen any p2p documentation in fabric until now). i have tested the e2e example and a query often hits the 3 second timeout, but frankly i don't know whether that is due to kafka. in addition, do you consider the situation of messages lost by kafka?
kostas (Sat, 13 May 2017 14:35:00 GMT):
@bh4rtp: Not sure I'm following the rationale, and not sure this conversation helps either of us become any wiser. At any rate, see comments below.
> when it comes to answer 2, "pure it solution" means kafka is an existing messaging middleware, not a self-contained fabric solution. do you agree on this point?
Yes, it is not a self-contained Fabric solution. Not sure what the end point is?
> storage is not expensive but it seriously depends on kafka.
Yes. And this is bad because...?
> you can not assume that there are no hardware or software failures to kafka.
We are not making this assumption. This is where redundancy comes in.
> the lower efficiency is due to kafka forwarding.
1. You have still not defined "efficiency".
2. Lower compared to what?
> this makes fabric non-p2p networking (unfortunately, i have not seen any p2p documentation in fabric until now).
I believe we've already covered this, see point 1 in my previous response above.
> i have tested the e2e example and a query often hits the 3 second timeout, but frankly i don't know whether that is due to kafka.
Well, if you do find out that it is due to Kafka, let me know and I'll gladly look into it. So far, in every test we've done, Kafka performs as expected. I suspect that you are underestimating how battle-tested Kafka (the Apache project, not our Kafka-based orderer) is.
> in addition, do you consider the situation of lost message by kafka?
We do. https://github.com/hyperledger/fabric/blob/master/orderer/kafka/util.go#L62...L65
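Beyond the producer retry settings linked above, crash fault tolerance also rests on broker-side durability knobs. An illustrative `server.properties` fragment (these are standard Kafka configuration keys; the values are assumptions for a 3-broker cluster, and the last line echoes the `KAFKA_UNCLEAN_LEADER_ELECTION_ENABLE` environment variable used in the compose file earlier):

```
default.replication.factor=3          # copy each partition to 3 brokers
min.insync.replicas=2                 # acknowledge a write only once 2 replicas have it
unclean.leader.election.enable=false  # never promote an out-of-sync replica to leader
```

With these settings, an acknowledged write survives the loss of any single broker, and a lagging replica can never be elected leader and silently drop messages.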
pvrbharg (Sat, 13 May 2017 15:37:47 GMT):
Has joined the channel.
bh4rtp (Sun, 14 May 2017 02:35:09 GMT):
@kostas thanks for your reply. in your mind, is kafka the best solution?
kostas (Sun, 14 May 2017 19:56:30 GMT):
@bh4rtp: It may serve a subset of use cases nicely, but the sbft/pbft work is probably the way forward.
berserkr (Sun, 14 May 2017 21:45:00 GMT):
@bh4rtp As you may have seen in the field, permissionless blockchains suffer from latency issues. Permissioned blockchains, on the other hand, tend to perform better because there are many things that can be relaxed as opposed to permissionless blockchains. PBFT is slow, you can see how it performs with v0.6... kafka is intended to provide a more robust/elegant approach to the consensus problem, and having the ability to have separate chains on the same fabric is also a plus
berserkr (Sun, 14 May 2017 21:47:54 GMT):
As you can see now, the bitcoin blockchain is not what it is intended to be
berserkr (Sun, 14 May 2017 21:48:04 GMT):
we can no longer participate in the mining process
berserkr (Sun, 14 May 2017 21:48:16 GMT):
as most mining is done by farms
berserkr (Sun, 14 May 2017 21:48:20 GMT):
in places like China
yacovm (Sun, 14 May 2017 21:48:30 GMT):
eh, the reason 0.6 was slow was not because of PBFT but because the CC was in the critical path.
I remember that when someone short-circuited the CC invocations it was much faster
berserkr (Sun, 14 May 2017 21:49:08 GMT):
so if at any given time the government there decides to take over those mines, they can own the entire network, given that 51% of the compute power is owned by those farms
berserkr (Sun, 14 May 2017 21:49:23 GMT):
@yacovm it does not scale well
berserkr (Sun, 14 May 2017 21:49:36 GMT):
at least, HFL v0.6 can't scale beyond 16 nodes
berserkr (Sun, 14 May 2017 21:49:50 GMT):
my understanding is due to pbft, but I may be mistaken as you pointed out
yacovm (Sun, 14 May 2017 21:49:53 GMT):
Yeah, I know - I was doing performance benchmarking for 0.6 when I joined the project ;)
berserkr (Sun, 14 May 2017 21:50:05 GMT):
ahh good
yacovm (Sun, 14 May 2017 21:50:20 GMT):
But the fact that it doesn't scale wasn't the reason 0.6 was "slow"
berserkr (Sun, 14 May 2017 21:50:20 GMT):
I will be doing the same for v1 in the coming weeks
yacovm (Sun, 14 May 2017 21:50:34 GMT):
What's your setup if I may ask?
yacovm (Sun, 14 May 2017 21:50:40 GMT):
What is the performance evaluation plan?
yacovm (Sun, 14 May 2017 21:50:59 GMT):
How many channels / namespaces (CCs) / keys?
berserkr (Sun, 14 May 2017 21:51:00 GMT):
will try to reproduce the work being presented at sigmod17
berserkr (Sun, 14 May 2017 21:51:14 GMT):
and then we will draft a plan for v1
berserkr (Sun, 14 May 2017 21:51:35 GMT):
will mostly hit the fabric with write intensive workloads
berserkr (Sun, 14 May 2017 21:51:46 GMT):
all of those items will vary
berserkr (Sun, 14 May 2017 21:52:00 GMT):
we have the ability to generate networks on demand now, so we can automate the testing
bh4rtp (Mon, 15 May 2017 00:19:06 GMT):
@berserkr who told you mining is done by farms? as far as i know, they build their mining operations in rural areas where the cost is much lower.
berserkr (Mon, 15 May 2017 00:34:50 GMT):
when I was working on bitcoin stuff, they called them mining farms... not because they are run on farms... but because they are composed of many workers doing the mining, and as I said, it is essentially centralized nowadays, with most big miners in China
bh4rtp (Mon, 15 May 2017 01:20:07 GMT):
@berserkr i misunderstood the mining farms you mentioned. but bitcoin mining cannot be considered centralized, because there are numerous miners solving the hash puzzle simultaneously. the greater the computation power a miner has, the higher his probability of winning. in other words, if one miner does not want to mine any longer, bitcoin mining will still run normally. this is unlike kafka: if kafka collapses, the fabric blockchain will shut down. and once again, i would emphasize that p2p networking is used by bitcoin. the bitcoin blockchain is really an autonomous p2p society. that's why it is such a great technical success.
Vadim (Mon, 15 May 2017 07:47:06 GMT):
@bh4rtp I guess he was referring to the mining pool centralization, i.e. most of the mining pools are located in China and in sum they are close to 51%, see https://bitcoinworldwide.com/mining/pools/
jsong1230 (Mon, 15 May 2017 12:39:19 GMT):
I got the following message in v1.0.0-alpha2
jsong1230 (Mon, 15 May 2017 12:39:20 GMT):
2017-05-15 21:36:18.376 KST [orderer/configupdate] Process -> DEBU 134 Processing channel creation request for channel ch4-1
2017-05-15 21:36:18.377 KST [orderer/common/broadcast] Handle -> WARN 135 Rejecting CONFIG_UPDATE because: Unknown consortium name:
2017-05-15 21:36:18.378 KST [orderer/common/deliver] Handle -> WARN 136 Error reading from stream: stream error: code = 1 desc = "context canceled"
jsong1230 (Mon, 15 May 2017 12:41:47 GMT):
Message Attachments
jsong1230 (Mon, 15 May 2017 12:43:10 GMT):
I generate block and tx file using the following command: configtxgen -profile SampleInsecureKafka -outputCreateChannelTx ch4-1.tx -channelID ch4-1 -outputBlock ch4-1.block
kostas (Mon, 15 May 2017 13:42:20 GMT):
@jsong1230: You should use one of the profiles ending with the `Channel` suffix when creating a new channel. (And in this case, if you've generated a genesis block for the `SampleInsecureKafka` profile, you're probably looking at the `SampleEmptyInsecureChannel` profile for your channel.)
kostas (Mon, 15 May 2017 13:42:38 GMT):
@berserkr: https://chat.hyperledger.org/channel/fabric-consensus?msg=EveHkTDadcagS5Mc3
kostas (Mon, 15 May 2017 13:42:59 GMT):
What is that work presented at SIGMOD 2017? Can you provide a link?
kostas (Mon, 15 May 2017 13:52:41 GMT):
Also:
> PBFT is slow, you can see how it performs with v0.6... kafka is intended to provide a more robust/elegant approach to the consensus problem
I disagree with both statements here.
kostas (Mon, 15 May 2017 13:52:57 GMT):
For the first statement: Yacov is right. Chaincode execution was in the main path, so of course it will look slow.
kostas (Mon, 15 May 2017 13:52:59 GMT):
For the second part, when you write "more robust/elegant approach to the consensus problem" you are comparing apples to oranges. The Kafka option (which I suggested to the team back when we were considering our options for 1.0, so I'm definitely a fan) can only apply to certain use cases. At the risk of me falling into the "apples to oranges" comparison trap as well, I'll also note that it cannot be "more robust" by definition than a (properly engineered) solution that is meant to tackle Byzantine faults.
gormand (Mon, 15 May 2017 14:38:21 GMT):
Hi - I'm getting questions about the number of channels that could be supported for a given blockchain network. How many channels have we tested with, and is there a design goal for a maximum number of channels? Thanks!
berserkr (Mon, 15 May 2017 14:40:27 GMT):
@kostas https://arxiv.org/pdf/1703.04057.pdf
jyellick (Mon, 15 May 2017 14:45:44 GMT):
@gormand This is a very challenging question, and will depend on the consensus type, and workload. Assuming that the Kafka consensus is used, the hope and expectation would be that the number of channels should scale fairly proportionally with the size and power of the Kafka cluster, but I don't have any real numbers for you. This is hopefully something that will be produced in the coming weeks/months.
gormand (Mon, 15 May 2017 14:52:11 GMT):
Thanks @jyellick . Appreciate that without testing it's difficult to say. Is there a threshold where we should start to question whether the number of channels is achievable? I'd assume 10s of channels will be OK, but do we expect a system to scale to 100s or 1000s?
HansDeLeenheer (Mon, 15 May 2017 14:56:23 GMT):
[ ](https://chat.hyperledger.org/channel/fabric-consensus?msg=xtvFzXQSLgh23WJd4) @gormand I would assume at some point these numbers could increase. Let's say you run a global company and you want to create channels for divisions in your company, and then the possibility to divide those into regions. Multiplying these two factors easily scales you above 100
kostas (Mon, 15 May 2017 14:59:10 GMT):
@gormand: For the Kafka option, a scale of thousands should be feasible. The problem is equivalent to how many Kafka partitions you can create, since in Fabric each channel corresponds to a Kafka partition. If you search online for that, you'll see that there is no hard limit on this number, but as Jason noted, it depends on the resources + configuration of the Kafka cluster. So this now becomes a Kafka tuning issue.
jyellick (Mon, 15 May 2017 15:05:02 GMT):
One point: there are limitations on scale beyond the throughput of the Kafka cluster. In particular, each ordering shim, which handles the translation of fabric transactions into Kafka messages and back into blocks, must do this for _every_ channel. There are some technical reasons for this, particularly that each shim must have the crypto state for every channel in order to properly authenticate requests. This should generally be lightweight, and I would not expect it to be the bottleneck until the number of channels is quite large, but it is another bottleneck besides the Kafka cluster.
kostas (Mon, 15 May 2017 15:05:42 GMT):
^^ This is right, I had missed that.
jyellick (Mon, 15 May 2017 15:06:24 GMT):
This is especially why I emphasized workload. 10k channels each processing 1 tx per second might be easily feasible, while 100 channels each processing 10k tx/s might cause problems (Though again, these numbers are fake and made up off the top of my head, real performance testing needs to be done before any meaningful numbers can be given)
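[Editor's note] The workload point reduces to simple arithmetic. Using the same made-up figures from the message above, just to make the contrast explicit:

```python
# Aggregate ordering load is (number of channels) x (per-channel tx rate).
# Illustrative numbers only, echoing the hypothetical figures above.
light_load = 10_000 * 1      # 10k channels, 1 tx/s each
heavy_load = 100 * 10_000    # 100 channels, 10k tx/s each
print(light_load, heavy_load)  # 10000 1000000
```

Despite having 100x fewer channels, the second scenario carries 100x the aggregate transaction load of the first, which is why channel count alone says little about feasibility.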
gormand (Mon, 15 May 2017 15:08:45 GMT):
Thanks @HansDeLeenheer @kostas. Thanks @jyellick I was just going to ask how the orderers and endorsers would cope with large numbers of channels. On the endorser system, chaincode is instantiated into each channel. If you had 10k channels, that would be a minimum of 10k chaincodes running?
kostas (Mon, 15 May 2017 15:08:47 GMT):
For Kafka, let me also note the following quite quickly: A partition is cut into log segments, and a Kafka broker holds two file handles per segment. I've seen references to brokers holding more than 30K file handles ([example](https://www.confluent.io/blog/how-to-choose-the-number-of-topicspartitions-in-a-kafka-cluster/)), but this depends on the open file handle limit on the OS, and it's also a function of how often you cut a log segment, when you expire it, etc. So any answer will have to identify all of those parameters.
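[Editor's note] That file-handle arithmetic can be sketched as follows. The channel and segment counts are illustrative numbers, not measurements; real values depend on segment size, retention, and broker configuration:

```python
# Back-of-envelope estimate of open file handles on one Kafka broker.
# Each partition is cut into log segments, and the broker holds roughly
# two file handles per segment (the log file and its index).

def broker_file_handles(partitions: int, segments_per_partition: int) -> int:
    """Estimated open file handles for a broker hosting `partitions`."""
    return partitions * segments_per_partition * 2

# Illustrative: 1000 channels (one Kafka partition each), 10 segments per partition.
print(broker_file_handles(1000, 10))  # 20000 -- compare with the OS `ulimit -n`
```

If the estimate approaches the OS open-file limit, either raise the limit or tune segment size/expiry so fewer segments exist at once.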
jyellick (Mon, 15 May 2017 15:10:56 GMT):
@gormand Chaincode is installed per peer, and instantiated per channel. This is a change vs 0.6 and was intended to address specifically the problem you mentioned. @muralisr and #fabric-peer-endorser-committer might be a better resource on the logic of chaincode container creation though.
muralisr (Mon, 15 May 2017 15:13:59 GMT):
@gormand chaincode "mycc" instantiated on 100 channels will have 1 container running, fielding requests for all 100 instantiations.
gormand (Mon, 15 May 2017 15:16:37 GMT):
Thanks @muralisr .
chrisconway (Mon, 15 May 2017 15:22:04 GMT):
Has joined the channel.
s.narayanan (Mon, 15 May 2017 21:50:41 GMT):
I am currently using the fabric alpha docker images for orderer. Is it possible to use Kafka with this image? If so, are there instructions around how to configure orderer to use a Kafka cluster?
jyellick (Mon, 15 May 2017 21:55:38 GMT):
@s.narayanan Yes it is possible, my doc knowledge may be a little out of date though, do you have a suggested start path @kostas ^
kostas (Mon, 15 May 2017 21:57:04 GMT):
@s.narayanan: alpha-1 or alpha-2?
bmkor (Tue, 16 May 2017 00:43:17 GMT):
Has joined the channel.
bmkor (Tue, 16 May 2017 00:44:03 GMT):
Hello folks. Wondering how to solve the issue below when starting fabric-orderer docker image?
```
2017-05-15 17:46:11.076 UTC [orderer/multichain] NewManagerImpl -> DEBU 0f0 Starting chain: testchainid
2017-05-15 17:46:11.076 UTC [fsblkstorage] retrieveBlockByNumber -> DEBU 0f1 retrieveBlockByNumber() - blockNum = [0]
2017-05-15 17:46:11.076 UTC [fsblkstorage] newBlockfileStream -> DEBU 0f2 newBlockfileStream(): filePath=[/var/hyperledger/production/orderer/chains/testchainid/blockfile_000000], startOffset=[0]
2017-05-15 17:46:11.076 UTC [fsblkstorage] nextBlockBytesAndPlacementInfo -> DEBU 0f3 Remaining bytes=[9829], Going to peek [8] bytes
2017-05-15 17:46:11.076 UTC [fsblkstorage] nextBlockBytesAndPlacementInfo -> DEBU 0f4 Returning blockbytes - length=[9827], placementInfo={fileNum=[0], startOffset=[0], bytesOffset=[2]}
2017-05-15 17:46:11.076 UTC [orderer/multichain] newChainSupport -> DEBU 0f5 [channel: testchainid] Retrieved metadata for tip of chain (block #0):
2017-05-15 17:46:11.077 UTC [orderer/multichain] NewManagerImpl -> CRIT 0f6 No system chain found
panic: No system chain found
2017-05-15T17:46:11.080851521Z
goroutine 1 [running]:
panic(0xb03220, 0xc4203154d0)
/opt/go/src/runtime/panic.go:500 +0x1a1
github.com/hyperledger/fabric/vendor/github.com/op/go-logging.(*Logger).Panicf(0xc420206d50, 0xc25168, 0x15, 0x0, 0x0, 0x0)
/opt/gopath/src/github.com/hyperledger/fabric/vendor/github.com/op/go-logging/logger.go:194 +0x127
github.com/hyperledger/fabric/orderer/multichain.NewManagerImpl(0x11af3c0, 0xc420360100, 0xc420296ed0, 0x11acbc0, 0x11f3900, 0x0, 0x0)
/opt/gopath/src/github.com/hyperledger/fabric/orderer/multichain/manager.go:139 +0x5d3
main.main()
/opt/gopath/src/github.com/hyperledger/fabric/orderer/main.go:174 +0x179d
```
No system chain found. I started this docker image with the genesis method set to `file` and supplied a genesis block that seems okay when inspected with configtxgen.
rocket.cat (Tue, 16 May 2017 00:44:03 GMT):
Good day, bmkor
cbf (Tue, 16 May 2017 00:45:42 GMT):
@kostas meet @guoger - he is on my team and will be full time on Hyperledger for the forseeable
guoger (Tue, 16 May 2017 00:45:43 GMT):
Has joined the channel.
cbf (Tue, 16 May 2017 00:45:58 GMT):
I've asked that he help you with UT coverage
cbf (Tue, 16 May 2017 00:46:28 GMT):
please help orient him, etc and suggest where he might dig in
kostas (Tue, 16 May 2017 00:47:14 GMT):
@cbf: Sure thing, and thank you. @guoger will ping you tomorrow morning to get this going. @sanchezl is working on this as well, but it'll be useful to get a couple more hands on it.
cbf (Tue, 16 May 2017 00:47:31 GMT):
FYI - it is now tomorrow morning for Jay
cbf (Tue, 16 May 2017 00:47:36 GMT):
;-)
cbf (Tue, 16 May 2017 00:47:43 GMT):
he's in Beijing
kostas (Tue, 16 May 2017 00:48:07 GMT):
Alright then, @guoger will ping you now and we'll get this coordinated with the work Luis is doing.
kostas (Tue, 16 May 2017 00:49:04 GMT):
@bmkor: This sounds like an issue that https://gerrit.hyperledger.org/r/#/c/9111/ addressed.
kostas (Tue, 16 May 2017 00:49:28 GMT):
I am guessing that this is an alpha-1 Docker image?
kostas (Tue, 16 May 2017 00:49:56 GMT):
If that's the case, I suggest giving it another go with the alpha-2 Docker image.
bmkor (Tue, 16 May 2017 00:53:54 GMT):
Thanks. Ah, can I check if it is the alpha-2 Docker image for this orderer? [ ](https://chat.hyperledger.org/channel/fabric-consensus?msg=fR7uJLGoxfCJENB9d) @kostas
kostas (Tue, 16 May 2017 00:55:37 GMT):
@rameshthoomu Is the alpha-2 Docker image for orderer available now? See message above ^^.
bh4rtp (Tue, 16 May 2017 00:59:41 GMT):
@berserkr i am reading the paper you provided. the conclusion is that current blockchains are not well suited for large scale data processing workloads. which hyperledger version did you use to test on blockbench? and i don't know your criterion for a large scale workload. the paper indicates several bottlenecks and design trade-offs at different layers of the software stack. would you please list the bottlenecks and trade-offs for hyperledger?
rameshthoomu (Tue, 16 May 2017 01:08:31 GMT):
yes it is available.. see here https://hub.docker.com/r/hyperledger/fabric-orderer/tags/
kostas (Tue, 16 May 2017 01:56:34 GMT):
@bmkor: See Ramesh's message above ^^
kostas (Tue, 16 May 2017 01:56:40 GMT):
@rameshthoomu: Thank you.
jsong1230 (Tue, 16 May 2017 05:52:54 GMT):
I have deployed the mycc chaincode at peer1. It works fine on peer1 but shows the following error on peers 2, 3, 4
jsong1230 (Tue, 16 May 2017 05:52:55 GMT):
2017-05-16 14:51:37.118 KST [chaincode] ExecuteChaincode -> ERRO 6f1 Error executing chaincode: Could not get deployment transaction from LSCC for mycc:0.3 - Get ChaincodeDeploymentSpec for mycc/testchainid from LSCC error: chaincode fingerprint mismatch data mismatch
jsong1230 (Tue, 16 May 2017 05:53:10 GMT):
I am using alpha2 version of fabric
jsong1230 (Tue, 16 May 2017 05:53:33 GMT):
I can install chaincode but cannot invoke or query at peer 2,3,4
jsong1230 (Tue, 16 May 2017 05:55:01 GMT):
I am using "SampleInsecureKafka" profile for orderer, and used the channel "SampleEmptyInsecureChannel" as they are.
jsong1230 (Tue, 16 May 2017 05:57:23 GMT):
I am using "testchainid" channel as it is
rangak (Tue, 16 May 2017 06:08:28 GMT):
Hacked up a little utility to help identify significant events in orderer and peer logfiles. It uses a simplistic algorithm. Pointers to alternatives and/or improvements welcome. https://github.com/Rangak/loglights
s.narayanan (Tue, 16 May 2017 12:29:19 GMT):
[ ](https://chat.hyperledger.org/channel/fabric-consensus?msg=wQXTmM6kQePXA4Tkz) @kostas I presume it is alpha 1, this is the image that was made available early March. Is there a more recent alpha-2?
jsong1230 (Tue, 16 May 2017 12:50:18 GMT):
Using alpha2, I am generating a channel (ch4-1) using the following command.
jsong1230 (Tue, 16 May 2017 12:50:18 GMT):
configtxgen -profile SampleEmptyInsecureChannel -outputCreateChannelTx ch4-1.tx -channelID ch4-1
jsong1230 (Tue, 16 May 2017 12:50:18 GMT):
peer channel create -o 127.0.0.1:7050 -c ch4-1 -f ch4-1.tx -b ch4-1.block
peer channel join -o 127.0.0.1:7050 -b ch4-1.block -f ch4-1.tx
peer chaincode install -C ch4-1 -n mycc -v 0.1 -p github.com/hyperledger/fabric/examples/chaincode/go/chaincode_example02 -c '{"Args":["init","a","100","b","200"]}'
peer chaincode instantiate -n mycc -v 0.1 -c '{"Args":["init","a","100","b","200"]}' -o 127.0.0.1:7050 -C ch4-1 -P "OR ('DEFAULT.member','org2.member')"
But I got the following error
Error: Error endorsing chaincode: rpc error: code = 2 desc = could not get msp for chain [ch4-1]
s.narayanan (Tue, 16 May 2017 12:55:38 GMT):
@kostas I realized alpha 2 seems to have been made available a few days back. However, I would like to know if kafka can be used in alpha 1, or is it necessary to move to alpha 2? Also, any instructions on configuring a kafka-based orderer would be helpful.
kostas (Tue, 16 May 2017 13:10:42 GMT):
@jsong1230: This is not the right channel for this question, please try #fabric
kostas (Tue, 16 May 2017 13:11:49 GMT):
@rangak: Thanks for this! Can you let me know exactly what it does?
jsong1230 (Tue, 16 May 2017 13:13:38 GMT):
I got it. Thanks. [ ](https://chat.hyperledger.org/channel/fabric-consensus?msg=M3TJsphax2fBabx9B) @kostas
kostas (Tue, 16 May 2017 13:17:53 GMT):
@s.narayanan: alpha-2 was cut yesterday. At any rate, you can use Kafka in both cases; it's just that the E2E CLI test will fail with alpha-1 because of the peer's hard-coded, short timeout during channel creation. All you need to do is generate a genesis block where the OrdererType is set to kafka and the `Kafka.Brokers` list points to your Kafka brokers. This commit is _an example_ of how you'd modify the E2E CLI test to work. Notice that we're modifying the Docker Compose file so that we bring up a single Kafka broker and ZK node (generally a no-no for production environments), and we're also modifying `configtx.yaml` so that when we invoke `configtxgen` we get the right genesis block (one that lets the orderers know that they should be running in "kafka" mode, and lists the Kafka broker addresses).
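[Editor's note] The `configtx.yaml` fragment described above would look roughly like this. This is a sketch modeled on `fabric/sampleconfig/configtx.yaml`; the profile name and broker addresses are placeholders:

```yaml
Profiles:
  # Hypothetical profile name; model it on SampleInsecureKafka.
  MyKafkaOrderer:
    Orderer:
      <<: *OrdererDefaults
      OrdererType: kafka      # default is "solo"
      Kafka:
        Brokers:              # your Kafka brokers' addresses
          - kafka0:9092
          - kafka1:9092
          - kafka2:9092
```

You would then pass that profile to `configtxgen` (e.g. `configtxgen -profile MyKafkaOrderer -outputBlock genesis.block`) to produce the genesis block the orderers boot from.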
kostas (Tue, 16 May 2017 13:22:45 GMT):
> https://chat.hyperledger.org/channel/fabric-consensus?msg=7NMbP2KLHLZGvgXL2
kostas (Tue, 16 May 2017 13:23:35 GMT):
On a related note: every < 1.0 release is bound to have critical bugs, so I would suggest you adopt a workflow that allows you to use the newest images as soon as possible.
s.narayanan (Tue, 16 May 2017 14:05:28 GMT):
@kostas thanks, I will work with alpha 2 ...
rangak (Tue, 16 May 2017 14:13:09 GMT):
@costas It reduces the amount of text per line, can sort and filter by package, and has a heuristic that filters out lines that follow a line with the same time stamp (with decimal precision you can set). It is a very coarse algorithm that relies on user-relevant events being separated in time, and deems clustered lines as belonging to the same "event". It could be much improved with knowledge of how fabric orderers and peers log. I hope someone will do that and/or provide input.
rangak (Tue, 16 May 2017 14:13:35 GMT):
sorry that should be @kostas
rangak (Tue, 16 May 2017 16:11:09 GMT):
@kostas Updated with regex support suggested by @muralisr . Enjoy! https://github.com/Rangak/loglights
kostas (Tue, 16 May 2017 16:12:22 GMT):
Awesome, will check it out.
Glen (Wed, 17 May 2017 08:46:08 GMT):
Has joined the channel.
Glen (Wed, 17 May 2017 08:47:45 GMT):
Hello, I have one question: how can I enable the Kafka service type in my fabric v1.0 environment, since it defaults to "Solo"? Is there a configuration guide? Thanks!
sallyde (Wed, 17 May 2017 10:40:07 GMT):
Has joined the channel.
sallyde (Wed, 17 May 2017 10:40:31 GMT):
hi all, could somebody please point to the configuration setting where the consensus protocol is defined? For example, what is the consensus protocol in this example http://hyperledger-fabric.readthedocs.io/en/latest/getting_started.html and where is it defined?
Glen (Wed, 17 May 2017 10:49:13 GMT):
Sallyde, https://jira.hyperledger.org/browse/FAB-3787. I've tried this link, but I'm not sure if Kafka is involved in the transaction, since the modified configtx.yaml under e2e_cli is not used by configtxgen.
jyellick (Wed, 17 May 2017 13:35:50 GMT):
The consensus protocol is set in the genesis block for the ordering system channel. The genesis block is generated using the `configtxgen` tool, which is controlled by `configtx.yaml`, so you must modify this file before generating the genesis block in order to select your consensus protocol.
sallyde (Wed, 17 May 2017 13:47:50 GMT):
thanks @jyellick and @Glen , I actually went through the configtx.yaml of the example in (http://hyperledger-fabric.readthedocs.io/en/latest/getting_started.html) but there isn't any pre-specified place for defining the consensus protocol. I'd appreciate it if you could point to a configtx.yaml that defines a consensus protocol
jyellick (Wed, 17 May 2017 13:53:09 GMT):
@sallyde Please see `fabric/sampleconfig/configtx.yaml`. You can see the `SampleInsecureKafka` profile for instance:
```
    # SampleInsecureKafka defines a configuration that differs from the
    # SampleInsecureSolo one only in that it uses the Kafka-based orderer.
    SampleInsecureKafka:
        Orderer:
            <<: *OrdererDefaults
            OrdererType: kafka
```
kostas (Wed, 17 May 2017 13:53:27 GMT):
@Glen:
kostas (Wed, 17 May 2017 13:53:28 GMT):
> Sallyde, https://jira.hyperledger.org/browse/FAB-3787, I've tried this link, but I'm not sure if Kafka is involved in the transaction, since the modified configtx.xml under e2e_cli is not used by configtxgen.
kostas (Wed, 17 May 2017 13:54:28 GMT):
The E2E CLI test uses `examples/e2e_cli/configtx.yaml`
kostas (Wed, 17 May 2017 13:55:34 GMT):
If you were to modify this YAML file as this example here shows: https://github.com/kchristidis/fabric/commit/7f83f40fdd8c5cd0a837a820f728f3b864e1cce2
kostas (Wed, 17 May 2017 13:56:21 GMT):
And then modify `docker-compose-cli.yaml` under the same folder in the way the link above shows, you'd be good to go.
kostas (Wed, 17 May 2017 13:56:44 GMT):
Copying @sallyde as well on the above I guess.
kostas (Wed, 17 May 2017 13:57:54 GMT):
So you see that we modify the profile that the test uses so that it uses kafka as the consensus type, and point to the right Kafka brokers.
kostas (Wed, 17 May 2017 13:58:13 GMT):
We modify the Docker Compose file, so that we bring these Kafka + ZK nodes up.
nickmelis (Wed, 17 May 2017 15:03:55 GMT):
running 5 v0.6 nodes with PBFT, it looks like the 5th node doesn't sync with the others. Everything works fine with 4 nodes. Is there any particular number of nodes required for PBFT? I know it doesn't work with less than 4, is there any other limitation?
nickmelis (Wed, 17 May 2017 15:28:33 GMT):
found this in the log:
nickmelis (Wed, 17 May 2017 15:28:40 GMT):
`vp4_1 | 15:27:20.319 [consensus/pbft] newPbftCore -> INFO 02a PBFT Max number of validating peers (N) = 4`
jyellick (Wed, 17 May 2017 16:17:34 GMT):
@nickmelis For v0.6, when you configure your network, you must specify the number of members. It sounds like you tried to spin up 5 peers without increasing N in the configuration (note, N can only be set at bootstrap). In general, I would recommend that the network be configured with 3f+1 nodes, where f is the number of failures to be tolerated (so 4, 7, 10, etc.)
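The 3f+1 sizing rule above can be sketched as a couple of tiny helpers (illustrative only; the function names are made up, not part of fabric):

```python
def pbft_faults_tolerated(n: int) -> int:
    """Byzantine faults f tolerated by a PBFT network of n nodes (n >= 3f + 1)."""
    return (n - 1) // 3

def pbft_network_size(f: int) -> int:
    """Minimum PBFT network size needed to tolerate f Byzantine faults."""
    return 3 * f + 1

# A 5th node adds no fault tolerance over a 4-node network,
# which is why sizes of 4, 7, 10, ... are the recommended steps.
assert pbft_faults_tolerated(4) == 1
assert pbft_faults_tolerated(5) == 1
assert pbft_network_size(2) == 7
```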
sallyde (Wed, 17 May 2017 17:22:23 GMT):
thanks @kostas
tarcisiocjr (Thu, 18 May 2017 00:30:40 GMT):
Has joined the channel.
Glen (Thu, 18 May 2017 05:33:16 GMT):
@kostas, a commit (9937c3612b0e12903a5110ad5c63e6334c468e28) on master removed examples/e2e_cli/generateCfgTrx.sh and added a new file, generateArtifacts.sh, in its place. This script doesn't copy the configtx.yaml to ../../common/configtx/tool/configtx.yaml as the former did, and I haven't found where configtxgen loads the updated yaml, so I still suspect it. Although I've checked Kafka, and it did receive the message from the orderer. thanks
Glen (Thu, 18 May 2017 07:24:04 GMT):
I've dumped the log showing the location of the configtx.yaml; it seems to be hard-coded into config.go. The log is as follows:
Glen (Thu, 18 May 2017 07:24:35 GMT):
2017-05-18 15:08:54.883 CST [common/configtx/tool/localconfig] Load -> INFO 002 Loaded configuration: /opt/gopath/src/github.com/hyperledger/fabric/examples/e2e_cli/configtx.yaml
nickmelis (Thu, 18 May 2017 11:38:56 GMT):
[How do you configure that? ](https://chat.hyperledger.org/channel/fabric-consensus?msg=zYgBSADBBcyNcNukF) @jyellick
kostas (Thu, 18 May 2017 11:42:28 GMT):
@Glen: Correct. I pointed this out here yesterday: https://chat.hyperledger.org/channel/fabric-consensus?msg=xxBvZNdsnSxGbwhzS This is the `configtx.yaml` that includes the `TwoOrgsOrdererGenesis` profile that `generateArtifacts.sh` uses. If you modify this, you're good to go.
jyellick (Thu, 18 May 2017 13:32:16 GMT):
@nickmelis For v0.6, the value for `N` is configured in the `consensus/pbft/config.yaml` as the `general.N` config parameter or overridden in the environment with `CORE_PBFT_GENERAL_N`
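For instance, a docker-compose service definition for a 7-node v0.6 network might override `N` like this (a sketch; the service name and image tag are illustrative):

```
vp0:
  image: hyperledger/fabric-peer:0.6
  environment:
    - CORE_PEER_VALIDATOR_CONSENSUS_PLUGIN=pbft
    - CORE_PBFT_GENERAL_N=7
```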
nickmelis (Thu, 18 May 2017 13:33:20 GMT):
thanks @jyellick that's exactly what I was after
sallyde (Thu, 18 May 2017 15:46:08 GMT):
Hi All, following the tutorial in http://hyperledger-fabric.readthedocs.io/en/latest/getting_started.html, I get an error while creating the channel. The cli log is:
2017-05-18 15:41:07.481 UTC [logging] InitFromViper -> DEBU 001 Setting default logging level to DEBUG for command 'channel'
2017-05-18 15:41:07.481 UTC [msp] GetLocalMSP -> DEBU 002 Returning existing local MSP
2017-05-18 15:41:07.481 UTC [msp] GetDefaultSigningIdentity -> DEBU 003 Obtaining default signing identity
Error: Got unexpected status: BAD_REQUEST
and the network log says
orderer.example.com | 2017-05-18 15:41:07.493 UTC [common/configtx] addToMap -> DEBU 4d1 Adding to config map: [Policy] /Channel/AcceptAllPolicy
orderer.example.com | 2017-05-18 15:41:07.493 UTC [orderer/common/broadcast] Handle -> WARN 4d2 Rejecting CONFIG_UPDATE because: Error validating DeltaSet: Attempt to set key [Policy] /Channel/Readers to version 0, but key is at version 0
orderer.example.com | 2017-05-18 15:41:07.497 UTC [orderer/common/deliver] Handle -> WARN 4d3 Error reading from stream: stream error: code = 1 desc = "context canceled"
Any hint is highly appreciated.
s.narayanan (Thu, 18 May 2017 18:37:12 GMT):
[ ](https://chat.hyperledger.org/channel/fabric-consensus?msg=xxBvZNdsnSxGbwhzS) @kostas if we choose to use Kafka, the orderer type: solo should be commented out?
s.narayanan (Thu, 18 May 2017 19:07:55 GMT):
[ ](https://chat.hyperledger.org/channel/fabric-consensus?msg=wG9QrZrpLgxpkSiPZ) @s.narayanan Sorry, ignore the message; I figured out what I was looking for.
jsong1230 (Fri, 19 May 2017 06:18:14 GMT):
When I start the orderer, I get this message: "panic: No system chain found". I am following {sample/e2e} on my machine with 2 peers, without using docker-compose.
bmkor (Fri, 19 May 2017 06:46:03 GMT):
If I run the orderer in kafka mode, which kafka & zookeeper docker images should be used?
saptarshee (Fri, 19 May 2017 10:40:46 GMT):
Has joined the channel.
bmkor (Fri, 19 May 2017 12:33:41 GMT):
Would like to know: can there be multiple orderers in the same channel? If yes, how is the ordering carried out?
kostas (Fri, 19 May 2017 12:50:29 GMT):
@jsong1230: It is impossible to help out without more details. Provide a detailed listing of the steps/commands you followed, while keeping those details to the minimum needed. Think of this as a StackOverflow question. How would you phrase it?
kostas (Fri, 19 May 2017 12:51:22 GMT):
@bmkor: In the Kafka-based ordering service yes. The orderers use the Kafka cluster to agree on a global order for that channel. (So Kafka does it for you.) See https://docs.google.com/document/u/1/d/1vNMaM7XhOlu9tB_10dKnlrhy5d7b1u8lSY8a-kVjCO4/edit for more details.
bmkor (Fri, 19 May 2017 13:36:28 GMT):
Thanks @kostas. So in the yaml, would setting the orderer to the Kafka type make it an ordering service client as described in the document? And could the OSNs this client connects to be defined under `Kafka:` in the yaml?
nickmelis (Fri, 19 May 2017 14:03:16 GMT):
Tried the following scenario using v0.6 and docker-compose:
* Started 4 nodes with PBFT, sent some transactions, verified nodes are in sync (same height and last block)
* Stopped one node (docker stop), sent some transactions making sure the remaining 3 nodes got synced with each other
* Restarted 4th node, I'd expect it would sync with the others. It didn't sync automatically. I sent some more transactions, and although the height on the 4th node changed, it was different from the other 3.
nickmelis (Fri, 19 May 2017 14:03:29 GMT):
Has anyone tried a similar scenario before?
rahulhegde (Fri, 19 May 2017 14:10:11 GMT):
@kostas - Are the blockchain blocks stored at {channel-Id + chaincode-name/ledger} granularity?
kostas (Fri, 19 May 2017 15:04:57 GMT):
@bmkor: You provide a list of the Kafka brokers here: https://github.com/hyperledger/fabric/blob/master/sampleconfig/configtx.yaml#L170
kostas (Fri, 19 May 2017 15:05:06 GMT):
A list of the OSNs here: https://github.com/hyperledger/fabric/blob/master/sampleconfig/configtx.yaml#L139
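Putting those two together, the relevant stanzas of `configtx.yaml` look roughly like this (values are the sample defaults; adjust them to your deployment):

```
Orderer: &OrdererDefaults
    OrdererType: kafka
    Addresses:          # the OSNs that clients broadcast/deliver to
        - orderer.example.com:7050
    Kafka:
        Brokers:        # the Kafka cluster that the OSNs connect to
            - 127.0.0.1:9092
```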
kostas (Fri, 19 May 2017 15:05:42 GMT):
@rahulhegde: Not sure I get the question, can you expand? (There is one blockchain per channel.)
rahulhegde (Fri, 19 May 2017 15:34:20 GMT):
[ ](https://chat.hyperledger.org/channel/fabric-consensus?msg=Zy83k4rskm6pMNGD6) @kostas
Fabric allows multiple user chaincodes to be installed and instantiated on a single channel. Each invoke operation (ledger transaction) on a user chaincode will result in a ledger update (append).
So I take from your response that this is defined at the channel level. What is the reason this is not maintained at the user-chaincode-id granularity?
kostas (Fri, 19 May 2017 19:45:31 GMT):
@rahulhegde: What do you mean when you say "maintained at the user-chaincode-id granularity"?
kostas (Fri, 19 May 2017 19:46:12 GMT):
Can you give me an example? Assume user1, user2, channel1, channel2, chaincode1, chaincode2, and describe to me what your expectation is, and we can take it from there.
farhan3 (Fri, 19 May 2017 20:30:31 GMT):
@kostas Hi Kostas - I had a question about endorsements. The docs read:
> By default, endorsing logic at a peer accepts the tran-proposal and simply signs the tran-proposal. However, endorsing logic may interpret arbitrary functionality, to, e.g., interact with legacy systems with tran-proposal and tx as inputs to reach the decision whether to endorse a transaction or not.
Do you know if this endorsing logic can be changed?
kostas (Fri, 19 May 2017 20:31:23 GMT):
@farhan3: Not the best person to ask for this unfortunately, maybe you'll have better luck in #fabric ?
farhan3 (Fri, 19 May 2017 20:31:45 GMT):
Ok - I'll give that a try. Thank you.
bmkor (Fri, 19 May 2017 23:57:45 GMT):
Thanks. So the leading peer is the ordering service client? [ ](https://chat.hyperledger.org/channel/fabric-consensus?msg=Zq8AWmh9znv545T9h) @kostas
kostas (Sat, 20 May 2017 00:00:57 GMT):
Not sure I follow?
bmkor (Sat, 20 May 2017 00:41:21 GMT):
Just wondering who, if anyone, will be the ordering service client in Fig 1 of https://docs.google.com/document/u/1/d/1vNMaM7XhOlu9tB_10dKnlrhy5d7b1u8lSY8a-kVjCO4/mobilebasic
kostas (Sat, 20 May 2017 11:47:12 GMT):
You use a client application to broadcast to the ordering service. You use the leader peer of your participating org to deliver from the service.
bmkor (Sat, 20 May 2017 12:48:57 GMT):
So leader peer would deliver from the service to `some` of the peers belonging to the same org in the channel?
bmkor (Sat, 20 May 2017 12:51:32 GMT):
My second question: in the Kafka-type ordering, if I read it right, each OSN maintains a local ledger. What would happen to an OSN if it crashed and came back up again? Would the OSN talk to other OSNs for syncing?
kostas (Sat, 20 May 2017 12:51:52 GMT):
> So leader peer would deliver from the service to `some` of the peers belonging to the same org in the channel?
The details of that fall into gossiping and how it's set up per org. #fabric-gossip is probably a better resource for this.
bmkor (Sat, 20 May 2017 12:52:13 GMT):
Thanks :grinning:
kostas (Sat, 20 May 2017 12:52:19 GMT):
> My second question: in the Kafka-type ordering, if I read it right, each OSN maintains a local ledger. What would happen to an OSN if it crashed and came back up again? Would the OSN talk to other OSNs for syncing?
It would sync up by reaching out to the Kafka cluster.
bmkor (Sat, 20 May 2017 12:53:14 GMT):
OSN won't talk to other osn, will it?
kostas (Sat, 20 May 2017 12:53:40 GMT):
It will not. The diagrams should be capturing this accurately.
bmkor (Sat, 20 May 2017 12:53:58 GMT):
Yes. You are right.
bmkor (Sat, 20 May 2017 12:55:22 GMT):
I see. So I can have more than one orderer in the same channel, and a client app broadcasts (endorsed) txs in the channel, reaching the orderers.
kostas (Sat, 20 May 2017 12:56:30 GMT):
Specifically it will reach one orderer, then the orderer will route it to the Kafka cluster, and via that cluster, the other orderers will receive the transaction.
bmkor (Sat, 20 May 2017 12:58:27 GMT):
The orderer is designated? Or randomly chosen if more than one orderer in the same channel.
kostas (Sat, 20 May 2017 13:00:34 GMT):
Up to the application to decide which orderer to call, we are agnostic to it, and all orderers have access to all channels.
bmkor (Sat, 20 May 2017 13:02:18 GMT):
Ah yes. Input arg got a place for orderer address. My bad.
bmkor (Sat, 20 May 2017 13:06:15 GMT):
Before it syncs up, will it sign any batch and write to the partition? [ ](https://chat.hyperledger.org/channel/fabric-consensus?msg=cC3mYGbd8uNYoqNBD) @kostas
kostas (Sat, 20 May 2017 13:07:19 GMT):
Signing a batch and writing to the partition are two steps happening in entirely different stages.
bmkor (Sat, 20 May 2017 13:08:06 GMT):
Will it cut a block before syncing up?
kostas (Sat, 20 May 2017 13:08:09 GMT):
The OSN writes to the partition (technically it's the partition replica of the cluster) to order an incoming transaction.
kostas (Sat, 20 May 2017 13:08:42 GMT):
The OSN signs a batch after it receives an ordered batch from the partition, right before adding it to its local ledger, and serving it to deliver clients.
bmkor (Sat, 20 May 2017 13:09:11 GMT):
I see
kostas (Sat, 20 May 2017 13:09:49 GMT):
> Will it cut a block before syncing up?
While I see what you're concerned about (and the answer is "no, we're good"), the question needs to be rephrased.
kostas (Sat, 20 May 2017 13:10:18 GMT):
"Will it cut any out-of-order blocks before having synced up fully?" is probably a better way to put it.
kostas (Sat, 20 May 2017 13:10:22 GMT):
And the answer to that is no.
kostas (Sat, 20 May 2017 13:11:02 GMT):
The literal answer to your original question ("Will it cut a block before syncing up?") is "yes", but these will be in-order blocks, same as what all the other orderers cut.
kostas (Sat, 20 May 2017 13:11:12 GMT):
You cut a block as soon as you read it from the partition.
kostas (Sat, 20 May 2017 13:11:54 GMT):
But since the partition is ordered, you're good.
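The guarantee described here — every OSN cuts identical blocks because they all replay the same ordered partition — can be sketched as a toy model (not the real consumer code; the message list and `batch_size` are made up):

```python
def cut_blocks(partition, batch_size=2):
    """Consume an ordered partition and cut fixed-size blocks.

    A stand-in for an OSN replaying the channel's Kafka partition:
    since every OSN sees the same sequence, each cuts the same blocks,
    regardless of when it (re)connects.
    """
    blocks, current = [], []
    for msg in partition:
        if msg == "CONNECT":   # CONNECT markers are ignored when cutting blocks
            continue
        current.append(msg)
        if len(current) == batch_size:
            blocks.append(tuple(current))
            current = []
    return blocks

partition = ["CONNECT", "tx1", "tx2", "CONNECT", "tx3", "tx4"]
osn_a = cut_blocks(partition)   # an OSN that was up from the start
osn_b = cut_blocks(partition)   # an OSN that crashed and replayed the partition
assert osn_a == osn_b == [("tx1", "tx2"), ("tx3", "tx4")]
```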
bmkor (Sat, 20 May 2017 13:13:28 GMT):
Thanks a lot! Btw, would like to check my understanding: partition 0 contains tx and TTC-X messages but nothing else.
kostas (Sat, 20 May 2017 13:14:40 GMT):
Basically yes. Technically it also contains empty "CONNECT" messages that each OSN pushes before listening from a partition, but these are ignored by the OSNs when reading the partition and calculating blocks.
bmkor (Sat, 20 May 2017 13:15:32 GMT):
Thanks.
kostas (Sat, 20 May 2017 13:15:34 GMT):
The technical reason for this is here: https://github.com/hyperledger/fabric/blob/master/orderer/kafka/orderer.go#L168...L169
kostas (Sat, 20 May 2017 13:15:37 GMT):
Sure thing.
bmkor (Sat, 20 May 2017 13:17:11 GMT):
Since the broadcast is to one orderer, what would happen if that orderer crashed? Would I have to reinstantiate the chaincode via another live orderer?
kostas (Sat, 20 May 2017 13:19:18 GMT):
If the orderer forwards your transaction to the Kafka cluster, you'll get back a SUCCESS status message.
kostas (Sat, 20 May 2017 13:19:29 GMT):
Even if the orderer crashes at this point, your transaction is not lost.
kostas (Sat, 20 May 2017 13:21:24 GMT):
There's a tricky scenario in which your transaction is enqueued to the Kafka cluster but the connection between you and the OSN terminates before you get back the SUCCESS status message.
kostas (Sat, 20 May 2017 13:21:52 GMT):
In this case, you would want to listen to the channel first, to check if the transaction has been added to the blockchain. If not, you would repeat the request via another OSN.
kostas (Sat, 20 May 2017 13:23:58 GMT):
Note that you could repeat the same request via another OSN w/o checking the chain first, and that still would be harmless because only the first of these requests would actually go through. The second one would be caught by the MVCC checks during the validation stage and would get rejected.
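The reason a blind resubmission is harmless can be sketched with a toy MVCC check (illustrative; the real validation compares a transaction's read-set versions against committed state):

```python
def validate(tx_readset, state):
    """Reject a tx whose read-set versions no longer match committed state."""
    return all(state.get(key) == version for key, version in tx_readset.items())

state = {"k": 0}
tx = {"k": 0}                   # the tx read key "k" at version 0

assert validate(tx, state)      # the first copy commits...
state["k"] = 1                  # ...and bumps the key's version
assert not validate(tx, state)  # the duplicate is now rejected
```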
bmkor (Sat, 20 May 2017 13:33:15 GMT):
Thanks. Still digesting. I can't broadcast more tx without changing chain in this case, can I?
bmkor (Sat, 20 May 2017 13:41:43 GMT):
Thanks. Gotcha.
bmkor (Sun, 21 May 2017 05:51:20 GMT):
May I ask if SBFT is ready?
yacovm (Sun, 21 May 2017 06:10:53 GMT):
It won't be in v1
bmkor (Sun, 21 May 2017 07:00:20 GMT):
Thanks.
MohammadObaid (Sun, 21 May 2017 19:20:49 GMT):
Has joined the channel.
dave.enyeart (Sun, 21 May 2017 22:23:51 GMT):
Question from @rickr - what is the difference between CONFIG and CONFIG_UPDATE transaction? Is CONFIG only used for genesis block and any subsequent config update uses CONFIG_UPDATE? In the proto definitions it isn't quite clear...
kostas (Sun, 21 May 2017 22:34:49 GMT):
@dave.enyeart The explanations provided here https://github.com/hyperledger/fabric/blob/master/docs/source/configtx.rst#anatomy-of-a-configuration (for CONFIG) and here https://github.com/hyperledger/fabric/blob/master/docs/source/configtx.rst#configuration-updates (for CONFIG_UPDATE) should remove all ambiguity
dave.enyeart (Sun, 21 May 2017 23:34:15 GMT):
@kostas Is this a decent summary: clients can submit a CONFIG_UPDATE which is a sparse structure defining which config elements will be updated. Orderer uses this to build a fully populated CONFIG transaction (merge between prior CONFIG and the new CONFIG_UPDATES). The new CONFIG is then delivered to all peers in a config block. Only the final CONFIG transaction gets persisted to the channel's ledger.
kostas (Mon, 22 May 2017 00:15:57 GMT):
@dave.enyeart I'd say this sounds right
lehors (Mon, 22 May 2017 14:17:25 GMT):
hi guys, I have a question about kafka: what happens if the orderers get split in two groups?
lehors (Mon, 22 May 2017 14:18:25 GMT):
my naive understanding leads me to think that the group with the leader would continue, and the other would elect a new one which would create a split but I imagine there is something to prevent or recover from this?
lehors (Mon, 22 May 2017 14:18:55 GMT):
@dave.enyeart?
lehors (Mon, 22 May 2017 14:19:19 GMT):
@kostas is offline...
kostas (Mon, 22 May 2017 14:23:36 GMT):
@kostas is here. What do orderers have to do with leaders/followers?
kostas (Mon, 22 May 2017 14:23:50 GMT):
Are you talking about OSNs or Kafka brokers?
lehors (Mon, 22 May 2017 14:24:13 GMT):
ah, I guess I'm showing how even more limited my understanding is :)
lehors (Mon, 22 May 2017 14:24:59 GMT):
so, forget the part about the leaders, how do we deal with a split of the group in 2?
kostas (Mon, 22 May 2017 14:26:32 GMT):
The orderers are not really connected to each other directly anyway, so the split into 2 doesn't really make sense. What matters is: can each orderer reach out to the Kafka cluster?
kostas (Mon, 22 May 2017 14:27:08 GMT):
If yes, it can forward transactions in there for ordering, and it can read from it to figure out what the globally agreed order is.
kostas (Mon, 22 May 2017 14:27:32 GMT):
If not, it won't be able to do any of those things, but it won't fork either.
kostas (Mon, 22 May 2017 14:27:52 GMT):
This might shed some more light on how things work: https://docs.google.com/document/d/1vNMaM7XhOlu9tB_10dKnlrhy5d7b1u8lSY8a-kVjCO4
lehors (Mon, 22 May 2017 14:28:10 GMT):
thanks, I appreciate the pointer
lehors (Mon, 22 May 2017 14:28:25 GMT):
I'm happy to do a bit of reading before asking more stupid questions :)
kostas (Mon, 22 May 2017 14:52:55 GMT):
No problem, will be happy to discuss any concerns there are.
jyellick (Mon, 22 May 2017 14:58:28 GMT):
@dave.enyeart You said:
> Orderer uses this to build a fully populated CONFIG transaction (merge between prior CONFIG and the new CONFIG_UPDATES). The new CONFIG is then delivered to all peers in a config block. Only the final CONFIG transaction gets persisted to the channel's ledger.
This isn't entirely true. The original CONFIG_UPDATE tx is embedded in the generated CONFIG tx. This is necessary for clients to validate the orderer's application of the config.
dave.enyeart (Mon, 22 May 2017 14:59:57 GMT):
ok got it, so the CONFIG transaction will have a fully populated config (one stop shopping for config), as well as the embedded CONFIG_UPDATE that triggered the config change
kostas (Mon, 22 May 2017 15:00:21 GMT):
> The last_update field is defined below in the Updates to configuration section, but is only necessary when validating the configuration, not reading it.
kostas (Mon, 22 May 2017 15:00:38 GMT):
From the doc I linked to yesterday.
kostas (Mon, 22 May 2017 15:08:29 GMT):
Ah, and I see what I missed yesterday. You wrote CONFIG transaction, referring to the `Config` proto. I assumed (incorrectly) you were referring to the transaction (envelope) with `HeaderType_CONFIG`.
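The CONFIG_UPDATE → CONFIG flow discussed above can be sketched as a toy merge (the field names are loose stand-ins for the real protos):

```python
def apply_config_update(current_config, config_update):
    """Merge a sparse CONFIG_UPDATE into the current config, embedding the
    original update so clients can validate the orderer's application of it."""
    new_config = dict(current_config)
    new_config.update(config_update["write_set"])
    return {"config": new_config, "last_update": config_update}

current = {"BatchSize": 10, "ConsensusType": "kafka"}
update = {"write_set": {"BatchSize": 500}}   # sparse: only the changed element

result = apply_config_update(current, update)
assert result["config"] == {"BatchSize": 500, "ConsensusType": "kafka"}
assert result["last_update"] is update   # original update travels with the CONFIG tx
```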
nickmelis (Mon, 22 May 2017 16:18:01 GMT):
[Anyone able to help with this?](https://chat.hyperledger.org/channel/fabric-consensus?msg=i8KaN8PGgAsj7YvrC)
kostas (Mon, 22 May 2017 16:20:15 GMT):
My very-very quick theory here:
kostas (Mon, 22 May 2017 16:20:32 GMT):
The 4th node is in a different view, so it's not actively participating in the network
kostas (Mon, 22 May 2017 16:20:42 GMT):
Only syncing via state updates when there are checkpoint quorums
kostas (Mon, 22 May 2017 16:20:51 GMT):
Which happens with a lag
kostas (Mon, 22 May 2017 16:21:02 GMT):
So this would explain the behavior
nickmelis (Mon, 22 May 2017 16:25:29 GMT):
right...how could I avoid that?
kostas (Mon, 22 May 2017 16:41:53 GMT):
You cannot avoid that. Once the view-change counter rolls forward it cannot (shouldn't) roll back. You need to figure out which view counter that node is on, and then force the network to switch to that view.
kostas (Mon, 22 May 2017 16:42:13 GMT):
You take the current leader off, thus forcing the network to bump a view, and move to the next leader.
kostas (Mon, 22 May 2017 16:42:30 GMT):
And if that still doesn't match what the 4th node was looking for, you repeat the process.
kostas (Mon, 22 May 2017 16:43:14 GMT):
I'm waving my hands over this I know, but it's hopefully a pointer to keep you going. Explaining the process in details takes time, and we're knee-deep in the 1.0 work right now, so that's a problem.
mlishok (Mon, 22 May 2017 18:17:21 GMT):
@nickmelis something similar is written up here : https://developer.ibm.com/answers/questions/336783/when-does-the-blocks-across-the-peers-get-synchron/
s.narayanan (Mon, 22 May 2017 22:36:46 GMT):
[ ](https://chat.hyperledger.org/channel/fabric-consensus?msg=vKe2vNLobFDvjpgv5) @kostas I modified the docker-compose-cli.yaml file as mentioned here to run e2e_cli test with Kafka and ZK: https://github.com/kchristidis/fabric/blob/7f83f40fdd8c5cd0a837a820f728f3b864e1cce2/examples/e2e_cli/docker-compose-cli.yaml. However I get an error. Appreciate any thoughts on what might be going wrong: CHANNEL_NAME=testchannel TIMEOUT=1000 docker-compose -f docker-compose-cli.yaml up -d
ERROR: The Compose file './docker-compose-cli.yaml' is invalid because:
Additional properties are not allowed ('kafka0', 'zookeeper' were unexpected)
You might be seeing this error because you're using the wrong Compose file version. Either specify a version of "2" (or "2.0") and place your service definitions under the `services` key, or omit the `version` key and place your service definitions at the root of the file to use version 1.
rahulhegde (Tue, 23 May 2017 00:28:32 GMT):
[ ](https://chat.hyperledger.org/channel/fabric-consensus?msg=7vN53NdjNNWRgSaQY) @kostas
Also got from the documentation https://github.com/hyperledger/fabric/blob/master/docs/source/ledger.rst#ledger that the ledger is per channel. And this remains true even if we install + instantiate more than 1 chaincode (like https://github.com/hyperledger/fabric/tree/master/examples/chaincode/go/chaincode_example02, https://github.com/hyperledger/fabric/tree/master/examples/chaincode/go/chaincode_example04) per channel.
rahulhegde (Tue, 23 May 2017 00:31:53 GMT):
Can you please help me understand what the Consortium role is (https://github.com/hyperledger/fabric/blob/master/examples/e2e_cli/configtx.yaml#L17)?
s.narayanan (Tue, 23 May 2017 00:53:22 GMT):
[ ](https://chat.hyperledger.org/channel/fabric-consensus?msg=gTFikZk6BA3RSSfyg) @s.narayanan sorry, it was a typo in my yaml file, so it's resolved.
kostas (Tue, 23 May 2017 03:16:18 GMT):
@rahulhegde:
kostas (Tue, 23 May 2017 03:16:32 GMT):
> Also got from the documentation https://github.com/hyperledger/fabric/blob/master/docs/source/ledger.rst#ledger that the ledger is per channel. And this remains true even if we install + instantiate more than 1 chaincode (like https://github.com/hyperledger/fabric/tree/master/examples/chaincode/go/chaincode_example02, https://github.com/hyperledger/fabric/tree/master/examples/chaincode/go/chaincode_example04) per channel.
kostas (Tue, 23 May 2017 03:17:59 GMT):
I'm aware of this, and this doesn't contradict what I wrote earlier. See: https://chat.hyperledger.org/channel/fabric-consensus?msg=Zy83k4rskm6pMNGD6
kostas (Tue, 23 May 2017 03:19:41 GMT):
Are you writing this to support your claim that the blockchain should be "maintained at the user-chaincode-id granularity"?
kostas (Tue, 23 May 2017 03:20:15 GMT):
If that is the case, I am still unable to follow, and I'll repeat my request: https://chat.hyperledger.org/channel/fabric-consensus?msg=7vN53NdjNNWRgSaQY
kostas (Tue, 23 May 2017 03:21:34 GMT):
If you are writing this to agree with the "one blockchain per channel" strategy (which is what I wrote, and the logical way of doing things), then we're good :slight_smile:
kostas (Tue, 23 May 2017 03:23:33 GMT):
> Can please help to understand what is Consortium Role
kostas (Tue, 23 May 2017 03:24:02 GMT):
Think of consortiums as groups of orgs who are allowed to create channels with each other.
kostas (Tue, 23 May 2017 03:24:30 GMT):
In the example that you linked to, we have such a group of orgs, called "SampleConsortium", comprising orgs "Org1" and "Org2".
kostas (Tue, 23 May 2017 03:26:36 GMT):
The system channel holds these consortium definitions.
kostas (Tue, 23 May 2017 03:26:46 GMT):
They can be updated to add/remove members.
kostas (Tue, 23 May 2017 03:27:04 GMT):
So you can have consortiumFoo with orgs Foo1, Foo2, Foo3, Foo4.
kostas (Tue, 23 May 2017 03:27:16 GMT):
And consortiumBar with orgs Bar1, Bar2, Bar3.
kostas (Tue, 23 May 2017 03:29:03 GMT):
Every consortium has its own (agreed-upon by the members) ChannelCreationPolicy.
kostas (Tue, 23 May 2017 03:29:58 GMT):
For instance, ALL or ANY of the members of the channel should sign to authorize channel creation.
kostas (Tue, 23 May 2017 03:30:22 GMT):
So assume for the sake of this example that the ChannelCreationPolicy for consortiumFoo is ALL.
kostas (Tue, 23 May 2017 03:30:38 GMT):
And the ChannelCreationPolicy for consortiumBar is ANY
kostas (Tue, 23 May 2017 03:31:15 GMT):
In practice this means that the only valid channel creation requests we can have are:
kostas (Tue, 23 May 2017 03:33:23 GMT):
1. A channel creation request that involves any members from the set {Foo1, Foo2, Foo3, Foo4} and requires signatures from ALL channel members. e.g. "create channel random_foo with members Foo2 and Foo3". This will only go through if it's signed by Foo2 and Foo3.
kostas (Tue, 23 May 2017 03:34:19 GMT):
2. A channel creation request that involves any members from the set {Bar1, Bar2, Bar3} and requires signatures from any of the channel members. In this case, "create channel random_bar with members {Bar1, Bar2}" signed by just Bar2 would be a valid channel creation request.
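The two cases above can be sketched as a toy check (Python, with made-up names; this is not Fabric's actual policy engine):

```python
# Toy sketch of consortium-scoped channel creation checks.
# All names are illustrative, not Fabric's real data structures.

CONSORTIUMS = {
    "consortiumFoo": {"members": {"Foo1", "Foo2", "Foo3", "Foo4"}, "policy": "ALL"},
    "consortiumBar": {"members": {"Bar1", "Bar2", "Bar3"}, "policy": "ANY"},
}

def valid_channel_creation(consortium, channel_members, signers):
    spec = CONSORTIUMS[consortium]
    # Every org in the new channel must belong to the consortium.
    if not channel_members <= spec["members"]:
        return False
    # The signatures must satisfy the consortium's ChannelCreationPolicy.
    if spec["policy"] == "ALL":
        return channel_members <= signers
    return bool(channel_members & signers)  # ANY

# "create channel random_foo with members Foo2 and Foo3", signed by both:
assert valid_channel_creation("consortiumFoo", {"Foo2", "Foo3"}, {"Foo2", "Foo3"})
# The same request signed only by Foo2 fails under ALL:
assert not valid_channel_creation("consortiumFoo", {"Foo2", "Foo3"}, {"Foo2"})
# "create channel random_bar with members {Bar1, Bar2}" signed by just Bar2 (ANY):
assert valid_channel_creation("consortiumBar", {"Bar1", "Bar2"}, {"Bar2"})
```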
kostas (Tue, 23 May 2017 03:34:40 GMT):
A few additional thoughts/references:
kostas (Tue, 23 May 2017 03:36:53 GMT):
1. Once channel "random_foo" has been created, nothing prevents those channel members from inviting an org from a different consortium to their channel. The "it has to be in the same consortium" concept applies only for channel creation.
kostas (Tue, 23 May 2017 03:37:20 GMT):
2. The ChannelCreationPolicy of the consortium becomes the mod_policy of the new channel's "Application" ConfigGroup.
kostas (Tue, 23 May 2017 03:37:45 GMT):
3. I believe that configtxgen defaults the ChannelCreationPolicy to ANY.
kostas (Tue, 23 May 2017 03:41:59 GMT):
4. Read https://github.com/hyperledger/fabric/blob/master/docs/source/configtx.rst#orderer-system-channel-configuration till the end of the doc (written by @jyellick) if you want a bit more low-level detail on how the consortium concept is defined/maintained/used by the orderers.
kostas (Tue, 23 May 2017 03:44:35 GMT):
5. @jeffgarratt's `bootstrap.feature` is the reference file to check if you want to see how the consortium concept blends with the rest of the flow: https://github.com/hyperledger/fabric/blob/master/bddtests/features/bootstrap.feature - you'll see that we start with a system chain that has no consortiums defined, then we define a consortium with some orgs, then these orgs can create a channel with each other, etc. This flow is still being refined/expanded so keep an eye on it.
kostas (Tue, 23 May 2017 03:48:02 GMT):
@s.narayanan: I'm glad you got it to work. In the future I'd suggest you create a JIRA item, file it under the "fabric-orderer" component, and assign it to me (or mention my name in the comments of the issue) so that I get a notification and look into it. You'll have to identify the minimum list of necessary steps that will allow me to reproduce the problem, and include any artifacts (YAML files, etc.) that I may have to look into.
anik (Tue, 23 May 2017 14:20:52 GMT):
Has joined the channel.
rahulhegde (Tue, 23 May 2017 14:29:37 GMT):
[ ](https://chat.hyperledger.org/channel/fabric-consensus?msg=qDNgTmy9Zkaxz2jKW) @kostas
Thank you - this is very useful. Let me try this out.
srvnnp (Wed, 24 May 2017 13:39:06 GMT):
Has joined the channel.
SandySun2000 (Wed, 24 May 2017 16:38:57 GMT):
Has joined the channel.
Glen (Thu, 25 May 2017 15:19:51 GMT):
Hi @kostas, I have some doubts. What kind of concept is the "channel" that is created against an orderer and that the peers can join? Is it only another "subnet", or does it give the joined peers some special roles for deliver? And if I have more than one orderer, how is that handled? Can one channel hold more than one orderer, so that the other orderers also join the channel? I haven't tried that.
kostas (Thu, 25 May 2017 15:21:55 GMT):
Channels are a means to have logical sub-networks between orgs. All orderers have access to all channels.
Glen (Thu, 25 May 2017 15:54:12 GMT):
How can the other orderers access it? By joining the channel?
kostas (Thu, 25 May 2017 16:11:32 GMT):
The channel creation is recorded on the system channel, a channel that only the orderers have access to. The orderers therefore are aware of all channels available.
bmkor (Thu, 25 May 2017 17:47:55 GMT):
Got a question, hope someone can help. After the orderer notifies the client that the endorsed proposal was in proper order by returning "success", the leading peer updates its ledger and gets the other peers to commit too. Would every peer, no matter whether its commit is rejected or not, notify the client? Or just some peers?
kostas (Thu, 25 May 2017 18:59:30 GMT):
When the orderer returns a success message, the transaction has been received for ordering -- at this point the peers don't do any ledger updates.
kostas (Thu, 25 May 2017 19:00:01 GMT):
They will update their ledger once they receive that transaction on the deliver call.
kostas (Thu, 25 May 2017 19:00:29 GMT):
From that point on, I'm not sure what you're asking exactly? Whether every peer gets the transaction, or just some? Something else?
jeffgarratt (Thu, 25 May 2017 21:18:55 GMT):
@bmkor to expound on @kostas's remark, the broadcast to the orderer 'may' get a success if the msg is accepted. The second phase of the peer receiving the subsequent block from the orderer is completely isolated from the broadcast (and of course, asynchronous in nature).
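The decoupling described here can be sketched as a toy model (all names hypothetical, not Fabric's API): broadcast "success" only means the envelope was accepted for ordering, and the ledger changes only on deliver.

```python
# Toy sketch: Broadcast returns "SUCCESS" once the envelope is enqueued for
# ordering; peers apply state only when the cut block arrives via Deliver.
# Illustrative code, not Fabric's actual implementation.
from collections import deque

class ToyOrderer:
    def __init__(self):
        self.pending = deque()

    def broadcast(self, envelope):
        self.pending.append(envelope)
        return "SUCCESS"           # accepted for ordering; nothing committed yet

    def cut_block(self):           # happens later, asynchronously
        block = list(self.pending)
        self.pending.clear()
        return block

class ToyPeer:
    def __init__(self):
        self.ledger = []

    def deliver(self, block):      # ledger updates happen only here
        self.ledger.append(block)

orderer, peer = ToyOrderer(), ToyPeer()
assert orderer.broadcast("tx1") == "SUCCESS"
assert peer.ledger == []           # success != committed
peer.deliver(orderer.cut_block())
assert peer.ledger == [["tx1"]]
```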
bmkor (Fri, 26 May 2017 00:07:29 GMT):
Thanks a lot! I would like to know: if the client got a "success" msg from the orderer, can the client assume it is all done? Does the client not need to check a peer's ledger?
jyellick (Fri, 26 May 2017 00:40:21 GMT):
@bmkor A client receiving "success" from the orderer guarantees that the transaction will be ordered. Under very odd circumstances it may not appear in a block (primarily if the certificate is revoked between submission and final order). Once in a block it must also be checked to make sure that it was applied successfully by the peer. For instance, two transactions in the same block could both try to modify the same key at the same version, and one would necessarily fail.
jyellick (Fri, 26 May 2017 00:41:29 GMT):
So, in summary, the client will need to check the peer's ledger to be assured of commit
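The same-block conflict mentioned above can be sketched as a toy MVCC validation pass (illustrative names; real Fabric validates full read/write sets per transaction):

```python
# Toy sketch of the MVCC check: two transactions in one block both read key
# "k" at version 0 and try to write it; the second necessarily fails
# validation. Illustrative only, not Fabric's validation code.

state = {"k": ("v0", 0)}  # key -> (value, version)

def validate_and_commit(block):
    results = []
    for txid, read_ver, new_val in block:
        _, cur_ver = state["k"]
        if read_ver != cur_ver:        # stale read set -> marked invalid
            results.append((txid, "INVALID"))
            continue
        state["k"] = (new_val, cur_ver + 1)
        results.append((txid, "VALID"))
    return results

block = [("tx1", 0, "a"), ("tx2", 0, "b")]  # both built against version 0
assert validate_and_commit(block) == [("tx1", "VALID"), ("tx2", "INVALID")]
```

This is why the client has to check commit status on a peer: the orderer's "success" cannot know which of the two conflicting transactions will win.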
bmkor (Fri, 26 May 2017 00:41:41 GMT):
@jyellick thanks. So which peer should I trust to check against?
jyellick (Fri, 26 May 2017 00:42:26 GMT):
In general, the relationship between an SDK and a peer is a trusted one. Usually they are administered by the same interested party
bmkor (Fri, 26 May 2017 00:42:27 GMT):
And do I need to check with the majority or just one would be enough?
jyellick (Fri, 26 May 2017 00:43:37 GMT):
Usually just one, because of the trusted relationship between the SDK and peer. If you want to participate fully in a network, your organization should deploy its own peers, and connect its SDKs to those peers.
bmkor (Fri, 26 May 2017 00:44:04 GMT):
Thanks a lot.
jyellick (Fri, 26 May 2017 00:44:11 GMT):
You're welcome, happy to help
kostas (Fri, 26 May 2017 00:45:56 GMT):
In a scenario that's not exactly user-friendly and thus difficult to pull off, couldn't we argue that just one is enough, given that the block is signed?
jyellick (Fri, 26 May 2017 00:48:13 GMT):
@kostas A peer can pull blocks from any other single peer (for instance, implemented via gossip) and do so with complete safety (because the peer can apply the same `Deliver` checks to it). The SDKs assume the peer they connect to is not byzantine. So, to do what you describe, I would assume it would be equivalent to deploying a local peer, feeding it blocks via gossip or otherwise, then pointing the SDK to it
jyellick (Fri, 26 May 2017 00:48:55 GMT):
I do not believe there is any support intended in the SDKs to tolerate a byzantine peer (in fact, they do not generally check things like the hash chaining across blocks.)
bmkor (Fri, 26 May 2017 00:53:23 GMT):
Would the assumption that the peer is not Byzantine be too strong? If I understand correctly.
jyellick (Fri, 26 May 2017 01:28:59 GMT):
From an SDK, to its peer, the assumption is the peer is not byzantine.
jyellick (Fri, 26 May 2017 01:29:17 GMT):
But from a peer to another peer, there is no non-byzantine assumption
baohua (Fri, 26 May 2017 01:29:32 GMT):
aha, IMHO this is a common problem in distributed systems: usually if the client needs strong confidence, it needs to query a majority to get a consistent result.
jyellick (Fri, 26 May 2017 01:30:56 GMT):
@baohua In fact, I would argue, the only way to ever have complete confidence in a read is effectively to do it as a write. Usually, even getting a majority is not good enough, because the queries are answered at different points in time
baohua (Fri, 26 May 2017 01:31:24 GMT):
correct, you're more accurate :)
baohua (Fri, 26 May 2017 01:32:28 GMT):
Btw, the orderer only guarantees the order of the batches of txs, and each peer does the commitment locally; hence if there are bad peers, they may generate different blocks. How does the network handle it if different blocks are generated?
jyellick (Fri, 26 May 2017 01:33:27 GMT):
As with most byzantine fault tolerant systems, the promise is only that "all the honest participants arrive at the same answer"
jyellick (Fri, 26 May 2017 01:34:02 GMT):
A bad peer may of course corrupt its own world view, but it is not capable of doing so to its honest neighbor
baohua (Fri, 26 May 2017 01:47:47 GMT):
so will gossip find those mismatching blocks and trigger some sync? Trying to figure out how this case is handled.
jyellick (Fri, 26 May 2017 02:14:46 GMT):
Blocks have proof that they were consented upon, in particular, signatures over the block header from the orderer or orderers which generated them. So, if a bad peer attempts to send a good peer a forged block, the good peer will detect the block as bad because the correct signatures are not included.
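The forged-block detection can be sketched as a toy hash-plus-signature check. HMAC stands in for the orderer's real X.509/ECDSA signature; everything here is illustrative, not Fabric's block format.

```python
# Toy sketch: a block header embeds a hash of the data (and the previous
# header hash) and carries an orderer signature over the header. A good peer
# recomputes both and rejects anything that doesn't verify.
import hashlib
import hmac

ORDERER_KEY = b"orderer-signing-key"  # stand-in for the orderer's real key

def make_block(prev_hash, data):
    header = prev_hash + hashlib.sha256(data).digest()
    sig = hmac.new(ORDERER_KEY, header, hashlib.sha256).digest()
    return {"header": header, "data": data, "sig": sig}

def peer_accepts(block, prev_hash):
    # The header must match the data it claims to cover...
    header = prev_hash + hashlib.sha256(block["data"]).digest()
    if header != block["header"]:
        return False
    # ...and the orderer's signature over the header must verify.
    expected = hmac.new(ORDERER_KEY, header, hashlib.sha256).digest()
    return hmac.compare_digest(expected, block["sig"])

genesis_hash = b"\x00" * 32
good = make_block(genesis_hash, b"tx-batch-1")
assert peer_accepts(good, genesis_hash)

# A bad peer tampering with the data cannot produce a verifying block,
# because it lacks the orderer's signing key.
tampered = dict(good, data=b"evil")
assert not peer_accepts(tampered, genesis_hash)
```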
bmkor (Fri, 26 May 2017 02:32:03 GMT):
How could the SDK know this peer is corrupted?
jyellick (Fri, 26 May 2017 02:37:31 GMT):
Peer to peer communication is byzantine fault tolerant, but SDK to peer is not
bmkor (Fri, 26 May 2017 03:46:49 GMT):
Thanks.
bmalavan (Sat, 27 May 2017 15:32:06 GMT):
Has joined the channel.
guoger (Mon, 29 May 2017 04:59:11 GMT):
I created two JIRA items to capture potential bugs in jsonledger and deliver, please take a look. Any feedback is welcome.
https://jira.hyperledger.org/browse/FAB-4201
https://jira.hyperledger.org/browse/FAB-4202
I already have fixes for them, but need to separate the code from other patchsets, particularly c/9707 and c/9867
Thanks!
rahulhegde (Mon, 29 May 2017 23:52:02 GMT):
[ ](https://chat.hyperledger.org/channel/fabric-consensus?msg=sN5KiKaPgHvbyGCcD) @kostas
I modified e2e_cli as below
```
--- a/examples/e2e_cli/configtx.yaml
+++ b/examples/e2e_cli/configtx.yaml
@@ -18,7 +18,7 @@ Profiles:
SampleConsortium:
Organizations:
- *Org1
- - *Org2
+
TwoOrgsChannel:
Consortium: SampleConsortium
Application:
```
Please correct me if my understanding is wrong: the expectation is that only Org1 is now allowed to create a channel. However, channel creation fails with the orderer reporting:
` 2017-05-29 23:46:57.011 UTC [orderer/common/broadcast] Handle -> WARN 0c5 Rejecting CONFIG_UPDATE because: Attempted to include a member which is not in the consortium `
Channel creation request is performed by using Peer0-Org1 Admin MSP as per the E2E CLI script.
kostas (Tue, 30 May 2017 00:12:09 GMT):
@rahulhegde: That makes sense. As the error message indicates, you attempted to include in the channel an org (org2) which does not belong in the consortium. Remember that the channel creation requests should only include orgs that belong to the same consortium.
kostas (Tue, 30 May 2017 00:16:47 GMT):
In your case, what you want is to keep org2 in the consortium (so that you can have org2 in the channel), but modify the ChannelCreationPolicy so it looks for org1's signature (and that signature only). This is possible, but the tooling that provides this ability is not there yet.
rahulhegde (Tue, 30 May 2017 00:20:40 GMT):
@kostas first, can we say that all organizations that are going to be part of (join) the channel must be part of the consortium?
kostas (Tue, 30 May 2017 00:21:13 GMT):
Correct.
kostas (Tue, 30 May 2017 00:22:05 GMT):
https://chat.hyperledger.org/channel/fabric-consensus?msg=26TS3QG4L7cZde3Gc
rahulhegde (Tue, 30 May 2017 00:25:04 GMT):
and the ChannelCreationPolicy - this would be part of the channel configuration block, and currently I will not be able to see it using the tool's inspectChannelCreateTx option?
kostas (Tue, 30 May 2017 00:30:35 GMT):
You can see the ChannelCreationPolicy for every consortium if you inspect the genesis block (or subsequent configuration blocks) of the system channel.
kostas (Tue, 30 May 2017 00:30:50 GMT):
Remember that https://chat.hyperledger.org/channel/fabric-consensus?msg=hkNBgS8tiDf4WzKS5
adc (Wed, 31 May 2017 14:43:38 GMT):
Hi All, do we have the reconfiguration mechanism ready to be used?
adc (Wed, 31 May 2017 14:43:38 GMT):
Hi All, do we have the reconfiguration mechanism ready to be used? @jyellick @kostas
jyellick (Wed, 31 May 2017 14:55:38 GMT):
@adc I have a number of pending CRs out there
jyellick (Wed, 31 May 2017 14:56:01 GMT):
The doc is not quite finished, but I can push a preliminary set for you to work with if that would be helpful
adc (Wed, 31 May 2017 15:00:20 GMT):
I'm fine; it was just to know the current status. I got a question on this and wanted to be sure how to answer. Could you point me to the change-sets?
jyellick (Wed, 31 May 2017 15:01:36 GMT):
These are the CRs out for review:
https://gerrit.hyperledger.org/r/9701
https://gerrit.hyperledger.org/r/9703
https://gerrit.hyperledger.org/r/9705
https://gerrit.hyperledger.org/r/9719
https://gerrit.hyperledger.org/r/9851
https://gerrit.hyperledger.org/r/9853
https://gerrit.hyperledger.org/r/9879
jyellick (Wed, 31 May 2017 15:01:50 GMT):
I have at least 3 more on my laptop that I am cleaning up
jyellick (Wed, 31 May 2017 15:02:13 GMT):
One of which is doc + an example
guoger (Wed, 31 May 2017 15:05:54 GMT):
Hi @jyellick, thanks for explaining all the additional `isEnabledFor` checks to me the other day. @kostas is reviewing the patches and we kind of prefer cleaner code over the benefit we get in performance. Probably a speed gain at the _ns_ level is not that significant? Need your input, thx!
adc (Wed, 31 May 2017 15:06:44 GMT):
@jyellick cool thanks :)
jyellick (Wed, 31 May 2017 15:08:34 GMT):
Will take a look @guoger , thanks!
jyellick (Thu, 01 Jun 2017 13:45:08 GMT):
FYI, for anyone here interested in the new config tool, please see:
https://github.com/jyellick/fabric-gerrit/tree/configtxlator/examples/configtxupdate
In order to follow the instructions in the document above, the code needs to be updated with the two change series terminating in
https://gerrit.hyperledger.org/r/#/c/10007/
and
https://gerrit.hyperledger.org/r/#/c/9989/
At their current level, you may retrieve these simply by:
```
git fetch origin && git checkout origin/master && git cherry-pick 75294a99eda00371208ec03411784816fa4a19c6..531de02da29a23f3d6ba35edb150638de8a2ecd8 && git cherry-pick b9dd46409b0d9c7f13ba2ec996dd243aa219343b..61264d943b351daf7031d89d10fbdb6417de143a
```
I'll be putting together a recorded video playback on usage at some point soon, but I thought I would post this here for those who were more anxious
s.narayanan (Thu, 01 Jun 2017 14:19:44 GMT):
Is the Solo orderer ledger only maintained in memory? If so, then if the orderer dies, messages will be lost (i.e. those that have not yet been delivered in a block to peers).
kostas (Thu, 01 Jun 2017 14:20:33 GMT):
You can actually have it persist to disk if you choose the file (or json) ledger.
kostas (Thu, 01 Jun 2017 14:20:50 GMT):
https://github.com/hyperledger/fabric/blob/master/sampleconfig/orderer.yaml#L22
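The difference between the in-memory ("ram") ledger and the disk-backed ones can be sketched as a toy append-only json ledger that survives a restart (illustrative only, not Fabric's fileledger/jsonledger code):

```python
# Toy sketch: blocks are appended to a file on disk, so a fresh process
# (simulating an orderer restart) can read back everything that was ordered.
import json
import os
import tempfile

class ToyJsonLedger:
    def __init__(self, path):
        self.path = path

    def append(self, block):
        # One JSON document per line, appended durably to disk.
        with open(self.path, "a") as f:
            f.write(json.dumps(block) + "\n")

    def blocks(self):
        if not os.path.exists(self.path):
            return []
        with open(self.path) as f:
            return [json.loads(line) for line in f]

path = os.path.join(tempfile.mkdtemp(), "chain.json")
ToyJsonLedger(path).append({"number": 0, "txs": ["tx1"]})

# Simulate a restart: a brand-new instance reads the same file.
assert ToyJsonLedger(path).blocks() == [{"number": 0, "txs": ["tx1"]}]
```

With a ram ledger, by contrast, anything not yet delivered to peers is gone when the process dies.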
jeffgarratt (Thu, 01 Jun 2017 15:10:04 GMT):
@kostas @jyellick have a sec? seeing an issue with latest master
jeffgarratt (Thu, 01 Jun 2017 15:10:31 GMT):
might be a config change perhaps due to removal of sbft
latitiah (Thu, 01 Jun 2017 15:41:17 GMT):
When trying to join a channel (using the peer cli command), I receive the following error:
```
Error: proposal failed (err: rpc error: code = 2 desc = Failed to reconstruct the genesis block, Failed to reconstruct the genesis block, proto: bad wiretype for field common.BlockHeader.Number: got wiretype 2, want 0
```
Can anyone point me in the direction to look to figure out this issue?
From the logs, it looks as though the channel has been created successfully
ambatigaan (Thu, 01 Jun 2017 15:49:52 GMT):
Has joined the channel.
jyellick (Thu, 01 Jun 2017 16:27:32 GMT):
@latitiah This is probably better asked on #fabric-peer-endorser-committer but, it sounds to me like you are sending the wrong message to join channel. Could you accidentally be sending the configtx instead of the block?
ambatigaan (Thu, 01 Jun 2017 16:52:31 GMT):
@kostas can the tendermint consensus engine be used in a hyperledger fabric network? @jeffgarratt advised me to post it here. If so, how can it be configured? Any thoughts?
jyellick (Thu, 01 Jun 2017 17:06:45 GMT):
@ambatigaan The consensus architecture of fabric is meant to be pluggable with other consensus types with minimal effort. However, there is no way to simply plug the tendermint consensus engine into fabric without writing some associated glue code (which does not exist to my knowledge). We would welcome the contribution of such code however!
xixuejia (Thu, 01 Jun 2017 17:23:21 GMT):
@jyellick Hi Jason, sorry, I might have done the cherry-pick at the wrong place for https://gerrit.hyperledger.org/r/#/c/10007/
jyellick (Thu, 01 Jun 2017 17:26:04 GMT):
Thanks for the heads up @xixuejia I will move it back to the correct parent
xixuejia (Thu, 01 Jun 2017 17:27:21 GMT):
I could not do a cherry-pick with the commands you provided @jyellick
xixuejia (Thu, 01 Jun 2017 17:27:24 GMT):
fatal: Invalid revision range 75294a99eda00371208ec03411784816fa4a19c6
jyellick (Thu, 01 Jun 2017 17:30:03 GMT):
@xixuejia Please try:
```
git fetch origin && git checkout origin/master && git cherry-pick 75294a99eda00371208ec03411784816fa4a19c6..531de02da29a23f3d6ba35edb150638de8a2ecd8 && git cherry-pick 524b881c3^..ad2a88b2f
```
I just checked this locally and it seems to work
latitiah (Thu, 01 Jun 2017 17:38:35 GMT):
@jyellick : omg - thx!
xixuejia (Thu, 01 Jun 2017 18:03:12 GMT):
@jyellick Is `make configtxlator` the command to build configtxlator? it seems there's no target for it
jyellick (Thu, 01 Jun 2017 18:04:08 GMT):
@xixuejia Are you certain? https://gerrit.hyperledger.org/r/#/c/9853/6/Makefile
xixuejia (Thu, 01 Jun 2017 18:05:16 GMT):
ah, I didn't cherry pick this commit
Nishi (Thu, 01 Jun 2017 18:08:19 GMT):
@jyellick : after getting the patchset for the Makefile, it failed with:
```
common/tools/configtxlator/rest/protolator_handlers.go:35:2: cannot find package "github.com/gorilla/mux" in any of:
/opt/gopath/src/github.com/hyperledger/fabric/vendor/github.com/gorilla/mux (vendor tree)
/opt/go/src/github.com/gorilla/mux (from $GOROOT)
/opt/gopath/src/github.com/gorilla/mux (from $GOPATH)
Makefile:209: recipe for target 'build/bin/configtxlator' failed
make: *** [build/bin/configtxlator] Error 1
```
jyellick (Thu, 01 Jun 2017 18:12:39 GMT):
Sounds like you have not cherry picked the whole series
jyellick (Thu, 01 Jun 2017 18:12:59 GMT):
That library was vendored in an earlier commit
jyellick (Thu, 01 Jun 2017 18:14:01 GMT):
If you cherry pick these CRs
https://gerrit.hyperledger.org/r/9785
https://gerrit.hyperledger.org/r/9851
https://gerrit.hyperledger.org/r/9853
https://gerrit.hyperledger.org/r/9879
https://gerrit.hyperledger.org/r/9981
https://gerrit.hyperledger.org/r/9989
in order, that should be what you need
Nishi (Thu, 01 Jun 2017 18:15:07 GMT):
it worked :)
bretharrison (Thu, 01 Jun 2017 21:28:02 GMT):
rebuilt today and now getting
```
Error unmarshaling config into struct: 1 error(s) decoding:
* '' has invalid keys: genesis, sbftlocal
panic: Error unmarshaling config into struct:1 error(s) decoding:
* '' has invalid keys: genesis, sbftlocal
goroutine 1 [running]:
panic(0xad2b60, 0xc42026e110)
/opt/go/src/runtime/panic.go:500 +0x1a1
github.com/hyperledger/fabric/vendor/github.com/op/go-logging.(*Logger).Panic(0xc4201e87e0, 0xc4202644e0, 0x2, 0x2)
/opt/gopath/src/github.com/hyperledger/fabric/vendor/github.com/op/go-logging/logger.go:188 +0xd0
github.com/hyperledger/fabric/orderer/localconfig.Load(0x40952b)
/opt/gopath/src/github.com/hyperledger/fabric/orderer/localconfig/config.go:182 +0x52c
main.main()
/opt/gopath/src/github.com/hyperledger/fabric/orderer/main.go:52 +0x29
```
tried to rebuild the genesis block with `configtxgen`... but am still getting it
jyellick (Thu, 01 Jun 2017 21:31:40 GMT):
@bretharrison You most likely have a stale local `orderer.yaml` sitting around somewhere
jyellick (Thu, 01 Jun 2017 21:32:42 GMT):
Those keys were removed as SBFT was removed from the v1 release. Extra config values in the `orderer.yaml` will cause an error (this is by design, to help detect typos)
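The strict decoding behaviour can be sketched as a toy loader that rejects unknown keys, mimicking the "has invalid keys" error above (the key names here are guesses for illustration, not the real orderer.yaml schema):

```python
# Toy sketch: a config decoder that fails loudly on keys the schema does not
# declare, so stale sections (like the removed genesis/sbftlocal ones) are
# caught instead of silently ignored. Illustrative only; the real orderer
# uses a mapstructure-based decoder in Go.

KNOWN_KEYS = {"general", "fileledger", "ramledger", "kafka"}  # assumed names

def load_config(raw):
    unknown = set(raw) - KNOWN_KEYS
    if unknown:
        # Mirrors the spirit of: "Error unmarshaling config into struct:
        # '' has invalid keys: genesis, sbftlocal"
        raise ValueError("has invalid keys: " + ", ".join(sorted(unknown)))
    return raw

# A stale config with removed sections fails:
try:
    load_config({"general": {}, "genesis": {}, "sbftlocal": {}})
    rejected = False
except ValueError as e:
    rejected = "genesis" in str(e) and "sbftlocal" in str(e)
assert rejected
```

The upside of this design is exactly what's described above: a typo'd or obsolete key becomes a hard error at startup rather than a silently ignored setting.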
bretharrison (Thu, 01 Jun 2017 21:35:44 GMT):
thanks, will have a look
kostas (Thu, 01 Jun 2017 21:42:49 GMT):
> https://chat.hyperledger.org/channel/fabric-consensus?msg=evdWA4Ad5XGJ8bYkc
kostas (Thu, 01 Jun 2017 21:44:20 GMT):
@ambatigaan: As is, it cannot run. Someone would need to write a plugin for it, i.e. a wrapper for it to make it play nicely with those interfaces: https://github.com/hyperledger/fabric/blob/master/orderer/multichain/chainsupport.go#L36...L63
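A toy Python rendering of the kind of glue code involved (the method names loosely mirror the Go `Consenter`/`Chain` interfaces linked above; this is a sketch of the shape of a plugin, not a working Fabric consenter):

```python
# Toy sketch: a consensus plugin supplies a "consenter" that hands back one
# "chain" per channel; the orderer enqueues envelopes into the chain, which
# decides when to cut them into blocks. All names are illustrative.

class SoloLikeChain:
    def __init__(self):
        self.queue = []
        self.blocks = []

    def enqueue(self, envelope):
        # Called by the orderer for each broadcast message; returning True
        # means the message was accepted for ordering.
        self.queue.append(envelope)
        return True

    def cut_block(self):
        # A real chain cuts blocks on a batch-size/timeout policy; here we
        # cut on demand to keep the sketch small.
        self.blocks.append(list(self.queue))
        self.queue.clear()

class SoloLikeConsenter:
    def handle_chain(self, support=None):
        # `support` stands in for the per-channel resources the orderer
        # would pass to a real plugin.
        return SoloLikeChain()

chain = SoloLikeConsenter().handle_chain()
assert chain.enqueue("tx1") and chain.enqueue("tx2")
chain.cut_block()
assert chain.blocks == [["tx1", "tx2"]]
```

A tendermint plugin would implement the same shape, delegating the ordering decision to the tendermint engine instead of a local queue.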
muralisr (Thu, 01 Jun 2017 22:00:41 GMT):
@bretharrison looks like the issue @jeffgarratt had this morning...
bretharrison (Thu, 01 Jun 2017 22:02:08 GMT):
I must have had an old orderer.yaml, I rebuilt again after cleaning out an old copy of fabric and I am running again
guoger (Fri, 02 Jun 2017 03:14:37 GMT):
Could somebody advise on test code style, specifically `assert.*` vs `if` statements? I see they co-exist in the current codebase; personally I prefer the former, which seems cleaner, though it needs an extra pkg. Thoughts?
kostas (Fri, 02 Jun 2017 03:14:59 GMT):
Yes please, go for `assert` whenever possible.
kostas (Fri, 02 Jun 2017 03:15:30 GMT):
Feel free to switch existing tests to assert as well, when you're touching a file, for consistency.
jyellick (Fri, 02 Jun 2017 04:18:12 GMT):
FYI, for the `configtxlator` tool I announced here earlier today, please note a new CR which is required additionally for those operating in go 1.7 environments (ie, vagrant) https://gerrit.hyperledger.org/r/#/c/10059/
adc (Fri, 02 Jun 2017 08:38:51 GMT):
@jyellick @kostas is there a way to specify the channel's policies at genesis?
VamsiKrishnak (Fri, 02 Jun 2017 10:34:44 GMT):
Has joined the channel.
adc (Fri, 02 Jun 2017 12:20:50 GMT):
@jyellick @kostas do we have a method to verify that a policy is well formed?
adc (Fri, 02 Jun 2017 12:25:28 GMT):
actually, the compilation process is a kind of verification process, right?
jyellick (Fri, 02 Jun 2017 13:24:57 GMT):
@adc Yes, I can additionally give you another mechanism
jyellick (Fri, 02 Jun 2017 13:25:49 GMT):
```
curl -X POST --data-binary @policy.json http://127.0.0.1:7059/protolator/encode/common.Policy
```
jyellick (Fri, 02 Jun 2017 13:26:07 GMT):
Where `policy.json` is your policy encoded as JSON
adc (Fri, 02 Jun 2017 13:26:15 GMT):
I'm fine with the compilation. The point is related to this JIRA item https://jira.hyperledger.org/browse/FAB-4250. It looks to me that an instantiate transaction should not be marked as valid if the endorsement policy is not well formed, right?
jyellick (Fri, 02 Jun 2017 13:30:04 GMT):
Ah okay, yes, I would agree
adc (Fri, 02 Jun 2017 13:31:43 GMT):
the jira item does not go that far, but because the committing peer is our last chance to check that everything is okay, we should avoid having system transactions (like the instantiate) marked valid when they are meaningless
adc (Fri, 02 Jun 2017 13:32:18 GMT):
the effect of an instantiate with a bogus endorsement policy is that the chaincode cannot be invoked, and probably not even updated, no?
adc (Fri, 02 Jun 2017 13:32:29 GMT):
therefore the name would be locked forever, no?
adc (Fri, 02 Jun 2017 13:35:40 GMT):
ah no, for the upgrade what counts is the instantiation policy.
xixuejia (Fri, 02 Jun 2017 15:34:41 GMT):
@adc 1. endorser should not endorse such instantiation tx; 2. If somehow item 1 check passes, committing peer should mark that tx as invalid.
rahulhegde (Fri, 02 Jun 2017 16:57:26 GMT):
Hello @kostas
We were trying out data persistence for the fabric network and hit an orderer panic: ` 2017-06-02 16:20:23.913 UTC [orderer/multichain] newLedgerResources -> CRIT 5b6 Error creating configtx manager and handlers: Bad envelope: Not a tx of type CONFIG `
Following are the steps performed using the alpha3 release of the fabric (published by @rameshthoomu on fabric-ci channel)
1. Run the fabric with data persistence set up for peer and orderer (mounting /var/hyperledger/production). This uses the E2E CLI, but with more organizations (5)
2. Deploy chaincode, perform transactions. Verified: these transactions are reflected in couch-db (and thus the ledger).
3. Restart the docker containers - first time
4. Perform some transactions. Verified: these transactions are reflected in couch-db.
5. Restart the docker containers - second time
6. Orderer panics with the above error.
Can you please advise?
```
2017-06-02 16:58:51.695 UTC [common/config] NewStandardValues -> DEBU 5b1 Initializing protos for *config.ChannelProtos
2017-06-02 16:58:51.695 UTC [common/config] initializeProtosStruct -> DEBU 5b2 Processing field: HashingAlgorithm
2017-06-02 16:58:51.695 UTC [common/config] initializeProtosStruct -> DEBU 5b3 Processing field: BlockDataHashingStructure
2017-06-02 16:58:51.695 UTC [common/config] initializeProtosStruct -> DEBU 5b4 Processing field: OrdererAddresses
2017-06-02 16:58:51.695 UTC [common/config] initializeProtosStruct -> DEBU 5b5 Processing field: Consortium
2017-06-02 16:58:51.695 UTC [orderer/multichain] newLedgerResources -> CRIT 5b6 Error creating configtx manager and handlers: Bad envelope: Not a tx of type CONFIG
panic: Error creating configtx manager and handlers: Bad envelope: Not a tx of type CONFIG
goroutine 1 [running]:
panic(0xb03100, 0xc420163800)
/opt/go/src/runtime/panic.go:500 +0x1a1
github.com/hyperledger/fabric/vendor/github.com/op/go-logging.(*Logger).Panicf(0xc420183d40, 0xc39f84, 0x30, 0xc420163640, 0x1, 0x1)
/opt/gopath/src/github.com/hyperledger/fabric/vendor/github.com/op/go-logging/logger.go:194 +0x127
github.com/hyperledger/fabric/orderer/multichain.(*multiLedger).newLedgerResources(0xc42025e0f0, 0xc420cd4960, 0xc420cd4960)
/opt/gopath/src/github.com/hyperledger/fabric/orderer/multichain/manager.go:159 +0x393
github.com/hyperledger/fabric/orderer/multichain.NewManagerImpl(0x11af3c0, 0xc42032e1e0, 0xc42026c840, 0x11acbc0, 0x11f3900, 0x0, 0x0)
/opt/gopath/src/github.com/hyperledger/fabric/orderer/multichain/manager.go:109 +0x23b
main.initializeMultiChainManager(0xc4201dbd40, 0x11acbc0, 0x11f3900, 0xc420206701, 0xc420206780)
/opt/gopath/src/github.com/hyperledger/fabric/orderer/main.go:199 +0x5c4
main.main()
/opt/gopath/src/github.com/hyperledger/fabric/orderer/main.go:59 +0xb2
```
BhavishaDawda (Fri, 02 Jun 2017 17:04:13 GMT):
Has joined the channel.
jyellick (Fri, 02 Jun 2017 17:09:55 GMT):
@rahulhegde Thank you for the nicely documented bug report, could you please open this as JIRA item and assign it to me?
kostas (Fri, 02 Jun 2017 17:10:22 GMT):
^^ To that I'd add, can you specify which Docker container you restart?
rahulhegde (Fri, 02 Jun 2017 17:18:55 GMT):
[ ](https://chat.hyperledger.org/channel/fabric-consensus?msg=K8qtySsbugxzKT3eG) @kostas
the complete set of containers was brought down: peers (5), orderer (1), and additionally ca (5) and cli (1).
rahulhegde (Fri, 02 Jun 2017 17:35:52 GMT):
[ ](https://chat.hyperledger.org/channel/fabric-consensus?msg=Nzxw6tnj5cMEGj6Ah) @jyellick
https://jira.hyperledger.org/browse/FAB-4330
jyellick (Fri, 02 Jun 2017 18:32:22 GMT):
@rahulhegde One more piece of information, could you add to the JIRA the commit this was produced at?
rahulhegde (Fri, 02 Jun 2017 18:42:54 GMT):
[ ](https://chat.hyperledger.org/channel/fabric-consensus?msg=KPt6tShyGWPDtoWz2) @jyellick
Updated to the JIRA.
FABRIC# -f647b6f
https://chat.hyperledger.org/channel/fabric-ci?msg=8KeH9TZbB4aY5Y9Bb
jyellick (Sun, 04 Jun 2017 15:35:10 GMT):
@here Please note that any existing `configtx.yaml` files are most likely broken and need to be updated because of https://gerrit.hyperledger.org/r/#/c/10127/. Please remove the `BCCSP` sections from the organization definitions if you encounter a panic in `configtxgen` or `orderer` with the provisional bootstrapper.
jyellick (Sun, 04 Jun 2017 15:42:05 GMT):
(Sorry, I sent that a little prematurely, I thought that CR had merged, but it is actually still pending, I expect it to merge shortly, so leaving it posted for now)
latitiah (Sun, 04 Jun 2017 15:45:51 GMT):
To be sure, if I were to remove the BCCSP section now, should I expect any side effects. I just want to do it now while I'm thinking about it :P
jyellick (Sun, 04 Jun 2017 15:48:32 GMT):
@latitiah Sorry, but I actually made a mistake, a similar CR merged, we cannot remove the BCCSP section quite yet
latitiah (Sun, 04 Jun 2017 15:49:00 GMT):
ok. cool. thx
jyellick (Mon, 05 Jun 2017 02:10:29 GMT):
@latitiah The BCCSP removal CR was merged, so next time you update to master you can/should remove the `configtx.yaml` BCCSP sections
latitiah (Mon, 05 Jun 2017 04:10:12 GMT):
@jyellick: Thx! I noticed and updated. :)
guruce (Mon, 05 Jun 2017 07:28:03 GMT):
Has joined the channel.
chenxl (Mon, 05 Jun 2017 07:47:42 GMT):
orderer uses kafka; create channel error:
chenxl (Mon, 05 Jun 2017 07:48:09 GMT):
```
2017-06-05 07:39:39.827 UTC [policies] CommitProposals -> DEBU 48a As expected, current configuration has policy '/Channel/Application/Admins'
2017-06-05 07:39:39.827 UTC [policies] GetPolicy -> DEBU 48b Returning policy Orderer/BlockValidation for evaluation
2017-06-05 07:39:39.827 UTC [policies] CommitProposals -> DEBU 48c As expected, current configuration has policy '/Channel/Orderer/BlockValidation'
2017-06-05 07:39:39.828 UTC [orderer/common/broadcast] Handle -> INFO 48d Consenter instructed us to shut down
2017-06-05 07:39:39.831 UTC [orderer/common/deliver] Handle -> WARN 48e Error reading from stream: stream error: code = 1 desc = "context canceled"
```
jun (Mon, 05 Jun 2017 09:00:20 GMT):
Has joined the channel.
jyellick (Mon, 05 Jun 2017 13:30:45 GMT):
`Consenter Instructed us to shut down` I believe indicates a problem connecting to the kafka cluster. @kostas may have more insight
kostas (Mon, 05 Jun 2017 13:36:38 GMT):
@jyellick is correct. @chenxl your Kafka cluster is not reachable, even if after retrying for the amount of time specified [here](https://github.com/hyperledger/fabric/blob/master/sampleconfig/orderer.yaml#L131). You are most likely not providing the right Kafka broker addresses under the `Brokers` key in your genesis block ([example](https://github.com/hyperledger/fabric/blob/master/sampleconfig/configtx.yaml#L151)). If the problem persists, please open up a JIRA issue and file it against the "fabric-orderer" component with "fix version" set to "v1.0.0" and/or assign it to me. Please make sure to run with the loglevel set to DEBUG and with the verbose flag for Kafka set to true (both settings shown [here](https://github.com/hyperledger/fabric/blob/master/bddtests/dc-orderer-base.yml#L16...L17)), and attach all orderer logs to the JIRA issue, as well as all relevant artifacts (e.g. `configtx.yaml` if you're using `configtxgen` to generate the genesis block for your network).
toddinpal (Mon, 05 Jun 2017 18:01:58 GMT):
Is there a term for the type of consensus algorithm that the Kafka based ordering service uses? I'm thinking in terms of things like PBFT, SBFT, Raft, Tangaroa, etc.
kostas (Mon, 05 Jun 2017 18:09:28 GMT):
@toddinpal: No term. Closest thing to what Kafka's doing is here: https://www.microsoft.com/en-us/research/publication/pacifica-replication-in-log-based-distributed-storage-systems/
toddinpal (Mon, 05 Jun 2017 18:10:26 GMT):
@kostas OK, thanks!
jdockter (Mon, 05 Jun 2017 20:52:29 GMT):
Has left the channel.
vdods (Mon, 05 Jun 2017 21:42:15 GMT):
Has left the channel.
LordGoodman (Tue, 06 Jun 2017 05:01:13 GMT):
does somebody know what the default channel is used for?
jyellick (Tue, 06 Jun 2017 05:07:15 GMT):
Can you give more context of what you mean by 'the default channel'?
jyellick (Tue, 06 Jun 2017 05:09:36 GMT):
There used to be a concept of 'default chain' on the peer, which I believe has been removed. There is an 'ordering system channel', which is used by the orderer for coordinating channel creation
LordGoodman (Tue, 06 Jun 2017 05:18:29 GMT):
@jyellick Thank you for the reply. I just found out that some docker-compose.yaml files set "--peer-defaultchain=false" when starting a peer.
jyellick (Tue, 06 Jun 2017 05:20:49 GMT):
@LordGoodman perhaps you are using an old compose file
jyellick (Tue, 06 Jun 2017 05:20:59 GMT):
https://gerrit.hyperledger.org/r/#/c/9895/ attempted to remove this from the compose files
LordGoodman (Tue, 06 Jun 2017 05:22:53 GMT):
@jyellick thank you
LordGoodman (Tue, 06 Jun 2017 05:25:05 GMT):
@jyellick Do we have any way to find out every member in the same channel ?
jyellick (Tue, 06 Jun 2017 05:25:51 GMT):
@LordGoodman By viewing the configuration block for the channel, you may see the organizational membership in a channel
LordGoodman (Tue, 06 Jun 2017 05:28:01 GMT):
@jyellick So there is no other way to do this? Like some API for a peer in the channel to find out the other peers?
jyellick (Tue, 06 Jun 2017 05:28:57 GMT):
@LordGoodman I'm not certain what your goal is. Generally speaking, peers will discover each other via the peer gossip protocol
LordGoodman (Tue, 06 Jun 2017 05:34:30 GMT):
@jyellick thank you very much. I'm trying to understand how peers discover each other; do you have any gossip protocol documentation?
jyellick (Tue, 06 Jun 2017 05:35:44 GMT):
https://docs.google.com/document/d/157AvKxVRqgeaCTSpN86ICa5x-XihZ67bOrNMc5xLvEU/edit is a good place to start, but #fabric-gossip would be the place to ask questions specific to the gossip networking
jyellick (Tue, 06 Jun 2017 05:36:37 GMT):
You might also try https://github.com/hyperledger/fabric/blob/master/docs/source/gossip.rst
bmkor (Tue, 06 Jun 2017 05:36:55 GMT):
Hi all. Wanna know if `configtxlator` supports adding an extra organisation to a channel?
LordGoodman (Tue, 06 Jun 2017 05:37:14 GMT):
@jyellick thank you again
jyellick (Tue, 06 Jun 2017 05:38:12 GMT):
@bmkor Yes, `configtxlator` should support arbitrary config manipulations, including adding organizations to a channel, though I'm not certain this scenario has been tested yet
jyellick (Tue, 06 Jun 2017 05:40:35 GMT):
(This scenario is on the list to be tested, but most everyone is still familiarizing themselves with the tool)
bmkor (Tue, 06 Jun 2017 05:41:30 GMT):
@jyellick Thanks. I did try some, like adding `Org3` in the `e2e_cli` example, but failed to fulfill the policy (it seems to me).
jyellick (Tue, 06 Jun 2017 05:42:05 GMT):
Ah, if you are using only the peer CLI, you will find this procedure is difficult if not impossible
jyellick (Tue, 06 Jun 2017 05:42:19 GMT):
In general, adding an organization will require a set of multiple signatures, which is not supported by the peer CLI
jyellick (Tue, 06 Jun 2017 05:42:58 GMT):
The primary target for `configtxlator` is in support of an application leveraging the SDK (which can support multiple signatures)
jyellick (Tue, 06 Jun 2017 05:43:12 GMT):
This is why the interface is REST
bmkor (Tue, 06 Jun 2017 05:43:18 GMT):
Ah, I see.
jyellick (Tue, 06 Jun 2017 05:43:37 GMT):
Of course, if you were interested in patching the peer CLI to support multiple signatures, CRs are welcome
bmkor (Tue, 06 Jun 2017 05:44:09 GMT):
Even if I modify the policy from MAJORITY to ANY, do I still need multiple signatures?
bmkor (Tue, 06 Jun 2017 05:44:22 GMT):
Let me try. [ ](https://chat.hyperledger.org/channel/fabric-consensus?msg=ffcrBnQRFyQ4v7Thg) @jyellick
jyellick (Tue, 06 Jun 2017 05:44:37 GMT):
Ah, if you change the admins policy for the application to ANY at bootstrap, then yes, I think you could get away with just one signature
jyellick (Tue, 06 Jun 2017 05:45:10 GMT):
You could also try creating a channel with just one member, then constructing a tx to add a second. Because there is only one member to begin with, it should only require 1 signature to hit the default MAJORITY rule
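To see why the one-member channel trick satisfies the default rule, here is a minimal sketch of the arithmetic as I understand it; `majorityThreshold` is a hypothetical stand-in for the MAJORITY policy evaluation, not Fabric's actual code:

```go
package main

import "fmt"

// majorityThreshold sketches the default MAJORITY rule: strictly more
// than half of the admin set must sign, i.e. n/2 + 1 with integer
// division. This is an illustrative assumption, not Fabric source.
func majorityThreshold(n int) int {
	return n/2 + 1
}

func main() {
	// With a single member, MAJORITY is satisfied by one signature,
	// which is why a one-member channel can add a second member alone.
	fmt.Println(majorityThreshold(1)) // 1
	fmt.Println(majorityThreshold(2)) // 2
	fmt.Println(majorityThreshold(3)) // 2
}
```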
bmkor (Tue, 06 Jun 2017 05:45:38 GMT):
Right.
bmkor (Tue, 06 Jun 2017 05:46:08 GMT):
It's much clear now. Thanks a lot!
jyellick (Tue, 06 Jun 2017 05:46:51 GMT):
Happy to help!
bmkor (Tue, 06 Jun 2017 05:47:52 GMT):
By the way, any light on how to do multiple signatures in general? Referring me to some sources in fabric will suffice. Thanks:wink:
jyellick (Tue, 06 Jun 2017 05:49:12 GMT):
In general, it's quite similar to a single signature. For the `ConfigUpdateEnvelope`, it is simply a repeated section of `ConfigSignature`s.
jyellick (Tue, 06 Jun 2017 05:49:49 GMT):
I do not think it would be terribly difficult to add multi-sig support to the peer
jyellick (Tue, 06 Jun 2017 05:50:05 GMT):
Rather than submitting the TX, if the signed TX were saved to disk
jyellick (Tue, 06 Jun 2017 05:50:27 GMT):
Then the command could be run multiple times with different local MSP dirs, to build up the signatures
jyellick (Tue, 06 Jun 2017 05:50:42 GMT):
(You'll notice the existing command already appends to the signature set, not replaces it)
bmkor (Tue, 06 Jun 2017 05:51:01 GMT):
Got it. Thanks again @jyellick
jyellick (Tue, 06 Jun 2017 05:51:45 GMT):
From `peer/channel/create.go`
```
signer := localsigner.NewSigner()
sigHeader, err := signer.NewSignatureHeader()
if err != nil {
	return nil, err
}
configSig := &cb.ConfigSignature{
	SignatureHeader: utils.MarshalOrPanic(sigHeader),
}
configSig.Signature, err = signer.Sign(util.ConcatenateBytes(configSig.SignatureHeader, configUpdateEnv.ConfigUpdate))
configUpdateEnv.Signatures = append(configUpdateEnv.Signatures, configSig)
```
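As a rough illustration of how that append-based pattern could accumulate signatures from several identities, here is a self-contained sketch; the `ConfigSignature` and `Signer` types below are simplified stand-ins for the real `cb.ConfigSignature` and local MSP signer, and the hash is a placeholder rather than a real MSP signature:

```go
package main

import (
	"crypto/sha256"
	"fmt"
)

// Simplified stand-in for cb.ConfigSignature.
type ConfigSignature struct {
	SignatureHeader []byte
	Signature       []byte
}

// Signer is a toy identity; Sign hashes instead of producing a real
// MSP signature over the header and config update bytes.
type Signer struct{ ID string }

func (s Signer) Sign(msg []byte) []byte {
	sum := sha256.Sum256(append([]byte(s.ID), msg...))
	return sum[:]
}

// appendSignature mirrors the CLI behavior quoted above: it appends to
// the signature set rather than replacing it, so running the signing
// step once per local MSP builds up the multi-sig envelope.
func appendSignature(sigs []*ConfigSignature, s Signer, configUpdate []byte) []*ConfigSignature {
	header := []byte(s.ID) // placeholder for a marshaled SignatureHeader
	return append(sigs, &ConfigSignature{
		SignatureHeader: header,
		Signature:       s.Sign(append(header, configUpdate...)),
	})
}

func main() {
	update := []byte("add Org3 to the channel")
	var sigs []*ConfigSignature
	// Each pass stands in for re-running the command with a different
	// local MSP directory, as described above.
	for _, s := range []Signer{{ID: "Org1Admin"}, {ID: "Org2Admin"}} {
		sigs = appendSignature(sigs, s, update)
	}
	fmt.Println(len(sigs)) // 2
}
```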
jyellick (Tue, 06 Jun 2017 05:52:56 GMT):
Sure thing @bmkor, let me know if I can do anything else to help
bh4rtp (Tue, 06 Jun 2017 11:15:51 GMT):
@kostas in the invocation of chaincode, i need to query state first and then write state. but block may be not committed yet and all latest transactions are still buffering in the orderer service node. so i suggest that before batchTimeout or batchSize event triggers, the consensus peer should cut the block and write state right now once GetState is called. in this way, GetState will return the latest values for the next loop to avoid GetState timeout error.
jyellick (Tue, 06 Jun 2017 13:07:00 GMT):
@bh4rtp You may reduce the orderer batch size and batch timeout to decrease the amount of time between submitting a transaction and having it commit in the chain. However, I would point out 2 things:
1. fabric is designed to be a distributed asynchronous system. If you wish to know when your transaction commits, you should listen for an event indicating that it has committed
2. Even if the `GetState` call could somehow trigger a block commit (this is technically infeasible for a number of reasons), you would still have the corner case where the peer does not have the state you are interested in because it is still in flight over the network.
latitiah (Tue, 06 Jun 2017 16:13:37 GMT):
This is a new error for me: I'm not doing anything with policy just yet, but I'm seeing this error on my orderer (solo) before it dies:
```[orderer/multichain] newLedgerResources -> CRIT 066 Error creating configtx manager and handlers: Error deserializing key ChainCreationPolicyNames for group /Channel/Orderer: Unexpected key ChainCreationPolicyNames
panic: Error creating configtx manager and handlers: Error deserializing key ChainCreationPolicyNames for group /Channel/Orderer: Unexpected key ChainCreationPolicyNames```
latitiah (Tue, 06 Jun 2017 16:14:08 GMT):
Any ideas what I may be doing wrong? I'll continue debugging it, but thought I'd post in case someone knew off the top of their head
jeffgarratt (Tue, 06 Jun 2017 16:15:14 GMT):
@latitiah that is an old configuration Name... Are you using an older genesis block?
jeffgarratt (Tue, 06 Jun 2017 16:16:15 GMT):
i.e., that configuration key "ChainCreationPolicyNames" is deprecated, as the new consortium mechanism is used
jyellick (Tue, 06 Jun 2017 17:18:30 GMT):
@latitiah Yes, this config name was removed quite a while ago. In general, it's a good idea to do a `make dist-clean` and remove any old ledger artifacts via `rm -Rf /var/hyperledger/*` before testing.
latitiah (Tue, 06 Jun 2017 18:05:16 GMT):
Thx!
bh4rtp (Tue, 06 Jun 2017 23:59:14 GMT):
@jyellick thanks. is there an example for listening to the block committed event?
bh4rtp (Wed, 07 Jun 2017 00:17:16 GMT):
i changed to use kafka ordering. the `batchSize` and `batchTimeout` are set as default, i.e. 5 and 2s. in the cli command script, sleep 5 is done after register 5 entities (5 invocation finished). but `GetState` always return nil, nil even though adding sleep 10 before querying.
jyellick (Wed, 07 Jun 2017 03:09:23 GMT):
@bh4rtp This is probably better asked in #fabric-peer-endorser-committer but there is a block listener example here: https://github.com/hyperledger/fabric/tree/master/examples/events/block-listener
bh4rtp (Wed, 07 Jun 2017 04:16:05 GMT):
@jyellick is it possible to know how many transactions are bufferred in the ordering service node?
jyellick (Wed, 07 Jun 2017 04:17:01 GMT):
@bh4rtp Not generally, no
bh4rtp (Wed, 07 Jun 2017 04:19:41 GMT):
@jyellick the invoke transactions return `OK` in cli. does this mean the transactions are correctly endorsed?
jyellick (Wed, 07 Jun 2017 04:20:44 GMT):
@bh4rtp, no, the orderer does not check endorsements, only that the submitter (the outermost signature on the Envelope message) is authorized to transact on the channel.
jyellick (Wed, 07 Jun 2017 04:21:14 GMT):
The `OK` status indicates that the transaction has passed the set of pre-filter checks and that the transaction has been accepted for ordering
jyellick (Wed, 07 Jun 2017 04:21:29 GMT):
It does not make any promises about the transaction being valid after ordering, or that the transaction will affect the ledger state
bh4rtp (Wed, 07 Jun 2017 04:24:23 GMT):
@jyellick ok. it is much more complicated than i thought. now `PostState` are called without any errors and ordering timeout is triggered, but `GetState` returns `nil, nil`. how to debug this problem?
jyellick (Wed, 07 Jun 2017 04:25:46 GMT):
@bh4rtp First, I would verify that your transaction has committed into a block, I would also verify that the transaction was flagged as valid in that block
jyellick (Wed, 07 Jun 2017 04:25:55 GMT):
My suspicion is that you will find one of these is not true
bh4rtp (Wed, 07 Jun 2017 04:29:39 GMT):
yes. i think so. but how to verify? the procedure is quite long. can i diagnose the problem through the logging information?
jyellick (Wed, 07 Jun 2017 04:32:23 GMT):
@bh4rtp I am most familiar with the ordering logs, and can show you log statements which would confirm your transaction has been committed to a block at the orderer, I expect at debug, you should see similar messages in the peer, but I do not know them off the top of my head. Let me look
bh4rtp (Wed, 07 Jun 2017 04:37:20 GMT):
@jyellick thanks. here is the logging slice from orderer after invoke registerEntity.
```2017-06-07 13:58:38.531 CST [fsblkstorage] indexBlock -> DEBU d2b Indexing block [blockNum=5, blockHash=[]byte{0x78, 0xdf, 0xce, 0x67, 0x3b, 0xe2, 0x74, 0xec, 0x23, 0x29, 0xdf, 0x3b, 0xe3, 0x75, 0x96, 0x9b, 0x92, 0x21, 0xc3, 0x9e, 0xa5, 0x1f, 0x3d, 0x43, 0x7f, 0x22, 0xc5, 0x29, 0xee, 0xaf, 0xa5, 0x9d} txOffsets=
txId=eecc600001768d019c489b9679ba58689f9d497d07b9e7d321c06536f8d1ff01 locPointer=offset=70, bytesLength=4989
]
2017-06-07 13:58:38.532 CST [fsblkstorage] updateCheckpoint -> DEBU d2c Broadcasting about update checkpointInfo: latestFileChunkSuffixNum=[0], latestFileChunksize=[50774], isChainEmpty=[false], lastBlockNumber=[5]
2017-06-07 13:58:38.532 CST [orderer/multichain] WriteBlock -> DEBU d2d [channel: eprich1] Wrote block 5
2017-06-07 13:58:38.532 CST [fsblkstorage] retrieveBlockByNumber -> DEBU d2e retrieveBlockByNumber() - blockNum = [5]
2017-06-07 13:58:38.532 CST [fsblkstorage] newBlockfileStream -> DEBU d2f newBlockfileStream(): filePath=[/var/hyperledger/production/orderer/chains/eprich1/blockfile_000000], startOffset=[43785]
2017-06-07 13:58:38.532 CST [fsblkstorage] nextBlockBytesAndPlacementInfo -> DEBU d30 Remaining bytes=[6989], Going to peek [8] bytes
2017-06-07 13:58:38.532 CST [fsblkstorage] nextBlockBytesAndPlacementInfo -> DEBU d31 Returning blockbytes - length=[6987], placementInfo={fileNum=[0], startOffset=[43785], bytesOffset=[43787]}
2017-06-07 13:58:38.532 CST [orderer/common/deliver] Handle -> DEBU d32 Delivering block for (0xc4208d4740) channel: eprich1
2017-06-07 13:58:38.540 CST [fsblkstorage] retrieveBlockByNumber -> DEBU d33 retrieveBlockByNumber() - blockNum = [5]
2017-06-07 13:58:38.540 CST [fsblkstorage] newBlockfileStream -> DEBU d34 newBlockfileStream(): filePath=[/var/hyperledger/production/orderer/chains/eprich1/blockfile_000000], startOffset=[43785]
2017-06-07 13:58:38.540 CST [fsblkstorage] nextBlockBytesAndPlacementInfo -> DEBU d35 Remaining bytes=[6989], Going to peek [8] bytes
2017-06-07 13:58:38.540 CST [fsblkstorage] nextBlockBytesAndPlacementInfo -> DEBU d36 Returning blockbytes - length=[6987], placementInfo={fileNum=[0], startOffset=[43785], bytesOffset=[43787]}
2017-06-07 13:58:38.540 CST [orderer/common/deliver] Handle -> DEBU d37 Delivering block for (0xc420770600) channel: eprich1```
bh4rtp (Wed, 07 Jun 2017 06:11:31 GMT):
And the next slice which has a warning.
```2017-06-07 14:08:44.421 CST [orderer/multichain] WriteBlock -> DEBU f13 [channel: eprich1] Wrote block 7
2017-06-07 14:08:44.421 CST [fsblkstorage] retrieveBlockByNumber -> DEBU f14 retrieveBlockByNumber() - blockNum = [7]
2017-06-07 14:08:44.421 CST [fsblkstorage] newBlockfileStream -> DEBU f15 newBlockfileStream(): filePath=[/var/hyperledger/production/orderer/chains/eprich1/blockfile_000000], startOffset=[83101]
2017-06-07 14:08:44.421 CST [fsblkstorage] nextBlockBytesAndPlacementInfo -> DEBU f16 Remaining bytes=[18626], Going to peek [8] bytes
2017-06-07 14:08:44.421 CST [fsblkstorage] nextBlockBytesAndPlacementInfo -> DEBU f17 Returning blockbytes - length=[18623], placementInfo={fileNum=[0], startOffset=[83101], bytesOffset=[83104]}
2017-06-07 14:08:44.422 CST [orderer/common/deliver] Handle -> DEBU f18 Delivering block for (0xc4208d4740) channel: eprich1
2017-06-07 14:08:44.437 CST [fsblkstorage] retrieveBlockByNumber -> DEBU f19 retrieveBlockByNumber() - blockNum = [7]
2017-06-07 14:08:44.437 CST [fsblkstorage] newBlockfileStream -> DEBU f1a newBlockfileStream(): filePath=[/var/hyperledger/production/orderer/chains/eprich1/blockfile_000000], startOffset=[83101]
2017-06-07 14:08:44.447 CST [fsblkstorage] nextBlockBytesAndPlacementInfo -> DEBU f1b Remaining bytes=[18626], Going to peek [8] bytes
2017-06-07 14:08:44.450 CST [fsblkstorage] nextBlockBytesAndPlacementInfo -> DEBU f1c Returning blockbytes - length=[18623], placementInfo={fileNum=[0], startOffset=[83101], bytesOffset=[83104]}
2017-06-07 14:08:44.450 CST [orderer/common/deliver] Handle -> DEBU f1d Delivering block for (0xc420770600) channel: eprich1
2017-06-07 14:08:47.877 CST [orderer/main] Broadcast -> DEBU f1e Starting new Broadcast handler
2017-06-07 14:08:47.877 CST [orderer/common/broadcast] Handle -> DEBU f1f Starting new broadcast loop
2017-06-07 14:08:47.890 CST [orderer/common/broadcast] Handle -> WARN f20 Error reading from stream: rpc error: code = Canceled desc = context canceled
```
bh4rtp (Wed, 07 Jun 2017 07:57:26 GMT):
@jyellick sorry, I found the reason: the hash code taken as the key had changed.
jyellick (Wed, 07 Jun 2017 14:10:16 GMT):
@dave.enyeart @nickgaski Moving from #fabric-release: There are two examples in the document, and two scripts in the example dir which essentially run those two examples. They leave the output artifacts in the directory where they are run
s.narayanan (Wed, 07 Jun 2017 15:06:54 GMT):
A few questions on the failover scenario for the orderer. If an orderer node fails, the broadcast of the transaction proposal response from client to orderer may or may not have succeeded. When the broadcast message is retried, if the orderer had successfully processed the previous message, how is the duplicate message handled? How is this scenario handled in the context of Kafka or Solo? In the context of Kafka, if the orderer wrote the message to Kafka but failed before the offset was written, might the orderer create duplicate messages in Kafka? How is this scenario handled in the validation stage, since only one of the messages should be processed?
jyellick (Wed, 07 Jun 2017 15:20:23 GMT):
@s.narayanan In solo, there is no fault tolerance, so a failure scenario does not make sense. In Kafka, a `Broadcast` replies only after the message has been successfully delivered to the Kafka cluster and acknowledged.
jyellick (Wed, 07 Jun 2017 15:21:00 GMT):
Ultimately, if a duplicate transaction is committed for whatever reason (client error, etc.), the MVCC data will make sure that only the first one applies.
jeangui (Thu, 08 Jun 2017 06:42:07 GMT):
Has joined the channel.
bmkor (Thu, 08 Jun 2017 06:42:22 GMT):
@jyellick Regarding `configtxlator`, I changed it to be able to sign the `ConfigUpdate`. Now I want to add a new peer Organisation `Org3` in the `e2e_cli` example. After preparing the `ConfigUpdate`, I got it signed by both `Org1MSP` and `Org2MSP`, forming a `ConfigUpdateEnvelope`, which was translated by `protolator` to `json`. It then became a `Payload` by attaching the prefix `'{"payload":{"header":{"channel_header":{"channel_id":"mychannel", "type":2}},"data":'`; after that, this `Payload` went through `protolator` to become an `Envelope` `proto` for channel config update via `cli`. But it still fails the `MAJORITY` policy validation (satisfies one only, expected to satisfy both).
bmkor (Thu, 08 Jun 2017 06:45:10 GMT):
My thought is: "Do we additionally need this `Envelope` to be signed by both `Org1MSP` and `Org2MSP` to satisfy the `MAJORITY` policy in this case?"
bmkor (Thu, 08 Jun 2017 07:55:13 GMT):
Further to my question above, I can see that `peer channel update` signs the `Envelope`, which is then broadcast to the `orderer`. The signature on this `Envelope` would consist (solely?) of the `msp` defined in `cli`; as a result, I always get only one of the two parties (either `Org1MSP` or `Org2MSP`, but not both) signing this `Envelope`. When `lscc` validates the `MAJORITY` policy with the signed data, I guess the signed data here would be that of the `Envelope` (which is signed by one `Org`), instead of the `ConfigUpdateEnvelope` (which was signed by both `Org1MSP` and `Org2MSP`). Hence we fail to satisfy the `MAJORITY` policy. Not sure if I understand correctly. Could you offer some help, @jyellick? Thanks a lot.
bmkor (Thu, 08 Jun 2017 08:05:07 GMT):
DEBUG log shown in the `orderer`:
bmkor (Thu, 08 Jun 2017 08:09:13 GMT):
```
2017-06-08 06:34:52.157 UTC [orderer/main] Deliver -> DEBU da1 Starting new Deliver handler
2017-06-08 06:34:52.157 UTC [orderer/common/deliver] Handle -> DEBU da2 Starting new deliver loop
2017-06-08 06:34:52.157 UTC [orderer/common/deliver] Handle -> DEBU da3 Attempting to read seek info message
2017-06-08 06:34:52.162 UTC [orderer/main] Broadcast -> DEBU da4 Starting new Broadcast handler
2017-06-08 06:34:52.162 UTC [orderer/common/broadcast] Handle -> DEBU da5 Starting new broadcast loop
2017-06-08 06:34:52.162 UTC [orderer/common/broadcast] Handle -> DEBU da6 Preprocessing CONFIG_UPDATE
2017-06-08 06:34:52.162 UTC [orderer/configupdate] Process -> DEBU da7 Processing channel reconfiguration request for channel mychannel
...Skipped...
2017-06-08 06:34:52.164 UTC [policies] GetPolicy -> DEBU dbc Returning policy Admins for evaluation
2017-06-08 06:34:52.164 UTC [cauthdsl] func1 -> DEBU dbd Gate evaluation starts: (&{n:1 policies:
jyellick (Thu, 08 Jun 2017 13:39:12 GMT):
@bmkor With respect to the `Envelope` signature, it's not possible to encode more than one signature here, so it is only the submitter's signature.
jyellick (Thu, 08 Jun 2017 13:39:57 GMT):
According to the log snippet you sent me, it looks like it successfully evaluates the `Org1.Admin` signature, and the `Org2.Admin` cert is encoded, but, the signature for that cert is not being validated.
jyellick (Thu, 08 Jun 2017 13:40:34 GMT):
I would double check to make sure that the signature there was truly generated by the `Org2.Admin` private key
bmkor (Thu, 08 Jun 2017 13:45:22 GMT):
Thanks! Silly me. The envelope had a `signature` field without the "s", and it was not an array. [ ](https://chat.hyperledger.org/channel/fabric-consensus?msg=qubBtGbuFZpdQmxa4) @jyellick
bmkor (Thu, 08 Jun 2017 13:45:45 GMT):
I'll take a look as well.
udaykhambadkone (Thu, 08 Jun 2017 15:18:18 GMT):
Has joined the channel.
udaykhambadkone (Thu, 08 Jun 2017 15:22:27 GMT):
I am new to blockchains and researching different distributed ledger technologies for a project. I cannot find any documentation/resource on exactly which consensus algorithm fabric v1.0 supports. I have seen references that it has a plug-and-play architecture and some references to Kafka. Does it currently support any type of BFT? Thanks!
s.narayanan (Thu, 08 Jun 2017 15:25:09 GMT):
[ ](https://chat.hyperledger.org/channel/fabric-consensus?msg=yCarvFjFE7AJpk6zX) @jyellick follow up question on Kafka fault tolerance. If leader broker goes down, then new leader is elected. Is this handled transparently through the orderer node (i.e. the Orderer will connect to the new leader) or does the client receive an error (e.g. when broadcasting messages to orderer) and needs to retry the request?
jyellick (Thu, 08 Jun 2017 15:28:21 GMT):
> @jyellick follow up question on Kafka fault tolerance. If leader broker goes down, then new leader is elected. Is this handled transparently through the orderer node (i.e. the Orderer will connect to the new leader) or does the client receive an error (e.g. when broadcasting messages to orderer) and needs to retry the request?
The client may receive a `SERVICE_UNAVAILABLE` message, if the Kafka service is unreachable at the moment. This is because the ordering node doesn't know whether the failure is transient or not, and cannot promise to deliver the message (what if the orderer crashed with requests in the buffer?). In the case that the client receives this error, the client should resubmit to a different orderer (or wait some time and try again).
Of course, if a client is connected and the Kafka broker goes down, the orderer will automatically recover by connecting to another broker. This should be fairly quick, and, assuming the client does not happen to send a message during this window, it will be transparent to the client.
jyellick (Thu, 08 Jun 2017 15:28:29 GMT):
@s.narayanan ^
jeffgarratt (Thu, 08 Jun 2017 15:28:35 GMT):
@udaykhambadkone not at the moment. SBFT is planned for the future.
jyellick (Thu, 08 Jun 2017 15:32:21 GMT):
(SBFT is a variant of PBFT with some different assumptions on the network links, but fundamentally a very similar protocol)
kostas (Thu, 08 Jun 2017 15:35:01 GMT):
@s.narayanan: On a low-level, note that if leader election is not yet completed (or the metadata on the brokers hasn't been updated to reflect the new state), the orderer will retry `Producer.RetryMax` times to post the client's message (doing `Metadata.RetryMax` attempts to get the right info every time). Only if this fails, will it return a `SERVICE_UNAVAILABLE` message.
scottz (Thu, 08 Jun 2017 15:57:59 GMT):
[ ](https://chat.hyperledger.org/channel/fabric-consensus?msg=GxNznzNC9Ymq24Qez) @kostas @jyellick Per https://jira.hyperledger.org/browse/FAB-2575 , we see a 503 when kafkabrokers are restarted, and then as we continue to send transactions to same orderer, we would expect to see 503 err codes for all of them, right? And then when the KBs recover, the transactions should succeed, right?
kostas (Thu, 08 Jun 2017 16:03:39 GMT):
@scottz: I would say that's right.
s.narayanan (Thu, 08 Jun 2017 16:57:37 GMT):
@kostas thanks. Is the Producer.RetryMax same as the Kafka Retry Period in orderer.yaml?
kostas (Thu, 08 Jun 2017 17:00:16 GMT):
@s.narayanan: No, have a look at this revised `orderer.yaml` which should be getting merged soon: https://gerrit.hyperledger.org/r/#/c/10257/7/sampleconfig/orderer.yaml
kostas (Thu, 08 Jun 2017 17:00:30 GMT):
In short these are settings that were always there, we're just now exposing them to the user.
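For reference, the Kafka retry settings exposed in that revised `orderer.yaml` look roughly like the fragment below. The values shown are illustrative defaults; check the merged sampleconfig for the authoritative keys and values.

```yaml
Kafka:
  Retry:
    # How long the orderer keeps retrying its overall connection to Kafka
    ShortInterval: 5s
    ShortTotal: 10m
    LongInterval: 5m
    LongTotal: 12h
    Metadata:
      # Refreshing cluster metadata, e.g. after a leader election
      RetryBackoff: 250ms
      RetryMax: 3
    Producer:
      # How many times a post to the cluster is retried before the
      # client is sent SERVICE_UNAVAILABLE
      RetryBackoff: 100ms
      RetryMax: 3
```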
udaykhambadkone (Thu, 08 Jun 2017 17:11:13 GMT):
@jeffgarratt @jyellick thanks for your quick responses. So which consensus algorithm does Fabric v1.0 support now?
jyellick (Thu, 08 Jun 2017 17:13:28 GMT):
@udaykhambadkone The Kafka consensus type uses a cluster of Kafka brokers to achieve crash fault tolerance. There is also a reference "solo" consensus implementation which simply satisfies the interfaces as a testing example, but it has no fault tolerance.
udaykhambadkone (Thu, 08 Jun 2017 17:17:41 GMT):
thanks! @jyellick that helps :)
s.narayanan (Thu, 08 Jun 2017 17:23:17 GMT):
[ ](https://chat.hyperledger.org/channel/fabric-consensus?msg=GxNznzNC9Ymq24Qez) @kostas follow up to this, I have a question on how this affects delivery of blocks to peers. Do peers retry as well if they receive a SERVICE_UNAVAILABLE message? Is there a way to configure the number of retry attempts and the retry interval? What happens from the peer's perspective if retry attempts are exhausted?
kostas (Thu, 08 Jun 2017 17:24:50 GMT):
@s.narayanan: https://gerrit.hyperledger.org/r/#/c/10317/
udaykhambadkone (Thu, 08 Jun 2017 17:27:11 GMT):
@jyellick sorry to bother, I have one more question: I read in one of the hyperledger fabric documents that it has a pbft plugin and newPbftCore() function for consensus mechanism http://hyperledger-fabric.readthedocs.io/en/stable/protocol-spec/#5-byzantine-consensus_1 Not sure if this was for older version of Fabric.
jyellick (Thu, 08 Jun 2017 17:28:33 GMT):
@udaykhambadkone No bother, happy to help. Yes, the v0.5/v0.6 releases were based on PBFT. For v1, we decided to emphasize scale over BFT, and settled on Kafka. We plan to re-introduce BFT via SBFT (a PBFT like protocol) after v1. The SBFT code was even in the tree until a few weeks ago, but we did not have time to polish it for the v1 release.
s.narayanan (Thu, 08 Jun 2017 17:32:27 GMT):
@kostas thanks. If I understand this correctly, this implies the peers would disconnect from orderer and failover to another orderer. However if it is a kafka issue then second orderer could return the same SERVICE_UNAVAILABLE error? Also on related note, from failover perspective are orderer endpoints (assume we have 3 orderers) configured in genesis block? Can these end points be configured in a load balancer and peers essentially use load balancer to dynamically failover between orderer nodes?
kostas (Thu, 08 Jun 2017 17:33:23 GMT):
> However if it is a kafka issue then second orderer could return the same SERVICE_UNAVAILABLE error?
Correct.
kostas (Thu, 08 Jun 2017 17:33:51 GMT):
> Can these end points be configured in a load balancer and peers essentially use load balancer to dynamically failover between orderer nodes?
I haven't given any thought into that. Perhaps someone else has.
udaykhambadkone (Thu, 08 Jun 2017 17:37:34 GMT):
@jyellick thanks for the clarification
bmkor (Thu, 08 Jun 2017 18:13:02 GMT):
Mind elaborating more on how to check the signature was truly generated by the right private key? Still scratching my head... [ ](https://chat.hyperledger.org/channel/fabric-consensus?msg=qubBtGbuFZpdQmxa4) @jyellick
jyellick (Thu, 08 Jun 2017 18:15:59 GMT):
@bmkor Are you trying to sign twice in a single process for the peer cli? There was a very unfortunate decision made to make the MSP stuff a singleton, so if you try to load multiple private keys, you may find that you're actually just signing twice with the same one
bmkor (Thu, 08 Jun 2017 18:18:11 GMT):
Hmmm, don't think so. I changed the `configtxlator` so that it can sign the `ConfigUpdate` and output the `ConfigUpdateEnvelope`, then I used the `configtxlator` again to sign the `ConfigUpdateEnvelope`, then following your doc to produce `config_update_as_envelope.proto` and put it in the cli for channel update.
jyellick (Thu, 08 Jun 2017 18:43:12 GMT):
Oh, how does `configtxlator` add signatures?
jyellick (Thu, 08 Jun 2017 18:43:59 GMT):
I think this is the wrong place to add signatures; I would think the peer CLI. The reason being, `configtxlator` is a zero-auth REST service, so there is no way to enforce whose request is being signed
jyellick (Thu, 08 Jun 2017 18:46:10 GMT):
^ @bmkor
scottz (Thu, 08 Jun 2017 19:46:44 GMT):
[ ](https://chat.hyperledger.org/channel/fabric-consensus?msg=Xih7vaZ646NZnfhKo) @kostas @s.narayanan We have not done it, but we were thinking that is exactly how people would deploy it; they would use a single IP address of the loadbalancer, which could distribute among the list of 3 IP addresses for the orderers
scottz (Thu, 08 Jun 2017 19:53:24 GMT):
[ ](https://chat.hyperledger.org/channel/fabric-consensus?msg=KJ9eTNkxXKCRhcDCN) @scottz Clarification: Apparently, broadcast client sees only one 503 "Service Unavailable" from orderer when kafkabrokers are incommunicado. Subsequent broadcast msgs result in "EOF" response from networking layer because the grpc connections got torn down upon 503. Unless, of course, the broadcast client reacts to the 503 by recreating a new grpc connection (or recreates grpc conn with every broadcast msg anyways).
jyellick (Thu, 08 Jun 2017 20:21:27 GMT):
@simsc @nickgaski @dave.enyeart especially, but also anyone interested in the `configtxlator` tool.
Here is a short playback on using `configtxlator` to add a new policy to the ordering system channel at bootstrap.
https://drive.google.com/file/d/0B_vi2yshsJpBb2M2b3VoV0NjVVU/view
I will try to put together another one on performing a config update in the future.
simsc (Thu, 08 Jun 2017 20:21:27 GMT):
Has joined the channel.
jyellick (Thu, 08 Jun 2017 20:22:14 GMT):
@Nishi @varadatibm ^
bmkor (Thu, 08 Jun 2017 23:41:17 GMT):
I just add a couple of signing functions to `configtxlator` which require user to input `mspid` and `mspdir` for creating a `localmsp` to sign. [ ](https://chat.hyperledger.org/channel/fabric-consensus?msg=BiGWRKjJwHorN7Amd) @jyellick
bmkor (Thu, 08 Jun 2017 23:47:56 GMT):
I understand what you meant and agree with what you said. I need to do this for our testing purpose. :grin: [ ](https://chat.hyperledger.org/channel/fabric-consensus?msg=E6wwuzTov8nmWYPpz) @jyellick
jyellick (Fri, 09 Jun 2017 01:48:33 GMT):
@bmkor :
> I just add a couple of signing functions to `configtxlator` which require user to input `mspid` and `mspdir` for creating a `localmsp` to sign.
How are you loading multiple MSPs in one process? Or do you run the tool multiple times, once for each signature?
bmkor (Fri, 09 Jun 2017 02:00:29 GMT):
Multiple times. [ ](https://chat.hyperledger.org/channel/fabric-consensus?msg=Nexhd5NjN2pYMSWRm) @jyellick
bmkor (Fri, 09 Jun 2017 02:02:50 GMT):
One time for one msp signing.
jyellick (Fri, 09 Jun 2017 02:04:28 GMT):
Could you re-run the test, but with the signature order reversed and send the same logs?
jyellick (Fri, 09 Jun 2017 02:05:10 GMT):
(According to the first set of logs, MSP1 admin signed with a valid signature, but MSP2 admin claimed to sign without a valid signature. I am curious if we will see the same result, or something reversed indicating a bug.)
bmkor (Fri, 09 Jun 2017 02:14:41 GMT):
:ok_hand:
bh4rtp (Fri, 09 Jun 2017 04:57:01 GMT):
@here i have my fabric network running without any errors. now i am reviewing the logging information of the orderer, and i notice that there are 9 blocks created from the start. the number of transactions packaged in these blocks is as follows.
[block 0] 1 tx, normal hash
[block 0] 1 tx, null hash
[block 1] 1 tx, null hash
[block 1] 1 tx, null hash
[block 2] 1 tx, null hash
[block 3] 1 tx, normal hash
[block 4] 8 tx, normal hash
[block 5] 7 tx, normal hash
[block 6] 9 tx, normal hash
[block 7] 10 tx, normal hash
[block 8] 5 tx, normal hash
i am using e2e_cli as the network setup template: 4 couchdb, 4 peers, 1 orderer, 1 cli. all 4 peers joined the channel. install and instantiate chaincode on peer0 and peer2. then call 7 invocations, sleep 3, call 7 invocations, sleep 3, call 8 invocations, sleep 3, call 9 invocations, sleep 5, then call 15 invocations. the ordering batchSize = 10, batchTimeout = 2s. would you please explain the above logging information?
bh4rtp (Fri, 09 Jun 2017 05:11:01 GMT):
why do blocks 0, 1, and 2 have a null-string txId hash? and why do blocks 0 and 1 print twice?
bh4rtp (Fri, 09 Jun 2017 05:12:18 GMT):
the null string txId hash like this:
```2017-06-09 11:12:18.218 CST [fsblkstorage] indexBlock -> DEBU 830 Indexing block [blockNum=1, blockHash=[]byte{0x1f, 0xe5, 0x16, 0x11, 0xd8, 0xfa, 0x96, 0x9d, 0x40, 0xe6, 0x3e, 0x24, 0xff, 0xd, 0xd7, 0x52, 0xee, 0x93, 0x85, 0x71, 0x11, 0x7f, 0x20, 0xd3, 0x6c, 0xa1, 0xf1, 0x8e, 0x5f, 0x7b, 0x8f, 0x3} txOffsets=
txId= locPointer=offset=70, bytesLength=10645
]```
jyellick (Fri, 09 Jun 2017 06:10:51 GMT):
@bh4rtp The txid is only required to be set for endorser type transactions. The blocks you are seeing with no txid are configuration blocks
bmkor (Fri, 09 Jun 2017 06:50:30 GMT):
Should be late night at your time. I'll re-run the test after settling some urgent stuff.
Calvin_Heo (Fri, 09 Jun 2017 09:00:05 GMT):
Has joined the channel.
vukolic (Fri, 09 Jun 2017 09:55:13 GMT):
@jyellick @kostas why do we sign block 2x in orderer/multichain/chainsupport.go WriteBlock() ?
vukolic (Fri, 09 Jun 2017 09:55:24 GMT):
there is
vukolic (Fri, 09 Jun 2017 09:55:24 GMT):
cs.addBlockSignature(block)
cs.addLastConfigSignature(block)
yacovm (Fri, 09 Jun 2017 13:22:58 GMT):
I think because the signature is in the Metadata, and the last config block is outside of the block header / body - also in the metadata
jyellick (Fri, 09 Jun 2017 13:40:41 GMT):
@vukolic In retrospect, I wish we had considered consolidating these signatures. The thought at the time was that for something BFT, we would need f+1 signatures over the block, but only one orderer signature over the last config field. In the solo/kafka case, this reduces to 1 signature for each
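The two metadata slots being signed can be sketched as follows. The index constants mirror `common.BlockMetadataIndex` in Fabric's protos (SIGNATURES = 0, LAST_CONFIG = 1), but the `block`, `sign`, and `writeBlock` types here are simplified stand-ins for illustration, not the real `multichain.ChainSupport` code:

```go
package main

import "fmt"

// Metadata slot indices, mirroring common.BlockMetadataIndex.
const (
	SignaturesIndex = 0 // f+1 orderer signatures over the block (BFT-ready)
	LastConfigIndex = 1 // single orderer signature over the last-config block number
)

// Block metadata is an array of byte slices; WriteBlock fills two
// separate slots, which is why the orderer signs twice.
type block struct {
	metadata [2][]byte
}

func sign(payload string) []byte { return []byte("sig(" + payload + ")") }

func writeBlock(b *block, lastConfigSeq uint64) {
	b.metadata[SignaturesIndex] = sign("block header")
	b.metadata[LastConfigIndex] = sign(fmt.Sprintf("last config = %d", lastConfigSeq))
}

func main() {
	var b block
	writeBlock(&b, 1)
	fmt.Println(string(b.metadata[SignaturesIndex])) // sig(block header)
	fmt.Println(string(b.metadata[LastConfigIndex])) // sig(last config = 1)
}
```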
nickgaski (Fri, 09 Jun 2017 13:47:43 GMT):
@jyellick - for the config update to the orderer genesis. Do you have to stop the orderer and re-bootstrap. Or is there a way to dynamically update?
jyellick (Fri, 09 Jun 2017 13:49:39 GMT):
@nickgaski The demo video I sent was for bootstrap; you may also reconfigure the ordering system channel, or any channel, dynamically via a config update. I started with bootstrapping because it's an easier scenario. You can get some idea looking at the second example in https://github.com/hyperledger/fabric/blob/master/examples/configtxupdate/README.md
nickgaski (Fri, 09 Jun 2017 13:50:45 GMT):
so basically you just reconfig your genesis block and then send an update call to the ordering service? and you can do this on a system or channel level?
jyellick (Fri, 09 Jun 2017 16:25:44 GMT):
@nickgaski The video was for the bootstrapping scenario. Update has a couple more steps. Roughly though:
1. Retrieve the current config
2. Translate the config to human readable via tool
3. Modify the human readable config
4. Compute a config update based on the two configs via tool
5. Submit config update with signatures
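Step 4 above (computing a config update from the two configs) can be illustrated with a toy diff. Real channel configs are nested `common.Config` protos and the computation is done by the `configtxlator` tool; `computeUpdate` here is a hypothetical flat-map stand-in that only shows the principle — the update carries just what changed:

```go
package main

import "fmt"

// computeUpdate returns only the keys whose values differ between the
// current and the modified config. The real compute_update operation
// works over a nested proto tree, but the idea is the same.
func computeUpdate(current, modified map[string]string) map[string]string {
	update := map[string]string{}
	for k, v := range modified {
		if current[k] != v {
			update[k] = v
		}
	}
	return update
}

func main() {
	current := map[string]string{"BatchTimeout": "2s", "BatchSize": "10"}
	modified := map[string]string{"BatchTimeout": "1s", "BatchSize": "10"}
	fmt.Println(computeUpdate(current, modified)) // map[BatchTimeout:1s]
}
```

Because the update is a minimal delta, the signatures collected in step 5 cover exactly the change being proposed, not the entire config.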
jyellick (Fri, 09 Jun 2017 16:26:22 GMT):
The big difference to remember is that at bootstrapping, you provide a block. For channel creation and reconfiguration, you provide a config update transaction. These are different data structures, and the words should not be used interchangeably
Asara (Fri, 09 Jun 2017 20:06:54 GMT):
Hey guys, I'm seeing some errors in my orderer logs and was curious as to what's going on.
Asara (Fri, 09 Jun 2017 20:07:00 GMT):
```cauthdsl] func2 -> ERRO 1a7 Principal deserialization failed: (MSP org1MSP is unknown) for identity```
jrosmith (Fri, 09 Jun 2017 20:10:14 GMT):
Has joined the channel.
jyellick (Fri, 09 Jun 2017 20:44:22 GMT):
@Asara this most likely indicates that you have defined a policy referencing an org, which does not have an MSP defined in the channel.
jyellick (Fri, 09 Jun 2017 20:44:49 GMT):
https://jira.hyperledger.org/browse/FAB-3831 is actually there to help you detect this very scenario
jyellick (Fri, 09 Jun 2017 20:45:04 GMT):
How have you configured your channel?
latitiah (Fri, 09 Jun 2017 20:52:20 GMT):
What can cause a channel to halt? I'm seeing this warning in my kafka-based orderer log:
```
2017-06-09 20:30:00.017 UTC [orderer/kafka] Enqueue -> WARN 4c1 [channel: behavesystest] Will not enqueue cause the chain has been halted
2017-06-09 20:30:00.017 UTC [orderer/common/broadcast] Handle -> INFO 4c2 Consenter instructed us to shut down
2017-06-09 20:30:00.018 UTC [orderer/main] func1 -> DEBU 4c3 Closing Broadcast stream
2017-06-09 20:30:00.019 UTC [orderer/common/deliver] Handle -> WARN 4c4 Error reading from stream: rpc error: code = Canceled desc = context canceled
2017-06-09 20:30:00.019 UTC [orderer/main] func1 -> DEBU 4c5 Closing Deliver stream```
kostas (Fri, 09 Jun 2017 20:56:09 GMT):
What do the orderer logs say?
kostas (Fri, 09 Jun 2017 20:59:58 GMT):
In general, what causes a channel to halt is a request for a channel reaching the orderer _before_ the orderer has established a connection to the Kafka cluster for the partition that corresponds to the channel. In the changesets that we're merging this weekend the behavior as far as the client is concerned remains the same, although the response code, the log message, and the logic that's internal to the orderer change.
kostas (Fri, 09 Jun 2017 21:00:16 GMT):
When the changes get merged, you should expect to receive a SERVICE_UNAVAILABLE and your connection will be dropped.
kostas (Fri, 09 Jun 2017 21:00:43 GMT):
Meanwhile the orderer will keep retrying to connect to the Kafka cluster for a user-configurable amount of time.
kostas (Fri, 09 Jun 2017 21:00:54 GMT):
When it establishes that connection, then your request will go through.
Asara (Fri, 09 Jun 2017 21:22:36 GMT):
@jyellick I am attempting to set up the channel. I used the generateArtifacts script to create everything (leaving everything default).
Asara (Fri, 09 Jun 2017 21:23:26 GMT):
Hm... actually generateArtifacts, does it create MSP related certs as well?
jyellick (Fri, 09 Jun 2017 21:30:31 GMT):
`generateArtifacts.sh` uses the `cryptogen` tool to create the certs
jyellick (Fri, 09 Jun 2017 21:31:31 GMT):
@Asara ^
latitiah (Fri, 09 Jun 2017 22:32:28 GMT):
Thanks @kostas!
kostas (Fri, 09 Jun 2017 22:32:53 GMT):
Sure thing, if there are more questions, please let me know.
lenin.mehedy (Sat, 10 Jun 2017 02:22:10 GMT):
Any idea on the following errors/warnings?
```
peer0.org1.example.com | 2017-06-10 02:20:17.163 UTC [eventhub_producer] Chat -> ERRO c8b error during Chat, stopping handler: stream error: code = 1 desc = "context canceled"
peer0.org1.example.com | 2017-06-10 02:20:17.164 UTC [eventhub_producer] deRegisterHandler -> DEBU c8c deregistering event type: BLOCK
peer1.org1.example.com | 2017-06-10 02:20:17.164 UTC [eventhub_producer] Chat -> ERRO e8d error during Chat, stopping handler: stream error: code = 1 desc = "context canceled"
peer1.org1.example.com | 2017-06-10 02:20:17.164 UTC [eventhub_producer] deRegisterHandler -> DEBU e8e deregistering event type: BLOCK
orderer.example.com | 2017-06-10 02:20:17.165 UTC [orderer/common/deliver] Handle -> WARN a30 Error reading from stream: stream error: code = 1 desc = "context canceled"
orderer.example.com | 2017-06-10 02:20:17.165 UTC [orderer/common/deliver] Handle -> WARN a31 Error reading from stream: stream error: code = 1 desc = "context canceled"
```
jyellick (Sat, 10 Jun 2017 02:34:18 GMT):
@lenin.mehedy It's hard to tell from those logs, but it sounds like perhaps the peer went down
cca88 (Sat, 10 Jun 2017 09:09:04 GMT):
Interestingly, a Corda notary is now supported in 3 resilience (consensus) models: Single-node / Raft-notary (tolerating crashes) / BFT-notary (tolerating malicious nodes; based on the open-source BFT-SMaRt).
See also here - https://docs.corda.net/releases/release-M11.1/release-notes.html
I hope that Fabric will get BFT support again soon...
kostas (Sat, 10 Jun 2017 10:20:40 GMT):
Noticed that as well when I went through their beta announcement. Very nicely done.
jeffgarratt (Sat, 10 Jun 2017 12:42:19 GMT):
@lenin.mehedy those logs appear to show that a client that was listening to events disconnected; in general this is NOT an error. "Context canceled" represents the closing of the gRPC stream.
Mnorberto (Mon, 12 Jun 2017 04:16:15 GMT):
Has joined the channel.
paapighoda (Mon, 12 Jun 2017 05:58:25 GMT):
Has joined the channel.
CedricHumbert (Mon, 12 Jun 2017 12:49:57 GMT):
Has joined the channel.
Asara (Mon, 12 Jun 2017 13:21:23 GMT):
@jyellick regarding cryptogen, it consumes the crypto-config.yaml. In that, is there a way to make certs specifically for an organization's clients? I only see examples for orderers and peers.
yacovm (Mon, 12 Jun 2017 13:21:55 GMT):
cryptogen also creates client certs
Asara (Mon, 12 Jun 2017 13:22:41 GMT):
Is there an example somewhere of how to do that?
jyellick (Mon, 12 Jun 2017 13:39:35 GMT):
@yacovm ^
yacovm (Mon, 12 Jun 2017 13:42:09 GMT):
ah but it just creates them
yacovm (Mon, 12 Jun 2017 13:42:13 GMT):
by default I think
yacovm (Mon, 12 Jun 2017 13:42:47 GMT):
in users/
Asara (Mon, 12 Jun 2017 13:42:51 GMT):
client certs? It creates the certs required by the orderer/peers to connect to the CA, but not TLS certs you would use for clients to connect to the MSP
Asara (Mon, 12 Jun 2017 13:42:53 GMT):
Hm. I'll check
Asara (Mon, 12 Jun 2017 13:43:42 GMT):
Okay so, ./ordererOrganizations/example.com/users/Admin@example.com would be the orderer organization's Admin user's certs?
Asara (Mon, 12 Jun 2017 13:44:09 GMT):
Is there a functional difference between the Admin user and User1 that gets created?
yacovm (Mon, 12 Jun 2017 14:00:20 GMT):
yeah
yacovm (Mon, 12 Jun 2017 14:00:22 GMT):
there is
yacovm (Mon, 12 Jun 2017 14:00:44 GMT):
the latter can't install CCs, etc.
yacovm (Mon, 12 Jun 2017 14:00:47 GMT):
in the case of the peer
yacovm (Mon, 12 Jun 2017 14:00:57 GMT):
in the case of the orderer I'm not sure what role the admin user has, @jyellick ?
jrosmith (Mon, 12 Jun 2017 14:16:51 GMT):
hey all, I'm following the example for joining a channel at https://github.com/hyperledger/fabric-sdk-node/blob/master/test/integration/e2e/join-channel.js. I'm having trouble getting the genesis_block for my channel, getting the following error:
```
error: [Orderer.js]: sendDeliver - rejecting - status:NOT_FOUND
(node:1726) UnhandledPromiseRejectionWarning: Unhandled promise rejection (rejection id: 1): Error: Invalid results returned ::NOT_FOUND
```
I'm not sure what the issue is, but I feel like I should be using the peer admin to query for the genesis_block? I'm setting up a client using the node_sdk and I don't think I fully understand why a client would need access to Orderer credentials
jrosmith (Mon, 12 Jun 2017 14:18:50 GMT):
I get that its not finding the genesis block, I just don't understand why I'm using the orderer to query for it
latitiah (Mon, 12 Jun 2017 14:54:15 GMT):
@kostas I have some questions about the kafka setup. I've been talking to a couple of people and there seems to be some confusion. I noticed the kafka composition files in the bddtests folder.
* Is this configuration a recommended setup? (3 kafka brokers, 3 zookeepers, 3 orderers)
* Will kafka not work if we don't set things up this way? (Is there a setup such that it would equate to solo if we don't set the values a certain way? There seems to be a train of thought that says that using only 1 broker with 1 orderer is the same as a solo-based orderer?)
* Are there defaults that assume that this is the setup? If we want to tweak certain env vars, what are the best ones to tweak?
* Are there any additional env vars not listed in the composition files?
* Is it possible to get this all documented somewhere?
jyellick (Mon, 12 Jun 2017 15:05:26 GMT):
@jrosmith Are you certain the channel creation was successful? In particular, you should not have seen anything about a `BAD_REQUEST` in reply.
jyellick (Mon, 12 Jun 2017 15:05:43 GMT):
There is also some time between channel creation request and the block becoming available
jyellick (Mon, 12 Jun 2017 15:05:55 GMT):
You could try using `peer channel fetch config` to try again
kostas (Mon, 12 Jun 2017 15:17:47 GMT):
@latitiah:
> * Is this configuration a recommended setup? (3 kafka brokers, 3 zookeepers, 3 orderers)
(Note that it's 4 brokers.) You can vary the number of orderers as you see fit, but the rest is the bare minimum setup that allows you to have one broker go down and keep things moving. Kafka is there to give us crash fault tolerance; 1 broker going down is the simplest fault in that sense, and this is the minimum setup that allows you to achieve that.
> Will kafka not work if we don't set things up this way? (Is there a setup such that it would equate to solo if we don't set the values a certain way? There seems to be a train of thought that says that using only 1 broker with 1 orderer is the same as a solo-based orderer?)
This train of thought needs to be derailed :slight_smile: We'll either use this setup or something with even higher crash fault tolerances (e.g. add another 5 brokers in there with the same settings).
> * Are there defaults that assume that this is the setup? If we want to tweak certain env vars, what are the best ones to tweak?
Not sure I get this question?
> * Are there any additional env vars not listed in the composition files?
All the Kafka-related settings are listed here:
https://github.com/hyperledger/fabric/blob/master/bddtests/dc-orderer-base.yml (`ORDERER_KAFKA_*`)
https://github.com/hyperledger/fabric/blob/master/bddtests/dc-orderer-kafka-base.yml
https://github.com/hyperledger/fabric/blob/master/bddtests/dc-orderer-kafka.yml
Along with descriptions next to each setting and pointers to actual Kafka/ZK setting.
Anything not set there explicitly is assumed to have the default value taken from here: https://kafka.apache.org/documentation/#brokerconfigs
The two additional things that someone would set are TLS and `log.retention.hours` (`KAFKA_LOG_RETENTION_HOURS`) which will need to be set to a large value in a production environment.
> * Is it possible to get this all documented somewhere?
It is and it should. I have a draft of a guide that wraps all of these things together but I was caught up in bug fixing up until last night. Watch FAB-3384, I should have a changeset out tomorrow or Wednesday at the latest.
kostas (Mon, 12 Jun 2017 15:17:56 GMT):
Please let me know if there are more questions.
latitiah (Mon, 12 Jun 2017 15:24:34 GMT):
Cool! I think that I follow. :P I'll read up on the docs and the JIRA issue (I've seen the composition files in bddtests) to be sure. Thx!
scottz (Mon, 12 Jun 2017 16:16:54 GMT):
@kostas we want all our tests to use (at a minimum) the recommended configuration. To clarify, it sounds like you are saying we should use 3 orderers, 4 kafka brokers, 3 zookeepers. And use KAFKA_MIN_INSYNC_REPLICAS=2 and KAFKA_DEFAULT_REPLICATION_FACTOR=3. With that setup, a network can continue when 2 KBs go down. I assume your statement "only one can go down" was referring to the replication factor being 3 - and if we had only 3 KBs.
scottz (Mon, 12 Jun 2017 16:17:38 GMT):
Another question: If we were to add 5 more Kafkabrokers as you suggested, for some tests, then should we keep KAFKA_DEFAULT_REPLICATION_FACTOR=3? Or at what point would you recommend increasing it?
jrosmith (Mon, 12 Jun 2017 16:52:09 GMT):
@jyellick the channel was initially created via `generateArtifacts`. I have the physical genesis.block file, I just can't access via the node_sdk
jyellick (Mon, 12 Jun 2017 16:53:18 GMT):
@jrosmith The `NOT_FOUND` error would indicate that the channel does not exist, are you certain you're specifying the channel ID correctly?
jrosmith (Mon, 12 Jun 2017 16:55:35 GMT):
is there a way to check the channel ID from the genesis.block? i'm nearly positive that it was setup via the example with the name 'mychannel'
kostas (Mon, 12 Jun 2017 17:33:21 GMT):
@scottz:
> With that setup, a network can continue when 2 KBs go down. I assume your statement "only one can go down" was referring to the replication factor being 3 - and if we had only 3 KBs.
Incorrect. I wrote this with the current/suggested setup in mind.
Only one broker can go down with those min.ISR and RF settings we've given, and we can still have a fully functional ordering service. The reason for this is that at channel creation RF brokers need to be up. So if you were to bring two brokers down you'd be unable to create a new channel. (This is something that I'm also noting down in the document.)
> Another question: If we were to add 5 more Kafkabrokers as you suggested, for some tests, then should we keep KAFKA_DEFAULT_REPLICATION_FACTOR=3? Or at what point would you recommend increasing it?
There's really no hard rule for this. Depends on the resources you have available, and the level of paranoia :slight_smile: I'd keep min.isr to 2 but would bump RF to say 4 or 5 if I had 10 brokers up.
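The crash-fault arithmetic in this exchange can be stated compactly. `kafkaTolerance` is a hypothetical helper, not anything in Fabric; the channel-creation requirement follows kostas's note that RF brokers must be up when the partition is created:

```go
package main

import "fmt"

// kafkaTolerance: with replication factor rf and min.insync.replicas
// minISR, writes keep flowing while at most rf-minISR brokers are
// down, but creating a new channel (topic/partition) needs all rf
// brokers up so the partition can be fully replicated at creation.
func kafkaTolerance(rf, minISR int) (writeTolerance, creationNeeds int) {
	return rf - minISR, rf
}

func main() {
	wt, cn := kafkaTolerance(3, 2)
	fmt.Printf("RF=3, minISR=2: tolerate %d broker down for writes; need %d brokers up to create a channel\n", wt, cn)
}
```

So with RF=3 and min.ISR=2, one broker can be down and the ordering service stays fully functional, but bringing a second one down blocks channel creation, which matches "only one can go down" above.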
kostas (Mon, 12 Jun 2017 17:33:46 GMT):
If there are more questions, please let me know.
jeffgarratt (Mon, 12 Jun 2017 17:39:17 GMT):
@jrosmith yes. The genesis block (or any block with a config) can be deserialized (after several steps of deserialization) into a ConfigEnvelope, whose last_update is a ConfigUpdate, which has the channel_id as a field
jrosmith (Mon, 12 Jun 2017 17:40:16 GMT):
@jeffgarratt and to deserialize I should be using the BlockDecoder class?
jeffgarratt (Mon, 12 Jun 2017 17:40:56 GMT):
@jrosmith hmmm... that may be a @jyellick question... but sounds promising
jyellick (Mon, 12 Jun 2017 17:42:14 GMT):
@jrosmith You may use `configtxgen -inspectBlock`
jrosmith (Mon, 12 Jun 2017 17:42:47 GMT):
@jyellick @jeffgarratt awesome, thanks guys i'll do that now
jeffgarratt (Mon, 12 Jun 2017 17:42:55 GMT):
yw!! good luck!
jyellick (Mon, 12 Jun 2017 17:43:06 GMT):
You may also look at the new `configtxlator` tool which provides more information, but is a little more cumbersome to use (as it is REST only for now)
jrosmith (Mon, 12 Jun 2017 17:46:41 GMT):
@jyellick I'm thinking something went wrong somewhere...not sure where though. Bright side I have another error message:
```
root@ip-172-30-2-69 linux-amd64]# ./bin/configtxgen -inspectBlock channel-artifacts/genesis.block
2017-06-12 17:44:43.872 UTC [common/configtx/tool] main -> INFO 001 Loading configuration
2017-06-12 17:44:43.872 UTC [common/configtx/tool/localconfig] Load -> CRIT 002 Error reading configuration: Unsupported Config Type ""
panic: Error reading configuration: Unsupported Config Type ""
goroutine 1 [running]:
panic(0x8c7820, 0xc4201c0da0)
```
jyellick (Mon, 12 Jun 2017 17:52:48 GMT):
What size is the block? Take a look in an editor? It should be mostly binary nonsense.
Asara (Mon, 12 Jun 2017 17:55:26 GMT):
genesis.block? Er... the file has some random data, testchainid and a hash, and then info about the consortium
Asara (Mon, 12 Jun 2017 17:56:10 GMT):
including certs for them
jyellick (Mon, 12 Jun 2017 17:56:13 GMT):
I meant @jrosmith's `channel-artifacts/genesis.block `
Asara (Mon, 12 Jun 2017 17:56:28 GMT):
@jyellick Am working with @jrosmith
Asara (Mon, 12 Jun 2017 17:56:31 GMT):
That is the file i'm talking about :)
jyellick (Mon, 12 Jun 2017 17:56:33 GMT):
Ah, got it
jyellick (Mon, 12 Jun 2017 17:58:02 GMT):
So that sounds like the ordering system channel genesis block
jyellick (Mon, 12 Jun 2017 17:58:15 GMT):
I was hoping to get `configtxgen -inspectBlock` run against the genesis block for the new channel
jyellick (Mon, 12 Jun 2017 17:58:24 GMT):
The one that was returned by the channel creation command if you used the peer CLI
Asara (Mon, 12 Jun 2017 17:58:47 GMT):
Wouldn't the genesis block be generated by generateArtifacts?
Asara (Mon, 12 Jun 2017 17:58:51 GMT):
with the name of mychannel?
jyellick (Mon, 12 Jun 2017 18:00:16 GMT):
There is the ordering system channel, which is bootstrapped using the genesis block output by `configtxgen`
jyellick (Mon, 12 Jun 2017 18:00:24 GMT):
Out of `generateArtifacts` this defaults to `testchainid`
jyellick (Mon, 12 Jun 2017 18:00:44 GMT):
Then, `generateArtifacts` creates a channel creation transaction, and sends that in via the peer CLI
jyellick (Mon, 12 Jun 2017 18:00:57 GMT):
the peer cli writes out `
jyellick (Mon, 12 Jun 2017 18:01:13 GMT):
This is the new genesis block which was created for the new channel, in response to the creation request
Asara (Mon, 12 Jun 2017 18:01:24 GMT):
and this only exists on the peers?
jyellick (Mon, 12 Jun 2017 18:02:03 GMT):
I'm not sure what you mean? This is the genesis block for the channel, it is the first block in the channel. It is also the artifact sent to the peers in the join channel request
Asara (Mon, 12 Jun 2017 18:03:16 GMT):
Yeah, so the CHANNEL_NAME.block request gets sent via generateArtifacts to the peer(s?), and it (they?) will create the genesis block. Does this block end up on the docker host? Or do I need to retrieve it from the peers?
Asara (Mon, 12 Jun 2017 18:03:37 GMT):
Does it get created on the peers for each consortium? or just Org1 (in terms of the example e2e docker setup)
jyellick (Mon, 12 Jun 2017 18:09:28 GMT):
The `
Asara (Mon, 12 Jun 2017 18:12:33 GMT):
So again, I'm bringing up the environment using generateArtifacts/docker-compose. generateArtifacts creates the crypto related files, but does bringing up the environment also initialize the channel specified with the CHANNEL_NAME environment variable, or do I need to create it manually? I am running everything on docker, so is the block created in the peer's docker container (and does it need to be retrieved from there), or is it also copied to somewhere on the docker host?
jyellick (Mon, 12 Jun 2017 18:25:02 GMT):
`./network_setup.sh up foo` will bring the environment up with a channel named `foo`
jyellick (Mon, 12 Jun 2017 18:25:29 GMT):
The block is created in the ordering service, but retrieved by the peer CLI, and then sent to the peers to join
jyellick (Mon, 12 Jun 2017 18:25:58 GMT):
This is I believe on the local filesystem, so you do not need to get it from docker
Asara (Mon, 12 Jun 2017 18:31:49 GMT):
`./generateArtifacts.sh` doesn't bring up the environment. It only creates the crypto configurations... `./network_setup.sh up foo` will bring up the entire network.
Asara (Mon, 12 Jun 2017 18:32:18 GMT):
I am running `./generateArtifacts.sh` to create the crypto configurations, and just running docker-compose manually. So it shouldn't be different.
Asara (Mon, 12 Jun 2017 18:32:56 GMT):
the only `*.block` file I have after `./generateArtifacts.sh` is `genesis.block` under channel-artifacts.
Asara (Mon, 12 Jun 2017 18:36:35 GMT):
Hm...
Asara (Mon, 12 Jun 2017 18:36:58 GMT):
Is it because in `network_setup.sh` `./generateArtifacts.sh` is being sourced and not just run as a script?
jyellick (Mon, 12 Jun 2017 18:40:10 GMT):
Right, so, `./generateArtifacts.sh` creates the genesis block to bootstrap the orderer, and the configtx to create the channel, but the genesis block for your new channel is not created until after that configtx has been submitted to ordering
Asara (Mon, 12 Jun 2017 18:41:01 GMT):
Is that a manual process? That doesn't occur during the environment coming up via docker-compose?
jyellick (Mon, 12 Jun 2017 18:41:20 GMT):
I may have mis-spoke. It has been a while since I have been in these files, let me check for you now
Asara (Mon, 12 Jun 2017 18:41:33 GMT):
Thank you friend!
jyellick (Mon, 12 Jun 2017 18:45:07 GMT):
Ah, yes, this is my mistake, I was still thinking about running the end to end script locally. You can see that in the docker compose file there is a `peer-cli` container where the peer commands are run. In particular, it runs `scripts/script.sh`. There, the `createChannel` function invokes `peer channel create`, which submits the configtx from `generateArtifacts.sh` and will write the `
jyellick (Mon, 12 Jun 2017 18:45:11 GMT):
Sorry about that
Asara (Mon, 12 Jun 2017 18:46:35 GMT):
So that isn't run in the `docker-compose-e2e.yaml` file?
jyellick (Mon, 12 Jun 2017 18:46:50 GMT):
It is
jyellick (Mon, 12 Jun 2017 18:47:18 GMT):
```
command: /bin/bash -c './scripts/script.sh ${CHANNEL_NAME}; sleep $TIMEOUT'
```
Asara (Mon, 12 Jun 2017 18:47:21 GMT):
Er... cli isn't included as a service in that compose file
jyellick (Mon, 12 Jun 2017 18:47:38 GMT):
Oh, right
jyellick (Mon, 12 Jun 2017 18:47:41 GMT):
It is in `docker-compose-cli.yaml`
Asara (Mon, 12 Jun 2017 18:49:28 GMT):
Which doesn't get run if you set the `COMPOSE_FILE` in `network_setup.sh` to the e2e version...
jyellick (Mon, 12 Jun 2017 18:49:29 GMT):
You can see in `network_setup.sh`
jyellick (Mon, 12 Jun 2017 18:49:36 GMT):
Yes
Asara (Mon, 12 Jun 2017 18:49:50 GMT):
Which is the version that gets used with the Fabric CA
jyellick (Mon, 12 Jun 2017 18:50:38 GMT):
Ah, okay... I guess I am slow to catch up here
jyellick (Mon, 12 Jun 2017 18:50:48 GMT):
Let me reread your original question
Asara (Mon, 12 Jun 2017 18:52:52 GMT):
So I guess my original question was misguided, since I am now learning that a channel was never actually created, since `scripts/script.sh` was never run.
Asara (Mon, 12 Jun 2017 18:53:26 GMT):
And that everything that would occur via that script needs to be done manually in the e2e/CA environment
jyellick (Mon, 12 Jun 2017 18:54:17 GMT):
Right. So `generateArtifacts.sh` creates a channel creation TX, which you can submit, to create a channel. This can be submitted via the SDK, to create the channel. Then the SDK can be used to retrieve the genesis block, which can be passed as a parameter to the join channel.
jyellick (Mon, 12 Jun 2017 18:54:58 GMT):
I expect this is done in the sdk e2e, but would need to check
Asara (Mon, 12 Jun 2017 18:55:19 GMT):
Awesome. that makes perfect sense. So `channel-artifacts/channel.tx` which contains the mychannel block can be submit via the SDK to actually initialize the channel
jyellick (Mon, 12 Jun 2017 18:56:04 GMT):
> which contains the mychannel block
It contains the mychannel creation transaction. This will be processed by the orderer to create the mychannel genesis block
Asara (Mon, 12 Jun 2017 18:57:13 GMT):
So that goes back to my original question then. I can use `./generateArtifacts.sh` to create the peer/orderer admins/users. Should I just use those users in the SDK to proceed with the channel creation?
Asara (Mon, 12 Jun 2017 18:57:22 GMT):
OR is there another way for me to create users for the SDK itself...
jyellick (Mon, 12 Jun 2017 19:00:26 GMT):
We are rapidly drifting outside of my area of expertise, but by default, channel creation requires an Admin cert to sign. These admin certs are part of the MSP definition, so you cannot simply generate a new one. You may generate user certificates which may transact for things like chaincodes. But as channel creation requires the admin role, new certs won't work.
jyellick (Mon, 12 Jun 2017 19:01:02 GMT):
I'm not certain what APIs the SDK provides for creating users in the SDK, but I do not believe they support creating new admin users
jyellick (Mon, 12 Jun 2017 19:01:17 GMT):
(This would require a config update to update the channel definitions with the new admin users)
Asara (Mon, 12 Jun 2017 19:01:29 GMT):
So when `./generateArtifacts.sh` creates `crypto-config/peerOrganizations/org1.example.com/users/Admin\@org1.example.com/`
Asara (Mon, 12 Jun 2017 19:02:21 GMT):
It is creating an Admin
Asara (Mon, 12 Jun 2017 19:02:24 GMT):
for that organization?
Asara (Mon, 12 Jun 2017 19:02:42 GMT):
And the peer itself has its own set of credentials with which it communicates on the network?
jyellick (Mon, 12 Jun 2017 19:02:46 GMT):
You see `crypto-config/peerOrganizations/org1.example.com/msp/admincerts` contains a single `.pem` file, this is the admin cert for that organization. And, channel admin actions, like channel creation generally require this cert to be used for signing.
jyellick (Mon, 12 Jun 2017 19:03:32 GMT):
For normal operations, like chaincode invocation, etc., a normal user cert, which is simply signed by the CA is sufficient. These are the new users I suspect the SDK is capable of helping you with.
jyellick (Mon, 12 Jun 2017 19:03:46 GMT):
But you would be better off asking @jimthematrix as that is purely speculative.
jimthematrix (Mon, 12 Jun 2017 19:05:28 GMT):
yes, the SDKs (node and java) support enrolling (as well as registering) users with fabric-ca
jimthematrix (Mon, 12 Jun 2017 19:06:58 GMT):
note that all the users registered with fabric-ca are of the "MEMBER" role of the organization; they will never be of the "ADMIN" role because the admin certs must exist ahead of time when the peers start up, while the fabric-ca enrollment for new users happens dynamically at runtime
Asara (Mon, 12 Jun 2017 19:17:27 GMT):
Alright cool.
Asara (Mon, 12 Jun 2017 19:17:53 GMT):
Thanks guys! Appreciate it a lot.
Asara (Mon, 12 Jun 2017 19:18:07 GMT):
Thanks for being so an-jyellick :D
jyellick (Mon, 12 Jun 2017 19:18:28 GMT):
Haha, that's a first, happy to help
lm_nop (Mon, 12 Jun 2017 23:14:53 GMT):
Has joined the channel.
Glen (Tue, 13 Jun 2017 02:44:41 GMT):
@kostas @latitiah Hi, I've seen your discussion about the Kafka cluster; I have some questions about production deployment. As one channel is one topic, if we set the partition number to more than one, transactions will be distributed across different partitions; then when we consume on the orderers, in my opinion, we need to iterate over the partitions to collect all transactions in a certain order. Is that right? If so, does Fabric support a relevant scheduling strategy, or do we need to implement one ourselves?
jyellick (Tue, 13 Jun 2017 02:58:45 GMT):
@Glen The fabric orderer manages creating the partitions and topics, there is no need for you to manage this manually
jyellick (Tue, 13 Jun 2017 03:03:01 GMT):
I believe the discussion seen earlier was around the replication strategy for the partitions. Kafka allows specifying the minimum number of 'in sync replicas', before the partition becomes read-only. In general, this number should be > 1 to achieve true CFT, but the number of places the data is replicated may be greater than this minimum value (the replication factor)
Glen (Tue, 13 Jun 2017 03:46:05 GMT):
Thanks jyellick, yes, just one partition by default. I'm not concerned about the replicas, but we need to use multiple partitions to improve concurrency and throughput.
Glen (Tue, 13 Jun 2017 03:47:40 GMT):
I also doubt whether Fabric is suitable for production deployments, which in reality usually have thousands of partitions
jyellick (Tue, 13 Jun 2017 03:54:22 GMT):
@Glen You are welcome to experiment, but my gut says that increasing the number of partitions per fabric channel will not increase throughput. I do not believe that is the bottleneck
Glen (Tue, 13 Jun 2017 03:57:31 GMT):
If I experiment, I need to modify the pub and sub strategy, because Fabric only supports one partition
jyellick (Tue, 13 Jun 2017 03:57:46 GMT):
Correct
jyellick (Tue, 13 Jun 2017 03:58:32 GMT):
(one partition, per channel)
Glen (Tue, 13 Jun 2017 04:01:13 GMT):
Thanks, I also have a doubt about that: how to guarantee the global order. One partition per channel is the same as one topic per channel, except that the publishers can be divided into small groups.
kostas (Tue, 13 Jun 2017 05:33:29 GMT):
@Glen I'm having difficulty understanding your point here. What is your claim?
Glen (Tue, 13 Jun 2017 06:23:53 GMT):
Hi Kostas, I want to configure more than one partition to serve; then I need to implement a consuming strategy to collect a global batch of txs by iterating over all partitions under one topic
kostas (Tue, 13 Jun 2017 06:35:46 GMT):
> I want to configure more than one partition to serve
kostas (Tue, 13 Jun 2017 06:36:01 GMT):
Who's the "I" in this sentence?
kostas (Tue, 13 Jun 2017 06:46:27 GMT):
Why? Yes, partitions are a unit of parallelization in Kafka, and consumer groups are a great way of distributing the load. But have you considered how much more complex your design will become if the orderers are split into consumer groups? Have you considered the extra hops you're adding from the time a customer issues a Deliver RPC until it gets served? Have you considered the consuming strategy, as you write, that will allow you to iterate over all partitions under a topic? Can you confidently say that this is a better strategy than the current one? We considered this option when designing the system, but decided against it. I've seen the profiles from performance runs and, as @jyellick noted, nothing points to the ordering service node consuming all topics as the bottleneck.
kostas (Tue, 13 Jun 2017 06:49:35 GMT):
FWIW I'm also highly skeptical that we'll get to see "thousands of channels" deployments. (I'd love to be proven wrong.)
leoleo (Tue, 13 Jun 2017 06:59:55 GMT):
Has joined the channel.
Glen (Tue, 13 Jun 2017 07:01:38 GMT):
Thank you, I understand
elsesiy (Tue, 13 Jun 2017 08:01:51 GMT):
Has joined the channel.
yecineoueslati (Tue, 13 Jun 2017 09:37:26 GMT):
Has joined the channel.
SotirisAlfonsos (Tue, 13 Jun 2017 13:28:42 GMT):
Hello. I want to set up a network with 4 orderers in a Kafka cluster, but I am having some trouble setting it up. I have 4 orderer containers, 1 zookeeper, and 4 brokers (one for each orderer). Am I on the right track?
Currently I am getting an error like this:
`Cannot set up producer = kafka: client has run out of available brokers to talk to`
jyellick (Tue, 13 Jun 2017 14:06:04 GMT):
@SotirisAlfonsos There are docker compose files which can do this for you. See for instance `bddtests/dc-orderer-kafka.yml`
jyellick (Tue, 13 Jun 2017 14:07:13 GMT):
With respect to your particular setup, we would recommend that there always be at least 3 nodes of each type (orderers, zookeepers, and brokers). In the scenario you described, the lone zookeeper is a single point of failure for your network.
kostas (Tue, 13 Jun 2017 14:42:27 GMT):
@SotirisAlfonsos: I am guessing you use `configtxgen` to create the genesis block for your network?
kostas (Tue, 13 Jun 2017 14:45:29 GMT):
If you do, make sure that the `Orderer.Kafka.Brokers` key in `configtx.yaml` points to at least one of your Kafka brokers' IPs.
kostas (Tue, 13 Jun 2017 14:46:07 GMT):
There are two ways you can get that error.
kostas (Tue, 13 Jun 2017 14:46:19 GMT):
1. If you have incorrectly configured the brokers list in `configtx.yaml`: https://github.com/hyperledger/fabric/blob/master/sampleconfig/configtx.yaml#L167
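For reference, the relevant stanza of `configtx.yaml` looks roughly like the following (the broker addresses here are placeholders; substitute the hostnames/IPs of your own brokers):

```yaml
Orderer:
    OrdererType: kafka
    Kafka:
        # The orderer only needs one reachable broker here for its
        # initial metadata fetch; it discovers the rest of the cluster.
        Brokers:
            - kafka0:9092
            - kafka1:9092
```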
SotirisAlfonsos (Tue, 13 Jun 2017 14:47:28 GMT):
@jyellick Those files are very useful. I am now trying to extract the info into my compose file. Yes @kostas, I am using configtxgen for the genesis block. The default `127.0.0.1:9092` should work, I guess, right?
kostas (Tue, 13 Jun 2017 14:47:52 GMT):
2. If your brokers are correctly encoded, but they are down for more than `LongTotal` since you tried to bring up the ordering service: https://github.com/hyperledger/fabric/blob/master/sampleconfig/orderer.yaml#L169
kostas (Tue, 13 Jun 2017 14:48:27 GMT):
> The default `127.0.0.1:9092` should work i guess right?
It would work if that's the address of one of your Kafka brokers, which is unlikely.
SotirisAlfonsos (Tue, 13 Jun 2017 14:49:58 GMT):
@kostas I see what you mean. You are probably right. I will check that.
karve (Tue, 13 Jun 2017 16:38:26 GMT):
Has joined the channel.
Sandeep (Tue, 13 Jun 2017 18:01:37 GMT):
Has joined the channel.
Sandeep (Tue, 13 Jun 2017 18:12:18 GMT):
What is the process to add a new Org to existing channel?. Does this make sense ?
1. Get the system channel tx file.
2. Use configtxlator to decode the existing system channel tx. Follow the steps for configtxlator, add the new Org and get the updated file
3. peer channel update using the new system channel tx file.
4. Get the channel config block.
5. Use configtxlator to decode the existing channel config block. Follow the steps for configtxlator, add the new Org and get the updated file
6. peer channel update using the new channel config block.
jyellick (Tue, 13 Jun 2017 18:27:29 GMT):
@Sandeep Not quite, let me outline them
jyellick (Tue, 13 Jun 2017 18:29:10 GMT):
1. Get the config block for the channel
2. Translate the block into JSON via `configtxlator` and extract the `config` section.
3. Make a copy of this section, and edit it to add your new org
4. Encode both the extract config, and the updated config to proto using `configtxlator`
5. Have `configtxlator` compute the `ConfigUpdate` based on the two config objects you just produced
6. Collect signatures over this `ConfigUpdate` and submit to ordering as a config update transaction
jyellick (Tue, 13 Jun 2017 18:29:29 GMT):
You may see this process more verbosely via https://github.com/hyperledger/fabric/tree/master/examples/configtxupdate
jyellick (Tue, 13 Jun 2017 18:29:53 GMT):
There is a bootstrap example, and a config update example, you want the latter.
jyellick (Tue, 13 Jun 2017 18:30:24 GMT):
You can see how the config update changes the batch size for the channel, but the procedure is no different for adding an org.
Sandeep (Tue, 13 Jun 2017 18:35:30 GMT):
Thanks @jyellick . So we don't have to do anything at the system channel level to let the orderer know that a new org has joined the consortium. Correct?
jyellick (Tue, 13 Jun 2017 18:36:36 GMT):
@Sandeep Correct. Once a channel has been established, that channel may mutate by whatever policies were established for that channel at creation time. It may even add organizations which are not defined in the consortium.
Sandeep (Tue, 13 Jun 2017 18:37:39 GMT):
@jyellick on step 6, about collecting signatures: how do I do that?
jyellick (Tue, 13 Jun 2017 18:38:54 GMT):
This is an out of band procedure. The bytes of the config update must be sent by some mechanism (HTTP, email, etc.). The admin receives these bytes, inspects them via `configtxlator` and assuming the admin approves of the change, may reply with a signature (once more, out of fabric band).
jyellick (Tue, 13 Jun 2017 18:40:01 GMT):
We anticipate that tools such as web services will be produced to facilitate this exchange, but as the decision to sign is inherently a very human one (unlike endorsement, which evaluates an (ideally) deterministic chaincode), we decided not to bake this into the fabric for v1.
Sandeep (Tue, 13 Jun 2017 18:42:19 GMT):
Thanks @jyellick . I am trying to add a new org on the channel using e2e cli example. Do I have to go through the process of collecting signatures for that example ?
jyellick (Tue, 13 Jun 2017 18:44:10 GMT):
Correct. In the e2e example there are two application orgs, and the modification policy requires that the majority of the admin policies are satisfied, in this case, it means that an admin from each org must attach a signature in order to change the membership of the channel.
Sandeep (Tue, 13 Jun 2017 18:52:25 GMT):
Thanks @jyellick .
kostas (Tue, 13 Jun 2017 19:03:42 GMT):
It'll take him a few minutes to join I guess
ArchanaBalaji (Tue, 13 Jun 2017 19:41:33 GMT):
Has joined the channel.
SotirisAlfonsos (Tue, 13 Jun 2017 21:51:29 GMT):
@kostas thank you for the help earlier; turns out I just needed ` - kafka0:9092` in the configtx file. However, now I am getting timed out on the metadata fetch (with the 3-retry threshold). You pointed me to the file on master, but I am on beta and do not want to migrate yet. Could you point me to the place where I can find and change the default max retries?
kostas (Tue, 13 Jun 2017 22:09:29 GMT):
@SotirisAlfonsos: https://github.com/hyperledger/fabric/blob/v1.0.0-beta/orderer/kafka/config.go#L26
Do `brokerConfig.Metadata.Retry.Max = YourValueHere`.
kostas (Tue, 13 Jun 2017 22:09:31 GMT):
Note that you may want to adjust the backoff interval as well: `brokerConfig.Metadata.Retry.Backoff = 1 * time.Second`.
kostas (Tue, 13 Jun 2017 22:09:59 GMT):
(Also note that I've made all of these settings easy to change in master. You'll see them in RC1.)
kostas (Tue, 13 Jun 2017 22:10:32 GMT):
(Finally, I'd point out that the metadata fetch error surprises me a little, but I'm not familiar with your setup.)
SotirisAlfonsos (Tue, 13 Jun 2017 22:40:46 GMT):
@kostas turns out I had two issues. One was with the retry: it would reach the maximum number of retries before registering the brokers. The other is that my `- KAFKA_DEFAULT_REPLICATION_FACTOR=2` was previously at 3, and I was getting an error that it cannot be done with 2 brokers (although I have 4 brokers). I am putting some stress on the server with my setup, so that might be a reason for it. Now it seems to be working, though. Thank you very much for your help.
kostas (Tue, 13 Jun 2017 22:41:19 GMT):
Glad to be of help.
yahtoo (Wed, 14 Jun 2017 08:54:13 GMT):
Has joined the channel.
lenin.mehedy (Wed, 14 Jun 2017 11:39:47 GMT):
How do I enable TLS for Kafka and CouchDB?
SotirisAlfonsos (Wed, 14 Jun 2017 12:46:52 GMT):
Hello again. I am getting
`2017-06-14 12:39:31.878 UTC [grpc] Println -> DEBU 186 grpc: Server.Serve failed to create ServerTransport: connection error: desc = "transport: write tcp 172.22.0.15:7050->172.22.0.1:52171: write: broken pipe"` on the orderer logs on channel creation. Could it be because of a timeout on the kafka producer? Any pointers?
jyellick (Wed, 14 Jun 2017 14:00:50 GMT):
@muralisr The block structure would not be modified
jyellick (Wed, 14 Jun 2017 14:01:19 GMT):
Today, we encode a `Metadata` structure as proto bytes into each of the `block.Metadata` byte slice fields
jyellick (Wed, 14 Jun 2017 14:01:32 GMT):
(Except invalidtx)
jyellick (Wed, 14 Jun 2017 14:02:02 GMT):
Then, for the opaque bytes of the `Metadata.value`, we encode a second proto structure, which actually represents the specific sort of metadata
jyellick (Wed, 14 Jun 2017 14:03:45 GMT):
So, in the case of say, `LAST_CONFIGURATION`, we encode something which looks like:
```
metadata[i] = utils.MarshalOrPanic(&cb.Metadata{
	Value: utils.MarshalOrPanic(&cb.LastConfig{
		Index: j,
	}),
	Signatures: ...,
})
```
jyellick (Wed, 14 Jun 2017 14:05:40 GMT):
This means that if, say, we wanted to add another field to this metadata (for instance, in addition to the last config index, the hash of that config block's header), we would simply add a second field to the `LastConfig` proto, and existing clients would _not_ break
jyellick (Wed, 14 Jun 2017 14:06:19 GMT):
Now, contrast this with how the invalidtxs are encoded. They are encoded simply as
```
metadata[i] = bitmask
```
jyellick (Wed, 14 Jun 2017 14:07:47 GMT):
This means not only that you cannot sign over that data, but also that if, in the future, we wished to have a second bitmask... say, a SideDBAffected field indicating which transactions need to have additional data retrieved via gossip...
jyellick (Wed, 14 Jun 2017 14:08:21 GMT):
There's absolutely no way to do this without breaking every single client out there, or without switching to an entirely new data structure
jyellick (Wed, 14 Jun 2017 14:17:18 GMT):
From the reactions to the issue, everyone has asserted that "we can fix this post-v1 if we simply decide how to handle it based on some version information", which, I agree, on the surface sounds reasonable. But, on the other hand, I see absolutely no way to make v1 clients forwards compatible if we choose to change the way this field is encoded, and I don't see a particularly obvious way to gracefully break this ABI.
jeffgarratt (Wed, 14 Jun 2017 14:17:45 GMT):
I thought we had agreed to change this?
jyellick (Wed, 14 Jun 2017 14:17:48 GMT):
And despite repeated calls for a plan on the issue going forward from @dave.enyeart there is not one
jeffgarratt (Wed, 14 Jun 2017 14:17:51 GMT):
about 1.5 weeks ago
jyellick (Wed, 14 Jun 2017 14:18:05 GMT):
Not enough votes, only 3, plus my 1 vote which JIRA won't count because I opened the issue
muralisr (Wed, 14 Jun 2017 14:19:06 GMT):
reading @jyellick .. a bit slow :-)
kostas (Wed, 14 Jun 2017 14:20:00 GMT):
I hope common sense prevails and this gets the 5 votes it needs.
kostas (Wed, 14 Jun 2017 14:21:14 GMT):
@lenin.mehedy: http://docs.confluent.io/current/kafka/ssl.html for the Kafka part as usual, and edit the `Kafka.TLS` in `orderer.yaml` for the orderer part.
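The orderer-side stanza in question, per the v1.0 sample `orderer.yaml` (the file paths below are placeholders):

```yaml
Kafka:
    TLS:
        Enabled: true
        # PEM-encoded private key/certificate the orderer presents to
        # the Kafka brokers when asked for a client certificate
        PrivateKey:
            File: /path/to/client.key
        Certificate:
            File: /path/to/client.crt
        # PEM-encoded root CA certificates used to verify the brokers
        RootCAs:
            File: /path/to/ca.crt
```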
kostas (Wed, 14 Jun 2017 14:21:26 GMT):
I'd suggest asking in #fabric-ledger for CouchDB.
kostas (Wed, 14 Jun 2017 14:22:13 GMT):
@SotirisAlfonsos: Can't help you without a log, and your Docker Compose configuration files. I'm surprised you keep getting broken connections. Aren't you setting this up locally?
muralisr (Wed, 14 Jun 2017 14:29:52 GMT):
sorry for the interrupt @kostas @SotirisAlfonsos .. responding to @jyellick
muralisr (Wed, 14 Jun 2017 14:30:07 GMT):
so @jyellick something like this ?
muralisr (Wed, 14 Jun 2017 14:30:10 GMT):
```message Block {
BlockHeader header = 1;
BlockData data = 2;
Metadata metadata = 3; //use Metadata that can be signed
}
Or
message Block {
BlockHeader header = 1;
BlockData data = 2;
bytes metadata = 3; //use marshalled array of Metadata; currently only 1 element
}
```
jyellick (Wed, 14 Jun 2017 14:30:46 GMT):
No changes to the existing protos
muralisr (Wed, 14 Jun 2017 14:31:11 GMT):
oh I see
muralisr (Wed, 14 Jun 2017 14:31:23 GMT):
just mess with
muralisr (Wed, 14 Jun 2017 14:31:27 GMT):
```message BlockMetadata {
repeated bytes metadata = 1;
}```
jyellick (Wed, 14 Jun 2017 14:31:42 GMT):
We would simply define a new proto, like
```
message LedgerTxInfo {
repeated bool invalid_txs = 1;
}
```
Then we would encode that just like in the example above for last config
jeffgarratt (Wed, 14 Jun 2017 14:31:49 GMT):
can we get on a voice call to discuss the ramifications?
jyellick (Wed, 14 Jun 2017 14:32:30 GMT):
I'm not opposed, though it would require a fix from the SDK clients; I think that is the hardest piece of this
jyellick (Wed, 14 Jun 2017 14:32:37 GMT):
The fabric code is relatively straightforward
jeffgarratt (Wed, 14 Jun 2017 14:32:43 GMT):
I guess the answer is moot, but this seems quite important at least for folks to understand
muralisr (Wed, 14 Jun 2017 14:37:44 GMT):
so @jyellick ```
utils.MarshalOrPanic(&cb.Metadata{
	Value: utils.MarshalOrPanic(&cb.LedgerTxInfo{
		InvalidTxs: invaltxs,
	}),
	Signatures: ...,
})
``` ?
jyellick (Wed, 14 Jun 2017 14:37:53 GMT):
Exactly
muralisr (Wed, 14 Jun 2017 14:38:03 GMT):
which field will this go
jyellick (Wed, 14 Jun 2017 14:38:20 GMT):
In place of the `metadata[i] = invalidtxbitmask`
jyellick (Wed, 14 Jun 2017 14:38:46 GMT):
Where `i` is the `TRANSACTIONS_FILTER` field
jyellick (Wed, 14 Jun 2017 14:40:08 GMT):
Today:
```
metadata[SIGNATURES] = utils.MarshalOrPanic(&cb.Metadata{...})
metadata[LAST_CONFIG] = utils.MarshalOrPanic(&cb.Metadata{...})
metadata[TRANSACTIONS_FILTER] = invalidtxbitmask
metadata[ORDERER] = utils.MarshalOrPanic(&cb.Metadata{...})
```
jyellick (Wed, 14 Jun 2017 14:40:39 GMT):
This was why I originally opened the bug: it seemed inconsistent, and because the metadata is not hashed over, without signatures it could be manipulated
jyellick (Wed, 14 Jun 2017 14:41:03 GMT):
But, perhaps the even bigger issue is that we have no way to enhance this field, because we are using raw bytes, and not a proto
jyellick (Wed, 14 Jun 2017 14:41:30 GMT):
So, we lose all of the upgrade goodness, with forward and backwards compatibility that proto gives us
jyellick (Wed, 14 Jun 2017 14:42:35 GMT):
I actually do think it might have been a good idea, in retrospect, to modify the block definition as you proposed, replacing the repeated bytes metadata with repeated Metadata, but I thought that was likely to cause more problems, so did not advocate for that.
muralisr (Wed, 14 Jun 2017 14:43:51 GMT):
I understand now @jyellick ... (yielding the floor to @SotirisAlfonsos )
SotirisAlfonsos (Wed, 14 Jun 2017 14:44:58 GMT):
@kostas i am using the node-sdk to interact with the setup. Everything is set and used within a server, so i am using it locally. The config and docker compose are in https://github.com/SotirisAlfonsos/TempKafkaSet.
some logs:
```orderer3.example.com | 2017-06-14 14:35:01.142 UTC [orderer/kafka] processMessagesToBlock -> DEBU 17f [channel: testchainid] Successfully unmarshalled consumed message, offset is 0. Inspecting type...
orderer3.example.com | 2017-06-14 14:35:01.142 UTC [orderer/kafka] processConnect -> DEBU 180 [channel: testchainid] It's a connect message - ignoring
orderer3.example.com | 2017-06-14 14:35:01.142 UTC [orderer/kafka] processMessagesToBlock -> DEBU 181 [channel: testchainid] Successfully unmarshalled consumed message, offset is 1. Inspecting type...
orderer3.example.com | 2017-06-14 14:35:01.143 UTC [orderer/kafka] processConnect -> DEBU 182 [channel: testchainid] It's a connect message - ignoring```
```orderer.example.com | 2017-06-14 14:35:01.149 UTC [orderer/kafka] setupConsumerForChannel -> DEBU 17c [channel: testchainid] Created new channel consumer
orderer.example.com | 2017-06-14 14:35:01.149 UTC [orderer/kafka] Start -> INFO 17d [channel: testchainid] Consumer set up successfully
orderer.example.com | 2017-06-14 14:35:01.149 UTC [orderer/main] main -> INFO 17e Beginning to serve requests```
and when i am trying to create a channel:
```orderer.example.com | 2017-06-14 14:37:44.146 UTC [grpc] Println -> DEBU 187 grpc: Server.Serve failed to create ServerTransport: connection error: desc = "transport: write tcp 172.22.0.14:7050->172.22.0.1:38912: write: broken pipe"
orderer.example.com | 2017-06-14 14:38:44.198 UTC [grpc] Println -> DEBU 188 grpc: Server.Serve failed to create ServerTransport: connection error: desc = "transport: write tcp 172.22.0.14:7050->172.22.0.1:38918: write: broken pipe"```
and on the sdk side:
```[2017-06-14 14:41:09.138] [ERROR] Create-Channel - Error: SERVICE_UNAVAILABLE
at ClientDuplexStream.
kostas (Wed, 14 Jun 2017 14:46:09 GMT):
Which version of Fabric are you on?
SotirisAlfonsos (Wed, 14 Jun 2017 14:46:27 GMT):
v1.0.0-beta
kostas (Wed, 14 Jun 2017 14:46:54 GMT):
OK, the orderer doesn't seem to fail you here.
kostas (Wed, 14 Jun 2017 14:48:16 GMT):
Some issue on the node SDK side of things? I suggest creating a JIRA about this and maybe tagging Jim Zhang on it.
SotirisAlfonsos (Wed, 14 Jun 2017 14:54:45 GMT):
hm understood. I thought that the broken pipe was a kafka thing. Will move this to the node-sdk channel. Thanks again
naolduga (Wed, 14 Jun 2017 17:19:31 GMT):
Has joined the channel.
ryokawajp (Thu, 15 Jun 2017 03:02:22 GMT):
Hi. Is this the right channel to ask a question about endorsement policy?
ryokawajp (Thu, 15 Jun 2017 03:03:50 GMT):
I was using v0.6 and I am recently studying v1.0.
I want to build a blockchain network of the following configuration: 1 organization (say, Org1), 1 CA, 1 orderer and 4 peers.
How can I define the endorsement policy for a chaincode?
I read the document in [1] and I found that there is no means to specify a specific peer in an endorsement policy.
Only "member" and "admin" are allowed.
I want to have a policy which requires at least 2 endorsements from the peers in Org1, for example.
My first thought was: T(2, Org1.member) .
Will this policy work?
[1] https://hyperledger-fabric.readthedocs.io/en/latest/endorsement-policies.html
ryokawajp (Thu, 15 Jun 2017 03:06:02 GMT):
I assumed that the organization is covered by one MSPID "Org1".
kostas (Thu, 15 Jun 2017 03:07:31 GMT):
@ryokawajp: #fabric-peer-endorser-committer might be a better place for this question
ryokawajp (Thu, 15 Jun 2017 03:09:00 GMT):
@kostas thanks! I will visit that channel.
DannyWong (Thu, 15 Jun 2017 03:29:48 GMT):
Guys, I am reading up the SBFT stuffs...
https://jira.hyperledger.org/browse/FAB-378
https://www.microsoft.com/en-us/research/publication/practical-byzantine-fault-tolerance-proactive-recovery/?from=http%3A%2F%2Fresearch.microsoft.com%2Fen-us%2Fum%2Fpeople%2Fmcastro%2Fpublications%2Fp398-castro-bft-tocs.pdf
Will this make the Orderer decentralized again?
DannyWong (Thu, 15 Jun 2017 03:30:09 GMT):
Current solo / Kafka impl makes the orderer a centralized component...
HuangLijun (Thu, 15 Jun 2017 08:33:21 GMT):
Has joined the channel.
AnilOner (Thu, 15 Jun 2017 12:01:50 GMT):
Has joined the channel.
kostas (Thu, 15 Jun 2017 14:36:58 GMT):
@DannyWong: Correct.
LoveshHarchandani (Thu, 15 Jun 2017 15:01:02 GMT):
Hi, I have a question regarding the BFT consensus used in protocols like PBFT and SBFT, which involve a three-phase commit, specifically about write quorum size.
BFT literature says we need `2f+1` PREPAREs or COMMITs to consider a write (phase) complete, where `f = floor((n-1)/3)` and `n` is the number of nodes. The reason, as we understand it, is that there are at most `f` faulty nodes, hence the read quorum is `f+1` nodes, so having a write quorum of `2f+1` makes the two quorums intersect. But when `n` is not exactly `3f+1`, say 5, the above calculation gives `f = 1`, a read quorum of 2 (`f+1`) and a write quorum of 3 (`2f+1`). This does not ensure quorum intersection. So shouldn't the write quorum be `n-f`?
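The arithmetic in the question can be checked with a short sketch. This is illustrative only, not Fabric or PBFT code; `quorums` is a made-up helper name.

```python
# Illustrative check of the quorum sizes discussed above; `quorums` is a
# made-up helper, not Fabric/PBFT code.
def quorums(n):
    f = (n - 1) // 3      # max tolerated Byzantine nodes: f = floor((n-1)/3)
    write = 2 * f + 1     # classic PBFT write quorum
    read = f + 1          # read quorum
    return f, read, write

# n = 5: f = 1, read quorum 2, write quorum 3.
f, read, write = quorums(5)

# Two quorums of sizes a and b over n nodes overlap in at least a + b - n nodes.
overlap = read + write - 5   # 2 + 3 - 5 = 0: intersection not guaranteed
safe_write = 5 - f           # n - f = 4 would guarantee overlap: 2 + 4 - 5 = 1
```

This reproduces the questioner's point: with `n = 5` the `2f+1` write quorum and `f+1` read quorum are not guaranteed to intersect, while `n - f` restores the overlap.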
jyellick (Thu, 15 Jun 2017 15:24:18 GMT):
@LoveshHarchandani you can see in the v0.6 pbft implementation https://github.com/hyperledger/fabric/blob/v0.6/consensus/pbft/pbft-core.go#L460-L470
kostas (Thu, 15 Jun 2017 15:24:42 GMT):
https://www.zurich.ibm.com/%7Ecca/papers/pax.pdf, Section 2.2.3
rahulhegde (Thu, 15 Jun 2017 19:32:20 GMT):
@kostas
Hello - with the Fabric Beta images - we are facing an issue running any Fabric transaction using fabric-sdk-java. After a successful endorsement, the orderer rejects the broadcast message with `BAD_REQUEST`.
Following is the final snapshot from orderer log - ` 2017-06-15 00:09:47.735 UTC [orderer/common/broadcast] Handle -> WARN 45c2 Rejecting broadcast message because of filter error: Rejected by rule: *sigfilter.sigFilter `
Note -
1. The same setup used to work on Alpha-Images (19th May)
2. Using the Peer CLI, the endorsement as well as broadcasting to the Orderer is successful.
kostas (Thu, 15 Jun 2017 19:33:39 GMT):
@rahulhegde: Sounds like the request is not signed properly. @jyellick @rickr Any idea what may be causing this?
jyellick (Thu, 15 Jun 2017 19:34:53 GMT):
@rahulhegde Agreed with @kostas. That error message indicates that the channel configuration does not recognize the included signature as valid for the channel. This is most likely because the cert was not issued by one of the org CAs for the channel.
jyellick (Thu, 15 Jun 2017 19:36:46 GMT):
@rahulhegde Was this after channel creation was successful? (IE, this is not itself a channel creation request)
rahulhegde (Thu, 15 Jun 2017 19:36:49 GMT):
Question is: how did the endorsement pass? Should this have also failed?
rahulhegde (Thu, 15 Jun 2017 19:38:36 GMT):
@jjason - this is not a channel creation request. Bootstrapping of the network in our case is performed using Peer CLI (for now). I have the logs captured at the orderer which i can share offline (1-1). Otherwise i will have to mask-content.
jjason (Thu, 15 Jun 2017 19:38:36 GMT):
Has joined the channel.
jyellick (Thu, 15 Jun 2017 19:39:17 GMT):
Great, I just thought this might be a different alpha-1-to-beta bug; this confirms it is not.
jyellick (Thu, 15 Jun 2017 19:40:00 GMT):
I would not have expected endorsement to succeed. If you turn the logs up to debug on the orderer, we can get more information about why the signature was rejected. There can be a number of reasons
jyellick (Thu, 15 Jun 2017 19:40:10 GMT):
Bad clock sync in particular I have seen cause this
s.narayanan (Thu, 15 Jun 2017 19:40:30 GMT):
@kostas A few questions related to ZooKeeper failures on the ordering service:
1. If a ZooKeeper (ZK) node goes down, then as long as there is quorum, I presume there is no impact to the Kafka cluster and thus orderer nodes are not impacted. However, if the ZK leader goes down, then until a new ZK leader is elected, what would be the impact to Kafka and in turn to orderer nodes? Specifically, I presume certain Kafka operations could be impacted, especially if a broker or the controller goes down at the same time, since those operations rely on the ZK leader being up.
2. What is the impact to Kafka and the orderer if a quorum of ZK servers is down?
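For the quorum part of the question, ZooKeeper stays available only while a strict majority of the ensemble is up. A tiny sketch (illustrative only; `zk_survivable_failures` is a made-up name):

```python
# Illustrative: a ZooKeeper ensemble of size n needs a strict majority
# (floor(n/2) + 1) of servers to keep serving requests.
# `zk_survivable_failures` is a made-up helper, not ZooKeeper code.
def zk_survivable_failures(ensemble_size):
    quorum = ensemble_size // 2 + 1
    return ensemble_size - quorum

# A 3-node ensemble tolerates 1 failure; a 5-node ensemble tolerates 2.
```

If more than that many ZK servers are down, the ensemble loses quorum and cannot process writes, which in turn stalls the Kafka operations that depend on it.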
rahulhegde (Thu, 15 Jun 2017 19:45:16 GMT):
[ ](https://chat.hyperledger.org/channel/fabric-consensus?msg=eWigESefYb3qAWjgf) @jyellick @kostas
Shared logs with Jason.
kostas (Thu, 15 Jun 2017 19:51:39 GMT):
@s.narayanan: I have answers but I'd like to double-check them. Let me do that and get back to you here.
s.narayanan (Thu, 15 Jun 2017 19:51:56 GMT):
@kostas thanks
kostas (Thu, 15 Jun 2017 20:01:34 GMT):
Sure thing, these are good questions. I think I'll also create an FAQ section in the Kafka guide and add them there.
Asara (Thu, 15 Jun 2017 20:27:36 GMT):
@jyellick Still haven't been able to figure this error out... any advice?
Asara (Thu, 15 Jun 2017 20:27:41 GMT):
```
2017-06-15 20:25:44.580 UTC [msp] SatisfiesPrincipal -> DEBU 1a16 Checking if identity satisfies ADMIN role for Org1MSP
2017-06-15 20:25:44.580 UTC [cauthdsl] func2 -> DEBU 1a17 Identity ([10 7 79 114 103 49 77 83 80 18 235 6 45 45 45 45 45 66 69 71 73 78 32 67 69 82 84 73 70 73 67 65 84 69 45 45 45 45 45 10 77 73 73 67 87 84 67 67 65 103 67 103 65 119 73 66 65 103 73 81 76 112 82 51 108 81 112 100 77 99 73 55 48 100 81 98 120 99 97 77 114 106 65 75 66 103 103 113 104 107 106 79 80 81 81 68 65 106 66 122 77 81 115 119 10 67 81 89 68 86 81 81 71 69 119 74 86 85 122 69 84 77 66 69 71 65 49 85 69 67 66 77 75 81 50 70 115 97 87 90 118 99 109 53 112 89 84 69 87 77 66 81 71 65 49 85 69 66 120 77 78 85 50 70 117 73 69 90 121 10 89 87 53 106 97 88 78 106 98 122 69 90 77 66 99 71 65 49 85 69 67 104 77 81 98 51 74 110 77 83 53 108 101 71 70 116 99 71 120 108 76 109 78 118 98 84 69 99 77 66 111 71 65 49 85 69 65 120 77 84 89 50 69 117 10 98 51 74 110 77 83 53 108 101 71 70 116 99 71 120 108 76 109 78 118 98 84 65 101 70 119 48 120 78 122 65 50 77 68 103 121 77 68 77 48 77 68 100 97 70 119 48 121 78 122 65 50 77 68 89 121 77 68 77 48 77 68 100 97 10 77 70 115 120 67 122 65 74 66 103 78 86 66 65 89 84 65 108 86 84 77 82 77 119 69 81 89 68 86 81 81 73 69 119 112 68 89 87 120 112 90 109 57 121 98 109 108 104 77 82 89 119 70 65 89 68 86 81 81 72 69 119 49 84 10 89 87 52 103 82 110 74 104 98 109 78 112 99 50 78 118 77 82 56 119 72 81 89 68 86 81 81 68 69 120 90 119 90 87 86 121 77 67 53 118 99 109 99 120 76 109 86 52 89 87 49 119 98 71 85 117 89 50 57 116 77 70 107 119 10 69 119 89 72 75 111 90 73 122 106 48 67 65 81 89 73 75 111 90 73 122 106 48 68 65 81 99 68 81 103 65 69 112 79 88 118 52 116 106 115 116 113 86 116 78 108 43 69 106 112 73 86 108 89 103 49 50 98 111 66 73 85 105 104 10 73 53 78 75 68 53 54 112 85 81 69 57 68 122 112 97 66 113 74 52 67 73 104 77 112 76 53 105 116 97 50 114 65 110 104 106 54 108 83 78 100 76 106 47 87 120 107 50 86 71 120 103 87 75 79 66 106 84 67 66 105 106 65 79 10 66 103 78 86 72 81 56 66 65 102 56 69 66 65 77 67 66 97 65 119 69 119 89 68 86 82 48 108 66 65 119 
119 67 103 89 73 75 119 89 66 66 81 85 72 65 119 69 119 68 65 89 68 86 82 48 84 65 81 72 47 66 65 73 119 10 65 68 65 114 66 103 78 86 72 83 77 69 74 68 65 105 103 67 67 97 118 107 78 120 67 120 71 89 87 68 102 105 110 86 105 103 53 53 84 76 111 107 83 53 49 90 57 105 56 105 77 108 97 100 77 90 73 119 57 105 111 68 65 111 10 66 103 78 86 72 82 69 69 73 84 65 102 103 104 90 119 90 87 86 121 77 67 53 118 99 109 99 120 76 109 86 52 89 87 49 119 98 71 85 117 89 50 57 116 103 103 86 119 90 87 86 121 77 68 65 75 66 103 103 113 104 107 106 79 10 80 81 81 68 65 103 78 72 65 68 66 69 65 105 65 118 101 86 80 54 98 48 108 72 81 116 76 48 113 67 112 47 100 122 69 104 56 81 47 120 74 105 106 51 110 54 103 71 73 76 47 78 69 48 82 69 49 81 73 103 74 79 107 80 10 66 111 109 121 82 106 54 54 74 106 74 43 85 47 122 110 107 101 101 85 78 68 102 85 117 53 97 51 102 67 117 53 76 118 51 54 121 83 89 61 10 45 45 45 45 45 69 78 68 32 67 69 82 84 73 70 73 67 65 84 69 45 45 45 45 45 10]) does not satisfy principal: This identity is not an admin
2017-06-15 20:25:44.580 UTC [cauthdsl] func2 -> DEBU 1a18 Principal evaluation fails: (&{0}) [false]
2017-06-15 20:25:44.580 UTC [cauthdsl] func1 -> DEBU 1a19 Gate evaluation fails: (&{n:1 rules:
Asara (Thu, 15 Jun 2017 20:28:35 GMT):
I am using the Peer admin's key in order to run this...
jyellick (Thu, 15 Jun 2017 20:29:36 GMT):
@Asara My suspicion is that you are not using the correct admin cert/key
jyellick (Thu, 15 Jun 2017 20:29:43 GMT):
You may have an admin cert in your peer's local MSP dir
jyellick (Thu, 15 Jun 2017 20:29:52 GMT):
But this must be the cert in the MSP definition which was encoded for the channel
jyellick (Thu, 15 Jun 2017 20:30:06 GMT):
I would download the channel configuration and confirm that the admin cert present is the same one you expect.
jeffgarratt (Thu, 15 Jun 2017 20:33:06 GMT):
@Asara the behave feature makes a clear distinction between the peerAdmin role vs the configAdmin. The former is a localMSP concept, while the latter is relative to the configuration administration, and thus encoded as such in the MSPConfig that resides in the latest config_update for the channel (as @jyellick referred to).
Asara (Thu, 15 Jun 2017 20:43:39 GMT):
@jeffgarratt @jyellick Aye... Was totally pulling the keys from the wrong place.
Asara (Thu, 15 Jun 2017 20:43:42 GMT):
Thanks a lot :)
jeffgarratt (Thu, 15 Jun 2017 20:44:01 GMT):
@Asara awesome!!!
caoyu (Fri, 16 Jun 2017 03:11:37 GMT):
Has joined the channel.
DannyWong (Fri, 16 Jun 2017 06:17:43 GMT):
@kostas Can you illustrate more... So is each organization (within the business network) going to deploy its own ordering service node and configure them as SBFT?
DannyWong (Fri, 16 Jun 2017 06:17:54 GMT):
Right now the orderer is deployed in SINGLE organization
paapighoda (Fri, 16 Jun 2017 06:44:11 GMT):
Has left the channel.
kostas (Fri, 16 Jun 2017 13:15:18 GMT):
@DannyWong: Yes and yes.
Glen (Fri, 16 Jun 2017 13:26:16 GMT):
Has anybody configured SBFT to run?
kostas (Fri, 16 Jun 2017 14:08:04 GMT):
If SBFT was ready, we would have included it. It's not ready yet.
DannyWong (Sat, 17 Jun 2017 03:58:40 GMT):
Yeah, I knew it is under development...
DannyWong (Sat, 17 Jun 2017 03:58:46 GMT):
just that, I can't wait for it...
Willson (Sun, 18 Jun 2017 15:33:53 GMT):
hello everybody, when I start the orderer with the Kafka type, I get an error like this:
`orderer.example.com | 2017-06-18 15:30:19.901 UTC [orderer/common/deliver] Handle -> WARN 7e8 Rejecting deliver request because of consenter error`
what's wrong with that?
Willson (Sun, 18 Jun 2017 15:51:41 GMT):
I think this error occurred after `[FAB-4457] Add errorChan to Kafka-based consenter`
kostas (Sun, 18 Jun 2017 17:38:52 GMT):
@Willson: I don't have nearly enough info to help you figure out what's going on there. I don't know your setup, I don't know what you're trying to do and when this line occurred, and I don't have logs at the debug level to have a precise picture of what's going on.
kostas (Sun, 18 Jun 2017 17:39:43 GMT):
If you think that this is a bug, please open up a JIRA filed under the "fabric-orderer" component, and assign it to me. Provide the info requested above.
sfukazu (Mon, 19 Jun 2017 06:34:22 GMT):
Has joined the channel.
roj (Mon, 19 Jun 2017 10:43:52 GMT):
Has joined the channel.
RichardGreen (Mon, 19 Jun 2017 13:56:19 GMT):
Has joined the channel.
Glen (Mon, 19 Jun 2017 14:34:14 GMT):
@kostas Hi, when I configure SBFT, the orderer exits as a result of a crash, as follows:
Glen (Mon, 19 Jun 2017 14:34:17 GMT):
2017-06-19 14:24:35.173 UTC [msp/identity] Sign -> DEBU 7f1 Sign: digest: BCBF94D3FA7E4F7970F5479E364658A79A76D7A03F299555EFA81F7CCA2C506C
2017-06-19 14:24:35.174 UTC [fsblkstorage] indexBlock -> DEBU 7f2 Indexing block [blockNum=1, blockHash=[]byte{0xc3, 0x8a, 0x1f, 0x61, 0xd0, 0x65, 0x43, 0xd6, 0x69, 0x67, 0x4b, 0x3, 0xa0, 0x84, 0x3f, 0x3b, 0xa2, 0xf2, 0x52, 0xe2, 0xe2, 0xfd, 0x91, 0x9f, 0xb5, 0xe5, 0x7e, 0xb3, 0x26, 0x92, 0x1a, 0xba} txOffsets=
txId= locPointer=offset=70, bytesLength=10562
]
panic: runtime error: index out of range
Glen (Mon, 19 Jun 2017 14:35:04 GMT):
I'm checking the source code, but I hope somebody has already fixed this issue.
kostas (Mon, 19 Jun 2017 14:58:51 GMT):
@Glen: As I wrote above -- https://chat.hyperledger.org/channel/fabric-consensus?msg=43QGXG6SKQ4bXrcKn
Glen (Mon, 19 Jun 2017 15:00:28 GMT):
Yes, we are waiting for it. I'm also studying it; if I can configure it to run, that's better, even with the N=1/F=0 model.
pschnap (Mon, 19 Jun 2017 19:38:56 GMT):
Has joined the channel.
Kathyx (Mon, 19 Jun 2017 20:01:42 GMT):
Has joined the channel.
dongqi (Tue, 20 Jun 2017 08:35:09 GMT):
Has joined the channel.
jrosmith (Tue, 20 Jun 2017 14:09:46 GMT):
hey everyone, having trouble instantiating chaincode. I only have one org in my network and have been able to create a channel and install the chaincode successfully, but am getting the following error:
```
2017-06-20T14:06:38.335Z - debug: Getting client for org: org1
2017-06-20T14:06:38.337Z - debug: Msp ID for org1: Org1MSP
error: [client-utils.js]: sendPeersProposal - Promise is rejected: Error: Failed to deserialize creator identity, err MSP Org1MSP is unknown
at /Users/jrosmith/Documents/Projects/fabric10/node_modules/fabric-client/node_modules/grpc/src/node/src/client.js:434:17
2017-06-20T14:06:38.481Z - error: Instantiate proposal was bad
2017-06-20T14:06:38.482Z - error: Failed to send instantiate Proposal or receive valid response. Response null or status is not 200. exiting...
2017-06-20T14:06:38.482Z - error: Failed to order the transaction. Error code: undefined
```
i'm confused as to why Org1MSP is unknown when `getMspID` was able to find it properly
jrosmith (Tue, 20 Jun 2017 14:11:30 GMT):
also, this is using the example code for balance_transfer
jrosmith (Tue, 20 Jun 2017 14:19:51 GMT):
on the peer I have:
```
2017-06-20 14:15:45.046 UTC [endorser] ProcessProposal -> DEBU 2d3 Entry
2017-06-20 14:15:45.047 UTC [protoutils] ValidateProposalMessage -> DEBU 2d4 ValidateProposalMessage starts for signed proposal 0xc42201dad0
2017-06-20 14:15:45.047 UTC [protoutils] validateChannelHeader -> DEBU 2d5 validateChannelHeader info: header type 3
2017-06-20 14:15:45.047 UTC [protoutils] checkSignatureFromCreator -> DEBU 2d6 checkSignatureFromCreator starts
2017-06-20 14:15:45.047 UTC [endorser] ProcessProposal -> DEBU 2d7 Exit
```
jrosmith (Tue, 20 Jun 2017 14:21:28 GMT):
on the orderer:
```
2017-06-20 14:15:44.971 UTC [msp/identity] newIdentity -> DEBU 267d Creating identity instance for ID &{Org1MSP 69fef3c8f0f7b806bcf31edb040a8de663b5e886ab8a225c2a2f1f90e257fb84}
2017-06-20 14:15:44.971 UTC [msp] SatisfiesPrincipal -> DEBU 267e Checking if identity satisfies MEMBER role for Org1MSP
2017-06-20 14:15:44.971 UTC [msp] Validate -> DEBU 267f MSP Org1MSP validating identity
2017-06-20 14:15:44.971 UTC [cauthdsl] func2 -> DEBU 2680 Principal matched by identity: (&{0}) for [removedTheseBytes]
2017-06-20 14:15:44.972 UTC [cauthdsl] func2 -> DEBU 2683 Principal evaluation succeeds: (&{0}) (used [false])
2017-06-20 14:15:44.972 UTC [cauthdsl] func1 -> DEBU 2684 Gate evaluation succeeds: (&{n:1 rules:
jyellick (Tue, 20 Jun 2017 14:36:20 GMT):
@jrosmith How did you create the channel? That error message is basically indicating that "The creator claims to be from Org1MSP, but Org1MSP isn't on this channel, so I can't validate the signature"
jyellick (Tue, 20 Jun 2017 14:36:20 GMT):
@jrosmith How did you create the channel? That error message (" Failed to deserialize creator identity, err MSP Org1MSP is unknown") is basically indicating that "The creator claims to be from Org1MSP, but Org1MSP isn't on this channel, so I can't validate the signature"
jrosmith (Tue, 20 Jun 2017 14:39:53 GMT):
@jyellick using the create channel function from balance_transfer, creating it with the admin from org1
jyellick (Tue, 20 Jun 2017 14:40:11 GMT):
And then you joined the peer to this channel?
jrosmith (Tue, 20 Jun 2017 14:41:02 GMT):
...well, that would explain that
jrosmith (Tue, 20 Jun 2017 14:41:22 GMT):
@jyellick thank you, going to do that now
LoveshHarchandani (Tue, 20 Jun 2017 15:11:03 GMT):
Hello again, I have another question regarding the three-phase commit used in protocols like PBFT. For the prepared (1st and 2nd phase) quorum, they require at least 1 PRE-PREPARE and 2f PREPAREs, but is it OK to consider the request prepared if it had no PRE-PREPARE but more than 2f PREPAREs?
jrosmith (Tue, 20 Jun 2017 15:50:45 GMT):
@jyellick good news, progress is being made. i received the following error while trying to join the channel:
```
error: [client-utils.js]: sendPeersProposal - Promise is rejected: Error: chaincode error (status: 500, message: Cannot create ledger from genesis block, due to LedgerID already exists)
at /Users/jrosmith/Documents/Projects/fabric10/node_modules/fabric-client/node_modules/grpc/src/node/src/client.js:434:17
E0620 11:45:44.521930000 140735979979712 ssl_transport_security.c:439] SSL_read returned 0 unexpectedly.
E0620 11:45:44.522584000 140735979979712 secure_endpoint.c:185] Decryption error: TSI_INTERNAL_ERROR
```
is this because i installed the chaincode before joining the channel?
kostas (Tue, 20 Jun 2017 16:01:36 GMT):
@LoveshHarchandani: Don't think this is the right venue for these questions.
jyellick (Tue, 20 Jun 2017 16:03:27 GMT):
@jrosmith Sounds to me like perhaps you did not remove an old ledger when you started the peer?
jrosmith (Tue, 20 Jun 2017 16:04:29 GMT):
maybe, i'll confer with @Asara. thank you again
jrosmith (Tue, 20 Jun 2017 16:05:59 GMT):
@jyellick how would we go about removing ledgers from peers?
Asara (Tue, 20 Jun 2017 16:06:14 GMT):
@jyellick what is the difference between a ledger and a channel?
jyellick (Tue, 20 Jun 2017 16:07:43 GMT):
`rm -Rf /var/hyperledger/*` is a pretty effective way of destroying any existing environment. Of course, such a command should be used with care, as it will delete all of your data. But, if you have re-bootstrapped your ordering service, you should make sure that there is no existing ledger stored on the peer from the old ordering service.
jyellick (Tue, 20 Jun 2017 16:09:13 GMT):
@Asara I tend to be pretty fast and loose with the word "ledger". When I use it, I generally mean the on disk resources which store the blocks. A channel is more of a collection of resources, including the ledger.
LoveshHarchandani (Tue, 20 Jun 2017 16:09:55 GMT):
[ ](https://chat.hyperledger.org/channel/fabric-consensus?msg=J3bFgqtDfweZJZ3oJ) @kostas Ok
Asara (Tue, 20 Jun 2017 17:24:24 GMT):
@jyellick and that would be run on the docker instances running all the peers and orderer?
jyellick (Tue, 20 Jun 2017 17:27:59 GMT):
@Asara if it is a docker instance, I would recommend simply spinning up a new peer container (cleaning up any shared volumes first)
Asara (Tue, 20 Jun 2017 17:29:24 GMT):
@jyellick Which would also mean dumping the data in couchdb correct?
jyellick (Tue, 20 Jun 2017 17:38:14 GMT):
@Asara correct, yes. The error you mentioned indicates to me that the database has been previously used for a different test scenario, so it is having trouble recreating the artifacts
Asara (Tue, 20 Jun 2017 17:39:05 GMT):
Alright, could you just explain exactly when the LedgerID is created?
Asara (Tue, 20 Jun 2017 17:39:12 GMT):
A little confused as to the order of events sorry
jyellick (Tue, 20 Jun 2017 17:46:01 GMT):
So, my suspicion is that you:
1. Spun up an orderer instance
2. Created a channel and joined your peer to it.
3. Realized there was something wrong with the channel
4. Destroyed and spun up a new orderer instance
5. Created that same channel, with a different configuration
6. Attempted to join that same peer to that channel
And the peer is telling you, that it already has a channel by that name
jyellick (Tue, 20 Jun 2017 17:46:48 GMT):
(Or, if you are on an old enough level of code, it could be a ledger id collision if you named your channels in a very unlucky way)
jyellick (Tue, 20 Jun 2017 17:51:42 GMT):
The `LedgerID` is not necessarily exactly the channel name. In particular, CouchDB does not support `.` in database names, so, a channel with name `foo.bar.com` would be translated to a LedgerID of `foo_bar_com` in the couchdb world.
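The translation jyellick describes can be sketched as follows. This is illustrative only; `to_couch_ledger_id` is a made-up name and Fabric's actual code handles more cases.

```python
# Illustrative sketch of the channel-name -> CouchDB LedgerID mapping described
# above; `to_couch_ledger_id` is made up, real Fabric code is more involved.
def to_couch_ledger_id(channel_name):
    # CouchDB database names do not allow '.', so it is replaced with '_'.
    return channel_name.replace(".", "_")

# e.g. channel "foo.bar.com" becomes LedgerID "foo_bar_com"
```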
Asara (Tue, 20 Jun 2017 18:27:53 GMT):
Thanks for the info @jyellick
jrosmith (Tue, 20 Jun 2017 18:52:50 GMT):
@jyellick everything working through instantiating the chaincode, you were right! thank you so much!
jyellick (Tue, 20 Jun 2017 18:53:06 GMT):
Happy to help!
scottz (Tue, 20 Jun 2017 19:24:09 GMT):
@kostas (1) It works best if configtx_orderer_kafka_brokers lists ALL the KBs in the cluster, correct? (2) Does every orderer connect to the same KB? (3) Is this referred to as the kafka broker leader? (4) How do the orderers do that? I think they try connecting to each KB in the list, so... is the KB Leader the only one that replies and establishes connection?
kostas (Tue, 20 Jun 2017 19:33:59 GMT):
@scottz:
1. As the doc states:
> contains the address of at least two of the Kafka brokers in your cluster in IP:port notation. The list does not need to be exhaustive. (These are your seed brokers.)
Source: https://github.com/hyperledger/fabric/blob/master/docs/source/kafka.rst
2. No, they connect to a KB randomly from that list, do a Metadata request, and in the MetadataResponse they get back a list of the topics/channels hosted on the cluster along with the leader/follower replicas for each.
kostas (Tue, 20 Jun 2017 19:34:33 GMT):
There are three different roles for Kafka brokers:
kostas (Tue, 20 Jun 2017 19:34:45 GMT):
a. cluster controller (there can only be one per cluster)
kostas (Tue, 20 Jun 2017 19:34:59 GMT):
b. partition leader (one per partition)
kostas (Tue, 20 Jun 2017 19:35:13 GMT):
c. partition followers (usually/ideally many per partition)
kostas (Tue, 20 Jun 2017 19:36:27 GMT):
The cluster controller does the leader/follower partition assignments and disseminates that information via metadata messages to the other brokers.
kostas (Tue, 20 Jun 2017 19:36:54 GMT):
So you can reach out to any broker and they'll be able to relay that information to you, so you can target the right leader.
kostas (Tue, 20 Jun 2017 19:37:00 GMT):
You == a client (producer/consumer).
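The bootstrap flow described above can be modeled as a toy sketch. All broker addresses and names here are made up; a real client would issue the Kafka protocol's Metadata request over the wire.

```python
import random

# Toy model of the flow above: connect to any seed broker, fetch cluster
# metadata, then target the partition leader. All names here are made up.
SEED_BROKERS = ["kafka0:9092", "kafka3:9092"]  # list need not be exhaustive

# What any broker returns in a MetadataResponse: leader/followers per channel.
METADATA = {"mychannel": {"leader": "kafka1:9092",
                          "followers": ["kafka2:9092", "kafka3:9092"]}}

def leader_for(channel):
    seed = random.choice(SEED_BROKERS)  # any seed broker can relay metadata
    response = METADATA                 # stand-in for a Metadata round-trip to `seed`
    return response[channel]["leader"]
```

The point of the sketch: because the controller disseminates assignments to every broker, the client's seed list only has to reach *some* live broker, after which produce/fetch requests go to the partition leader.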
scottz (Tue, 20 Jun 2017 21:45:14 GMT):
OK.
scottz (Tue, 20 Jun 2017 21:45:21 GMT):
@kostas Wondering... will you be in B500 office tomorrow (Wed)? We would like to invite you to visit us, if possible, for a Q&A discussion of scenarios to confirm our expectations based on configurations and latest code versions.
bh4rtp (Wed, 21 Jun 2017 03:53:50 GMT):
it seems the latest fabric-kafka cannot run.
error reported by cli when creating a channel: `Error: Got unexpected status: SERVICE_UNAVAILABLE`
warn log in orderer: ```2017-06-21 11:43:52.691 CST [orderer/kafka] Enqueue -> DEBU 48f [channel: testchainid] Enqueueing envelope...
2017-06-21 11:43:52.693 CST [orderer/kafka] Enqueue -> WARN 490 [channel: testchainid] Will not enqueue, consenter for this channel hasn't started yet
2017-06-21 11:43:52.701 CST [orderer/common/deliver] Handle -> WARN 492 Error reading from stream: rpc error: code = Canceled desc = context canceled```
jyellick (Wed, 21 Jun 2017 03:54:45 GMT):
@bh4rtp Please wait a moment, this error is returned while the Kafka resources are being allocated
bh4rtp (Wed, 21 Jun 2017 03:55:01 GMT):
ok. thanks.
kostas (Wed, 21 Jun 2017 05:36:43 GMT):
@scottz: I'll be there. Let's capture the Q&A and post it in a doc either here or in #fabric-quality for the community to be aware as well.
magg (Wed, 21 Jun 2017 09:54:11 GMT):
Is there any docker-compose example with a Kafka-based ordering service?
pmontagn (Wed, 21 Jun 2017 12:21:45 GMT):
Has joined the channel.
jeffgarratt (Wed, 21 Jun 2017 13:29:44 GMT):
@magg you can run the kafka option for the BDD bootstrap.feature
scottz (Wed, 21 Jun 2017 13:33:52 GMT):
[ ](https://chat.hyperledger.org/channel/fabric-consensus?msg=oMKmDCv9fwxip8ynq) @kostas ok. And, I should have mentioned, afternoon - since I, and at least one other, will be out of office in morning. see you then.
bh4rtp (Wed, 21 Jun 2017 13:52:08 GMT):
@magg test/feature/docker-compose/docker-compose-kafka.yml
magg (Wed, 21 Jun 2017 13:52:44 GMT):
ok thanks
kostas (Wed, 21 Jun 2017 15:38:09 GMT):
@magg: The path given to you above (test/feature/...) is incorrect -- see here for your question: https://github.com/hyperledger/fabric/blob/master/docs/source/kafka.rst#example (these are the files that Jeff's `bootstrap.feature` uses)
toddinpal (Wed, 21 Jun 2017 17:25:18 GMT):
Can multiple ordering services share a single Kafka cluster?
kostas (Wed, 21 Jun 2017 17:28:18 GMT):
No, at least the way things stand today. There would be a namespace collision between channel `foo` of ordering service A and channel `foo` of ordering service B.
kostas (Wed, 21 Jun 2017 17:29:04 GMT):
You _could_ make it so that co-existing would be possible in a relatively trivial manner, but this isn't in my TODO list ATM.
toddinpal (Wed, 21 Jun 2017 17:29:09 GMT):
So the topics utilized aren't unique based upon the ordering service
kostas (Wed, 21 Jun 2017 17:29:21 GMT):
Correct.
toddinpal (Wed, 21 Jun 2017 17:29:27 GMT):
ok, thanks...
kostas (Wed, 21 Jun 2017 17:41:02 GMT):
@toddinpal: I'd also note that multiple ordering services sharing a single Kafka cluster doesn't make sense. Just have 2, 3, 4, however many consortiums you want defined in the ordering system chain and you're good to go.
toddinpal (Wed, 21 Jun 2017 17:42:07 GMT):
@kostas right, I understand. just trying to figure out the dependencies and potential deployment models
kostas (Wed, 21 Jun 2017 17:43:15 GMT):
Gotcha. Just wanted to clarify that the consortium concept is your way into multi-tenancy for the ordering service.
jmcnevin (Wed, 21 Jun 2017 18:26:56 GMT):
Has joined the channel.
rcyrus (Wed, 21 Jun 2017 19:13:11 GMT):
Has joined the channel.
bh4rtp (Thu, 22 Jun 2017 02:39:20 GMT):
what is the use of the `testchainid` blockchain for the orderer? it is created after start-up.
bh4rtp (Thu, 22 Jun 2017 02:41:49 GMT):
the chain block is saved in `var/hyperledger/production/orderer/chains/testchainid`
jyellick (Thu, 22 Jun 2017 02:45:39 GMT):
@bh4rtp `testchainid` is the default name of the ordering system channel
jyellick (Thu, 22 Jun 2017 02:45:54 GMT):
You may pick a different name for this channel while bootstrapping using `configtxgen`
jyellick (Thu, 22 Jun 2017 02:46:20 GMT):
(In fact, it is recommended that you pick a globally unique name for all your channels, including the ordering system channel)
bh4rtp (Thu, 22 Jun 2017 02:57:02 GMT):
@jyellick `$CONFIGTXGEN -profile TwoOrgsOrdererGenesis -channelID myordererch1 -outputBlock ./channel-artifacts/genesis.block`. is this correct?
jyellick (Thu, 22 Jun 2017 03:05:09 GMT):
Correct, you may use this `genesis.block` to bootstrap your orderer with `myordererch1` as the ordering system channel id
tennenjl (Thu, 22 Jun 2017 03:28:25 GMT):
Has joined the channel.
bh4rtp (Thu, 22 Jun 2017 03:36:34 GMT):
@jyellick I have just tried; creating the channel failed.
```2017-06-22 11:31:37.581 CST [grpc] Printf -> DEBU 006 grpc: addrConn.resetTransport failed to create client transport: connection error: desc = "transport: Error while dialing dial tcp: lookup orderer.example.com on 127.0.0.11:53: no such host"; Reconnecting to {orderer.example.com:7050
bh4rtp (Thu, 22 Jun 2017 03:37:33 GMT):
running with `$CONFIGTXGEN -profile TwoOrgsOrdererGenesis -outputBlock ./channel-artifacts/genesis.block` is ok.
bh4rtp (Thu, 22 Jun 2017 03:41:59 GMT):
Further details: the channel id must not contain the '_' character. I named the orderer channel "orderer_ch1" and it didn't work; changing it to "myordererch1" made it work.
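bh4rtp's finding can be expressed as a quick validity check. This is a sketch of the restriction observed here, not Fabric's actual validation code; the exact rule set may differ.

```python
import re

# Sketch of the channel-ID restriction observed above (not Fabric's actual
# validator): lowercase letters, digits, '.' and '-', starting with a letter.
def looks_like_valid_channel_id(name):
    return re.fullmatch(r"[a-z][a-z0-9.-]*", name) is not None

# "orderer_ch1" is rejected (underscore); "myordererch1" is accepted.
```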
dataharvest (Thu, 22 Jun 2017 15:58:50 GMT):
Has joined the channel.
11:48:07 GMT): chenxuan (Tue, 04 Jul 2017 11:48:09 GMT): mavericklam (Tue, 04 Jul 2017 12:20:55 GMT): chenxuan (Tue, 04 Jul 2017 12:47:35 GMT): chenxuan (Tue, 04 Jul 2017 12:47:38 GMT): chenxuan (Tue, 04 Jul 2017 12:48:34 GMT): chenxuan (Tue, 04 Jul 2017 12:50:38 GMT): jyellick (Wed, 05 Jul 2017 14:00:53 GMT): jyellick (Wed, 05 Jul 2017 14:00:53 GMT): jyellick (Wed, 05 Jul 2017 14:02:45 GMT): jyellick (Wed, 05 Jul 2017 14:04:18 GMT): jyellick (Wed, 05 Jul 2017 14:10:09 GMT): bh4rtp (Thu, 06 Jul 2017 00:37:35 GMT): bh4rtp (Thu, 06 Jul 2017 00:37:35 GMT): jyellick (Thu, 06 Jul 2017 02:15:13 GMT): bh4rtp (Thu, 06 Jul 2017 02:17:46 GMT): jyellick (Thu, 06 Jul 2017 02:18:20 GMT): bh4rtp (Thu, 06 Jul 2017 03:00:21 GMT): bh4rtp (Thu, 06 Jul 2017 03:00:21 GMT): bh4rtp (Thu, 06 Jul 2017 03:00:21 GMT): bh4rtp (Thu, 06 Jul 2017 03:39:04 GMT): bh4rtp (Thu, 06 Jul 2017 03:40:25 GMT): bh4rtp (Thu, 06 Jul 2017 03:40:25 GMT): Glen (Thu, 06 Jul 2017 06:41:48 GMT): jyellick (Thu, 06 Jul 2017 13:56:51 GMT): jyellick (Thu, 06 Jul 2017 13:58:50 GMT): Glen (Thu, 06 Jul 2017 15:13:19 GMT): Glen (Thu, 06 Jul 2017 15:13:19 GMT): jyellick (Thu, 06 Jul 2017 15:14:28 GMT): Glen (Thu, 06 Jul 2017 15:15:15 GMT): Glen (Thu, 06 Jul 2017 15:15:33 GMT): Glen (Thu, 06 Jul 2017 15:16:00 GMT): Glen (Thu, 06 Jul 2017 15:17:20 GMT): jyellick (Thu, 06 Jul 2017 15:19:05 GMT): Glen (Thu, 06 Jul 2017 15:19:13 GMT): jyellick (Thu, 06 Jul 2017 15:19:25 GMT): Glen (Thu, 06 Jul 2017 15:19:34 GMT): jyellick (Thu, 06 Jul 2017 15:20:10 GMT): Glen (Thu, 06 Jul 2017 15:21:30 GMT): Glen (Thu, 06 Jul 2017 15:21:49 GMT): Glen (Thu, 06 Jul 2017 15:50:11 GMT): jyellick (Thu, 06 Jul 2017 15:51:28 GMT): jyellick (Thu, 06 Jul 2017 15:51:28 GMT): Glen (Thu, 06 Jul 2017 15:53:39 GMT): Glen (Thu, 06 Jul 2017 15:56:21 GMT): Glen (Thu, 06 Jul 2017 15:56:57 GMT): Glen (Thu, 06 Jul 2017 15:57:32 GMT): Glen (Thu, 06 Jul 2017 16:01:18 GMT): Glen (Thu, 06 Jul 2017 16:03:01 GMT): Glen (Thu, 06 Jul 2017 16:03:16 GMT): jyellick (Thu, 06 
Jul 2017 16:06:32 GMT): tkuhrt (Fri, 07 Jul 2017 00:14:09 GMT): gauthampamu (Fri, 07 Jul 2017 03:06:48 GMT): gauthampamu (Fri, 07 Jul 2017 03:06:51 GMT): jyellick (Fri, 07 Jul 2017 04:48:49 GMT): Rachitga (Fri, 07 Jul 2017 04:50:23 GMT): Rachitga (Fri, 07 Jul 2017 04:52:59 GMT): jyellick (Fri, 07 Jul 2017 04:55:33 GMT): jyellick (Fri, 07 Jul 2017 04:55:45 GMT): Rachitga (Fri, 07 Jul 2017 05:01:58 GMT): jyellick (Fri, 07 Jul 2017 05:03:57 GMT): jyellick (Fri, 07 Jul 2017 05:06:22 GMT): Rachitga (Fri, 07 Jul 2017 05:08:09 GMT): Rachitga (Fri, 07 Jul 2017 05:09:04 GMT): jyellick (Fri, 07 Jul 2017 05:09:16 GMT): jyellick (Fri, 07 Jul 2017 05:11:03 GMT): jyellick (Fri, 07 Jul 2017 05:11:22 GMT): Rachitga (Fri, 07 Jul 2017 05:12:01 GMT): Rachitga (Fri, 07 Jul 2017 05:12:35 GMT): jyellick (Fri, 07 Jul 2017 05:12:52 GMT): jyellick (Fri, 07 Jul 2017 05:12:52 GMT): jyellick (Fri, 07 Jul 2017 05:13:13 GMT): Rachitga (Fri, 07 Jul 2017 05:14:53 GMT): gauthampamu (Fri, 07 Jul 2017 11:11:58 GMT): avesense (Fri, 07 Jul 2017 13:20:47 GMT): jyellick (Fri, 07 Jul 2017 13:31:47 GMT): pschnap (Fri, 07 Jul 2017 14:41:33 GMT): pschnap (Fri, 07 Jul 2017 14:44:34 GMT): pschnap (Fri, 07 Jul 2017 14:44:34 GMT): gauthampamu (Fri, 07 Jul 2017 14:49:29 GMT): gauthampamu (Fri, 07 Jul 2017 14:50:44 GMT): scottz (Fri, 07 Jul 2017 15:56:40 GMT): scottz (Fri, 07 Jul 2017 15:56:40 GMT): jyellick (Fri, 07 Jul 2017 16:11:55 GMT): jyellick (Fri, 07 Jul 2017 16:11:55 GMT): pschnap (Fri, 07 Jul 2017 17:20:26 GMT): gauthampamu (Fri, 07 Jul 2017 17:55:35 GMT): jyellick (Fri, 07 Jul 2017 18:19:39 GMT): gauthampamu (Fri, 07 Jul 2017 21:30:10 GMT): jyellick (Fri, 07 Jul 2017 21:34:24 GMT): jyellick (Fri, 07 Jul 2017 21:35:39 GMT): gauthampamu (Fri, 07 Jul 2017 21:36:05 GMT): gauthampamu (Fri, 07 Jul 2017 21:37:41 GMT): jyellick (Fri, 07 Jul 2017 21:38:52 GMT): eliranbi (Fri, 07 Jul 2017 21:40:06 GMT): gauthampamu (Fri, 07 Jul 2017 21:40:33 GMT): gauthampamu (Fri, 07 Jul 2017 21:41:23 GMT): jyellick (Fri, 07 
Jul 2017 21:41:34 GMT): gauthampamu (Fri, 07 Jul 2017 21:41:42 GMT): jyellick (Fri, 07 Jul 2017 21:43:11 GMT): gauthampamu (Fri, 07 Jul 2017 21:44:42 GMT): gauthampamu (Fri, 07 Jul 2017 21:45:42 GMT): jyellick (Fri, 07 Jul 2017 23:35:38 GMT): jyellick (Fri, 07 Jul 2017 23:36:23 GMT): jyellick (Fri, 07 Jul 2017 23:36:59 GMT): chenxuan (Sun, 09 Jul 2017 03:30:36 GMT): chenxuan (Sun, 09 Jul 2017 03:31:00 GMT): chenxuan (Sun, 09 Jul 2017 03:31:13 GMT): chenxuan (Sun, 09 Jul 2017 03:32:50 GMT): chenxuan (Sun, 09 Jul 2017 03:33:09 GMT): chenxuan (Mon, 10 Jul 2017 03:24:51 GMT): jyellick (Mon, 10 Jul 2017 05:22:33 GMT): chenxuan (Mon, 10 Jul 2017 05:29:46 GMT): chenxuan (Mon, 10 Jul 2017 05:30:01 GMT): chenxuan (Mon, 10 Jul 2017 05:30:08 GMT): chenxuan (Mon, 10 Jul 2017 05:30:25 GMT): chenxuan (Mon, 10 Jul 2017 05:30:45 GMT): chenxuan (Mon, 10 Jul 2017 05:31:23 GMT): chenxuan (Mon, 10 Jul 2017 05:31:26 GMT): chenxuan (Mon, 10 Jul 2017 05:31:57 GMT): chenxuan (Mon, 10 Jul 2017 05:32:11 GMT): chenxuan (Mon, 10 Jul 2017 05:32:21 GMT): chenxuan (Mon, 10 Jul 2017 05:32:23 GMT): chenxuan (Mon, 10 Jul 2017 05:32:49 GMT): chenxuan (Mon, 10 Jul 2017 05:36:52 GMT): chenxuan (Mon, 10 Jul 2017 05:36:58 GMT): chenxuan (Mon, 10 Jul 2017 05:37:27 GMT): chenxuan (Mon, 10 Jul 2017 05:37:33 GMT): chenxuan (Mon, 10 Jul 2017 05:37:55 GMT): chenxuan (Mon, 10 Jul 2017 05:38:21 GMT): chenxuan (Mon, 10 Jul 2017 05:38:24 GMT): jyellick (Mon, 10 Jul 2017 05:44:34 GMT): jyellick (Mon, 10 Jul 2017 06:10:46 GMT): jyellick (Mon, 10 Jul 2017 06:10:46 GMT): chenxuan (Mon, 10 Jul 2017 06:27:16 GMT): chenxuan (Mon, 10 Jul 2017 06:44:53 GMT): chenxuan (Mon, 10 Jul 2017 06:45:29 GMT): chenxuan (Mon, 10 Jul 2017 06:45:37 GMT): chenxuan (Mon, 10 Jul 2017 06:45:47 GMT): chenxuan (Mon, 10 Jul 2017 06:45:57 GMT): chenxuan (Mon, 10 Jul 2017 06:46:00 GMT): chenxuan (Mon, 10 Jul 2017 06:46:07 GMT): chenxuan (Mon, 10 Jul 2017 06:46:17 GMT): chenxuan (Mon, 10 Jul 2017 06:46:17 GMT): yury (Mon, 10 Jul 2017 10:29:58 
GMT): Rachitga (Mon, 10 Jul 2017 11:23:21 GMT): kostas (Mon, 10 Jul 2017 11:37:29 GMT): kostas (Mon, 10 Jul 2017 11:38:07 GMT): kostas (Mon, 10 Jul 2017 11:42:15 GMT): chenxuan (Mon, 10 Jul 2017 11:45:05 GMT): kostas (Mon, 10 Jul 2017 11:45:55 GMT): chenxuan (Mon, 10 Jul 2017 11:49:49 GMT): jmcnevin (Mon, 10 Jul 2017 14:08:19 GMT): jmcnevin (Mon, 10 Jul 2017 14:09:30 GMT): kostas (Mon, 10 Jul 2017 14:24:52 GMT): kostas (Mon, 10 Jul 2017 14:24:52 GMT): jmcnevin (Mon, 10 Jul 2017 14:27:00 GMT): rjones (Mon, 10 Jul 2017 14:29:11 GMT): rjones (Mon, 10 Jul 2017 14:30:30 GMT): rjones (Mon, 10 Jul 2017 14:31:41 GMT): rjones (Mon, 10 Jul 2017 14:31:53 GMT): jyellick (Mon, 10 Jul 2017 14:45:35 GMT): jyellick (Mon, 10 Jul 2017 15:09:52 GMT): jyellick (Mon, 10 Jul 2017 15:17:51 GMT): jyellick (Mon, 10 Jul 2017 15:18:00 GMT): jyellick (Mon, 10 Jul 2017 15:19:42 GMT): PraveenPandu (Mon, 10 Jul 2017 16:13:25 GMT): Rachitga (Mon, 10 Jul 2017 17:08:05 GMT): Rachitga (Mon, 10 Jul 2017 17:18:33 GMT): kostas (Mon, 10 Jul 2017 17:20:06 GMT): kostas (Mon, 10 Jul 2017 17:20:06 GMT): kostas (Mon, 10 Jul 2017 17:20:06 GMT): kostas (Mon, 10 Jul 2017 17:20:06 GMT): kostas (Mon, 10 Jul 2017 17:20:13 GMT): Rachitga (Mon, 10 Jul 2017 17:27:58 GMT): Rachitga (Mon, 10 Jul 2017 17:27:58 GMT): kostas (Mon, 10 Jul 2017 17:29:35 GMT): Rachitga (Mon, 10 Jul 2017 17:34:01 GMT): kostas (Mon, 10 Jul 2017 17:35:07 GMT): Rachitga (Mon, 10 Jul 2017 17:35:09 GMT): kostas (Mon, 10 Jul 2017 17:36:32 GMT): Rachitga (Mon, 10 Jul 2017 17:37:02 GMT): eliranbi (Mon, 10 Jul 2017 17:37:39 GMT): Rachitga (Mon, 10 Jul 2017 17:37:44 GMT): kostas (Mon, 10 Jul 2017 17:38:39 GMT): kostas (Mon, 10 Jul 2017 17:38:39 GMT): kostas (Mon, 10 Jul 2017 17:38:39 GMT): kostas (Mon, 10 Jul 2017 17:38:57 GMT): kostas (Mon, 10 Jul 2017 17:39:11 GMT): kostas (Mon, 10 Jul 2017 17:39:58 GMT): kostas (Mon, 10 Jul 2017 17:39:58 GMT): kostas (Mon, 10 Jul 2017 17:40:35 GMT): kostas (Mon, 10 Jul 2017 17:41:21 GMT): kostas (Mon, 10 Jul 2017 
17:42:25 GMT): Rachitga (Mon, 10 Jul 2017 17:43:14 GMT): Rachitga (Mon, 10 Jul 2017 17:43:14 GMT): Rachitga (Mon, 10 Jul 2017 17:43:14 GMT): kostas (Mon, 10 Jul 2017 17:43:59 GMT): kostas (Mon, 10 Jul 2017 17:43:59 GMT): kostas (Mon, 10 Jul 2017 17:43:59 GMT): eliranbi (Mon, 10 Jul 2017 17:45:37 GMT): Rachitga (Mon, 10 Jul 2017 17:45:39 GMT): kostas (Mon, 10 Jul 2017 17:45:55 GMT): kostas (Mon, 10 Jul 2017 17:46:19 GMT): eliranbi (Mon, 10 Jul 2017 17:46:35 GMT): Rachitga (Mon, 10 Jul 2017 17:47:06 GMT): kostas (Mon, 10 Jul 2017 17:47:10 GMT): Rachitga (Mon, 10 Jul 2017 17:48:31 GMT): kostas (Mon, 10 Jul 2017 17:49:33 GMT): kostas (Mon, 10 Jul 2017 17:50:30 GMT): Rachitga (Mon, 10 Jul 2017 17:50:32 GMT): kostas (Mon, 10 Jul 2017 17:50:40 GMT): kostas (Mon, 10 Jul 2017 17:50:40 GMT): kostas (Mon, 10 Jul 2017 17:51:14 GMT): Rachitga (Mon, 10 Jul 2017 17:54:59 GMT): Rachitga (Mon, 10 Jul 2017 17:54:59 GMT): Rachitga (Mon, 10 Jul 2017 17:54:59 GMT): Rachitga (Mon, 10 Jul 2017 17:59:57 GMT): kostas (Mon, 10 Jul 2017 18:00:10 GMT): kostas (Mon, 10 Jul 2017 18:01:12 GMT): kostas (Mon, 10 Jul 2017 18:01:32 GMT): kostas (Mon, 10 Jul 2017 18:01:53 GMT): kostas (Mon, 10 Jul 2017 18:04:03 GMT): Rachitga (Mon, 10 Jul 2017 18:05:05 GMT): kostas (Mon, 10 Jul 2017 18:09:03 GMT): kostas (Mon, 10 Jul 2017 18:09:35 GMT): kostas (Mon, 10 Jul 2017 18:09:53 GMT): kostas (Mon, 10 Jul 2017 18:10:09 GMT): kostas (Mon, 10 Jul 2017 18:10:30 GMT): kostas (Mon, 10 Jul 2017 18:12:16 GMT): kostas (Mon, 10 Jul 2017 18:12:16 GMT): kostas (Mon, 10 Jul 2017 18:12:16 GMT): kostas (Mon, 10 Jul 2017 18:12:16 GMT): kostas (Mon, 10 Jul 2017 18:12:38 GMT): kostas (Mon, 10 Jul 2017 18:12:38 GMT): kostas (Mon, 10 Jul 2017 18:13:01 GMT): kostas (Mon, 10 Jul 2017 18:13:34 GMT): Rachitga (Mon, 10 Jul 2017 18:17:01 GMT): jimthematrix (Mon, 10 Jul 2017 23:50:02 GMT): jimthematrix (Mon, 10 Jul 2017 23:53:34 GMT): jyellick (Tue, 11 Jul 2017 00:32:27 GMT): jyellick (Tue, 11 Jul 2017 00:36:15 GMT): zhangchao (Tue, 11 
Jul 2017 02:10:15 GMT): jyellick (Tue, 11 Jul 2017 04:10:25 GMT): jyellick (Tue, 11 Jul 2017 04:58:33 GMT): jyellick (Tue, 11 Jul 2017 05:35:48 GMT): jimthematrix (Tue, 11 Jul 2017 11:12:45 GMT): jimthematrix (Tue, 11 Jul 2017 11:13:21 GMT): jimthematrix (Tue, 11 Jul 2017 11:14:40 GMT): jimthematrix (Tue, 11 Jul 2017 11:15:23 GMT): jimthematrix (Tue, 11 Jul 2017 11:17:50 GMT): jimthematrix (Tue, 11 Jul 2017 11:18:55 GMT): jimthematrix (Tue, 11 Jul 2017 12:12:34 GMT): jimthematrix (Tue, 11 Jul 2017 12:16:08 GMT): jyellick (Tue, 11 Jul 2017 12:57:21 GMT): jyellick (Tue, 11 Jul 2017 12:58:29 GMT): jyellick (Tue, 11 Jul 2017 13:57:41 GMT): scottz (Tue, 11 Jul 2017 14:35:21 GMT): scottz (Tue, 11 Jul 2017 14:36:06 GMT): kostas (Tue, 11 Jul 2017 14:36:36 GMT): kostas (Tue, 11 Jul 2017 14:36:38 GMT): kostas (Tue, 11 Jul 2017 14:37:05 GMT): kostas (Tue, 11 Jul 2017 14:37:18 GMT): scottz (Tue, 11 Jul 2017 14:45:31 GMT): scottz (Tue, 11 Jul 2017 14:50:49 GMT): kostas (Tue, 11 Jul 2017 15:19:05 GMT): kostas (Tue, 11 Jul 2017 15:19:35 GMT): kostas (Tue, 11 Jul 2017 15:19:55 GMT): kostas (Tue, 11 Jul 2017 15:20:00 GMT): kostas (Tue, 11 Jul 2017 15:20:14 GMT): scottz (Tue, 11 Jul 2017 15:28:35 GMT): scottz (Tue, 11 Jul 2017 15:31:02 GMT): scottz (Tue, 11 Jul 2017 15:34:15 GMT): kostas (Tue, 11 Jul 2017 18:29:32 GMT): kostas (Tue, 11 Jul 2017 18:29:34 GMT): kostas (Tue, 11 Jul 2017 18:29:49 GMT): kostas (Tue, 11 Jul 2017 18:30:21 GMT): kostas (Tue, 11 Jul 2017 18:30:21 GMT): kostas (Tue, 11 Jul 2017 18:30:21 GMT): jimthematrix (Tue, 11 Jul 2017 18:46:09 GMT): jimthematrix (Tue, 11 Jul 2017 18:46:47 GMT): jimthematrix (Tue, 11 Jul 2017 18:46:47 GMT): jyellick (Tue, 11 Jul 2017 19:03:28 GMT): jimthematrix (Tue, 11 Jul 2017 19:09:34 GMT): jimthematrix (Tue, 11 Jul 2017 19:09:34 GMT): scottz (Tue, 11 Jul 2017 21:34:41 GMT): jyellick (Wed, 12 Jul 2017 02:58:14 GMT): jyellick (Wed, 12 Jul 2017 02:59:05 GMT): Rachitga (Wed, 12 Jul 2017 12:47:01 GMT): kostas (Wed, 12 Jul 2017 12:47:52 
GMT): kostas (Wed, 12 Jul 2017 12:47:54 GMT): kostas (Wed, 12 Jul 2017 12:48:13 GMT): Rachitga (Wed, 12 Jul 2017 12:54:04 GMT): jyellick (Wed, 12 Jul 2017 12:54:19 GMT): jyellick (Wed, 12 Jul 2017 12:54:34 GMT): jyellick (Wed, 12 Jul 2017 12:54:44 GMT): jyellick (Wed, 12 Jul 2017 12:54:44 GMT): Rachitga (Wed, 12 Jul 2017 12:56:03 GMT): jyellick (Wed, 12 Jul 2017 12:56:31 GMT): jyellick (Wed, 12 Jul 2017 12:56:46 GMT): jyellick (Wed, 12 Jul 2017 12:57:06 GMT): jyellick (Wed, 12 Jul 2017 12:57:31 GMT): jyellick (Wed, 12 Jul 2017 12:57:42 GMT): Rachitga (Wed, 12 Jul 2017 12:58:44 GMT): Rachitga (Wed, 12 Jul 2017 12:59:16 GMT): kostas (Wed, 12 Jul 2017 12:59:21 GMT): kostas (Wed, 12 Jul 2017 12:59:46 GMT): jyellick (Wed, 12 Jul 2017 12:59:49 GMT): jyellick (Wed, 12 Jul 2017 13:00:05 GMT): Rachitga (Wed, 12 Jul 2017 13:00:07 GMT): Rachitga (Wed, 12 Jul 2017 13:00:34 GMT): jyellick (Wed, 12 Jul 2017 13:00:48 GMT): jimthematrix (Wed, 12 Jul 2017 14:54:46 GMT): jyellick (Wed, 12 Jul 2017 14:55:11 GMT): jimthematrix (Wed, 12 Jul 2017 14:56:02 GMT): jyellick (Wed, 12 Jul 2017 14:57:01 GMT): jimthematrix (Wed, 12 Jul 2017 14:57:02 GMT): jimthematrix (Wed, 12 Jul 2017 14:57:02 GMT): jyellick (Wed, 12 Jul 2017 14:57:17 GMT): jimthematrix (Wed, 12 Jul 2017 14:57:59 GMT): jyellick (Wed, 12 Jul 2017 15:19:26 GMT): jimthematrix (Wed, 12 Jul 2017 15:46:08 GMT): jimthematrix (Wed, 12 Jul 2017 15:46:54 GMT): jimthematrix (Wed, 12 Jul 2017 15:46:54 GMT): gen_el (Wed, 12 Jul 2017 18:46:45 GMT): SotirisAlfonsos (Thu, 13 Jul 2017 08:34:34 GMT): SotirisAlfonsos (Thu, 13 Jul 2017 08:34:34 GMT): SotirisAlfonsos (Thu, 13 Jul 2017 08:34:34 GMT): SotirisAlfonsos (Thu, 13 Jul 2017 08:34:34 GMT): kostas (Thu, 13 Jul 2017 08:41:52 GMT): kostas (Thu, 13 Jul 2017 08:41:56 GMT): kostas (Thu, 13 Jul 2017 08:43:32 GMT): kostas (Thu, 13 Jul 2017 08:44:18 GMT): SotirisAlfonsos (Thu, 13 Jul 2017 08:48:14 GMT): muralisr (Thu, 13 Jul 2017 12:41:39 GMT): FollowingGhosts (Thu, 13 Jul 2017 12:41:46 GMT): 
FollowingGhosts (Thu, 13 Jul 2017 12:52:59 GMT): jyellick (Thu, 13 Jul 2017 13:21:23 GMT): FollowingGhosts (Thu, 13 Jul 2017 13:23:37 GMT): FollowingGhosts (Thu, 13 Jul 2017 13:38:04 GMT): FollowingGhosts (Thu, 13 Jul 2017 13:40:57 GMT): jyellick (Thu, 13 Jul 2017 14:26:03 GMT): FollowingGhosts (Thu, 13 Jul 2017 14:26:20 GMT): FollowingGhosts (Thu, 13 Jul 2017 14:26:29 GMT): jyellick (Thu, 13 Jul 2017 14:26:48 GMT): thakkarparth007 (Thu, 13 Jul 2017 15:07:58 GMT): thakkarparth007 (Thu, 13 Jul 2017 15:08:54 GMT): kostas (Thu, 13 Jul 2017 15:11:39 GMT): thakkarparth007 (Thu, 13 Jul 2017 15:12:04 GMT): kostas (Thu, 13 Jul 2017 15:12:22 GMT): thakkarparth007 (Thu, 13 Jul 2017 15:12:24 GMT): thakkarparth007 (Thu, 13 Jul 2017 15:13:00 GMT): jyellick (Thu, 13 Jul 2017 15:13:48 GMT): jyellick (Thu, 13 Jul 2017 15:14:02 GMT): jyellick (Thu, 13 Jul 2017 15:14:25 GMT): jyellick (Thu, 13 Jul 2017 15:15:01 GMT): jyellick (Thu, 13 Jul 2017 15:16:03 GMT): jyellick (Thu, 13 Jul 2017 15:16:58 GMT): jyellick (Thu, 13 Jul 2017 15:17:23 GMT): yacovm (Thu, 13 Jul 2017 15:28:50 GMT): yacovm (Thu, 13 Jul 2017 15:28:55 GMT): yacovm (Thu, 13 Jul 2017 15:29:01 GMT): yacovm (Thu, 13 Jul 2017 15:29:07 GMT): thakkarparth007 (Thu, 13 Jul 2017 15:29:12 GMT): yacovm (Thu, 13 Jul 2017 15:29:31 GMT): yacovm (Thu, 13 Jul 2017 15:29:36 GMT): jyellick (Thu, 13 Jul 2017 15:31:32 GMT): jyellick (Thu, 13 Jul 2017 15:31:32 GMT): thakkarparth007 (Thu, 13 Jul 2017 15:32:22 GMT): thakkarparth007 (Thu, 13 Jul 2017 15:32:32 GMT): yacovm (Thu, 13 Jul 2017 15:32:41 GMT): thakkarparth007 (Thu, 13 Jul 2017 15:32:46 GMT): thakkarparth007 (Thu, 13 Jul 2017 15:32:46 GMT): jyellick (Thu, 13 Jul 2017 15:33:41 GMT): Senthil1 (Thu, 13 Jul 2017 15:33:51 GMT): thakkarparth007 (Thu, 13 Jul 2017 15:34:23 GMT): jyellick (Thu, 13 Jul 2017 15:34:23 GMT): jyellick (Thu, 13 Jul 2017 15:34:35 GMT): jyellick (Thu, 13 Jul 2017 15:35:05 GMT): thakkarparth007 (Thu, 13 Jul 2017 15:35:58 GMT): jyellick (Thu, 13 Jul 2017 15:36:04 GMT): 
thakkarparth007 (Thu, 13 Jul 2017 15:36:10 GMT): yacovm (Thu, 13 Jul 2017 15:36:21 GMT): thakkarparth007 (Thu, 13 Jul 2017 15:36:25 GMT): thakkarparth007 (Thu, 13 Jul 2017 15:36:25 GMT): jyellick (Thu, 13 Jul 2017 15:36:48 GMT): yacovm (Thu, 13 Jul 2017 15:36:51 GMT): thakkarparth007 (Thu, 13 Jul 2017 15:36:54 GMT): thakkarparth007 (Thu, 13 Jul 2017 15:37:20 GMT): thakkarparth007 (Thu, 13 Jul 2017 15:38:49 GMT): thakkarparth007 (Thu, 13 Jul 2017 15:38:54 GMT): jyellick (Thu, 13 Jul 2017 15:39:21 GMT): thakkarparth007 (Thu, 13 Jul 2017 15:39:40 GMT): qizhang (Fri, 14 Jul 2017 01:21:51 GMT): Rachitga (Fri, 14 Jul 2017 09:03:59 GMT): Rachitga (Fri, 14 Jul 2017 09:11:40 GMT): Rachitga (Fri, 14 Jul 2017 09:12:01 GMT): Rachitga (Fri, 14 Jul 2017 09:12:13 GMT): kostas (Fri, 14 Jul 2017 12:25:04 GMT): kostas (Fri, 14 Jul 2017 12:25:04 GMT): kostas (Fri, 14 Jul 2017 12:25:04 GMT): kostas (Fri, 14 Jul 2017 12:25:04 GMT): kostas (Fri, 14 Jul 2017 12:26:14 GMT): Rachitga (Fri, 14 Jul 2017 14:26:32 GMT): Rachitga (Sat, 15 Jul 2017 07:49:50 GMT): Rachitga (Sat, 15 Jul 2017 07:50:07 GMT): Rachitga (Sat, 15 Jul 2017 07:58:22 GMT): yacovm (Sat, 15 Jul 2017 11:40:30 GMT): yacovm (Sat, 15 Jul 2017 11:40:57 GMT): Rachitga (Sat, 15 Jul 2017 12:25:44 GMT): yacovm (Sat, 15 Jul 2017 13:56:58 GMT): yacovm (Sat, 15 Jul 2017 13:57:14 GMT): yacovm (Sat, 15 Jul 2017 13:57:21 GMT): yacovm (Sat, 15 Jul 2017 13:57:31 GMT): yacovm (Sat, 15 Jul 2017 13:57:31 GMT): Rachitga (Sat, 15 Jul 2017 14:13:01 GMT): jmar42 (Sun, 16 Jul 2017 03:07:00 GMT): jmar42 (Sun, 16 Jul 2017 03:08:44 GMT): kostas (Sun, 16 Jul 2017 19:05:05 GMT): gauthampamu (Sun, 16 Jul 2017 22:00:29 GMT): gauthampamu (Sun, 16 Jul 2017 22:02:05 GMT): jyellick (Mon, 17 Jul 2017 01:34:12 GMT): jmar42 (Mon, 17 Jul 2017 01:58:58 GMT): jyellick (Mon, 17 Jul 2017 02:52:58 GMT): jyellick (Mon, 17 Jul 2017 02:52:58 GMT): jyellick (Mon, 17 Jul 2017 02:52:58 GMT): jyellick (Mon, 17 Jul 2017 02:52:58 GMT): gauthampamu (Mon, 17 Jul 2017 02:57:56 
GMT): jyellick (Mon, 17 Jul 2017 03:05:37 GMT): jyellick (Mon, 17 Jul 2017 03:06:10 GMT): gauthampamu (Mon, 17 Jul 2017 04:22:47 GMT): gauthampamu (Mon, 17 Jul 2017 04:23:20 GMT): gauthampamu (Mon, 17 Jul 2017 04:25:09 GMT): jyellick (Mon, 17 Jul 2017 04:52:51 GMT): jmar42 (Mon, 17 Jul 2017 07:36:17 GMT): jmar42 (Mon, 17 Jul 2017 07:36:24 GMT): jmar42 (Mon, 17 Jul 2017 07:37:52 GMT): dushyantbehl (Mon, 17 Jul 2017 07:56:54 GMT): swangbj (Mon, 17 Jul 2017 08:17:43 GMT): Rachitga (Mon, 17 Jul 2017 10:54:15 GMT): kostas (Mon, 17 Jul 2017 11:31:20 GMT): kostas (Mon, 17 Jul 2017 12:18:34 GMT): kostas (Mon, 17 Jul 2017 12:20:36 GMT): Rachitga (Mon, 17 Jul 2017 12:53:05 GMT): DennisM330 (Mon, 17 Jul 2017 14:15:15 GMT): jyellick (Mon, 17 Jul 2017 14:22:56 GMT): jyellick (Mon, 17 Jul 2017 14:22:56 GMT): gauthampamu (Mon, 17 Jul 2017 16:04:28 GMT): jyellick (Mon, 17 Jul 2017 16:08:43 GMT): jyellick (Mon, 17 Jul 2017 16:08:43 GMT): gauthampamu (Mon, 17 Jul 2017 16:29:35 GMT): jyellick (Mon, 17 Jul 2017 17:00:20 GMT): gauthampamu (Mon, 17 Jul 2017 17:10:40 GMT): gauthampamu (Mon, 17 Jul 2017 17:12:04 GMT): gauthampamu (Mon, 17 Jul 2017 17:14:26 GMT): jyellick (Mon, 17 Jul 2017 17:26:16 GMT): n91 (Mon, 17 Jul 2017 18:00:44 GMT): n91 (Mon, 17 Jul 2017 18:51:44 GMT): jyellick (Mon, 17 Jul 2017 18:55:57 GMT): jyellick (Mon, 17 Jul 2017 18:55:57 GMT): n91 (Mon, 17 Jul 2017 18:57:43 GMT): n91 (Mon, 17 Jul 2017 18:57:43 GMT): jyellick (Mon, 17 Jul 2017 19:01:23 GMT): n91 (Mon, 17 Jul 2017 19:05:20 GMT): jyellick (Mon, 17 Jul 2017 19:07:03 GMT): n91 (Mon, 17 Jul 2017 19:09:37 GMT): n91 (Mon, 17 Jul 2017 19:10:11 GMT): jyellick (Mon, 17 Jul 2017 19:11:17 GMT): jyellick (Mon, 17 Jul 2017 19:11:17 GMT): n91 (Mon, 17 Jul 2017 19:11:53 GMT): jyellick (Mon, 17 Jul 2017 19:12:00 GMT): jyellick (Mon, 17 Jul 2017 19:13:01 GMT): n91 (Mon, 17 Jul 2017 19:13:08 GMT): jyellick (Mon, 17 Jul 2017 19:13:28 GMT): jyellick (Mon, 17 Jul 2017 19:13:50 GMT): jyellick (Mon, 17 Jul 2017 19:14:15 GMT): 
jyellick (Mon, 17 Jul 2017 19:15:07 GMT): jyellick (Mon, 17 Jul 2017 19:15:50 GMT): n91 (Mon, 17 Jul 2017 19:19:16 GMT): jyellick (Mon, 17 Jul 2017 19:19:54 GMT): gauthampamu (Mon, 17 Jul 2017 19:52:07 GMT): n91 (Mon, 17 Jul 2017 19:56:10 GMT): n91 (Mon, 17 Jul 2017 19:58:31 GMT): Asara (Mon, 17 Jul 2017 20:00:49 GMT): jyellick (Mon, 17 Jul 2017 20:04:40 GMT): jyellick (Mon, 17 Jul 2017 20:05:19 GMT): Asara (Mon, 17 Jul 2017 20:05:28 GMT): Asara (Mon, 17 Jul 2017 20:05:28 GMT): jyellick (Mon, 17 Jul 2017 20:05:34 GMT): Asara (Mon, 17 Jul 2017 20:05:43 GMT): Asara (Mon, 17 Jul 2017 20:06:04 GMT): jyellick (Mon, 17 Jul 2017 20:07:22 GMT): Asara (Mon, 17 Jul 2017 20:08:38 GMT): Asara (Mon, 17 Jul 2017 20:08:54 GMT): jyellick (Mon, 17 Jul 2017 20:10:26 GMT): Asara (Mon, 17 Jul 2017 20:10:39 GMT): jyellick (Mon, 17 Jul 2017 20:10:56 GMT): Asara (Mon, 17 Jul 2017 20:14:39 GMT): Asara (Mon, 17 Jul 2017 20:14:46 GMT): Asara (Mon, 17 Jul 2017 20:15:34 GMT): jyellick (Mon, 17 Jul 2017 20:17:29 GMT): jyellick (Mon, 17 Jul 2017 20:21:23 GMT): Asara (Mon, 17 Jul 2017 20:24:13 GMT): szoghybe (Mon, 17 Jul 2017 21:20:38 GMT): szoghybe (Mon, 17 Jul 2017 21:24:58 GMT): jyellick (Mon, 17 Jul 2017 21:39:38 GMT): jyellick (Mon, 17 Jul 2017 21:39:38 GMT): jeffgarratt (Tue, 18 Jul 2017 01:49:23 GMT): jeffgarratt (Tue, 18 Jul 2017 01:50:15 GMT): jeffgarratt (Tue, 18 Jul 2017 01:51:03 GMT): jeffgarratt (Tue, 18 Jul 2017 01:51:05 GMT): jeffgarratt (Tue, 18 Jul 2017 01:51:15 GMT): jeffgarratt (Tue, 18 Jul 2017 01:51:56 GMT): narayanprusty (Tue, 18 Jul 2017 10:49:40 GMT): vu3mmg (Tue, 18 Jul 2017 10:49:54 GMT): narayanprusty (Tue, 18 Jul 2017 10:50:08 GMT): vu3mmg (Tue, 18 Jul 2017 10:51:06 GMT): vu3mmg (Tue, 18 Jul 2017 10:51:40 GMT): vu3mmg (Tue, 18 Jul 2017 10:51:59 GMT): rohitrocket (Tue, 18 Jul 2017 11:32:36 GMT): kostas (Tue, 18 Jul 2017 11:33:25 GMT): narayanprusty (Tue, 18 Jul 2017 11:41:23 GMT): narayanprusty (Tue, 18 Jul 2017 11:41:23 GMT): kostas (Tue, 18 Jul 2017 11:41:47 GMT): 
narayanprusty (Tue, 18 Jul 2017 11:42:56 GMT): narayanprusty (Tue, 18 Jul 2017 11:43:47 GMT): kostas (Tue, 18 Jul 2017 11:44:24 GMT): narayanprusty (Tue, 18 Jul 2017 11:47:09 GMT): narayanprusty (Tue, 18 Jul 2017 11:47:09 GMT): kostas (Tue, 18 Jul 2017 12:00:10 GMT): kostas (Tue, 18 Jul 2017 12:01:19 GMT): narayanprusty (Tue, 18 Jul 2017 12:04:12 GMT): kostas (Tue, 18 Jul 2017 12:06:35 GMT): kostas (Tue, 18 Jul 2017 12:06:38 GMT): kostas (Tue, 18 Jul 2017 12:06:45 GMT): kostas (Tue, 18 Jul 2017 12:06:52 GMT): kostas (Tue, 18 Jul 2017 12:08:11 GMT): narayanprusty (Tue, 18 Jul 2017 12:08:46 GMT): ascatox (Tue, 18 Jul 2017 12:24:57 GMT): rohitrocket (Tue, 18 Jul 2017 12:41:15 GMT): rohitrocket (Tue, 18 Jul 2017 12:41:19 GMT): rohitrocket (Tue, 18 Jul 2017 12:41:34 GMT): rohitrocket (Tue, 18 Jul 2017 12:42:13 GMT): jeffgarratt (Tue, 18 Jul 2017 12:44:01 GMT): jeffgarratt (Tue, 18 Jul 2017 12:44:01 GMT): rohitrocket (Tue, 18 Jul 2017 12:44:22 GMT): rohitrocket (Tue, 18 Jul 2017 12:44:36 GMT): jeffgarratt (Tue, 18 Jul 2017 12:44:50 GMT): jeffgarratt (Tue, 18 Jul 2017 12:45:09 GMT): jeffgarratt (Tue, 18 Jul 2017 12:45:11 GMT): rohitrocket (Tue, 18 Jul 2017 12:45:36 GMT): rohitrocket (Tue, 18 Jul 2017 12:46:55 GMT): rohitrocket (Tue, 18 Jul 2017 12:47:07 GMT): jyellick (Tue, 18 Jul 2017 13:36:11 GMT): rohitrocket (Tue, 18 Jul 2017 13:36:47 GMT): rohitrocket (Tue, 18 Jul 2017 13:36:58 GMT): jyellick (Tue, 18 Jul 2017 13:38:54 GMT): jyellick (Tue, 18 Jul 2017 13:39:11 GMT): rohitrocket (Tue, 18 Jul 2017 13:41:56 GMT): rohitrocket (Tue, 18 Jul 2017 13:45:56 GMT): sqwerrels (Tue, 18 Jul 2017 14:14:33 GMT): highlander (Tue, 18 Jul 2017 20:22:08 GMT): vu3mmg (Wed, 19 Jul 2017 06:26:31 GMT): yacovm (Wed, 19 Jul 2017 06:26:46 GMT): vu3mmg (Wed, 19 Jul 2017 06:29:02 GMT): rohitrocket (Wed, 19 Jul 2017 06:36:05 GMT): yacovm (Wed, 19 Jul 2017 06:36:41 GMT): yacovm (Wed, 19 Jul 2017 06:36:44 GMT): rohitrocket (Wed, 19 Jul 2017 06:37:45 GMT): rohitrocket (Wed, 19 Jul 2017 06:39:07 
GMT): rohitrocket (Wed, 19 Jul 2017 06:50:33 GMT): yacovm (Wed, 19 Jul 2017 06:51:08 GMT): rohitrocket (Wed, 19 Jul 2017 06:55:33 GMT): yacovm (Wed, 19 Jul 2017 06:56:01 GMT): yacovm (Wed, 19 Jul 2017 06:56:06 GMT): cca88 (Wed, 19 Jul 2017 06:58:30 GMT): cca88 (Wed, 19 Jul 2017 06:59:04 GMT): vu3mmg (Wed, 19 Jul 2017 06:59:21 GMT): vu3mmg (Wed, 19 Jul 2017 06:59:50 GMT): cca88 (Wed, 19 Jul 2017 07:00:14 GMT): cca88 (Wed, 19 Jul 2017 07:00:26 GMT): vu3mmg (Wed, 19 Jul 2017 07:01:13 GMT): cca88 (Wed, 19 Jul 2017 07:01:49 GMT): vu3mmg (Wed, 19 Jul 2017 07:02:15 GMT): vu3mmg (Wed, 19 Jul 2017 07:02:29 GMT): vu3mmg (Wed, 19 Jul 2017 07:02:30 GMT): vu3mmg (Wed, 19 Jul 2017 07:02:38 GMT): vu3mmg (Wed, 19 Jul 2017 07:02:51 GMT): vu3mmg (Wed, 19 Jul 2017 07:02:53 GMT): vu3mmg (Wed, 19 Jul 2017 07:03:14 GMT): cca88 (Wed, 19 Jul 2017 07:04:39 GMT): rohitrocket (Wed, 19 Jul 2017 07:06:20 GMT): yacovm (Wed, 19 Jul 2017 07:06:47 GMT): rohitrocket (Wed, 19 Jul 2017 07:08:17 GMT): yacovm (Wed, 19 Jul 2017 07:08:49 GMT): yacovm (Wed, 19 Jul 2017 07:09:14 GMT): rohitrocket (Wed, 19 Jul 2017 07:10:29 GMT): yacovm (Wed, 19 Jul 2017 07:10:38 GMT): rohitrocket (Wed, 19 Jul 2017 07:11:26 GMT): rohitrocket (Wed, 19 Jul 2017 07:12:01 GMT): rohitrocket (Wed, 19 Jul 2017 07:12:21 GMT): vu3mmg (Wed, 19 Jul 2017 07:14:11 GMT): yacovm (Wed, 19 Jul 2017 07:14:34 GMT): yacovm (Wed, 19 Jul 2017 07:14:43 GMT): yacovm (Wed, 19 Jul 2017 07:14:47 GMT): rohitrocket (Wed, 19 Jul 2017 07:14:57 GMT): rohitrocket (Wed, 19 Jul 2017 07:15:06 GMT): shubhamvrkr (Wed, 19 Jul 2017 07:21:58 GMT): moulali308 (Wed, 19 Jul 2017 08:57:33 GMT): vu3mmg (Wed, 19 Jul 2017 10:08:18 GMT): yacovm (Wed, 19 Jul 2017 10:43:51 GMT): rohitrocket (Wed, 19 Jul 2017 11:01:39 GMT): rohitrocket (Wed, 19 Jul 2017 11:02:11 GMT): yacovm (Wed, 19 Jul 2017 11:02:12 GMT): yacovm (Wed, 19 Jul 2017 11:02:24 GMT): rohitrocket (Wed, 19 Jul 2017 11:02:37 GMT): rohitrocket (Wed, 19 Jul 2017 11:02:43 GMT): rohitrocket (Wed, 19 Jul 2017 11:03:16 
[Message bodies missing from this export: only sender names and timestamps survive for messages posted between Wed, 19 Jul 2017 and Tue, 22 Aug 2017. Frequent participants in this span included jyellick, kostas, yacovm, szoghybe, rohitrocket, passkit, asaningmaxchain, gauthampamu, Asara, and others.]
asaningmaxchain (Tue, 22 Aug 2017 02:33:44 GMT): asaningmaxchain (Tue, 22 Aug 2017 02:33:54 GMT): jyellick (Tue, 22 Aug 2017 02:35:10 GMT): asaningmaxchain (Tue, 22 Aug 2017 02:36:53 GMT): asaningmaxchain (Tue, 22 Aug 2017 02:37:04 GMT): jyellick (Tue, 22 Aug 2017 02:38:04 GMT): asaningmaxchain (Tue, 22 Aug 2017 02:39:39 GMT): asaningmaxchain (Tue, 22 Aug 2017 02:49:42 GMT): asaningmaxchain (Tue, 22 Aug 2017 02:49:43 GMT): asaningmaxchain (Tue, 22 Aug 2017 02:49:44 GMT): asaningmaxchain (Tue, 22 Aug 2017 02:50:12 GMT): jyellick (Tue, 22 Aug 2017 02:51:29 GMT): asaningmaxchain (Tue, 22 Aug 2017 02:52:44 GMT): jyellick (Tue, 22 Aug 2017 02:53:43 GMT): asaningmaxchain (Tue, 22 Aug 2017 04:55:11 GMT): asaningmaxchain (Tue, 22 Aug 2017 04:55:12 GMT): asaningmaxchain (Tue, 22 Aug 2017 05:00:08 GMT): asaningmaxchain (Tue, 22 Aug 2017 05:00:19 GMT): asaningmaxchain (Tue, 22 Aug 2017 05:01:19 GMT): asaningmaxchain (Tue, 22 Aug 2017 05:01:36 GMT): asaningmaxchain (Tue, 22 Aug 2017 05:02:09 GMT): asaningmaxchain (Tue, 22 Aug 2017 05:02:09 GMT): asaningmaxchain (Tue, 22 Aug 2017 05:03:24 GMT): asaningmaxchain (Tue, 22 Aug 2017 05:03:53 GMT): asaningmaxchain (Tue, 22 Aug 2017 05:03:53 GMT): asaningmaxchain (Tue, 22 Aug 2017 05:03:53 GMT): asaningmaxchain (Tue, 22 Aug 2017 05:03:53 GMT): asaningmaxchain (Tue, 22 Aug 2017 05:03:53 GMT): asaningmaxchain (Tue, 22 Aug 2017 05:04:48 GMT): asaningmaxchain (Tue, 22 Aug 2017 05:04:48 GMT): asaningmaxchain (Tue, 22 Aug 2017 05:06:16 GMT): asaningmaxchain (Tue, 22 Aug 2017 05:06:34 GMT): asaningmaxchain (Tue, 22 Aug 2017 05:06:34 GMT): asaningmaxchain (Tue, 22 Aug 2017 05:06:59 GMT): asaningmaxchain (Tue, 22 Aug 2017 05:17:24 GMT): asaningmaxchain (Tue, 22 Aug 2017 05:17:26 GMT): jyellick (Tue, 22 Aug 2017 05:17:59 GMT): asaningmaxchain (Tue, 22 Aug 2017 05:18:14 GMT): jyellick (Tue, 22 Aug 2017 05:19:04 GMT): asaningmaxchain (Tue, 22 Aug 2017 05:19:52 GMT): asaningmaxchain (Tue, 22 Aug 2017 05:20:03 GMT): jyellick (Tue, 22 Aug 2017 
05:20:54 GMT): asaningmaxchain (Tue, 22 Aug 2017 05:21:55 GMT): asaningmaxchain (Tue, 22 Aug 2017 05:21:59 GMT): asaningmaxchain (Tue, 22 Aug 2017 05:29:56 GMT): asaningmaxchain (Tue, 22 Aug 2017 05:35:51 GMT): asaningmaxchain (Tue, 22 Aug 2017 05:35:55 GMT): cwng (Tue, 22 Aug 2017 09:44:25 GMT): sklump (Tue, 22 Aug 2017 11:50:58 GMT): jyellick (Tue, 22 Aug 2017 12:54:33 GMT): asaningmaxchain (Tue, 22 Aug 2017 13:02:39 GMT): asaningmaxchain (Tue, 22 Aug 2017 13:02:51 GMT): asaningmaxchain (Tue, 22 Aug 2017 13:03:03 GMT): asaningmaxchain (Tue, 22 Aug 2017 13:03:19 GMT): asaningmaxchain (Tue, 22 Aug 2017 13:03:37 GMT): asaningmaxchain (Tue, 22 Aug 2017 13:03:49 GMT): asaningmaxchain (Tue, 22 Aug 2017 13:04:03 GMT): asaningmaxchain (Tue, 22 Aug 2017 13:04:17 GMT): asaningmaxchain (Tue, 22 Aug 2017 13:04:38 GMT): asaningmaxchain (Tue, 22 Aug 2017 13:04:52 GMT): asaningmaxchain (Tue, 22 Aug 2017 13:05:00 GMT): asaningmaxchain (Tue, 22 Aug 2017 13:05:30 GMT): asaningmaxchain (Tue, 22 Aug 2017 13:05:41 GMT): asaningmaxchain (Tue, 22 Aug 2017 13:05:54 GMT): asaningmaxchain (Tue, 22 Aug 2017 13:06:05 GMT): asaningmaxchain (Tue, 22 Aug 2017 13:06:19 GMT): asaningmaxchain (Tue, 22 Aug 2017 13:08:31 GMT): jyellick (Tue, 22 Aug 2017 13:54:56 GMT): asaningmaxchain (Tue, 22 Aug 2017 14:12:53 GMT): asaningmaxchain (Tue, 22 Aug 2017 14:12:59 GMT): asaningmaxchain (Tue, 22 Aug 2017 14:13:09 GMT): asaningmaxchain (Tue, 22 Aug 2017 14:25:20 GMT): asaningmaxchain (Tue, 22 Aug 2017 14:25:23 GMT): jyellick (Tue, 22 Aug 2017 14:25:52 GMT): jyellick (Tue, 22 Aug 2017 14:26:17 GMT): asaningmaxchain (Tue, 22 Aug 2017 14:27:04 GMT): asaningmaxchain (Tue, 22 Aug 2017 14:27:11 GMT): asaningmaxchain (Tue, 22 Aug 2017 14:30:37 GMT): asaningmaxchain (Tue, 22 Aug 2017 14:35:37 GMT): jyellick (Tue, 22 Aug 2017 14:46:44 GMT): asaningmaxchain (Tue, 22 Aug 2017 17:03:20 GMT): jyellick (Tue, 22 Aug 2017 17:04:44 GMT): asaningmaxchain (Tue, 22 Aug 2017 17:05:39 GMT): asaningmaxchain (Tue, 22 Aug 2017 
17:22:01 GMT): asaningmaxchain (Tue, 22 Aug 2017 17:22:10 GMT): asaningmaxchain (Tue, 22 Aug 2017 17:23:15 GMT): asaningmaxchain (Tue, 22 Aug 2017 17:23:23 GMT): asaningmaxchain (Tue, 22 Aug 2017 17:23:23 GMT): asaningmaxchain (Tue, 22 Aug 2017 17:24:23 GMT): asaningmaxchain (Tue, 22 Aug 2017 17:25:18 GMT): asaningmaxchain (Tue, 22 Aug 2017 17:25:39 GMT): asaningmaxchain (Tue, 22 Aug 2017 17:26:13 GMT): asaningmaxchain (Tue, 22 Aug 2017 17:26:19 GMT): asaningmaxchain (Tue, 22 Aug 2017 17:26:44 GMT): asaningmaxchain (Tue, 22 Aug 2017 17:27:08 GMT): asaningmaxchain (Tue, 22 Aug 2017 17:27:27 GMT): asaningmaxchain (Tue, 22 Aug 2017 17:27:35 GMT): asaningmaxchain (Tue, 22 Aug 2017 17:28:02 GMT): asaningmaxchain (Tue, 22 Aug 2017 17:28:15 GMT): asaningmaxchain (Tue, 22 Aug 2017 17:28:39 GMT): asaningmaxchain (Tue, 22 Aug 2017 17:28:39 GMT): asaningmaxchain (Tue, 22 Aug 2017 17:29:21 GMT): asaningmaxchain (Tue, 22 Aug 2017 17:29:36 GMT): asaningmaxchain (Tue, 22 Aug 2017 17:29:47 GMT): asaningmaxchain (Tue, 22 Aug 2017 17:30:00 GMT): asaningmaxchain (Tue, 22 Aug 2017 17:30:24 GMT): asaningmaxchain (Tue, 22 Aug 2017 17:30:25 GMT): asaningmaxchain (Tue, 22 Aug 2017 17:30:30 GMT): jyellick (Tue, 22 Aug 2017 17:30:37 GMT): asaningmaxchain (Tue, 22 Aug 2017 17:31:13 GMT): asaningmaxchain (Tue, 22 Aug 2017 17:31:15 GMT): jyellick (Tue, 22 Aug 2017 17:31:39 GMT): asaningmaxchain (Tue, 22 Aug 2017 17:33:06 GMT): asaningmaxchain (Tue, 22 Aug 2017 17:33:20 GMT): asaningmaxchain (Tue, 22 Aug 2017 17:33:38 GMT): asaningmaxchain (Tue, 22 Aug 2017 17:33:43 GMT): asaningmaxchain (Tue, 22 Aug 2017 17:33:50 GMT): asaningmaxchain (Tue, 22 Aug 2017 17:33:50 GMT): jyellick (Tue, 22 Aug 2017 17:34:02 GMT): asaningmaxchain (Tue, 22 Aug 2017 17:34:08 GMT): asaningmaxchain (Tue, 22 Aug 2017 17:34:09 GMT): asaningmaxchain (Tue, 22 Aug 2017 17:36:51 GMT): asaningmaxchain (Tue, 22 Aug 2017 17:36:53 GMT): jyellick (Tue, 22 Aug 2017 17:37:08 GMT): asaningmaxchain (Tue, 22 Aug 2017 17:41:10 GMT): 
asaningmaxchain (Tue, 22 Aug 2017 17:41:41 GMT): asaningmaxchain (Tue, 22 Aug 2017 17:42:07 GMT): asaningmaxchain (Tue, 22 Aug 2017 17:42:25 GMT): jyellick (Tue, 22 Aug 2017 17:43:31 GMT): asaningmaxchain (Tue, 22 Aug 2017 17:44:30 GMT): asaningmaxchain (Tue, 22 Aug 2017 17:44:34 GMT): jyellick (Tue, 22 Aug 2017 17:44:50 GMT): asaningmaxchain (Tue, 22 Aug 2017 17:45:58 GMT): jyellick (Tue, 22 Aug 2017 17:46:19 GMT): asaningmaxchain (Tue, 22 Aug 2017 17:47:17 GMT): asaningmaxchain (Tue, 22 Aug 2017 17:49:00 GMT): asaningmaxchain (Tue, 22 Aug 2017 17:49:07 GMT): jyellick (Tue, 22 Aug 2017 17:49:14 GMT): rsherwood (Tue, 22 Aug 2017 18:12:45 GMT): hayato (Tue, 22 Aug 2017 23:22:58 GMT): asaningmaxchain (Wed, 23 Aug 2017 06:17:32 GMT): asaningmaxchain (Wed, 23 Aug 2017 06:19:27 GMT): asaningmaxchain (Wed, 23 Aug 2017 06:19:57 GMT): asaningmaxchain (Wed, 23 Aug 2017 06:21:12 GMT): asaningmaxchain (Wed, 23 Aug 2017 06:21:34 GMT): asaningmaxchain (Wed, 23 Aug 2017 06:22:16 GMT): asaningmaxchain (Wed, 23 Aug 2017 06:22:27 GMT): asaningmaxchain (Wed, 23 Aug 2017 06:22:33 GMT): asaningmaxchain (Wed, 23 Aug 2017 06:22:33 GMT): jyellick (Wed, 23 Aug 2017 06:22:38 GMT): jyellick (Wed, 23 Aug 2017 06:22:38 GMT): asaningmaxchain (Wed, 23 Aug 2017 06:23:35 GMT): asaningmaxchain (Wed, 23 Aug 2017 06:23:45 GMT): asaningmaxchain (Wed, 23 Aug 2017 06:23:53 GMT): jyellick (Wed, 23 Aug 2017 06:23:54 GMT): jyellick (Wed, 23 Aug 2017 06:24:22 GMT): jyellick (Wed, 23 Aug 2017 06:24:22 GMT): asaningmaxchain (Wed, 23 Aug 2017 06:25:54 GMT): asaningmaxchain (Wed, 23 Aug 2017 06:25:54 GMT): jyellick (Wed, 23 Aug 2017 06:26:28 GMT): jyellick (Wed, 23 Aug 2017 06:27:04 GMT): asaningmaxchain (Wed, 23 Aug 2017 06:27:13 GMT): asaningmaxchain (Wed, 23 Aug 2017 06:27:16 GMT): asaningmaxchain (Wed, 23 Aug 2017 06:27:34 GMT): asaningmaxchain (Wed, 23 Aug 2017 06:27:34 GMT): asaningmaxchain (Wed, 23 Aug 2017 06:27:52 GMT): asaningmaxchain (Wed, 23 Aug 2017 08:05:40 GMT): asaningmaxchain (Wed, 23 Aug 2017 
08:05:40 GMT): asaningmaxchain (Wed, 23 Aug 2017 08:07:17 GMT): asaningmaxchain (Wed, 23 Aug 2017 08:07:26 GMT): asaningmaxchain (Wed, 23 Aug 2017 08:07:54 GMT): asaningmaxchain (Wed, 23 Aug 2017 08:09:05 GMT): asaningmaxchain (Wed, 23 Aug 2017 08:09:09 GMT): asaningmaxchain (Wed, 23 Aug 2017 08:10:03 GMT): asaningmaxchain (Wed, 23 Aug 2017 08:10:58 GMT): asaningmaxchain (Wed, 23 Aug 2017 08:11:26 GMT): asaningmaxchain (Wed, 23 Aug 2017 08:11:26 GMT): asaningmaxchain (Wed, 23 Aug 2017 08:13:44 GMT): asaningmaxchain (Wed, 23 Aug 2017 08:13:54 GMT): asaningmaxchain (Wed, 23 Aug 2017 08:14:09 GMT): asaningmaxchain (Wed, 23 Aug 2017 08:14:18 GMT): asaningmaxchain (Wed, 23 Aug 2017 08:15:34 GMT): guoger (Wed, 23 Aug 2017 08:20:15 GMT): guoger (Wed, 23 Aug 2017 08:20:44 GMT): asaningmaxchain (Wed, 23 Aug 2017 08:25:47 GMT): guoger (Wed, 23 Aug 2017 08:26:46 GMT): asaningmaxchain (Wed, 23 Aug 2017 08:27:27 GMT): asaningmaxchain (Wed, 23 Aug 2017 08:31:16 GMT): asaningmaxchain (Wed, 23 Aug 2017 08:31:18 GMT): asaningmaxchain (Wed, 23 Aug 2017 08:32:34 GMT): asaningmaxchain (Wed, 23 Aug 2017 08:32:39 GMT): Hai-XuCheng (Wed, 23 Aug 2017 08:42:28 GMT): asaningmaxchain (Wed, 23 Aug 2017 11:54:06 GMT): asaningmaxchain (Wed, 23 Aug 2017 12:04:50 GMT): asaningmaxchain (Wed, 23 Aug 2017 12:41:56 GMT): asaningmaxchain (Wed, 23 Aug 2017 12:42:04 GMT): asaningmaxchain (Wed, 23 Aug 2017 12:42:20 GMT): asaningmaxchain (Wed, 23 Aug 2017 12:42:20 GMT): latitiah (Wed, 23 Aug 2017 14:59:24 GMT): jyellick (Wed, 23 Aug 2017 15:02:21 GMT): jyellick (Wed, 23 Aug 2017 15:05:24 GMT): asaningmaxchain (Wed, 23 Aug 2017 15:21:40 GMT): asaningmaxchain (Wed, 23 Aug 2017 15:21:49 GMT): asaningmaxchain (Wed, 23 Aug 2017 15:23:29 GMT): asaningmaxchain (Wed, 23 Aug 2017 15:23:47 GMT): asaningmaxchain (Wed, 23 Aug 2017 15:24:33 GMT): jyellick (Wed, 23 Aug 2017 15:25:15 GMT): jyellick (Wed, 23 Aug 2017 15:25:15 GMT): asaningmaxchain (Wed, 23 Aug 2017 15:27:06 GMT): asaningmaxchain (Wed, 23 Aug 2017 
15:27:06 GMT): asaningmaxchain (Wed, 23 Aug 2017 15:27:06 GMT): asaningmaxchain (Wed, 23 Aug 2017 15:27:06 GMT): asaningmaxchain (Wed, 23 Aug 2017 15:27:50 GMT): jyellick (Wed, 23 Aug 2017 15:28:27 GMT): asaningmaxchain (Wed, 23 Aug 2017 15:29:11 GMT): asaningmaxchain (Wed, 23 Aug 2017 15:29:13 GMT): jyellick (Wed, 23 Aug 2017 15:29:43 GMT): asaningmaxchain (Wed, 23 Aug 2017 15:31:59 GMT): asaningmaxchain (Wed, 23 Aug 2017 15:32:09 GMT): asaningmaxchain (Wed, 23 Aug 2017 15:32:23 GMT): jyellick (Wed, 23 Aug 2017 15:32:55 GMT): jyellick (Wed, 23 Aug 2017 15:33:10 GMT): jyellick (Wed, 23 Aug 2017 15:33:36 GMT): asaningmaxchain (Wed, 23 Aug 2017 15:33:50 GMT): asaningmaxchain (Wed, 23 Aug 2017 15:33:57 GMT): asaningmaxchain (Wed, 23 Aug 2017 15:34:28 GMT): asaningmaxchain (Wed, 23 Aug 2017 15:40:23 GMT): jyellick (Wed, 23 Aug 2017 15:44:56 GMT): jyellick (Wed, 23 Aug 2017 15:45:19 GMT): asaningmaxchain (Wed, 23 Aug 2017 16:03:41 GMT): asaningmaxchain (Wed, 23 Aug 2017 16:05:12 GMT): asaningmaxchain (Wed, 23 Aug 2017 16:05:25 GMT): jyellick (Wed, 23 Aug 2017 16:05:41 GMT): jyellick (Wed, 23 Aug 2017 16:06:01 GMT): jyellick (Wed, 23 Aug 2017 16:06:06 GMT): asaningmaxchain (Wed, 23 Aug 2017 16:06:22 GMT): jyellick (Wed, 23 Aug 2017 16:06:27 GMT): jyellick (Wed, 23 Aug 2017 16:06:53 GMT): asaningmaxchain (Wed, 23 Aug 2017 16:07:28 GMT): asaningmaxchain (Wed, 23 Aug 2017 16:07:37 GMT): jyellick (Wed, 23 Aug 2017 16:07:41 GMT): asaningmaxchain (Wed, 23 Aug 2017 16:08:02 GMT): jyellick (Wed, 23 Aug 2017 16:09:07 GMT): asaningmaxchain (Wed, 23 Aug 2017 16:09:10 GMT): jyellick (Wed, 23 Aug 2017 16:09:29 GMT): jyellick (Wed, 23 Aug 2017 16:09:46 GMT): asaningmaxchain (Wed, 23 Aug 2017 16:09:53 GMT): asaningmaxchain (Wed, 23 Aug 2017 16:09:57 GMT): asaningmaxchain (Wed, 23 Aug 2017 16:10:47 GMT): asaningmaxchain (Wed, 23 Aug 2017 16:11:02 GMT): asaningmaxchain (Wed, 23 Aug 2017 16:11:03 GMT): jyellick (Wed, 23 Aug 2017 16:11:26 GMT): asaningmaxchain (Wed, 23 Aug 2017 16:11:50 
GMT): jyellick (Wed, 23 Aug 2017 16:13:26 GMT): jyellick (Wed, 23 Aug 2017 16:13:35 GMT): jyellick (Wed, 23 Aug 2017 16:13:52 GMT): asaningmaxchain (Wed, 23 Aug 2017 16:14:09 GMT): asaningmaxchain (Wed, 23 Aug 2017 16:14:21 GMT): asaningmaxchain (Wed, 23 Aug 2017 16:19:08 GMT): asaningmaxchain (Wed, 23 Aug 2017 16:19:09 GMT): asaningmaxchain (Wed, 23 Aug 2017 16:20:10 GMT): jyellick (Wed, 23 Aug 2017 16:20:29 GMT): jyellick (Wed, 23 Aug 2017 16:22:04 GMT): asaningmaxchain (Wed, 23 Aug 2017 16:22:21 GMT): asaningmaxchain (Wed, 23 Aug 2017 16:22:23 GMT): jyellick (Wed, 23 Aug 2017 16:22:31 GMT): asaningmaxchain (Wed, 23 Aug 2017 16:22:40 GMT): asaningmaxchain (Wed, 23 Aug 2017 16:23:06 GMT): jyellick (Wed, 23 Aug 2017 16:23:21 GMT): asaningmaxchain (Wed, 23 Aug 2017 16:23:32 GMT): asaningmaxchain (Wed, 23 Aug 2017 16:24:02 GMT): asaningmaxchain (Wed, 23 Aug 2017 16:24:02 GMT): asaningmaxchain (Wed, 23 Aug 2017 16:34:11 GMT): asaningmaxchain (Wed, 23 Aug 2017 16:34:18 GMT): asaningmaxchain (Wed, 23 Aug 2017 16:34:29 GMT): asaningmaxchain (Wed, 23 Aug 2017 16:34:30 GMT): jyellick (Wed, 23 Aug 2017 16:34:37 GMT): jyellick (Wed, 23 Aug 2017 16:34:52 GMT): asaningmaxchain (Wed, 23 Aug 2017 16:35:23 GMT): asaningmaxchain (Wed, 23 Aug 2017 16:35:34 GMT): jyellick (Wed, 23 Aug 2017 16:36:07 GMT): asaningmaxchain (Wed, 23 Aug 2017 16:36:37 GMT): asaningmaxchain (Wed, 23 Aug 2017 16:36:38 GMT): jyellick (Wed, 23 Aug 2017 16:36:46 GMT): jyellick (Wed, 23 Aug 2017 16:37:30 GMT): asaningmaxchain (Wed, 23 Aug 2017 16:39:13 GMT): asaningmaxchain (Wed, 23 Aug 2017 16:39:23 GMT): asaningmaxchain (Wed, 23 Aug 2017 16:39:29 GMT): asaningmaxchain (Wed, 23 Aug 2017 16:41:35 GMT): asaningmaxchain (Wed, 23 Aug 2017 16:41:50 GMT): Glen (Thu, 24 Aug 2017 02:52:24 GMT): jyellick (Thu, 24 Aug 2017 02:52:53 GMT): Glen (Thu, 24 Aug 2017 02:53:41 GMT): Glen (Thu, 24 Aug 2017 02:54:09 GMT): Glen (Thu, 24 Aug 2017 02:54:47 GMT): jyellick (Thu, 24 Aug 2017 02:55:22 GMT): jyellick (Thu, 24 Aug 2017 
02:55:47 GMT): Glen (Thu, 24 Aug 2017 02:55:52 GMT): jyellick (Thu, 24 Aug 2017 02:56:14 GMT): jyellick (Thu, 24 Aug 2017 02:56:29 GMT): Glen (Thu, 24 Aug 2017 02:56:42 GMT): jyellick (Thu, 24 Aug 2017 02:57:11 GMT): Glen (Thu, 24 Aug 2017 02:57:18 GMT): Glen (Thu, 24 Aug 2017 02:58:10 GMT): jyellick (Thu, 24 Aug 2017 02:58:38 GMT): jyellick (Thu, 24 Aug 2017 02:58:50 GMT): jyellick (Thu, 24 Aug 2017 02:59:36 GMT): jyellick (Thu, 24 Aug 2017 02:59:36 GMT): jyellick (Thu, 24 Aug 2017 02:59:36 GMT): jyellick (Thu, 24 Aug 2017 03:00:19 GMT): Glen (Thu, 24 Aug 2017 03:00:41 GMT): Glen (Thu, 24 Aug 2017 03:01:06 GMT): Glen (Thu, 24 Aug 2017 03:01:32 GMT): jyellick (Thu, 24 Aug 2017 03:02:11 GMT): jyellick (Thu, 24 Aug 2017 03:02:11 GMT): Glen (Thu, 24 Aug 2017 03:02:15 GMT): jyellick (Thu, 24 Aug 2017 03:04:04 GMT): Glen (Thu, 24 Aug 2017 03:06:24 GMT): jyellick (Thu, 24 Aug 2017 03:08:45 GMT): duwenhui (Thu, 24 Aug 2017 05:28:35 GMT): asaningmaxchain (Thu, 24 Aug 2017 09:42:02 GMT): asaningmaxchain (Thu, 24 Aug 2017 09:42:53 GMT): asaningmaxchain (Thu, 24 Aug 2017 09:43:32 GMT): asaningmaxchain (Thu, 24 Aug 2017 09:43:38 GMT): asaningmaxchain (Thu, 24 Aug 2017 09:43:47 GMT): asaningmaxchain (Thu, 24 Aug 2017 09:44:04 GMT): asaningmaxchain (Thu, 24 Aug 2017 09:45:23 GMT): asaningmaxchain (Thu, 24 Aug 2017 09:46:34 GMT): asaningmaxchain (Thu, 24 Aug 2017 09:47:02 GMT): asaningmaxchain (Thu, 24 Aug 2017 13:27:59 GMT): asaningmaxchain (Thu, 24 Aug 2017 13:28:29 GMT): asaningmaxchain (Thu, 24 Aug 2017 13:30:09 GMT): asaningmaxchain (Thu, 24 Aug 2017 13:30:51 GMT): asaningmaxchain (Thu, 24 Aug 2017 13:30:51 GMT): asaningmaxchain (Thu, 24 Aug 2017 13:32:13 GMT): asaningmaxchain (Thu, 24 Aug 2017 13:32:43 GMT): jyellick (Thu, 24 Aug 2017 13:33:20 GMT): jyellick (Thu, 24 Aug 2017 13:34:14 GMT): jyellick (Thu, 24 Aug 2017 13:34:28 GMT): asaningmaxchain (Thu, 24 Aug 2017 13:37:20 GMT): asaningmaxchain (Thu, 24 Aug 2017 13:37:31 GMT): asaningmaxchain (Thu, 24 Aug 2017 13:38:20 
GMT): asaningmaxchain (Thu, 24 Aug 2017 13:38:28 GMT): asaningmaxchain (Thu, 24 Aug 2017 13:40:31 GMT): jyellick (Thu, 24 Aug 2017 13:46:29 GMT): asaningmaxchain (Thu, 24 Aug 2017 13:52:50 GMT): asaningmaxchain (Thu, 24 Aug 2017 13:56:38 GMT): asaningmaxchain (Thu, 24 Aug 2017 13:56:51 GMT): asaningmaxchain (Thu, 24 Aug 2017 13:57:18 GMT): asaningmaxchain (Thu, 24 Aug 2017 13:57:31 GMT): asaningmaxchain (Thu, 24 Aug 2017 13:57:39 GMT): asaningmaxchain (Thu, 24 Aug 2017 13:58:10 GMT): asaningmaxchain (Thu, 24 Aug 2017 13:59:41 GMT): asaningmaxchain (Thu, 24 Aug 2017 14:02:25 GMT): asaningmaxchain (Thu, 24 Aug 2017 14:03:05 GMT): asaningmaxchain (Thu, 24 Aug 2017 14:04:49 GMT): asaningmaxchain (Thu, 24 Aug 2017 14:05:54 GMT): ysadek (Thu, 24 Aug 2017 14:44:42 GMT): asaningmaxchain (Fri, 25 Aug 2017 03:48:36 GMT): asaningmaxchain (Fri, 25 Aug 2017 04:30:46 GMT): jyellick (Fri, 25 Aug 2017 13:33:13 GMT): asaningmaxchain (Fri, 25 Aug 2017 15:44:33 GMT): asaningmaxchain (Fri, 25 Aug 2017 15:44:51 GMT): asaningmaxchain (Fri, 25 Aug 2017 15:45:07 GMT): asaningmaxchain (Fri, 25 Aug 2017 15:45:48 GMT): jyellick (Fri, 25 Aug 2017 15:46:08 GMT): asaningmaxchain (Fri, 25 Aug 2017 15:47:24 GMT): asaningmaxchain (Fri, 25 Aug 2017 15:48:36 GMT): asaningmaxchain (Fri, 25 Aug 2017 15:50:34 GMT): asaningmaxchain (Fri, 25 Aug 2017 15:50:34 GMT): asaningmaxchain (Fri, 25 Aug 2017 15:51:40 GMT): asaningmaxchain (Fri, 25 Aug 2017 15:51:40 GMT): jyellick (Fri, 25 Aug 2017 15:53:34 GMT): jyellick (Fri, 25 Aug 2017 15:54:00 GMT): asaningmaxchain (Fri, 25 Aug 2017 15:56:50 GMT): asaningmaxchain (Fri, 25 Aug 2017 15:57:56 GMT): asaningmaxchain (Fri, 25 Aug 2017 15:57:56 GMT): asaningmaxchain (Fri, 25 Aug 2017 15:58:52 GMT): asaningmaxchain (Fri, 25 Aug 2017 15:59:31 GMT): asaningmaxchain (Fri, 25 Aug 2017 16:00:28 GMT): asaningmaxchain (Fri, 25 Aug 2017 16:00:59 GMT): asaningmaxchain (Fri, 25 Aug 2017 16:01:27 GMT): asaningmaxchain (Fri, 25 Aug 2017 16:01:53 GMT): asaningmaxchain (Fri, 25 Aug 
2017 16:02:07 GMT): asaningmaxchain (Fri, 25 Aug 2017 16:02:42 GMT): jyellick (Fri, 25 Aug 2017 16:05:55 GMT): asaningmaxchain (Fri, 25 Aug 2017 16:10:01 GMT): asaningmaxchain (Fri, 25 Aug 2017 16:10:08 GMT): asaningmaxchain (Fri, 25 Aug 2017 16:11:10 GMT): asaningmaxchain (Fri, 25 Aug 2017 16:29:08 GMT): asaningmaxchain (Fri, 25 Aug 2017 16:29:37 GMT): jyellick (Fri, 25 Aug 2017 16:32:20 GMT): jyellick (Fri, 25 Aug 2017 16:33:08 GMT): asaningmaxchain (Fri, 25 Aug 2017 16:34:18 GMT): asaningmaxchain (Fri, 25 Aug 2017 16:34:23 GMT): asaningmaxchain (Fri, 25 Aug 2017 16:34:35 GMT): jyellick (Fri, 25 Aug 2017 16:37:09 GMT): jyellick (Fri, 25 Aug 2017 16:37:09 GMT): jyellick (Fri, 25 Aug 2017 16:38:36 GMT): asaningmaxchain (Fri, 25 Aug 2017 16:40:37 GMT): asaningmaxchain (Fri, 25 Aug 2017 16:40:38 GMT): jyellick (Fri, 25 Aug 2017 16:41:00 GMT): jyellick (Fri, 25 Aug 2017 16:41:10 GMT): asaningmaxchain (Fri, 25 Aug 2017 16:45:06 GMT): asaningmaxchain (Fri, 25 Aug 2017 16:45:16 GMT): asaningmaxchain (Fri, 25 Aug 2017 16:48:58 GMT): asaningmaxchain (Fri, 25 Aug 2017 16:49:30 GMT): asaningmaxchain (Fri, 25 Aug 2017 16:53:01 GMT): jyellick (Fri, 25 Aug 2017 16:53:03 GMT): asaningmaxchain (Fri, 25 Aug 2017 16:53:18 GMT): asaningmaxchain (Fri, 25 Aug 2017 16:54:14 GMT): asaningmaxchain (Fri, 25 Aug 2017 16:55:03 GMT): asaningmaxchain (Fri, 25 Aug 2017 16:55:17 GMT): asaningmaxchain (Fri, 25 Aug 2017 16:55:37 GMT): asaningmaxchain (Fri, 25 Aug 2017 16:56:47 GMT): asaningmaxchain (Fri, 25 Aug 2017 16:56:56 GMT): asaningmaxchain (Fri, 25 Aug 2017 16:58:41 GMT): asaningmaxchain (Fri, 25 Aug 2017 16:58:51 GMT): jyellick (Fri, 25 Aug 2017 17:00:49 GMT): jyellick (Fri, 25 Aug 2017 17:01:21 GMT): asaningmaxchain (Fri, 25 Aug 2017 17:02:17 GMT): asaningmaxchain (Fri, 25 Aug 2017 17:02:27 GMT): asaningmaxchain (Fri, 25 Aug 2017 17:02:38 GMT): jyellick (Fri, 25 Aug 2017 17:02:49 GMT): asaningmaxchain (Fri, 25 Aug 2017 17:02:53 GMT): asaningmaxchain (Fri, 25 Aug 2017 17:04:41 GMT): 
asaningmaxchain (Fri, 25 Aug 2017 17:05:00 GMT): asaningmaxchain (Fri, 25 Aug 2017 17:05:00 GMT): asaningmaxchain (Fri, 25 Aug 2017 17:07:52 GMT): asaningmaxchain (Fri, 25 Aug 2017 17:08:16 GMT): asaningmaxchain (Fri, 25 Aug 2017 17:08:16 GMT): jyellick (Fri, 25 Aug 2017 17:09:25 GMT): asaningmaxchain (Fri, 25 Aug 2017 17:09:54 GMT): jyellick (Fri, 25 Aug 2017 17:09:55 GMT): asaningmaxchain (Fri, 25 Aug 2017 17:10:23 GMT): asaningmaxchain (Sat, 26 Aug 2017 03:45:20 GMT): asaningmaxchain (Sat, 26 Aug 2017 03:45:20 GMT): asaningmaxchain (Sat, 26 Aug 2017 03:45:22 GMT): asaningmaxchain (Sat, 26 Aug 2017 03:46:01 GMT): jyellick (Sat, 26 Aug 2017 05:51:01 GMT): asaningmaxchain (Sat, 26 Aug 2017 05:52:50 GMT): asaningmaxchain (Sat, 26 Aug 2017 05:54:53 GMT): jyellick (Sat, 26 Aug 2017 05:55:25 GMT): jyellick (Sat, 26 Aug 2017 05:55:25 GMT): jyellick (Sat, 26 Aug 2017 05:55:25 GMT): jyellick (Sat, 26 Aug 2017 05:55:25 GMT): jyellick (Sat, 26 Aug 2017 05:55:25 GMT): jyellick (Sat, 26 Aug 2017 05:57:06 GMT): asaningmaxchain (Sat, 26 Aug 2017 05:57:59 GMT): asaningmaxchain (Sat, 26 Aug 2017 05:57:59 GMT): jyellick (Sat, 26 Aug 2017 05:58:31 GMT): jyellick (Sat, 26 Aug 2017 05:59:01 GMT): asaningmaxchain (Sat, 26 Aug 2017 05:59:37 GMT): asaningmaxchain (Sat, 26 Aug 2017 06:02:06 GMT): jyellick (Sat, 26 Aug 2017 06:02:19 GMT): aberfou (Sat, 26 Aug 2017 14:53:40 GMT): jyellick (Sat, 26 Aug 2017 15:25:43 GMT): aberfou (Sat, 26 Aug 2017 15:26:53 GMT): aberfou (Sat, 26 Aug 2017 15:26:53 GMT): jyellick (Sat, 26 Aug 2017 15:28:23 GMT): aberfou (Sat, 26 Aug 2017 15:29:31 GMT): asaningmaxchain (Mon, 28 Aug 2017 02:57:20 GMT): asaningmaxchain (Mon, 28 Aug 2017 02:57:49 GMT): asaningmaxchain (Mon, 28 Aug 2017 02:58:07 GMT): jyellick (Mon, 28 Aug 2017 02:58:43 GMT): asaningmaxchain (Mon, 28 Aug 2017 02:59:04 GMT): jyellick (Mon, 28 Aug 2017 02:59:24 GMT): jyellick (Mon, 28 Aug 2017 02:59:30 GMT): asaningmaxchain (Mon, 28 Aug 2017 03:00:15 GMT): asaningmaxchain (Mon, 28 Aug 2017 03:00:38 
GMT): jyellick (Mon, 28 Aug 2017 03:02:29 GMT): jyellick (Mon, 28 Aug 2017 03:02:37 GMT): asaningmaxchain (Mon, 28 Aug 2017 03:02:38 GMT): jyellick (Mon, 28 Aug 2017 03:03:36 GMT): jyellick (Mon, 28 Aug 2017 03:03:55 GMT): jyellick (Mon, 28 Aug 2017 03:04:18 GMT): asaningmaxchain (Mon, 28 Aug 2017 03:04:20 GMT): asaningmaxchain (Mon, 28 Aug 2017 03:05:28 GMT): jyellick (Mon, 28 Aug 2017 03:06:36 GMT): asaningmaxchain (Mon, 28 Aug 2017 03:06:45 GMT): asaningmaxchain (Mon, 28 Aug 2017 03:07:05 GMT): asaningmaxchain (Mon, 28 Aug 2017 03:07:37 GMT): jyellick (Mon, 28 Aug 2017 03:09:42 GMT): asaningmaxchain (Mon, 28 Aug 2017 03:10:46 GMT): asaningmaxchain (Mon, 28 Aug 2017 03:10:47 GMT): asaningmaxchain (Mon, 28 Aug 2017 05:52:03 GMT): asaningmaxchain (Mon, 28 Aug 2017 05:52:56 GMT): Glen (Mon, 28 Aug 2017 06:48:36 GMT): asaningmaxchain (Mon, 28 Aug 2017 12:08:16 GMT): asaningmaxchain (Mon, 28 Aug 2017 12:08:27 GMT): asaningmaxchain (Mon, 28 Aug 2017 12:08:29 GMT): asaningmaxchain (Mon, 28 Aug 2017 12:08:44 GMT): asaningmaxchain (Mon, 28 Aug 2017 12:09:13 GMT): asaningmaxchain (Mon, 28 Aug 2017 12:09:33 GMT): asaningmaxchain (Mon, 28 Aug 2017 14:41:32 GMT): asaningmaxchain (Mon, 28 Aug 2017 14:41:44 GMT): asaningmaxchain (Mon, 28 Aug 2017 14:41:53 GMT): asaningmaxchain (Mon, 28 Aug 2017 14:42:23 GMT): asaningmaxchain (Mon, 28 Aug 2017 14:42:39 GMT): asaningmaxchain (Mon, 28 Aug 2017 14:42:55 GMT): asaningmaxchain (Mon, 28 Aug 2017 14:42:55 GMT): jyellick (Mon, 28 Aug 2017 14:42:58 GMT): asaningmaxchain (Mon, 28 Aug 2017 14:43:19 GMT): asaningmaxchain (Mon, 28 Aug 2017 14:44:12 GMT): jyellick (Mon, 28 Aug 2017 14:44:41 GMT): asaningmaxchain (Mon, 28 Aug 2017 14:45:11 GMT): jworthington (Mon, 28 Aug 2017 14:49:04 GMT): jworthington (Mon, 28 Aug 2017 16:09:38 GMT): jyellick (Mon, 28 Aug 2017 16:43:21 GMT): jworthington (Mon, 28 Aug 2017 17:33:01 GMT): gauthampamu (Mon, 28 Aug 2017 18:25:49 GMT): gauthampamu (Mon, 28 Aug 2017 18:27:33 GMT): jyellick (Mon, 28 Aug 2017 
19:18:38 GMT): asaningmaxchain (Tue, 29 Aug 2017 02:06:09 GMT): jyellick (Tue, 29 Aug 2017 02:06:56 GMT): asaningmaxchain (Tue, 29 Aug 2017 02:07:47 GMT): jyellick (Tue, 29 Aug 2017 02:10:48 GMT): asaningmaxchain (Tue, 29 Aug 2017 02:12:32 GMT): asaningmaxchain (Tue, 29 Aug 2017 02:13:13 GMT): asaningmaxchain (Tue, 29 Aug 2017 02:13:14 GMT): jyellick (Tue, 29 Aug 2017 02:13:30 GMT): asaningmaxchain (Tue, 29 Aug 2017 02:13:32 GMT): asaningmaxchain (Tue, 29 Aug 2017 02:13:37 GMT): asaningmaxchain (Tue, 29 Aug 2017 02:13:41 GMT): Glen (Tue, 29 Aug 2017 02:27:05 GMT): jyellick (Tue, 29 Aug 2017 02:28:11 GMT): Glen (Tue, 29 Aug 2017 02:28:34 GMT): Glen (Tue, 29 Aug 2017 02:28:46 GMT): Glen (Tue, 29 Aug 2017 02:29:04 GMT): jyellick (Tue, 29 Aug 2017 02:29:43 GMT): asaningmaxchain (Tue, 29 Aug 2017 02:31:43 GMT): asaningmaxchain (Tue, 29 Aug 2017 02:31:45 GMT): jyellick (Tue, 29 Aug 2017 02:32:38 GMT): asaningmaxchain (Tue, 29 Aug 2017 02:32:56 GMT): Glen (Tue, 29 Aug 2017 02:37:16 GMT): Glen (Tue, 29 Aug 2017 02:38:13 GMT): jyellick (Tue, 29 Aug 2017 02:39:04 GMT): Glen (Tue, 29 Aug 2017 02:39:19 GMT): Glen (Tue, 29 Aug 2017 02:45:08 GMT): asaningmaxchain (Tue, 29 Aug 2017 02:45:18 GMT): jyellick (Tue, 29 Aug 2017 02:45:38 GMT): asaningmaxchain (Tue, 29 Aug 2017 02:46:01 GMT): jyellick (Tue, 29 Aug 2017 02:47:02 GMT): jyellick (Tue, 29 Aug 2017 02:47:52 GMT): asaningmaxchain (Tue, 29 Aug 2017 02:48:01 GMT): jyellick (Tue, 29 Aug 2017 02:48:15 GMT): asaningmaxchain (Tue, 29 Aug 2017 02:48:24 GMT): pschnap (Tue, 29 Aug 2017 17:44:04 GMT): jyellick (Tue, 29 Aug 2017 17:44:46 GMT): jyellick (Tue, 29 Aug 2017 17:45:14 GMT): pschnap (Tue, 29 Aug 2017 17:46:16 GMT): jyellick (Tue, 29 Aug 2017 17:46:27 GMT): pschnap (Tue, 29 Aug 2017 17:46:44 GMT): jyellick (Tue, 29 Aug 2017 17:46:59 GMT): jyellick (Tue, 29 Aug 2017 17:47:21 GMT): jyellick (Tue, 29 Aug 2017 17:47:48 GMT): pschnap (Tue, 29 Aug 2017 17:53:30 GMT): asaningmaxchain (Wed, 30 Aug 2017 01:44:08 GMT): asaningmaxchain 
[Chat log, Wed, 30 Aug 2017 – Wed, 27 Sep 2017: only sender names and timestamps were preserved in this export; the message text for this span (participants including asaningmaxchain, jyellick, kostas, guoger, htyagi90, jworthington, rahulhegde, luckydogchina, qsmen, bh4rtp, and others) is missing.]
GMT): Amjadnz (Wed, 27 Sep 2017 13:34:36 GMT): Amjadnz (Wed, 27 Sep 2017 13:34:57 GMT): Amjadnz (Wed, 27 Sep 2017 14:40:19 GMT): Amjadnz (Wed, 27 Sep 2017 14:40:23 GMT): t_stephens67 (Wed, 27 Sep 2017 17:50:13 GMT): t_stephens67 (Wed, 27 Sep 2017 17:52:13 GMT): kostas (Wed, 27 Sep 2017 18:02:53 GMT): bh4rtp (Thu, 28 Sep 2017 00:44:13 GMT): bh4rtp (Thu, 28 Sep 2017 00:44:13 GMT): bh4rtp (Thu, 28 Sep 2017 01:52:04 GMT): bh4rtp (Thu, 28 Sep 2017 01:52:04 GMT): jyellick (Thu, 28 Sep 2017 02:18:25 GMT): jyellick (Thu, 28 Sep 2017 02:18:54 GMT): bh4rtp (Thu, 28 Sep 2017 02:20:08 GMT): bh4rtp (Thu, 28 Sep 2017 03:23:12 GMT): bh4rtp (Thu, 28 Sep 2017 03:23:12 GMT): jyellick (Thu, 28 Sep 2017 03:32:20 GMT): jyellick (Thu, 28 Sep 2017 03:32:46 GMT): ruigonzalez_mosi (Thu, 28 Sep 2017 05:27:54 GMT): sampath06 (Thu, 28 Sep 2017 05:34:34 GMT): jyellick (Thu, 28 Sep 2017 05:36:14 GMT): jyellick (Thu, 28 Sep 2017 05:36:14 GMT): jyellick (Thu, 28 Sep 2017 05:36:14 GMT): bh4rtp (Thu, 28 Sep 2017 05:52:12 GMT): jyellick (Thu, 28 Sep 2017 05:53:36 GMT): jyellick (Thu, 28 Sep 2017 05:54:12 GMT): bh4rtp (Thu, 28 Sep 2017 05:56:40 GMT): jyellick (Thu, 28 Sep 2017 05:59:18 GMT): sampath06 (Thu, 28 Sep 2017 06:19:30 GMT): Vadim (Thu, 28 Sep 2017 07:16:07 GMT): sampath06 (Thu, 28 Sep 2017 07:32:17 GMT): Vadim (Thu, 28 Sep 2017 07:32:55 GMT): sampath06 (Thu, 28 Sep 2017 07:34:39 GMT): Vadim (Thu, 28 Sep 2017 07:35:17 GMT): Vadim (Thu, 28 Sep 2017 07:35:35 GMT): Vadim (Thu, 28 Sep 2017 07:35:54 GMT): Vadim (Thu, 28 Sep 2017 07:36:42 GMT): sampath06 (Thu, 28 Sep 2017 07:38:34 GMT): Vadim (Thu, 28 Sep 2017 07:42:29 GMT): Vadim (Thu, 28 Sep 2017 07:42:42 GMT): sampath06 (Thu, 28 Sep 2017 08:11:48 GMT): Vadim (Thu, 28 Sep 2017 08:12:45 GMT): Vadim (Thu, 28 Sep 2017 08:13:11 GMT): sampath06 (Thu, 28 Sep 2017 08:13:33 GMT): Vadim (Thu, 28 Sep 2017 08:13:51 GMT): bh4rtp (Thu, 28 Sep 2017 08:18:17 GMT): Vadim (Thu, 28 Sep 2017 08:37:40 GMT): kostas (Thu, 28 Sep 2017 10:24:33 GMT): kostas (Thu, 28 
Sep 2017 10:24:33 GMT): kostas (Thu, 28 Sep 2017 10:24:46 GMT): kostas (Thu, 28 Sep 2017 10:24:51 GMT): kostas (Thu, 28 Sep 2017 11:43:58 GMT): rjones (Thu, 28 Sep 2017 11:43:58 GMT): sampath06 (Thu, 28 Sep 2017 11:55:31 GMT): kostas (Thu, 28 Sep 2017 13:14:42 GMT): qizhang (Thu, 28 Sep 2017 16:02:33 GMT): rjones (Thu, 28 Sep 2017 18:28:03 GMT): rjones (Thu, 28 Sep 2017 18:28:22 GMT): Glen (Fri, 29 Sep 2017 01:35:36 GMT): Glen (Fri, 29 Sep 2017 01:35:36 GMT): asaningmaxchain (Fri, 29 Sep 2017 02:34:07 GMT): Glen (Fri, 29 Sep 2017 03:06:44 GMT): Glen (Fri, 29 Sep 2017 03:07:09 GMT): Glen (Fri, 29 Sep 2017 03:09:46 GMT): rjones (Fri, 29 Sep 2017 03:10:03 GMT): Glen (Fri, 29 Sep 2017 03:10:46 GMT): asaningmaxchain (Fri, 29 Sep 2017 03:58:46 GMT): asaningmaxchain (Fri, 29 Sep 2017 03:58:47 GMT): asaningmaxchain (Fri, 29 Sep 2017 04:16:25 GMT): Ashish (Fri, 29 Sep 2017 04:55:53 GMT): Ashish (Fri, 29 Sep 2017 05:01:25 GMT): asaningmaxchain (Fri, 29 Sep 2017 05:38:12 GMT): Ashish (Fri, 29 Sep 2017 05:39:22 GMT): asaningmaxchain (Fri, 29 Sep 2017 07:07:28 GMT): asaningmaxchain (Fri, 29 Sep 2017 07:07:54 GMT): qsmen (Fri, 29 Sep 2017 09:00:31 GMT): asaningmaxchain (Fri, 29 Sep 2017 09:18:34 GMT): asaningmaxchain (Fri, 29 Sep 2017 09:18:34 GMT): sampath06 (Fri, 29 Sep 2017 09:20:10 GMT): qsmen (Fri, 29 Sep 2017 09:44:59 GMT): aberfou (Fri, 29 Sep 2017 09:52:30 GMT): aberfou (Fri, 29 Sep 2017 09:52:33 GMT): aberfou (Fri, 29 Sep 2017 09:52:56 GMT): luckydogchina (Fri, 29 Sep 2017 09:55:13 GMT): luckydogchina (Fri, 29 Sep 2017 09:55:13 GMT): aberfou (Fri, 29 Sep 2017 09:59:11 GMT): luckydogchina (Fri, 29 Sep 2017 10:02:48 GMT): luckydogchina (Fri, 29 Sep 2017 10:02:48 GMT): gentios (Fri, 29 Sep 2017 10:03:04 GMT): Ashish (Fri, 29 Sep 2017 10:15:44 GMT): Ashish (Fri, 29 Sep 2017 10:15:44 GMT): kostas (Fri, 29 Sep 2017 14:01:07 GMT): kostas (Fri, 29 Sep 2017 14:03:05 GMT): kostas (Fri, 29 Sep 2017 14:05:02 GMT): kostas (Fri, 29 Sep 2017 14:06:08 GMT): jworthington (Fri, 29 Sep 
2017 16:00:11 GMT): jworthington (Fri, 29 Sep 2017 16:01:40 GMT): jyellick (Fri, 29 Sep 2017 16:04:10 GMT): jyellick (Fri, 29 Sep 2017 16:08:01 GMT): jworthington (Fri, 29 Sep 2017 16:09:46 GMT): Ashish (Fri, 29 Sep 2017 16:15:40 GMT): Ashish (Fri, 29 Sep 2017 16:15:40 GMT): qsmen (Sat, 30 Sep 2017 01:01:59 GMT): qsmen (Sat, 30 Sep 2017 01:10:18 GMT): qsmen (Sat, 30 Sep 2017 01:10:18 GMT): qsmen (Sat, 30 Sep 2017 01:10:18 GMT): qsmen (Sat, 30 Sep 2017 01:14:43 GMT): qsmen (Sat, 30 Sep 2017 01:44:34 GMT): asaningmaxchain (Sat, 30 Sep 2017 02:22:49 GMT): asaningmaxchain (Sat, 30 Sep 2017 02:22:53 GMT): qsmen (Sat, 30 Sep 2017 02:24:25 GMT): asaningmaxchain (Sat, 30 Sep 2017 02:24:32 GMT): qsmen (Sat, 30 Sep 2017 02:24:41 GMT): qsmen (Sat, 30 Sep 2017 02:26:51 GMT): asaningmaxchain (Sat, 30 Sep 2017 02:32:19 GMT): qsmen (Sat, 30 Sep 2017 02:37:45 GMT): qsmen (Sat, 30 Sep 2017 02:47:26 GMT): asaningmaxchain (Sat, 30 Sep 2017 02:47:35 GMT): asaningmaxchain (Sat, 30 Sep 2017 02:47:50 GMT): qsmen (Sat, 30 Sep 2017 02:52:19 GMT): luckydogchina (Sat, 30 Sep 2017 03:06:26 GMT): luckydogchina (Sat, 30 Sep 2017 03:13:52 GMT): luckydogchina (Sat, 30 Sep 2017 03:13:52 GMT): asaningmaxchain (Sat, 30 Sep 2017 03:15:14 GMT): asaningmaxchain (Sat, 30 Sep 2017 03:15:34 GMT): asaningmaxchain (Sat, 30 Sep 2017 03:15:50 GMT): asaningmaxchain (Sat, 30 Sep 2017 03:15:51 GMT): luckydogchina (Sat, 30 Sep 2017 03:16:07 GMT): luckydogchina (Sat, 30 Sep 2017 03:17:33 GMT): asaningmaxchain (Sat, 30 Sep 2017 03:17:57 GMT): asaningmaxchain (Sat, 30 Sep 2017 03:19:19 GMT): asaningmaxchain (Sat, 30 Sep 2017 03:19:57 GMT): qsmen (Sat, 30 Sep 2017 03:25:47 GMT): qsmen (Sat, 30 Sep 2017 03:25:47 GMT): qsmen (Sat, 30 Sep 2017 03:35:30 GMT): LordGoodman (Sat, 30 Sep 2017 04:17:22 GMT): asaningmaxchain (Sat, 30 Sep 2017 05:32:14 GMT): asaningmaxchain (Sat, 30 Sep 2017 05:32:14 GMT): asaningmaxchain (Sat, 30 Sep 2017 07:55:43 GMT): Amjadnz (Sat, 30 Sep 2017 09:50:37 GMT): Amjadnz (Sat, 30 Sep 2017 
09:50:50 GMT): Amjadnz (Sat, 30 Sep 2017 09:52:58 GMT): asaningmaxchain (Sat, 30 Sep 2017 09:56:40 GMT): Amjadnz (Sat, 30 Sep 2017 09:58:34 GMT): Amjadnz (Sat, 30 Sep 2017 09:58:59 GMT): Amjadnz (Sat, 30 Sep 2017 10:00:46 GMT): wy (Sat, 30 Sep 2017 10:01:35 GMT): Amjadnz (Sat, 30 Sep 2017 10:02:01 GMT): Amjadnz (Sat, 30 Sep 2017 10:02:21 GMT): Amjadnz (Sat, 30 Sep 2017 10:04:06 GMT): mastersingh24 (Sat, 30 Sep 2017 10:05:18 GMT): Amjadnz (Sat, 30 Sep 2017 11:21:27 GMT): Amjadnz (Sat, 30 Sep 2017 11:21:54 GMT): Amjadnz (Sat, 30 Sep 2017 11:24:37 GMT): Amjadnz (Sat, 30 Sep 2017 11:24:59 GMT): yacovm (Sat, 30 Sep 2017 11:25:12 GMT): yacovm (Sat, 30 Sep 2017 11:25:20 GMT): Amjadnz (Sat, 30 Sep 2017 11:25:42 GMT): yacovm (Sat, 30 Sep 2017 11:25:48 GMT): yacovm (Sat, 30 Sep 2017 11:25:59 GMT): Amjadnz (Sat, 30 Sep 2017 11:26:04 GMT): Amjadnz (Sat, 30 Sep 2017 11:27:27 GMT): Amjadnz (Sat, 30 Sep 2017 11:27:27 GMT): Amjadnz (Sat, 30 Sep 2017 11:29:02 GMT): Amjadnz (Sat, 30 Sep 2017 11:29:09 GMT): yacovm (Sat, 30 Sep 2017 11:29:48 GMT): Amjadnz (Sat, 30 Sep 2017 11:30:08 GMT): yacovm (Sat, 30 Sep 2017 11:30:11 GMT): yacovm (Sat, 30 Sep 2017 11:30:21 GMT): Amjadnz (Sat, 30 Sep 2017 11:30:56 GMT): yacovm (Sat, 30 Sep 2017 11:31:10 GMT): Amjadnz (Sat, 30 Sep 2017 11:31:20 GMT): yacovm (Sat, 30 Sep 2017 11:31:38 GMT): Amjadnz (Sat, 30 Sep 2017 11:31:52 GMT): Amjadnz (Sat, 30 Sep 2017 11:32:12 GMT): yacovm (Sat, 30 Sep 2017 11:32:20 GMT): Amjadnz (Sat, 30 Sep 2017 11:32:31 GMT): Amjadnz (Sat, 30 Sep 2017 11:33:12 GMT): yacovm (Sat, 30 Sep 2017 11:33:14 GMT): Amjadnz (Sat, 30 Sep 2017 11:33:39 GMT): Amjadnz (Sat, 30 Sep 2017 11:33:39 GMT): Amjadnz (Sat, 30 Sep 2017 11:33:39 GMT): Amjadnz (Sat, 30 Sep 2017 11:34:08 GMT): yacovm (Sat, 30 Sep 2017 11:34:34 GMT): yacovm (Sat, 30 Sep 2017 11:34:46 GMT): Amjadnz (Sat, 30 Sep 2017 11:35:46 GMT): Amjadnz (Sat, 30 Sep 2017 11:48:43 GMT): Amjadnz (Sat, 30 Sep 2017 11:49:10 GMT): Amjadnz (Sat, 30 Sep 2017 11:50:14 GMT): Amjadnz (Sat, 30 Sep 
2017 11:51:33 GMT): Amjadnz (Sat, 30 Sep 2017 11:51:50 GMT): yacovm (Sat, 30 Sep 2017 11:52:23 GMT): yacovm (Sat, 30 Sep 2017 11:52:33 GMT): yacovm (Sat, 30 Sep 2017 11:52:40 GMT): Amjadnz (Sat, 30 Sep 2017 11:52:45 GMT): Amjadnz (Sat, 30 Sep 2017 11:52:45 GMT): yacovm (Sat, 30 Sep 2017 11:53:10 GMT): yacovm (Sat, 30 Sep 2017 11:53:16 GMT): yacovm (Sat, 30 Sep 2017 11:53:16 GMT): yacovm (Sat, 30 Sep 2017 11:54:08 GMT): yacovm (Sat, 30 Sep 2017 11:54:27 GMT): Amjadnz (Sat, 30 Sep 2017 12:14:34 GMT): Amjadnz (Sat, 30 Sep 2017 12:54:20 GMT): Amjadnz (Sat, 30 Sep 2017 12:54:28 GMT): Amjadnz (Sat, 30 Sep 2017 12:54:48 GMT): Amjadnz (Sat, 30 Sep 2017 12:55:04 GMT): Amjadnz (Sat, 30 Sep 2017 12:55:48 GMT): Amjadnz (Sat, 30 Sep 2017 12:56:28 GMT): Amjadnz (Sat, 30 Sep 2017 12:58:14 GMT): Amjadnz (Sat, 30 Sep 2017 19:19:21 GMT): Amjadnz (Sat, 30 Sep 2017 19:19:21 GMT): Amjadnz (Sat, 30 Sep 2017 19:19:21 GMT): jyellick (Sun, 01 Oct 2017 02:31:34 GMT): jyellick (Sun, 01 Oct 2017 02:31:34 GMT): jyellick (Sun, 01 Oct 2017 02:31:34 GMT): asaningmaxchain (Sun, 01 Oct 2017 03:57:47 GMT): asaningmaxchain (Sun, 01 Oct 2017 03:57:48 GMT): asaningmaxchain (Sun, 01 Oct 2017 03:58:43 GMT): Amjadnz (Sun, 01 Oct 2017 15:54:04 GMT): Amjadnz (Sun, 01 Oct 2017 19:43:45 GMT): Amjadnz (Sun, 01 Oct 2017 19:44:03 GMT): Amjadnz (Sun, 01 Oct 2017 19:44:20 GMT): yacovm (Sun, 01 Oct 2017 19:44:26 GMT): Amjadnz (Sun, 01 Oct 2017 19:44:36 GMT): yacovm (Sun, 01 Oct 2017 19:44:57 GMT): yacovm (Sun, 01 Oct 2017 19:45:31 GMT): Amjadnz (Sun, 01 Oct 2017 19:45:46 GMT): Amjadnz (Sun, 01 Oct 2017 19:48:46 GMT): Amjadnz (Sun, 01 Oct 2017 19:49:31 GMT): Amjadnz (Sun, 01 Oct 2017 19:51:44 GMT): Amjadnz (Sun, 01 Oct 2017 19:52:57 GMT): Amjadnz (Sun, 01 Oct 2017 19:52:57 GMT): yacovm (Sun, 01 Oct 2017 20:01:22 GMT): Amjadnz (Sun, 01 Oct 2017 20:32:28 GMT): Amjadnz (Sun, 01 Oct 2017 20:34:16 GMT): Amjadnz (Sun, 01 Oct 2017 20:34:37 GMT): Amjadnz (Sun, 01 Oct 2017 20:35:17 GMT): Amjadnz (Sun, 01 Oct 2017 20:35:20 
GMT): Amjadnz (Sun, 01 Oct 2017 20:36:48 GMT): jyellick (Mon, 02 Oct 2017 01:15:32 GMT): jyellick (Mon, 02 Oct 2017 01:16:30 GMT): jyellick (Mon, 02 Oct 2017 01:16:30 GMT): Amjadnz (Mon, 02 Oct 2017 03:08:40 GMT): Amjadnz (Mon, 02 Oct 2017 03:09:43 GMT): Amjadnz (Mon, 02 Oct 2017 03:10:35 GMT): Amjadnz (Mon, 02 Oct 2017 03:19:19 GMT): jyellick (Mon, 02 Oct 2017 05:28:59 GMT): LordGoodman (Mon, 02 Oct 2017 06:43:43 GMT): LordGoodman (Mon, 02 Oct 2017 06:44:35 GMT): LordGoodman (Mon, 02 Oct 2017 06:47:30 GMT): yacovm (Mon, 02 Oct 2017 07:09:22 GMT): yacovm (Mon, 02 Oct 2017 07:09:41 GMT): yacovm (Mon, 02 Oct 2017 07:10:36 GMT): LordGoodman (Mon, 02 Oct 2017 07:32:46 GMT): yacovm (Mon, 02 Oct 2017 07:34:59 GMT): LordGoodman (Mon, 02 Oct 2017 07:36:37 GMT): gentios (Mon, 02 Oct 2017 07:44:25 GMT): gentios (Mon, 02 Oct 2017 07:44:28 GMT): gentios (Mon, 02 Oct 2017 07:44:42 GMT): gentios (Mon, 02 Oct 2017 07:44:43 GMT): gentios (Mon, 02 Oct 2017 08:16:11 GMT): gentios (Mon, 02 Oct 2017 08:16:16 GMT): yacovm (Mon, 02 Oct 2017 08:16:52 GMT): gentios (Mon, 02 Oct 2017 08:17:02 GMT): gentios (Mon, 02 Oct 2017 08:17:09 GMT): gentios (Mon, 02 Oct 2017 08:17:09 GMT): yacovm (Mon, 02 Oct 2017 08:17:26 GMT): gentios (Mon, 02 Oct 2017 08:18:03 GMT): gentios (Mon, 02 Oct 2017 08:18:04 GMT): yacovm (Mon, 02 Oct 2017 08:18:41 GMT): gentios (Mon, 02 Oct 2017 08:19:08 GMT): gentios (Mon, 02 Oct 2017 08:19:23 GMT): gentios (Mon, 02 Oct 2017 08:21:21 GMT): gentios (Mon, 02 Oct 2017 08:22:49 GMT): gentios (Mon, 02 Oct 2017 08:22:51 GMT): gentios (Mon, 02 Oct 2017 08:23:06 GMT): gentios (Mon, 02 Oct 2017 08:23:21 GMT): yacovm (Mon, 02 Oct 2017 08:26:38 GMT): yacovm (Mon, 02 Oct 2017 08:26:43 GMT): gentios (Mon, 02 Oct 2017 08:27:19 GMT): gentios (Mon, 02 Oct 2017 08:27:28 GMT): yacovm (Mon, 02 Oct 2017 08:28:05 GMT): gentios (Mon, 02 Oct 2017 08:28:30 GMT): gentios (Mon, 02 Oct 2017 08:28:32 GMT): gentios (Mon, 02 Oct 2017 08:28:39 GMT): yacovm (Mon, 02 Oct 2017 08:28:50 GMT): gentios 
(Mon, 02 Oct 2017 08:29:05 GMT): gentios (Mon, 02 Oct 2017 08:29:15 GMT): gentios (Mon, 02 Oct 2017 09:09:21 GMT): gentios (Mon, 02 Oct 2017 09:09:40 GMT): lclclc (Mon, 02 Oct 2017 09:09:40 GMT): stchrysa (Mon, 02 Oct 2017 11:51:17 GMT): aberfou (Mon, 02 Oct 2017 16:08:12 GMT): jyellick (Mon, 02 Oct 2017 16:31:01 GMT): aberfou (Mon, 02 Oct 2017 16:31:36 GMT): aberfou (Mon, 02 Oct 2017 16:31:51 GMT): jyellick (Mon, 02 Oct 2017 16:33:09 GMT): aberfou (Mon, 02 Oct 2017 16:34:36 GMT): jyellick (Mon, 02 Oct 2017 16:34:56 GMT): jyellick (Mon, 02 Oct 2017 16:35:42 GMT): aberfou (Mon, 02 Oct 2017 16:36:17 GMT): aberfou (Mon, 02 Oct 2017 16:37:28 GMT): jyellick (Mon, 02 Oct 2017 16:40:10 GMT): jyellick (Mon, 02 Oct 2017 16:40:48 GMT): jyellick (Mon, 02 Oct 2017 16:42:08 GMT): aberfou (Mon, 02 Oct 2017 16:42:13 GMT): aberfou (Mon, 02 Oct 2017 16:42:25 GMT): jyellick (Mon, 02 Oct 2017 16:42:46 GMT): aberfou (Mon, 02 Oct 2017 16:43:12 GMT): jyellick (Mon, 02 Oct 2017 16:43:35 GMT): jyellick (Mon, 02 Oct 2017 16:43:35 GMT): jyellick (Mon, 02 Oct 2017 16:44:01 GMT): aberfou (Mon, 02 Oct 2017 16:44:19 GMT): jyellick (Mon, 02 Oct 2017 16:45:11 GMT): jyellick (Mon, 02 Oct 2017 16:45:45 GMT): aberfou (Mon, 02 Oct 2017 16:45:56 GMT): jyellick (Mon, 02 Oct 2017 16:46:16 GMT): aberfou (Mon, 02 Oct 2017 16:46:24 GMT): aberfou (Mon, 02 Oct 2017 16:46:40 GMT): jjason (Mon, 02 Oct 2017 17:27:52 GMT): bsteinfeld (Mon, 02 Oct 2017 20:34:55 GMT): bsteinfeld (Mon, 02 Oct 2017 20:36:39 GMT): kostas (Mon, 02 Oct 2017 20:44:01 GMT): bsteinfeld (Mon, 02 Oct 2017 21:05:38 GMT): bsteinfeld (Mon, 02 Oct 2017 21:05:38 GMT): kostas (Mon, 02 Oct 2017 21:07:32 GMT): bsteinfeld (Mon, 02 Oct 2017 21:10:51 GMT): kostas (Mon, 02 Oct 2017 21:10:57 GMT): bsteinfeld (Mon, 02 Oct 2017 21:11:20 GMT): jyellick (Mon, 02 Oct 2017 21:11:35 GMT): bsteinfeld (Mon, 02 Oct 2017 21:14:01 GMT): sanchezl (Mon, 02 Oct 2017 21:25:10 GMT): gauthampamu (Mon, 02 Oct 2017 23:57:05 GMT): gauthampamu (Mon, 02 Oct 2017 23:57:21 
GMT): jyellick (Tue, 03 Oct 2017 00:23:36 GMT): Amjadnz (Tue, 03 Oct 2017 05:57:40 GMT): Amjadnz (Tue, 03 Oct 2017 07:31:54 GMT): Amjadnz (Tue, 03 Oct 2017 07:31:54 GMT): Amjadnz (Tue, 03 Oct 2017 07:33:16 GMT): Amjadnz (Tue, 03 Oct 2017 07:33:49 GMT): Amjadnz (Tue, 03 Oct 2017 07:34:12 GMT): Amjadnz (Tue, 03 Oct 2017 07:34:51 GMT): Amjadnz (Tue, 03 Oct 2017 07:34:51 GMT): Vadim (Tue, 03 Oct 2017 07:37:58 GMT): Amjadnz (Tue, 03 Oct 2017 07:38:15 GMT): Amjadnz (Tue, 03 Oct 2017 07:38:26 GMT): Amjadnz (Tue, 03 Oct 2017 07:38:34 GMT): Vadim (Tue, 03 Oct 2017 07:38:42 GMT): Vadim (Tue, 03 Oct 2017 07:39:32 GMT): Amjadnz (Tue, 03 Oct 2017 07:39:44 GMT): Amjadnz (Tue, 03 Oct 2017 07:39:46 GMT): Amjadnz (Tue, 03 Oct 2017 07:40:02 GMT): Amjadnz (Tue, 03 Oct 2017 07:41:18 GMT): Amjadnz (Tue, 03 Oct 2017 07:42:37 GMT): Amjadnz (Tue, 03 Oct 2017 07:42:37 GMT): Vadim (Tue, 03 Oct 2017 07:42:39 GMT): Vadim (Tue, 03 Oct 2017 07:42:50 GMT): Vadim (Tue, 03 Oct 2017 07:43:07 GMT): Vadim (Tue, 03 Oct 2017 07:43:07 GMT): Amjadnz (Tue, 03 Oct 2017 07:43:46 GMT): Amjadnz (Tue, 03 Oct 2017 07:44:42 GMT): Amjadnz (Tue, 03 Oct 2017 07:44:48 GMT): Vadim (Tue, 03 Oct 2017 07:44:51 GMT): Amjadnz (Tue, 03 Oct 2017 07:44:58 GMT): yacovm (Tue, 03 Oct 2017 08:43:20 GMT): yacovm (Tue, 03 Oct 2017 08:45:09 GMT): yacovm (Tue, 03 Oct 2017 08:45:49 GMT): yacovm (Tue, 03 Oct 2017 08:53:12 GMT): Vadim (Tue, 03 Oct 2017 08:53:20 GMT): yacovm (Tue, 03 Oct 2017 08:53:46 GMT): Vadim (Tue, 03 Oct 2017 08:55:39 GMT): Vadim (Tue, 03 Oct 2017 08:56:38 GMT): yacovm (Tue, 03 Oct 2017 08:56:59 GMT): yacovm (Tue, 03 Oct 2017 08:57:00 GMT): Vadim (Tue, 03 Oct 2017 08:57:10 GMT): yacovm (Tue, 03 Oct 2017 08:57:17 GMT): yacovm (Tue, 03 Oct 2017 08:57:21 GMT): Vadim (Tue, 03 Oct 2017 08:59:28 GMT): yacovm (Tue, 03 Oct 2017 08:59:49 GMT): yacovm (Tue, 03 Oct 2017 08:59:51 GMT): Vadim (Tue, 03 Oct 2017 09:00:17 GMT): Vadim (Tue, 03 Oct 2017 09:01:05 GMT): mastersingh24 (Tue, 03 Oct 2017 09:43:30 GMT): Vadim (Tue, 03 Oct 
2017 09:44:02 GMT): Vadim (Tue, 03 Oct 2017 09:44:33 GMT): Vadim (Tue, 03 Oct 2017 09:44:52 GMT): mastersingh24 (Tue, 03 Oct 2017 09:45:53 GMT): Vadim (Tue, 03 Oct 2017 09:47:36 GMT): Vadim (Tue, 03 Oct 2017 09:47:38 GMT): Vadim (Tue, 03 Oct 2017 09:49:34 GMT): Vadim (Tue, 03 Oct 2017 09:49:57 GMT): Vadim (Tue, 03 Oct 2017 09:51:50 GMT): mastersingh24 (Tue, 03 Oct 2017 10:00:00 GMT): mastersingh24 (Tue, 03 Oct 2017 10:00:00 GMT): Vadim (Tue, 03 Oct 2017 10:00:44 GMT): Vadim (Tue, 03 Oct 2017 10:01:15 GMT): mastersingh24 (Tue, 03 Oct 2017 10:02:06 GMT): lovesh (Tue, 03 Oct 2017 14:04:47 GMT): eetti (Tue, 03 Oct 2017 14:31:56 GMT): t_stephens67 (Tue, 03 Oct 2017 14:45:23 GMT): t_stephens67 (Tue, 03 Oct 2017 14:46:21 GMT): kostas (Tue, 03 Oct 2017 14:54:56 GMT): stacie (Tue, 03 Oct 2017 14:55:26 GMT): ganbold (Tue, 03 Oct 2017 14:55:31 GMT): t_stephens67 (Tue, 03 Oct 2017 14:57:47 GMT): kostas (Tue, 03 Oct 2017 14:58:08 GMT): kostas (Tue, 03 Oct 2017 14:59:06 GMT): kostas (Tue, 03 Oct 2017 14:59:47 GMT): kostas (Tue, 03 Oct 2017 15:00:56 GMT): t_stephens67 (Tue, 03 Oct 2017 15:01:11 GMT): t_stephens67 (Tue, 03 Oct 2017 15:15:59 GMT): Amjadnz (Tue, 03 Oct 2017 16:03:03 GMT): Amjadnz (Tue, 03 Oct 2017 16:03:03 GMT): Amjadnz (Tue, 03 Oct 2017 16:03:03 GMT): yacovm (Tue, 03 Oct 2017 16:22:53 GMT): yacovm (Tue, 03 Oct 2017 16:23:09 GMT): yacovm (Tue, 03 Oct 2017 16:23:13 GMT): t_stephens67 (Tue, 03 Oct 2017 17:51:32 GMT): t_stephens67 (Tue, 03 Oct 2017 17:54:31 GMT): kostas (Tue, 03 Oct 2017 18:13:49 GMT): jyellick (Tue, 03 Oct 2017 18:15:31 GMT): t_stephens67 (Tue, 03 Oct 2017 18:18:34 GMT): kostas (Tue, 03 Oct 2017 18:18:50 GMT): t_stephens67 (Tue, 03 Oct 2017 18:18:59 GMT): kostas (Tue, 03 Oct 2017 18:19:36 GMT): jyellick (Tue, 03 Oct 2017 18:22:18 GMT): t_stephens67 (Tue, 03 Oct 2017 18:29:17 GMT): t_stephens67 (Tue, 03 Oct 2017 18:30:09 GMT): t_stephens67 (Tue, 03 Oct 2017 18:30:35 GMT): kostas (Tue, 03 Oct 2017 18:31:32 GMT): kostas (Tue, 03 Oct 2017 19:25:06 GMT): 
kostas (Tue, 03 Oct 2017 19:25:30 GMT): t_stephens67 (Tue, 03 Oct 2017 20:03:38 GMT): kostas (Tue, 03 Oct 2017 20:04:57 GMT): t_stephens67 (Tue, 03 Oct 2017 20:05:11 GMT): kostas (Tue, 03 Oct 2017 20:05:35 GMT): t_stephens67 (Tue, 03 Oct 2017 20:06:27 GMT): kostas (Tue, 03 Oct 2017 20:07:32 GMT): t_stephens67 (Tue, 03 Oct 2017 20:08:03 GMT): kostas (Tue, 03 Oct 2017 20:08:58 GMT): t_stephens67 (Tue, 03 Oct 2017 20:18:14 GMT): kostas (Tue, 03 Oct 2017 20:22:11 GMT): kostas (Tue, 03 Oct 2017 20:22:27 GMT): t_stephens67 (Tue, 03 Oct 2017 20:24:00 GMT): kostas (Tue, 03 Oct 2017 20:24:41 GMT): t_stephens67 (Tue, 03 Oct 2017 20:25:23 GMT): kostas (Tue, 03 Oct 2017 20:26:12 GMT): kostas (Tue, 03 Oct 2017 20:27:54 GMT): t_stephens67 (Tue, 03 Oct 2017 20:29:55 GMT): kostas (Tue, 03 Oct 2017 20:30:27 GMT): kostas (Tue, 03 Oct 2017 20:31:50 GMT): kostas (Tue, 03 Oct 2017 20:32:54 GMT): t_stephens67 (Tue, 03 Oct 2017 20:33:18 GMT): t_stephens67 (Tue, 03 Oct 2017 20:33:45 GMT): t_stephens67 (Tue, 03 Oct 2017 20:36:18 GMT): t_stephens67 (Tue, 03 Oct 2017 20:36:24 GMT): kostas (Tue, 03 Oct 2017 20:37:38 GMT): t_stephens67 (Tue, 03 Oct 2017 20:38:17 GMT): kostas (Tue, 03 Oct 2017 20:39:05 GMT): t_stephens67 (Tue, 03 Oct 2017 20:41:06 GMT): kostas (Tue, 03 Oct 2017 20:45:27 GMT): kostas (Tue, 03 Oct 2017 20:47:18 GMT): kostas (Tue, 03 Oct 2017 20:47:18 GMT): t_stephens67 (Tue, 03 Oct 2017 20:48:52 GMT): t_stephens67 (Tue, 03 Oct 2017 20:49:32 GMT): t_stephens67 (Tue, 03 Oct 2017 20:49:51 GMT): kostas (Tue, 03 Oct 2017 20:50:31 GMT): t_stephens67 (Tue, 03 Oct 2017 20:52:47 GMT): kostas (Tue, 03 Oct 2017 20:54:24 GMT): falix (Wed, 04 Oct 2017 01:48:09 GMT): AlekNS (Wed, 04 Oct 2017 05:14:17 GMT): Luke_Chen (Wed, 04 Oct 2017 07:17:58 GMT): carlosfaria (Wed, 04 Oct 2017 12:49:03 GMT): t_stephens67 (Wed, 04 Oct 2017 13:29:27 GMT): t_stephens67 (Wed, 04 Oct 2017 13:30:23 GMT): kostas (Wed, 04 Oct 2017 13:37:40 GMT): kostas (Wed, 04 Oct 2017 13:37:55 GMT): t_stephens67 (Wed, 04 Oct 2017 
13:50:30 GMT): t_stephens67 (Wed, 04 Oct 2017 13:50:35 GMT): t_stephens67 (Wed, 04 Oct 2017 13:50:51 GMT): t_stephens67 (Wed, 04 Oct 2017 13:51:04 GMT): kostas (Wed, 04 Oct 2017 13:52:15 GMT): t_stephens67 (Wed, 04 Oct 2017 13:53:48 GMT): gentios (Wed, 04 Oct 2017 14:30:38 GMT): gentios (Wed, 04 Oct 2017 14:30:47 GMT): gentios (Wed, 04 Oct 2017 14:30:49 GMT): gentios (Wed, 04 Oct 2017 14:30:49 GMT): gentios (Wed, 04 Oct 2017 14:30:58 GMT): jyellick (Wed, 04 Oct 2017 14:31:49 GMT): jyellick (Wed, 04 Oct 2017 14:38:01 GMT): qizhang (Wed, 04 Oct 2017 15:05:47 GMT): qizhang (Wed, 04 Oct 2017 15:05:55 GMT): qizhang (Wed, 04 Oct 2017 15:06:12 GMT): jyellick (Wed, 04 Oct 2017 15:08:33 GMT): jy (Wed, 04 Oct 2017 15:09:20 GMT): jyellick (Wed, 04 Oct 2017 15:09:25 GMT): jy (Wed, 04 Oct 2017 15:09:34 GMT): yacovm (Wed, 04 Oct 2017 15:10:04 GMT): jyellick (Wed, 04 Oct 2017 15:10:30 GMT): jy (Wed, 04 Oct 2017 15:12:39 GMT): asaningmaxchain (Wed, 04 Oct 2017 15:14:48 GMT): jyellick (Wed, 04 Oct 2017 15:15:16 GMT): asaningmaxchain (Wed, 04 Oct 2017 15:20:47 GMT): asaningmaxchain (Wed, 04 Oct 2017 15:20:47 GMT): kostas (Wed, 04 Oct 2017 15:22:13 GMT): kostas (Wed, 04 Oct 2017 15:22:13 GMT): kostas (Wed, 04 Oct 2017 15:27:43 GMT): asaningmaxchain (Wed, 04 Oct 2017 15:28:05 GMT): kostas (Wed, 04 Oct 2017 15:28:20 GMT): kostas (Wed, 04 Oct 2017 15:28:34 GMT): asaningmaxchain (Wed, 04 Oct 2017 15:28:55 GMT): asaningmaxchain (Wed, 04 Oct 2017 15:29:14 GMT): asaningmaxchain (Wed, 04 Oct 2017 15:29:14 GMT): asaningmaxchain (Wed, 04 Oct 2017 15:29:14 GMT): asaningmaxchain (Wed, 04 Oct 2017 15:29:14 GMT): kostas (Wed, 04 Oct 2017 15:29:15 GMT): asaningmaxchain (Wed, 04 Oct 2017 15:30:07 GMT): asaningmaxchain (Wed, 04 Oct 2017 15:31:53 GMT): asaningmaxchain (Wed, 04 Oct 2017 15:31:53 GMT): qizhang (Wed, 04 Oct 2017 15:32:33 GMT): asaningmaxchain (Wed, 04 Oct 2017 15:33:55 GMT): asaningmaxchain (Wed, 04 Oct 2017 15:33:55 GMT): kostas (Wed, 04 Oct 2017 15:34:11 GMT): kostas (Wed, 04 Oct 2017 
15:34:53 GMT): kostas (Wed, 04 Oct 2017 15:34:53 GMT): asaningmaxchain (Wed, 04 Oct 2017 15:35:26 GMT): kostas (Wed, 04 Oct 2017 15:35:49 GMT): kostas (Wed, 04 Oct 2017 15:35:49 GMT): kostas (Wed, 04 Oct 2017 15:36:29 GMT): kostas (Wed, 04 Oct 2017 15:36:29 GMT): kostas (Wed, 04 Oct 2017 15:36:43 GMT): kostas (Wed, 04 Oct 2017 15:37:01 GMT): jyellick (Wed, 04 Oct 2017 15:37:04 GMT): jyellick (Wed, 04 Oct 2017 15:37:04 GMT): jyellick (Wed, 04 Oct 2017 15:37:04 GMT): asaningmaxchain (Wed, 04 Oct 2017 15:39:34 GMT): qizhang (Wed, 04 Oct 2017 15:40:21 GMT): qizhang (Wed, 04 Oct 2017 15:41:53 GMT): jyellick (Wed, 04 Oct 2017 15:44:04 GMT): jyellick (Wed, 04 Oct 2017 15:44:04 GMT): kostas (Wed, 04 Oct 2017 15:45:22 GMT): kostas (Wed, 04 Oct 2017 15:45:22 GMT): qizhang (Wed, 04 Oct 2017 15:45:37 GMT): jyellick (Wed, 04 Oct 2017 15:47:56 GMT): qizhang (Wed, 04 Oct 2017 15:52:19 GMT): kostas (Wed, 04 Oct 2017 15:53:05 GMT): kostas (Wed, 04 Oct 2017 15:53:23 GMT): kostas (Wed, 04 Oct 2017 15:53:29 GMT): qizhang (Wed, 04 Oct 2017 15:54:03 GMT): qizhang (Wed, 04 Oct 2017 15:54:03 GMT): kostas (Wed, 04 Oct 2017 15:54:32 GMT): kostas (Wed, 04 Oct 2017 15:54:44 GMT): kostas (Wed, 04 Oct 2017 15:54:58 GMT): qizhang (Wed, 04 Oct 2017 15:55:41 GMT): qizhang (Wed, 04 Oct 2017 15:55:41 GMT): kostas (Wed, 04 Oct 2017 15:56:38 GMT): kostas (Wed, 04 Oct 2017 15:57:18 GMT): qizhang (Wed, 04 Oct 2017 15:58:08 GMT): kostas (Wed, 04 Oct 2017 15:59:42 GMT): qizhang (Wed, 04 Oct 2017 16:03:07 GMT): qizhang (Wed, 04 Oct 2017 16:03:07 GMT): kostas (Wed, 04 Oct 2017 16:04:58 GMT): asaningmaxchain (Wed, 04 Oct 2017 16:28:00 GMT): asaningmaxchain (Wed, 04 Oct 2017 16:28:00 GMT): kostas (Wed, 04 Oct 2017 16:30:03 GMT): jyellick (Wed, 04 Oct 2017 16:30:26 GMT): asaningmaxchain (Wed, 04 Oct 2017 16:35:32 GMT): asaningmaxchain (Wed, 04 Oct 2017 16:42:53 GMT): jyellick (Wed, 04 Oct 2017 16:47:18 GMT): asaningmaxchain (Wed, 04 Oct 2017 16:49:21 GMT): jyellick (Wed, 04 Oct 2017 16:52:02 GMT): jyellick 
(Wed, 04 Oct 2017 16:52:02 GMT): jyellick (Wed, 04 Oct 2017 16:52:02 GMT): asaningmaxchain (Wed, 04 Oct 2017 16:52:44 GMT): kostas (Wed, 04 Oct 2017 18:54:41 GMT): jmcnevin (Wed, 04 Oct 2017 20:20:14 GMT): kostas (Wed, 04 Oct 2017 20:28:41 GMT): qizhang (Wed, 04 Oct 2017 20:38:46 GMT): jyellick (Wed, 04 Oct 2017 20:41:47 GMT): jrosmith (Wed, 04 Oct 2017 20:43:53 GMT): jmcnevin (Wed, 04 Oct 2017 20:47:43 GMT): asaningmaxchain (Thu, 05 Oct 2017 01:58:33 GMT): asaningmaxchain (Thu, 05 Oct 2017 01:58:33 GMT): vu3mmg (Thu, 05 Oct 2017 02:07:18 GMT): jyellick (Thu, 05 Oct 2017 02:08:44 GMT): vu3mmg (Thu, 05 Oct 2017 02:09:37 GMT): vu3mmg (Thu, 05 Oct 2017 02:22:30 GMT): jyellick (Thu, 05 Oct 2017 02:47:52 GMT): vu3mmg (Thu, 05 Oct 2017 02:50:52 GMT): vu3mmg (Thu, 05 Oct 2017 02:51:59 GMT): gauthampamu (Thu, 05 Oct 2017 02:54:03 GMT): gauthampamu (Thu, 05 Oct 2017 02:57:25 GMT): jyellick (Thu, 05 Oct 2017 03:17:37 GMT): jyellick (Thu, 05 Oct 2017 03:18:38 GMT): vu3mmg (Thu, 05 Oct 2017 03:26:20 GMT): kostas (Thu, 05 Oct 2017 04:14:13 GMT): kostas (Thu, 05 Oct 2017 04:15:47 GMT): vu3mmg (Thu, 05 Oct 2017 04:17:00 GMT): kostas (Thu, 05 Oct 2017 04:17:37 GMT): vu3mmg (Thu, 05 Oct 2017 04:17:48 GMT): gentios (Thu, 05 Oct 2017 07:52:48 GMT): gentios (Thu, 05 Oct 2017 07:52:51 GMT): gentios (Thu, 05 Oct 2017 07:53:05 GMT): gentios (Thu, 05 Oct 2017 07:53:11 GMT): gentios (Thu, 05 Oct 2017 07:53:15 GMT): gentios (Thu, 05 Oct 2017 08:14:52 GMT): gentios (Thu, 05 Oct 2017 08:14:52 GMT): asaningmaxchain (Thu, 05 Oct 2017 08:20:34 GMT): asaningmaxchain (Thu, 05 Oct 2017 08:20:34 GMT): gentios (Thu, 05 Oct 2017 08:43:51 GMT): gentios (Thu, 05 Oct 2017 08:43:55 GMT): gentios (Thu, 05 Oct 2017 08:43:56 GMT): gentios (Thu, 05 Oct 2017 08:45:50 GMT): gentios (Thu, 05 Oct 2017 08:45:51 GMT): asaningmaxchain (Thu, 05 Oct 2017 08:47:34 GMT): gentios (Thu, 05 Oct 2017 08:48:00 GMT): gentios (Thu, 05 Oct 2017 08:48:41 GMT): asaningmaxchain (Thu, 05 Oct 2017 08:48:57 GMT): gentios (Thu, 05 Oct 
2017 08:49:35 GMT): gentios (Thu, 05 Oct 2017 08:49:37 GMT): asaningmaxchain (Thu, 05 Oct 2017 08:50:15 GMT): gentios (Thu, 05 Oct 2017 08:54:28 GMT): gentios (Thu, 05 Oct 2017 09:08:24 GMT): gentios (Thu, 05 Oct 2017 09:24:03 GMT): gentios (Thu, 05 Oct 2017 09:24:06 GMT): asaningmaxchain (Thu, 05 Oct 2017 09:24:33 GMT): gentios (Thu, 05 Oct 2017 09:24:51 GMT): gentios (Thu, 05 Oct 2017 09:43:19 GMT): gentios (Thu, 05 Oct 2017 09:48:18 GMT): gentios (Thu, 05 Oct 2017 09:48:22 GMT): asaningmaxchain (Thu, 05 Oct 2017 10:22:21 GMT): gentios (Thu, 05 Oct 2017 11:02:16 GMT): asaningmaxchain (Thu, 05 Oct 2017 11:03:01 GMT): gentios (Thu, 05 Oct 2017 11:03:37 GMT): gentios (Thu, 05 Oct 2017 14:13:24 GMT): gentios (Thu, 05 Oct 2017 14:13:27 GMT): gentios (Thu, 05 Oct 2017 14:13:31 GMT): gentios (Thu, 05 Oct 2017 14:13:32 GMT): gentios (Thu, 05 Oct 2017 14:13:44 GMT): gentios (Thu, 05 Oct 2017 14:13:44 GMT): jyellick (Thu, 05 Oct 2017 14:18:34 GMT): gentios (Thu, 05 Oct 2017 14:19:15 GMT): jyellick (Thu, 05 Oct 2017 14:20:03 GMT): gentios (Thu, 05 Oct 2017 14:21:04 GMT): gentios (Thu, 05 Oct 2017 14:21:14 GMT): jyellick (Thu, 05 Oct 2017 14:22:03 GMT): gentios (Thu, 05 Oct 2017 14:22:43 GMT): gentios (Thu, 05 Oct 2017 14:22:53 GMT): gentios (Thu, 05 Oct 2017 14:23:06 GMT): gentios (Thu, 05 Oct 2017 14:24:34 GMT): jyellick (Thu, 05 Oct 2017 14:24:43 GMT): jyellick (Thu, 05 Oct 2017 14:24:43 GMT): gentios (Thu, 05 Oct 2017 14:25:08 GMT): gentios (Thu, 05 Oct 2017 14:25:24 GMT): gentios (Thu, 05 Oct 2017 14:25:44 GMT): gentios (Thu, 05 Oct 2017 14:26:05 GMT): gentios (Thu, 05 Oct 2017 14:26:28 GMT): gentios (Thu, 05 Oct 2017 14:26:30 GMT): gentios (Thu, 05 Oct 2017 14:26:46 GMT): gentios (Thu, 05 Oct 2017 14:27:14 GMT): jyellick (Thu, 05 Oct 2017 14:28:13 GMT): gentios (Thu, 05 Oct 2017 14:29:49 GMT): gentios (Thu, 05 Oct 2017 14:30:03 GMT): gentios (Thu, 05 Oct 2017 14:30:20 GMT): gentios (Thu, 05 Oct 2017 14:30:32 GMT): gentios (Thu, 05 Oct 2017 14:30:39 GMT): jyellick (Thu, 
[Export note: message bodies for this span are missing; only speaker/timestamp markers survive. The stripped messages cover Thu, 05 Oct 2017 14:35:55 GMT through Tue, 24 Oct 2017, with participants including kostas, jyellick, yacovm, rahulhegde, gentios, SimonOberzan, vdods, C0rWin, qizhang, wy, guoger, CodeReaper, Asara, chfalak, asaningmaxchain, t_stephens67, Vadim, bh4rtp, daygee, niteshsolanki, rbulgarelli, honeyc, ahmadzafar, JeremyMet, and others.]
10:12:59 GMT): rbulgarelli (Tue, 24 Oct 2017 12:19:48 GMT): rbulgarelli (Tue, 24 Oct 2017 12:19:59 GMT): rbulgarelli (Tue, 24 Oct 2017 12:20:04 GMT): rbulgarelli (Tue, 24 Oct 2017 12:20:08 GMT): rbulgarelli (Tue, 24 Oct 2017 12:23:28 GMT): kostas (Tue, 24 Oct 2017 13:09:06 GMT): rbulgarelli (Tue, 24 Oct 2017 13:11:22 GMT): rbulgarelli (Tue, 24 Oct 2017 13:56:32 GMT): sanchezl (Tue, 24 Oct 2017 14:32:50 GMT): sanchezl (Tue, 24 Oct 2017 14:33:19 GMT): sanchezl (Tue, 24 Oct 2017 14:34:30 GMT): rbulgarelli (Tue, 24 Oct 2017 17:36:36 GMT): kostas (Tue, 24 Oct 2017 17:53:21 GMT): kostas (Tue, 24 Oct 2017 17:53:36 GMT): t_stephens67 (Wed, 25 Oct 2017 14:58:21 GMT): Vadim (Wed, 25 Oct 2017 15:00:41 GMT): t_stephens67 (Wed, 25 Oct 2017 15:00:55 GMT): kostas (Wed, 25 Oct 2017 15:01:37 GMT): t_stephens67 (Wed, 25 Oct 2017 15:03:18 GMT): kostas (Wed, 25 Oct 2017 15:03:48 GMT): t_stephens67 (Wed, 25 Oct 2017 15:04:11 GMT): t_stephens67 (Wed, 25 Oct 2017 15:07:10 GMT): kostas (Wed, 25 Oct 2017 15:07:17 GMT): t_stephens67 (Wed, 25 Oct 2017 15:08:08 GMT): t_stephens67 (Wed, 25 Oct 2017 15:09:09 GMT): linzheng (Thu, 26 Oct 2017 04:16:17 GMT): srongzhe (Thu, 26 Oct 2017 09:02:23 GMT): srongzhe (Thu, 26 Oct 2017 09:02:36 GMT): srongzhe (Thu, 26 Oct 2017 09:04:28 GMT): srongzhe (Thu, 26 Oct 2017 10:25:34 GMT): kostas (Thu, 26 Oct 2017 10:39:07 GMT): kostas (Thu, 26 Oct 2017 10:40:42 GMT): kostas (Thu, 26 Oct 2017 10:57:58 GMT): srongzhe (Thu, 26 Oct 2017 12:24:41 GMT): kostas (Thu, 26 Oct 2017 12:38:36 GMT): kostas (Thu, 26 Oct 2017 12:38:36 GMT): Baha-sk (Thu, 26 Oct 2017 18:43:47 GMT): Baha-sk (Thu, 26 Oct 2017 18:48:14 GMT): Baha-sk (Thu, 26 Oct 2017 18:49:26 GMT): jyellick (Thu, 26 Oct 2017 18:49:32 GMT): jyellick (Thu, 26 Oct 2017 18:49:44 GMT): Baha-sk (Thu, 26 Oct 2017 18:49:50 GMT): Baha-sk (Thu, 26 Oct 2017 18:51:59 GMT): Baha-sk (Thu, 26 Oct 2017 18:54:08 GMT): jyellick (Thu, 26 Oct 2017 18:55:51 GMT): jyellick (Thu, 26 Oct 2017 18:56:06 GMT): jyellick (Thu, 26 Oct 2017 
18:58:19 GMT): Baha-sk (Thu, 26 Oct 2017 18:59:21 GMT): jyellick (Thu, 26 Oct 2017 18:59:45 GMT): kostas (Thu, 26 Oct 2017 19:00:03 GMT): Baha-sk (Thu, 26 Oct 2017 19:00:05 GMT): jyellick (Thu, 26 Oct 2017 19:00:20 GMT): jyellick (Thu, 26 Oct 2017 19:00:37 GMT): Baha-sk (Thu, 26 Oct 2017 19:02:12 GMT): Baha-sk (Thu, 26 Oct 2017 19:02:12 GMT): Baha-sk (Thu, 26 Oct 2017 19:02:18 GMT): Baha-sk (Thu, 26 Oct 2017 19:02:35 GMT): Baha-sk (Thu, 26 Oct 2017 19:03:19 GMT): jyellick (Thu, 26 Oct 2017 19:03:31 GMT): jyellick (Thu, 26 Oct 2017 19:03:54 GMT): jyellick (Thu, 26 Oct 2017 19:03:54 GMT): Baha-sk (Thu, 26 Oct 2017 19:03:58 GMT): jyellick (Thu, 26 Oct 2017 19:04:21 GMT): Baha-sk (Thu, 26 Oct 2017 19:05:37 GMT): Baha-sk (Thu, 26 Oct 2017 19:06:11 GMT): Baha-sk (Thu, 26 Oct 2017 19:06:40 GMT): kostas (Thu, 26 Oct 2017 19:07:19 GMT): jyellick (Thu, 26 Oct 2017 19:07:59 GMT): Baha-sk (Thu, 26 Oct 2017 19:09:54 GMT): jyellick (Thu, 26 Oct 2017 19:11:01 GMT): jyellick (Thu, 26 Oct 2017 19:11:56 GMT): Baha-sk (Thu, 26 Oct 2017 19:12:35 GMT): Baha-sk (Thu, 26 Oct 2017 19:12:49 GMT): Baha-sk (Thu, 26 Oct 2017 19:13:00 GMT): jyellick (Thu, 26 Oct 2017 19:13:26 GMT): Baha-sk (Thu, 26 Oct 2017 19:14:28 GMT): Baha-sk (Thu, 26 Oct 2017 19:14:57 GMT): jyellick (Thu, 26 Oct 2017 19:15:44 GMT): jyellick (Thu, 26 Oct 2017 19:16:09 GMT): Baha-sk (Thu, 26 Oct 2017 19:18:01 GMT): jyellick (Thu, 26 Oct 2017 19:20:04 GMT): Baha-sk (Thu, 26 Oct 2017 19:21:21 GMT): Baha-sk (Thu, 26 Oct 2017 19:22:35 GMT): nate94305 (Fri, 27 Oct 2017 01:52:18 GMT): nate94305 (Fri, 27 Oct 2017 01:53:34 GMT): nate94305 (Fri, 27 Oct 2017 01:53:34 GMT): nate94305 (Fri, 27 Oct 2017 01:53:34 GMT): kostas (Fri, 27 Oct 2017 02:54:43 GMT): srongzhe (Fri, 27 Oct 2017 03:03:56 GMT): asaningmaxchain (Fri, 27 Oct 2017 03:19:37 GMT): asaningmaxchain (Fri, 27 Oct 2017 03:19:38 GMT): kostas (Fri, 27 Oct 2017 03:20:36 GMT): asaningmaxchain (Fri, 27 Oct 2017 03:21:04 GMT): asaningmaxchain (Fri, 27 Oct 2017 03:21:11 GMT): 
asaningmaxchain (Fri, 27 Oct 2017 03:21:24 GMT): asaningmaxchain (Fri, 27 Oct 2017 03:21:42 GMT): kostas (Fri, 27 Oct 2017 03:37:59 GMT): kostas (Fri, 27 Oct 2017 03:37:59 GMT): kostas (Fri, 27 Oct 2017 03:37:59 GMT): kostas (Fri, 27 Oct 2017 03:39:43 GMT): kostas (Fri, 27 Oct 2017 03:39:52 GMT): kostas (Fri, 27 Oct 2017 03:40:49 GMT): asaningmaxchain (Fri, 27 Oct 2017 04:02:49 GMT): asaningmaxchain (Fri, 27 Oct 2017 04:09:50 GMT): MadhavaReddy (Fri, 27 Oct 2017 04:13:48 GMT): kostas (Fri, 27 Oct 2017 04:14:11 GMT): MadhavaReddy (Fri, 27 Oct 2017 04:43:07 GMT): nate94305 (Fri, 27 Oct 2017 04:43:32 GMT): nate94305 (Fri, 27 Oct 2017 04:43:32 GMT): nate94305 (Fri, 27 Oct 2017 04:43:32 GMT): kostas (Fri, 27 Oct 2017 04:58:41 GMT): nate94305 (Fri, 27 Oct 2017 04:59:58 GMT): nate94305 (Fri, 27 Oct 2017 04:59:58 GMT): kostas (Fri, 27 Oct 2017 05:00:17 GMT): nate94305 (Fri, 27 Oct 2017 05:01:33 GMT): nate94305 (Fri, 27 Oct 2017 05:03:18 GMT): kostas (Fri, 27 Oct 2017 05:03:43 GMT): nate94305 (Fri, 27 Oct 2017 05:04:53 GMT): kostas (Fri, 27 Oct 2017 05:06:11 GMT): nate94305 (Fri, 27 Oct 2017 05:06:29 GMT): kostas (Fri, 27 Oct 2017 05:06:42 GMT): nate94305 (Fri, 27 Oct 2017 05:07:09 GMT): kostas (Fri, 27 Oct 2017 05:07:38 GMT): nate94305 (Fri, 27 Oct 2017 05:08:38 GMT): kostas (Fri, 27 Oct 2017 05:09:12 GMT): kostas (Fri, 27 Oct 2017 05:09:12 GMT): kostas (Fri, 27 Oct 2017 05:11:08 GMT): nate94305 (Fri, 27 Oct 2017 05:11:16 GMT): nate94305 (Fri, 27 Oct 2017 05:11:16 GMT): nate94305 (Fri, 27 Oct 2017 05:11:54 GMT): nate94305 (Fri, 27 Oct 2017 05:11:54 GMT): kostas (Fri, 27 Oct 2017 05:12:44 GMT): kostas (Fri, 27 Oct 2017 05:13:10 GMT): nate94305 (Fri, 27 Oct 2017 05:13:19 GMT): nate94305 (Fri, 27 Oct 2017 05:13:19 GMT): kostas (Fri, 27 Oct 2017 05:13:45 GMT): nate94305 (Fri, 27 Oct 2017 05:15:10 GMT): nate94305 (Fri, 27 Oct 2017 05:15:10 GMT): nate94305 (Fri, 27 Oct 2017 05:15:48 GMT): kostas (Fri, 27 Oct 2017 05:16:35 GMT): kostas (Fri, 27 Oct 2017 05:16:42 GMT): nate94305 
(Fri, 27 Oct 2017 05:17:25 GMT): kostas (Fri, 27 Oct 2017 05:17:36 GMT): nate94305 (Fri, 27 Oct 2017 05:17:44 GMT): nate94305 (Fri, 27 Oct 2017 05:19:42 GMT): guoger (Fri, 27 Oct 2017 05:42:34 GMT): niteshsolanki (Fri, 27 Oct 2017 08:45:47 GMT): niteshsolanki (Fri, 27 Oct 2017 08:45:47 GMT): guoger (Fri, 27 Oct 2017 08:54:31 GMT): guoger (Fri, 27 Oct 2017 08:54:47 GMT): niteshsolanki (Fri, 27 Oct 2017 08:57:44 GMT): niteshsolanki (Fri, 27 Oct 2017 08:57:44 GMT): asaningmaxchain (Fri, 27 Oct 2017 09:05:53 GMT): guoger (Fri, 27 Oct 2017 09:09:07 GMT): niteshsolanki (Fri, 27 Oct 2017 09:46:40 GMT): niteshsolanki (Fri, 27 Oct 2017 09:46:40 GMT): guoger (Fri, 27 Oct 2017 09:49:06 GMT): niteshsolanki (Fri, 27 Oct 2017 09:52:18 GMT): guoger (Fri, 27 Oct 2017 09:53:07 GMT): guoger (Fri, 27 Oct 2017 09:54:49 GMT): guoger (Fri, 27 Oct 2017 09:56:28 GMT): guoger (Fri, 27 Oct 2017 09:56:28 GMT): niteshsolanki (Fri, 27 Oct 2017 09:58:14 GMT): guoger (Fri, 27 Oct 2017 10:00:29 GMT): guoger (Fri, 27 Oct 2017 10:01:49 GMT): guoger (Fri, 27 Oct 2017 10:02:17 GMT): niteshsolanki (Fri, 27 Oct 2017 10:26:29 GMT): nate94305 (Fri, 27 Oct 2017 11:07:55 GMT): nate94305 (Fri, 27 Oct 2017 11:07:55 GMT): nate94305 (Fri, 27 Oct 2017 11:07:55 GMT): nikit-os (Fri, 27 Oct 2017 12:28:33 GMT): jyellick (Fri, 27 Oct 2017 13:25:45 GMT): jyellick (Fri, 27 Oct 2017 13:29:46 GMT): jyellick (Fri, 27 Oct 2017 13:29:46 GMT): nate94305 (Fri, 27 Oct 2017 13:42:52 GMT): jyellick (Fri, 27 Oct 2017 13:45:24 GMT): niteshsolanki (Fri, 27 Oct 2017 13:45:42 GMT): niteshsolanki (Fri, 27 Oct 2017 13:45:42 GMT): niteshsolanki (Fri, 27 Oct 2017 13:45:42 GMT): nate94305 (Fri, 27 Oct 2017 13:47:28 GMT): nate94305 (Fri, 27 Oct 2017 13:47:28 GMT): niteshsolanki (Fri, 27 Oct 2017 13:48:19 GMT): jyellick (Fri, 27 Oct 2017 13:50:17 GMT): nate94305 (Fri, 27 Oct 2017 13:52:37 GMT): nate94305 (Fri, 27 Oct 2017 13:52:37 GMT): jyellick (Fri, 27 Oct 2017 13:54:09 GMT): jyellick (Fri, 27 Oct 2017 13:54:09 GMT): nate94305 (Fri, 27 
Oct 2017 13:59:34 GMT): nate94305 (Fri, 27 Oct 2017 13:59:34 GMT): nate94305 (Fri, 27 Oct 2017 13:59:34 GMT): jyellick (Fri, 27 Oct 2017 14:06:30 GMT): nate94305 (Fri, 27 Oct 2017 14:28:56 GMT): nate94305 (Fri, 27 Oct 2017 14:28:56 GMT): nate94305 (Fri, 27 Oct 2017 14:28:56 GMT): nate94305 (Fri, 27 Oct 2017 14:28:56 GMT): nate94305 (Fri, 27 Oct 2017 14:42:23 GMT): nate94305 (Fri, 27 Oct 2017 14:42:23 GMT): jyellick (Fri, 27 Oct 2017 16:14:50 GMT): nate94305 (Fri, 27 Oct 2017 23:44:08 GMT): nate94305 (Fri, 27 Oct 2017 23:46:21 GMT): srongzhe (Sat, 28 Oct 2017 01:10:42 GMT): srongzhe (Sat, 28 Oct 2017 01:11:26 GMT): srongzhe (Sat, 28 Oct 2017 01:14:10 GMT): guoger (Sat, 28 Oct 2017 04:35:28 GMT): srongzhe (Sat, 28 Oct 2017 12:12:41 GMT): srongzhe (Sat, 28 Oct 2017 13:08:47 GMT): srongzhe (Sat, 28 Oct 2017 13:08:53 GMT): yoheiueda (Sun, 29 Oct 2017 13:46:12 GMT): guoger (Sun, 29 Oct 2017 14:36:11 GMT): guoger (Sun, 29 Oct 2017 14:36:11 GMT): guoger (Sun, 29 Oct 2017 14:36:49 GMT): guoger (Sun, 29 Oct 2017 14:45:46 GMT): guoger (Sun, 29 Oct 2017 14:45:46 GMT): guoger (Sun, 29 Oct 2017 14:45:46 GMT): guoger (Sun, 29 Oct 2017 14:45:46 GMT): yoheiueda (Sun, 29 Oct 2017 16:47:57 GMT): yoheiueda (Sun, 29 Oct 2017 16:47:57 GMT): yoheiueda (Sun, 29 Oct 2017 16:48:46 GMT): yoheiueda (Sun, 29 Oct 2017 17:22:40 GMT): srongzhe (Sun, 29 Oct 2017 22:59:22 GMT): guoger (Sun, 29 Oct 2017 23:12:49 GMT): guoger (Sun, 29 Oct 2017 23:14:17 GMT): yoheiueda (Mon, 30 Oct 2017 01:22:57 GMT): asaningmaxchain (Mon, 30 Oct 2017 08:14:59 GMT): asaningmaxchain (Mon, 30 Oct 2017 08:15:04 GMT): asaningmaxchain (Mon, 30 Oct 2017 08:15:04 GMT): asaningmaxchain (Mon, 30 Oct 2017 08:15:04 GMT): asaningmaxchain (Mon, 30 Oct 2017 08:15:30 GMT): asaningmaxchain (Mon, 30 Oct 2017 08:15:35 GMT): kostas (Mon, 30 Oct 2017 09:09:52 GMT): asaningmaxchain (Mon, 30 Oct 2017 09:54:40 GMT): asaningmaxchain (Mon, 30 Oct 2017 09:56:06 GMT): asaningmaxchain (Mon, 30 Oct 2017 09:56:08 GMT): asaningmaxchain (Mon, 30 Oct 
2017 10:12:18 GMT): kostas (Mon, 30 Oct 2017 10:31:42 GMT): asaningmaxchain (Mon, 30 Oct 2017 10:32:41 GMT): iamdm (Mon, 30 Oct 2017 13:26:30 GMT): JohnWhitton (Mon, 30 Oct 2017 20:04:11 GMT): JohnWhitton (Mon, 30 Oct 2017 20:04:11 GMT): JohnWhitton (Mon, 30 Oct 2017 20:09:16 GMT): JohnWhitton (Mon, 30 Oct 2017 20:09:16 GMT): jyellick (Mon, 30 Oct 2017 21:08:48 GMT): JohnWhitton (Mon, 30 Oct 2017 21:26:44 GMT): JohnWhitton (Mon, 30 Oct 2017 21:27:26 GMT): kostas (Mon, 30 Oct 2017 21:49:23 GMT): qizhang (Tue, 31 Oct 2017 00:09:29 GMT): qizhang (Tue, 31 Oct 2017 00:09:29 GMT): qizhang (Tue, 31 Oct 2017 00:09:29 GMT): qizhang (Tue, 31 Oct 2017 00:09:47 GMT): qizhang (Tue, 31 Oct 2017 00:09:47 GMT): qizhang (Tue, 31 Oct 2017 00:12:03 GMT): qizhang (Tue, 31 Oct 2017 00:12:03 GMT): qizhang (Tue, 31 Oct 2017 00:49:00 GMT): asaningmaxchain (Tue, 31 Oct 2017 01:05:38 GMT): qizhang (Tue, 31 Oct 2017 01:09:44 GMT): asaningmaxchain (Tue, 31 Oct 2017 01:10:02 GMT): qizhang (Tue, 31 Oct 2017 01:10:24 GMT): qizhang (Tue, 31 Oct 2017 01:10:59 GMT): asaningmaxchain (Tue, 31 Oct 2017 01:11:52 GMT): qizhang (Tue, 31 Oct 2017 01:12:36 GMT): asaningmaxchain (Tue, 31 Oct 2017 01:54:28 GMT): asaningmaxchain (Tue, 31 Oct 2017 01:54:56 GMT): jyellick (Tue, 31 Oct 2017 02:27:49 GMT): jyellick (Tue, 31 Oct 2017 02:27:49 GMT): jyellick (Tue, 31 Oct 2017 02:32:59 GMT): asaningmaxchain (Tue, 31 Oct 2017 02:33:59 GMT): iamdm (Tue, 31 Oct 2017 12:43:16 GMT): UtkarshSingh (Tue, 31 Oct 2017 12:46:55 GMT): UtkarshSingh (Tue, 31 Oct 2017 12:51:54 GMT): Vadim (Tue, 31 Oct 2017 12:56:26 GMT): UtkarshSingh (Tue, 31 Oct 2017 12:58:15 GMT): Vadim (Tue, 31 Oct 2017 12:58:52 GMT): UtkarshSingh (Tue, 31 Oct 2017 13:00:55 GMT): jyellick (Tue, 31 Oct 2017 14:14:09 GMT): jyellick (Tue, 31 Oct 2017 14:14:36 GMT): jyellick (Tue, 31 Oct 2017 14:16:52 GMT): Ratnakar (Tue, 31 Oct 2017 14:42:10 GMT): ryokawajp (Tue, 31 Oct 2017 16:07:10 GMT): ryokawajp (Tue, 31 Oct 2017 16:09:19 GMT): ryokawajp (Tue, 31 Oct 2017 
16:12:37 GMT): ryokawajp (Tue, 31 Oct 2017 16:13:48 GMT): jyellick (Tue, 31 Oct 2017 16:55:05 GMT): jyellick (Tue, 31 Oct 2017 16:55:05 GMT): jyellick (Tue, 31 Oct 2017 17:02:40 GMT): jyellick (Tue, 31 Oct 2017 17:06:05 GMT): david_dornseifer (Tue, 31 Oct 2017 23:15:31 GMT): david_dornseifer (Tue, 31 Oct 2017 23:15:31 GMT): david_dornseifer (Tue, 31 Oct 2017 23:16:44 GMT): jeffgarratt (Tue, 31 Oct 2017 23:48:45 GMT): jeffgarratt (Tue, 31 Oct 2017 23:54:04 GMT): jeffgarratt (Tue, 31 Oct 2017 23:54:04 GMT): david_dornseifer (Wed, 01 Nov 2017 00:23:35 GMT): david_dornseifer (Wed, 01 Nov 2017 00:24:31 GMT): ryokawajp (Wed, 01 Nov 2017 02:18:44 GMT): jyellick (Wed, 01 Nov 2017 05:44:26 GMT): asaningmaxchain (Wed, 01 Nov 2017 05:46:51 GMT): asaningmaxchain (Wed, 01 Nov 2017 05:46:51 GMT): asaningmaxchain (Wed, 01 Nov 2017 05:46:51 GMT): jyellick (Wed, 01 Nov 2017 05:47:58 GMT): asaningmaxchain (Wed, 01 Nov 2017 05:48:03 GMT): asaningmaxchain (Wed, 01 Nov 2017 05:48:43 GMT): asaningmaxchain (Wed, 01 Nov 2017 05:50:47 GMT): asaningmaxchain (Wed, 01 Nov 2017 05:52:18 GMT): jyellick (Wed, 01 Nov 2017 06:12:37 GMT): asaningmaxchain (Wed, 01 Nov 2017 06:45:33 GMT): kostas (Wed, 01 Nov 2017 09:38:39 GMT): kostas (Wed, 01 Nov 2017 09:38:39 GMT): asaningmaxchain (Wed, 01 Nov 2017 09:45:16 GMT): asaningmaxchain (Wed, 01 Nov 2017 09:45:16 GMT): asaningmaxchain (Wed, 01 Nov 2017 09:45:16 GMT): asaningmaxchain (Wed, 01 Nov 2017 09:45:36 GMT): asaningmaxchain (Wed, 01 Nov 2017 09:46:17 GMT): kostas (Wed, 01 Nov 2017 09:46:39 GMT): asaningmaxchain (Wed, 01 Nov 2017 09:47:02 GMT): asaningmaxchain (Wed, 01 Nov 2017 09:47:02 GMT): kostas (Wed, 01 Nov 2017 09:55:03 GMT): kostas (Wed, 01 Nov 2017 09:55:03 GMT): kostas (Wed, 01 Nov 2017 09:55:14 GMT): asaningmaxchain (Wed, 01 Nov 2017 09:57:55 GMT): asaningmaxchain (Wed, 01 Nov 2017 09:57:55 GMT): asaningmaxchain (Wed, 01 Nov 2017 09:57:55 GMT): kostas (Wed, 01 Nov 2017 10:00:33 GMT): kostas (Wed, 01 Nov 2017 10:00:33 GMT): kostas (Wed, 01 
Nov 2017 10:03:15 GMT): kostas (Wed, 01 Nov 2017 10:03:44 GMT): asaningmaxchain (Wed, 01 Nov 2017 10:04:26 GMT): asaningmaxchain (Wed, 01 Nov 2017 10:04:26 GMT): kostas (Wed, 01 Nov 2017 10:04:45 GMT): kostas (Wed, 01 Nov 2017 10:04:45 GMT): asaningmaxchain (Wed, 01 Nov 2017 10:05:26 GMT): asaningmaxchain (Wed, 01 Nov 2017 10:05:26 GMT): asaningmaxchain (Wed, 01 Nov 2017 10:05:26 GMT): kostas (Wed, 01 Nov 2017 10:06:13 GMT): asaningmaxchain (Wed, 01 Nov 2017 10:06:49 GMT): asaningmaxchain (Wed, 01 Nov 2017 10:15:37 GMT): kayadhami (Wed, 01 Nov 2017 16:01:13 GMT): david_dornseifer (Wed, 01 Nov 2017 16:28:24 GMT): MadhavaReddy (Thu, 02 Nov 2017 07:39:50 GMT): MadhavaReddy (Thu, 02 Nov 2017 07:40:33 GMT): MadhavaReddy (Thu, 02 Nov 2017 07:40:35 GMT): SimonOberzan (Thu, 02 Nov 2017 09:20:42 GMT): SimonOberzan (Thu, 02 Nov 2017 09:20:42 GMT): SimonOberzan (Thu, 02 Nov 2017 09:20:42 GMT): Glen (Thu, 02 Nov 2017 10:22:37 GMT): Glen (Thu, 02 Nov 2017 10:22:57 GMT): Glen (Thu, 02 Nov 2017 10:24:09 GMT): Glen (Thu, 02 Nov 2017 10:24:09 GMT): Glen (Thu, 02 Nov 2017 10:24:21 GMT): UtkarshSingh (Thu, 02 Nov 2017 10:25:02 GMT): Vadim (Thu, 02 Nov 2017 10:27:57 GMT): Glen (Thu, 02 Nov 2017 10:44:32 GMT): risabhsharma71 (Thu, 02 Nov 2017 10:51:36 GMT): risabhsharma71 (Thu, 02 Nov 2017 10:52:14 GMT): risabhsharma71 (Thu, 02 Nov 2017 10:53:03 GMT): risabhsharma71 (Thu, 02 Nov 2017 10:53:18 GMT): kostas (Thu, 02 Nov 2017 10:53:31 GMT): kostas (Thu, 02 Nov 2017 10:54:16 GMT): kostas (Thu, 02 Nov 2017 10:56:17 GMT): risabhsharma71 (Thu, 02 Nov 2017 10:56:29 GMT): risabhsharma71 (Thu, 02 Nov 2017 10:56:40 GMT): risabhsharma71 (Thu, 02 Nov 2017 10:56:53 GMT): kostas (Thu, 02 Nov 2017 10:57:56 GMT): kostas (Thu, 02 Nov 2017 10:58:48 GMT): kostas (Thu, 02 Nov 2017 11:00:14 GMT): kostas (Thu, 02 Nov 2017 11:02:44 GMT): risabhsharma71 (Thu, 02 Nov 2017 11:04:34 GMT): risabhsharma71 (Thu, 02 Nov 2017 11:05:38 GMT): kostas (Thu, 02 Nov 2017 11:15:20 GMT): kostas (Thu, 02 Nov 2017 11:15:20 
GMT): risabhsharma71 (Thu, 02 Nov 2017 11:17:15 GMT): Glen (Thu, 02 Nov 2017 11:18:42 GMT): kostas (Thu, 02 Nov 2017 11:19:04 GMT): Glen (Thu, 02 Nov 2017 11:19:25 GMT): Glen (Thu, 02 Nov 2017 11:19:25 GMT): kostas (Thu, 02 Nov 2017 11:20:39 GMT): Glen (Thu, 02 Nov 2017 11:22:01 GMT): Glen (Thu, 02 Nov 2017 11:22:01 GMT): Glen (Thu, 02 Nov 2017 11:26:21 GMT): risabhsharma71 (Thu, 02 Nov 2017 11:29:07 GMT): kostas (Thu, 02 Nov 2017 11:29:28 GMT): risabhsharma71 (Thu, 02 Nov 2017 11:29:58 GMT): kostas (Thu, 02 Nov 2017 11:30:10 GMT): risabhsharma71 (Thu, 02 Nov 2017 11:30:18 GMT): kostas (Thu, 02 Nov 2017 11:30:49 GMT): kostas (Thu, 02 Nov 2017 11:30:57 GMT): risabhsharma71 (Thu, 02 Nov 2017 11:32:25 GMT): risabhsharma71 (Thu, 02 Nov 2017 11:32:32 GMT): risabhsharma71 (Thu, 02 Nov 2017 11:33:14 GMT): kostas (Thu, 02 Nov 2017 11:33:28 GMT): risabhsharma71 (Thu, 02 Nov 2017 11:34:35 GMT): Glen (Thu, 02 Nov 2017 11:36:55 GMT): kostas (Thu, 02 Nov 2017 11:37:28 GMT): UtkarshSingh (Thu, 02 Nov 2017 12:40:57 GMT): Vadim (Thu, 02 Nov 2017 12:42:04 GMT): UtkarshSingh (Thu, 02 Nov 2017 12:44:03 GMT): Vadim (Thu, 02 Nov 2017 12:44:22 GMT): Vadim (Thu, 02 Nov 2017 12:45:29 GMT): UtkarshSingh (Thu, 02 Nov 2017 12:48:15 GMT): Vadim (Thu, 02 Nov 2017 12:51:15 GMT): UtkarshSingh (Thu, 02 Nov 2017 13:07:07 GMT): Vadim (Thu, 02 Nov 2017 13:07:51 GMT): UtkarshSingh (Thu, 02 Nov 2017 13:13:45 GMT): Vadim (Thu, 02 Nov 2017 13:14:02 GMT): UtkarshSingh (Thu, 02 Nov 2017 13:17:10 GMT): UtkarshSingh (Thu, 02 Nov 2017 13:18:28 GMT): Vadim (Thu, 02 Nov 2017 13:25:51 GMT): UtkarshSingh (Thu, 02 Nov 2017 13:43:12 GMT): UtkarshSingh (Thu, 02 Nov 2017 13:49:51 GMT): jeffgarratt (Thu, 02 Nov 2017 13:54:29 GMT): UtkarshSingh (Thu, 02 Nov 2017 14:06:55 GMT): jeffgarratt (Thu, 02 Nov 2017 14:07:12 GMT): jeffgarratt (Thu, 02 Nov 2017 14:07:39 GMT): jeffgarratt (Thu, 02 Nov 2017 14:07:39 GMT): jeffgarratt (Thu, 02 Nov 2017 14:09:25 GMT): UtkarshSingh (Thu, 02 Nov 2017 14:09:35 GMT): jeffgarratt (Thu, 
02 Nov 2017 14:09:52 GMT): jeffgarratt (Thu, 02 Nov 2017 14:10:07 GMT): jeffgarratt (Thu, 02 Nov 2017 14:10:42 GMT): jeffgarratt (Thu, 02 Nov 2017 14:10:42 GMT): jeffgarratt (Thu, 02 Nov 2017 14:11:17 GMT): jeffgarratt (Thu, 02 Nov 2017 14:11:37 GMT): jyellick (Thu, 02 Nov 2017 14:12:56 GMT): jyellick (Thu, 02 Nov 2017 14:13:15 GMT): Vadim (Thu, 02 Nov 2017 14:13:39 GMT): Vadim (Thu, 02 Nov 2017 14:14:07 GMT): jyellick (Thu, 02 Nov 2017 14:15:21 GMT): UtkarshSingh (Thu, 02 Nov 2017 14:43:18 GMT): MadhavaReddy (Thu, 02 Nov 2017 14:56:18 GMT): kostas (Thu, 02 Nov 2017 14:57:13 GMT): MadhavaReddy (Thu, 02 Nov 2017 15:02:33 GMT): kostas (Thu, 02 Nov 2017 15:04:01 GMT): MadhavaReddy (Thu, 02 Nov 2017 15:08:26 GMT): MadhavaReddy (Thu, 02 Nov 2017 15:24:52 GMT): qizhang (Thu, 02 Nov 2017 15:28:38 GMT): kostas (Thu, 02 Nov 2017 15:32:08 GMT): qizhang (Thu, 02 Nov 2017 15:32:36 GMT): kostas (Thu, 02 Nov 2017 15:32:41 GMT): kostas (Thu, 02 Nov 2017 15:34:01 GMT): MadhavaReddy (Thu, 02 Nov 2017 15:41:37 GMT): kostas (Thu, 02 Nov 2017 15:44:54 GMT): MadhavaReddy (Thu, 02 Nov 2017 15:45:23 GMT): MadhavaReddy (Thu, 02 Nov 2017 15:45:23 GMT): kostas (Thu, 02 Nov 2017 15:57:14 GMT): gauthampamu (Thu, 02 Nov 2017 16:57:08 GMT): kostas (Thu, 02 Nov 2017 17:33:43 GMT): kostas (Thu, 02 Nov 2017 17:33:43 GMT): MadhavaReddy (Thu, 02 Nov 2017 18:10:38 GMT): gauthampamu (Thu, 02 Nov 2017 18:50:40 GMT): srongzhe (Fri, 03 Nov 2017 01:45:25 GMT): srongzhe (Fri, 03 Nov 2017 01:45:25 GMT): srongzhe (Fri, 03 Nov 2017 01:45:41 GMT): srongzhe (Fri, 03 Nov 2017 01:46:11 GMT): srongzhe (Fri, 03 Nov 2017 01:46:39 GMT): srongzhe (Fri, 03 Nov 2017 01:46:39 GMT): srongzhe (Fri, 03 Nov 2017 01:47:02 GMT): Ratnakar (Fri, 03 Nov 2017 01:58:15 GMT): Ratnakar (Fri, 03 Nov 2017 01:58:15 GMT): Ratnakar (Fri, 03 Nov 2017 01:59:17 GMT): Ratnakar (Fri, 03 Nov 2017 01:59:17 GMT): srongzhe (Fri, 03 Nov 2017 02:02:53 GMT): Ratnakar (Fri, 03 Nov 2017 02:05:02 GMT): srongzhe (Fri, 03 Nov 2017 02:07:59 GMT): srongzhe 
(Fri, 03 Nov 2017 02:07:59 GMT): srongzhe (Fri, 03 Nov 2017 02:08:01 GMT): srongzhe (Fri, 03 Nov 2017 02:09:04 GMT): srongzhe (Fri, 03 Nov 2017 02:10:56 GMT): srongzhe (Fri, 03 Nov 2017 02:10:56 GMT): srongzhe (Fri, 03 Nov 2017 02:10:58 GMT): srongzhe (Fri, 03 Nov 2017 02:11:14 GMT): srongzhe (Fri, 03 Nov 2017 02:12:23 GMT): srongzhe (Fri, 03 Nov 2017 02:12:23 GMT): srongzhe (Fri, 03 Nov 2017 02:12:26 GMT): jeffgarratt (Fri, 03 Nov 2017 02:18:00 GMT): jeffgarratt (Fri, 03 Nov 2017 02:18:53 GMT): jeffgarratt (Fri, 03 Nov 2017 02:19:06 GMT): MadhavaReddy (Fri, 03 Nov 2017 02:24:49 GMT): srongzhe (Fri, 03 Nov 2017 02:38:50 GMT): srongzhe (Fri, 03 Nov 2017 02:38:50 GMT): srongzhe (Fri, 03 Nov 2017 02:39:28 GMT): srongzhe (Fri, 03 Nov 2017 02:39:31 GMT): srongzhe (Fri, 03 Nov 2017 02:41:15 GMT): Ratnakar (Fri, 03 Nov 2017 02:49:56 GMT): Ratnakar (Fri, 03 Nov 2017 02:49:56 GMT): srongzhe (Fri, 03 Nov 2017 02:56:00 GMT): srongzhe (Fri, 03 Nov 2017 02:57:22 GMT): Ratnakar (Fri, 03 Nov 2017 03:07:51 GMT): srongzhe (Fri, 03 Nov 2017 03:15:38 GMT): srongzhe (Fri, 03 Nov 2017 03:25:17 GMT): srongzhe (Fri, 03 Nov 2017 03:25:17 GMT): srongzhe (Fri, 03 Nov 2017 03:25:43 GMT): srongzhe (Fri, 03 Nov 2017 03:25:45 GMT): jyellick (Fri, 03 Nov 2017 03:38:26 GMT): srongzhe (Fri, 03 Nov 2017 05:23:20 GMT): srongzhe (Fri, 03 Nov 2017 05:23:46 GMT): srongzhe (Fri, 03 Nov 2017 05:24:26 GMT): niteshsolanki (Fri, 03 Nov 2017 05:49:09 GMT): srongzhe (Fri, 03 Nov 2017 05:54:50 GMT): srongzhe (Fri, 03 Nov 2017 05:55:08 GMT): kostas (Fri, 03 Nov 2017 09:13:06 GMT): MadhavaReddy (Fri, 03 Nov 2017 09:14:12 GMT): kostas (Fri, 03 Nov 2017 09:16:39 GMT): bennettneale (Fri, 03 Nov 2017 16:24:29 GMT): agiledeveloper (Fri, 03 Nov 2017 18:04:12 GMT): agiledeveloper (Fri, 03 Nov 2017 18:05:10 GMT): agiledeveloper (Fri, 03 Nov 2017 18:05:10 GMT): agiledeveloper (Fri, 03 Nov 2017 18:05:10 GMT): agiledeveloper (Fri, 03 Nov 2017 18:07:33 GMT): agiledeveloper (Fri, 03 Nov 2017 18:07:33 GMT): agiledeveloper 
(Fri, 03 Nov 2017 18:14:02 GMT): knagware9 (Fri, 03 Nov 2017 19:16:41 GMT): jyellick (Fri, 03 Nov 2017 19:30:39 GMT): kostas (Sat, 04 Nov 2017 13:24:51 GMT): risabhsharma71 (Mon, 06 Nov 2017 08:55:01 GMT): asuchit (Mon, 06 Nov 2017 09:29:28 GMT): qizhang (Mon, 06 Nov 2017 19:33:09 GMT): yacovm (Mon, 06 Nov 2017 21:00:26 GMT): yacovm (Mon, 06 Nov 2017 21:01:10 GMT): kostas (Mon, 06 Nov 2017 23:06:35 GMT): yacovm (Mon, 06 Nov 2017 23:07:57 GMT): yacovm (Mon, 06 Nov 2017 23:07:57 GMT): yacovm (Mon, 06 Nov 2017 23:08:03 GMT): yacovm (Mon, 06 Nov 2017 23:08:03 GMT): kostas (Mon, 06 Nov 2017 23:09:03 GMT): kostas (Mon, 06 Nov 2017 23:13:03 GMT): kostas (Mon, 06 Nov 2017 23:13:18 GMT): yacovm (Tue, 07 Nov 2017 00:25:26 GMT): yacovm (Tue, 07 Nov 2017 00:25:26 GMT): kostas (Tue, 07 Nov 2017 01:30:40 GMT): Ryo (Tue, 07 Nov 2017 06:53:00 GMT): yacovm (Tue, 07 Nov 2017 07:30:40 GMT): yacovm (Tue, 07 Nov 2017 07:31:52 GMT): yacovm (Tue, 07 Nov 2017 07:32:07 GMT): yacovm (Tue, 07 Nov 2017 07:33:07 GMT): kostas (Tue, 07 Nov 2017 11:11:40 GMT): tom.appleyard (Tue, 07 Nov 2017 14:51:28 GMT): tom.appleyard (Tue, 07 Nov 2017 14:51:53 GMT): tom.appleyard (Tue, 07 Nov 2017 14:52:08 GMT): tom.appleyard (Tue, 07 Nov 2017 14:52:37 GMT): tom.appleyard (Tue, 07 Nov 2017 14:52:43 GMT): yacovm (Tue, 07 Nov 2017 14:53:52 GMT): tom.appleyard (Tue, 07 Nov 2017 14:54:00 GMT): tom.appleyard (Tue, 07 Nov 2017 14:54:04 GMT): yacovm (Tue, 07 Nov 2017 14:54:39 GMT): jyellick (Tue, 07 Nov 2017 15:03:07 GMT): jyellick (Tue, 07 Nov 2017 15:03:07 GMT): jyellick (Tue, 07 Nov 2017 15:04:46 GMT): tom.appleyard (Tue, 07 Nov 2017 15:12:39 GMT): tom.appleyard (Tue, 07 Nov 2017 15:12:52 GMT): jyellick (Tue, 07 Nov 2017 15:13:48 GMT): jyellick (Tue, 07 Nov 2017 15:13:55 GMT): jyellick (Tue, 07 Nov 2017 15:14:08 GMT): jyellick (Tue, 07 Nov 2017 15:14:08 GMT): jyellick (Tue, 07 Nov 2017 15:14:08 GMT): jyellick (Tue, 07 Nov 2017 15:14:08 GMT): tom.appleyard (Tue, 07 Nov 2017 15:15:17 GMT): tom.appleyard (Tue, 07 Nov 
2017 15:15:59 GMT): tom.appleyard (Tue, 07 Nov 2017 15:25:08 GMT): jyellick (Tue, 07 Nov 2017 15:26:43 GMT): jyellick (Tue, 07 Nov 2017 15:27:36 GMT): tom.appleyard (Tue, 07 Nov 2017 15:29:11 GMT): tom.appleyard (Tue, 07 Nov 2017 15:29:33 GMT): Asara (Tue, 07 Nov 2017 15:32:37 GMT): Asara (Tue, 07 Nov 2017 15:33:01 GMT): tom.appleyard (Tue, 07 Nov 2017 15:33:53 GMT): tom.appleyard (Tue, 07 Nov 2017 15:34:12 GMT): Asara (Tue, 07 Nov 2017 15:38:16 GMT): Asara (Tue, 07 Nov 2017 15:39:01 GMT): Asara (Tue, 07 Nov 2017 15:39:18 GMT): jyellick (Tue, 07 Nov 2017 15:42:10 GMT): jyellick (Tue, 07 Nov 2017 15:42:10 GMT): tom.appleyard (Tue, 07 Nov 2017 15:49:12 GMT): tom.appleyard (Tue, 07 Nov 2017 15:49:39 GMT): tom.appleyard (Tue, 07 Nov 2017 15:52:13 GMT): jyellick (Tue, 07 Nov 2017 15:54:29 GMT): tom.appleyard (Tue, 07 Nov 2017 15:56:50 GMT): jyellick (Tue, 07 Nov 2017 15:58:44 GMT): jyellick (Tue, 07 Nov 2017 15:58:44 GMT): tom.appleyard (Tue, 07 Nov 2017 16:00:29 GMT): jyellick (Tue, 07 Nov 2017 16:00:41 GMT): tom.appleyard (Tue, 07 Nov 2017 16:00:45 GMT): tom.appleyard (Tue, 07 Nov 2017 16:01:12 GMT): tom.appleyard (Tue, 07 Nov 2017 16:01:16 GMT): tom.appleyard (Tue, 07 Nov 2017 16:01:25 GMT): jyellick (Tue, 07 Nov 2017 16:01:58 GMT): tom.appleyard (Tue, 07 Nov 2017 16:02:09 GMT): qizhang (Tue, 07 Nov 2017 16:02:36 GMT): tom.appleyard (Tue, 07 Nov 2017 16:02:38 GMT): jyellick (Tue, 07 Nov 2017 16:02:39 GMT): jyellick (Tue, 07 Nov 2017 16:02:39 GMT): tom.appleyard (Tue, 07 Nov 2017 16:02:43 GMT): jyellick (Tue, 07 Nov 2017 16:04:14 GMT): tom.appleyard (Tue, 07 Nov 2017 16:04:36 GMT): tom.appleyard (Tue, 07 Nov 2017 16:04:50 GMT): jyellick (Tue, 07 Nov 2017 16:05:16 GMT): jyellick (Tue, 07 Nov 2017 16:05:50 GMT): tom.appleyard (Tue, 07 Nov 2017 16:08:18 GMT): tom.appleyard (Tue, 07 Nov 2017 16:08:44 GMT): tom.appleyard (Tue, 07 Nov 2017 16:08:49 GMT): kostas (Tue, 07 Nov 2017 16:15:31 GMT): kostas (Tue, 07 Nov 2017 16:15:40 GMT): kostas (Tue, 07 Nov 2017 16:15:40 GMT): 
[Chat export gap: entries from Tue, 07 Nov 2017 16:15:47 GMT through Tue, 16 Jan 2018 retain only their sender/timestamp headers; the message bodies were not preserved in this export. Frequent posters in this span include kostas, jyellick, yacovm, asaningmaxchain, qizhang, guoger, rickr, Vadim, mastersingh24, Glen, grapebaba, vsadriano, rahulhegde, novusopt, collins, JohnWhitton, jworthington, voutasaurus, and sanchezl.]
16 Jan 2018 14:51:23 GMT): guoger (Tue, 16 Jan 2018 14:56:00 GMT): asaningmaxchain (Tue, 16 Jan 2018 14:57:55 GMT): jyellick (Tue, 16 Jan 2018 15:00:26 GMT): jyellick (Tue, 16 Jan 2018 15:01:52 GMT): asaningmaxchain (Tue, 16 Jan 2018 15:02:01 GMT): jyellick (Tue, 16 Jan 2018 15:02:51 GMT): asaningmaxchain (Tue, 16 Jan 2018 15:03:12 GMT): jyellick (Tue, 16 Jan 2018 15:03:32 GMT): asaningmaxchain (Tue, 16 Jan 2018 15:03:53 GMT): asaningmaxchain (Tue, 16 Jan 2018 15:04:47 GMT): asaningmaxchain (Tue, 16 Jan 2018 15:04:52 GMT): asaningmaxchain (Tue, 16 Jan 2018 15:09:09 GMT): guoger (Tue, 16 Jan 2018 15:15:22 GMT): asaningmaxchain (Tue, 16 Jan 2018 15:20:01 GMT): qylixin (Wed, 17 Jan 2018 06:07:34 GMT): grapebaba (Wed, 17 Jan 2018 08:14:37 GMT): grapebaba (Wed, 17 Jan 2018 08:14:59 GMT): javrevasandeep (Wed, 17 Jan 2018 08:15:45 GMT): grapebaba (Wed, 17 Jan 2018 08:16:08 GMT): tingfa1 (Wed, 17 Jan 2018 08:16:26 GMT): tingfa1 (Wed, 17 Jan 2018 08:16:35 GMT): grapebaba (Wed, 17 Jan 2018 08:17:32 GMT): grapebaba (Wed, 17 Jan 2018 08:19:07 GMT): grapebaba (Wed, 17 Jan 2018 08:21:56 GMT): grapebaba (Wed, 17 Jan 2018 08:22:21 GMT): grapebaba (Wed, 17 Jan 2018 08:22:48 GMT): grapebaba (Wed, 17 Jan 2018 08:36:09 GMT): grapebaba (Wed, 17 Jan 2018 08:36:26 GMT): asaningmaxchain (Wed, 17 Jan 2018 14:50:01 GMT): jyellick (Wed, 17 Jan 2018 14:51:33 GMT): jyellick (Wed, 17 Jan 2018 14:52:38 GMT): jyellick (Wed, 17 Jan 2018 14:54:37 GMT): jyellick (Wed, 17 Jan 2018 14:54:37 GMT): asaningmaxchain (Wed, 17 Jan 2018 14:54:59 GMT): asaningmaxchain (Wed, 17 Jan 2018 14:54:59 GMT): asaningmaxchain (Wed, 17 Jan 2018 14:57:00 GMT): asaningmaxchain (Wed, 17 Jan 2018 14:57:00 GMT): jyellick (Wed, 17 Jan 2018 14:57:56 GMT): asaningmaxchain (Wed, 17 Jan 2018 14:58:07 GMT): asaningmaxchain (Wed, 17 Jan 2018 14:58:07 GMT): grapebaba (Wed, 17 Jan 2018 15:11:12 GMT): grapebaba (Wed, 17 Jan 2018 15:11:37 GMT): grapebaba (Wed, 17 Jan 2018 15:11:40 GMT): grapebaba (Wed, 17 Jan 2018 15:11:55 GMT): 
jyellick (Wed, 17 Jan 2018 15:12:34 GMT): grapebaba (Wed, 17 Jan 2018 15:12:48 GMT): grapebaba (Wed, 17 Jan 2018 15:12:56 GMT): grapebaba (Wed, 17 Jan 2018 15:13:09 GMT): grapebaba (Wed, 17 Jan 2018 15:14:27 GMT): grapebaba (Wed, 17 Jan 2018 15:14:41 GMT): grapebaba (Wed, 17 Jan 2018 15:15:03 GMT): jyellick (Wed, 17 Jan 2018 15:20:36 GMT): jyellick (Wed, 17 Jan 2018 15:20:36 GMT): grapebaba (Wed, 17 Jan 2018 15:21:15 GMT): sanchezl (Wed, 17 Jan 2018 18:50:56 GMT): alexliu (Thu, 18 Jan 2018 03:21:11 GMT): Manish.Sharma (Thu, 18 Jan 2018 09:13:14 GMT): jks3462 (Thu, 18 Jan 2018 11:34:01 GMT): zhoui13 (Fri, 19 Jan 2018 02:06:03 GMT): ibmamnt (Fri, 19 Jan 2018 04:08:03 GMT): B2BProgrammer (Fri, 19 Jan 2018 07:55:24 GMT): Brucepark (Sat, 20 Jan 2018 06:12:45 GMT): asaningmaxchain (Sat, 20 Jan 2018 09:07:14 GMT): asaningmaxchain (Sat, 20 Jan 2018 09:07:14 GMT): IngoRammer (Sun, 21 Jan 2018 09:08:15 GMT): Brucepark (Mon, 22 Jan 2018 06:00:57 GMT): vdods (Mon, 22 Jan 2018 08:52:11 GMT): guoger (Mon, 22 Jan 2018 08:56:14 GMT): guoger (Mon, 22 Jan 2018 08:56:23 GMT): vdods (Mon, 22 Jan 2018 09:06:30 GMT): vdods (Mon, 22 Jan 2018 09:29:19 GMT): vdods (Mon, 22 Jan 2018 09:29:19 GMT): nhrishi (Mon, 22 Jan 2018 12:38:34 GMT): jyellick (Mon, 22 Jan 2018 13:15:20 GMT): vdods (Mon, 22 Jan 2018 17:56:05 GMT): jyellick (Mon, 22 Jan 2018 20:41:01 GMT): jyellick (Mon, 22 Jan 2018 20:41:29 GMT): jyellick (Mon, 22 Jan 2018 20:41:45 GMT): jyellick (Mon, 22 Jan 2018 20:41:55 GMT): Brucepark (Tue, 23 Jan 2018 01:31:29 GMT): Brucepark (Tue, 23 Jan 2018 01:31:37 GMT): guoger (Tue, 23 Jan 2018 01:38:06 GMT): Brucepark (Tue, 23 Jan 2018 01:53:47 GMT): Brucepark (Tue, 23 Jan 2018 01:53:48 GMT): jyellick (Tue, 23 Jan 2018 04:20:24 GMT): jyellick (Tue, 23 Jan 2018 04:20:24 GMT): javrevasandeep (Tue, 23 Jan 2018 14:08:01 GMT): javrevasandeep (Tue, 23 Jan 2018 14:08:35 GMT): javrevasandeep (Tue, 23 Jan 2018 14:08:41 GMT): javrevasandeep (Tue, 23 Jan 2018 14:08:47 GMT): javrevasandeep (Tue, 23 Jan 
2018 14:08:54 GMT): jyellick (Tue, 23 Jan 2018 14:47:04 GMT): javrevasandeep (Tue, 23 Jan 2018 17:44:26 GMT): javrevasandeep (Tue, 23 Jan 2018 17:44:26 GMT): javrevasandeep (Tue, 23 Jan 2018 17:45:46 GMT): javrevasandeep (Tue, 23 Jan 2018 17:45:46 GMT): jyellick (Tue, 23 Jan 2018 17:46:44 GMT): javrevasandeep (Tue, 23 Jan 2018 17:47:01 GMT): javrevasandeep (Tue, 23 Jan 2018 17:47:35 GMT): javrevasandeep (Tue, 23 Jan 2018 17:47:35 GMT): javrevasandeep (Tue, 23 Jan 2018 17:47:46 GMT): jyellick (Tue, 23 Jan 2018 17:48:01 GMT): javrevasandeep (Tue, 23 Jan 2018 17:48:32 GMT): javrevasandeep (Tue, 23 Jan 2018 17:49:44 GMT): jyellick (Tue, 23 Jan 2018 17:51:42 GMT): jyellick (Tue, 23 Jan 2018 17:51:52 GMT): jyellick (Tue, 23 Jan 2018 17:52:42 GMT): javrevasandeep (Tue, 23 Jan 2018 17:54:27 GMT): javrevasandeep (Tue, 23 Jan 2018 17:54:27 GMT): javrevasandeep (Tue, 23 Jan 2018 17:56:50 GMT): jyellick (Tue, 23 Jan 2018 17:58:03 GMT): jyellick (Tue, 23 Jan 2018 17:58:59 GMT): jyellick (Tue, 23 Jan 2018 17:59:16 GMT): javrevasandeep (Tue, 23 Jan 2018 18:01:20 GMT): javrevasandeep (Tue, 23 Jan 2018 18:02:51 GMT): javrevasandeep (Tue, 23 Jan 2018 18:03:04 GMT): javrevasandeep (Tue, 23 Jan 2018 18:04:49 GMT): javrevasandeep (Tue, 23 Jan 2018 18:04:56 GMT): javrevasandeep (Tue, 23 Jan 2018 18:04:56 GMT): javrevasandeep (Tue, 23 Jan 2018 18:05:28 GMT): javrevasandeep (Tue, 23 Jan 2018 18:05:33 GMT): javrevasandeep (Tue, 23 Jan 2018 18:06:11 GMT): javrevasandeep (Tue, 23 Jan 2018 18:06:21 GMT): javrevasandeep (Tue, 23 Jan 2018 18:12:05 GMT): javrevasandeep (Tue, 23 Jan 2018 18:12:23 GMT): jyellick (Tue, 23 Jan 2018 18:12:32 GMT): jyellick (Tue, 23 Jan 2018 18:13:15 GMT): pmcosta1 (Tue, 23 Jan 2018 19:01:34 GMT): rohitadivi (Tue, 23 Jan 2018 19:03:07 GMT): rohitadivi (Tue, 23 Jan 2018 19:03:07 GMT): rohitadivi (Tue, 23 Jan 2018 19:03:07 GMT): jyellick (Tue, 23 Jan 2018 19:04:32 GMT): jyellick (Tue, 23 Jan 2018 19:06:05 GMT): pmcosta1 (Tue, 23 Jan 2018 19:06:15 GMT): pmcosta1 (Tue, 23 
Jan 2018 19:17:55 GMT): pmcosta1 (Tue, 23 Jan 2018 19:17:55 GMT): pmcosta1 (Tue, 23 Jan 2018 19:18:28 GMT): pmcosta1 (Tue, 23 Jan 2018 19:20:31 GMT): pmcosta1 (Tue, 23 Jan 2018 19:21:29 GMT): pmcosta1 (Tue, 23 Jan 2018 19:22:17 GMT): pmcosta1 (Tue, 23 Jan 2018 19:23:04 GMT): pmcosta1 (Tue, 23 Jan 2018 19:23:35 GMT): pmcosta1 (Tue, 23 Jan 2018 19:23:35 GMT): jyellick (Tue, 23 Jan 2018 19:27:14 GMT): jyellick (Tue, 23 Jan 2018 19:27:24 GMT): pmcosta1 (Tue, 23 Jan 2018 19:27:47 GMT): jyellick (Tue, 23 Jan 2018 19:28:25 GMT): grapebaba (Wed, 24 Jan 2018 13:46:27 GMT): Vadim (Wed, 24 Jan 2018 13:46:53 GMT): grapebaba (Wed, 24 Jan 2018 13:47:26 GMT): grapebaba (Wed, 24 Jan 2018 13:48:01 GMT): Vadim (Wed, 24 Jan 2018 13:48:50 GMT): Vadim (Wed, 24 Jan 2018 13:49:28 GMT): ahmedsajid (Wed, 24 Jan 2018 14:19:18 GMT): grapebaba (Wed, 24 Jan 2018 14:41:10 GMT): javrevasandeep (Wed, 24 Jan 2018 14:51:35 GMT): javrevasandeep (Wed, 24 Jan 2018 14:52:35 GMT): Vadim (Wed, 24 Jan 2018 14:53:05 GMT): javrevasandeep (Wed, 24 Jan 2018 14:55:00 GMT): Vadim (Wed, 24 Jan 2018 14:56:22 GMT): Vadim (Wed, 24 Jan 2018 14:56:43 GMT): AshishMishra 1 (Wed, 24 Jan 2018 15:19:46 GMT): udaykhambadkone (Wed, 24 Jan 2018 16:24:30 GMT): udaykhambadkone (Wed, 24 Jan 2018 16:26:04 GMT): jyellick (Wed, 24 Jan 2018 16:30:31 GMT): udaykhambadkone (Wed, 24 Jan 2018 19:01:46 GMT): udaykhambadkone (Wed, 24 Jan 2018 19:01:56 GMT): AshishMishra 1 (Thu, 25 Jan 2018 01:58:49 GMT): asaningmaxchain123 (Thu, 25 Jan 2018 02:14:26 GMT): jyellick (Thu, 25 Jan 2018 02:27:38 GMT): jyellick (Thu, 25 Jan 2018 02:28:14 GMT): jyellick (Thu, 25 Jan 2018 02:28:14 GMT): blockhash (Thu, 25 Jan 2018 05:20:09 GMT): AshishMishra 1 (Thu, 25 Jan 2018 06:16:04 GMT): AshishMishra 1 (Thu, 25 Jan 2018 06:16:22 GMT): AshishMishra 1 (Thu, 25 Jan 2018 10:33:02 GMT): david_dornseifer (Thu, 25 Jan 2018 12:28:30 GMT): david_dornseifer (Thu, 25 Jan 2018 12:30:08 GMT): david_dornseifer (Thu, 25 Jan 2018 12:30:08 GMT): mp (Thu, 25 Jan 2018 
14:08:49 GMT): kostas (Thu, 25 Jan 2018 14:56:38 GMT): kostas (Thu, 25 Jan 2018 14:57:05 GMT): kostas (Thu, 25 Jan 2018 14:57:33 GMT): kostas (Thu, 25 Jan 2018 14:57:33 GMT): kostas (Thu, 25 Jan 2018 14:57:48 GMT): kostas (Thu, 25 Jan 2018 14:58:10 GMT): kostas (Thu, 25 Jan 2018 14:58:20 GMT): pichayuthk (Thu, 25 Jan 2018 17:24:44 GMT): pichayuthk (Thu, 25 Jan 2018 17:33:14 GMT): pichayuthk (Thu, 25 Jan 2018 17:33:14 GMT): pichayuthk (Thu, 25 Jan 2018 17:33:14 GMT): jyellick (Thu, 25 Jan 2018 17:37:12 GMT): pichayuthk (Thu, 25 Jan 2018 17:40:50 GMT): pichayuthk (Thu, 25 Jan 2018 17:40:50 GMT): pichayuthk (Thu, 25 Jan 2018 17:40:50 GMT): pichayuthk (Thu, 25 Jan 2018 17:40:50 GMT): pichayuthk (Thu, 25 Jan 2018 17:40:50 GMT): kerokhin (Thu, 25 Jan 2018 18:16:33 GMT): jyellick (Thu, 25 Jan 2018 18:16:34 GMT): kerokhin (Thu, 25 Jan 2018 18:31:37 GMT): kostas (Thu, 25 Jan 2018 18:39:55 GMT): kerokhin (Thu, 25 Jan 2018 19:22:31 GMT): kerokhin (Thu, 25 Jan 2018 19:22:31 GMT): asaningmaxchain123 (Fri, 26 Jan 2018 01:47:35 GMT): asaningmaxchain123 (Fri, 26 Jan 2018 01:47:35 GMT): asaningmaxchain123 (Fri, 26 Jan 2018 01:47:35 GMT): jyellick (Fri, 26 Jan 2018 02:48:46 GMT): asaningmaxchain123 (Fri, 26 Jan 2018 02:49:57 GMT): asaningmaxchain123 (Fri, 26 Jan 2018 02:50:37 GMT): asaningmaxchain123 (Fri, 26 Jan 2018 02:50:37 GMT): asaningmaxchain123 (Fri, 26 Jan 2018 02:51:35 GMT): asaningmaxchain123 (Fri, 26 Jan 2018 02:51:58 GMT): jyellick (Fri, 26 Jan 2018 02:52:31 GMT): jyellick (Fri, 26 Jan 2018 02:52:35 GMT): asaningmaxchain123 (Fri, 26 Jan 2018 02:53:25 GMT): asaningmaxchain123 (Fri, 26 Jan 2018 02:53:25 GMT): jyellick (Fri, 26 Jan 2018 02:53:43 GMT): jyellick (Fri, 26 Jan 2018 02:54:09 GMT): jyellick (Fri, 26 Jan 2018 02:54:36 GMT): asaningmaxchain123 (Fri, 26 Jan 2018 02:55:27 GMT): asaningmaxchain123 (Fri, 26 Jan 2018 02:55:48 GMT): jyellick (Fri, 26 Jan 2018 02:55:48 GMT): jyellick (Fri, 26 Jan 2018 02:56:06 GMT): jyellick (Fri, 26 Jan 2018 02:56:46 GMT): 
asaningmaxchain123 (Fri, 26 Jan 2018 02:57:13 GMT): asaningmaxchain123 (Fri, 26 Jan 2018 02:57:18 GMT): jyellick (Fri, 26 Jan 2018 02:58:10 GMT): jyellick (Fri, 26 Jan 2018 02:59:23 GMT): asaningmaxchain123 (Fri, 26 Jan 2018 03:01:46 GMT): asaningmaxchain123 (Fri, 26 Jan 2018 03:03:01 GMT): asaningmaxchain123 (Fri, 26 Jan 2018 03:03:01 GMT): guoger (Fri, 26 Jan 2018 03:06:16 GMT): asaningmaxchain123 (Fri, 26 Jan 2018 03:09:55 GMT): asaningmaxchain123 (Fri, 26 Jan 2018 03:09:55 GMT): asaningmaxchain123 (Fri, 26 Jan 2018 03:13:08 GMT): pichayuthk (Fri, 26 Jan 2018 03:14:51 GMT): jyellick (Fri, 26 Jan 2018 03:17:30 GMT): guoger (Fri, 26 Jan 2018 03:19:09 GMT): jyellick (Fri, 26 Jan 2018 03:33:10 GMT): jyellick (Fri, 26 Jan 2018 03:33:53 GMT): ibmamnt (Fri, 26 Jan 2018 08:18:02 GMT): yacovm (Fri, 26 Jan 2018 09:17:28 GMT): SjirNijssen (Sun, 28 Jan 2018 09:40:53 GMT): fengfengs (Mon, 29 Jan 2018 05:06:38 GMT): ascatox (Mon, 29 Jan 2018 08:28:59 GMT): ascatox (Mon, 29 Jan 2018 08:29:24 GMT): guoger (Mon, 29 Jan 2018 09:07:41 GMT): ascatox (Mon, 29 Jan 2018 09:08:27 GMT): guoger (Mon, 29 Jan 2018 09:09:02 GMT): ascatox (Mon, 29 Jan 2018 09:14:16 GMT): NINIU09 (Mon, 29 Jan 2018 09:41:05 GMT): NINIU09 (Mon, 29 Jan 2018 09:51:59 GMT): kostas (Mon, 29 Jan 2018 12:11:32 GMT): kostas (Mon, 29 Jan 2018 12:12:53 GMT): rock_martin (Mon, 29 Jan 2018 12:42:42 GMT): jyellick (Mon, 29 Jan 2018 14:50:13 GMT): jyellick (Mon, 29 Jan 2018 14:50:13 GMT): rockleelx (Tue, 30 Jan 2018 04:12:48 GMT): NINIU09 (Tue, 30 Jan 2018 08:31:26 GMT): zhasni (Tue, 30 Jan 2018 11:15:06 GMT): NeerajKumar (Tue, 30 Jan 2018 13:20:35 GMT): NeerajKumar (Tue, 30 Jan 2018 13:23:19 GMT): zhasni (Tue, 30 Jan 2018 13:49:10 GMT): MartinKrmer (Tue, 30 Jan 2018 13:49:19 GMT): MartinKrmer (Tue, 30 Jan 2018 13:51:21 GMT): GavinPacini (Tue, 30 Jan 2018 13:54:07 GMT): kapilAtrey (Tue, 30 Jan 2018 13:54:51 GMT): rake66 (Tue, 30 Jan 2018 13:59:31 GMT): rock_martin (Tue, 30 Jan 2018 14:00:31 GMT): rock_martin (Tue, 30 Jan 
2018 14:01:26 GMT): jyellick (Tue, 30 Jan 2018 15:07:53 GMT): jyellick (Tue, 30 Jan 2018 15:09:29 GMT): jyellick (Tue, 30 Jan 2018 15:11:43 GMT): zhasni (Tue, 30 Jan 2018 15:13:20 GMT): jyellick (Tue, 30 Jan 2018 15:14:02 GMT): jyellick (Tue, 30 Jan 2018 15:14:09 GMT): jyellick (Tue, 30 Jan 2018 15:14:09 GMT): jyellick (Tue, 30 Jan 2018 15:14:27 GMT): jyellick (Tue, 30 Jan 2018 15:15:51 GMT): jyellick (Tue, 30 Jan 2018 15:16:45 GMT): jyellick (Tue, 30 Jan 2018 15:16:45 GMT): jyellick (Tue, 30 Jan 2018 15:17:27 GMT): jyellick (Tue, 30 Jan 2018 15:17:27 GMT): jyellick (Tue, 30 Jan 2018 15:17:45 GMT): jyellick (Tue, 30 Jan 2018 15:18:37 GMT): zhasni (Tue, 30 Jan 2018 15:20:32 GMT): jyellick (Tue, 30 Jan 2018 15:20:53 GMT): jyellick (Tue, 30 Jan 2018 15:21:38 GMT): zhasni (Tue, 30 Jan 2018 15:23:44 GMT): NeerajKumar (Tue, 30 Jan 2018 17:09:12 GMT): jyellick (Tue, 30 Jan 2018 17:11:27 GMT): NeerajKumar (Tue, 30 Jan 2018 17:12:29 GMT): NeerajKumar (Tue, 30 Jan 2018 17:12:50 GMT): jyellick (Tue, 30 Jan 2018 17:18:04 GMT): NeerajKumar (Tue, 30 Jan 2018 17:18:44 GMT): jyellick (Tue, 30 Jan 2018 17:18:58 GMT): NeerajKumar (Tue, 30 Jan 2018 17:19:41 GMT): NeerajKumar (Tue, 30 Jan 2018 17:20:36 GMT): NeerajKumar (Tue, 30 Jan 2018 17:20:38 GMT): asaningmaxchain123 (Tue, 30 Jan 2018 17:20:44 GMT): NeerajKumar (Tue, 30 Jan 2018 17:25:25 GMT): asaningmaxchain123 (Tue, 30 Jan 2018 17:26:32 GMT): asaningmaxchain123 (Tue, 30 Jan 2018 17:26:42 GMT): jyellick (Tue, 30 Jan 2018 17:27:28 GMT): ArnabChatterjee (Tue, 30 Jan 2018 23:53:57 GMT): Glen (Wed, 31 Jan 2018 00:34:04 GMT): ArnabChatterjee (Wed, 31 Jan 2018 00:37:42 GMT): ArnabChatterjee (Wed, 31 Jan 2018 00:37:42 GMT): Glen (Wed, 31 Jan 2018 00:43:49 GMT): jyellick (Wed, 31 Jan 2018 00:55:37 GMT): jyellick (Wed, 31 Jan 2018 00:55:37 GMT): jyellick (Wed, 31 Jan 2018 00:58:36 GMT): asaningmaxchain123 (Wed, 31 Jan 2018 01:02:02 GMT): asaningmaxchain123 (Wed, 31 Jan 2018 01:02:02 GMT): jyellick (Wed, 31 Jan 2018 01:03:47 GMT): 
ArnabChatterjee (Wed, 31 Jan 2018 02:13:27 GMT): jyellick (Wed, 31 Jan 2018 03:17:59 GMT): jyellick (Wed, 31 Jan 2018 03:17:59 GMT): ArnabChatterjee (Wed, 31 Jan 2018 03:57:04 GMT): ArnabChatterjee (Wed, 31 Jan 2018 03:57:04 GMT): zerppen (Wed, 31 Jan 2018 09:08:19 GMT): dave.enyeart (Wed, 31 Jan 2018 12:19:05 GMT): dave.enyeart (Wed, 31 Jan 2018 12:19:05 GMT): NagatoPeinI1 (Wed, 31 Jan 2018 12:19:05 GMT): CodeReaper (Wed, 31 Jan 2018 12:34:53 GMT): MartinKrmer (Wed, 31 Jan 2018 12:40:41 GMT): Vadim (Wed, 31 Jan 2018 12:45:19 GMT): MartinKrmer (Wed, 31 Jan 2018 12:51:43 GMT): Vadim (Wed, 31 Jan 2018 12:52:49 GMT): Vadim (Wed, 31 Jan 2018 12:53:23 GMT): MartinKrmer (Wed, 31 Jan 2018 12:56:00 GMT): Vadim (Wed, 31 Jan 2018 12:58:09 GMT): Dark_Knight (Wed, 31 Jan 2018 14:07:07 GMT): Dark_Knight (Wed, 31 Jan 2018 14:07:15 GMT): jyellick (Wed, 31 Jan 2018 14:44:14 GMT): jyellick (Wed, 31 Jan 2018 14:44:58 GMT): jyellick (Wed, 31 Jan 2018 14:45:18 GMT): jyellick (Wed, 31 Jan 2018 14:48:34 GMT): CodeReaper (Wed, 31 Jan 2018 14:57:33 GMT): jyellick (Wed, 31 Jan 2018 15:01:39 GMT): jyellick (Wed, 31 Jan 2018 15:02:15 GMT): jyellick (Wed, 31 Jan 2018 15:02:56 GMT): CodeReaper (Wed, 31 Jan 2018 15:03:16 GMT): CodeReaper (Wed, 31 Jan 2018 15:05:13 GMT): jyellick (Wed, 31 Jan 2018 15:07:17 GMT): CodeReaper (Wed, 31 Jan 2018 15:10:14 GMT): CodeReaper (Wed, 31 Jan 2018 15:10:44 GMT): jyellick (Wed, 31 Jan 2018 15:13:07 GMT): Dan (Wed, 31 Jan 2018 21:45:55 GMT): frankz (Thu, 01 Feb 2018 02:28:11 GMT): NeerajKumar (Thu, 01 Feb 2018 05:40:41 GMT): NeerajKumar (Thu, 01 Feb 2018 05:40:42 GMT): NeerajKumar (Thu, 01 Feb 2018 05:42:08 GMT): guoger (Thu, 01 Feb 2018 05:43:08 GMT): guoger (Thu, 01 Feb 2018 05:43:54 GMT): NeerajKumar (Thu, 01 Feb 2018 05:46:36 GMT): NeerajKumar (Thu, 01 Feb 2018 05:46:38 GMT): NeerajKumar (Thu, 01 Feb 2018 05:47:20 GMT): NeerajKumar (Thu, 01 Feb 2018 05:47:20 GMT): NeerajKumar (Thu, 01 Feb 2018 05:47:26 GMT): NeerajKumar (Thu, 01 Feb 2018 05:49:07 GMT): 
chandg12 (Thu, 01 Feb 2018 05:57:01 GMT): guoger (Thu, 01 Feb 2018 06:01:44 GMT): NeerajKumar (Thu, 01 Feb 2018 06:12:32 GMT): NeerajKumar (Thu, 01 Feb 2018 06:12:40 GMT): NeerajKumar (Thu, 01 Feb 2018 06:13:47 GMT): NeerajKumar (Thu, 01 Feb 2018 06:13:50 GMT): NeerajKumar (Thu, 01 Feb 2018 06:18:26 GMT): guoger (Thu, 01 Feb 2018 06:34:41 GMT): guoger (Thu, 01 Feb 2018 06:36:29 GMT): NeerajKumar (Thu, 01 Feb 2018 06:55:54 GMT): NeerajKumar (Thu, 01 Feb 2018 06:55:56 GMT): NeerajKumar (Thu, 01 Feb 2018 06:56:52 GMT): NeerajKumar (Thu, 01 Feb 2018 06:56:52 GMT): NeerajKumar (Thu, 01 Feb 2018 06:57:43 GMT): NeerajKumar (Thu, 01 Feb 2018 06:57:44 GMT): NeerajKumar (Thu, 01 Feb 2018 06:58:35 GMT): NeerajKumar (Thu, 01 Feb 2018 07:39:08 GMT): NeerajKumar (Thu, 01 Feb 2018 07:39:32 GMT): guoger (Thu, 01 Feb 2018 07:59:51 GMT): guoger (Thu, 01 Feb 2018 08:00:02 GMT): NeerajKumar (Thu, 01 Feb 2018 08:01:15 GMT): NeerajKumar (Thu, 01 Feb 2018 08:01:15 GMT): NeerajKumar (Thu, 01 Feb 2018 08:01:40 GMT): NeerajKumar (Thu, 01 Feb 2018 08:01:42 GMT): NeerajKumar (Thu, 01 Feb 2018 08:02:05 GMT): NeerajKumar (Thu, 01 Feb 2018 08:02:05 GMT): guoger (Thu, 01 Feb 2018 08:08:27 GMT): NeerajKumar (Thu, 01 Feb 2018 08:09:19 GMT): NeerajKumar (Thu, 01 Feb 2018 08:09:40 GMT): NeerajKumar (Thu, 01 Feb 2018 08:09:55 GMT): guoger (Thu, 01 Feb 2018 08:10:07 GMT): NeerajKumar (Thu, 01 Feb 2018 09:44:26 GMT): NeerajKumar (Thu, 01 Feb 2018 10:22:21 GMT): NeerajKumar (Thu, 01 Feb 2018 10:24:25 GMT): NeerajKumar (Thu, 01 Feb 2018 10:24:49 GMT): NeerajKumar (Thu, 01 Feb 2018 10:24:57 GMT): rake66 (Thu, 01 Feb 2018 10:43:11 GMT): rake66 (Thu, 01 Feb 2018 10:44:40 GMT): rake66 (Thu, 01 Feb 2018 10:46:00 GMT): NeerajKumar (Thu, 01 Feb 2018 10:46:32 GMT): rake66 (Thu, 01 Feb 2018 10:47:07 GMT): rake66 (Thu, 01 Feb 2018 10:47:44 GMT): NagatoPeinI1 (Thu, 01 Feb 2018 11:50:42 GMT): NagatoPeinI1 (Thu, 01 Feb 2018 11:55:30 GMT): BOGATIM (Thu, 01 Feb 2018 11:59:18 GMT): BOGATIM (Thu, 01 Feb 2018 12:03:21 
GMT): jyellick (Thu, 01 Feb 2018 14:57:31 GMT): CodeReaper (Fri, 02 Feb 2018 07:08:20 GMT): CodeReaper (Fri, 02 Feb 2018 07:08:20 GMT): CodeReaper (Fri, 02 Feb 2018 07:08:20 GMT): niteshsolanki (Fri, 02 Feb 2018 10:57:38 GMT): niteshsolanki (Fri, 02 Feb 2018 10:57:38 GMT): kapilAtrey (Fri, 02 Feb 2018 12:14:55 GMT): kapilAtrey (Fri, 02 Feb 2018 12:14:55 GMT): kapilAtrey (Fri, 02 Feb 2018 12:14:55 GMT): jyellick (Fri, 02 Feb 2018 14:29:59 GMT): jyellick (Fri, 02 Feb 2018 14:32:30 GMT): jyellick (Fri, 02 Feb 2018 14:34:34 GMT): niteshsolanki (Fri, 02 Feb 2018 15:47:51 GMT): niteshsolanki (Fri, 02 Feb 2018 15:47:51 GMT): jyellick (Fri, 02 Feb 2018 17:33:25 GMT): niteshsolanki (Sat, 03 Feb 2018 06:05:38 GMT): niteshsolanki (Sat, 03 Feb 2018 06:05:38 GMT): MartinKrmer (Sat, 03 Feb 2018 12:19:09 GMT): jyellick (Sat, 03 Feb 2018 16:10:42 GMT): niteshsolanki (Sat, 03 Feb 2018 16:24:42 GMT): jyellick (Sun, 04 Feb 2018 04:23:47 GMT): niteshsolanki (Sun, 04 Feb 2018 04:36:53 GMT): jyellick (Sun, 04 Feb 2018 04:38:39 GMT): niteshsolanki (Sun, 04 Feb 2018 04:43:40 GMT): AkshayJindal (Sun, 04 Feb 2018 08:45:47 GMT): jyellick (Sun, 04 Feb 2018 14:59:25 GMT): ajithjosek (Sun, 04 Feb 2018 20:26:56 GMT): AkshayJindal (Sun, 04 Feb 2018 21:56:10 GMT): jyellick (Sun, 04 Feb 2018 22:00:38 GMT): kapilAtrey (Mon, 05 Feb 2018 05:09:40 GMT): kapilAtrey (Mon, 05 Feb 2018 05:09:40 GMT): guoger (Mon, 05 Feb 2018 05:12:31 GMT): kapilAtrey (Mon, 05 Feb 2018 05:13:33 GMT): kapilAtrey (Mon, 05 Feb 2018 05:13:33 GMT): guoger (Mon, 05 Feb 2018 05:13:50 GMT): guoger (Mon, 05 Feb 2018 05:14:31 GMT): guoger (Mon, 05 Feb 2018 05:14:31 GMT): kapilAtrey (Mon, 05 Feb 2018 05:16:24 GMT): kapilAtrey (Mon, 05 Feb 2018 05:17:39 GMT): guoger (Mon, 05 Feb 2018 05:18:35 GMT): guoger (Mon, 05 Feb 2018 05:18:48 GMT): kapilAtrey (Mon, 05 Feb 2018 05:22:15 GMT): gen_el (Mon, 05 Feb 2018 07:33:00 GMT): gen_el (Mon, 05 Feb 2018 07:33:00 GMT): SanketPanchamia (Mon, 05 Feb 2018 09:38:03 GMT): souvik (Mon, 05 Feb 2018 
12:39:20 GMT): souvik (Mon, 05 Feb 2018 12:39:52 GMT): erzeghi (Mon, 05 Feb 2018 13:46:21 GMT): awattez (Mon, 05 Feb 2018 13:46:39 GMT): awattez (Mon, 05 Feb 2018 13:46:39 GMT): sanchezl (Mon, 05 Feb 2018 14:53:21 GMT): harsha (Mon, 05 Feb 2018 17:41:06 GMT): SanketPanchamia (Tue, 06 Feb 2018 04:38:32 GMT): yacovm (Tue, 06 Feb 2018 14:19:05 GMT): yacovm (Tue, 06 Feb 2018 16:15:01 GMT): sreedharn (Wed, 07 Feb 2018 20:33:47 GMT): manxiaqu (Thu, 08 Feb 2018 02:40:30 GMT): inatatsu (Thu, 08 Feb 2018 02:40:33 GMT): PyiTheinKyaw (Thu, 08 Feb 2018 08:47:18 GMT): kapilAtrey (Thu, 08 Feb 2018 10:46:30 GMT): kapilAtrey (Thu, 08 Feb 2018 10:46:31 GMT): kapilAtrey (Thu, 08 Feb 2018 10:48:21 GMT): kapilAtrey (Thu, 08 Feb 2018 10:48:22 GMT): jyellick (Thu, 08 Feb 2018 14:33:17 GMT): prasad.sripathi (Thu, 08 Feb 2018 15:43:18 GMT): volkanbaran (Thu, 08 Feb 2018 15:45:49 GMT): CodeReaper (Fri, 09 Feb 2018 05:37:17 GMT): CodeReaper (Fri, 09 Feb 2018 05:37:22 GMT): CodeReaper (Fri, 09 Feb 2018 05:38:37 GMT): kapilAtrey (Fri, 09 Feb 2018 05:48:39 GMT): kapilAtrey (Fri, 09 Feb 2018 05:50:21 GMT): jyellick (Fri, 09 Feb 2018 05:53:57 GMT): jyellick (Fri, 09 Feb 2018 05:53:57 GMT): jyellick (Fri, 09 Feb 2018 05:55:01 GMT): kapilAtrey (Fri, 09 Feb 2018 06:15:45 GMT): jyellick (Fri, 09 Feb 2018 06:23:18 GMT): kapilAtrey (Fri, 09 Feb 2018 06:24:00 GMT): jyellick (Fri, 09 Feb 2018 06:24:11 GMT): kapilAtrey (Fri, 09 Feb 2018 06:24:45 GMT): kapilAtrey (Fri, 09 Feb 2018 06:29:16 GMT): kapilAtrey (Fri, 09 Feb 2018 06:29:57 GMT): jyellick (Fri, 09 Feb 2018 06:30:25 GMT): kapilAtrey (Fri, 09 Feb 2018 06:39:29 GMT): jyellick (Fri, 09 Feb 2018 06:40:17 GMT): jyellick (Fri, 09 Feb 2018 06:40:17 GMT): jyellick (Fri, 09 Feb 2018 06:43:23 GMT): jyellick (Fri, 09 Feb 2018 06:43:23 GMT): jyellick (Fri, 09 Feb 2018 06:43:58 GMT): jyellick (Fri, 09 Feb 2018 06:44:43 GMT): kapilAtrey (Fri, 09 Feb 2018 06:46:16 GMT): kapilAtrey (Fri, 09 Feb 2018 07:01:00 GMT): kapilAtrey (Fri, 09 Feb 2018 07:01:34 GMT): 
kapilAtrey (Fri, 09 Feb 2018 07:47:48 GMT): DannyWong (Fri, 09 Feb 2018 07:51:41 GMT): DannyWong (Fri, 09 Feb 2018 07:58:08 GMT): jyellick (Fri, 09 Feb 2018 19:55:58 GMT): vu3mmg (Sat, 10 Feb 2018 10:52:50 GMT): jyellick (Sat, 10 Feb 2018 15:24:19 GMT): vu3mmg (Sat, 10 Feb 2018 23:49:34 GMT): vu3mmg (Sun, 11 Feb 2018 03:33:57 GMT): vu3mmg (Sun, 11 Feb 2018 03:34:44 GMT): vu3mmg (Sun, 11 Feb 2018 03:35:28 GMT): jyellick (Sun, 11 Feb 2018 15:52:48 GMT): jyellick (Sun, 11 Feb 2018 15:52:48 GMT): jyellick (Sun, 11 Feb 2018 15:54:02 GMT): jyellick (Sun, 11 Feb 2018 16:24:46 GMT): Bchainer (Mon, 12 Feb 2018 05:08:30 GMT): Bchainer (Mon, 12 Feb 2018 05:54:16 GMT): jyellick (Mon, 12 Feb 2018 06:37:09 GMT): qizhang (Mon, 12 Feb 2018 15:06:44 GMT): jyellick (Mon, 12 Feb 2018 15:19:41 GMT): jyellick (Mon, 12 Feb 2018 15:19:41 GMT): dsanchezseco (Mon, 12 Feb 2018 16:05:40 GMT): ohmeraka (Mon, 12 Feb 2018 16:24:20 GMT): vu3mmg (Tue, 13 Feb 2018 05:17:33 GMT): vu3mmg (Tue, 13 Feb 2018 05:20:21 GMT): vu3mmg (Tue, 13 Feb 2018 05:20:47 GMT): jyellick (Tue, 13 Feb 2018 05:23:58 GMT): jyellick (Tue, 13 Feb 2018 05:25:30 GMT): vu3mmg (Tue, 13 Feb 2018 10:11:54 GMT): vu3mmg (Tue, 13 Feb 2018 10:12:46 GMT): jyellick (Tue, 13 Feb 2018 18:32:36 GMT): vu3mmg (Wed, 14 Feb 2018 02:29:25 GMT): shalinigpt (Wed, 14 Feb 2018 09:09:34 GMT): ConstC (Thu, 15 Feb 2018 04:12:57 GMT): Ryan2 (Thu, 15 Feb 2018 06:48:43 GMT): vudathasaiomkar (Thu, 15 Feb 2018 07:38:10 GMT): PyiTheinKyaw (Thu, 15 Feb 2018 11:52:36 GMT): PyiTheinKyaw (Thu, 15 Feb 2018 11:52:50 GMT): kostas (Thu, 15 Feb 2018 13:40:45 GMT): PyiTheinKyaw (Thu, 15 Feb 2018 13:41:31 GMT): kostas (Thu, 15 Feb 2018 13:42:05 GMT): PyiTheinKyaw (Thu, 15 Feb 2018 13:43:52 GMT): PyiTheinKyaw (Thu, 15 Feb 2018 13:45:20 GMT): kostas (Thu, 15 Feb 2018 13:47:40 GMT): PyiTheinKyaw (Thu, 15 Feb 2018 13:48:41 GMT): kostas (Thu, 15 Feb 2018 13:51:41 GMT): kostas (Thu, 15 Feb 2018 13:51:41 GMT): kostas (Thu, 15 Feb 2018 13:51:41 GMT): kostas (Thu, 15 Feb 2018 
13:51:56 GMT): kostas (Thu, 15 Feb 2018 13:52:18 GMT): SimonOberzan (Fri, 16 Feb 2018 10:09:34 GMT): Pranoti (Fri, 16 Feb 2018 11:55:41 GMT): Pranoti (Fri, 16 Feb 2018 11:59:01 GMT): kostas (Fri, 16 Feb 2018 14:03:47 GMT): eclairamb (Sat, 17 Feb 2018 18:01:01 GMT): vu3mmg (Sun, 18 Feb 2018 15:05:03 GMT): vu3mmg (Sun, 18 Feb 2018 15:05:11 GMT): vu3mmg (Sun, 18 Feb 2018 15:05:45 GMT): vu3mmg (Sun, 18 Feb 2018 15:06:21 GMT): kostas (Sun, 18 Feb 2018 16:05:00 GMT): vu3mmg (Mon, 19 Feb 2018 01:14:37 GMT): PyiTheinKyaw (Mon, 19 Feb 2018 11:02:35 GMT): vitiko (Mon, 19 Feb 2018 11:44:57 GMT): jyellick (Mon, 19 Feb 2018 14:50:30 GMT): jyellick (Mon, 19 Feb 2018 14:50:30 GMT): jyellick (Mon, 19 Feb 2018 14:50:52 GMT): PyiTheinKyaw (Tue, 20 Feb 2018 04:02:22 GMT): PyiTheinKyaw (Tue, 20 Feb 2018 04:02:42 GMT): jyellick (Tue, 20 Feb 2018 04:55:53 GMT): PyiTheinKyaw (Tue, 20 Feb 2018 04:56:23 GMT): PyiTheinKyaw (Tue, 20 Feb 2018 04:57:01 GMT): PyiTheinKyaw (Tue, 20 Feb 2018 04:57:08 GMT): jyellick (Tue, 20 Feb 2018 05:08:11 GMT): NeerajKumar (Tue, 20 Feb 2018 07:41:46 GMT): NeerajKumar (Tue, 20 Feb 2018 07:42:11 GMT): paul.sitoh (Tue, 20 Feb 2018 09:31:42 GMT): paul.sitoh (Tue, 20 Feb 2018 09:31:56 GMT): JiyunYang (Tue, 20 Feb 2018 10:49:03 GMT): JiyunYang (Tue, 20 Feb 2018 10:49:44 GMT): JiyunYang (Tue, 20 Feb 2018 12:15:21 GMT): kostas (Tue, 20 Feb 2018 12:37:24 GMT): kostas (Tue, 20 Feb 2018 12:37:38 GMT): NeerajKumar (Tue, 20 Feb 2018 12:38:18 GMT): kostas (Tue, 20 Feb 2018 12:39:30 GMT): NeerajKumar (Tue, 20 Feb 2018 12:39:55 GMT): NeerajKumar (Tue, 20 Feb 2018 12:40:11 GMT): kostas (Tue, 20 Feb 2018 12:40:42 GMT): kostas (Tue, 20 Feb 2018 12:41:27 GMT): NeerajKumar (Tue, 20 Feb 2018 12:41:46 GMT): NeerajKumar (Tue, 20 Feb 2018 12:41:57 GMT): NeerajKumar (Tue, 20 Feb 2018 12:44:05 GMT): NeerajKumar (Tue, 20 Feb 2018 12:44:22 GMT): NeerajKumar (Tue, 20 Feb 2018 12:45:03 GMT): NeerajKumar (Tue, 20 Feb 2018 12:45:11 GMT): NeerajKumar (Tue, 20 Feb 2018 12:45:13 GMT): 
[Message bodies missing from this export; only sender/timestamp headers survive. Conversation among NeerajKumar, yacovm, kostas, jyellick, daanporon, Wangrj, PyiTheinKyaw, vu3mmg, Ryan2, vieiramanoel, pankajcheema, bandreghetti, sanchezl, azur3s0ng, and others, spanning Tue, 20 Feb 2018 12:50:54 GMT through Tue, 27 Mar 2018 06:27:24 GMT.]
tonyyang132 (Tue, 27 Mar 2018 07:34:12 GMT): bandreghetti (Tue, 27 Mar 2018 11:47:34 GMT): wbhagan (Tue, 27 Mar 2018 16:06:05 GMT): chadevans (Tue, 27 Mar 2018 16:24:35 GMT): jyellick (Tue, 27 Mar 2018 17:41:15 GMT): Ryan2 (Tue, 27 Mar 2018 18:23:07 GMT): Ryan2 (Tue, 27 Mar 2018 18:23:07 GMT): jyellick (Tue, 27 Mar 2018 18:31:44 GMT): thoduerr (Tue, 27 Mar 2018 21:51:53 GMT): Ryan2 (Wed, 28 Mar 2018 02:29:07 GMT): jyellick (Wed, 28 Mar 2018 02:41:27 GMT): Ryan2 (Wed, 28 Mar 2018 02:48:27 GMT): NeerajKumar (Wed, 28 Mar 2018 10:39:00 GMT): jyellick (Thu, 29 Mar 2018 01:15:53 GMT): jyellick (Thu, 29 Mar 2018 01:15:53 GMT): richzhao (Thu, 29 Mar 2018 16:19:17 GMT): kerokhin (Fri, 30 Mar 2018 13:24:44 GMT): jyellick (Fri, 30 Mar 2018 13:55:31 GMT): jyellick (Fri, 30 Mar 2018 13:55:41 GMT): kerokhin (Fri, 30 Mar 2018 15:35:37 GMT): kerokhin (Fri, 30 Mar 2018 15:35:37 GMT): jyellick (Fri, 30 Mar 2018 15:44:42 GMT): kerokhin (Fri, 30 Mar 2018 16:05:02 GMT): jyellick (Fri, 30 Mar 2018 16:05:54 GMT): jyellick (Fri, 30 Mar 2018 16:06:22 GMT): jyellick (Fri, 30 Mar 2018 16:06:22 GMT): albert.lacambra (Mon, 02 Apr 2018 18:07:13 GMT): albert.lacambra (Mon, 02 Apr 2018 18:07:38 GMT): albert.lacambra (Mon, 02 Apr 2018 18:08:03 GMT): albert.lacambra (Mon, 02 Apr 2018 18:08:17 GMT): albert.lacambra (Mon, 02 Apr 2018 18:09:20 GMT): jyellick (Mon, 02 Apr 2018 18:20:08 GMT): albert.lacambra (Mon, 02 Apr 2018 18:24:33 GMT): albert.lacambra (Mon, 02 Apr 2018 18:24:45 GMT): albert.lacambra (Mon, 02 Apr 2018 18:25:14 GMT): albert.lacambra (Mon, 02 Apr 2018 18:25:37 GMT): albert.lacambra (Mon, 02 Apr 2018 18:25:46 GMT): albert.lacambra (Mon, 02 Apr 2018 18:30:17 GMT): albert.lacambra (Mon, 02 Apr 2018 18:30:41 GMT): jyellick (Mon, 02 Apr 2018 18:33:00 GMT): jyellick (Mon, 02 Apr 2018 18:33:00 GMT): jyellick (Mon, 02 Apr 2018 18:33:27 GMT): jyellick (Mon, 02 Apr 2018 18:33:27 GMT): jyellick (Mon, 02 Apr 2018 18:34:10 GMT): albert.lacambra (Mon, 02 Apr 2018 18:36:18 GMT): albert.lacambra 
(Mon, 02 Apr 2018 18:36:40 GMT): albert.lacambra (Mon, 02 Apr 2018 18:36:47 GMT): albert.lacambra (Mon, 02 Apr 2018 18:39:01 GMT): jyellick (Mon, 02 Apr 2018 19:05:34 GMT): Ryan2 (Tue, 03 Apr 2018 02:55:12 GMT): jyellick (Tue, 03 Apr 2018 02:57:34 GMT): jyellick (Tue, 03 Apr 2018 02:57:51 GMT): jyellick (Tue, 03 Apr 2018 02:58:07 GMT): jyellick (Tue, 03 Apr 2018 02:58:34 GMT): Ryan2 (Tue, 03 Apr 2018 04:22:45 GMT): Ryan2 (Tue, 03 Apr 2018 04:22:45 GMT): jyellick (Tue, 03 Apr 2018 04:24:01 GMT): Ryan2 (Tue, 03 Apr 2018 04:25:58 GMT): jyellick (Tue, 03 Apr 2018 04:27:59 GMT): Ryan2 (Tue, 03 Apr 2018 04:28:53 GMT): albert.lacambra (Tue, 03 Apr 2018 04:57:13 GMT): ganeshraut (Tue, 03 Apr 2018 05:27:16 GMT): Ryan2 (Tue, 03 Apr 2018 06:28:21 GMT): Ryan2 (Tue, 03 Apr 2018 06:28:21 GMT): Ryan2 (Tue, 03 Apr 2018 06:28:21 GMT): Ryan2 (Tue, 03 Apr 2018 06:28:21 GMT): Ryan2 (Tue, 03 Apr 2018 06:28:21 GMT): Taffies (Tue, 03 Apr 2018 07:08:50 GMT): vudathasaiomkar (Tue, 03 Apr 2018 11:05:46 GMT): jaswanth (Tue, 03 Apr 2018 11:13:35 GMT): jaswanth (Tue, 03 Apr 2018 11:23:42 GMT): jaswanth (Tue, 03 Apr 2018 11:23:42 GMT): jaswanth (Tue, 03 Apr 2018 11:23:42 GMT): jaswanth (Tue, 03 Apr 2018 11:23:42 GMT): jaswanth (Tue, 03 Apr 2018 11:39:10 GMT): jyellick (Tue, 03 Apr 2018 13:54:16 GMT): iamdm (Tue, 03 Apr 2018 14:24:06 GMT): jyellick (Tue, 03 Apr 2018 14:37:16 GMT): jyellick (Tue, 03 Apr 2018 14:37:31 GMT): iamdm (Tue, 03 Apr 2018 14:39:02 GMT): jyellick (Tue, 03 Apr 2018 14:41:59 GMT): jyellick (Tue, 03 Apr 2018 14:43:05 GMT): iamdm (Tue, 03 Apr 2018 14:46:54 GMT): jyellick (Tue, 03 Apr 2018 14:48:15 GMT): jyellick (Tue, 03 Apr 2018 14:48:15 GMT): jyellick (Tue, 03 Apr 2018 14:49:04 GMT): iamdm (Tue, 03 Apr 2018 15:06:45 GMT): iamdm (Tue, 03 Apr 2018 15:09:37 GMT): Rumeel_Hussain (Tue, 03 Apr 2018 15:11:16 GMT): jyellick (Tue, 03 Apr 2018 15:12:23 GMT): iamdm (Tue, 03 Apr 2018 16:11:52 GMT): jyellick (Tue, 03 Apr 2018 16:12:49 GMT): iamdm (Tue, 03 Apr 2018 16:14:17 GMT): iamdm 
(Tue, 03 Apr 2018 16:16:44 GMT): jyellick (Tue, 03 Apr 2018 16:45:28 GMT): jyellick (Tue, 03 Apr 2018 16:45:52 GMT): iamdm (Tue, 03 Apr 2018 16:50:24 GMT): iamdm (Tue, 03 Apr 2018 16:51:07 GMT): jyellick (Tue, 03 Apr 2018 17:13:26 GMT): jyellick (Tue, 03 Apr 2018 17:13:34 GMT): iamdm (Tue, 03 Apr 2018 17:14:29 GMT): iamdm (Tue, 03 Apr 2018 17:15:03 GMT): jyellick (Tue, 03 Apr 2018 17:20:02 GMT): jyellick (Tue, 03 Apr 2018 17:20:30 GMT): iamdm (Tue, 03 Apr 2018 17:21:00 GMT): iamdm (Tue, 03 Apr 2018 17:21:43 GMT): jyellick (Tue, 03 Apr 2018 17:21:53 GMT): patelan (Tue, 03 Apr 2018 20:07:08 GMT): patelan (Tue, 03 Apr 2018 20:07:08 GMT): patelan (Tue, 03 Apr 2018 20:08:50 GMT): jyellick (Tue, 03 Apr 2018 20:23:41 GMT): jyellick (Tue, 03 Apr 2018 20:23:41 GMT): patelan (Tue, 03 Apr 2018 20:29:16 GMT): jyellick (Tue, 03 Apr 2018 20:30:02 GMT): patelan (Tue, 03 Apr 2018 20:34:46 GMT): Ryan2 (Wed, 04 Apr 2018 01:28:56 GMT): Ryan2 (Wed, 04 Apr 2018 01:28:56 GMT): Ryan2 (Wed, 04 Apr 2018 01:28:56 GMT): jaswanth (Wed, 04 Apr 2018 04:48:38 GMT): jaswanth (Wed, 04 Apr 2018 04:48:38 GMT): jaswanth (Wed, 04 Apr 2018 04:48:38 GMT): jaswanth (Wed, 04 Apr 2018 04:48:38 GMT): jaswanth (Wed, 04 Apr 2018 04:48:38 GMT): Ryan2 (Wed, 04 Apr 2018 06:15:30 GMT): Ryan2 (Wed, 04 Apr 2018 06:15:30 GMT): Ryan2 (Wed, 04 Apr 2018 06:15:30 GMT): Ryan2 (Wed, 04 Apr 2018 06:15:30 GMT): Ryan2 (Wed, 04 Apr 2018 06:15:30 GMT): chenjun-bj (Wed, 04 Apr 2018 08:11:21 GMT): Ryan2 (Wed, 04 Apr 2018 10:11:21 GMT): Ryan2 (Wed, 04 Apr 2018 10:11:21 GMT): Stecec (Wed, 04 Apr 2018 12:26:39 GMT): jyellick (Wed, 04 Apr 2018 13:49:47 GMT): patelan (Wed, 04 Apr 2018 15:14:44 GMT): patelan (Wed, 04 Apr 2018 15:15:52 GMT): patelan (Wed, 04 Apr 2018 15:16:37 GMT): jyellick (Wed, 04 Apr 2018 15:43:44 GMT): patelan (Wed, 04 Apr 2018 15:47:17 GMT): jyellick (Wed, 04 Apr 2018 16:23:08 GMT): patelan (Wed, 04 Apr 2018 16:43:30 GMT): yoheiueda (Thu, 05 Apr 2018 03:51:09 GMT): yoheiueda (Thu, 05 Apr 2018 03:51:40 GMT): 
yoheiueda (Thu, 05 Apr 2018 03:51:40 GMT): yoheiueda (Thu, 05 Apr 2018 03:51:40 GMT): jaswanth (Thu, 05 Apr 2018 04:23:30 GMT): jyellick (Thu, 05 Apr 2018 04:40:19 GMT): jyellick (Thu, 05 Apr 2018 04:41:18 GMT): yoheiueda (Thu, 05 Apr 2018 05:01:55 GMT): yoheiueda (Thu, 05 Apr 2018 05:02:48 GMT): jaswanth (Thu, 05 Apr 2018 06:48:04 GMT): jaswanth (Thu, 05 Apr 2018 06:48:04 GMT): dsanchezseco (Thu, 05 Apr 2018 10:24:54 GMT): dsanchezseco (Thu, 05 Apr 2018 10:26:35 GMT): dsanchezseco (Thu, 05 Apr 2018 10:26:35 GMT): dsanchezseco (Thu, 05 Apr 2018 10:26:35 GMT): dsanchezseco (Thu, 05 Apr 2018 10:42:03 GMT): sanchezl (Thu, 05 Apr 2018 14:54:42 GMT): dsanchezseco (Thu, 05 Apr 2018 14:56:02 GMT): dsanchezseco (Thu, 05 Apr 2018 14:57:24 GMT): sanchezl (Thu, 05 Apr 2018 15:04:25 GMT): sanchezl (Thu, 05 Apr 2018 15:05:01 GMT): dsanchezseco (Thu, 05 Apr 2018 15:08:07 GMT): jyellick (Thu, 05 Apr 2018 15:39:25 GMT): jyellick (Thu, 05 Apr 2018 15:42:10 GMT): Ryan2 (Fri, 06 Apr 2018 00:08:51 GMT): Ryan2 (Fri, 06 Apr 2018 09:43:10 GMT): Ryan2 (Fri, 06 Apr 2018 09:43:10 GMT): Ryan2 (Fri, 06 Apr 2018 09:43:10 GMT): Ryan2 (Fri, 06 Apr 2018 09:43:10 GMT): Ryan2 (Fri, 06 Apr 2018 09:43:10 GMT): Ryan2 (Fri, 06 Apr 2018 09:43:10 GMT): dsanchezseco (Fri, 06 Apr 2018 09:46:18 GMT): ranjan008 (Fri, 06 Apr 2018 10:04:28 GMT): bh4rtp (Fri, 06 Apr 2018 12:57:53 GMT): mastersingh24 (Fri, 06 Apr 2018 13:45:47 GMT): mastersingh24 (Fri, 06 Apr 2018 13:46:34 GMT): jyellick (Fri, 06 Apr 2018 14:07:48 GMT): asaningmaxchain123 (Sat, 07 Apr 2018 02:45:21 GMT): asaningmaxchain123 (Sat, 07 Apr 2018 02:45:21 GMT): asaningmaxchain123 (Sat, 07 Apr 2018 02:45:21 GMT): asaningmaxchain123 (Sat, 07 Apr 2018 02:45:21 GMT): asaningmaxchain123 (Sat, 07 Apr 2018 02:48:15 GMT): asaningmaxchain123 (Sat, 07 Apr 2018 02:50:11 GMT): asaningmaxchain123 (Sat, 07 Apr 2018 02:50:11 GMT): kkado (Sat, 07 Apr 2018 04:13:01 GMT): vu3mmg (Sat, 07 Apr 2018 06:47:28 GMT): mastersingh24 (Sat, 07 Apr 2018 08:27:22 GMT): vu3mmg 
(Sat, 07 Apr 2018 08:34:24 GMT): Ryan2 (Sun, 08 Apr 2018 06:24:39 GMT): terby (Mon, 09 Apr 2018 05:19:00 GMT): dsanchezseco (Mon, 09 Apr 2018 08:53:26 GMT): Ryan2 (Mon, 09 Apr 2018 09:45:09 GMT): dsanchezseco (Mon, 09 Apr 2018 11:09:49 GMT): dsanchezseco (Mon, 09 Apr 2018 11:09:49 GMT): dsanchezseco (Mon, 09 Apr 2018 11:09:49 GMT): rogeriofza (Mon, 09 Apr 2018 11:32:48 GMT): Ryan2 (Tue, 10 Apr 2018 04:53:18 GMT): Ryan2 (Tue, 10 Apr 2018 09:50:15 GMT): Ryan2 (Tue, 10 Apr 2018 09:50:15 GMT): C0rWin (Tue, 10 Apr 2018 10:00:49 GMT): C0rWin (Tue, 10 Apr 2018 10:02:24 GMT): jyellick (Tue, 10 Apr 2018 13:48:45 GMT): Ryan2 (Tue, 10 Apr 2018 23:48:37 GMT): Ryan2 (Wed, 11 Apr 2018 01:27:41 GMT): jyellick (Wed, 11 Apr 2018 01:29:52 GMT): Glen (Wed, 11 Apr 2018 01:44:36 GMT): Ryan2 (Wed, 11 Apr 2018 01:47:07 GMT): jyellick (Wed, 11 Apr 2018 01:51:55 GMT): Ryan2 (Wed, 11 Apr 2018 01:52:47 GMT): ibmamnt (Wed, 11 Apr 2018 02:14:31 GMT): jyellick (Wed, 11 Apr 2018 02:27:46 GMT): jyellick (Wed, 11 Apr 2018 02:27:46 GMT): jyellick (Wed, 11 Apr 2018 02:28:28 GMT): jyellick (Wed, 11 Apr 2018 02:28:28 GMT): ibmamnt (Wed, 11 Apr 2018 02:30:07 GMT): Ryan2 (Wed, 11 Apr 2018 02:52:35 GMT): Ryan2 (Wed, 11 Apr 2018 02:52:35 GMT): asaningmaxchain123 (Wed, 11 Apr 2018 09:55:20 GMT): asaningmaxchain123 (Wed, 11 Apr 2018 09:55:27 GMT): asaningmaxchain123 (Wed, 11 Apr 2018 09:55:27 GMT): asaningmaxchain123 (Wed, 11 Apr 2018 09:55:27 GMT): asaningmaxchain123 (Wed, 11 Apr 2018 09:55:27 GMT): jojialex2 (Wed, 11 Apr 2018 12:14:36 GMT): MonnyClara (Wed, 11 Apr 2018 13:12:51 GMT): jyellick (Wed, 11 Apr 2018 14:11:30 GMT): asaningmaxchain123 (Wed, 11 Apr 2018 14:23:04 GMT): asaningmaxchain123 (Wed, 11 Apr 2018 14:23:04 GMT): jyellick (Wed, 11 Apr 2018 14:23:30 GMT): asaningmaxchain123 (Wed, 11 Apr 2018 14:25:24 GMT): asaningmaxchain123 (Wed, 11 Apr 2018 14:25:24 GMT): asaningmaxchain123 (Wed, 11 Apr 2018 14:25:24 GMT): jyellick (Wed, 11 Apr 2018 14:44:23 GMT): asaningmaxchain123 (Wed, 11 Apr 2018 
15:05:57 GMT): asaningmaxchain123 (Wed, 11 Apr 2018 15:05:57 GMT): asaningmaxchain123 (Wed, 11 Apr 2018 15:05:57 GMT): asaningmaxchain123 (Wed, 11 Apr 2018 15:05:57 GMT): asaningmaxchain123 (Wed, 11 Apr 2018 15:07:45 GMT): asaningmaxchain123 (Wed, 11 Apr 2018 15:08:43 GMT): asaningmaxchain123 (Wed, 11 Apr 2018 15:08:43 GMT): asaningmaxchain123 (Wed, 11 Apr 2018 15:08:43 GMT): asaningmaxchain123 (Wed, 11 Apr 2018 15:08:43 GMT): asaningmaxchain123 (Wed, 11 Apr 2018 15:08:43 GMT): asaningmaxchain123 (Wed, 11 Apr 2018 15:08:43 GMT): jyellick (Wed, 11 Apr 2018 15:27:14 GMT): asaningmaxchain123 (Wed, 11 Apr 2018 15:28:54 GMT): asaningmaxchain123 (Wed, 11 Apr 2018 15:28:54 GMT): asaningmaxchain123 (Wed, 11 Apr 2018 15:28:54 GMT): asaningmaxchain123 (Wed, 11 Apr 2018 15:33:05 GMT): asaningmaxchain123 (Wed, 11 Apr 2018 15:33:05 GMT): asaningmaxchain123 (Wed, 11 Apr 2018 15:33:05 GMT): asaningmaxchain123 (Wed, 11 Apr 2018 15:33:05 GMT): asaningmaxchain123 (Wed, 11 Apr 2018 15:33:05 GMT): asaningmaxchain123 (Wed, 11 Apr 2018 15:33:05 GMT): asaningmaxchain123 (Wed, 11 Apr 2018 15:34:04 GMT): mozkarakoc (Wed, 11 Apr 2018 19:24:37 GMT): mozkarakoc (Wed, 11 Apr 2018 21:08:20 GMT): mozkarakoc (Wed, 11 Apr 2018 21:08:20 GMT): mozkarakoc (Wed, 11 Apr 2018 21:08:20 GMT): jyellick (Wed, 11 Apr 2018 21:12:24 GMT): mozkarakoc (Wed, 11 Apr 2018 21:56:16 GMT): Ryan2 (Thu, 12 Apr 2018 00:07:31 GMT): Ryan2 (Thu, 12 Apr 2018 02:15:43 GMT): jyellick (Thu, 12 Apr 2018 02:31:49 GMT): jyellick (Thu, 12 Apr 2018 02:32:13 GMT): Ryan2 (Thu, 12 Apr 2018 02:37:29 GMT): jyellick (Thu, 12 Apr 2018 03:28:34 GMT): Ryan2 (Thu, 12 Apr 2018 05:07:11 GMT): anishman (Thu, 12 Apr 2018 07:36:30 GMT): anishman (Thu, 12 Apr 2018 07:36:30 GMT): anishman (Thu, 12 Apr 2018 07:36:30 GMT): anishman (Thu, 12 Apr 2018 07:36:30 GMT): anishman (Thu, 12 Apr 2018 07:36:30 GMT): anishman (Thu, 12 Apr 2018 07:45:52 GMT): Glen (Thu, 12 Apr 2018 07:57:50 GMT): Glen (Thu, 12 Apr 2018 07:57:50 GMT): mozkarakoc (Thu, 12 Apr 2018 
09:59:17 GMT): mozkarakoc (Thu, 12 Apr 2018 09:59:17 GMT): jyellick (Thu, 12 Apr 2018 13:31:49 GMT): davidgsmits (Thu, 12 Apr 2018 13:49:43 GMT): mozkarakoc (Thu, 12 Apr 2018 15:13:56 GMT): jyellick (Thu, 12 Apr 2018 15:14:28 GMT): jyellick (Thu, 12 Apr 2018 15:14:58 GMT): mozkarakoc (Thu, 12 Apr 2018 15:15:16 GMT): jyellick (Thu, 12 Apr 2018 15:15:30 GMT): mozkarakoc (Thu, 12 Apr 2018 15:18:05 GMT): jyellick (Thu, 12 Apr 2018 15:35:08 GMT): jyellick (Thu, 12 Apr 2018 15:35:36 GMT): jyellick (Thu, 12 Apr 2018 15:35:59 GMT): jyellick (Thu, 12 Apr 2018 15:36:15 GMT): mozkarakoc (Thu, 12 Apr 2018 15:41:55 GMT): anishman (Thu, 12 Apr 2018 16:06:09 GMT): anishman (Thu, 12 Apr 2018 16:06:09 GMT): Ryan2 (Fri, 13 Apr 2018 00:22:59 GMT): Ryan2 (Fri, 13 Apr 2018 00:22:59 GMT): Ryan2 (Fri, 13 Apr 2018 00:22:59 GMT): Ryan2 (Fri, 13 Apr 2018 00:22:59 GMT): Ryan2 (Fri, 13 Apr 2018 00:22:59 GMT): jyellick (Fri, 13 Apr 2018 02:11:44 GMT): jyellick (Fri, 13 Apr 2018 02:11:51 GMT): Glen (Fri, 13 Apr 2018 03:50:30 GMT): Glen (Fri, 13 Apr 2018 03:50:53 GMT): jyellick (Fri, 13 Apr 2018 03:51:48 GMT): Glen (Fri, 13 Apr 2018 03:52:09 GMT): Glen (Fri, 13 Apr 2018 03:54:02 GMT): jyellick (Fri, 13 Apr 2018 04:26:30 GMT): Glen (Fri, 13 Apr 2018 06:31:23 GMT): bel0335 (Fri, 13 Apr 2018 08:37:52 GMT): mozkarakoc (Fri, 13 Apr 2018 12:22:07 GMT): JayPandya (Fri, 13 Apr 2018 12:58:38 GMT): jyellick (Fri, 13 Apr 2018 13:25:42 GMT): kly4 (Fri, 13 Apr 2018 20:22:53 GMT): Glen (Sat, 14 Apr 2018 02:15:30 GMT): Glen (Sat, 14 Apr 2018 02:16:35 GMT): jyellick (Sat, 14 Apr 2018 02:41:51 GMT): jyellick (Sat, 14 Apr 2018 02:42:20 GMT): jyellick (Sat, 14 Apr 2018 02:42:44 GMT): Glen (Sat, 14 Apr 2018 02:43:15 GMT): mozkarakoc (Sat, 14 Apr 2018 15:13:18 GMT): mozkarakoc (Sat, 14 Apr 2018 15:13:18 GMT): mozkarakoc (Sat, 14 Apr 2018 15:13:18 GMT): mozkarakoc (Sat, 14 Apr 2018 15:13:18 GMT): mozkarakoc (Sat, 14 Apr 2018 15:13:18 GMT): jyellick (Sun, 15 Apr 2018 02:49:41 GMT): jyellick (Sun, 15 Apr 2018 02:49:41 
GMT): mozkarakoc (Sun, 15 Apr 2018 05:49:13 GMT): jyellick (Sun, 15 Apr 2018 17:26:38 GMT): mozkarakoc (Sun, 15 Apr 2018 18:37:01 GMT): mozkarakoc (Sun, 15 Apr 2018 18:44:25 GMT): mozkarakoc (Sun, 15 Apr 2018 18:44:25 GMT): mozkarakoc (Sun, 15 Apr 2018 18:44:25 GMT): jyellick (Mon, 16 Apr 2018 04:33:13 GMT): jyellick (Mon, 16 Apr 2018 04:33:28 GMT): mozkarakoc (Mon, 16 Apr 2018 06:33:17 GMT): mozkarakoc (Mon, 16 Apr 2018 06:33:17 GMT): SaraEmily (Mon, 16 Apr 2018 09:50:38 GMT): Mihai.A (Mon, 16 Apr 2018 10:41:47 GMT): jyellick (Mon, 16 Apr 2018 13:30:33 GMT): mozkarakoc (Mon, 16 Apr 2018 13:45:03 GMT): jyellick (Mon, 16 Apr 2018 13:50:23 GMT): mozkarakoc (Mon, 16 Apr 2018 13:54:32 GMT): mozkarakoc (Mon, 16 Apr 2018 21:59:13 GMT): mozkarakoc (Mon, 16 Apr 2018 21:59:13 GMT): ranjan008 (Tue, 17 Apr 2018 09:10:23 GMT): Clod16 (Tue, 17 Apr 2018 09:50:24 GMT): ascatox (Tue, 17 Apr 2018 11:01:45 GMT): ascatox (Tue, 17 Apr 2018 11:02:52 GMT): ascatox (Tue, 17 Apr 2018 11:02:52 GMT): ascatox (Tue, 17 Apr 2018 11:02:52 GMT): ascatox (Tue, 17 Apr 2018 11:06:12 GMT): Gh0stR0ck (Tue, 17 Apr 2018 12:59:15 GMT): Unni_1994 (Tue, 17 Apr 2018 13:23:14 GMT): Unni_1994 (Tue, 17 Apr 2018 13:25:41 GMT): Unni_1994 (Tue, 17 Apr 2018 13:29:26 GMT): jyellick (Tue, 17 Apr 2018 13:30:14 GMT): jyellick (Tue, 17 Apr 2018 13:31:32 GMT): jyellick (Tue, 17 Apr 2018 13:32:14 GMT): kkermanizadeh (Tue, 17 Apr 2018 21:04:28 GMT): Unni_1994 (Wed, 18 Apr 2018 05:08:14 GMT): pjjp (Wed, 18 Apr 2018 12:47:49 GMT): mozkarakoc (Wed, 18 Apr 2018 13:03:49 GMT): Unni_1994 (Wed, 18 Apr 2018 13:14:04 GMT): Unni_1994 (Wed, 18 Apr 2018 13:14:16 GMT): jyellick (Wed, 18 Apr 2018 13:17:22 GMT): sh777 (Wed, 18 Apr 2018 15:19:40 GMT): Ryan2 (Thu, 19 Apr 2018 02:39:46 GMT): Ryan2 (Thu, 19 Apr 2018 02:39:46 GMT): jyellick (Thu, 19 Apr 2018 02:40:20 GMT): jyellick (Thu, 19 Apr 2018 02:40:35 GMT): jyellick (Thu, 19 Apr 2018 02:43:31 GMT): Ryan2 (Thu, 19 Apr 2018 06:21:40 GMT): Ryan2 (Thu, 19 Apr 2018 06:21:40 GMT): 
ranjan008 (Thu, 19 Apr 2018 09:51:05 GMT): duwenhui (Thu, 19 Apr 2018 12:48:36 GMT): duwenhui (Thu, 19 Apr 2018 12:48:36 GMT): duwenhui (Thu, 19 Apr 2018 12:48:36 GMT): baoyangc (Thu, 19 Apr 2018 15:09:50 GMT): jyellick (Thu, 19 Apr 2018 15:35:37 GMT): baoyangc (Thu, 19 Apr 2018 15:37:23 GMT): jyellick (Thu, 19 Apr 2018 15:37:40 GMT): jyellick (Thu, 19 Apr 2018 15:37:56 GMT): baoyangc (Thu, 19 Apr 2018 15:38:29 GMT): jyellick (Thu, 19 Apr 2018 15:38:55 GMT): baoyangc (Thu, 19 Apr 2018 15:39:56 GMT): jyellick (Thu, 19 Apr 2018 15:41:11 GMT): baoyangc (Thu, 19 Apr 2018 15:41:19 GMT): baoyangc (Thu, 19 Apr 2018 15:41:19 GMT): jyellick (Thu, 19 Apr 2018 15:55:25 GMT): asaningmaxchain123 (Thu, 19 Apr 2018 15:56:06 GMT): jyellick (Thu, 19 Apr 2018 15:56:28 GMT): asaningmaxchain123 (Thu, 19 Apr 2018 15:57:05 GMT): jyellick (Thu, 19 Apr 2018 15:58:33 GMT): asaningmaxchain123 (Thu, 19 Apr 2018 16:00:31 GMT): asaningmaxchain123 (Thu, 19 Apr 2018 16:00:31 GMT): jyellick (Thu, 19 Apr 2018 16:01:21 GMT): baoyangc (Thu, 19 Apr 2018 16:04:11 GMT): asaningmaxchain123 (Thu, 19 Apr 2018 16:05:03 GMT): asaningmaxchain123 (Thu, 19 Apr 2018 16:05:03 GMT): asaningmaxchain123 (Thu, 19 Apr 2018 16:05:03 GMT): baoyangc (Thu, 19 Apr 2018 16:05:20 GMT): asaningmaxchain123 (Thu, 19 Apr 2018 16:05:37 GMT): baoyangc (Thu, 19 Apr 2018 16:05:58 GMT): baoyangc (Thu, 19 Apr 2018 16:06:29 GMT): baoyangc (Thu, 19 Apr 2018 16:06:50 GMT): baoyangc (Thu, 19 Apr 2018 16:06:50 GMT): jyellick (Thu, 19 Apr 2018 16:06:51 GMT): jyellick (Thu, 19 Apr 2018 16:08:01 GMT): baoyangc (Thu, 19 Apr 2018 16:11:06 GMT): jyellick (Thu, 19 Apr 2018 16:11:50 GMT): jyellick (Thu, 19 Apr 2018 16:12:35 GMT): baoyangc (Thu, 19 Apr 2018 16:18:49 GMT): baoyangc (Thu, 19 Apr 2018 16:19:14 GMT): jyellick (Thu, 19 Apr 2018 16:21:00 GMT): jyellick (Thu, 19 Apr 2018 16:21:10 GMT): jyellick (Thu, 19 Apr 2018 16:22:01 GMT): baoyangc (Thu, 19 Apr 2018 16:22:44 GMT): baoyangc (Thu, 19 Apr 2018 16:29:08 GMT): baoyangc (Thu, 19 Apr 2018 
16:29:13 GMT): pankajcheema (Thu, 19 Apr 2018 16:38:35 GMT): pankajcheema (Thu, 19 Apr 2018 16:38:35 GMT): pankajcheema (Thu, 19 Apr 2018 16:38:48 GMT): pankajcheema (Thu, 19 Apr 2018 16:39:11 GMT): jyellick (Thu, 19 Apr 2018 16:40:05 GMT): jyellick (Thu, 19 Apr 2018 16:45:02 GMT): jyellick (Thu, 19 Apr 2018 16:45:24 GMT): jyellick (Thu, 19 Apr 2018 16:45:38 GMT): jyellick (Thu, 19 Apr 2018 16:46:01 GMT): jyellick (Thu, 19 Apr 2018 16:46:09 GMT): jyellick (Thu, 19 Apr 2018 16:46:14 GMT): jyellick (Thu, 19 Apr 2018 16:46:34 GMT): pankajcheema (Thu, 19 Apr 2018 16:48:13 GMT): baoyangc (Thu, 19 Apr 2018 16:48:20 GMT): jyellick (Thu, 19 Apr 2018 16:48:43 GMT): jyellick (Thu, 19 Apr 2018 16:49:05 GMT): duwenhui (Thu, 19 Apr 2018 16:49:24 GMT): baoyangc (Thu, 19 Apr 2018 16:49:26 GMT): jyellick (Thu, 19 Apr 2018 16:50:14 GMT): jyellick (Thu, 19 Apr 2018 16:50:45 GMT): baoyangc (Thu, 19 Apr 2018 16:51:05 GMT): pankajcheema (Thu, 19 Apr 2018 16:51:23 GMT): duwenhui (Thu, 19 Apr 2018 16:52:16 GMT): jyellick (Thu, 19 Apr 2018 16:53:01 GMT): pankajcheema (Thu, 19 Apr 2018 16:53:16 GMT): baoyangc (Thu, 19 Apr 2018 16:53:38 GMT): baoyangc (Thu, 19 Apr 2018 16:54:02 GMT): jyellick (Thu, 19 Apr 2018 16:54:44 GMT): baoyangc (Thu, 19 Apr 2018 16:55:10 GMT): duwenhui (Thu, 19 Apr 2018 16:56:16 GMT): jyellick (Thu, 19 Apr 2018 16:57:03 GMT): jyellick (Thu, 19 Apr 2018 16:57:24 GMT): duwenhui (Thu, 19 Apr 2018 17:01:19 GMT): duwenhui (Thu, 19 Apr 2018 17:01:19 GMT): baoyangc (Thu, 19 Apr 2018 17:05:37 GMT): baoyangc (Thu, 19 Apr 2018 17:06:29 GMT): baoyangc (Thu, 19 Apr 2018 17:11:41 GMT): jyellick (Thu, 19 Apr 2018 17:15:56 GMT): baoyangc (Thu, 19 Apr 2018 17:20:42 GMT): baoyangc (Thu, 19 Apr 2018 17:21:00 GMT): jyellick (Thu, 19 Apr 2018 17:37:21 GMT): sanchezl (Fri, 20 Apr 2018 01:55:12 GMT): jyellick (Fri, 20 Apr 2018 02:02:13 GMT): sanchezl (Fri, 20 Apr 2018 02:02:26 GMT): jyellick (Fri, 20 Apr 2018 02:02:32 GMT): jyellick (Fri, 20 Apr 2018 02:02:52 GMT): sanchezl (Fri, 20 Apr 
2018 02:20:26 GMT): jyellick (Fri, 20 Apr 2018 02:21:32 GMT): Ryan2 (Fri, 20 Apr 2018 09:48:11 GMT): yacovm (Fri, 20 Apr 2018 13:19:17 GMT): yacovm (Fri, 20 Apr 2018 13:19:30 GMT): bandreghetti (Fri, 20 Apr 2018 13:49:48 GMT): jyellick (Fri, 20 Apr 2018 13:52:14 GMT): bandreghetti (Fri, 20 Apr 2018 14:02:10 GMT): sanchezl (Sat, 21 Apr 2018 02:26:57 GMT): anishman (Sat, 21 Apr 2018 02:27:24 GMT): anishman (Sat, 21 Apr 2018 02:27:24 GMT): tiennv (Sat, 21 Apr 2018 14:47:37 GMT): tiennv (Sat, 21 Apr 2018 14:52:15 GMT): NeerajKumar (Sun, 22 Apr 2018 11:06:25 GMT): jyellick (Mon, 23 Apr 2018 01:29:49 GMT): jyellick (Mon, 23 Apr 2018 01:31:45 GMT): tiennv (Mon, 23 Apr 2018 05:51:58 GMT): jyellick (Mon, 23 Apr 2018 05:53:20 GMT): Ryan2 (Mon, 23 Apr 2018 05:56:26 GMT): Ryan2 (Mon, 23 Apr 2018 05:56:26 GMT): jyellick (Mon, 23 Apr 2018 05:58:40 GMT): Ryan2 (Mon, 23 Apr 2018 06:02:13 GMT): Ryan2 (Mon, 23 Apr 2018 06:02:13 GMT): duwenhui (Mon, 23 Apr 2018 07:02:38 GMT): duwenhui (Mon, 23 Apr 2018 07:02:38 GMT): duwenhui (Mon, 23 Apr 2018 07:02:38 GMT): kostas (Mon, 23 Apr 2018 13:21:57 GMT): kostas (Mon, 23 Apr 2018 13:22:41 GMT): kostas (Mon, 23 Apr 2018 13:28:06 GMT): kostas (Mon, 23 Apr 2018 13:30:53 GMT): kostas (Mon, 23 Apr 2018 13:31:17 GMT): duwenhui (Mon, 23 Apr 2018 14:12:01 GMT): duwenhui (Mon, 23 Apr 2018 14:12:01 GMT): duwenhui (Mon, 23 Apr 2018 14:15:00 GMT): kostas (Mon, 23 Apr 2018 14:37:14 GMT): duwenhui (Tue, 24 Apr 2018 14:21:21 GMT): jyellick (Tue, 24 Apr 2018 14:44:51 GMT): duwenhui (Tue, 24 Apr 2018 14:49:25 GMT): jyellick (Tue, 24 Apr 2018 14:53:30 GMT): duwenhui (Tue, 24 Apr 2018 14:59:01 GMT): duwenhui (Tue, 24 Apr 2018 14:59:27 GMT): jyellick (Tue, 24 Apr 2018 16:06:39 GMT): Glen (Wed, 25 Apr 2018 00:42:18 GMT): Glen (Wed, 25 Apr 2018 00:46:18 GMT): Glen (Wed, 25 Apr 2018 01:34:17 GMT): Glen (Wed, 25 Apr 2018 01:39:23 GMT): Glen (Wed, 25 Apr 2018 01:39:23 GMT): Ryan2 (Wed, 25 Apr 2018 08:31:56 GMT): JayPandya (Wed, 25 Apr 2018 11:45:29 GMT): kostas 
(Wed, 25 Apr 2018 15:05:36 GMT): kostas (Wed, 25 Apr 2018 15:05:36 GMT): kostas (Wed, 25 Apr 2018 15:06:00 GMT): kostas (Wed, 25 Apr 2018 15:07:18 GMT): kostas (Wed, 25 Apr 2018 15:08:59 GMT): Glen (Wed, 25 Apr 2018 15:14:26 GMT): bourbonkidQ (Wed, 25 Apr 2018 15:52:59 GMT): Ryan2 (Wed, 25 Apr 2018 21:06:17 GMT): Ryan2 (Thu, 26 Apr 2018 02:44:09 GMT): RahulSonanis (Thu, 26 Apr 2018 05:14:05 GMT): JayPandya (Thu, 26 Apr 2018 12:11:12 GMT): jyellick (Thu, 26 Apr 2018 13:42:07 GMT): JayPandya (Thu, 26 Apr 2018 14:19:02 GMT): jyellick (Thu, 26 Apr 2018 14:36:44 GMT): JayPandya (Thu, 26 Apr 2018 14:43:03 GMT): jyellick (Thu, 26 Apr 2018 14:44:10 GMT): jyellick (Thu, 26 Apr 2018 14:44:38 GMT): voutasaurus (Thu, 26 Apr 2018 15:47:15 GMT): JayPandya (Thu, 26 Apr 2018 18:09:47 GMT): DarshanBc (Fri, 27 Apr 2018 13:37:48 GMT): DarshanBc (Fri, 27 Apr 2018 13:38:10 GMT): DarshanBc (Fri, 27 Apr 2018 13:38:10 GMT): jyellick (Fri, 27 Apr 2018 13:46:09 GMT): jyellick (Fri, 27 Apr 2018 13:46:39 GMT): DarshanBc (Fri, 27 Apr 2018 13:47:40 GMT): DarshanBc (Fri, 27 Apr 2018 13:49:53 GMT): DarshanBc (Fri, 27 Apr 2018 13:55:02 GMT): jyellick (Fri, 27 Apr 2018 14:39:39 GMT): patelan (Fri, 27 Apr 2018 15:36:30 GMT): patelan (Fri, 27 Apr 2018 15:36:30 GMT): patelan (Fri, 27 Apr 2018 15:36:30 GMT): patelan (Fri, 27 Apr 2018 15:36:30 GMT): jyellick (Fri, 27 Apr 2018 15:45:37 GMT): jyellick (Fri, 27 Apr 2018 15:45:37 GMT): chainsaw (Fri, 27 Apr 2018 15:52:58 GMT): patelan (Fri, 27 Apr 2018 18:13:47 GMT): patelan (Fri, 27 Apr 2018 18:13:47 GMT): jyellick (Fri, 27 Apr 2018 18:19:30 GMT): jyellick (Fri, 27 Apr 2018 18:19:30 GMT): jyellick (Fri, 27 Apr 2018 18:19:30 GMT): jyellick (Fri, 27 Apr 2018 18:19:30 GMT): patelan (Fri, 27 Apr 2018 18:27:10 GMT): patelan (Fri, 27 Apr 2018 18:29:55 GMT): patelan (Fri, 27 Apr 2018 19:07:36 GMT): patelan (Fri, 27 Apr 2018 19:07:46 GMT): patelan (Fri, 27 Apr 2018 20:55:03 GMT): jyellick (Fri, 27 Apr 2018 20:55:38 GMT): patelan (Fri, 27 Apr 2018 21:08:45 GMT): 
jyellick (Fri, 27 Apr 2018 21:09:37 GMT): jyellick (Fri, 27 Apr 2018 21:09:37 GMT): patelan (Fri, 27 Apr 2018 21:12:36 GMT): jyellick (Fri, 27 Apr 2018 21:15:05 GMT): jyellick (Fri, 27 Apr 2018 21:15:47 GMT): jyellick (Fri, 27 Apr 2018 21:17:27 GMT): DarshanBc (Sat, 28 Apr 2018 11:59:00 GMT): DarshanBc (Sat, 28 Apr 2018 15:58:19 GMT): DarshanBc (Sat, 28 Apr 2018 15:58:19 GMT): DarshanBc (Sat, 28 Apr 2018 16:00:22 GMT): DarshanBc (Sat, 28 Apr 2018 16:00:30 GMT): DarshanBc (Sat, 28 Apr 2018 16:03:56 GMT): DarshanBc (Sat, 28 Apr 2018 16:03:56 GMT): DarshanBc (Sat, 28 Apr 2018 16:28:35 GMT): JayPandya (Sun, 29 Apr 2018 14:29:01 GMT): JayPandya (Sun, 29 Apr 2018 14:29:43 GMT): JayPandya (Sun, 29 Apr 2018 14:29:43 GMT): MabelOza (Sun, 29 Apr 2018 16:51:48 GMT): simonghrt (Mon, 30 Apr 2018 12:07:27 GMT): patelan (Mon, 30 Apr 2018 14:16:28 GMT): kostas (Mon, 30 Apr 2018 14:33:56 GMT): JayPandya (Mon, 30 Apr 2018 14:36:07 GMT): kostas (Mon, 30 Apr 2018 14:36:31 GMT): JayPandya (Mon, 30 Apr 2018 14:36:46 GMT): JayPandya (Mon, 30 Apr 2018 14:37:15 GMT): kostas (Mon, 30 Apr 2018 14:38:26 GMT): JayPandya (Mon, 30 Apr 2018 14:38:49 GMT): JayPandya (Mon, 30 Apr 2018 14:39:39 GMT): kostas (Mon, 30 Apr 2018 14:39:43 GMT): JayPandya (Mon, 30 Apr 2018 14:41:04 GMT): JayPandya (Mon, 30 Apr 2018 14:41:50 GMT): JayPandya (Mon, 30 Apr 2018 14:42:21 GMT): kostas (Mon, 30 Apr 2018 14:42:45 GMT): JayPandya (Mon, 30 Apr 2018 14:47:49 GMT): JayPandya (Mon, 30 Apr 2018 14:48:22 GMT): kostas (Mon, 30 Apr 2018 14:49:02 GMT): kostas (Mon, 30 Apr 2018 14:49:08 GMT): kostas (Mon, 30 Apr 2018 14:49:40 GMT): kostas (Mon, 30 Apr 2018 14:49:53 GMT): JayPandya (Mon, 30 Apr 2018 14:50:50 GMT): kostas (Mon, 30 Apr 2018 14:53:50 GMT): kostas (Mon, 30 Apr 2018 14:54:00 GMT): kostas (Mon, 30 Apr 2018 14:54:00 GMT): JayPandya (Mon, 30 Apr 2018 14:57:34 GMT): patelan (Mon, 30 Apr 2018 14:59:06 GMT): kostas (Mon, 30 Apr 2018 15:00:52 GMT): JayPandya (Mon, 30 Apr 2018 15:02:59 GMT): JayPandya (Mon, 30 Apr 2018 
[Messages from Mon, 30 Apr 2018 through Thu, 28 Jun 2018: only sender names and timestamps survive in this export; the message bodies are missing and not recoverable.]
anjalinaik (Thu, 28 Jun 2018 06:12:08 GMT): anjalinaik (Thu, 28 Jun 2018 06:12:08 GMT): anjalinaik (Thu, 28 Jun 2018 06:12:08 GMT): anjalinaik (Thu, 28 Jun 2018 06:12:08 GMT): anjalinaik (Thu, 28 Jun 2018 06:12:08 GMT): anjalinaik (Thu, 28 Jun 2018 06:12:08 GMT): anjalinaik (Thu, 28 Jun 2018 06:12:08 GMT): anjalinaik (Thu, 28 Jun 2018 06:12:08 GMT): anjalinaik (Thu, 28 Jun 2018 06:12:08 GMT): anjalinaik (Thu, 28 Jun 2018 06:12:08 GMT): anjalinaik (Thu, 28 Jun 2018 06:12:08 GMT): anjalinaik (Thu, 28 Jun 2018 06:12:08 GMT): yacovm (Thu, 28 Jun 2018 06:13:42 GMT): yacovm (Thu, 28 Jun 2018 06:13:46 GMT): yacovm (Thu, 28 Jun 2018 06:13:59 GMT): awjh (Thu, 28 Jun 2018 09:58:03 GMT): moodysalem (Thu, 28 Jun 2018 14:19:50 GMT): kostas (Thu, 28 Jun 2018 15:20:10 GMT): kostas (Thu, 28 Jun 2018 15:21:09 GMT): kostas (Thu, 28 Jun 2018 15:21:09 GMT): moodysalem (Thu, 28 Jun 2018 16:41:43 GMT): moodysalem (Thu, 28 Jun 2018 21:47:22 GMT): kostas (Thu, 28 Jun 2018 21:56:48 GMT): DivyaAgrawal (Thu, 28 Jun 2018 22:06:56 GMT): moodysalem (Thu, 28 Jun 2018 22:13:17 GMT): DivyaAgrawal (Thu, 28 Jun 2018 22:19:30 GMT): DivyaAgrawal (Thu, 28 Jun 2018 22:20:49 GMT): kostas (Fri, 29 Jun 2018 00:47:02 GMT): pankajcheema (Fri, 29 Jun 2018 05:59:05 GMT): pankajcheema (Fri, 29 Jun 2018 05:59:12 GMT): anjalinaik (Fri, 29 Jun 2018 06:16:10 GMT): anjalinaik (Fri, 29 Jun 2018 07:06:37 GMT): SaraEmily (Fri, 29 Jun 2018 09:00:58 GMT): silliman (Fri, 29 Jun 2018 10:52:38 GMT): SaraEmily (Fri, 29 Jun 2018 11:12:30 GMT): SaraEmily (Fri, 29 Jun 2018 11:53:59 GMT): awjh (Fri, 29 Jun 2018 12:09:15 GMT): awjh (Fri, 29 Jun 2018 12:09:17 GMT): kostas (Fri, 29 Jun 2018 13:24:01 GMT): moodysalem (Fri, 29 Jun 2018 15:45:59 GMT): moodysalem (Fri, 29 Jun 2018 16:25:27 GMT): moodysalem (Fri, 29 Jun 2018 16:25:27 GMT): moodysalem (Fri, 29 Jun 2018 16:32:49 GMT): DivyaAgrawal (Fri, 29 Jun 2018 16:34:25 GMT): moodysalem (Fri, 29 Jun 2018 17:25:26 GMT): kostas (Fri, 29 Jun 2018 17:28:50 GMT): kostas (Fri, 29 Jun 2018 
17:29:52 GMT): kostas (Fri, 29 Jun 2018 17:30:11 GMT): kostas (Fri, 29 Jun 2018 17:30:29 GMT): moodysalem (Fri, 29 Jun 2018 17:30:52 GMT): kostas (Fri, 29 Jun 2018 17:31:17 GMT): kostas (Fri, 29 Jun 2018 17:31:21 GMT): oqf (Sun, 01 Jul 2018 11:45:25 GMT): anjalinaik (Mon, 02 Jul 2018 08:02:55 GMT): Taffies (Tue, 03 Jul 2018 07:29:31 GMT): varun-raj (Tue, 03 Jul 2018 11:28:21 GMT): pvrbharg (Tue, 03 Jul 2018 16:03:49 GMT): pvrbharg (Tue, 03 Jul 2018 16:06:00 GMT): jyellick (Tue, 03 Jul 2018 16:27:40 GMT): pvrbharg (Tue, 03 Jul 2018 16:55:09 GMT): suchith.arodi (Tue, 03 Jul 2018 18:25:22 GMT): anjalinaik (Thu, 05 Jul 2018 03:50:05 GMT): jyellick (Thu, 05 Jul 2018 13:56:22 GMT): danny_lee (Thu, 05 Jul 2018 22:24:25 GMT): danny_lee (Thu, 05 Jul 2018 23:34:31 GMT): pvrbharg (Fri, 06 Jul 2018 00:03:05 GMT): pvrbharg (Fri, 06 Jul 2018 00:04:42 GMT): kostas (Fri, 06 Jul 2018 00:53:15 GMT): pankajcheema (Fri, 06 Jul 2018 12:36:49 GMT): jyellick (Fri, 06 Jul 2018 13:44:52 GMT): dharuq (Fri, 06 Jul 2018 15:12:37 GMT): dharuq (Fri, 06 Jul 2018 15:12:37 GMT): jyellick (Fri, 06 Jul 2018 16:28:40 GMT): moodysalem (Fri, 06 Jul 2018 19:21:27 GMT): jyellick (Fri, 06 Jul 2018 19:22:19 GMT): jyellick (Fri, 06 Jul 2018 19:22:41 GMT): pankajcheema (Sat, 07 Jul 2018 13:41:41 GMT): dharuq (Sat, 07 Jul 2018 15:45:29 GMT): dharuq (Sat, 07 Jul 2018 15:45:29 GMT): javrevasandeep (Sat, 07 Jul 2018 20:52:42 GMT): javrevasandeep (Sat, 07 Jul 2018 20:53:11 GMT): javrevasandeep (Sat, 07 Jul 2018 20:53:43 GMT): javrevasandeep (Sat, 07 Jul 2018 20:55:06 GMT): gitesh-tyagi (Mon, 09 Jul 2018 06:34:17 GMT): jyellick (Mon, 09 Jul 2018 13:43:04 GMT): toddinpal (Mon, 09 Jul 2018 15:58:05 GMT): jyellick (Mon, 09 Jul 2018 16:13:42 GMT): toddinpal (Mon, 09 Jul 2018 17:28:19 GMT): jyellick (Mon, 09 Jul 2018 17:30:51 GMT): toddinpal (Mon, 09 Jul 2018 17:32:55 GMT): jyellick (Mon, 09 Jul 2018 17:36:31 GMT): toddinpal (Mon, 09 Jul 2018 17:42:55 GMT): jyellick (Mon, 09 Jul 2018 18:04:34 GMT): toddinpal (Mon, 09 
Jul 2018 18:08:29 GMT): jyellick (Mon, 09 Jul 2018 18:09:17 GMT): toddinpal (Mon, 09 Jul 2018 18:12:36 GMT): toddinpal (Mon, 09 Jul 2018 18:14:31 GMT): toddinpal (Mon, 09 Jul 2018 18:16:30 GMT): jyellick (Mon, 09 Jul 2018 18:26:04 GMT): toddinpal (Tue, 10 Jul 2018 00:26:06 GMT): jimthematrix (Tue, 10 Jul 2018 01:02:55 GMT): jimthematrix (Tue, 10 Jul 2018 01:02:55 GMT): jimthematrix (Tue, 10 Jul 2018 01:02:55 GMT): kostas (Tue, 10 Jul 2018 09:56:15 GMT): jimthematrix (Tue, 10 Jul 2018 23:31:02 GMT): NoLimitHoldem (Wed, 11 Jul 2018 06:15:28 GMT): jayeshjawale95 (Wed, 11 Jul 2018 07:32:30 GMT): Sreesha (Wed, 11 Jul 2018 11:52:21 GMT): Sreesha (Wed, 11 Jul 2018 11:53:58 GMT): Sreesha (Wed, 11 Jul 2018 11:54:23 GMT): Sreesha (Wed, 11 Jul 2018 11:55:19 GMT): jyellick (Wed, 11 Jul 2018 16:08:09 GMT): moodysalem (Wed, 11 Jul 2018 16:40:39 GMT): moodysalem (Wed, 11 Jul 2018 16:40:39 GMT): moodysalem (Wed, 11 Jul 2018 16:40:39 GMT): jyellick (Wed, 11 Jul 2018 18:43:04 GMT): JayPandya (Wed, 11 Jul 2018 18:45:27 GMT): jyellick (Wed, 11 Jul 2018 18:45:46 GMT): JayPandya (Wed, 11 Jul 2018 18:47:00 GMT): moodysalem (Wed, 11 Jul 2018 18:53:40 GMT): moodysalem (Wed, 11 Jul 2018 18:53:40 GMT): moodysalem (Wed, 11 Jul 2018 18:59:37 GMT): moodysalem (Wed, 11 Jul 2018 19:06:12 GMT): jyellick (Wed, 11 Jul 2018 19:56:08 GMT): jyellick (Wed, 11 Jul 2018 19:56:16 GMT): jyellick (Wed, 11 Jul 2018 19:57:02 GMT): jyellick (Wed, 11 Jul 2018 19:57:37 GMT): rahulhegde (Wed, 11 Jul 2018 21:42:54 GMT): rahulhegde (Wed, 11 Jul 2018 21:42:54 GMT): huikang (Thu, 12 Jul 2018 02:24:16 GMT): jyellick (Thu, 12 Jul 2018 02:40:18 GMT): jyellick (Thu, 12 Jul 2018 02:40:55 GMT): Sreesha (Thu, 12 Jul 2018 07:05:47 GMT): Sreesha (Thu, 12 Jul 2018 07:07:47 GMT): Sreesha (Thu, 12 Jul 2018 07:08:39 GMT): Sreesha (Thu, 12 Jul 2018 07:09:40 GMT): Sreesha (Thu, 12 Jul 2018 07:10:45 GMT): Sreesha (Thu, 12 Jul 2018 07:11:59 GMT): WadeLu (Thu, 12 Jul 2018 07:33:05 GMT): thakurnikk (Thu, 12 Jul 2018 10:34:50 GMT): 
thakurnikk (Thu, 12 Jul 2018 10:34:50 GMT): thakurnikk (Thu, 12 Jul 2018 10:34:50 GMT): jyellick (Thu, 12 Jul 2018 14:21:21 GMT): jyellick (Thu, 12 Jul 2018 14:22:35 GMT): jyellick (Thu, 12 Jul 2018 14:23:31 GMT): pvrbharg (Thu, 12 Jul 2018 17:31:45 GMT): jyellick (Thu, 12 Jul 2018 17:33:06 GMT): thakurnikk (Fri, 13 Jul 2018 04:37:56 GMT): jyellick (Fri, 13 Jul 2018 04:38:45 GMT): thakurnikk (Fri, 13 Jul 2018 04:39:14 GMT): Sreesha (Fri, 13 Jul 2018 04:44:34 GMT): thakurnikk (Fri, 13 Jul 2018 04:57:23 GMT): jyellick (Fri, 13 Jul 2018 05:06:57 GMT): sergefdrv (Fri, 13 Jul 2018 12:41:44 GMT): jyellick (Fri, 13 Jul 2018 13:33:32 GMT): tkuhrt (Fri, 13 Jul 2018 22:23:59 GMT): tkuhrt (Fri, 13 Jul 2018 22:24:55 GMT): tkuhrt (Fri, 13 Jul 2018 22:24:55 GMT): SaraEmily (Mon, 16 Jul 2018 11:41:51 GMT): yacovm (Mon, 16 Jul 2018 12:52:07 GMT): yacovm (Mon, 16 Jul 2018 12:52:57 GMT): SaraEmily (Mon, 16 Jul 2018 12:54:40 GMT): yacovm (Mon, 16 Jul 2018 12:55:13 GMT): SaraEmily (Mon, 16 Jul 2018 12:55:39 GMT): yacovm (Mon, 16 Jul 2018 12:56:00 GMT): SaraEmily (Mon, 16 Jul 2018 12:56:29 GMT): yacovm (Mon, 16 Jul 2018 12:57:40 GMT): SaraEmily (Mon, 16 Jul 2018 12:57:54 GMT): adarshsaraf123 (Mon, 16 Jul 2018 13:30:04 GMT): yacovm (Mon, 16 Jul 2018 13:51:40 GMT): ddurnev (Tue, 17 Jul 2018 07:02:22 GMT): ddurnev (Tue, 17 Jul 2018 07:06:14 GMT): ddurnev (Tue, 17 Jul 2018 07:07:48 GMT): ddurnev (Tue, 17 Jul 2018 07:07:48 GMT): qsmen (Tue, 17 Jul 2018 09:12:21 GMT): qsmen (Tue, 17 Jul 2018 09:13:45 GMT): qsmen (Tue, 17 Jul 2018 09:58:55 GMT): qsmen (Tue, 17 Jul 2018 09:58:55 GMT): rsherwood (Tue, 17 Jul 2018 12:53:01 GMT): IgorSim (Tue, 17 Jul 2018 13:05:19 GMT): titog (Tue, 17 Jul 2018 18:43:43 GMT): kristycarp (Tue, 17 Jul 2018 18:51:42 GMT): kristycarp (Tue, 17 Jul 2018 18:53:06 GMT): mastersingh24 (Tue, 17 Jul 2018 19:20:46 GMT): yacovm (Tue, 17 Jul 2018 19:22:14 GMT): louisliu2048 (Wed, 18 Jul 2018 02:20:43 GMT): Sreesha (Wed, 18 Jul 2018 09:48:02 GMT): FlavioSS (Wed, 18 Jul 2018 
10:54:21 GMT): jyellick (Wed, 18 Jul 2018 14:36:09 GMT): ddurnev (Wed, 18 Jul 2018 15:45:08 GMT): jyellick (Wed, 18 Jul 2018 17:38:32 GMT): kostas (Wed, 18 Jul 2018 20:40:13 GMT): kostas (Wed, 18 Jul 2018 20:40:31 GMT): qsmen (Thu, 19 Jul 2018 01:10:54 GMT): rushiraj111 (Thu, 19 Jul 2018 07:13:04 GMT): Unni_1994 (Thu, 19 Jul 2018 09:17:24 GMT): Unni_1994 (Thu, 19 Jul 2018 09:19:04 GMT): Unni_1994 (Thu, 19 Jul 2018 09:20:25 GMT): ddurnev (Thu, 19 Jul 2018 09:59:44 GMT): Unni_1994 (Thu, 19 Jul 2018 13:19:57 GMT): Unni_1994 (Thu, 19 Jul 2018 13:20:19 GMT): Unni_1994 (Thu, 19 Jul 2018 13:21:06 GMT): Unni_1994 (Thu, 19 Jul 2018 13:21:50 GMT): Unni_1994 (Thu, 19 Jul 2018 13:22:26 GMT): Unni_1994 (Thu, 19 Jul 2018 13:22:48 GMT): jyellick (Thu, 19 Jul 2018 14:38:15 GMT): jyellick (Thu, 19 Jul 2018 14:38:44 GMT): jyellick (Thu, 19 Jul 2018 14:40:08 GMT): jyellick (Thu, 19 Jul 2018 14:41:00 GMT): jyellick (Thu, 19 Jul 2018 14:44:57 GMT): sanchezl (Thu, 19 Jul 2018 14:51:46 GMT): ddurnev (Thu, 19 Jul 2018 15:08:44 GMT): jyellick (Thu, 19 Jul 2018 15:11:53 GMT): jyellick (Thu, 19 Jul 2018 15:11:53 GMT): jyellick (Thu, 19 Jul 2018 15:11:53 GMT): sanchezl (Thu, 19 Jul 2018 15:19:18 GMT): Kyroy (Thu, 19 Jul 2018 15:52:46 GMT): mauriff (Thu, 19 Jul 2018 17:44:15 GMT): pankajcheema (Thu, 19 Jul 2018 17:44:35 GMT): jyellick (Thu, 19 Jul 2018 17:47:27 GMT): qsmen (Fri, 20 Jul 2018 02:14:23 GMT): qsmen (Fri, 20 Jul 2018 02:15:47 GMT): qsmen (Fri, 20 Jul 2018 02:15:47 GMT): pankajcheema (Fri, 20 Jul 2018 03:26:21 GMT): pankajcheema (Fri, 20 Jul 2018 03:27:55 GMT): pankajcheema (Fri, 20 Jul 2018 03:28:13 GMT): pandagopal (Fri, 20 Jul 2018 05:05:53 GMT): Kyroy (Fri, 20 Jul 2018 06:58:21 GMT): Kyroy (Fri, 20 Jul 2018 08:50:16 GMT): Unni_1994 (Fri, 20 Jul 2018 09:15:41 GMT): Unni_1994 (Fri, 20 Jul 2018 09:28:00 GMT): ddurnev (Fri, 20 Jul 2018 09:28:14 GMT): Unni_1994 (Fri, 20 Jul 2018 09:28:24 GMT): vladyslavmunin (Fri, 20 Jul 2018 11:53:56 GMT): sanchezl (Fri, 20 Jul 2018 13:14:43 GMT): 
Kyroy (Fri, 20 Jul 2018 13:29:56 GMT): sanchezl (Fri, 20 Jul 2018 13:38:51 GMT): jyellick (Fri, 20 Jul 2018 15:00:03 GMT): mrlee23 (Fri, 20 Jul 2018 15:00:10 GMT): jyellick (Fri, 20 Jul 2018 15:03:45 GMT): jyellick (Fri, 20 Jul 2018 15:04:14 GMT): pankajcheema (Sat, 21 Jul 2018 06:22:38 GMT): pankajcheema (Sat, 21 Jul 2018 06:23:57 GMT): pankajcheema (Sat, 21 Jul 2018 06:25:29 GMT): nukulsharma (Sun, 22 Jul 2018 03:57:36 GMT): nukulsharma (Sun, 22 Jul 2018 03:58:21 GMT): choco_coder (Sun, 22 Jul 2018 04:57:51 GMT): kevinmcmahon (Sun, 22 Jul 2018 14:08:17 GMT): rsherwood (Mon, 23 Jul 2018 08:57:13 GMT): jyellick (Mon, 23 Jul 2018 14:17:32 GMT): pankajcheema (Mon, 23 Jul 2018 15:09:49 GMT): pankajcheema (Mon, 23 Jul 2018 15:09:49 GMT): pankajcheema (Mon, 23 Jul 2018 15:10:08 GMT): pankajcheema (Mon, 23 Jul 2018 15:11:33 GMT): jyellick (Mon, 23 Jul 2018 16:09:28 GMT): jyellick (Mon, 23 Jul 2018 16:09:28 GMT): jyellick (Mon, 23 Jul 2018 16:15:48 GMT): jyellick (Mon, 23 Jul 2018 16:15:50 GMT): jrosmith (Mon, 23 Jul 2018 17:22:47 GMT): pankajcheema (Tue, 24 Jul 2018 03:52:42 GMT): pankajcheema (Tue, 24 Jul 2018 03:59:30 GMT): pankajcheema (Tue, 24 Jul 2018 03:59:40 GMT): mahesh.bandkar (Tue, 24 Jul 2018 08:26:28 GMT): username343 (Tue, 24 Jul 2018 14:38:04 GMT): username343 (Tue, 24 Jul 2018 14:38:21 GMT): username343 (Tue, 24 Jul 2018 14:40:16 GMT): jyellick (Tue, 24 Jul 2018 14:40:28 GMT): jyellick (Tue, 24 Jul 2018 14:40:46 GMT): jyellick (Tue, 24 Jul 2018 14:41:12 GMT): username343 (Tue, 24 Jul 2018 14:41:17 GMT): jyellick (Tue, 24 Jul 2018 14:41:24 GMT): username343 (Tue, 24 Jul 2018 14:41:52 GMT): jyellick (Tue, 24 Jul 2018 14:43:23 GMT): username343 (Tue, 24 Jul 2018 14:44:47 GMT): username343 (Tue, 24 Jul 2018 14:45:08 GMT): username343 (Tue, 24 Jul 2018 14:45:16 GMT): jyellick (Tue, 24 Jul 2018 14:46:28 GMT): jyellick (Tue, 24 Jul 2018 14:46:28 GMT): username343 (Tue, 24 Jul 2018 14:47:11 GMT): username343 (Tue, 24 Jul 2018 14:47:20 GMT): fabiomolinar (Tue, 24 
Jul 2018 15:03:28 GMT): nukulsharma (Tue, 24 Jul 2018 15:25:56 GMT): VipinB (Tue, 24 Jul 2018 15:53:47 GMT): rchaturv (Tue, 24 Jul 2018 16:32:13 GMT): anarodrigues (Tue, 24 Jul 2018 18:53:30 GMT): alokmatta (Tue, 24 Jul 2018 22:47:49 GMT): nvlasov (Wed, 25 Jul 2018 05:05:15 GMT): nvlasov (Wed, 25 Jul 2018 05:06:03 GMT): adarshsaraf123 (Wed, 25 Jul 2018 06:22:25 GMT): nvlasov (Wed, 25 Jul 2018 06:51:47 GMT): nvlasov (Wed, 25 Jul 2018 06:53:21 GMT): kostas (Wed, 25 Jul 2018 15:43:18 GMT): kostas (Wed, 25 Jul 2018 16:27:31 GMT): vagnerasilva (Thu, 26 Jul 2018 21:54:35 GMT): Unni_1994 (Fri, 27 Jul 2018 11:21:06 GMT): Unni_1994 (Fri, 27 Jul 2018 11:22:52 GMT): kostas (Fri, 27 Jul 2018 13:30:52 GMT): kostas (Fri, 27 Jul 2018 13:31:08 GMT): kostas (Fri, 27 Jul 2018 14:10:35 GMT): pankajcheema (Sun, 29 Jul 2018 14:06:06 GMT): javrevasandeep (Mon, 30 Jul 2018 05:11:55 GMT): Unni_1994 (Mon, 30 Jul 2018 06:00:07 GMT): Unni_1994 (Mon, 30 Jul 2018 06:01:44 GMT): Unni_1994 (Mon, 30 Jul 2018 06:02:10 GMT): Unni_1994 (Mon, 30 Jul 2018 06:04:12 GMT): qsmen (Mon, 30 Jul 2018 09:11:20 GMT): qsmen (Mon, 30 Jul 2018 09:11:32 GMT): javrevasandeep (Mon, 30 Jul 2018 09:25:00 GMT): anzalbeg (Mon, 30 Jul 2018 09:33:39 GMT): knagware9 (Mon, 30 Jul 2018 11:56:53 GMT): bestbeforetoday (Mon, 30 Jul 2018 13:01:32 GMT): pankajcheema (Mon, 30 Jul 2018 13:37:45 GMT): pankajcheema (Mon, 30 Jul 2018 13:37:50 GMT): pankajcheema (Mon, 30 Jul 2018 13:39:10 GMT): pankajcheema (Mon, 30 Jul 2018 13:39:10 GMT): pankajcheema (Mon, 30 Jul 2018 13:40:29 GMT): kostas (Mon, 30 Jul 2018 13:53:23 GMT): kostas (Mon, 30 Jul 2018 13:56:26 GMT): kostas (Mon, 30 Jul 2018 13:56:26 GMT): kostas (Mon, 30 Jul 2018 13:57:01 GMT): kostas (Mon, 30 Jul 2018 13:58:02 GMT): kostas (Mon, 30 Jul 2018 14:01:16 GMT): kostas (Mon, 30 Jul 2018 14:01:31 GMT): kostas (Mon, 30 Jul 2018 14:01:40 GMT): kostas (Mon, 30 Jul 2018 14:02:02 GMT): kostas (Mon, 30 Jul 2018 14:02:12 GMT): kostas (Mon, 30 Jul 2018 14:05:41 GMT): kostas (Mon, 30 Jul 
2018 14:05:52 GMT): kostas (Mon, 30 Jul 2018 14:06:10 GMT): pankajcheema (Mon, 30 Jul 2018 14:08:36 GMT): kostas (Mon, 30 Jul 2018 14:48:47 GMT): kostas (Mon, 30 Jul 2018 14:49:24 GMT): kostas (Mon, 30 Jul 2018 14:49:45 GMT): kostas (Mon, 30 Jul 2018 14:50:17 GMT): kostas (Mon, 30 Jul 2018 14:52:33 GMT): kostas (Mon, 30 Jul 2018 14:53:07 GMT): kostas (Mon, 30 Jul 2018 14:53:07 GMT): kostas (Mon, 30 Jul 2018 14:53:07 GMT): kostas (Mon, 30 Jul 2018 14:54:27 GMT): kostas (Mon, 30 Jul 2018 14:54:43 GMT): kostas (Mon, 30 Jul 2018 14:55:07 GMT): kostas (Mon, 30 Jul 2018 14:55:34 GMT): kostas (Mon, 30 Jul 2018 14:56:36 GMT): kostas (Mon, 30 Jul 2018 14:56:51 GMT): kostas (Mon, 30 Jul 2018 14:58:15 GMT): pankajcheema (Mon, 30 Jul 2018 16:21:15 GMT): pankajcheema (Mon, 30 Jul 2018 16:21:40 GMT): pankajcheema (Mon, 30 Jul 2018 16:29:37 GMT): pankajcheema (Mon, 30 Jul 2018 16:31:11 GMT): kostas (Mon, 30 Jul 2018 19:59:06 GMT): kostas (Mon, 30 Jul 2018 19:59:06 GMT): pankajcheema (Tue, 31 Jul 2018 04:57:42 GMT): pankajcheema (Tue, 31 Jul 2018 04:57:42 GMT): pankajcheema (Tue, 31 Jul 2018 04:57:42 GMT): pankajcheema (Tue, 31 Jul 2018 04:58:36 GMT): knagware9 (Tue, 31 Jul 2018 05:31:56 GMT): Aejnor (Tue, 31 Jul 2018 06:45:41 GMT): VinayChaudharyOfficial (Tue, 31 Jul 2018 07:01:51 GMT): VinayChaudharyOfficial (Tue, 31 Jul 2018 07:03:13 GMT): Kaschy (Tue, 31 Jul 2018 07:39:47 GMT): kostas (Tue, 31 Jul 2018 11:19:37 GMT): jyellick (Tue, 31 Jul 2018 13:28:29 GMT): jyellick (Tue, 31 Jul 2018 13:28:58 GMT): pankajcheema (Wed, 01 Aug 2018 04:46:38 GMT): pankajcheema (Wed, 01 Aug 2018 05:45:15 GMT): pankajcheema (Wed, 01 Aug 2018 05:47:07 GMT): pankajcheema (Wed, 01 Aug 2018 05:48:18 GMT): pankajcheema (Wed, 01 Aug 2018 05:48:32 GMT): VinayChaudharyOfficial (Wed, 01 Aug 2018 09:04:04 GMT): VinayChaudharyOfficial (Wed, 01 Aug 2018 09:05:24 GMT): amolpednekar (Wed, 01 Aug 2018 10:53:27 GMT): sheetal-hlf (Wed, 01 Aug 2018 11:08:29 GMT): seokju.hong (Wed, 01 Aug 2018 12:52:31 GMT): 
SaraEmily (Wed, 01 Aug 2018 13:49:24 GMT): jyellick (Wed, 01 Aug 2018 13:51:07 GMT): jyellick (Wed, 01 Aug 2018 13:51:35 GMT): jyellick (Wed, 01 Aug 2018 13:51:35 GMT): jyellick (Wed, 01 Aug 2018 13:53:08 GMT): kostas (Wed, 01 Aug 2018 13:54:06 GMT): kostas (Wed, 01 Aug 2018 13:54:10 GMT): kostas (Wed, 01 Aug 2018 13:54:25 GMT): kostas (Wed, 01 Aug 2018 13:54:43 GMT): SaraEmily (Wed, 01 Aug 2018 14:04:06 GMT): yacovm (Wed, 01 Aug 2018 14:08:20 GMT): SaraEmily (Wed, 01 Aug 2018 14:08:52 GMT): kostas (Wed, 01 Aug 2018 14:09:07 GMT): kostas (Wed, 01 Aug 2018 14:09:13 GMT): yacovm (Wed, 01 Aug 2018 14:09:17 GMT): yacovm (Wed, 01 Aug 2018 14:09:20 GMT): SaraEmily (Wed, 01 Aug 2018 14:09:30 GMT): hamptonsmith (Wed, 01 Aug 2018 15:45:35 GMT): kostas (Wed, 01 Aug 2018 15:46:36 GMT): ddurnev (Wed, 01 Aug 2018 16:21:11 GMT): pankajcheema (Thu, 02 Aug 2018 06:12:37 GMT): pankajcheema (Thu, 02 Aug 2018 07:41:08 GMT): dsanchezseco (Thu, 02 Aug 2018 07:51:50 GMT): dsanchezseco (Thu, 02 Aug 2018 07:51:50 GMT): dsanchezseco (Thu, 02 Aug 2018 07:51:50 GMT): qsmen (Thu, 02 Aug 2018 08:04:17 GMT): dsanchezseco (Thu, 02 Aug 2018 08:07:32 GMT): qsmen (Thu, 02 Aug 2018 08:22:34 GMT): dsanchezseco (Thu, 02 Aug 2018 08:26:48 GMT): dsanchezseco (Thu, 02 Aug 2018 08:32:41 GMT): qsmen (Thu, 02 Aug 2018 08:42:03 GMT): qsmen (Thu, 02 Aug 2018 09:46:27 GMT): Gaurav6794 (Thu, 02 Aug 2018 13:39:27 GMT): mhomaid (Thu, 02 Aug 2018 14:24:37 GMT): kostas (Thu, 02 Aug 2018 15:25:24 GMT): kostas (Thu, 02 Aug 2018 15:25:50 GMT): kostas (Thu, 02 Aug 2018 15:26:39 GMT): kostas (Thu, 02 Aug 2018 15:27:15 GMT): kostas (Thu, 02 Aug 2018 15:27:15 GMT): kostas (Thu, 02 Aug 2018 15:30:37 GMT): kostas (Thu, 02 Aug 2018 15:30:37 GMT): kostas (Thu, 02 Aug 2018 15:30:37 GMT): bh4rtp (Fri, 03 Aug 2018 01:05:27 GMT): bh4rtp (Fri, 03 Aug 2018 01:05:27 GMT): bh4rtp (Fri, 03 Aug 2018 01:05:27 GMT): bh4rtp (Fri, 03 Aug 2018 01:05:27 GMT): qsmen (Fri, 03 Aug 2018 01:16:50 GMT): bh4rtp (Fri, 03 Aug 2018 04:04:30 GMT): 
bh4rtp (Fri, 03 Aug 2018 04:04:46 GMT): Gaurav6794 (Fri, 03 Aug 2018 11:37:30 GMT): Gaurav6794 (Fri, 03 Aug 2018 11:37:30 GMT): kostas (Fri, 03 Aug 2018 12:20:37 GMT): kostas (Fri, 03 Aug 2018 12:21:00 GMT): kostas (Fri, 03 Aug 2018 12:21:44 GMT): kostas (Fri, 03 Aug 2018 12:22:08 GMT): zmaro (Fri, 03 Aug 2018 15:22:03 GMT): bh4rtp (Sat, 04 Aug 2018 14:29:22 GMT): bh4rtp (Sat, 04 Aug 2018 14:29:22 GMT): bh4rtp (Sat, 04 Aug 2018 15:06:43 GMT): qsmen (Mon, 06 Aug 2018 01:08:46 GMT): rahulhegde (Mon, 06 Aug 2018 02:50:54 GMT): rahulhegde (Mon, 06 Aug 2018 02:50:54 GMT): javrevasandeep (Mon, 06 Aug 2018 04:38:58 GMT): dsanchezseco (Mon, 06 Aug 2018 06:28:18 GMT): dsanchezseco (Mon, 06 Aug 2018 06:28:18 GMT): yacovm (Mon, 06 Aug 2018 07:12:56 GMT): yacovm (Mon, 06 Aug 2018 07:13:03 GMT): yacovm (Mon, 06 Aug 2018 07:13:20 GMT): gravity (Mon, 06 Aug 2018 08:42:17 GMT): bh4rtp (Mon, 06 Aug 2018 08:47:39 GMT): bh4rtp (Mon, 06 Aug 2018 08:47:39 GMT): ChunTung (Tue, 07 Aug 2018 04:40:49 GMT): ChunTung (Tue, 07 Aug 2018 04:47:55 GMT): ChunTung (Tue, 07 Aug 2018 04:47:55 GMT): ddurnev (Tue, 07 Aug 2018 09:12:36 GMT): ChunTung (Tue, 07 Aug 2018 10:04:49 GMT): ChunTung (Tue, 07 Aug 2018 10:04:49 GMT): Hz (Tue, 07 Aug 2018 11:35:17 GMT): rahulhegde (Tue, 07 Aug 2018 12:46:26 GMT): rahulhegde (Tue, 07 Aug 2018 12:46:26 GMT): kostas (Tue, 07 Aug 2018 13:27:54 GMT): asaningmaxchain123 (Tue, 07 Aug 2018 15:26:22 GMT): kostas (Tue, 07 Aug 2018 15:26:37 GMT): asaningmaxchain123 (Tue, 07 Aug 2018 15:27:35 GMT): rahulhegde (Tue, 07 Aug 2018 16:41:29 GMT): kostas (Wed, 08 Aug 2018 00:22:05 GMT): kostas (Wed, 08 Aug 2018 00:22:05 GMT): kostas (Wed, 08 Aug 2018 00:22:05 GMT): chandrika (Wed, 08 Aug 2018 07:59:47 GMT): chandrika (Wed, 08 Aug 2018 09:34:37 GMT): chandrika (Wed, 08 Aug 2018 13:43:02 GMT): jyellick (Wed, 08 Aug 2018 18:47:10 GMT): chandrika (Thu, 09 Aug 2018 04:58:12 GMT): chandrika (Thu, 09 Aug 2018 04:58:19 GMT): chandrika (Thu, 09 Aug 2018 04:58:43 GMT): chandrika (Thu, 09 
Aug 2018 05:01:11 GMT): chandrika (Thu, 09 Aug 2018 05:01:13 GMT): chandrika (Thu, 09 Aug 2018 05:01:47 GMT): chandrika (Thu, 09 Aug 2018 05:02:32 GMT): kostas (Thu, 09 Aug 2018 11:04:44 GMT): kostas (Thu, 09 Aug 2018 11:04:44 GMT): Gaurav6794 (Thu, 09 Aug 2018 13:37:49 GMT): asaningmaxchain123 (Thu, 09 Aug 2018 15:25:17 GMT): nukulsharma (Sun, 12 Aug 2018 09:42:45 GMT): nukulsharma (Sun, 12 Aug 2018 09:42:45 GMT): nukulsharma (Sun, 12 Aug 2018 09:42:45 GMT): nukulsharma (Sun, 12 Aug 2018 09:42:45 GMT): nukulsharma (Sun, 12 Aug 2018 09:42:45 GMT): nukulsharma (Sun, 12 Aug 2018 09:42:45 GMT): nukulsharma (Sun, 12 Aug 2018 09:42:45 GMT): nukulsharma (Sun, 12 Aug 2018 18:03:52 GMT): chandrika (Mon, 13 Aug 2018 09:31:37 GMT): chandrika (Mon, 13 Aug 2018 09:32:06 GMT): chandrika (Mon, 13 Aug 2018 09:32:32 GMT): chandrika (Mon, 13 Aug 2018 09:33:04 GMT): jyellick (Mon, 13 Aug 2018 13:06:47 GMT): pankaj9310 (Mon, 13 Aug 2018 14:42:45 GMT): patent_person (Mon, 13 Aug 2018 20:34:22 GMT): SaraEmily (Wed, 15 Aug 2018 12:39:10 GMT): SaraEmily (Wed, 15 Aug 2018 12:41:27 GMT): kostas (Wed, 15 Aug 2018 12:51:43 GMT): kostas (Wed, 15 Aug 2018 12:51:55 GMT): kostas (Wed, 15 Aug 2018 12:52:03 GMT): SaraEmily (Wed, 15 Aug 2018 12:52:17 GMT): albert.lacambra (Wed, 15 Aug 2018 12:57:33 GMT): albert.lacambra (Wed, 15 Aug 2018 12:57:40 GMT): albert.lacambra (Wed, 15 Aug 2018 12:57:41 GMT): sanchezl (Wed, 15 Aug 2018 14:33:16 GMT): sandman (Fri, 17 Aug 2018 07:07:59 GMT): sandman (Fri, 17 Aug 2018 07:16:09 GMT): nukulsharma (Sun, 19 Aug 2018 07:26:10 GMT): vdods (Sun, 19 Aug 2018 21:02:29 GMT): jyellick (Mon, 20 Aug 2018 12:32:05 GMT): jyellick (Mon, 20 Aug 2018 12:37:25 GMT): jyellick (Mon, 20 Aug 2018 12:37:25 GMT): AnthonyRoux (Tue, 21 Aug 2018 09:18:28 GMT): OviiyaDominic (Tue, 21 Aug 2018 09:18:44 GMT): Kyroy (Tue, 21 Aug 2018 11:05:34 GMT): jyellick (Tue, 21 Aug 2018 14:32:51 GMT): AnthonyRoux (Tue, 21 Aug 2018 14:40:29 GMT): jyellick (Tue, 21 Aug 2018 14:40:53 GMT): jyellick (Tue, 
21 Aug 2018 14:41:08 GMT): jyellick (Tue, 21 Aug 2018 14:41:22 GMT): jyellick (Tue, 21 Aug 2018 14:41:41 GMT): jyellick (Tue, 21 Aug 2018 14:41:56 GMT): AnthonyRoux (Tue, 21 Aug 2018 14:50:14 GMT): MikeEmery (Tue, 21 Aug 2018 18:12:24 GMT): MikeEmery (Tue, 21 Aug 2018 20:00:11 GMT): MikeEmery (Tue, 21 Aug 2018 20:03:08 GMT): MikeEmery (Tue, 21 Aug 2018 20:19:49 GMT): MikeEmery (Tue, 21 Aug 2018 21:02:33 GMT): am (Tue, 21 Aug 2018 21:15:58 GMT): kostas (Tue, 21 Aug 2018 21:20:53 GMT): kostas (Tue, 21 Aug 2018 21:21:15 GMT): MikeEmery (Tue, 21 Aug 2018 21:22:47 GMT): kostas (Tue, 21 Aug 2018 21:24:46 GMT): kostas (Tue, 21 Aug 2018 21:24:54 GMT): kostas (Tue, 21 Aug 2018 21:24:54 GMT): kostas (Tue, 21 Aug 2018 21:25:09 GMT): kostas (Tue, 21 Aug 2018 21:25:44 GMT): kostas (Tue, 21 Aug 2018 21:26:26 GMT): MikeEmery (Tue, 21 Aug 2018 21:29:03 GMT): MikeEmery (Tue, 21 Aug 2018 21:29:14 GMT): MikeEmery (Tue, 21 Aug 2018 21:30:36 GMT): kostas (Tue, 21 Aug 2018 21:34:24 GMT): kostas (Tue, 21 Aug 2018 21:37:08 GMT): kostas (Tue, 21 Aug 2018 21:37:33 GMT): kostas (Tue, 21 Aug 2018 21:38:58 GMT): kostas (Tue, 21 Aug 2018 21:40:13 GMT): kostas (Tue, 21 Aug 2018 21:40:18 GMT): kostas (Tue, 21 Aug 2018 21:40:25 GMT): kostas (Tue, 21 Aug 2018 21:41:34 GMT): kostas (Tue, 21 Aug 2018 21:42:21 GMT): kostas (Tue, 21 Aug 2018 21:42:53 GMT): MikeEmery (Tue, 21 Aug 2018 21:43:26 GMT): MikeEmery (Tue, 21 Aug 2018 21:43:46 GMT): kostas (Tue, 21 Aug 2018 21:44:44 GMT): kostas (Tue, 21 Aug 2018 21:53:54 GMT): kostas (Tue, 21 Aug 2018 21:54:20 GMT): MikeEmery (Tue, 21 Aug 2018 21:55:14 GMT): kostas (Tue, 21 Aug 2018 21:56:13 GMT): kostas (Tue, 21 Aug 2018 21:56:18 GMT): kostas (Tue, 21 Aug 2018 21:56:30 GMT): kostas (Tue, 21 Aug 2018 21:56:48 GMT): MikeEmery (Tue, 21 Aug 2018 21:57:49 GMT): MikeEmery (Tue, 21 Aug 2018 21:58:04 GMT): kostas (Tue, 21 Aug 2018 21:58:12 GMT): kostas (Tue, 21 Aug 2018 21:58:12 GMT): kostas (Tue, 21 Aug 2018 21:58:32 GMT): kostas (Tue, 21 Aug 2018 21:58:32 GMT): 
kostas (Tue, 21 Aug 2018 21:58:36 GMT): kostas (Tue, 21 Aug 2018 21:58:36 GMT): kostas (Tue, 21 Aug 2018 21:58:47 GMT): kostas (Tue, 21 Aug 2018 21:58:47 GMT): kostas (Tue, 21 Aug 2018 21:59:06 GMT): kostas (Tue, 21 Aug 2018 21:59:06 GMT): MikeEmery (Tue, 21 Aug 2018 22:07:08 GMT): jdfigure (Tue, 21 Aug 2018 22:09:18 GMT): vwagner (Tue, 21 Aug 2018 22:16:29 GMT): MikeEmery (Tue, 21 Aug 2018 22:20:07 GMT): bdjidi (Tue, 21 Aug 2018 22:48:32 GMT): xiven (Wed, 22 Aug 2018 12:42:27 GMT): xiven (Wed, 22 Aug 2018 12:42:27 GMT): xiven (Wed, 22 Aug 2018 12:42:27 GMT): xiven (Wed, 22 Aug 2018 12:42:27 GMT): xiven (Wed, 22 Aug 2018 12:42:27 GMT): xiven (Wed, 22 Aug 2018 12:42:27 GMT): xiven (Wed, 22 Aug 2018 12:51:14 GMT): xiven (Wed, 22 Aug 2018 12:51:14 GMT): jyellick (Wed, 22 Aug 2018 13:35:42 GMT): xiven (Wed, 22 Aug 2018 14:16:52 GMT): xiven (Wed, 22 Aug 2018 14:16:52 GMT): jyellick (Wed, 22 Aug 2018 14:21:47 GMT): jyellick (Wed, 22 Aug 2018 14:22:20 GMT): xiven (Wed, 22 Aug 2018 14:28:17 GMT): jyellick (Wed, 22 Aug 2018 14:30:59 GMT): jyellick (Wed, 22 Aug 2018 14:30:59 GMT): xiven (Wed, 22 Aug 2018 14:37:23 GMT): xiven (Wed, 22 Aug 2018 14:38:18 GMT): xiven (Wed, 22 Aug 2018 14:38:18 GMT): xiven (Wed, 22 Aug 2018 14:38:18 GMT): xiven (Wed, 22 Aug 2018 14:38:18 GMT): xiven (Wed, 22 Aug 2018 15:34:00 GMT): xiven (Wed, 22 Aug 2018 15:34:00 GMT): jyellick (Wed, 22 Aug 2018 15:35:49 GMT): xiven (Wed, 22 Aug 2018 15:39:09 GMT): jyellick (Wed, 22 Aug 2018 15:39:57 GMT): xiven (Wed, 22 Aug 2018 15:42:27 GMT): xiven (Wed, 22 Aug 2018 15:42:35 GMT): jyellick (Wed, 22 Aug 2018 15:43:13 GMT): jyellick (Wed, 22 Aug 2018 15:43:36 GMT): jyellick (Wed, 22 Aug 2018 15:43:36 GMT): jyellick (Wed, 22 Aug 2018 15:44:06 GMT): jyellick (Wed, 22 Aug 2018 15:44:43 GMT): xiven (Wed, 22 Aug 2018 15:44:45 GMT): jyellick (Wed, 22 Aug 2018 15:46:34 GMT): xiven (Wed, 22 Aug 2018 15:46:36 GMT): jyellick (Wed, 22 Aug 2018 15:47:49 GMT): jyellick (Wed, 22 Aug 2018 15:48:03 GMT): jyellick (Wed, 22 Aug 
[Message bodies missing from this export: only sender names and timestamps survive for messages posted between Wed, 22 Aug 2018 and Wed, 31 Oct 2018, from participants including jyellick, kostas, yacovm, guoger, moodysalem, xiven, yousaf, jvsclp, waxer, bh4rtp, adarshsaraf123, aatkddny, akshay.sood, albert.lacambra, and others.]
15:42:40 GMT): jyellick (Wed, 31 Oct 2018 15:42:40 GMT): albert.lacambra (Wed, 31 Oct 2018 15:47:09 GMT): jyellick (Wed, 31 Oct 2018 15:49:33 GMT): jyellick (Wed, 31 Oct 2018 15:49:33 GMT): albert.lacambra (Wed, 31 Oct 2018 15:52:03 GMT): albert.lacambra (Wed, 31 Oct 2018 15:52:35 GMT): jyellick (Wed, 31 Oct 2018 18:14:34 GMT): jrosmith (Wed, 31 Oct 2018 19:26:28 GMT): waxer (Wed, 31 Oct 2018 23:03:20 GMT): waxer (Wed, 31 Oct 2018 23:05:49 GMT): waxer (Wed, 31 Oct 2018 23:06:13 GMT): npc0405 (Thu, 01 Nov 2018 06:22:30 GMT): knagware9 (Thu, 01 Nov 2018 11:17:41 GMT): knagware9 (Thu, 01 Nov 2018 11:18:02 GMT): sushmitha (Thu, 01 Nov 2018 11:56:54 GMT): jyellick (Thu, 01 Nov 2018 14:07:07 GMT): jyellick (Thu, 01 Nov 2018 14:10:27 GMT): JPonna (Fri, 02 Nov 2018 02:41:59 GMT): rkrish82 (Fri, 02 Nov 2018 04:15:07 GMT): rkrish82 (Fri, 02 Nov 2018 04:16:04 GMT): rkrish82 (Fri, 02 Nov 2018 04:16:36 GMT): jyellick (Fri, 02 Nov 2018 04:17:54 GMT): rkrish82 (Fri, 02 Nov 2018 04:38:30 GMT): jyellick (Fri, 02 Nov 2018 04:41:20 GMT): jyellick (Fri, 02 Nov 2018 04:41:53 GMT): rkrish82 (Fri, 02 Nov 2018 04:51:53 GMT): knagware9 (Fri, 02 Nov 2018 04:55:26 GMT): rkrish82 (Fri, 02 Nov 2018 05:08:06 GMT): rkrish82 (Fri, 02 Nov 2018 05:14:37 GMT): knagware9 (Fri, 02 Nov 2018 05:22:24 GMT): rkrish82 (Fri, 02 Nov 2018 05:51:12 GMT): rkrish82 (Fri, 02 Nov 2018 05:52:41 GMT): knagware9 (Fri, 02 Nov 2018 06:18:31 GMT): rkrish82 (Fri, 02 Nov 2018 13:46:17 GMT): jyellick (Fri, 02 Nov 2018 14:00:36 GMT): jyellick (Fri, 02 Nov 2018 14:02:14 GMT): bh4rtp (Sat, 03 Nov 2018 01:12:06 GMT): bh4rtp (Sat, 03 Nov 2018 01:53:31 GMT): mhs22 (Mon, 05 Nov 2018 05:15:56 GMT): jyellick (Mon, 05 Nov 2018 14:35:27 GMT): jyellick (Mon, 05 Nov 2018 14:35:51 GMT): jyellick (Mon, 05 Nov 2018 14:36:12 GMT): MohammadObaid (Mon, 05 Nov 2018 18:30:57 GMT): jyellick (Mon, 05 Nov 2018 18:33:23 GMT): MohammadObaid (Mon, 05 Nov 2018 18:43:09 GMT): jyellick (Mon, 05 Nov 2018 19:01:46 GMT): MohammadObaid (Mon, 05 Nov 2018 
19:10:22 GMT): jyellick (Mon, 05 Nov 2018 19:11:26 GMT): MohammadObaid (Mon, 05 Nov 2018 19:14:07 GMT): JaccobSmith (Tue, 06 Nov 2018 02:27:11 GMT): awes0menessInc (Tue, 06 Nov 2018 03:38:53 GMT): luckydogchina (Tue, 06 Nov 2018 03:42:34 GMT): jyellick (Tue, 06 Nov 2018 13:46:38 GMT): jyellick (Tue, 06 Nov 2018 13:46:38 GMT): jyellick (Tue, 06 Nov 2018 13:50:50 GMT): luckydogchina (Tue, 06 Nov 2018 14:39:16 GMT): kisna (Tue, 06 Nov 2018 18:48:40 GMT): kisna (Tue, 06 Nov 2018 18:52:21 GMT): kisna (Tue, 06 Nov 2018 18:52:21 GMT): kisna (Tue, 06 Nov 2018 19:42:09 GMT): kisna (Tue, 06 Nov 2018 19:42:09 GMT): jyellick (Tue, 06 Nov 2018 20:09:25 GMT): kisna (Tue, 06 Nov 2018 20:10:04 GMT): kisna (Tue, 06 Nov 2018 20:12:11 GMT): kisna (Tue, 06 Nov 2018 20:12:17 GMT): kisna (Tue, 06 Nov 2018 20:12:31 GMT): kisna (Tue, 06 Nov 2018 21:54:43 GMT): kisna (Tue, 06 Nov 2018 21:54:43 GMT): kisna (Tue, 06 Nov 2018 21:54:43 GMT): kisna (Tue, 06 Nov 2018 21:54:43 GMT): jyellick (Tue, 06 Nov 2018 21:55:56 GMT): jyellick (Tue, 06 Nov 2018 21:56:16 GMT): kisna (Tue, 06 Nov 2018 21:56:26 GMT): jyellick (Tue, 06 Nov 2018 21:56:34 GMT): jyellick (Tue, 06 Nov 2018 21:56:44 GMT): kisna (Tue, 06 Nov 2018 21:56:47 GMT): kisna (Tue, 06 Nov 2018 21:56:57 GMT): jyellick (Tue, 06 Nov 2018 21:57:08 GMT): kisna (Tue, 06 Nov 2018 21:57:14 GMT): jyellick (Tue, 06 Nov 2018 21:57:28 GMT): kisna (Tue, 06 Nov 2018 21:57:38 GMT): jyellick (Tue, 06 Nov 2018 21:57:42 GMT): kisna (Tue, 06 Nov 2018 22:00:35 GMT): jyellick (Tue, 06 Nov 2018 22:01:24 GMT): kisna (Tue, 06 Nov 2018 22:02:19 GMT): jyellick (Tue, 06 Nov 2018 22:02:31 GMT): kisna (Tue, 06 Nov 2018 22:03:27 GMT): kisna (Tue, 06 Nov 2018 22:03:27 GMT): kisna (Tue, 06 Nov 2018 22:03:27 GMT): jyellick (Tue, 06 Nov 2018 22:04:16 GMT): kisna (Tue, 06 Nov 2018 22:07:20 GMT): kisna (Tue, 06 Nov 2018 22:07:24 GMT): kisna (Tue, 06 Nov 2018 22:26:29 GMT): kisna (Tue, 06 Nov 2018 22:26:29 GMT): kisna (Tue, 06 Nov 2018 22:26:29 GMT): NoLimitHoldem (Wed, 07 Nov 
2018 01:13:52 GMT): jyellick (Wed, 07 Nov 2018 01:53:18 GMT): jyellick (Wed, 07 Nov 2018 01:53:18 GMT): enriquebusti (Wed, 07 Nov 2018 12:01:36 GMT): fanliyan (Fri, 09 Nov 2018 09:03:12 GMT): fanliyan (Fri, 09 Nov 2018 09:03:58 GMT): krabradosty (Fri, 09 Nov 2018 12:15:42 GMT): jyellick (Fri, 09 Nov 2018 13:44:20 GMT): jyellick (Fri, 09 Nov 2018 13:44:53 GMT): krabradosty (Fri, 09 Nov 2018 13:58:42 GMT): krabradosty (Fri, 09 Nov 2018 13:58:42 GMT): jyellick (Fri, 09 Nov 2018 14:09:57 GMT): huikang (Fri, 09 Nov 2018 19:25:37 GMT): huikang (Fri, 09 Nov 2018 19:25:59 GMT): huikang (Fri, 09 Nov 2018 19:27:47 GMT): huikang (Fri, 09 Nov 2018 19:31:37 GMT): holzeis (Sun, 11 Nov 2018 22:39:53 GMT): holzeis (Sun, 11 Nov 2018 22:45:40 GMT): holzeis (Sun, 11 Nov 2018 22:45:40 GMT): holzeis (Sun, 11 Nov 2018 22:45:40 GMT): holzeis (Sun, 11 Nov 2018 22:45:40 GMT): guoger (Mon, 12 Nov 2018 01:16:17 GMT): holzeis (Mon, 12 Nov 2018 04:58:03 GMT): holzeis (Mon, 12 Nov 2018 04:58:03 GMT): holzeis (Mon, 12 Nov 2018 04:58:03 GMT): holzeis (Mon, 12 Nov 2018 04:58:03 GMT): guoger (Mon, 12 Nov 2018 05:27:15 GMT): fanliyan (Mon, 12 Nov 2018 06:05:22 GMT): holzeis (Mon, 12 Nov 2018 06:24:08 GMT): holzeis (Mon, 12 Nov 2018 06:24:08 GMT): AlexanderZhovnuvaty (Mon, 12 Nov 2018 11:06:06 GMT): holzeis (Mon, 12 Nov 2018 11:58:19 GMT): JaccobSmith (Thu, 15 Nov 2018 01:13:54 GMT): JaccobSmith (Thu, 15 Nov 2018 01:14:01 GMT): Skprog (Thu, 15 Nov 2018 06:40:23 GMT): MuthuT (Thu, 15 Nov 2018 06:42:57 GMT): Skprog (Thu, 15 Nov 2018 06:43:29 GMT): Skprog (Thu, 15 Nov 2018 06:44:00 GMT): nainiubaba (Thu, 15 Nov 2018 08:47:09 GMT): jyellick (Thu, 15 Nov 2018 14:37:36 GMT): jyellick (Thu, 15 Nov 2018 17:50:22 GMT): magar36 (Thu, 15 Nov 2018 17:51:40 GMT): jyellick (Thu, 15 Nov 2018 17:52:09 GMT): jyellick (Thu, 15 Nov 2018 17:52:57 GMT): magar36 (Thu, 15 Nov 2018 18:33:02 GMT): magar36 (Thu, 15 Nov 2018 18:35:02 GMT): jyellick (Thu, 15 Nov 2018 18:36:39 GMT): jyellick (Thu, 15 Nov 2018 18:37:36 GMT): 
jyellick (Thu, 15 Nov 2018 18:37:36 GMT): magar36 (Thu, 15 Nov 2018 18:59:54 GMT): magar36 (Thu, 15 Nov 2018 19:00:57 GMT): jyellick (Thu, 15 Nov 2018 19:04:39 GMT): jyellick (Thu, 15 Nov 2018 19:04:47 GMT): magar36 (Thu, 15 Nov 2018 19:06:08 GMT): jyellick (Thu, 15 Nov 2018 19:06:18 GMT): magar36 (Thu, 15 Nov 2018 19:06:38 GMT): jyellick (Thu, 15 Nov 2018 19:06:49 GMT): magar36 (Thu, 15 Nov 2018 19:07:08 GMT): magar36 (Thu, 15 Nov 2018 19:07:26 GMT): jyellick (Thu, 15 Nov 2018 19:08:03 GMT): magar36 (Thu, 15 Nov 2018 19:11:40 GMT): magar36 (Thu, 15 Nov 2018 23:11:32 GMT): magar36 (Thu, 15 Nov 2018 23:13:15 GMT): pankajcheema (Fri, 16 Nov 2018 06:49:59 GMT): pankajcheema (Fri, 16 Nov 2018 07:11:54 GMT): pankajcheema (Fri, 16 Nov 2018 07:12:14 GMT): sushmitha (Fri, 16 Nov 2018 12:01:39 GMT): sushmitha (Fri, 16 Nov 2018 12:03:00 GMT): Skprog (Fri, 16 Nov 2018 12:33:38 GMT): Skprog (Fri, 16 Nov 2018 12:34:09 GMT): Skprog (Fri, 16 Nov 2018 12:34:45 GMT): Skprog (Fri, 16 Nov 2018 12:35:26 GMT): Skprog (Fri, 16 Nov 2018 12:35:27 GMT): akshay.sood (Fri, 16 Nov 2018 14:18:32 GMT): jyellick (Fri, 16 Nov 2018 15:49:39 GMT): jyellick (Fri, 16 Nov 2018 15:51:43 GMT): akshay.sood (Fri, 16 Nov 2018 16:14:16 GMT): akshay.sood (Fri, 16 Nov 2018 16:14:16 GMT): BellaAdams (Sat, 17 Nov 2018 00:46:30 GMT): nainiubaba (Mon, 19 Nov 2018 06:07:59 GMT): nainiubaba (Mon, 19 Nov 2018 06:07:59 GMT): jyellick (Mon, 19 Nov 2018 14:02:13 GMT): aatkddny (Mon, 19 Nov 2018 16:15:42 GMT): aatkddny (Mon, 19 Nov 2018 16:15:42 GMT): aatkddny (Mon, 19 Nov 2018 16:15:42 GMT): aatkddny (Mon, 19 Nov 2018 16:15:42 GMT): aatkddny (Mon, 19 Nov 2018 16:17:02 GMT): aatkddny (Mon, 19 Nov 2018 16:17:59 GMT): jyellick (Mon, 19 Nov 2018 17:41:53 GMT): jyellick (Mon, 19 Nov 2018 17:41:53 GMT): aatkddny (Mon, 19 Nov 2018 19:46:51 GMT): aatkddny (Mon, 19 Nov 2018 19:46:51 GMT): aatkddny (Mon, 19 Nov 2018 19:46:51 GMT): aatkddny (Mon, 19 Nov 2018 19:46:51 GMT): yacovm (Mon, 19 Nov 2018 22:28:37 GMT): nainiubaba (Tue, 
20 Nov 2018 01:17:46 GMT): kostas (Tue, 20 Nov 2018 01:52:14 GMT): kostas (Tue, 20 Nov 2018 01:52:14 GMT): kostas (Tue, 20 Nov 2018 01:52:14 GMT): kostas (Tue, 20 Nov 2018 01:52:39 GMT): kostas (Tue, 20 Nov 2018 01:52:59 GMT): kostas (Tue, 20 Nov 2018 01:53:29 GMT): kostas (Tue, 20 Nov 2018 01:53:55 GMT): kostas (Tue, 20 Nov 2018 01:53:59 GMT): kostas (Tue, 20 Nov 2018 01:54:20 GMT): aatkddny (Tue, 20 Nov 2018 02:26:59 GMT): aatkddny (Tue, 20 Nov 2018 02:26:59 GMT): kostas (Tue, 20 Nov 2018 02:44:53 GMT): aatkddny (Tue, 20 Nov 2018 02:52:20 GMT): aatkddny (Tue, 20 Nov 2018 02:52:20 GMT): kostas (Tue, 20 Nov 2018 02:58:52 GMT): aatkddny (Tue, 20 Nov 2018 03:00:12 GMT): kostas (Tue, 20 Nov 2018 03:00:53 GMT): kostas (Tue, 20 Nov 2018 03:00:53 GMT): kostas (Tue, 20 Nov 2018 03:02:22 GMT): kostas (Tue, 20 Nov 2018 03:02:22 GMT): kostas (Tue, 20 Nov 2018 03:02:22 GMT): kostas (Tue, 20 Nov 2018 03:03:01 GMT): kostas (Tue, 20 Nov 2018 03:04:47 GMT): aatkddny (Tue, 20 Nov 2018 03:15:36 GMT): aatkddny (Tue, 20 Nov 2018 03:15:36 GMT): aatkddny (Tue, 20 Nov 2018 03:15:36 GMT): aatkddny (Tue, 20 Nov 2018 03:15:36 GMT): aatkddny (Tue, 20 Nov 2018 03:15:36 GMT): aatkddny (Tue, 20 Nov 2018 03:15:36 GMT): aatkddny (Tue, 20 Nov 2018 03:15:36 GMT): bh4rtp (Tue, 20 Nov 2018 03:56:21 GMT): bh4rtp (Tue, 20 Nov 2018 03:56:21 GMT): kostas (Tue, 20 Nov 2018 04:06:36 GMT): LazarLukic (Tue, 20 Nov 2018 09:45:54 GMT): asaningmaxchain123 (Tue, 20 Nov 2018 13:18:48 GMT): asaningmaxchain123 (Tue, 20 Nov 2018 13:19:50 GMT): asaningmaxchain123 (Tue, 20 Nov 2018 13:21:09 GMT): aatkddny (Tue, 20 Nov 2018 14:02:04 GMT): asaningmaxchain123 (Tue, 20 Nov 2018 14:12:18 GMT): jrosmith (Tue, 20 Nov 2018 14:22:46 GMT): asaningmaxchain123 (Tue, 20 Nov 2018 14:34:19 GMT): asaningmaxchain123 (Tue, 20 Nov 2018 14:36:00 GMT): iamdm (Tue, 20 Nov 2018 14:40:54 GMT): jyellick (Tue, 20 Nov 2018 14:59:34 GMT): jyellick (Tue, 20 Nov 2018 15:00:33 GMT): iamdm (Tue, 20 Nov 2018 15:05:59 GMT): iamdm (Tue, 20 Nov 2018 
15:10:54 GMT): jyellick (Tue, 20 Nov 2018 15:11:10 GMT): jyellick (Tue, 20 Nov 2018 15:11:10 GMT): jyellick (Tue, 20 Nov 2018 15:11:10 GMT): jyellick (Tue, 20 Nov 2018 15:12:11 GMT): jyellick (Tue, 20 Nov 2018 15:12:30 GMT): jyellick (Tue, 20 Nov 2018 15:12:38 GMT): iamdm (Tue, 20 Nov 2018 15:13:40 GMT): jyellick (Tue, 20 Nov 2018 15:14:13 GMT): jyellick (Tue, 20 Nov 2018 15:15:19 GMT): iamdm (Tue, 20 Nov 2018 15:16:16 GMT): iamdm (Tue, 20 Nov 2018 15:16:20 GMT): jyellick (Tue, 20 Nov 2018 15:16:46 GMT): iamdm (Tue, 20 Nov 2018 15:19:00 GMT): jyellick (Tue, 20 Nov 2018 15:19:22 GMT): jyellick (Tue, 20 Nov 2018 15:19:51 GMT): jyellick (Tue, 20 Nov 2018 15:20:06 GMT): iamdm (Tue, 20 Nov 2018 15:21:48 GMT): jyellick (Tue, 20 Nov 2018 15:24:43 GMT): jyellick (Tue, 20 Nov 2018 15:25:06 GMT): jyellick (Tue, 20 Nov 2018 15:26:07 GMT): iamdm (Tue, 20 Nov 2018 15:51:44 GMT): jyellick (Tue, 20 Nov 2018 15:52:29 GMT): jyellick (Tue, 20 Nov 2018 15:52:29 GMT): jyellick (Tue, 20 Nov 2018 15:52:50 GMT): iamdm (Tue, 20 Nov 2018 15:52:52 GMT): jyellick (Tue, 20 Nov 2018 15:53:19 GMT): jyellick (Tue, 20 Nov 2018 15:53:19 GMT): jyellick (Tue, 20 Nov 2018 15:53:40 GMT): iamdm (Tue, 20 Nov 2018 15:55:09 GMT): jyellick (Tue, 20 Nov 2018 15:57:34 GMT): jyellick (Tue, 20 Nov 2018 15:57:53 GMT): iamdm (Tue, 20 Nov 2018 15:59:13 GMT): iamdm (Tue, 20 Nov 2018 16:00:44 GMT): jyellick (Tue, 20 Nov 2018 16:01:26 GMT): asaningmaxchain123 (Tue, 20 Nov 2018 23:34:25 GMT): LazarLukic (Wed, 21 Nov 2018 11:23:11 GMT): jyellick (Wed, 21 Nov 2018 14:39:30 GMT): jyellick (Wed, 21 Nov 2018 14:39:53 GMT): gravity (Thu, 22 Nov 2018 14:18:05 GMT): gravity (Thu, 22 Nov 2018 14:18:05 GMT): iamdm (Thu, 22 Nov 2018 15:15:09 GMT): iamdm (Thu, 22 Nov 2018 15:15:25 GMT): miiiiiyoung (Fri, 23 Nov 2018 05:26:41 GMT): fanliyan (Fri, 23 Nov 2018 08:56:19 GMT): VadimInshakov (Mon, 26 Nov 2018 20:06:37 GMT): VadimInshakov (Mon, 26 Nov 2018 20:09:27 GMT): jyellick (Mon, 26 Nov 2018 20:10:51 GMT): jyellick (Mon, 26 Nov 
2018 20:12:11 GMT): VadimInshakov (Mon, 26 Nov 2018 20:14:48 GMT): jyellick (Mon, 26 Nov 2018 20:15:40 GMT): gravity (Tue, 27 Nov 2018 08:17:34 GMT): aatkddny (Tue, 27 Nov 2018 14:10:26 GMT): sachin_bal (Wed, 28 Nov 2018 02:54:50 GMT): me020523 (Wed, 28 Nov 2018 03:16:33 GMT): shivann (Wed, 28 Nov 2018 09:53:19 GMT): ajit1433 (Wed, 28 Nov 2018 12:04:00 GMT): githubcpc (Thu, 29 Nov 2018 01:03:05 GMT): waxer (Thu, 29 Nov 2018 01:59:25 GMT): githubcpc (Thu, 29 Nov 2018 02:02:58 GMT): sushmitha (Thu, 29 Nov 2018 06:07:49 GMT): sushmitha (Thu, 29 Nov 2018 06:09:02 GMT): javrevasandeep (Thu, 29 Nov 2018 06:19:15 GMT): haggis (Thu, 29 Nov 2018 08:08:20 GMT): HoneyShah (Thu, 29 Nov 2018 11:31:55 GMT): gravity (Thu, 29 Nov 2018 17:51:55 GMT): jyellick (Thu, 29 Nov 2018 18:41:31 GMT): HoneyShah (Fri, 30 Nov 2018 05:01:55 GMT): StefanKosc (Fri, 30 Nov 2018 08:55:16 GMT): StefanKosc (Fri, 30 Nov 2018 08:57:36 GMT): maxrobot (Fri, 30 Nov 2018 09:03:32 GMT): migrenaa (Fri, 30 Nov 2018 10:00:38 GMT): migrenaa (Fri, 30 Nov 2018 10:00:42 GMT): waxer (Fri, 30 Nov 2018 13:01:50 GMT): krabradosty (Fri, 30 Nov 2018 14:09:02 GMT): mastersingh24 (Fri, 30 Nov 2018 17:33:56 GMT): krabradosty (Fri, 30 Nov 2018 18:06:21 GMT): krabradosty (Fri, 30 Nov 2018 18:06:21 GMT): krabradosty (Fri, 30 Nov 2018 18:06:21 GMT): krabradosty (Fri, 30 Nov 2018 18:06:21 GMT): mastersingh24 (Fri, 30 Nov 2018 18:18:07 GMT): mastersingh24 (Fri, 30 Nov 2018 18:18:55 GMT): mastersingh24 (Fri, 30 Nov 2018 18:20:12 GMT): krabradosty (Fri, 30 Nov 2018 20:27:59 GMT): javrevasandeep (Sat, 01 Dec 2018 14:09:22 GMT): mastersingh24 (Sat, 01 Dec 2018 15:00:57 GMT): javrevasandeep (Sat, 01 Dec 2018 18:07:54 GMT): mastersingh24 (Sat, 01 Dec 2018 22:35:19 GMT): ArpitKhurana1 (Sun, 02 Dec 2018 15:25:57 GMT): ArpitKhurana1 (Sun, 02 Dec 2018 15:27:43 GMT): guoger (Mon, 03 Dec 2018 01:57:35 GMT): ArpitKhurana1 (Mon, 03 Dec 2018 05:24:25 GMT): maxrobot (Mon, 03 Dec 2018 14:21:17 GMT): jyellick (Mon, 03 Dec 2018 14:36:25 GMT): 
maxrobot (Mon, 03 Dec 2018 14:37:06 GMT): maxrobot (Mon, 03 Dec 2018 14:37:35 GMT): jyellick (Mon, 03 Dec 2018 14:39:30 GMT): maxrobot (Mon, 03 Dec 2018 14:44:03 GMT): maxrobot (Mon, 03 Dec 2018 14:44:39 GMT): maxrobot (Mon, 03 Dec 2018 14:44:58 GMT): jyellick (Mon, 03 Dec 2018 14:54:17 GMT): maxrobot (Mon, 03 Dec 2018 14:55:18 GMT): maxrobot (Mon, 03 Dec 2018 14:56:21 GMT): Ryan2 (Mon, 03 Dec 2018 14:56:39 GMT): jyellick (Mon, 03 Dec 2018 14:57:23 GMT): jyellick (Mon, 03 Dec 2018 14:58:04 GMT): maxrobot (Mon, 03 Dec 2018 14:58:34 GMT): maxrobot (Mon, 03 Dec 2018 14:59:49 GMT): jyellick (Mon, 03 Dec 2018 15:00:47 GMT): maxrobot (Mon, 03 Dec 2018 15:01:16 GMT): maxrobot (Mon, 03 Dec 2018 15:02:20 GMT): maxrobot (Mon, 03 Dec 2018 15:02:45 GMT): maxrobot (Mon, 03 Dec 2018 17:19:08 GMT): jyellick (Mon, 03 Dec 2018 17:52:27 GMT): jyellick (Mon, 03 Dec 2018 17:52:46 GMT): jyellick (Mon, 03 Dec 2018 17:52:46 GMT): maxrobot (Tue, 04 Dec 2018 09:30:24 GMT): maxrobot (Tue, 04 Dec 2018 09:30:24 GMT): abityildiz (Tue, 04 Dec 2018 15:13:59 GMT): jyellick (Tue, 04 Dec 2018 15:53:25 GMT): MaxHuang (Tue, 04 Dec 2018 15:53:25 GMT): maxrobot (Tue, 04 Dec 2018 16:16:27 GMT): jyellick (Tue, 04 Dec 2018 16:17:30 GMT): maxrobot (Tue, 04 Dec 2018 16:18:04 GMT): maxrobot (Tue, 04 Dec 2018 16:19:22 GMT): jyellick (Tue, 04 Dec 2018 16:21:04 GMT): maxrobot (Tue, 04 Dec 2018 16:22:18 GMT): arjitkhullar (Wed, 05 Dec 2018 00:03:57 GMT): ArpitKhurana1 (Wed, 05 Dec 2018 04:50:29 GMT): guoger (Wed, 05 Dec 2018 04:55:38 GMT): ArpitKhurana1 (Wed, 05 Dec 2018 05:15:37 GMT): ArpitKhurana1 (Wed, 05 Dec 2018 05:15:37 GMT): qsmen (Wed, 05 Dec 2018 08:05:28 GMT): guoger (Wed, 05 Dec 2018 08:06:23 GMT): guoger (Wed, 05 Dec 2018 08:07:01 GMT): guoger (Wed, 05 Dec 2018 08:13:14 GMT): ArpitKhurana1 (Wed, 05 Dec 2018 08:46:01 GMT): FLASHJr (Wed, 05 Dec 2018 12:01:45 GMT): asaningmaxchain123 (Wed, 05 Dec 2018 14:28:23 GMT): asaningmaxchain123 (Wed, 05 Dec 2018 14:28:28 GMT): asaningmaxchain123 (Wed, 05 Dec 2018 
14:28:28 GMT): jyellick (Wed, 05 Dec 2018 14:54:35 GMT): asaningmaxchain123 (Wed, 05 Dec 2018 14:58:56 GMT): gravity (Wed, 05 Dec 2018 16:11:51 GMT): guoger (Wed, 05 Dec 2018 16:33:46 GMT): guoger (Wed, 05 Dec 2018 16:34:12 GMT): guoger (Wed, 05 Dec 2018 16:34:12 GMT): gravity (Wed, 05 Dec 2018 16:36:15 GMT): guoger (Wed, 05 Dec 2018 16:36:42 GMT): gravity (Wed, 05 Dec 2018 16:38:09 GMT): gravity (Wed, 05 Dec 2018 16:54:39 GMT): gravity (Wed, 05 Dec 2018 16:54:39 GMT): guoger (Wed, 05 Dec 2018 16:57:44 GMT): gravity (Wed, 05 Dec 2018 17:20:13 GMT): IgorSim (Wed, 05 Dec 2018 23:39:02 GMT): qsmen (Thu, 06 Dec 2018 05:45:57 GMT): qsmen (Thu, 06 Dec 2018 05:46:41 GMT): qsmen (Thu, 06 Dec 2018 05:53:09 GMT): guoger (Thu, 06 Dec 2018 07:37:22 GMT): IgorSim (Thu, 06 Dec 2018 07:39:10 GMT): guoger (Thu, 06 Dec 2018 07:40:20 GMT): guoger (Thu, 06 Dec 2018 07:42:27 GMT): IgorSim (Thu, 06 Dec 2018 07:43:43 GMT): qsmen (Thu, 06 Dec 2018 08:18:41 GMT): qsmen (Thu, 06 Dec 2018 08:23:40 GMT): gravity (Thu, 06 Dec 2018 08:57:03 GMT): guoger (Thu, 06 Dec 2018 08:58:46 GMT): qsmen (Fri, 07 Dec 2018 00:53:30 GMT): aatkddny (Fri, 07 Dec 2018 01:18:49 GMT): aatkddny (Fri, 07 Dec 2018 01:18:49 GMT): aatkddny (Fri, 07 Dec 2018 01:18:49 GMT): aatkddny (Fri, 07 Dec 2018 01:18:49 GMT): aatkddny (Fri, 07 Dec 2018 01:18:49 GMT): aatkddny (Fri, 07 Dec 2018 01:18:49 GMT): guoger (Fri, 07 Dec 2018 06:42:06 GMT): tkg (Fri, 07 Dec 2018 11:28:59 GMT): IgorSim (Fri, 07 Dec 2018 20:57:16 GMT): yacovm (Fri, 07 Dec 2018 21:02:40 GMT): yacovm (Fri, 07 Dec 2018 21:02:59 GMT): yacovm (Fri, 07 Dec 2018 21:03:16 GMT): qsmen (Sun, 09 Dec 2018 08:28:45 GMT): qsmen (Sun, 09 Dec 2018 08:31:57 GMT): anjalinaik (Mon, 10 Dec 2018 12:24:33 GMT): anjalinaik (Mon, 10 Dec 2018 12:24:33 GMT): anjalinaik (Mon, 10 Dec 2018 12:24:33 GMT): IgorSim (Mon, 10 Dec 2018 14:15:22 GMT): IgorSim (Mon, 10 Dec 2018 14:15:22 GMT): yacovm (Mon, 10 Dec 2018 14:16:39 GMT): yacovm (Mon, 10 Dec 2018 14:17:23 GMT): yacovm (Mon, 10 Dec 2018 
14:17:27 GMT): yacovm (Mon, 10 Dec 2018 14:17:47 GMT): yacovm (Mon, 10 Dec 2018 14:17:50 GMT): yacovm (Mon, 10 Dec 2018 14:17:55 GMT): yacovm (Mon, 10 Dec 2018 14:18:30 GMT): yacovm (Mon, 10 Dec 2018 14:36:47 GMT): yacovm (Mon, 10 Dec 2018 14:36:47 GMT): IgorSim (Mon, 10 Dec 2018 14:46:00 GMT): yacovm (Mon, 10 Dec 2018 16:00:41 GMT): yacovm (Mon, 10 Dec 2018 16:00:47 GMT): tock (Mon, 10 Dec 2018 16:00:47 GMT): maxrobot (Mon, 10 Dec 2018 16:03:15 GMT): maxrobot (Mon, 10 Dec 2018 16:03:45 GMT): yacovm (Mon, 10 Dec 2018 16:07:43 GMT): maxrobot (Mon, 10 Dec 2018 16:08:57 GMT): maxrobot (Mon, 10 Dec 2018 16:09:03 GMT): yacovm (Mon, 10 Dec 2018 16:11:16 GMT): tock (Mon, 10 Dec 2018 16:15:48 GMT): tock (Mon, 10 Dec 2018 16:15:48 GMT): yacovm (Mon, 10 Dec 2018 16:16:19 GMT): yacovm (Mon, 10 Dec 2018 16:16:39 GMT): tock (Mon, 10 Dec 2018 16:17:00 GMT): tock (Mon, 10 Dec 2018 16:17:00 GMT): yacovm (Mon, 10 Dec 2018 16:17:24 GMT): tock (Mon, 10 Dec 2018 16:18:23 GMT): tock (Mon, 10 Dec 2018 16:18:30 GMT): IgorSim (Mon, 10 Dec 2018 18:15:53 GMT): YashParihar (Tue, 11 Dec 2018 04:23:29 GMT): aatkddny (Wed, 12 Dec 2018 13:47:35 GMT): guoger (Wed, 12 Dec 2018 16:59:38 GMT): rsherwood (Wed, 12 Dec 2018 19:27:02 GMT): aatkddny (Wed, 12 Dec 2018 19:53:42 GMT): qizhang (Wed, 12 Dec 2018 22:36:03 GMT): javapriyan (Thu, 13 Dec 2018 05:29:54 GMT): javapriyan (Thu, 13 Dec 2018 05:39:46 GMT): guoger (Thu, 13 Dec 2018 09:10:18 GMT): guoger (Thu, 13 Dec 2018 09:10:41 GMT): javapriyan (Thu, 13 Dec 2018 10:16:02 GMT): qizhang (Thu, 13 Dec 2018 20:22:19 GMT): qizhang (Thu, 13 Dec 2018 20:22:30 GMT): Gaurang (Fri, 14 Dec 2018 16:41:28 GMT): Gaurang (Fri, 14 Dec 2018 16:43:25 GMT): Gaurang (Fri, 14 Dec 2018 16:43:25 GMT): deelthor (Tue, 18 Dec 2018 10:49:20 GMT): jiribroulik (Tue, 18 Dec 2018 13:38:42 GMT): yacovm (Tue, 18 Dec 2018 13:57:11 GMT): yacovm (Tue, 18 Dec 2018 13:57:13 GMT): jiribroulik (Tue, 18 Dec 2018 14:02:03 GMT): yacovm (Tue, 18 Dec 2018 14:05:11 GMT): yacovm (Tue, 18 Dec 2018 
14:05:39 GMT): yacovm (Tue, 18 Dec 2018 14:05:44 GMT): yacovm (Tue, 18 Dec 2018 14:06:09 GMT): jiribroulik (Tue, 18 Dec 2018 14:12:10 GMT): yacovm (Tue, 18 Dec 2018 14:14:34 GMT): jiribroulik (Tue, 18 Dec 2018 14:17:19 GMT): yacovm (Tue, 18 Dec 2018 14:17:41 GMT): jiribroulik (Tue, 18 Dec 2018 14:47:29 GMT): magar36 (Tue, 18 Dec 2018 22:47:15 GMT): qsmen (Wed, 19 Dec 2018 09:42:47 GMT): jiribroulik (Wed, 19 Dec 2018 09:44:43 GMT): jiribroulik (Wed, 19 Dec 2018 09:49:18 GMT): jiribroulik (Wed, 19 Dec 2018 09:49:36 GMT): guoger (Wed, 19 Dec 2018 15:02:17 GMT): qizhang (Wed, 19 Dec 2018 20:44:16 GMT): qsmen (Thu, 20 Dec 2018 01:11:29 GMT): guoger (Thu, 20 Dec 2018 02:33:31 GMT): guoger (Thu, 20 Dec 2018 02:33:31 GMT): lightcap (Thu, 20 Dec 2018 16:38:49 GMT): magar36 (Fri, 21 Dec 2018 16:54:16 GMT): magar36 (Fri, 21 Dec 2018 17:05:52 GMT): qizhang (Sat, 22 Dec 2018 01:49:44 GMT): waxer (Sat, 22 Dec 2018 02:00:16 GMT): merq (Sat, 22 Dec 2018 03:46:28 GMT): Rosan (Mon, 24 Dec 2018 09:26:10 GMT): yousaf (Wed, 26 Dec 2018 19:13:38 GMT): yousaf (Wed, 26 Dec 2018 19:20:57 GMT): pankajcheema (Thu, 27 Dec 2018 06:31:21 GMT): pankajcheema (Thu, 27 Dec 2018 06:31:36 GMT): pankajcheema (Thu, 27 Dec 2018 06:31:36 GMT): pankajcheema (Thu, 27 Dec 2018 06:32:04 GMT): pankajcheema (Thu, 27 Dec 2018 06:32:18 GMT): pankajcheema (Thu, 27 Dec 2018 06:32:48 GMT): dave.enyeart (Thu, 27 Dec 2018 16:20:01 GMT): bh4rtp (Fri, 28 Dec 2018 03:51:02 GMT): bh4rtp (Fri, 28 Dec 2018 03:52:43 GMT): bh4rtp (Fri, 28 Dec 2018 09:21:05 GMT): yousaf (Fri, 28 Dec 2018 11:44:32 GMT): yousaf (Fri, 28 Dec 2018 11:49:05 GMT): dave.enyeart (Fri, 28 Dec 2018 22:55:37 GMT): bh4rtp (Sat, 29 Dec 2018 00:57:06 GMT): yousaf (Sat, 29 Dec 2018 07:40:36 GMT): mamtabhardwaj12 (Sat, 29 Dec 2018 12:14:53 GMT): mamtabhardwaj12 (Sat, 29 Dec 2018 12:49:04 GMT): mastersingh24 (Sat, 29 Dec 2018 14:46:14 GMT): mamtabhardwaj12 (Mon, 31 Dec 2018 09:22:17 GMT): mastersingh24 (Mon, 31 Dec 2018 14:45:12 GMT): liaoruohuai (Tue, 01 Jan 
2019 12:34:31 GMT): liaoruohuai (Tue, 01 Jan 2019 12:35:26 GMT): jas.au (Tue, 01 Jan 2019 23:36:06 GMT): mamtabhardwaj12 (Wed, 02 Jan 2019 07:09:46 GMT): mastersingh24 (Wed, 02 Jan 2019 17:11:04 GMT): mastersingh24 (Wed, 02 Jan 2019 17:11:21 GMT): mamtabhardwaj12 (Thu, 03 Jan 2019 05:03:15 GMT): StefanKosc (Fri, 04 Jan 2019 08:54:28 GMT): StefanKosc (Fri, 04 Jan 2019 08:54:28 GMT): StefanKosc (Fri, 04 Jan 2019 08:54:28 GMT): StefanKosc (Fri, 04 Jan 2019 08:54:28 GMT): StefanKosc (Fri, 04 Jan 2019 08:54:28 GMT): StefanKosc (Fri, 04 Jan 2019 11:09:09 GMT): jyellick (Fri, 04 Jan 2019 15:04:03 GMT): StefanKosc (Fri, 04 Jan 2019 15:27:56 GMT): StefanKosc (Fri, 04 Jan 2019 15:27:56 GMT): StefanKosc (Fri, 04 Jan 2019 15:27:56 GMT): StefanKosc (Fri, 04 Jan 2019 15:27:56 GMT): StefanKosc (Fri, 04 Jan 2019 15:27:56 GMT): StefanKosc (Fri, 04 Jan 2019 15:27:56 GMT): StefanKosc (Fri, 04 Jan 2019 15:27:56 GMT): StefanKosc (Fri, 04 Jan 2019 15:31:54 GMT): jyellick (Fri, 04 Jan 2019 15:54:04 GMT): StefanKosc (Fri, 04 Jan 2019 16:41:09 GMT): x4e-salvi (Fri, 04 Jan 2019 18:52:21 GMT): hhlee (Mon, 07 Jan 2019 02:56:39 GMT): sanket1211 (Mon, 07 Jan 2019 07:27:19 GMT): deelthor (Mon, 07 Jan 2019 09:58:14 GMT): deelthor (Mon, 07 Jan 2019 09:58:14 GMT): guoger (Mon, 07 Jan 2019 10:01:44 GMT): deelthor (Mon, 07 Jan 2019 10:04:16 GMT): deelthor (Mon, 07 Jan 2019 10:12:16 GMT): yacovm (Mon, 07 Jan 2019 11:17:06 GMT): yacovm (Mon, 07 Jan 2019 11:17:16 GMT): yacovm (Mon, 07 Jan 2019 11:17:17 GMT): knagware9 (Mon, 07 Jan 2019 13:10:46 GMT): jyellick (Mon, 07 Jan 2019 13:41:45 GMT): knagware9 (Mon, 07 Jan 2019 13:49:37 GMT): knagware9 (Mon, 07 Jan 2019 13:50:00 GMT): GuillaumeTong (Tue, 08 Jan 2019 01:59:16 GMT): GuillaumeTong (Tue, 08 Jan 2019 01:59:36 GMT): jyellick (Tue, 08 Jan 2019 04:44:18 GMT): jyellick (Tue, 08 Jan 2019 04:45:41 GMT): jyellick (Tue, 08 Jan 2019 04:45:41 GMT): knagware9 (Tue, 08 Jan 2019 06:04:53 GMT): jyellick (Tue, 08 Jan 2019 06:26:32 GMT): knagware9 (Tue, 08 Jan 2019 
06:30:13 GMT): knagware9 (Tue, 08 Jan 2019 06:52:45 GMT): knagware9 (Tue, 08 Jan 2019 06:56:13 GMT): NeelKantht (Tue, 08 Jan 2019 12:39:54 GMT): NeelKantht (Tue, 08 Jan 2019 12:44:54 GMT): NeelKantht (Wed, 09 Jan 2019 05:03:36 GMT): mozkarakoc (Wed, 09 Jan 2019 14:18:24 GMT): mozkarakoc (Wed, 09 Jan 2019 14:20:02 GMT): jyellick (Wed, 09 Jan 2019 14:59:43 GMT): jyellick (Thu, 10 Jan 2019 04:29:21 GMT): knagware9 (Thu, 10 Jan 2019 05:22:48 GMT): DmitriPlakhov (Thu, 10 Jan 2019 12:01:43 GMT): NeerajKumar (Thu, 10 Jan 2019 13:10:49 GMT): NeerajKumar (Thu, 10 Jan 2019 13:11:21 GMT): NeerajKumar (Thu, 10 Jan 2019 13:11:39 GMT): NeerajKumar (Thu, 10 Jan 2019 13:11:39 GMT): NeerajKumar (Thu, 10 Jan 2019 13:11:47 GMT): NeerajKumar (Thu, 10 Jan 2019 13:12:21 GMT): NeerajKumar (Thu, 10 Jan 2019 13:13:43 GMT): NeerajKumar (Thu, 10 Jan 2019 13:13:43 GMT): NeerajKumar (Thu, 10 Jan 2019 13:13:57 GMT): NeerajKumar (Thu, 10 Jan 2019 13:15:55 GMT): mastersingh24 (Thu, 10 Jan 2019 13:43:32 GMT): jcbombardelli (Thu, 10 Jan 2019 15:31:33 GMT): kariyappal (Fri, 11 Jan 2019 06:59:55 GMT): NeerajKumar (Fri, 11 Jan 2019 10:23:24 GMT): NeerajKumar (Fri, 11 Jan 2019 10:57:08 GMT): NeerajKumar (Fri, 11 Jan 2019 10:57:50 GMT): NeerajKumar (Fri, 11 Jan 2019 10:58:09 GMT): NeerajKumar (Fri, 11 Jan 2019 10:58:25 GMT): NeerajKumar (Fri, 11 Jan 2019 10:58:25 GMT): NeerajKumar (Fri, 11 Jan 2019 10:58:26 GMT): NeerajKumar (Fri, 11 Jan 2019 10:58:31 GMT): aatkddny (Fri, 11 Jan 2019 13:41:37 GMT): aatkddny (Fri, 11 Jan 2019 13:41:37 GMT): aatkddny (Fri, 11 Jan 2019 13:41:37 GMT): jyellick (Fri, 11 Jan 2019 15:54:19 GMT): jyellick (Fri, 11 Jan 2019 15:56:07 GMT): aatkddny (Fri, 11 Jan 2019 16:55:00 GMT): aatkddny (Fri, 11 Jan 2019 16:55:00 GMT): jyellick (Fri, 11 Jan 2019 16:55:56 GMT): aatkddny (Fri, 11 Jan 2019 16:59:44 GMT): jyellick (Fri, 11 Jan 2019 17:00:30 GMT): yacovm (Fri, 11 Jan 2019 17:51:50 GMT): aatkddny (Fri, 11 Jan 2019 18:29:50 GMT): kostas (Sat, 12 Jan 2019 17:14:47 GMT): kostas (Sat, 
[Export residue: channel entries from 12 Jan 2019 through 25 Apr 2019 survive only as sender names and timestamps; the message bodies were lost in the export and cannot be reconstructed.]
Apr 2019 08:42:50 GMT): biksen (Thu, 25 Apr 2019 08:43:21 GMT): biksen (Thu, 25 Apr 2019 08:44:36 GMT): guoger (Thu, 25 Apr 2019 08:49:36 GMT): biksen (Thu, 25 Apr 2019 08:51:04 GMT): guoger (Thu, 25 Apr 2019 09:03:38 GMT): JuanSuero (Thu, 25 Apr 2019 13:00:10 GMT): JuanSuero (Thu, 25 Apr 2019 13:00:10 GMT): JuanSuero (Thu, 25 Apr 2019 13:00:10 GMT): Fias (Fri, 26 Apr 2019 06:54:08 GMT): GowriR (Fri, 26 Apr 2019 14:30:31 GMT): GowriR (Fri, 26 Apr 2019 14:31:33 GMT): GowriR (Fri, 26 Apr 2019 14:31:34 GMT): GowriR (Fri, 26 Apr 2019 14:31:53 GMT): JuanSuero (Fri, 26 Apr 2019 16:15:13 GMT): JuanSuero (Fri, 26 Apr 2019 16:15:13 GMT): vieiramanoel (Fri, 26 Apr 2019 19:05:50 GMT): vieiramanoel (Fri, 26 Apr 2019 19:06:00 GMT): vieiramanoel (Fri, 26 Apr 2019 19:06:11 GMT): vieiramanoel (Fri, 26 Apr 2019 19:11:26 GMT): vieiramanoel (Fri, 26 Apr 2019 19:11:26 GMT): JuanSuero (Sat, 27 Apr 2019 21:40:48 GMT): JuanSuero (Sat, 27 Apr 2019 23:24:12 GMT): JuanSuero (Sat, 27 Apr 2019 23:24:12 GMT): bh4rtp (Mon, 29 Apr 2019 06:43:34 GMT): bh4rtp (Mon, 29 Apr 2019 06:44:50 GMT): Vishal3152 (Mon, 29 Apr 2019 07:15:56 GMT): Vishal3152 (Mon, 29 Apr 2019 07:17:20 GMT): Vishal3152 (Mon, 29 Apr 2019 09:19:14 GMT): Vishal3152 (Mon, 29 Apr 2019 09:19:42 GMT): mastersingh24 (Mon, 29 Apr 2019 09:55:04 GMT): Chandoo (Mon, 29 Apr 2019 18:42:36 GMT): guoger (Tue, 30 Apr 2019 02:50:43 GMT): Vishal3152 (Tue, 30 Apr 2019 11:19:58 GMT): mastersingh24 (Tue, 30 Apr 2019 12:35:33 GMT): jtrayfield (Tue, 30 Apr 2019 17:48:32 GMT): jtrayfield (Tue, 30 Apr 2019 17:51:21 GMT): jtrayfield (Tue, 30 Apr 2019 17:51:52 GMT): Vishal3152 (Wed, 01 May 2019 09:47:30 GMT): Chandoo (Wed, 01 May 2019 13:59:29 GMT): Chandoo (Wed, 01 May 2019 14:02:21 GMT): Chandoo (Wed, 01 May 2019 14:02:43 GMT): Chandoo (Wed, 01 May 2019 14:02:43 GMT): guoger (Thu, 02 May 2019 03:37:39 GMT): GowriR (Thu, 02 May 2019 16:54:10 GMT): abityildiz (Fri, 03 May 2019 06:30:40 GMT): Chandoo (Fri, 03 May 2019 15:59:27 GMT): Chandoo (Fri, 03 May 
2019 16:02:39 GMT): amolpednekar (Tue, 07 May 2019 09:54:24 GMT): amolpednekar (Tue, 07 May 2019 10:13:00 GMT): david_dornseifer (Wed, 08 May 2019 13:35:24 GMT): caveman7 (Thu, 09 May 2019 08:34:26 GMT): caveman7 (Thu, 09 May 2019 10:20:45 GMT): krabradosty (Thu, 09 May 2019 16:33:39 GMT): guoger (Fri, 10 May 2019 07:50:56 GMT): abityildiz (Sat, 11 May 2019 12:06:19 GMT): smallant (Sat, 11 May 2019 17:00:45 GMT): smallant (Sat, 11 May 2019 17:00:45 GMT): Ramrockez143 (Mon, 13 May 2019 06:39:59 GMT): Ramrockez143 (Mon, 13 May 2019 06:40:00 GMT): Ramrockez143 (Mon, 13 May 2019 06:40:25 GMT): abityildiz (Mon, 13 May 2019 06:45:23 GMT): mauricio (Tue, 14 May 2019 13:05:22 GMT): rickr (Tue, 14 May 2019 13:35:49 GMT): rickr (Tue, 14 May 2019 13:37:50 GMT): rickr (Tue, 14 May 2019 13:38:39 GMT): rickr (Tue, 14 May 2019 13:44:08 GMT): mauricio (Tue, 14 May 2019 14:04:23 GMT): mauricio (Tue, 14 May 2019 14:04:23 GMT): rickr (Tue, 14 May 2019 14:09:28 GMT): rickr (Tue, 14 May 2019 14:09:57 GMT): mauricio (Tue, 14 May 2019 14:15:02 GMT): mauricio (Tue, 14 May 2019 14:15:39 GMT): mauricio (Tue, 14 May 2019 14:16:04 GMT): mauricio (Tue, 14 May 2019 14:17:41 GMT): mauricio (Tue, 14 May 2019 14:17:41 GMT): rickr (Tue, 14 May 2019 14:28:17 GMT): rickr (Tue, 14 May 2019 14:46:46 GMT): dave.enyeart (Tue, 14 May 2019 18:30:58 GMT): dave.enyeart (Tue, 14 May 2019 18:31:06 GMT): dave.enyeart (Tue, 14 May 2019 18:31:25 GMT): rickr (Wed, 15 May 2019 02:19:31 GMT): Ramrockez143 (Wed, 15 May 2019 09:04:04 GMT): Ramrockez143 (Wed, 15 May 2019 09:04:21 GMT): mauricio (Wed, 15 May 2019 12:22:16 GMT): mauricio (Wed, 15 May 2019 12:22:48 GMT): mauricio (Wed, 15 May 2019 12:22:48 GMT): shrivastava.amit (Thu, 16 May 2019 12:37:10 GMT): Rajatsharma (Thu, 16 May 2019 21:13:27 GMT): Rajatsharma (Thu, 16 May 2019 21:13:28 GMT): Rajatsharma (Thu, 16 May 2019 21:13:28 GMT): Rajatsharma (Thu, 16 May 2019 22:03:34 GMT): circlespainter (Sat, 18 May 2019 07:35:03 GMT): rsherwood (Mon, 20 May 2019 12:41:26 
GMT): rsherwood (Mon, 20 May 2019 12:54:49 GMT): jyellick (Mon, 20 May 2019 17:00:27 GMT): jyellick (Mon, 20 May 2019 17:01:10 GMT): rsherwood (Mon, 20 May 2019 17:02:13 GMT): jyellick (Mon, 20 May 2019 18:08:01 GMT): JoshFodale (Mon, 20 May 2019 20:53:10 GMT): JoshFodale (Mon, 20 May 2019 20:53:11 GMT): JoshFodale (Mon, 20 May 2019 20:53:11 GMT): JoshFodale (Mon, 20 May 2019 20:53:49 GMT): Rajatsharma (Tue, 21 May 2019 10:21:59 GMT): darapich92 (Tue, 21 May 2019 15:05:36 GMT): darapich92 (Tue, 21 May 2019 15:07:30 GMT): Shyam_Pratap_Singh (Tue, 21 May 2019 17:38:13 GMT): JoshFodale (Tue, 21 May 2019 18:16:16 GMT): guoger (Wed, 22 May 2019 08:28:09 GMT): guoger (Wed, 22 May 2019 08:31:10 GMT): Rajatsharma (Wed, 22 May 2019 08:37:08 GMT): guoger (Wed, 22 May 2019 08:43:18 GMT): Rajatsharma (Wed, 22 May 2019 08:45:25 GMT): Rajatsharma (Wed, 22 May 2019 08:45:25 GMT): Rajatsharma (Wed, 22 May 2019 08:45:25 GMT): guoger (Wed, 22 May 2019 08:46:39 GMT): Rajatsharma (Wed, 22 May 2019 08:48:03 GMT): Rajatsharma (Wed, 22 May 2019 08:59:04 GMT): Rajatsharma (Wed, 22 May 2019 08:59:04 GMT): guoger (Wed, 22 May 2019 09:00:35 GMT): guoger (Wed, 22 May 2019 09:00:57 GMT): Rajatsharma (Wed, 22 May 2019 09:01:13 GMT): Rajatsharma (Wed, 22 May 2019 09:03:24 GMT): guoger (Wed, 22 May 2019 09:05:30 GMT): Rajatsharma (Wed, 22 May 2019 09:11:45 GMT): Rajatsharma (Wed, 22 May 2019 09:12:58 GMT): guoger (Wed, 22 May 2019 09:14:50 GMT): Rajatsharma (Wed, 22 May 2019 09:15:17 GMT): Rajatsharma (Wed, 22 May 2019 09:15:55 GMT): guoger (Wed, 22 May 2019 09:17:57 GMT): Rajatsharma (Wed, 22 May 2019 09:19:38 GMT): Rajatsharma (Wed, 22 May 2019 09:21:01 GMT): guoger (Wed, 22 May 2019 09:23:37 GMT): Rajatsharma (Wed, 22 May 2019 09:24:49 GMT): Rajatsharma (Wed, 22 May 2019 09:32:57 GMT): guoger (Wed, 22 May 2019 09:33:40 GMT): Rajatsharma (Wed, 22 May 2019 09:34:45 GMT): guoger (Wed, 22 May 2019 09:35:44 GMT): Rajatsharma (Wed, 22 May 2019 09:36:30 GMT): Rajatsharma (Wed, 22 May 2019 09:49:26 
GMT): guoger (Wed, 22 May 2019 10:23:37 GMT): Rajatsharma (Wed, 22 May 2019 10:42:55 GMT): Rajatsharma (Wed, 22 May 2019 10:46:16 GMT): Rajatsharma (Wed, 22 May 2019 10:54:39 GMT): minollo (Wed, 22 May 2019 12:59:25 GMT): SashaPESIC (Wed, 22 May 2019 14:43:02 GMT): minollo (Wed, 22 May 2019 16:06:18 GMT): yacovm (Wed, 22 May 2019 16:06:43 GMT): yacovm (Wed, 22 May 2019 16:06:48 GMT): yacovm (Wed, 22 May 2019 16:07:09 GMT): yacovm (Wed, 22 May 2019 16:07:49 GMT): minollo (Wed, 22 May 2019 16:08:46 GMT): dave.enyeart (Wed, 22 May 2019 18:06:14 GMT): dave.enyeart (Wed, 22 May 2019 18:06:42 GMT): minollo (Wed, 22 May 2019 18:07:11 GMT): dave.enyeart (Wed, 22 May 2019 18:08:50 GMT): minollo (Wed, 22 May 2019 18:09:24 GMT): guoger (Thu, 23 May 2019 02:18:23 GMT): Rajatsharma (Thu, 23 May 2019 08:47:06 GMT): Rajatsharma (Thu, 23 May 2019 08:47:06 GMT): Rajatsharma (Thu, 23 May 2019 08:47:06 GMT): Rajatsharma (Thu, 23 May 2019 08:48:49 GMT): kn3118 (Thu, 23 May 2019 16:40:32 GMT): bmatsuo (Fri, 24 May 2019 16:39:40 GMT): Rajatsharma (Sun, 26 May 2019 19:43:59 GMT): guoger (Mon, 27 May 2019 09:19:37 GMT): shrivastava.amit (Wed, 29 May 2019 09:23:04 GMT): shrivastava.amit (Wed, 29 May 2019 09:23:59 GMT): shrivastava.amit (Wed, 29 May 2019 09:24:00 GMT): shrivastava.amit (Wed, 29 May 2019 09:24:18 GMT): RodrigoMedeiros (Wed, 29 May 2019 17:15:50 GMT): bandreghetti (Wed, 29 May 2019 21:37:21 GMT): bandreghetti (Wed, 29 May 2019 21:37:21 GMT): yacovm (Wed, 29 May 2019 22:06:20 GMT): javrevasandeep (Thu, 30 May 2019 05:47:13 GMT): guoger (Thu, 30 May 2019 08:13:29 GMT): guoger (Thu, 30 May 2019 08:13:58 GMT): donjon (Thu, 30 May 2019 09:41:36 GMT): krabradosty (Thu, 30 May 2019 12:19:09 GMT): krabradosty (Thu, 30 May 2019 12:19:09 GMT): jyellick (Thu, 30 May 2019 12:35:28 GMT): krabradosty (Thu, 30 May 2019 15:45:17 GMT): bandreghetti (Thu, 30 May 2019 16:05:54 GMT): bandreghetti (Thu, 30 May 2019 16:35:08 GMT): yacovm (Thu, 30 May 2019 16:43:17 GMT): yacovm (Thu, 30 May 2019 
16:43:29 GMT): bandreghetti (Thu, 30 May 2019 16:43:44 GMT): yacovm (Thu, 30 May 2019 16:43:47 GMT): yacovm (Thu, 30 May 2019 16:44:01 GMT): bandreghetti (Thu, 30 May 2019 16:44:34 GMT): yacovm (Thu, 30 May 2019 16:44:36 GMT): krabradosty (Thu, 30 May 2019 17:51:55 GMT): krabradosty (Thu, 30 May 2019 17:51:55 GMT): krabradosty (Thu, 30 May 2019 18:21:15 GMT): krabradosty (Thu, 30 May 2019 18:21:15 GMT): BlueKing (Fri, 31 May 2019 13:05:55 GMT): anand.fast (Tue, 04 Jun 2019 01:21:11 GMT): Unni_1994 (Thu, 06 Jun 2019 08:00:53 GMT): Unni_1994 (Thu, 06 Jun 2019 08:00:53 GMT): guoger (Thu, 06 Jun 2019 08:24:42 GMT): Unni_1994 (Thu, 06 Jun 2019 08:43:45 GMT): Unni_1994 (Thu, 06 Jun 2019 08:53:17 GMT): guoger (Thu, 06 Jun 2019 08:54:03 GMT): guoger (Thu, 06 Jun 2019 08:54:03 GMT): Unni_1994 (Thu, 06 Jun 2019 08:55:41 GMT): guoger (Thu, 06 Jun 2019 08:56:14 GMT): Unni_1994 (Thu, 06 Jun 2019 08:57:20 GMT): Unni_1994 (Thu, 06 Jun 2019 08:59:44 GMT): guoger (Thu, 06 Jun 2019 09:01:23 GMT): guoger (Thu, 06 Jun 2019 09:01:42 GMT): Unni_1994 (Thu, 06 Jun 2019 09:08:04 GMT): SamYuan1990 (Thu, 06 Jun 2019 09:20:18 GMT): guoger (Thu, 06 Jun 2019 09:33:29 GMT): Rajatsharma (Mon, 10 Jun 2019 14:26:59 GMT): Rajatsharma (Mon, 10 Jun 2019 14:29:34 GMT): Rajatsharma (Mon, 10 Jun 2019 14:31:05 GMT): Rajatsharma (Mon, 10 Jun 2019 15:02:06 GMT): Rajatsharma (Mon, 10 Jun 2019 15:08:57 GMT): dsanchezseco (Mon, 10 Jun 2019 15:31:31 GMT): jyellick (Mon, 10 Jun 2019 15:32:51 GMT): dsanchezseco (Mon, 10 Jun 2019 15:34:06 GMT): jyellick (Mon, 10 Jun 2019 15:35:27 GMT): dsanchezseco (Mon, 10 Jun 2019 15:35:38 GMT): jyellick (Mon, 10 Jun 2019 15:35:45 GMT): dsanchezseco (Mon, 10 Jun 2019 15:36:04 GMT): jyellick (Mon, 10 Jun 2019 15:36:35 GMT): dsanchezseco (Mon, 10 Jun 2019 15:36:59 GMT): dsanchezseco (Mon, 10 Jun 2019 15:37:31 GMT): jyellick (Mon, 10 Jun 2019 15:38:40 GMT): dsanchezseco (Mon, 10 Jun 2019 15:39:52 GMT): dsanchezseco (Mon, 10 Jun 2019 15:48:48 GMT): jyellick (Mon, 10 Jun 2019 
15:49:49 GMT): dsanchezseco (Mon, 10 Jun 2019 15:50:07 GMT): jyellick (Mon, 10 Jun 2019 15:50:55 GMT): dsanchezseco (Mon, 10 Jun 2019 15:50:56 GMT): dsanchezseco (Mon, 10 Jun 2019 15:51:46 GMT): dsanchezseco (Mon, 10 Jun 2019 15:56:50 GMT): dsanchezseco (Mon, 10 Jun 2019 16:00:57 GMT): dsanchezseco (Mon, 10 Jun 2019 16:01:29 GMT): jyellick (Mon, 10 Jun 2019 16:13:15 GMT): jyellick (Mon, 10 Jun 2019 16:13:42 GMT): dsanchezseco (Mon, 10 Jun 2019 16:20:02 GMT): dsanchezseco (Mon, 10 Jun 2019 16:20:28 GMT): dsanchezseco (Mon, 10 Jun 2019 16:21:37 GMT): dsanchezseco (Mon, 10 Jun 2019 16:25:04 GMT): dsanchezseco (Mon, 10 Jun 2019 16:25:45 GMT): jyellick (Mon, 10 Jun 2019 16:26:03 GMT): dsanchezseco (Mon, 10 Jun 2019 16:47:05 GMT): jyellick (Mon, 10 Jun 2019 16:48:59 GMT): jyellick (Mon, 10 Jun 2019 16:49:18 GMT): dsanchezseco (Mon, 10 Jun 2019 16:51:20 GMT): dsanchezseco (Mon, 10 Jun 2019 16:51:32 GMT): Unni_1994 (Tue, 11 Jun 2019 13:40:14 GMT): guoger (Tue, 11 Jun 2019 13:51:12 GMT): jyellick (Tue, 11 Jun 2019 14:18:12 GMT): guoger (Tue, 11 Jun 2019 14:21:48 GMT): jyellick (Tue, 11 Jun 2019 14:22:34 GMT): guoger (Tue, 11 Jun 2019 14:23:12 GMT): Rajatsharma (Tue, 11 Jun 2019 14:27:21 GMT): Rajatsharma (Tue, 11 Jun 2019 14:28:02 GMT): jyellick (Tue, 11 Jun 2019 14:31:17 GMT): Rajatsharma (Tue, 11 Jun 2019 14:33:50 GMT): Rajatsharma (Tue, 11 Jun 2019 14:34:46 GMT): jyellick (Tue, 11 Jun 2019 14:35:23 GMT): Rajatsharma (Tue, 11 Jun 2019 14:35:29 GMT): Rajatsharma (Tue, 11 Jun 2019 14:36:02 GMT): jyellick (Tue, 11 Jun 2019 14:38:37 GMT): Rajatsharma (Tue, 11 Jun 2019 14:40:25 GMT): jyellick (Tue, 11 Jun 2019 14:42:29 GMT): Rajatsharma (Tue, 11 Jun 2019 14:42:37 GMT): jyellick (Tue, 11 Jun 2019 14:43:19 GMT): jyellick (Tue, 11 Jun 2019 14:44:27 GMT): Rajatsharma (Tue, 11 Jun 2019 14:47:46 GMT): jyellick (Tue, 11 Jun 2019 14:50:43 GMT): Rajatsharma (Tue, 11 Jun 2019 14:52:48 GMT): Unni_1994 (Tue, 11 Jun 2019 15:00:00 GMT): dsanchezseco (Tue, 11 Jun 2019 15:26:26 GMT): 
dsanchezseco (Tue, 11 Jun 2019 15:27:35 GMT): dsanchezseco (Tue, 11 Jun 2019 15:28:20 GMT): dsanchezseco (Tue, 11 Jun 2019 15:28:58 GMT): jyellick (Tue, 11 Jun 2019 15:30:39 GMT): dsanchezseco (Tue, 11 Jun 2019 15:32:56 GMT): dsanchezseco (Tue, 11 Jun 2019 15:33:29 GMT): jyellick (Tue, 11 Jun 2019 15:33:41 GMT): jyellick (Tue, 11 Jun 2019 15:37:06 GMT): jyellick (Tue, 11 Jun 2019 15:37:15 GMT): dsanchezseco (Tue, 11 Jun 2019 15:38:00 GMT): dsanchezseco (Tue, 11 Jun 2019 15:38:33 GMT): jyellick (Tue, 11 Jun 2019 15:38:53 GMT): dsanchezseco (Tue, 11 Jun 2019 15:39:04 GMT): dsanchezseco (Tue, 11 Jun 2019 15:39:11 GMT): dsanchezseco (Tue, 11 Jun 2019 15:39:11 GMT): Swhit210 (Tue, 11 Jun 2019 17:06:03 GMT): Swhit210 (Tue, 11 Jun 2019 17:10:29 GMT): jyellick (Tue, 11 Jun 2019 18:22:35 GMT): mbanerjee (Tue, 11 Jun 2019 19:14:16 GMT): bandreghetti (Tue, 11 Jun 2019 19:54:45 GMT): jyellick (Tue, 11 Jun 2019 19:56:30 GMT): jyellick (Tue, 11 Jun 2019 19:56:55 GMT): bandreghetti (Tue, 11 Jun 2019 20:45:42 GMT): scottz (Tue, 11 Jun 2019 20:52:00 GMT): Rajatsharma (Wed, 12 Jun 2019 07:55:21 GMT): guoger (Wed, 12 Jun 2019 08:05:32 GMT): Rajatsharma (Wed, 12 Jun 2019 08:28:26 GMT): Rajatsharma (Wed, 12 Jun 2019 09:48:40 GMT): Rajatsharma (Wed, 12 Jun 2019 09:50:20 GMT): Rajatsharma (Wed, 12 Jun 2019 09:51:12 GMT): Rajatsharma (Wed, 12 Jun 2019 09:52:42 GMT): guoger (Wed, 12 Jun 2019 10:01:31 GMT): Rajatsharma (Wed, 12 Jun 2019 10:08:47 GMT): Rajatsharma (Wed, 12 Jun 2019 10:11:21 GMT): guoger (Wed, 12 Jun 2019 10:18:43 GMT): Rajatsharma (Wed, 12 Jun 2019 11:17:08 GMT): Rajatsharma (Wed, 12 Jun 2019 11:18:04 GMT): Rajatsharma (Wed, 12 Jun 2019 11:18:59 GMT): Rajatsharma (Wed, 12 Jun 2019 12:27:29 GMT): mbanerjee (Wed, 12 Jun 2019 18:33:49 GMT): jyellick (Wed, 12 Jun 2019 18:51:41 GMT): mbanerjee (Wed, 12 Jun 2019 19:22:41 GMT): mbanerjee (Wed, 12 Jun 2019 19:22:54 GMT): mbanerjee (Wed, 12 Jun 2019 19:23:52 GMT): mbanerjee (Wed, 12 Jun 2019 19:24:06 GMT): jyellick (Wed, 12 Jun 2019 
19:35:59 GMT): mbanerjee (Wed, 12 Jun 2019 19:37:04 GMT): jyellick (Wed, 12 Jun 2019 19:38:31 GMT): mbanerjee (Wed, 12 Jun 2019 19:38:49 GMT): mbanerjee (Wed, 12 Jun 2019 19:39:08 GMT): mbanerjee (Wed, 12 Jun 2019 19:39:39 GMT): mbanerjee (Wed, 12 Jun 2019 19:39:59 GMT): jyellick (Wed, 12 Jun 2019 19:42:17 GMT): mbanerjee (Wed, 12 Jun 2019 19:43:52 GMT): mbanerjee (Wed, 12 Jun 2019 19:45:03 GMT): mbanerjee (Wed, 12 Jun 2019 19:45:03 GMT): jyellick (Wed, 12 Jun 2019 19:46:23 GMT): jyellick (Wed, 12 Jun 2019 19:47:50 GMT): mbanerjee (Wed, 12 Jun 2019 19:50:44 GMT): jyellick (Wed, 12 Jun 2019 19:50:58 GMT): mbanerjee (Wed, 12 Jun 2019 19:51:10 GMT): mbanerjee (Wed, 12 Jun 2019 19:51:19 GMT): jyellick (Wed, 12 Jun 2019 19:52:37 GMT): Rajatsharma (Wed, 12 Jun 2019 19:52:54 GMT): adityanalge (Wed, 12 Jun 2019 19:56:29 GMT): mbanerjee (Wed, 12 Jun 2019 20:04:57 GMT): jyellick (Wed, 12 Jun 2019 20:06:55 GMT): mbanerjee (Wed, 12 Jun 2019 20:07:17 GMT): mbanerjee (Wed, 12 Jun 2019 20:12:02 GMT): mbanerjee (Wed, 12 Jun 2019 20:12:16 GMT): mbanerjee (Wed, 12 Jun 2019 20:12:51 GMT): jyellick (Wed, 12 Jun 2019 20:36:39 GMT): aatkddny (Thu, 13 Jun 2019 00:12:39 GMT): caveman7 (Thu, 13 Jun 2019 02:30:23 GMT): jyellick (Thu, 13 Jun 2019 03:04:10 GMT): jyellick (Thu, 13 Jun 2019 03:04:37 GMT): alokkv (Thu, 13 Jun 2019 09:21:49 GMT): Rajatsharma (Thu, 13 Jun 2019 10:25:41 GMT): Rajatsharma (Thu, 13 Jun 2019 10:25:41 GMT): aatkddny (Thu, 13 Jun 2019 11:26:24 GMT): ajayatgit (Thu, 13 Jun 2019 23:18:50 GMT): ArtemFrantsiian (Fri, 14 Jun 2019 08:28:11 GMT): ArtemFrantsiian (Fri, 14 Jun 2019 08:31:14 GMT): ArtemFrantsiian (Fri, 14 Jun 2019 08:32:58 GMT): ArtemFrantsiian (Fri, 14 Jun 2019 08:32:58 GMT): ArtemFrantsiian (Fri, 14 Jun 2019 08:35:54 GMT): guoger (Fri, 14 Jun 2019 12:59:11 GMT): guoger (Fri, 14 Jun 2019 13:00:04 GMT): Rajatsharma (Fri, 14 Jun 2019 13:01:45 GMT): Rajatsharma (Fri, 14 Jun 2019 13:02:06 GMT): guoger (Fri, 14 Jun 2019 13:07:09 GMT): Rajatsharma (Fri, 14 Jun 2019 
13:11:02 GMT): guoger (Fri, 14 Jun 2019 13:12:10 GMT): Rajatsharma (Fri, 14 Jun 2019 13:12:34 GMT): Rajatsharma (Fri, 14 Jun 2019 13:13:24 GMT): guoger (Fri, 14 Jun 2019 13:13:29 GMT): guoger (Fri, 14 Jun 2019 13:13:50 GMT): guoger (Fri, 14 Jun 2019 13:14:06 GMT): guoger (Fri, 14 Jun 2019 13:14:11 GMT): guoger (Fri, 14 Jun 2019 13:14:23 GMT): Rajatsharma (Fri, 14 Jun 2019 13:17:21 GMT): guoger (Fri, 14 Jun 2019 13:18:14 GMT): Rajatsharma (Fri, 14 Jun 2019 13:18:47 GMT): Rajatsharma (Fri, 14 Jun 2019 13:18:57 GMT): adityanalge (Fri, 14 Jun 2019 22:46:43 GMT): adityanalge (Fri, 14 Jun 2019 22:46:52 GMT): adityanalge (Fri, 14 Jun 2019 22:47:31 GMT): dsanchezseco (Mon, 17 Jun 2019 09:12:31 GMT): aatkddny (Mon, 17 Jun 2019 13:42:02 GMT): aatkddny (Mon, 17 Jun 2019 13:42:02 GMT): Swhit210 (Mon, 17 Jun 2019 14:04:14 GMT): Swhit210 (Mon, 17 Jun 2019 14:19:02 GMT): Swhit210 (Mon, 17 Jun 2019 20:17:27 GMT): dsanchezseco (Tue, 18 Jun 2019 09:03:29 GMT): phantom.assasin (Tue, 18 Jun 2019 10:41:13 GMT): phantom.assasin (Tue, 18 Jun 2019 10:49:45 GMT): aatkddny (Tue, 18 Jun 2019 12:55:59 GMT): aatkddny (Tue, 18 Jun 2019 12:55:59 GMT): aatkddny (Tue, 18 Jun 2019 13:44:57 GMT): jyellick (Tue, 18 Jun 2019 13:50:37 GMT): jyellick (Tue, 18 Jun 2019 13:51:23 GMT): joaquimpedrooliveira (Tue, 18 Jun 2019 13:52:15 GMT): joaquimpedrooliveira (Tue, 18 Jun 2019 14:03:50 GMT): joaquimpedrooliveira (Tue, 18 Jun 2019 14:03:50 GMT): joaquimpedrooliveira (Tue, 18 Jun 2019 14:04:29 GMT): jyellick (Tue, 18 Jun 2019 14:07:30 GMT): dsanchezseco (Tue, 18 Jun 2019 14:09:06 GMT): joaquimpedrooliveira (Tue, 18 Jun 2019 14:11:34 GMT): joaquimpedrooliveira (Tue, 18 Jun 2019 14:12:12 GMT): jyellick (Tue, 18 Jun 2019 14:12:34 GMT): joaquimpedrooliveira (Tue, 18 Jun 2019 14:12:36 GMT): jyellick (Tue, 18 Jun 2019 14:13:20 GMT): joaquimpedrooliveira (Tue, 18 Jun 2019 14:13:59 GMT): dsanchezseco (Tue, 18 Jun 2019 14:14:27 GMT): joaquimpedrooliveira (Tue, 18 Jun 2019 14:15:07 GMT): dsanchezseco (Tue, 18 Jun 2019 
14:15:13 GMT): joaquimpedrooliveira (Tue, 18 Jun 2019 14:16:32 GMT): joaquimpedrooliveira (Tue, 18 Jun 2019 14:18:34 GMT): joaquimpedrooliveira (Tue, 18 Jun 2019 14:23:38 GMT): joaquimpedrooliveira (Tue, 18 Jun 2019 14:24:55 GMT): jyellick (Tue, 18 Jun 2019 14:35:33 GMT): joaquimpedrooliveira (Tue, 18 Jun 2019 14:38:38 GMT): jyellick (Tue, 18 Jun 2019 14:41:04 GMT): joaquimpedrooliveira (Tue, 18 Jun 2019 14:42:00 GMT): wangdong (Wed, 19 Jun 2019 02:01:39 GMT): HoneyShah (Wed, 19 Jun 2019 05:20:02 GMT): HoneyShah (Wed, 19 Jun 2019 05:20:02 GMT): phantom.assasin (Wed, 19 Jun 2019 07:13:06 GMT): BrajeshKumar (Wed, 19 Jun 2019 08:53:40 GMT): HoneyShah (Wed, 19 Jun 2019 10:33:20 GMT): guoger (Wed, 19 Jun 2019 14:25:50 GMT): braduf (Wed, 19 Jun 2019 19:55:09 GMT): braduf (Wed, 19 Jun 2019 19:55:09 GMT): yacovm (Wed, 19 Jun 2019 20:38:08 GMT): yacovm (Wed, 19 Jun 2019 20:38:40 GMT): jyellick (Wed, 19 Jun 2019 20:40:16 GMT): yacovm (Wed, 19 Jun 2019 20:42:06 GMT): yacovm (Wed, 19 Jun 2019 20:42:15 GMT): jyellick (Wed, 19 Jun 2019 20:42:59 GMT): jyellick (Wed, 19 Jun 2019 20:43:28 GMT): yacovm (Wed, 19 Jun 2019 20:43:48 GMT): yacovm (Wed, 19 Jun 2019 20:44:11 GMT): jyellick (Wed, 19 Jun 2019 20:44:18 GMT): yacovm (Wed, 19 Jun 2019 20:44:25 GMT): yacovm (Wed, 19 Jun 2019 20:48:40 GMT): yacovm (Wed, 19 Jun 2019 20:49:08 GMT): jyellick (Wed, 19 Jun 2019 20:53:47 GMT): braduf (Wed, 19 Jun 2019 22:49:13 GMT): braduf (Wed, 19 Jun 2019 22:49:13 GMT): guoger (Thu, 20 Jun 2019 02:19:54 GMT): guoger (Thu, 20 Jun 2019 02:20:34 GMT): guoger (Thu, 20 Jun 2019 03:09:18 GMT): guoger (Thu, 20 Jun 2019 03:09:40 GMT): HoneyShah (Thu, 20 Jun 2019 03:42:24 GMT): HoneyShah (Thu, 20 Jun 2019 03:42:24 GMT): guoger (Thu, 20 Jun 2019 03:54:38 GMT): HoneyShah (Thu, 20 Jun 2019 03:57:21 GMT): HoneyShah (Thu, 20 Jun 2019 03:57:21 GMT): guoger (Thu, 20 Jun 2019 03:58:37 GMT): guoger (Thu, 20 Jun 2019 03:59:22 GMT): HoneyShah (Thu, 20 Jun 2019 04:08:22 GMT): HoneyShah (Thu, 20 Jun 2019 04:24:32 GMT): 
HoneyShah (Thu, 20 Jun 2019 04:24:32 GMT): HoneyShah (Thu, 20 Jun 2019 04:24:32 GMT): HoneyShah (Thu, 20 Jun 2019 04:24:32 GMT): HoneyShah (Thu, 20 Jun 2019 08:04:35 GMT): phantom.assasin (Thu, 20 Jun 2019 08:18:02 GMT): phantom.assasin (Thu, 20 Jun 2019 08:18:28 GMT): phantom.assasin (Thu, 20 Jun 2019 08:28:27 GMT): guoger (Thu, 20 Jun 2019 08:28:45 GMT): HoneyShah (Thu, 20 Jun 2019 08:31:46 GMT): guoger (Thu, 20 Jun 2019 08:37:11 GMT): HoneyShah (Thu, 20 Jun 2019 08:39:08 GMT): HoneyShah (Thu, 20 Jun 2019 08:52:36 GMT): guoger (Thu, 20 Jun 2019 08:57:26 GMT): HoneyShah (Thu, 20 Jun 2019 09:01:05 GMT): guoger (Thu, 20 Jun 2019 09:01:48 GMT): HoneyShah (Thu, 20 Jun 2019 09:03:17 GMT): guoger (Thu, 20 Jun 2019 09:03:27 GMT): HoneyShah (Thu, 20 Jun 2019 09:06:44 GMT): HoneyShah (Thu, 20 Jun 2019 09:06:44 GMT): guoger (Thu, 20 Jun 2019 09:22:28 GMT): guoger (Thu, 20 Jun 2019 09:25:39 GMT): guoger (Thu, 20 Jun 2019 09:25:39 GMT): HoneyShah (Thu, 20 Jun 2019 09:29:50 GMT): guoger (Thu, 20 Jun 2019 09:30:47 GMT): HoneyShah (Thu, 20 Jun 2019 09:31:57 GMT): guoger (Thu, 20 Jun 2019 09:35:33 GMT): HoneyShah (Thu, 20 Jun 2019 09:39:41 GMT): guoger (Thu, 20 Jun 2019 09:42:38 GMT): HoneyShah (Thu, 20 Jun 2019 09:43:18 GMT): guoger (Thu, 20 Jun 2019 09:43:55 GMT): guoger (Thu, 20 Jun 2019 09:44:09 GMT): guoger (Thu, 20 Jun 2019 09:45:03 GMT): guoger (Thu, 20 Jun 2019 09:45:36 GMT): HoneyShah (Thu, 20 Jun 2019 09:47:49 GMT): HoneyShah (Thu, 20 Jun 2019 09:48:57 GMT): HoneyShah (Thu, 20 Jun 2019 09:49:53 GMT): guoger (Thu, 20 Jun 2019 11:19:44 GMT): HoneyShah (Thu, 20 Jun 2019 11:47:16 GMT): HoneyShah (Thu, 20 Jun 2019 11:47:16 GMT): HoneyShah (Thu, 20 Jun 2019 11:47:16 GMT): HoneyShah (Thu, 20 Jun 2019 11:47:16 GMT): HoneyShah (Thu, 20 Jun 2019 11:48:38 GMT): HoneyShah (Thu, 20 Jun 2019 11:48:38 GMT): guoger (Thu, 20 Jun 2019 12:32:35 GMT): HoneyShah (Thu, 20 Jun 2019 12:46:42 GMT): HoneyShah (Thu, 20 Jun 2019 12:48:34 GMT): HoneyShah (Thu, 20 Jun 2019 12:49:39 GMT): braduf 
(Thu, 20 Jun 2019 13:53:21 GMT): guoger (Thu, 20 Jun 2019 13:54:18 GMT): guoger (Thu, 20 Jun 2019 13:55:14 GMT): guoger (Thu, 20 Jun 2019 13:55:36 GMT): braduf (Thu, 20 Jun 2019 13:56:19 GMT): guoger (Thu, 20 Jun 2019 13:56:31 GMT): guoger (Thu, 20 Jun 2019 14:31:23 GMT): Taffies (Mon, 24 Jun 2019 03:26:39 GMT): jyellick (Mon, 24 Jun 2019 03:47:28 GMT): Taffies (Mon, 24 Jun 2019 07:10:15 GMT): Taffies (Mon, 24 Jun 2019 07:10:15 GMT): yacovm (Mon, 24 Jun 2019 07:44:21 GMT): yacovm (Mon, 24 Jun 2019 07:44:23 GMT): Coada (Mon, 24 Jun 2019 08:16:45 GMT): Coada (Mon, 24 Jun 2019 08:16:46 GMT): jyellick (Mon, 24 Jun 2019 14:05:32 GMT): ET-TAOUSYZouhair (Tue, 25 Jun 2019 15:02:59 GMT): aatkddny (Wed, 26 Jun 2019 12:31:34 GMT): aatkddny (Wed, 26 Jun 2019 12:31:34 GMT): aatkddny (Wed, 26 Jun 2019 12:31:34 GMT): guoger (Wed, 26 Jun 2019 12:36:23 GMT): guoger (Wed, 26 Jun 2019 12:37:14 GMT): aatkddny (Wed, 26 Jun 2019 12:54:03 GMT): aatkddny (Thu, 27 Jun 2019 14:31:04 GMT): aatkddny (Thu, 27 Jun 2019 14:31:04 GMT): aatkddny (Thu, 27 Jun 2019 14:31:04 GMT): aatkddny (Thu, 27 Jun 2019 14:31:04 GMT): aatkddny (Fri, 28 Jun 2019 15:26:34 GMT): aatkddny (Fri, 28 Jun 2019 15:26:34 GMT): yacovm (Fri, 28 Jun 2019 16:53:36 GMT): yacovm (Fri, 28 Jun 2019 16:54:08 GMT): aatkddny (Fri, 28 Jun 2019 17:02:31 GMT): aatkddny (Fri, 28 Jun 2019 17:02:31 GMT): aatkddny (Fri, 28 Jun 2019 17:02:31 GMT): yacovm (Fri, 28 Jun 2019 17:15:02 GMT): yacovm (Fri, 28 Jun 2019 17:15:05 GMT): aatkddny (Fri, 28 Jun 2019 17:19:34 GMT): aatkddny (Fri, 28 Jun 2019 17:19:34 GMT): aatkddny (Fri, 28 Jun 2019 17:19:34 GMT): aatkddny (Fri, 28 Jun 2019 17:19:34 GMT): yacovm (Fri, 28 Jun 2019 17:24:27 GMT): yacovm (Fri, 28 Jun 2019 17:24:32 GMT): yacovm (Fri, 28 Jun 2019 17:24:43 GMT): yacovm (Fri, 28 Jun 2019 17:24:48 GMT): yacovm (Fri, 28 Jun 2019 17:25:09 GMT): aatkddny (Fri, 28 Jun 2019 22:50:09 GMT): mastersingh24 (Sat, 29 Jun 2019 13:40:19 GMT): mastersingh24 (Sat, 29 Jun 2019 13:46:25 GMT): mastersingh24 (Sat, 
29 Jun 2019 13:47:08 GMT): mastersingh24 (Sat, 29 Jun 2019 13:48:17 GMT): yacovm (Sat, 29 Jun 2019 20:12:43 GMT): yacovm (Sat, 29 Jun 2019 20:13:00 GMT): yacovm (Sat, 29 Jun 2019 20:13:58 GMT): yacovm (Sat, 29 Jun 2019 20:14:03 GMT): aatkddny (Sat, 29 Jun 2019 20:19:00 GMT): aatkddny (Sat, 29 Jun 2019 20:19:00 GMT): aatkddny (Sat, 29 Jun 2019 20:22:46 GMT): yacovm (Sat, 29 Jun 2019 21:36:43 GMT): mastersingh24 (Mon, 01 Jul 2019 08:49:45 GMT): mastersingh24 (Mon, 01 Jul 2019 08:49:45 GMT): aatkddny (Mon, 01 Jul 2019 12:04:33 GMT): aatkddny (Mon, 01 Jul 2019 12:18:19 GMT): aatkddny (Mon, 01 Jul 2019 12:18:19 GMT): aatkddny (Mon, 01 Jul 2019 12:18:19 GMT): aatkddny (Mon, 01 Jul 2019 12:18:19 GMT): aatkddny (Mon, 01 Jul 2019 12:18:19 GMT): aatkddny (Mon, 01 Jul 2019 12:18:19 GMT): aatkddny (Mon, 01 Jul 2019 12:18:19 GMT): javrevasandeep (Tue, 02 Jul 2019 11:34:08 GMT): aatkddny (Tue, 02 Jul 2019 12:54:38 GMT): aatkddny (Tue, 02 Jul 2019 12:54:38 GMT): guoger (Tue, 02 Jul 2019 13:04:35 GMT): javrevasandeep (Tue, 02 Jul 2019 13:47:43 GMT): javrevasandeep (Tue, 02 Jul 2019 15:21:18 GMT): guoger (Tue, 02 Jul 2019 15:52:52 GMT): javrevasandeep (Tue, 02 Jul 2019 16:03:02 GMT): guoger (Tue, 02 Jul 2019 16:03:41 GMT): javrevasandeep (Tue, 02 Jul 2019 16:04:27 GMT): javrevasandeep (Tue, 02 Jul 2019 16:05:26 GMT): javrevasandeep (Tue, 02 Jul 2019 16:29:23 GMT): SanketPanchamia (Wed, 03 Jul 2019 12:08:21 GMT): joeljhanster (Thu, 04 Jul 2019 05:25:13 GMT): joeljhanster (Thu, 04 Jul 2019 05:25:20 GMT): Mozer18 (Thu, 04 Jul 2019 13:49:40 GMT): Mozer18 (Thu, 04 Jul 2019 13:50:43 GMT): Mozer18 (Thu, 04 Jul 2019 13:51:29 GMT): delao (Thu, 04 Jul 2019 16:58:46 GMT): FernandaSartori (Thu, 04 Jul 2019 16:58:50 GMT): guoger (Fri, 05 Jul 2019 01:51:54 GMT): mattiabolzonella1 (Mon, 08 Jul 2019 13:58:52 GMT): mattiabolzonella1 (Mon, 08 Jul 2019 14:01:42 GMT): mattiabolzonella1 (Mon, 08 Jul 2019 14:01:42 GMT): mattiabolzonella1 (Mon, 08 Jul 2019 14:01:42 GMT): yacovm (Mon, 08 Jul 2019 18:20:36 
[Log export artifact: this span of the channel history (Tue, 09 Jul 2019 07:57:39 GMT – Mon, 23 Sep 2019 15:30:13 GMT) contains only sender/timestamp headers; all message bodies are missing from the export. Active participants in this period included guoger, jyellick, yacovm, rahulhegde, pankajcheema, kelvinzhong, soumyanayak, Bentipe, PulkitSarraf, ItaloCarrasco, tommyjay, mastersingh24, sandman, delao, indirajith, Adam_Hardie, and others. No message content is recoverable.]
15:30:26 GMT): guoger (Mon, 23 Sep 2019 16:11:38 GMT): guoger (Mon, 23 Sep 2019 16:11:52 GMT): guoger (Mon, 23 Sep 2019 16:11:58 GMT): mbanerjee (Mon, 23 Sep 2019 21:17:56 GMT): indirajith (Mon, 23 Sep 2019 21:49:45 GMT): indirajith (Mon, 23 Sep 2019 21:53:14 GMT): indirajith (Mon, 23 Sep 2019 21:54:35 GMT): soumyanayak (Tue, 24 Sep 2019 02:18:37 GMT): soumyanayak (Tue, 24 Sep 2019 02:56:17 GMT): indirajith (Tue, 24 Sep 2019 08:28:23 GMT): rahulhegde (Tue, 24 Sep 2019 15:52:33 GMT): rahulhegde (Tue, 24 Sep 2019 15:52:33 GMT): soumyanayak (Tue, 24 Sep 2019 16:17:49 GMT): jyellick (Tue, 24 Sep 2019 19:43:08 GMT): jyellick (Tue, 24 Sep 2019 19:43:08 GMT): jyellick (Tue, 24 Sep 2019 19:48:31 GMT): rahulhegde (Tue, 24 Sep 2019 20:08:31 GMT): rahulhegde (Tue, 24 Sep 2019 20:08:31 GMT): rahulhegde (Tue, 24 Sep 2019 20:11:54 GMT): rahulhegde (Tue, 24 Sep 2019 20:36:28 GMT): jyellick (Tue, 24 Sep 2019 20:47:36 GMT): rahulhegde (Tue, 24 Sep 2019 20:52:26 GMT): rahulhegde (Tue, 24 Sep 2019 20:59:30 GMT): rahulhegde (Tue, 24 Sep 2019 20:59:30 GMT): yacovm (Tue, 24 Sep 2019 21:04:37 GMT): rahulhegde (Wed, 25 Sep 2019 10:52:27 GMT): rahulhegde (Wed, 25 Sep 2019 10:52:27 GMT): jona-sc (Thu, 26 Sep 2019 10:28:36 GMT): rahulhegde (Thu, 26 Sep 2019 12:57:14 GMT): yacovm (Thu, 26 Sep 2019 13:01:39 GMT): yacovm (Thu, 26 Sep 2019 13:01:45 GMT): jyellick (Thu, 26 Sep 2019 14:04:33 GMT): jyellick (Thu, 26 Sep 2019 14:05:30 GMT): rahulhegde (Thu, 26 Sep 2019 18:49:25 GMT): rahulhegde (Thu, 26 Sep 2019 18:49:25 GMT): jyellick (Thu, 26 Sep 2019 18:53:56 GMT): yacovm (Thu, 26 Sep 2019 18:57:18 GMT): yacovm (Thu, 26 Sep 2019 19:00:04 GMT): rahulhegde (Thu, 26 Sep 2019 19:07:01 GMT): yacovm (Thu, 26 Sep 2019 19:16:25 GMT): yacovm (Thu, 26 Sep 2019 19:16:33 GMT): yacovm (Thu, 26 Sep 2019 19:16:33 GMT): rahulhegde (Thu, 26 Sep 2019 19:22:05 GMT): yacovm (Thu, 26 Sep 2019 19:22:48 GMT): yacovm (Thu, 26 Sep 2019 19:23:07 GMT): yacovm (Thu, 26 Sep 2019 19:23:24 GMT): rahulhegde (Thu, 26 Sep 2019 
19:24:48 GMT): yacovm (Thu, 26 Sep 2019 19:25:15 GMT): yacovm (Thu, 26 Sep 2019 19:25:53 GMT): yacovm (Thu, 26 Sep 2019 19:26:01 GMT): rahulhegde (Thu, 26 Sep 2019 19:27:27 GMT): rahulhegde (Thu, 26 Sep 2019 19:27:27 GMT): yacovm (Thu, 26 Sep 2019 19:28:02 GMT): yacovm (Thu, 26 Sep 2019 19:28:22 GMT): yacovm (Thu, 26 Sep 2019 19:28:36 GMT): rahulhegde (Thu, 26 Sep 2019 19:29:37 GMT): yacovm (Thu, 26 Sep 2019 19:32:16 GMT): rahulhegde (Thu, 26 Sep 2019 19:32:33 GMT): yacovm (Thu, 26 Sep 2019 21:09:33 GMT): rahulhegde (Fri, 27 Sep 2019 00:01:54 GMT): rahulhegde (Fri, 27 Sep 2019 00:03:01 GMT): indirajith (Fri, 27 Sep 2019 11:06:36 GMT): indirajith (Fri, 27 Sep 2019 11:07:01 GMT): indirajith (Fri, 27 Sep 2019 11:32:17 GMT): indirajith (Fri, 27 Sep 2019 11:32:17 GMT): indirajith (Fri, 27 Sep 2019 11:32:17 GMT): soumyanayak (Fri, 27 Sep 2019 11:39:30 GMT): indirajith (Fri, 27 Sep 2019 12:20:47 GMT): soumyanayak (Fri, 27 Sep 2019 12:33:27 GMT): soumyanayak (Fri, 27 Sep 2019 12:34:31 GMT): indirajith (Fri, 27 Sep 2019 12:35:04 GMT): indirajith (Sun, 29 Sep 2019 19:11:28 GMT): indirajith (Sun, 29 Sep 2019 19:12:15 GMT): soumyanayak (Mon, 30 Sep 2019 06:51:04 GMT): indirajith (Mon, 30 Sep 2019 07:51:58 GMT): indirajith (Mon, 30 Sep 2019 07:58:13 GMT): soumyanayak (Mon, 30 Sep 2019 09:15:25 GMT): adityanalge (Fri, 04 Oct 2019 18:48:27 GMT): adityanalge (Fri, 04 Oct 2019 18:48:34 GMT): jyellick (Fri, 04 Oct 2019 19:11:19 GMT): jyellick (Fri, 04 Oct 2019 19:11:19 GMT): jyellick (Fri, 04 Oct 2019 19:11:19 GMT): adityanalge (Fri, 11 Oct 2019 02:23:58 GMT): Utsav_Solanki (Sat, 12 Oct 2019 05:31:31 GMT): Utsav_Solanki (Sat, 12 Oct 2019 05:31:31 GMT): Utsav_Solanki (Sat, 12 Oct 2019 06:16:00 GMT): HLFPOC (Mon, 14 Oct 2019 05:07:01 GMT): Utsav_Solanki (Mon, 14 Oct 2019 07:19:33 GMT): Utsav_Solanki (Mon, 14 Oct 2019 07:19:33 GMT): soumyanayak (Mon, 14 Oct 2019 13:30:06 GMT): soumyanayak (Mon, 14 Oct 2019 13:31:16 GMT): Utsav_Solanki (Tue, 15 Oct 2019 04:33:33 GMT): Utsav_Solanki 
(Tue, 15 Oct 2019 04:33:53 GMT): Utsav_Solanki (Tue, 15 Oct 2019 04:39:48 GMT): guoger (Tue, 15 Oct 2019 04:57:26 GMT): Utsav_Solanki (Tue, 15 Oct 2019 06:52:40 GMT): Utsav_Solanki (Tue, 15 Oct 2019 06:54:15 GMT): Utsav_Solanki (Tue, 15 Oct 2019 06:54:15 GMT): soumyanayak (Tue, 15 Oct 2019 07:00:31 GMT): Utsav_Solanki (Tue, 15 Oct 2019 07:01:25 GMT): soumyanayak (Tue, 15 Oct 2019 07:02:45 GMT): barney2k7 (Tue, 15 Oct 2019 07:51:45 GMT): barney2k7 (Tue, 15 Oct 2019 07:51:46 GMT): Utsav_Solanki (Wed, 16 Oct 2019 09:51:19 GMT): Utsav_Solanki (Wed, 16 Oct 2019 09:51:19 GMT): Utsav_Solanki (Wed, 16 Oct 2019 09:51:27 GMT): Utsav_Solanki (Wed, 16 Oct 2019 09:51:27 GMT): soumyanayak (Wed, 16 Oct 2019 13:10:29 GMT): Utsav_Solanki (Wed, 16 Oct 2019 13:20:58 GMT): Utsav_Solanki (Wed, 16 Oct 2019 13:20:58 GMT): Utsav_Solanki (Wed, 16 Oct 2019 13:20:58 GMT): soumyanayak (Wed, 16 Oct 2019 13:23:28 GMT): Utsav_Solanki (Wed, 16 Oct 2019 13:24:42 GMT): icordoba (Wed, 16 Oct 2019 15:23:00 GMT): icordoba (Wed, 16 Oct 2019 15:23:09 GMT): icordoba (Wed, 16 Oct 2019 15:24:11 GMT): yacovm (Wed, 16 Oct 2019 15:27:06 GMT): knagware9 (Thu, 17 Oct 2019 07:44:06 GMT): Utsav_Solanki (Thu, 17 Oct 2019 08:40:44 GMT): knagware9 (Thu, 17 Oct 2019 09:01:35 GMT): knagware9 (Thu, 17 Oct 2019 09:01:36 GMT): pankajcheema (Thu, 17 Oct 2019 11:52:42 GMT): icordoba (Thu, 17 Oct 2019 16:53:27 GMT): icordoba (Thu, 17 Oct 2019 16:53:27 GMT): icordoba (Thu, 17 Oct 2019 16:53:34 GMT): icordoba (Thu, 17 Oct 2019 16:53:45 GMT): icordoba (Thu, 17 Oct 2019 16:54:34 GMT): icordoba (Thu, 17 Oct 2019 16:55:06 GMT): icordoba (Thu, 17 Oct 2019 17:03:09 GMT): icordoba (Thu, 17 Oct 2019 17:03:21 GMT): icordoba (Thu, 17 Oct 2019 17:03:21 GMT): jyellick (Thu, 17 Oct 2019 17:37:59 GMT): pankajcheema (Thu, 17 Oct 2019 17:39:05 GMT): pankajcheema (Thu, 17 Oct 2019 17:39:12 GMT): jyellick (Thu, 17 Oct 2019 17:40:12 GMT): jyellick (Thu, 17 Oct 2019 17:40:43 GMT): pankajcheema (Thu, 17 Oct 2019 17:41:14 GMT): icordoba (Thu, 17 
Oct 2019 22:06:31 GMT): icordoba (Thu, 17 Oct 2019 22:06:31 GMT): karthikcyadav (Fri, 18 Oct 2019 12:50:20 GMT): sureshtedla (Fri, 18 Oct 2019 15:13:50 GMT): sureshtedla (Fri, 18 Oct 2019 15:13:56 GMT): sureshtedla (Fri, 18 Oct 2019 15:14:48 GMT): sureshtedla (Fri, 18 Oct 2019 15:14:53 GMT): sureshtedla (Fri, 18 Oct 2019 15:16:01 GMT): sureshtedla (Fri, 18 Oct 2019 15:16:06 GMT): sureshtedla (Fri, 18 Oct 2019 15:16:41 GMT): jyellick (Fri, 18 Oct 2019 17:41:14 GMT): jyellick (Fri, 18 Oct 2019 17:41:32 GMT): davidkhala (Sun, 20 Oct 2019 07:14:39 GMT): yacovm (Sun, 20 Oct 2019 07:22:09 GMT): minollo (Mon, 21 Oct 2019 17:51:27 GMT): yacovm (Mon, 21 Oct 2019 18:00:59 GMT): yacovm (Mon, 21 Oct 2019 18:01:17 GMT): yacovm (Mon, 21 Oct 2019 18:02:17 GMT): yacovm (Mon, 21 Oct 2019 18:02:51 GMT): minollo (Mon, 21 Oct 2019 18:04:27 GMT): ItaloCarrasco (Tue, 22 Oct 2019 16:11:28 GMT): joseph-d-p (Thu, 24 Oct 2019 02:35:31 GMT): daijianw (Thu, 24 Oct 2019 14:01:12 GMT): yacovm (Thu, 24 Oct 2019 14:02:13 GMT): yacovm (Thu, 24 Oct 2019 14:02:26 GMT): yacovm (Thu, 24 Oct 2019 14:02:37 GMT): yacovm (Thu, 24 Oct 2019 14:03:01 GMT): yacovm (Thu, 24 Oct 2019 14:03:25 GMT): yacovm (Thu, 24 Oct 2019 14:03:34 GMT): yacovm (Thu, 24 Oct 2019 14:04:20 GMT): yacovm (Thu, 24 Oct 2019 14:04:23 GMT): daijianw (Thu, 24 Oct 2019 14:10:28 GMT): daijianw (Thu, 24 Oct 2019 14:17:56 GMT): saanvijay (Thu, 24 Oct 2019 14:37:05 GMT): yacovm (Thu, 24 Oct 2019 15:06:47 GMT): karthikcyadav (Fri, 25 Oct 2019 10:26:37 GMT): mastersingh24 (Mon, 28 Oct 2019 15:47:13 GMT): knagware9 (Tue, 29 Oct 2019 10:57:50 GMT): davidkhala (Wed, 30 Oct 2019 13:50:06 GMT): aatkddny (Wed, 30 Oct 2019 15:04:07 GMT): aatkddny (Wed, 30 Oct 2019 15:04:07 GMT): jyellick (Wed, 30 Oct 2019 15:05:53 GMT): jyellick (Wed, 30 Oct 2019 15:07:34 GMT): aatkddny (Wed, 30 Oct 2019 15:08:16 GMT): aatkddny (Wed, 30 Oct 2019 15:08:16 GMT): jyellick (Wed, 30 Oct 2019 15:08:44 GMT): jyellick (Wed, 30 Oct 2019 15:08:51 GMT): jyellick (Wed, 30 Oct 
2019 15:09:15 GMT): jyellick (Wed, 30 Oct 2019 15:09:28 GMT): jyellick (Wed, 30 Oct 2019 15:09:28 GMT): jyellick (Wed, 30 Oct 2019 15:09:53 GMT): aatkddny (Wed, 30 Oct 2019 15:10:41 GMT): jyellick (Wed, 30 Oct 2019 17:02:33 GMT): aatkddny (Wed, 30 Oct 2019 20:05:10 GMT): aatkddny (Wed, 30 Oct 2019 20:05:10 GMT): yacovm (Wed, 30 Oct 2019 20:08:12 GMT): yacovm (Wed, 30 Oct 2019 20:08:20 GMT): yacovm (Wed, 30 Oct 2019 20:09:36 GMT): aatkddny (Wed, 30 Oct 2019 22:42:54 GMT): aatkddny (Wed, 30 Oct 2019 22:53:41 GMT): aatkddny (Wed, 30 Oct 2019 22:53:41 GMT): yacovm (Wed, 30 Oct 2019 23:11:29 GMT): yacovm (Wed, 30 Oct 2019 23:11:41 GMT): yacovm (Wed, 30 Oct 2019 23:11:54 GMT): yacovm (Wed, 30 Oct 2019 23:12:07 GMT): yacovm (Wed, 30 Oct 2019 23:12:23 GMT): yacovm (Wed, 30 Oct 2019 23:12:51 GMT): yacovm (Wed, 30 Oct 2019 23:13:12 GMT): yacovm (Wed, 30 Oct 2019 23:13:24 GMT): yacovm (Wed, 30 Oct 2019 23:13:31 GMT): aatkddny (Wed, 30 Oct 2019 23:16:55 GMT): aatkddny (Wed, 30 Oct 2019 23:16:55 GMT): aatkddny (Wed, 30 Oct 2019 23:16:55 GMT): knagware9 (Thu, 31 Oct 2019 06:04:27 GMT): barney2k7 (Thu, 31 Oct 2019 07:16:15 GMT): yacovm (Thu, 31 Oct 2019 07:24:50 GMT): yacovm (Thu, 31 Oct 2019 07:24:58 GMT): yacovm (Thu, 31 Oct 2019 07:24:58 GMT): barney2k7 (Thu, 31 Oct 2019 07:47:20 GMT): AshishMishra 1 (Thu, 31 Oct 2019 09:16:40 GMT): AshishMishra 1 (Thu, 31 Oct 2019 09:17:53 GMT): yacovm (Thu, 31 Oct 2019 09:36:35 GMT): aatkddny (Thu, 31 Oct 2019 15:19:52 GMT): aatkddny (Thu, 31 Oct 2019 15:19:52 GMT): jyellick (Thu, 31 Oct 2019 17:03:45 GMT): jyellick (Thu, 31 Oct 2019 17:09:24 GMT): yacovm (Thu, 31 Oct 2019 17:30:12 GMT): yacovm (Thu, 31 Oct 2019 17:30:25 GMT): yacovm (Thu, 31 Oct 2019 17:30:36 GMT): yacovm (Thu, 31 Oct 2019 17:33:55 GMT): yacovm (Thu, 31 Oct 2019 17:34:03 GMT): jyellick (Thu, 31 Oct 2019 17:34:42 GMT): jyellick (Thu, 31 Oct 2019 17:34:42 GMT): jyellick (Thu, 31 Oct 2019 17:45:15 GMT): aatkddny (Thu, 31 Oct 2019 17:45:15 GMT): aatkddny (Thu, 31 Oct 2019 
17:45:15 GMT): aatkddny (Thu, 31 Oct 2019 17:45:49 GMT): aatkddny (Thu, 31 Oct 2019 17:45:49 GMT): aatkddny (Thu, 31 Oct 2019 17:45:49 GMT): AllanHansen (Fri, 01 Nov 2019 00:36:10 GMT): AshishMishra 1 (Fri, 01 Nov 2019 10:52:26 GMT): aatkddny (Fri, 01 Nov 2019 14:17:41 GMT): yacovm (Fri, 01 Nov 2019 17:45:09 GMT): yacovm (Fri, 01 Nov 2019 17:45:20 GMT): yacovm (Fri, 01 Nov 2019 17:45:29 GMT): yacovm (Fri, 01 Nov 2019 17:45:45 GMT): aatkddny (Fri, 01 Nov 2019 18:02:58 GMT): aatkddny (Fri, 01 Nov 2019 18:02:58 GMT): hawkinggg (Mon, 04 Nov 2019 07:15:16 GMT): aatkddny (Wed, 06 Nov 2019 15:57:57 GMT): aatkddny (Wed, 06 Nov 2019 15:57:57 GMT): aatkddny (Wed, 06 Nov 2019 15:57:57 GMT): aatkddny (Wed, 06 Nov 2019 15:57:57 GMT): jyellick (Wed, 06 Nov 2019 18:08:57 GMT): jyellick (Wed, 06 Nov 2019 18:08:57 GMT): jyellick (Wed, 06 Nov 2019 18:09:51 GMT): jyellick (Wed, 06 Nov 2019 18:11:06 GMT): yingmsky (Sat, 09 Nov 2019 12:28:36 GMT): aatkddny (Sun, 10 Nov 2019 18:57:25 GMT): tommyjay (Mon, 11 Nov 2019 16:45:15 GMT): delao (Mon, 11 Nov 2019 17:23:46 GMT): tommyjay (Mon, 11 Nov 2019 17:32:48 GMT): delao (Mon, 11 Nov 2019 17:33:37 GMT): tommyjay (Mon, 11 Nov 2019 18:15:47 GMT): tommyjay (Mon, 11 Nov 2019 18:20:10 GMT): delao (Mon, 11 Nov 2019 18:23:35 GMT): tommyjay (Mon, 11 Nov 2019 20:59:48 GMT): aatkddny (Tue, 12 Nov 2019 14:07:05 GMT): guoger (Tue, 12 Nov 2019 14:55:47 GMT): aatkddny (Tue, 12 Nov 2019 14:57:15 GMT): aatkddny (Tue, 12 Nov 2019 14:57:26 GMT): jyellick (Tue, 12 Nov 2019 15:40:29 GMT): jyellick (Tue, 12 Nov 2019 15:41:25 GMT): yacovm (Tue, 12 Nov 2019 15:43:55 GMT): yacovm (Tue, 12 Nov 2019 15:44:04 GMT): yacovm (Tue, 12 Nov 2019 15:44:38 GMT): yacovm (Tue, 12 Nov 2019 15:44:47 GMT): yacovm (Tue, 12 Nov 2019 15:44:50 GMT): aatkddny (Tue, 12 Nov 2019 15:45:27 GMT): aatkddny (Tue, 12 Nov 2019 15:45:27 GMT): yacovm (Tue, 12 Nov 2019 15:45:33 GMT): DilipManjunatha (Wed, 13 Nov 2019 13:07:23 GMT): mbanerjee (Fri, 15 Nov 2019 03:57:43 GMT): jyellick (Fri, 15 Nov 
2019 03:59:29 GMT): tommyjay (Fri, 15 Nov 2019 20:40:10 GMT): jyxie2007 (Mon, 18 Nov 2019 04:00:15 GMT): jyellick (Mon, 18 Nov 2019 17:32:49 GMT): AllanHansen (Tue, 19 Nov 2019 09:57:18 GMT): AllanHansen (Tue, 19 Nov 2019 09:57:18 GMT): guoger (Tue, 19 Nov 2019 18:59:27 GMT): AllanHansen (Tue, 19 Nov 2019 19:05:20 GMT): biksen (Thu, 21 Nov 2019 14:17:22 GMT): biksen (Thu, 21 Nov 2019 14:17:22 GMT): bestbeforetoday (Thu, 21 Nov 2019 16:20:42 GMT): bestbeforetoday (Thu, 21 Nov 2019 16:27:34 GMT): BrettLogan (Thu, 21 Nov 2019 17:20:19 GMT): BrettLogan (Thu, 21 Nov 2019 17:20:20 GMT): scottz (Thu, 21 Nov 2019 17:22:06 GMT): BrettLogan (Thu, 21 Nov 2019 17:22:39 GMT): BrettLogan (Thu, 21 Nov 2019 17:23:17 GMT): scottz (Thu, 21 Nov 2019 17:23:44 GMT): scottz (Thu, 21 Nov 2019 17:23:59 GMT): jyellick (Thu, 21 Nov 2019 17:29:11 GMT): jyellick (Thu, 21 Nov 2019 17:29:16 GMT): jyellick (Thu, 21 Nov 2019 17:29:53 GMT): jyellick (Thu, 21 Nov 2019 17:29:53 GMT): jyellick (Thu, 21 Nov 2019 17:31:44 GMT): jyellick (Thu, 21 Nov 2019 17:33:55 GMT): scottz (Thu, 21 Nov 2019 18:09:39 GMT): guoger (Thu, 21 Nov 2019 18:29:10 GMT): jyellick (Thu, 21 Nov 2019 18:30:06 GMT): jyellick (Thu, 21 Nov 2019 18:32:33 GMT): jyellick (Thu, 21 Nov 2019 18:32:47 GMT): guoger (Thu, 21 Nov 2019 18:34:03 GMT): jyellick (Thu, 21 Nov 2019 18:36:44 GMT): jyellick (Thu, 21 Nov 2019 18:36:52 GMT): jyellick (Thu, 21 Nov 2019 18:36:59 GMT): guoger (Thu, 21 Nov 2019 18:39:31 GMT): guoger (Thu, 21 Nov 2019 18:40:02 GMT): biksen (Thu, 21 Nov 2019 18:54:28 GMT): adityanalge (Fri, 22 Nov 2019 18:34:54 GMT): adityanalge (Fri, 22 Nov 2019 18:36:00 GMT): biksen (Sun, 24 Nov 2019 06:00:43 GMT): biksen (Sun, 24 Nov 2019 06:31:30 GMT): biksen (Sun, 24 Nov 2019 06:31:49 GMT): mbanerjee (Mon, 25 Nov 2019 23:12:33 GMT): mbanerjee (Mon, 25 Nov 2019 23:15:19 GMT): guoger (Tue, 26 Nov 2019 02:05:34 GMT): guptasndp10 (Tue, 26 Nov 2019 09:39:32 GMT): aatkddny (Tue, 26 Nov 2019 14:36:55 GMT): aatkddny (Tue, 26 Nov 2019 14:36:55 
GMT): adityanalge (Wed, 27 Nov 2019 00:28:21 GMT): rahulhegde (Wed, 27 Nov 2019 16:10:50 GMT): rahulhegde (Wed, 27 Nov 2019 16:10:50 GMT): mbanerjee (Wed, 27 Nov 2019 18:38:28 GMT): aatkddny (Wed, 27 Nov 2019 19:23:17 GMT): delao (Wed, 27 Nov 2019 20:31:30 GMT): guoger (Thu, 28 Nov 2019 02:49:13 GMT): guoger (Thu, 28 Nov 2019 02:50:07 GMT): drjkr4844 (Thu, 28 Nov 2019 07:53:26 GMT): drjkr4844 (Thu, 28 Nov 2019 07:53:56 GMT): drjkr4844 (Thu, 28 Nov 2019 07:53:56 GMT): drjkr4844 (Thu, 28 Nov 2019 07:53:56 GMT): drjkr4844 (Thu, 28 Nov 2019 07:53:56 GMT): drjkr4844 (Thu, 28 Nov 2019 07:53:56 GMT): drjkr4844 (Thu, 28 Nov 2019 07:53:56 GMT): guoger (Thu, 28 Nov 2019 08:22:59 GMT): drjkr4844 (Thu, 28 Nov 2019 08:39:29 GMT): drjkr4844 (Thu, 28 Nov 2019 09:02:41 GMT): BranimirMalesevic (Thu, 28 Nov 2019 15:54:21 GMT): BranimirMalesevic (Thu, 28 Nov 2019 15:55:50 GMT): BranimirMalesevic (Thu, 28 Nov 2019 15:55:50 GMT): BranimirMalesevic (Thu, 28 Nov 2019 15:56:11 GMT): BranimirMalesevic (Thu, 28 Nov 2019 16:43:24 GMT): guoger (Fri, 29 Nov 2019 01:25:09 GMT): karthikeyanb (Fri, 29 Nov 2019 06:11:03 GMT): karthikeyanb (Fri, 29 Nov 2019 06:11:03 GMT): BranimirMalesevic (Fri, 29 Nov 2019 08:02:04 GMT): BranimirMalesevic (Fri, 29 Nov 2019 08:02:05 GMT): BranimirMalesevic (Fri, 29 Nov 2019 08:02:56 GMT): guoger (Fri, 29 Nov 2019 08:25:58 GMT): BranimirMalesevic (Fri, 29 Nov 2019 08:28:50 GMT): guoger (Fri, 29 Nov 2019 08:42:54 GMT): guoger (Fri, 29 Nov 2019 08:42:58 GMT): guoger (Fri, 29 Nov 2019 08:43:07 GMT): guoger (Fri, 29 Nov 2019 08:43:57 GMT): BranimirMalesevic (Fri, 29 Nov 2019 08:44:19 GMT): guoger (Fri, 29 Nov 2019 08:44:53 GMT): BranimirMalesevic (Fri, 29 Nov 2019 08:45:05 GMT): marinkovicvlado (Fri, 29 Nov 2019 08:46:25 GMT): guoger (Fri, 29 Nov 2019 08:47:55 GMT): BranimirMalesevic (Fri, 29 Nov 2019 08:48:20 GMT): BranimirMalesevic (Fri, 29 Nov 2019 08:48:21 GMT): guoger (Fri, 29 Nov 2019 09:07:01 GMT): guoger (Fri, 29 Nov 2019 09:07:10 GMT): guoger (Fri, 29 Nov 2019 
09:07:17 GMT): BranimirMalesevic (Fri, 29 Nov 2019 09:19:45 GMT): BranimirMalesevic (Fri, 29 Nov 2019 09:20:53 GMT): BranimirMalesevic (Fri, 29 Nov 2019 10:17:19 GMT): BranimirMalesevic (Fri, 29 Nov 2019 10:17:19 GMT): guoger (Fri, 29 Nov 2019 10:35:13 GMT): guoger (Fri, 29 Nov 2019 10:37:01 GMT): guoger (Fri, 29 Nov 2019 10:37:12 GMT): BranimirMalesevic (Fri, 29 Nov 2019 12:11:49 GMT): BranimirMalesevic (Fri, 29 Nov 2019 12:12:51 GMT): RahulHundet (Mon, 02 Dec 2019 09:05:08 GMT): RahulHundet (Mon, 02 Dec 2019 09:05:24 GMT): RahulHundet (Mon, 02 Dec 2019 09:07:03 GMT): RahulHundet (Mon, 02 Dec 2019 09:07:56 GMT): RahulHundet (Mon, 02 Dec 2019 09:08:20 GMT): guoger (Mon, 02 Dec 2019 09:49:05 GMT): RahulHundet (Mon, 02 Dec 2019 10:11:31 GMT): RahulHundet (Mon, 02 Dec 2019 10:11:50 GMT): RahulHundet (Mon, 02 Dec 2019 10:12:13 GMT): RahulHundet (Mon, 02 Dec 2019 10:12:26 GMT): RahulHundet (Mon, 02 Dec 2019 10:12:35 GMT): RahulHundet (Mon, 02 Dec 2019 10:12:52 GMT): adityanalge (Mon, 02 Dec 2019 19:30:39 GMT): adityanalge (Mon, 02 Dec 2019 19:30:57 GMT): indirajith (Mon, 02 Dec 2019 22:59:23 GMT): indirajith (Mon, 02 Dec 2019 23:02:46 GMT): jyellick (Tue, 03 Dec 2019 02:51:03 GMT): jyellick (Tue, 03 Dec 2019 02:52:07 GMT): RahulHundet (Tue, 03 Dec 2019 08:47:40 GMT): guoger (Tue, 03 Dec 2019 09:39:39 GMT): RahulHundet (Tue, 03 Dec 2019 09:53:02 GMT): RahulHundet (Tue, 03 Dec 2019 09:57:20 GMT): RahulHundet (Tue, 03 Dec 2019 10:15:51 GMT): RahulHundet (Tue, 03 Dec 2019 10:16:02 GMT): RahulHundet (Tue, 03 Dec 2019 10:19:41 GMT): RahulHundet (Tue, 03 Dec 2019 10:19:58 GMT): RahulHundet (Tue, 03 Dec 2019 10:19:59 GMT): RahulHundet (Tue, 03 Dec 2019 11:09:24 GMT): RahulHundet (Tue, 03 Dec 2019 11:10:14 GMT): RahulHundet (Tue, 03 Dec 2019 11:11:24 GMT): indirajith (Tue, 03 Dec 2019 11:31:20 GMT): indirajith (Tue, 03 Dec 2019 12:04:47 GMT): indirajith (Tue, 03 Dec 2019 12:05:19 GMT): indirajith (Tue, 03 Dec 2019 12:05:19 GMT): jyellick (Tue, 03 Dec 2019 15:46:18 GMT): jyellick 
(Tue, 03 Dec 2019 15:46:36 GMT): jyellick (Tue, 03 Dec 2019 15:46:36 GMT): indirajith (Tue, 03 Dec 2019 16:11:12 GMT): jyellick (Tue, 03 Dec 2019 16:12:55 GMT): indirajith (Tue, 03 Dec 2019 16:15:43 GMT): indirajith (Wed, 04 Dec 2019 13:23:11 GMT): indirajith (Wed, 04 Dec 2019 13:23:21 GMT): indirajith (Wed, 04 Dec 2019 13:25:24 GMT): jyellick (Wed, 04 Dec 2019 15:58:19 GMT): tommyjay (Wed, 04 Dec 2019 21:03:57 GMT): jyellick (Wed, 04 Dec 2019 21:04:45 GMT): jyellick (Wed, 04 Dec 2019 21:05:15 GMT): tommyjay (Wed, 04 Dec 2019 21:06:32 GMT): jyellick (Wed, 04 Dec 2019 21:07:13 GMT): jyellick (Wed, 04 Dec 2019 21:07:36 GMT): tommyjay (Wed, 04 Dec 2019 21:07:45 GMT): jyellick (Wed, 04 Dec 2019 21:08:06 GMT):
Hi @kostas, I seem to have found a bug in your SBFT code: in the `maybeSendCommit` function, it checks `s.cur.prepared || len(s.cur.prep)`
ERROR: compose.cli.errors.log_timeout_error: An HTTP request took too long to complete. Retry with --verbose to obtain debug information.
If you encounter this issue regularly because of slow network conditions, consider setting COMPOSE_HTTP_TIMEOUT to a higher value (current value: 60).
Has anyone encountered this problem?
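For anyone hitting this timeout: the value can be raised via an environment variable before invoking compose, as the error message suggests. A minimal sketch (the value 300 and the compose filename in the comment are illustrative choices, not from this thread):

```shell
# Raise the docker-compose client-side HTTP timeout (in seconds) before
# bringing the network up. 300 is an arbitrary example value; tune as needed.
export COMPOSE_HTTP_TIMEOUT=300
echo "compose HTTP timeout is now ${COMPOSE_HTTP_TIMEOUT}s"
# docker-compose -f docker-compose-cli.yaml up -d   # then start the network as usual
```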
broker.go:96: Failed to connect to broker 0ec84c399891:9092: dial tcp: lookup 0ec84c399891 on 127.0.0.11:53: no such host
hey @jyellick, was wondering if you could help me understand something. in the balance_transfer example, we create a channel, an organization joins it, the chaincode gets installed on the peers of that organization, but we don't call `channel.initialize()` until right before calling `channel.sendInstantiateProposal()`.
why is that? the docs say it 'initializes the channel object with the MSPs,' but do we have to wait until right before instantiating chaincode to do that? or could we have initialized the channel after joining?
@chenxuan This indicates that your DNS resolution is broken. A google search led me to https://github.com/moby/moby/issues/20335 , perhaps you have some mismatch of ipv4/ipv6 in your environment? Perhaps it is another issue, but it sounds to me like this is a problem with compose, rather than with fabric.
@jrosmith This is probably a better question for the #fabric-sdk-node channel, but I believe `channel.initialize()` is allocating local SDK resources for the channel, rather than performing any remote operation. Joining a channel is in a sense a 'peer global' operation, as is installing a chaincode. So, the order you specified makes sense intuitively.
@jyellick that makes sense! i'll follow up in the other channel. thanks again
2017-06-24 06:07:48.207 UTC [orderer/common/broadcast] Handle -> WARN 284 Rejecting CONFIG_UPDATE because: Error authorizing update: Error validating DeltaSet: Policy for [Groups] /Channel/Application not satisfied: Failed to reach implicit threshold of 1 sub-policies, required 1 remaining
Has anyone run into this problem?
@chenxuan by default policy settings, updates to channel config can only be done by someone with Admin rights for the channel based upon the most recent accepted config_update block
(status: 500, message: Cannot create ledger from genesis block, due to LedgerID already exists), cause=null}
Has anyone run into this error?
I ran `docker rm -f $(docker ps -aq)`
and then `docker-compose -f *.yaml up`.
peer1.org1.example.com joins the channel successfully,
but peer0.org1.example.com hits the error above.
How can I resolve this?
@jeffgarratt
could you give me some advice?
@guruce
@chenxuan You would get that error if you tried to join a peer to a channel for a second time. It is saying that it has already allocated database resources for a ledger by that name
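A quick way to confirm whether the peer has already joined is to list its channels before issuing a second join. This is a sketch; it assumes a CLI container named `cli` configured with the peer's admin MSP, as in the e2e samples:

```shell
# List the channels this peer has already joined; if the channel name appears,
# a second 'peer channel join' will fail with the LedgerID-already-exists error.
# Assumes a 'cli' tools container with the peer's admin context (hypothetical setup).
list_joined_channels() {
  if command -v docker >/dev/null 2>&1 && docker ps -q -f name=cli 2>/dev/null | grep -q .; then
    docker exec cli peer channel list
  else
    echo "cli container not running; start the network first"
  fi
}
list_joined_channels
```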
@jyellick hi, has the Kafka `Error: Got unexpected status: SERVICE_UNAVAILABLE` issue been solved?
[ ](https://chat.hyperledger.org/channel/fabric-consensus?msg=zyHHarLJvQQgskYrb) @jyellick the e2e test performs the same operation; why doesn't it show this error log?
@bh4rtp This status is returned when Kafka resources are still being initialized, or the Kafka cluster has not been configured correctly.
@jyellick how to fix it?
2017-06-26 12:33:18.888 UTC [endorser] ProcessProposal -> ERRO 2ec simulateProposal() resulted in chaincode response status 500 for txid: 3eea7b88cfdcc9b88caf9fd7e40b6c8238cc51aa0c4a2e4c5d8fa59b48f66a5e
@bh4rtp: If it's the former, there's no "fixing" it. Kafka resources cannot be deployed instantaneously. If you repeat the request with a reasonably short delay it should pass. If it's the latter, we have no idea what your setup is and thus, are unable to help you.
@chenxuan and everyone else who posts errors here looking for help: It is almost impossible, certainly ineffective, and definitely time-consuming for folks here to help you when _you_ don't provide enough details about your problem. You cannot post a single log statement expecting folks to _infer_ what the problem is. Walk the extra mile, and take the time to write a _detailed_ post explaining what the problem is: what is your setup, what you are doing, what is your expectation, what you are seeing instead. As I've suggested here many times before: think of this as a StackOverflow question and phrase it as such.
@kostas I'm sorry
I will describe my question in detail
Hey all
Is there no way to build a fabric network using `byfn.sh` that includes e2e and couchdb?
When I update `./byfn.sh` to include `COMPOSE_FILE=docker-compose-e2e.yaml` and run `./byfn.sh generate` (which runs successfully), followed by `./byfn.sh -m up`, I get this error
```
Starting with channel 'mychannel' and CLI timeout of '10000'
ERROR: yaml.parser.ParserError: while parsing a block mapping
in "./docker-compose-e2e.yaml", line 6, column 1
expected
```
And since it is looking for the cli container, I am assuming this is not the right way to do this.
Which is the same error I get running `docker-compose -f docker-compose-e2e.yaml -f docker-compose-couch.yaml up -d`
Which was how I was bringing the network up under 1.0-beta
sorry if this is the wrong channel to ask in
Welp.
Traced it down
Turns out when `./byfn.sh -m generate` gets run, it's creating the e2e compose file off the template, and in the template there is a tab in front of the initial 'networks' field
Might want to update that...
That whole initial networks declaration is indented more than necessary
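For anyone wanting to double-check their own compose files, a plain grep with a literal tab character finds the offending lines. The sample file below is fabricated purely to demonstrate:

```shell
# Create a sample compose file containing a stray tab, then scan for tabs.
# (The filename and contents here are made up for demonstration.)
TAB=$(printf '\t')
printf 'version: "2"\n\tnetworks:\n  byfn:\n' > /tmp/compose-tab-demo.yaml
if grep -n "$TAB" /tmp/compose-tab-demo.yaml; then
  echo "tab characters found - YAML parsers will reject these"
fi
```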
@Asara Looking at this now
I see no tab character in `docker-compose-e2e-template.yaml`
@Asara Could you verify you are at the latest version of the samples?
```
# git pull
Already up-to-date.
# git status
# On branch master
# pwd
/opt/fabric-samples
```
@jyellick: so yeah I think it is the latest version of samples
It could just be extra whitespace. I can check tomorrow if it's actually a tab. But the whitespace is causing docker to freak.
Thanks man
hey all
I use the orderer based on Kafka.
When I run the e2e test,
```
2017-06-27 01:28:50.893 UTC [msp] GetLocalMSP -> DEBU 004 Returning existing local MSP
2017-06-27 01:28:50.894 UTC [msp] GetDefaultSigningIdentity -> DEBU 005 Obtaining default signing identity
2017-06-27 01:28:50.896 UTC [grpc] Printf -> DEBU 006 grpc: addrConn.resetTransport failed to create client transport: connection error: desc = "transport: Error while dialing dial tcp 172.16.10.217:7050: getsockopt: connection refused"; Reconnecting to {orderer.example.com:7050
```
I hit this problem.
It seems orderer.example.com:7050 can't be connected to.
```
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
5f60c4c7d5b2 hyperledger/fabric-orderer:x86_64-1.0.0-beta "orderer" 3 minutes ago Up 3 minutes 0.0.0.0:7050->7050/tcp orderer.example.com
273ae699b35d hyperledger/fabric-kafka:x86_64-1.0.0-beta "/docker-entrypoint.s" 3 minutes ago Up 3 minutes 9093/tcp, 0.0.0.0:32962->9092/tcp kafka2.example.com
a8f81952e30d hyperledger/fabric-kafka:x86_64-1.0.0-beta "/docker-entrypoint.s" 3 minutes ago Up 3 minutes 9093/tcp, 0.0.0.0:32961->9092/tcp kafka3.example.com
0479f4d77e84 hyperledger/fabric-kafka:x86_64-1.0.0-beta "/docker-entrypoint.s" 3 minutes ago Up 3 minutes 9093/tcp, 0.0.0.0:32960->9092/tcp kafka1.example.com
d5c23155ca6d hyperledger/fabric-kafka:x86_64-1.0.0-beta "/docker-entrypoint.s" 3 minutes ago Up 3 minutes 9093/tcp, 0.0.0.0:32959->9092/tcp kafka0.example.com
a34e2fff4769 hyperledger/fabric-zookeeper:x86_64-1.0.0-beta "/docker-entrypoint.s" 3 minutes ago Up 3 minutes 0.0.0.0:32958->2181/tcp, 0.0.0.0:32957->2888/tcp, 0.0.0.0:32956->3888/tcp zookeeper1.example.com
14aa8787e31e hyperledger/fabric-ca "sh -c 'fabric-ca-ser" 3 minutes ago Up 3 minutes 0.0.0.0:7054->7054/tcp ca_peerOrg1
621642dab98d hyperledger/fabric-ca "sh -c 'fabric-ca-ser" 3 minutes ago Up 3 minutes 0.0.0.0:8054->7054/tcp ca_peerOrg2
fada80108407 hyperledger/fabric-tools "/bin/bash -c './scri" 3 minutes ago Up 3 minutes cli
14b8f2d95f44 hyperledger/fabric-zookeeper:x86_64-1.0.0-beta "/docker-entrypoint.s" 3 minutes ago Up 3 minutes 0.0.0.0:32955->2181/tcp, 0.0.0.0:32954->2888/tcp, 0.0.0.0:32953->3888/tcp zookeeper2.example.com
f5a8205f977f hyperledger/fabric-zookeeper:x86_64-1.0.0-beta "/docker-entrypoint.s" 3 minutes ago Up 3 minutes 0.0.0.0:32952->2181/tcp, 0.0.0.0:32951->2888/tcp, 0.0.0.0:32950->3888/tcp zookeeper0.example.com
```
but I see that all the containers started up successfully
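When a container is up but the client still gets "connection refused", a quick TCP reachability check from the host tells you whether anything is listening on the published port at all. A sketch using bash's `/dev/tcp` (no `nc` required); `localhost:7050` below assumes the port mapping shown in the container listing:

```shell
# Returns 0 if host:port accepts TCP connections (bash /dev/tcp trick).
check_port() {
  (exec 3<>"/dev/tcp/$1/$2") 2>/dev/null
}
if check_port localhost 7050; then
  echo "orderer listener reachable on 7050"
else
  echo "nothing listening on 7050 - inspect logs, e.g. docker logs orderer.example.com"
fi
```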
@kostas Hi, I've gotten one SBFT node to run, but I don't know how to configure more peers in orderer.yaml, as the peers are configured like `":6101": "sbft/testdata/cert1.pem"`. When I prefix the port number with a domain or IP, it is not parsed correctly. How should I configure it?
hey all
I use the SDK to operate Fabric.
Message Attachments
That is my chaincode (attached above).
But when I run `docker logs dev*`,
it shows nothing.
I'm confused.
Hi, I have been looking at the SBFT code; it looks like it cannot have multiple rounds of consensus happening in parallel, so until the three-phase commit is done for `SeqView (2,4)`, the Pre-Prepare, Prepare and Commit for `SeqView (2,5)` won't be processed. Is that correct?
Hi, can someone explain what the difference is between solo and Kafka consensus?
@LoveshHarchandani - yes, that's the "S" = simplified. https://jira.hyperledger.org/browse/FAB-378
@wy - solo is a single machine, use it for development; if it crashes everything halts. Kafka uses a crash-fault tolerant Apache Kafka cluster.
@cca88 Thanks
@Asara I was looking for a literal tab character, which is what threw me off. You're absolutely right though, this is a bug. I've submitted a patch for it. Thanks for identifying!
No stress! Glad to help :)
hi
i use the orderer based on kafka
Error for partition [testchainid,0] to broker 2:org.apache.kafka.common.errors.UnknownTopicOrPartitionException: This server does not host this topic-partition. (kafka.server.ReplicaFetcherThread)
one of the kafka cluster brokers shows this
orderer.example.com | 2017-06-28 07:43:37.045 UTC [orderer/main] main -> INFO 0e8 Beginning to serve requests
orderer.example.com | [sarama] 2017/06/28 07:43:37.044664 async_producer.go:744: producer/broker/2 state change to [retrying] on testchainid/0 because kafka server: Request was for a topic or partition that does not exist on this broker.
orderer.example.com | [sarama] 2017/06/28 07:43:37.044914 async_producer.go:806: Producer shutting down.
orderer.example.com | [sarama] 2017/06/28 07:43:37.045144 client.go:187: Closing Client
orderer.example.com | [sarama] 2017/06/28 07:43:37.045373 broker.go:182: Closed connection to broker kafka2.example.com:11092
orderer.example.com | [sarama] 2017/06/28 07:43:37.045429 async_producer.go:663: producer/broker/2 shut down
orderer.example.com | [sarama] 2017/06/28 07:43:37.045473 broker.go:182: Closed connection to broker kafka1.example.com:9092
the orderer shows this
it seems topic creation failed
@chenxuan If the partition was created successfully, it sounds like it moved to another broker. I expect you will see this if you inspect the Kafka logs. If so, you will need to discover why the partition leader has changed.
@jyellick ok
Has joined the channel.
Do we have any timetable when SBFT may be part of HLFV1?
@tennenjl It has been ruled out for inclusion in v1. There is ongoing planning for v1.1, and it is an item being discussed, but no decisions have been made yet.
Thanks!
Is someone able to explain or provide links on what are the differences of the current kafka consensus and SBFT?
@wy Kafka is a crash fault tolerant consensus algorithm (CFT), SBFT is a byzantine fault tolerant consensus algorithm (BFT)
are there any articles/documents regarding CFT?
@wy CFT is a very standard distributed systems concept. You may read more about Kafka's implementation here: https://kafka.apache.org
Error: Error endorsing query: rpc error: code = Unknown desc = could not find chaincode with name 'mycc' - make sure the chaincode mycc has been successfully instantiated and try again -
@jyellick
2017-06-29 11:01:35.628 UTC [lscc] Invoke -> ERRO 756 error getting chaincode mycc on channel: mychannel(err:could not find chaincode with name 'mycc')
i started the fabric network with multiple orderers based on kafka
@here on the TSC call, we'll be reviewing a proposal from Babble to incubate their consensus
https://www.gotomeeting.com/join/613310429
now, in case it wasn't clear
the proposal is here: https://docs.google.com/document/d/1wyYVNPDyJKHhdbznWWC1GrWHxFat7qBcB4laa44_Uis/edit#heading=h.lej19udj34tq
Has joined the channel.
hi
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
d40416f1cd85 dev-peer1.org0.example.com-cc-2 "chaincode -peer.addr" 32 seconds ago Exited (0) 12 seconds ago dev-peer1.org0.example.com-cc-2
d97684ee926e dev-peer0.org1.example.com-cc-2 "chaincode -peer.addr" 32 seconds ago Up 16 seconds dev-peer0.org1.example.com-cc-2
a6e81f7001f0 hyperledger/fabric-orderer:x86_64-1.0.0-beta "orderer" 2 minutes ago Up 2 minutes 0.0.0.0:7050->7050/tcp orderer1.example.com
816ea65c8f0b hyperledger/fabric-orderer:x86_64-1.0.0-beta "orderer" 2 minutes ago Up 2 minutes 0.0.0.0:5050->7050/tcp orderer3.example.com
94607d5f2bc1 hyperledger/fabric-orderer:x86_64-1.0.0-beta "orderer" 2 minutes ago Up 2 minutes 0.0.0.0:6050->7050/tcp orderer2.example.com
c2d8651f96fc hyperledger/fabric-orderer:x86_64-1.0.0-beta "orderer" 2 minutes ago Up 2 minutes 0.0.0.0:8050->7050/tcp orderer.example.com
112c4043cca0 hyperledger/fabric-kafka:x86_64-1.0.0-beta "/docker-entrypoint.s" 2 minutes ago Up 2 minutes 9093/tcp, 0.0.0.0:33165->9092/tcp kafka0.example.com
64dd0383aeec hyperledger/fabric-kafka:x86_64-1.0.0-beta "/docker-entrypoint.s" 2 minutes ago Up 2 minutes 9093/tcp, 0.0.0.0:33166->9092/tcp kafka2.example.com
325b867b8b15 hyperledger/fabric-kafka:x86_64-1.0.0-beta "/docker-entrypoint.s" 2 minutes ago Up 2 minutes 9093/tcp, 0.0.0.0:33164->9092/tcp kafka3.example.com
62477ab61b4f hyperledger/fabric-kafka:x86_64-1.0.0-beta "/docker-entrypoint.s" 2 minutes ago Up 2 minutes 9093/tcp, 0.0.0.0:33163->9092/tcp kafka1.example.com
8840f00b181c hyperledger/fabric-peer:x86_64-1.0.0-beta "peer node start" 2 minutes ago Up 2 minutes 0.0.0.0:10051->7051/tcp, 0.0.0.0:10053->7053/tcp peer1.org2.example.com
07f16efc5911 hyperledger/fabric-peer:x86_64-1.0.0-beta "peer node start" 2 minutes ago Up 2 minutes 0.0.0.0:9051->7051/tcp, 0.0.0.0:9053->7053/tcp peer0.org2.example.com
a854e3877300 hyperledger/fabric-ca:x86_64-1.0.0-beta "sh -c 'fabric-ca-ser" 2 minutes ago Up 2 minutes 0.0.0.0:7054->7054/tcp ca0
0e2062de35d3 hyperledger/fabric-peer:x86_64-1.0.0-beta "peer node start" 2 minutes ago Up 2 minutes 0.0.0.0:8051->7051/tcp, 0.0.0.0:8053->7053/tcp peer1.org1.example.com
566f55c11c0d hyperledger/fabric-zookeeper:x86_64-1.0.0-beta "/docker-entrypoint.s" 2 minutes ago Up 2 minutes 0.0.0.0:33162->2181/tcp, 0.0.0.0:33161->2888/tcp, 0.0.0.0:33160->3888/tcp zookeeper2.example.com
aba9c9d95528 hyperledger/fabric-peer:x86_64-1.0.0-beta "peer node start" 2 minutes ago Up 2 minutes 0.0.0.0:7051->7051/tcp, 0.0.0.0:7053->7053/tcp peer0.org1.example.com
0604c694d714 hyperledger/fabric-zookeeper:x86_64-1.0.0-beta "/docker-entrypoint.s" 2 minutes ago Up 2 minutes 0.0.0.0:33159->2181/tcp, 0.0.0.0:33158->2888/tcp, 0.0.0.0:33157->3888/tcp zookeeper1.example.com
321bf4f7c2f8 hyperledger/fabric-zookeeper:x86_64-1.0.0-beta "/docker-entrypoint.s" 2 minutes ago Up 2 minutes 0.0.0.0:33156->2181/tcp, 0.0.0.0:33155->2888/tcp, 0.0.0.0:33154->3888/tcp zookeeper0.example.com
fb4cc98b8dfa hyperledger/fabric-ca:x86_64-1.0.0-beta "sh -c 'fabric-ca-ser" 2 minutes ago Up 2 minutes 0.0.0.0:8054->7054/tcp ca1
has anyone run into this issue?
as you can see, dev-peer1.org0.example.com exited
but i don't have a peer1.org0.example.com
what a strange thing
@chenxuan There is very little likelihood of anyone diagnosing your problems with limited information like this. As I have suggested previously, I know you are running in Ubuntu and not in the Vagrant dev environment. Since the vagrant dev environment is Ubuntu based, I strongly suggest you execute the bddtests inside vagrant to confirm you are executing them correctly. Then, if you are, compare between the steps inside vagrant and on your local machine to find where the divergence occurs. Then, with a specific error and details about why it is occurring, post here.
@jyellick i'm sorry
Has joined the channel.
hi are there any diagrams available to illustrate the current consensus mechanism for hyperledger?
Has joined the channel.
@wy: Figure 8 in https://docs.google.com/document/d/1vNMaM7XhOlu9tB_10dKnlrhy5d7b1u8lSY8a-kVjCO4 shows how the Kafka-based ordering service works
Has joined the channel.
[ ](https://chat.hyperledger.org/channel/fabric-consensus?msg=EHBAA57StMAiLGtaP) @kostas Thank you very much, ill read through the document
The `docker logs kafka_id` is near 09:00, but `docker exec -it kafka_id date` is 5 hours forward. why?
The same result is found for `zookeeper`.
Has joined the channel.
Hi, a question about the role of orderers in the kafka consensus: do all orderers assemble the blocks, or just one?
Has joined the channel.
Has joined the channel.
Hihi! when I try to update the anchor peers, the following error occurs: Rejecting CONFIG_UPDATE because: Error authorizing update: Error validating DeltaSet: Policy for [Groups] /Channel/Application/xxxMSP not satisfied: Failed to authenticate policy
is there any hint?
the tx file is generated by the configtxgen tools
Has joined the channel.
Hi, while restarting the fabric network, if the client tries to connect it gets connected, but invoking transactions fails with an MSP identification error.. :(
Is there any way for clients to identify whether it is OK to do transactions after restarting the network?
hi, is there a sample configtx.yaml file for kafka orderer?
Has joined the channel.
Took 0m12.344s
(behave_venv) root@tinkpad-ThinkPad-X240:/opt/go/src/github.com/hyperledger/fabric/bddtests# peer
panic: Fatal error when initializing core config : Fatal error when reading core config file: Unsupported Config Type ""
bddtest
@chenxuan got the same error before. are you running configtxgen? If so, set your environment variable FABRIC_CFG_PATH=$PWD
Traceback (most recent call last):
File "/usr/local/lib/python2.7/dist-packages/behave/model.py", line 1456, in run
match.run(runner.context)
File "/usr/local/lib/python2.7/dist-packages/behave/model.py", line 1903, in run
self.func(context, *args, **kwargs)
File "steps/endorser_impl.py", line 124, in step_impl
resultsDict = dict(zip(endorsers, [respFuture.result() for respFuture in proposalResponseFutures]))
File "/usr/local/lib/python2.7/dist-packages/grpc/_channel.py", line 294, in result
raise self
_Rendezvous: <_Rendezvous of RPC that terminated with (StatusCode.UNKNOWN, could not find chaincode with name 'example02' - make sure the chaincode example02 has been successfully instantiated and try again)>
Captured stdout:
Will copy gensisiBlock over at this point
bddtest
@jyellick
i ran it twice and it's ok
[ ](https://chat.hyperledger.org/channel/fabric-consensus?msg=3HsoyE7bShd9rSveK)
@wy Although I would consider it an implementation detail (we considered doing it both ways), all Kafka ordering shims assemble blocks after messages have been ordered.
@mavericklam This generally indicates that the wrong identity signed. Usually, this is because a user cert was used, rather than an admin cert. It could also mean that the signature was invalid. I would need to see some additional logs to be sure. [ ](https://chat.hyperledger.org/channel/fabric-consensus?msg=6P5hnshFQ2dYYWeDi)
Restarting the network should not have an effect on which MSP ids are allowed to transact for a given channel. If you can find a client authorized to call `Deliver`, you may invoke `peer channel fetch config` and inspect the config via `configtxlator` to identity who is authorized on a given channel. [ ](https://chat.hyperledger.org/channel/fabric-consensus?msg=fNq4yf5e4PKTLBNRC)
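For example, the fetch-and-decode flow might look like the commands below. The channel name, orderer address, and the configtxlator port are placeholders for your own setup, not values taken from this thread:
```
# fetch the latest config block for the channel
peer channel fetch config config_block.pb -o orderer.example.com:7050 -c mychannel

# decode it to JSON with configtxlator (assumes it is running on localhost:7059)
curl -X POST --data-binary @config_block.pb \
    http://127.0.0.1:7059/protolator/decode/common.Block > config_block.json
```
The decoded JSON contains the channel's MSP definitions and policies, from which you can see which identities are authorized on the channel.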
@bh4rtp There is a `SampleInsecureKafka` profile in the standard `configtx.yaml`, note, the only real difference is that the orderer type has been overridden from `solo` to `kafka`. You should also configure your broker addresses in the orderer section, there is an existing entry of `127.0.0.1:9092` which is almost definitely wrong and will need to be customized to the list of your brokers for your backing kafka cluster.
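Concretely, the relevant part of such a profile looks something like this (the broker addresses here are illustrative and must be replaced with your own cluster's):
```
Orderer:
    OrdererType: kafka
    Kafka:
        Brokers:
            - kafka0.example.com:9092
            - kafka1.example.com:9092
            - kafka2.example.com:9092
```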
[ ](https://chat.hyperledger.org/channel/fabric-consensus?msg=7a2QLq23burkAqZJE)
@jyellick can i configure the broker addresses using `hostname:port`, i.e. `kafka0.example.com:9092`?
@bh4rtp Yes, you can
@jyellick thanks. i had used `examples/e2e_cli/configtx.yaml`. it did not work. is the standard `configtx.yaml` you mentioned `sampleconfig/configtx.yaml`?
Yes, the standard `sampleconfig/configtx.yaml`, though any of them should be easily adaptable
@jyellick i have changed the `configtx.yaml` and set the timeout to `300` for `peer channel create`. but creating the channel still fails with this error:
`Error: Got unexpected status: SERVICE_UNAVAILABLE`
i suspected local host resources, so i deployed the network onto a server. however it fails with the same error.
my fabric network has 1 orderer, 4 peers, 1 cli, 3 zookeeper, 4 kafka and 4 couchdb nodes.
i have been stuck on this problem for almost two weeks.
```2017-07-06 11:14:32.651 CST [orderer/main] func1 -> DEBU 498 Closing Deliver stream
2017-07-06 11:14:36.850 CST [orderer/kafka] try -> DEBU 499 [channel: orderereprich1] Connecting to the Kafka cluster
2017-07-06 11:14:41.850 CST [orderer/kafka] try -> DEBU 49a [channel: orderereprich1] Connecting to the Kafka cluster
2017-07-06 11:14:41.887 CST [orderer/kafka] try -> DEBU 49b [channel: orderereprich1] Error is nil, breaking the retry loop
2017-07-06 11:14:41.887 CST [orderer/kafka] startThread -> INFO 49c [channel: orderereprich1] Producer set up successfully
2017-07-06 11:14:41.887 CST [orderer/kafka] sendConnectMessage -> INFO 49d [channel: orderereprich1] About to post the CONNECT message...
2017-07-06 11:14:41.888 CST [orderer/kafka] try -> DEBU 49e [channel: orderereprich1] Retrying every 5s for a total of 10m0s
2017-07-06 11:14:46.888 CST [orderer/kafka] try -> DEBU 49f [channel: orderereprich1] Attempting to post the CONNECT message...
2017-07-06 11:14:47.858 CST [orderer/kafka] try -> DEBU 4a0 [channel: orderereprich1] Error is nil, breaking the retry loop
2017-07-06 11:14:47.858 CST [orderer/kafka] startThread -> INFO 4a1 [channel: orderereprich1] CONNECT message posted successfully
2017-07-06 11:14:47.858 CST [orderer/kafka] setupParentConsumerForChannel -> INFO 4a2 [channel: orderereprich1] Setting up the parent consumer for this channel...
2017-07-06 11:14:47.858 CST [orderer/kafka] try -> DEBU 4a3 [channel: orderereprich1] Retrying every 5s for a total of 10m0s
2017-07-06 11:14:52.858 CST [orderer/kafka] try -> DEBU 4a4 [channel: orderereprich1] Connecting to the Kafka cluster
2017-07-06 11:14:52.865 CST [orderer/kafka] try -> DEBU 4a5 [channel: orderereprich1] Error is nil, breaking the retry loop
2017-07-06 11:14:52.865 CST [orderer/kafka] startThread -> INFO 4a6 [channel: orderereprich1] Parent consumer set up successfully
2017-07-06 11:14:52.866 CST [orderer/kafka] setupChannelConsumerForChannel -> INFO 4a7 [channel: orderereprich1] Setting up the channel consumer for this channel (start offset: -2)...
2017-07-06 11:14:52.866 CST [orderer/kafka] try -> DEBU 4a8 [channel: orderereprich1] Retrying every 5s for a total of 10m0s
2017-07-06 11:14:57.866 CST [orderer/kafka] try -> DEBU 4a9 [channel: orderereprich1] Connecting to the Kafka cluster
2017-07-06 11:14:57.899 CST [orderer/kafka] try -> DEBU 4aa [channel: orderereprich1] Error is nil, breaking the retry loop
2017-07-06 11:14:57.899 CST [orderer/kafka] startThread -> INFO 4ab [channel: orderereprich1] Channel consumer set up successfully
2017-07-06 11:14:57.899 CST [orderer/kafka] startThread -> INFO 4ac [channel: orderereprich1] Start phase completed successfully
2017-07-06 11:14:57.951 CST [orderer/kafka] processMessagesToBlocks -> DEBU 4ad [channel: orderereprich1] Successfully unmarshalled consumed message, offset is 0. Inspecting type...
2017-07-06 11:14:57.951 CST [orderer/kafka] processConnect -> DEBU 4ae [channel: orderereprich1] It's a connect message - ignoring```
the orderer container log seems to say the channel creation succeeded at `11:14:57.899`, having started at `11:14:36.850`. it took about 20 seconds and the default timeout is 5 seconds, but i had changed the timeout like this:
```peer channel create -o orderer.example.com:7050 -c $CHANNEL_NAME -f ./channel-artifacts/channel.tx --tls $CORE_PEER_TLS_ENABLED --cafile $ORDERER_CA --timeout 5000```
why does the timeout set by `--timeout 5000` not take effect?
Hi @chenxuan and @jyellick, I'm also trying to make a kafka cluster with 4 brokers and 3 zookeepers but run into an orderer error: Cannot post CONNECT message: kafka server: In the middle of a leadership election, there is currently no leader for this partition...
@bh4rtp Please dm me the entire orderer log
@Glen I suspect that your Kafka cluster is not configured correctly. Please review your Kafka cluster's logs to find the problem
Hi @jyellick I've only configured the configtx.yml and docker-compose-cli.yml as follows: OrdererType: kafka
Addresses:
- orderer.example.com:7050
# Batch Timeout: The amount of time to wait before creating a batch
BatchTimeout: 2s
# Batch Size: Controls the number of messages batched into a block
BatchSize:
# Max Message Count: The maximum number of messages to permit in a batch
MaxMessageCount: 10
# Absolute Max Bytes: The absolute maximum number of bytes allowed for
# the serialized messages in a batch.
AbsoluteMaxBytes: 99 MB
# Preferred Max Bytes: The preferred maximum number of bytes allowed for
# the serialized messages in a batch. A message larger than the preferred
# max bytes will result in a batch larger than preferred max bytes.
PreferredMaxBytes: 512 KB
Kafka:
# Brokers: A list of Kafka brokers to which the orderer connects
# NOTE: Use IP:port notation
Brokers:
- kafka0:9092
- kafka1:9092
@Glen I do not believe this is a fabric problem, I believe this is a Kafka configuration problem. Your Kafka cluster has not elected a leader, which usually indicates a misconfiguration of the brokers and their advertised addresses/ports
yes, my docker-compose-cli.yml passes some environment variables as follows:
kafka3:
container_name: kafka3
image: hyperledger/fabric-kafka
environment:
KAFKA_BROKER_ID: 0
KAFKA_MESSAGE_MAX_BYTES: 103809024
KAFKA_REPLICA_FETCH_MAX_BYTES: 103809024
KAFKA_UNCLEAN_LEADER_ELECTION_ENABLE: "false"
KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
KAFKA_DEFAULT_REPLICATION_FACTOR: 3
depends_on:
- zookeeper
here I deploy 4 kafka brokers, 1 zookeeper and 1 orderer
I'm not sure if I need to configure more elsewhere except these two configuration files
Please check your Kafka logs to identify the issue
[2017-07-06 14:56:21,044] FATAL Fatal error during KafkaServerStartable startup. Prepare to shutdown (kafka.server.KafkaServerStartable)
java.lang.RuntimeException: A broker is already registered on the path /brokers/ids/0. This probably indicates that you either have configured a brokerid that is already in use, or else you have shutdown this broker and restarted it faster than the zookeeper timeout so it appears to be re-registering.
at kafka.utils.ZkUtils.registerBrokerInZk(ZkUtils.scala:295)
at kafka.utils.ZkUtils.registerBrokerInZk(ZkUtils.scala:281)
at kafka.server.KafkaHealthcheck.register(KafkaHealthcheck.scala:64)
at kafka.server.KafkaHealthcheck.startup(KafkaHealthcheck.scala:45)
at kafka.server.KafkaServer.startup(KafkaServer.scala:231)
at kafka.server.KafkaServerStartable.startup(KafkaServerStartable.scala:37)
at kafka.Kafka$.main(Kafka.scala:67)
at kafka.Kafka.main(Kafka.scala)
[2017-07-06 14:56:21,048] INFO [Kafka Server 0], shutting down (kafka.server.KafkaServer)
Oh, I see
three kafka brokers died
It looks like you have specified `KAFKA_BROKER_ID: 0` for kafka3, but I would have expected this to be `KAFKA_BROKER_ID: 3`, (and `2`, and `1`, for `kafka2` and `kafka1` respectively)
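In other words, each broker needs a unique `KAFKA_BROKER_ID`. A sketch of the corrected compose entries (service names illustrative, other settings elided):
```
kafka1:
    environment:
        KAFKA_BROKER_ID: 1
kafka2:
    environment:
        KAFKA_BROKER_ID: 2
kafka3:
    environment:
        KAFKA_BROKER_ID: 3
```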
sorry, I got it
I copied the configuration from somewhere
Hi @jyellick, I configured 3 zookeepers like this: `KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181,zookeeper1:2181,zookeeper2:2181` in my docker-compose-cli.yaml but got this error in the orderer log: 2017-07-06 15:38:56.784 UTC [orderer/kafka] Send -> WARN 0e2 [channel: testchainid] Blob destined for partition 0, but posted to -1 instead
2017-07-06 15:38:56.784 UTC [orderer/kafka] Send -> INFO 0e3 [channel testchainid] Failed to post blob to the Kafka cluster: kafka server: Unexpected (unknown?) server error.
2017-07-06 15:38:56.784 UTC [orderer/kafka] Start -> CRIT 0e4 [channel: testchainid] Cannot post CONNECT message: kafka server: Unexpected (unknown?) server error.
@Glen This again sounds like a misconfiguration of your Kafka cluster, please investigate the logs of the brokers/zookeepers to discover the problem
ok
strange! one of the brokers prints the following error. here is its config: `
kafka0:
container_name: kafka0
image: hyperledger/fabric-kafka
environment:
KAFKA_BROKER_ID: 0
KAFKA_MESSAGE_MAX_BYTES: 103809024
KAFKA_REPLICA_FETCH_MAX_BYTES: 103809024
KAFKA_UNCLEAN_LEADER_ELECTION_ENABLE: "false"
KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181,zookeeper1:2181,zookeeper2:2181
KAFKA_DEFAULT_REPLICATION_FACTOR: 3
KAFKA_MIN_INSYNC_REPLICAS: 2
depends_on:
- zookeeper
- zookeeper1
- zookeeper2`
kafka.admin.AdminOperationException: replication factor: 3 larger than available brokers: 1
at kafka.admin.AdminUtils$.assignReplicasToBrokers(AdminUtils.scala:77)
at kafka.admin.AdminUtils$.createTopic(AdminUtils.scala:236)
at kafka.server.KafkaApis$$anonfun$20.apply(KafkaApis.scala:572)
at kafka.server.KafkaApis$$anonfun$20.apply(KafkaApis.scala:555)
at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:245)
at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:245)
at scala.collection.immutable.Set$Set1.foreach(Set.scala:79)
at scala.collection.TraversableLike$class.map(TraversableLike.scala:245)
at scala.collection.AbstractSet.scala$collection$SetLike$$super$map(Set.scala:47)
at scala.collection.SetLike$class.map(SetLike.scala:92)
at scala.collection.AbstractSet.map(Set.scala:47)
at kafka.server.KafkaApis.getTopicMetadata(KafkaApis.scala:555)
at kafka.server.KafkaApis.handleTopicMetadataRequest(KafkaApis.scala:624)
at kafka.server.KafkaApis.handle(KafkaApis.scala:71)
at kafka.server.KafkaRequestHandler.run(KafkaRequestHandler.scala:60)
at java.lang.Thread.run(Thread.java:745)
[2017-07-06 15:48:55,819] INFO [Group Metadata Manager on Broker 1]: Removed 0 expired offsets in 0 milliseconds. (kafka.coordinator.GroupMetadataManager)
one of the brokers ran into this error, but my configuration is the same for all the brokers
I think my zookeepers may not be configured correctly
188dca220000 type:create cxid:0x13 zxid:0xc txntype:-1 reqpath:n/a Error Path:/admin Error:KeeperErrorCode = NoNode for /admin
2017-07-06 15:38:55,488 [myid:] - INFO [ProcessThread(sid:0 cport:2181)::PrepRequestProcessor@649] - Got user-level KeeperException when processing sessionid:0x15d188dca220000 type:setData cxid:0x21 zxid:0x12 txntype:-1 reqpath:n/a Error Path:/controller_epoch Error:KeeperErrorCode = NoNode for /controller_epoch
2017-07-06 15:38:55,610 [myid:] - INFO [ProcessThread(sid:0 cport:2181)::PrepRequestProcessor@649] - Got user-level KeeperException when processing sessionid:0x15d188dca220000 type:delete cxid:0x30 zxid:0x14 txntype:-1 reqpath:n/a Error Path:/admin/preferred_replica_election Error:KeeperErrorCode = NoNode for /admin/preferred_replica_election
2017-07-06 15:38:55,925 [myid:] - INFO [ProcessThread(sid:0 cport:2181)::PrepRequestProcessor@649] - Got user-level KeeperException when processing sessionid:0x15d188dca220000 type:create cxid:0x37 zxid:0x15 txntype:-1 reqpath:n/a Error Path:/brokers Error:KeeperErrorCode = NodeExists for /brokers
2017-07-06 15:38:55,928 [myid:] - INFO [ProcessThread(sid:0 cport:2181)::PrepRequestProcessor@649] - Got user-level KeeperException when processing sessionid:0x15d188dca220000 type:create cxid:0x38 zxid:0x16 txntype:-1 reqpath:n/a Error Path:/brokers/ids Error:KeeperErrorCode = NodeExists for /brokers/ids
this is the log on zookeeper
Yes, since you have just modified the zookeepers, it seems that this is likely the problem
Has joined the channel.
Has joined the channel.
There is a single orderer (solo) in the Fabric Samples example. I want to set up the network with Kafka ordering. I noticed there is a dc-orderer-kafka.yaml file in the bddtests folder in the fabric source code. I wanted to ask if anyone has tried to build a simple network with Kafka. If anyone has already tried to combine the two docker compose files and has a sample configuration, it would help me.
Also in the dc-orderer-kafka.yaml sample, there are three orderers, four kafka brokers and three zookeepers. Is it possible to add new kafka brokers or orderers without restarting the network? Also, if you are building a network with four to five participants and each participant in the network has their own data center, where should we run the orderers and kafka servers? Are we required to run the orderer services in the data center of the founder of the network? Also, is it possible for other participants to contribute orderers to the network?
@gauthampamu
> Is it possible to add new kafka brokers or orderers without restarting the network?
Yes, you may add brokers and orderers dynamically. You will want to issue a reconfiguration transaction after adding orderers so that peers can discover them.
> Also, if you are building a network with four to five participants and each participant in the network has their own data center, where should we run the orderers and kafka servers? Are we required to run the orderer services in the data center of the founder of the network?
Typically, yes, the network founder would run the Kafka servers, but there are no explicit requirements for where the brokers are located.
> Also is it possible for other participants to contribute orderers to the network.
Certainly, though keep in mind, Kafka is not a BFT system.
Has joined the channel.
Hey all, I was trying to understand how the orderer consumes messages from kafka. Can anyone please point me to the piece of code that does that and explain how it works as well?
@Rachitga Please see https://github.com/hyperledger/fabric/blob/master/docs/source/kafka.rst which links to https://docs.google.com/document/d/1vNMaM7XhOlu9tB_10dKnlrhy5d7b1u8lSY8a-kVjCO4/edit
You may find the relevant code in `fabric/orderer/kafka`
@jyellick , thanks, the link that you sent mentions how to bring up the kafka service. I have been able to successfully do that.
As for the code, could you be more specific? I wanted to know the process of the deliver message and how the client gets delivered a batch of transactions.
In summary, there is a topic and partition for each channel. And there is a go routine reading on a Kafka consumer for each channel. It reads the ordered messages, validates them, constructs the block, and writes it to the ledger on the local filesystem. Clients calling `Deliver` are executing queries against this local ledger.
The actual lines of code in the `fabric/orderer/kafka` package is quite low and can be reviewed fairly completely within a few hours.
Specifically, I was looking for this: when a client calls Deliver, I assumed the Deliver call provides the channel name and partition, consumes the topic, and then constructs a block. So I was looking for the function where Deliver specifies the topic name and tries to consume the messages.
I was looking in the folder you mentioned, for the producer i found the code in orderer/kafka/chain.go. For the consumer I was wondering where the code was, do you have an idea?
This is not the architecture. There is a single consumer go routine, which operates independently of the `Deliver` call. It writes blocks to the local filesystem. These blocks are retrieved via the `Deliver` call. `Deliver` does not trigger the allocation of any Kafka resources.
You will find the consumer go routine in `processMessagesToBlocks` in `chain.go`
`case in, ok := <-chain.channelConsumer.Messages():`
Ohh! Thanks for clearing that up. I was failing to find the consume function call in Deliver, now I know why. So what is the trigger for writing the blocks to the local file system?
Is there a timeout system that is implemented? Thanks I will look at the functions you mentioned.
Please look in `chain.go` for `support.WriteBlock`. Each of these indicates a condition when a block is committed to the local filesystem.
There is a batch timeout system, involving first to arrive "time to cut" messages. `chain.go` is where you will find this behavior.
Ohkay, thanks! I will look into it.
@jyellick Thanks for answering my question. If we are building a blockchain network for production, how many orderers or kafka servers do we need so that we can perform a rolling upgrade, in case we need to upgrade the orderers or kafka to install a security fix or other fix?
Has joined the channel.
@gauthampamu This will of course be dependent on your availability requirements. Minimally, we recommend 3 zookeepers, 4 kafka brokers with an RF of 3 and an ISR of 2, and 3 orderers. You may add additional nodes for additional availability.
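In docker-compose terms, that RF/ISR recommendation corresponds to broker settings along these lines (a fragment to apply to each of the 4 broker services; other settings elided):
```
environment:
    KAFKA_DEFAULT_REPLICATION_FACTOR: 3
    KAFKA_MIN_INSYNC_REPLICAS: 2
```
With 4 brokers, RF 3 and ISR 2, the cluster can tolerate one broker failure while still accepting writes, and a second failure without losing committed data.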
I only have one cert in the msp/cacerts of the orderer, but I have two CAs in the network I'm trying to build; is this correct? It seems like the orderer should have two certs, one for each CA.
The orderer keeps closing the grpc channel when my Java SDK tries to communicate with it after enrolling w/ one of the CAs; I'm thinking the above might be the cause.
@jyellick So can you tell me how many additional nodes we need for high availability so that we can perform a rolling update of the network? Also, can you explain what you mean by an RF of 3 and an ISR of 2? How many brokers and how many zookeepers should we have for a production environment? For example, do we need 6 zookeepers, running 3 on VM1 and the other 3 on VM2? Similarly, do we need 8 brokers, with 4 on vm3 and 4 on vm4?
I want to define that golden topology for a production environment which can support upgrades of the environment and also handle failures of one of the orderers or zookeepers.
@gauthampamu The Hyperledger Fabric is flexible and can be set up to meet whatever specifications you require; there is not just one best way. It is up to the network administrator. If that is your decision, then it seems like you would benefit from a discussion with an experienced network service provider. If you are creating your own network service, then you need to decide what HA requirements to provide. Based on the reliability and distribution of the servers throughout the network, the geography, the reliability of the power supply, etc., how many orderers or Kafka brokers can you guarantee to keep in service? How many simultaneous failures of orderers would be acceptable? Or of Kafka brokers? Answers to these and your own network administration requirements will influence how many orderers and Kafka brokers you should use, and what configuration settings you should use for RF (KAFKA_DEFAULT_REPLICATION_FACTOR) and ISR (KAFKA_MIN_INSYNC_REPLICAS) for all your Kafka brokers. Those are explained in the file dc-orderer-kafka.yml. More Kafka information is available at http://hyperledger-fabric.readthedocs.io/en/latest/kafka.html , although that doc page will not address your administration questions.
@pschnap
> I only have one cert in the msp/cacerts of the orderer, but I have two CAs in the network I'm trying to build; is this correct? It seems like the orderer should have two certs, one for each CA.
This is the local MSP dir for the orderer. That cacerts dir identifies the controlling organization for that particular process. The crypto material for the whole network (including the other CAs) is stored internally on the blockchain.
> The orderer keeps closing the grpc channel when my Java SDK tries to communicate with it after enrolling w/ one of the CAs; I'm thinking the above might be the cause.
If you can post orderer logs, it might be helpful. If you see nothing in the orderer logs, this sounds like possibly a TLS problem to me.
@jyellick oh no, not another TLS problem ;) -- I'll post the logs in private chat so as not to flood the channel
@scottz Thanks for the response. I read the documentation on Kafka. I want to keep the service available even if one of the orderers is down, and similarly for Kafka. Two scenarios: 1) keep the service running even if one orderer and one Kafka broker are down; 2) how many orderers and Kafka brokers do we need if we want to keep the service running even if two orderers are down? According to the documentation, you need 3 or 5 zookeepers. For the Kafka cluster, we need 4 servers, and the service continues even if one Kafka broker is down. Let's say two Kafka servers run on the same VM; in that case I would think we need around 6 Kafka brokers, 3 per VM. Similarly, I would think we need 6 orderers, 3 per VM, so that we can still function if one orderer is down or even a whole VM is down.
@gauthampamu In production, you should never host two services on the same VM. The point of splitting these services is to accommodate not only process-level crashes, but also machine-level crashes. For most deployment scenarios, the idea is to eliminate all 'single points of failure'. Similarly, if all of your servers were plugged into the same switch and the switch died, you would lose your service; so each server should be wired into at least two separate switches. The principle extends: if everything is hosted in one datacenter and the building catches fire, your service goes down, so at least two datacenters. Or maybe you are concerned about natural disasters, so maybe two different cities are required. As @scottz pointed out, this is ultimately a problem for an experienced network service provider, and there is no "one size fits all" solution. You will need to think about what your resiliency needs are, and what sort of failures should be tolerated, then come up with a plan from there.
The default configuration of 3 zookeepers, 4 kafka brokers (rf=3, isr=2), and 3 orderers, means that you may lose a zookeeper, a broker, and up to two orderers, and your network should still remain available. If you lose 2 zookeepers, 2 brokers, or all 3 orderer nodes, your service will become unavailable.
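As a back-of-the-envelope check on those numbers: ZooKeeper needs a strict majority quorum, and Kafka tolerates the gap between the replication factor and the minimum in-sync replica count. The helper names below are hypothetical illustrations:

```go
package main

// zkTolerates returns how many ZooKeeper nodes may fail while a strict
// majority quorum (n/2 + 1) survives.
func zkTolerates(n int) int { return n - (n/2 + 1) }

// kafkaTolerates returns how many replicas of a partition may fall out of
// sync before writes requiring min.insync.replicas start being rejected.
func kafkaTolerates(rf, isr int) int { return rf - isr }
```

With the defaults above (3 ZooKeepers, RF=3, ISR=2), both functions return 1: lose one ZooKeeper and one broker and the service stays available, matching the description.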
I have a question on the Hyperledger Fabric SDK. In most samples, they create the chain with multiple orderers, but does it send the transaction to just one orderer? I looked at the source code of Chain.js, line number 1866. It seems to be sending the transaction to just one orderer:
```
var orderer = self.getOrderers()[0];
return orderer.sendBroadcast(envelope);
```
@gauthampamu I am not sure what the question is
Any transaction may be directed to any orderer, including channel creation transactions. Unless you are interested in BFT at the orderer (which is not supported for v1), there is no need to ever send a transaction to more than one orderer.
That answers my questions. Thanks
Is it the same case for sending the transaction proposal? For example, if you have two peers per organization, is it sufficient to send the tx proposal to just one of the peers?
It will depend on the endorsement policy. But typically, receiving an endorsement from more than one peer within an organization is not needed. Typically, if multiple endorsements are required, it is from peers in different organizations.
Has joined the channel.
If we are using an endorsement policy, we need network connectivity between the Node.js application and all the peers involved in the endorsement. If the peers are running in different organizations, those organizations will have to expose the hostnames and ports to the other organizations in order for the endorsement to work.
I am asking these questions to understand the network connectivity requirements
All chaincodes have an endorsement policy, it is a question of what is required by that policy. But yes, if the policy requires endorsements from multiple orgs, then those orgs will all need externally accessible peers to perform those endorsements.
In production environment, the network connectivity is very strict...
Yes. I am certain everyone will have their own deployment requirements, with assorted firewalling, VPNs, etc.
I would think there is network connectivity between the peers. How is that secured? What configuration is required for securing the communication between the peers?
Do we need to export and import certificates from one peer to another to establish secure TLS communication between peers?
@gauthampamu Please see the https://github.com/hyperledger/fabric-samples/tree/master/first-network
When executed, you will see that each organization has a TLS CA cert. Other orgs may use this cert to verify that the TLS connections are legitimate.
In this way, each organization does need to supply a TLS CA cert, but there is no need for the tedium of exchanging individual peer certs.
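On the client side, that pattern might look like the sketch below: pool each org's TLS CA cert and verify server connections against the pool. `loadOrgCAPool` is a hypothetical helper, not SDK code:

```go
package main

import (
	"crypto/x509"
	"fmt"
	"os"
)

// loadOrgCAPool builds a certificate pool from each organization's TLS CA
// cert. A gRPC client can then verify any peer or orderer whose TLS cert
// chains to one of these CAs, without exchanging individual peer certs.
func loadOrgCAPool(caCertFiles []string) (*x509.CertPool, error) {
	pool := x509.NewCertPool()
	for _, f := range caCertFiles {
		pem, err := os.ReadFile(f)
		if err != nil {
			return nil, fmt.Errorf("reading CA cert %s: %w", f, err)
		}
		if !pool.AppendCertsFromPEM(pem) {
			return nil, fmt.Errorf("no valid certificates in %s", f)
		}
	}
	return pool, nil
}
```

The resulting pool can be handed to a `tls.Config` (RootCAs) used by the gRPC dial options, which is why only one CA cert per org needs to be shared.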
hi
can you see something about the tps
```
Lifting the server siege...
Transactions:              1110 hits
Availability:              100.00 %
Elapsed time:              21.44 secs
Data transferred:          0.06 MB
Response time:             3.32 secs
Transaction rate:          51.77 trans/sec
Throughput:                0.00 MB/sec
Concurrency:               171.97
Successful transactions:   1110
Failed transactions:       0
Longest transaction:       4.07
Shortest transaction:      0.68
```
the tps is 52
?
i don't know where the bottleneck is @here
i use kafka
with four orderers
```
################################################################################
Orderer: &OrdererDefaults

    # Orderer Type: The orderer implementation to start
    # Available types are "solo" and "kafka"
    OrdererType: kafka

    Addresses:
        - orderer.example.com:7050
        - orderer1.example.com:7050
        - orderer2.example.com:7050
        - orderer3.example.com:7050

    # Batch Timeout: The amount of time to wait before creating a batch
    BatchTimeout: 2s

    # Batch Size: Controls the number of messages batched into a block
    BatchSize:

        # Max Message Count: The maximum number of messages to permit in a batch
        MaxMessageCount: 10

        # Absolute Max Bytes: The absolute maximum number of bytes allowed for
        # the serialized messages in a batch.
        AbsoluteMaxBytes: 99 MB

        # Preferred Max Bytes: The preferred maximum number of bytes allowed for
        # the serialized messages in a batch. A message larger than the preferred
        # max bytes will result in a batch larger than preferred max bytes.
        PreferredMaxBytes: 512 KB

    Kafka:
        # Brokers: A list of Kafka brokers to which the orderer connects
        # NOTE: Use IP:port notation
        Brokers:
            - kafka0.example.com:9092
            - kafka1.example.com:9092
            - kafka2.example.com:9092
            - kafka3.example.com:9092

    # Organizations is the list of orgs which are defined as participants on
    # the orderer side of the network
    Organizations:
################################################################################
```
the above is my config about the configtx
`io.grpc.netty.NettyClientTransport$3: Frame size 4666142 exceeds maximum: 4194304.`
@chenxuan There is no way to identify the bottleneck from the information you have provided. You will need to use performance profiling tools to determine this.
hi everyone
i want to test the tps of the fabric
below are my steps
first i deploy the fabric on one machine
the config of the fabric network is
Message Attachments
as the picture shows, the orderer is based on kafka
```
Orderer: &OrdererDefaults

    # Orderer Type: The orderer implementation to start
    # Available types are "solo" and "kafka"
    OrdererType: kafka

    Addresses:
        - orderer.example.com:7050
        - orderer1.example.com:7050
        - orderer2.example.com:7050
        - orderer3.example.com:7050

    # Batch Timeout: The amount of time to wait before creating a batch
    BatchTimeout: 1s

    # Batch Size: Controls the number of messages batched into a block
    BatchSize:

        # Max Message Count: The maximum number of messages to permit in a batch
        MaxMessageCount: 100

        # Absolute Max Bytes: The absolute maximum number of bytes allowed for
        # the serialized messages in a batch.
        AbsoluteMaxBytes: 99 MB

        # Preferred Max Bytes: The preferred maximum number of bytes allowed for
        # the serialized messages in a batch. A message larger than the preferred
        # max bytes will result in a batch larger than preferred max bytes.
        PreferredMaxBytes: 512 KB

    Kafka:
        # Brokers: A list of Kafka brokers to which the orderer connects
        # NOTE: Use IP:port notation
        Brokers:
            - kafka0.example.com:9092
            - kafka1.example.com:9092
            - kafka2.example.com:9092
            - kafka3.example.com:9092

    # Organizations is the list of orgs which are defined as participants on
    # the orderer side of the network
    Organizations:
###########
```
the above is the orderer config
so the fabric network works well
and then i use the java sdk to provide a restful interface to operate the fabric
Message Attachments
the picture shows the interface that i provide
and then i use siege (a tool to test web performance)
to test the tps
```
Lifting the server siege...
Transactions:              4300 hits
Availability:              100.00 %
Elapsed time:              49.36 secs
Data transferred:          0.24 MB
Response time:             4.18 secs
Transaction rate:          87.12 trans/sec
Throughput:                0.00 MB/sec
Concurrency:               364.40
Successful transactions:   4300
Failed transactions:       0
Longest transaction:       5.24
Shortest transaction:      1.59
```
the result is bad
@jyellick
@chenxuan I understand that the result is poor, but what is your question? If you are asking where the performance bottleneck is, and how to improve it, there is no way to determine that from the information you have provided.
For what it's worth, I recently tested using only my local laptop with 8 threads writing signed messages to `Broadcast` over gRPC to an orderer using the Solo consensus method also running on my local laptop, I was seeing around 1000 tps. The Kafka consensus method should be faster than this.
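A harness for that kind of measurement might look like the sketch below; `runLoad` and the injected `broadcast` function are hypothetical stand-ins for the real gRPC Broadcast client:

```go
package main

import (
	"sync"
	"sync/atomic"
)

// runLoad fans total messages out across the given number of goroutines,
// calling broadcast for each, and returns how many sends completed.
// Dividing the returned count by the elapsed wall time gives tps.
func runLoad(threads, total int, broadcast func(msg int)) int64 {
	var sent int64
	var wg sync.WaitGroup
	per := total / threads
	for t := 0; t < threads; t++ {
		wg.Add(1)
		go func(start int) {
			defer wg.Done()
			for i := 0; i < per; i++ {
				broadcast(start + i)
				atomic.AddInt64(&sent, 1)
			}
		}(t * per)
	}
	wg.Wait()
	return sent
}
```

Plugging in a function that signs an envelope and sends it on a Broadcast stream reproduces the "8 threads writing signed messages" setup described above.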
@jyellick ok
i use couchdb to store the data
Message Attachments
the above is my chaincode
when i query the data
the fabric produces a block
i am confused
@jyellick
can you take a look?
Has joined the channel.
Hello all,
I was looking at the code of the orderer and kafka,
I was trying to understand a few things in the file orderer/kafka/chain.go,
```
type chainImpl struct {
consenter commonConsenter
support multichain.ConsenterSupport
channel channel
lastOffsetPersisted int64
lastCutBlockNumber uint64
producer sarama.SyncProducer
parentConsumer sarama.Consumer
channelConsumer sarama.PartitionConsumer
halted bool // For the Enqueue() calls
exitChan chan struct{} // For the Chain's Halt() method
startCompleted bool // For testing
}
```
so here in the structure, can someone explain to me the use of lastOffsetPersisted?
also, in the function processRegular, can someone explain to me the use of this code:
```
for i, batch := range batches {
// If more than one batch is produced, exactly 2 batches are produced.
// The receivedOffset for the first batch is one less than the supplied
// offset to this function.
offset := receivedOffset - int64(len(batches)-i-1)
block := support.CreateNextBlock(batch)
encodedLastOffsetPersisted := utils.MarshalOrPanic(&ab.KafkaMetadata{LastOffsetPersisted: offset})
support.WriteBlock(block, committers[i], encodedLastOffsetPersisted)
*lastCutBlockNumber++
logger.Debugf("[channel: %s] Batch filled, just cut block %d - last persisted offset is now %d", support.ChainID(), *lastCutBlockNumber, offset)
}
```
@Rachitga: Every block corresponds to a certain (non-fixed) number of messages posted to the chain's corresponding Kafka partition. So block 10 may, for example, contain messages with offsets 65 to 72. When we cut block 10, we record the offset of the last message we persisted (72 in this case), so that if the Kafka orderer restarts we know where to begin our processing.
The second code block ensures that the right `LastOffsetPersisted` value is encoded in each block, when we have two blocks created in one go.
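The offset arithmetic from the quoted loop can be isolated into a small function (`batchOffsets` is a hypothetical name) to see the two-batch case concretely:

```go
package main

// batchOffsets reproduces the LastOffsetPersisted computation from
// processRegular: for each of numBatches batches cut at receivedOffset,
// offset = receivedOffset - (numBatches - i - 1). With two batches, the
// first block records one less than the triggering message's offset and
// the second records the offset itself.
func batchOffsets(receivedOffset int64, numBatches int) []int64 {
	offsets := make([]int64, numBatches)
	for i := range offsets {
		offsets[i] = receivedOffset - int64(numBatches-i-1)
	}
	return offsets
}
```

For example, if the message at offset 72 triggers a cut producing two batches, the two blocks record 71 and 72 respectively.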
@chenxuan: All feedback is welcome, but we've repeatedly asked you to provide relevant details when posting issues here, and you have repeatedly not done so. Furthermore, you are frequently using this channel (#fabric-consensus) to post *non*-consensus-related questions, such as your last question here earlier today (https://chat.hyperledger.org/channel/fabric-consensus?msg=DZEexQhK7SampPBWC). Can you please stop doing this?
@kostas i sent the chaincode file
How is a chaincode-related question related to ordering?
@kostas i am sorry
Does anyone know of a target release version for the PBFT ordering service?
I'm assuming that the PBFT implementation is going to remove the need for a centralized piece of infrastructure?
@jmcnevin: Correct. No target release date. It's pretty safe to say that it'll come with a minor release within the 1.x track.
Great, thank you.
Has joined the channel.
Discussion on the development and use of the fabric ordering service and its consensus components.
Welcome to #fabric-consensus. Questions here should be related to either the ordering service code and its APIs (Broadcast/Deliver), configuration transactions, or the ordering service consensus plugins (Solo/Kafka/SBFT). Before posting your question, please take time to ensure that your question is precise and concise. For example: Bad question: Why do I get the error `BAD_REQUEST`? Good question: Using `fabric-examples/first-network/byfn.sh`, when submitting the channel creation as `Admin@org1.example.com` it succeeds, but when using `User1@org1.example.com` it fails with `BAD_REQUEST`. Why does this second request fail?
Has joined the channel.
@kostas , thanks for your reply, so offset helps us to keep track of the individual transaction number in the blocks?
And for my second doubt:
```
for i, batch := range batches {
// If more than one batch is produced, exactly 2 batches are produced.
// The receivedOffset for the first batch is one less than the supplied
// offset to this function.
offset := receivedOffset - int64(len(batches)-i-1)
block := support.CreateNextBlock(batch)
encodedLastOffsetPersisted := utils.MarshalOrPanic(&ab.KafkaMetadata{LastOffsetPersisted: offset})
support.WriteBlock(block, committers[i], encodedLastOffsetPersisted)
*lastCutBlockNumber++
logger.Debugf("[channel: %s] Batch filled, just cut block %d - last persisted offset is now %d", support.ChainID(), *lastCutBlockNumber, offset)
}
```
I was a little confused by this specific line:
`offset := receivedOffset - int64(len(batches)-i-1)`
So what I am thinking right now is that if the batches are 1 in number, then offset is same as received offset for the one entry in the array.
i.e. offset := receivedOffset - (1 - 0 - 1)
If the batch array is of size 2, then for the first batch we get an offset reduced by one, and for the second batch we get the same offset as the receivedOffset.
So we are placing the first batch with one message first, and the second batch with the rest of the messages later.
Am I correct here? And I also wanted to know: the receivedOffset field is read from the message, more specifically,
```
in := <-channelConsumer.Messages()
receivedOffset := in.Offset
```
so is it something that each kafka consumer maintains separately?
@Rachitga: The offset is a partition property. It's maintained by the brokers (replica) maintaining this partition.
Your other statements are correct.
okay, thanks @kostas , also could you tell me about the committers array that is used in support.WriteBlock function?
it is returned by the function, support.BlockCutter().Ordered(env), in function processRegular in orderer/kafka/chain.go.
@Rachitga: You'll need to be a bit more specific with the question. Did you study the code? Which part specifically gives you pause?
so I studied the code. What I presently understand is that when a broadcast is sent to an orderer, the kafka producer gets called and the message is sent to a queue; the orderer maintains a ledger and also consumes the messages from the queue into blocks, and then a deliver call is made by the committer to read these blocks. So what I thought was that support.WriteBlock would have each orderer write a block into its ledger, so why is a committers array being passed?
Did you study the `filter.Committer` interface?
committers has a datatype of `[]filter.Committer`
Correct.
Yeah, I looked in the file common/filter/filter.go:
```
// Committer is returned by postfiltering and should be invoked once the message has been written to the blockchain
type Committer interface {
// Commit performs whatever action should be performed upon committing of a message
Commit()
// Isolated returns whether this transaction should have a block to itself or may be mixed with other transactions
Isolated() bool
}
```
Does anyone know how the blockchain on each of the peers catches up with the latest blocks if one or more peers are down for a period of time, for example, 10 min vs 1 hr vs 1 day?
but I am getting confused about why a Committer interface is present in writing the ledger on the orderer
@Rachitga: You are confused, but what is the source of confusion? Not calling you out, just trying to figure out your expectation.
For instance, if a config transaction has come in, this will need to be applied.
And the `Committer` construct is the way to do this.
Is it the word "commit" that's throwing you off? Or something else? That's what I can't quite get.
Look at `orderer/common/configtxfilter/filter.go` for instance.
(You'll see that the `Commit()` call taps into the `configtx` manager's `Apply()` method.)
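A toy version of that flow, with hypothetical `configCommitter` and `writeBlock` names mirroring the quoted `Committer` interface: writing a block fires each committer's `Commit()`, and a config transaction's committer uses that hook to apply the new configuration.

```go
package main

// Committer mirrors the interface quoted above from common/filter/filter.go.
type Committer interface {
	Commit()
	Isolated() bool
}

// configCommitter stands in for the configtxfilter committer whose Commit()
// taps into the configtx manager's Apply() method.
type configCommitter struct {
	applied *bool // stand-in for the config manager's state
}

func (c configCommitter) Commit()        { *c.applied = true }
func (c configCommitter) Isolated() bool { return true } // config txs get a block to themselves

// writeBlock mimics the WriteBlock call pattern: persist the block, then
// invoke Commit() on each committer now that the block is on the chain.
func writeBlock(committers []Committer) {
	// (block persistence elided)
	for _, c := range committers {
		c.Commit()
	}
}
```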
@eliranbi: Please rephrase? What's the source of concern?
Thanks, I understood what you were mentioning! Now I have understood most of the orderer code, :relaxed: . I also wanted to understand the filtering of messages at the orderer.
@eliranbi: The orderers keep a ledger of all past transactions for the channel, so whether you're out for an hour or a week, it won't matter. And I presume that for peers that do not reach out to the orderer directly, gossiping with other peers in the channel takes care of this, though this is a question for the #fabric-gossip folks. (Although I'd love to have a mid-level view of how that works as well.)
My concern is: if we have a network of 3-4 organizations, each with a peer, what happens if one of the peers is down for a long period of time, let's say 3 hrs, while the rest of the peers continue to process transactions that get added to the blockchain? How does the peer that was down catch up with the rest of the blocks?
On what criteria is the filtering of messages done?
@Rachitga: Are you studying the code out of curiosity, for bugs, or to develop your own consensus plugin? Just curious.
@eliranbi: I answered that above I think.
yes thanks! I just saw that
@kostas , I was studying the code for development purposes.
@Rachitga: It's not clear what you mean by filtering of messages. Be more specific. Links to specific lines on Github are awesome.
https://docs.google.com/document/d/1vNMaM7XhOlu9tB_10dKnlrhy5d7b1u8lSY8a-kVjCO4/edit
The lines on the first page that say:
> (3) they also do transaction filtering and validation for configuration transactions that either reconfigure an existing chain or create a new one.
Ah.
If you study the `Handle` logic in the `broadcast` package this will become clearer: https://github.com/hyperledger/fabric/blob/master/orderer/common/broadcast/broadcast.go#L76
I found certain sections of code, such as in orderer/broadcast/broadcast.go, where I suspect this filtering might have been applied
```
logger.Debugf("Broadcast is filtering message of type %s for channel %s", cb.HeaderType_name[chdr.Type], chdr.ChannelId)

// Normal transaction for existing chain
_, filterErr := support.Filters().Apply(msg)
if filterErr != nil {
	logger.Warningf("Rejecting broadcast message because of filter error: %s", filterErr)
	return srv.Send(&ab.BroadcastResponse{Status: cb.Status_BAD_REQUEST})
}
```
You're spot on.
If you follow that trail from `broadcast.Handle()` you'll see for instance how a new channel is created.
@kostas , I was looking at the broadcast.Handle() function as you mentioned, and I understood the enqueue statements and the reading of the envelope message received through broadcast, but what does filter do? It filters messages on what criteria? Does it read policy and try to see if signatures satisfy policy?
[ ](https://chat.hyperledger.org/channel/fabric-consensus?msg=PzEJ9fpL6fBDHzGB5) @kostas , also a follow-up question on this: is this offset a property of a topic or of a partition?
(Partition.)
To answer your previous question, here's _one_ way of going at it:
`support.Filters()`
Look up the signature of `Filters()` and it'll be: `Filters() *filter.RuleSet`
(Just a sec, contact lens stuck)
[ ](https://chat.hyperledger.org/channel/fabric-consensus?msg=B5Z8cz5TNqhvp9pzk) @kostas :sweat_smile: , sure.
I'm guessing that if you're using a proper IDE like Gogland you'll be able to get to that function's implementation right away.
If you don't, just grep for: `) Filters() *filter.RuleSet {` and you'll find all the structs implementing it.
That'll take you to: `func (cs *chainSupport) Filters() *filter.RuleSet {` (line 203, `chainsupport.go`)
The function's definition reads: `return cs.filters`
So all you need to do is figure out where those filters are set.
This takes some slight digging around, but soon enough you'll (hopefully) bump into the constructor for `cs` (chainSupport): `newChainSupport()` in line 115 of `chainsupport.go`. This takes a `filters` argument of type `*filter.RuleSet`.
Next step then is to figure out who's calling `newChainSupport()`.
You'll find three references in `multichain/manager.go`.
You'll see that the `filters` input argument is created via a call to `create(Standard|SystemChain)Filters()`. You can take it from there.
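The rule-set idea can be sketched with simplified, hypothetical types (the real `filter` package differs in detail): rules are consulted in order, and the first one to accept or reject decides the message's fate.

```go
package main

import "errors"

// Action is a rule's verdict on a message.
type Action int

const (
	Forward Action = iota // no opinion; consult the next rule
	Accept
	Reject
)

// Rule inspects a message and returns a verdict.
type Rule func(msg string) Action

// RuleSet applies its rules in order; the first Accept or Reject wins.
type RuleSet struct{ rules []Rule }

func (rs RuleSet) Apply(msg string) error {
	for _, r := range rs.rules {
		switch r(msg) {
		case Accept:
			return nil
		case Reject:
			return errors.New("message rejected by filter")
		}
	}
	return errors.New("no rule accepted the message")
}
```

Composing different rule slices for standard vs. system chains is, roughly, what the `create(Standard|SystemChain)Filters()` split achieves.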
@kostas , I understand what you are getting at. Thanks, let me go through these functions and try to understand them. :slight_smile:
@jyellick some preliminary numbers on using node sdk to send broadcast requests to the orderer (SOLO, on macbook pro):
```
** Orderer.js class sendBroadcast() API performance **
ok 1 Successfully loaded member from persistence
ok 2 Sent 1000 requests to orderer broadcast API in 151 milliseconds, averaging 6623 sent requests per second
ok 3 Completed 1000 requests to orderer broadcast API in 4858 milliseconds, averaging 206 requests per second
#
** gRPC orderer client low-level API performance **
ok 4 Successfully loaded member from persistence
ok 5 Sent 1000 requests to orderer client low-level API in 1 milliseconds, averaging 1000000 sent requests per second
ok 6 Completed 1000 requests to orderer client low-level API in 6017 milliseconds, averaging 166 requests per second
# Ending the broadcast stream
```
first test calls the Orderer.js sendBroadcast() method; the 2nd test calls the underlying grpc service client and re-uses the broadcast stream object without calling broadcast.end() after each data-reception callback
@jimthematrix am I misreading the data, or are you seeing better performance before the optimization?
Maybe I can work with you tomorrow to compare the golang client with the node code to see if we can account for this discrepancy. I can also try modifying the go client to mimic the node behavior, to see if the go code performance is simply that much better. It seems unlikely to me, but worth investigating.
And just to double check, can you confirm that multiple `write` calls (equivalent to the golang `Send`) are being issued without waiting for each completion callback? Essentially, I would expect the behavior to look something like this:
```
write
write
write
...
write
callback
callback
write
callback
write
...
callback
callback
...
callback
```
etc.
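In Go terms, the pipelined pattern sketched above (blast all the writes, drain the acks concurrently rather than one ack per write) might look like the following. The `stream` type here is a toy stand-in simulating a Broadcast stream with channels, not the real gRPC API:

```go
package main

import (
	"fmt"
	"sync"
)

// stream is a toy stand-in for a Broadcast stream: Send queues a
// message, and acks arrive asynchronously on a separate channel.
// This only illustrates the pipelining pattern.
type stream struct {
	acks chan struct{}
}

func (s *stream) Send(msg string) {
	// Pretend the server acks every message after "processing" it.
	go func() { s.acks <- struct{}{} }()
}

func main() {
	const n = 1000
	s := &stream{acks: make(chan struct{}, n)}

	var wg sync.WaitGroup
	wg.Add(1)

	// Receive loop: drain acks concurrently with the sends,
	// instead of waiting for each ack before the next Send.
	go func() {
		defer wg.Done()
		for i := 0; i < n; i++ {
			<-s.acks
		}
	}()

	// Send loop: blast all writes without waiting on callbacks.
	for i := 0; i < n; i++ {
		s.Send(fmt.Sprintf("msg-%d", i))
	}

	wg.Wait()
	fmt.Println("all", n, "messages acked")
}
```

The key property is that the send loop never blocks on an individual ack, which is what produces the "write write write ... callback callback" interleaving above.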
@jimthematrix I've modified my golang test client to follow the SDK strategy of tearing down the Broadcast after each iteration. With sufficient threads (i.e. 100 or so), the bottleneck is not the gRPC connection, and I see around 900 tps with both strategies. With only a single go routine, this drops to around 700 tps in the SDK strategy, but holds at 900 tps while re-using the Broadcast. So, although it sounds like re-using the Broadcast connection is probably still superior, I don't think it accounts for the significant difference between the golang and node clients. I've added you as a reviewer on a draft changeset of mine which is producing these results. Perhaps you could run it on your Mac to make sure we aren't chasing mismatched hardware?
And one final question. What is the size of the payload you're sending in?
@jyellick the difference b/w the two runs is basically negligible (over multiple runs they are essentially the same number)
although the numbers do seem low compared to what you had with a GO client
so the fact that the two numbers (one calling broadcast.end() b/w calls, another doesn't) are equivalent, is consistent with what you saw above.
yep that's what I was going to ask next, to run your tests on my machine, will give it a GO ;-)
@jyellick yes the `write()` calls are sent in a blast in both cases, which is demonstrated by the fact that:
```
Sent 1000 requests to orderer broadcast API in 151 milliseconds
```
(these are the `write()` calls)
vs.
```
Completed 1000 requests to orderer broadcast API in 4858 milliseconds
```
Both elapsed time values use the same start time, which is right before the `write()` calls.
this is what I got running CR 11267 on my machine, with a single goroutine:
```
Completed 1000 requests in 4275 milliseconds, averaging 233 requests per second
```
I'm going to try a lighter weight message payload (type: message instead of endorser_transaction) like you have in the test, to see if it'll pull the GO and node SDK based clients closer in the performance numbers
@jimthematrix Still rather surprising to me that the rate drops that much on your machine. I find it unlikely my laptop is 4x faster than yours. How did you run `broadcast_timestamp`? I have been doing:
```
time ./broadcast_timestamp -messages 10000 -goroutines 8
```
Also, perhaps you could link me to a draft CR so I can run the node test locally?
@jimthematrix Also, how are you running the orderer? My guess is that you're doing it inside a docker container, but I think docker on Mac is notoriously slow? Could you try building/running the orderer locally?
I see that according to http://hyperledger-fabric.readthedocs.io/en/latest/kafka.html, we can set OrdererKafkaVersion to 0.9.0.1, 0.10.0.0, 0.10.0.1, or 0.10.1.0. That controls the messaging protocol versions used. Does that mean that the kafka broker images we use are actually version 0.10.1.0 (which is compatible with all those versions we can select)?
Or does the user somehow need to create correct images for the desired kafka version that they will be specifying in the orderer.yaml?
`Out of the box the Kafka version defaults to 0.9.0.1.`
https://github.com/hyperledger/fabric/blob/master/images/kafka/Dockerfile.in#L8
> https://chat.hyperledger.org/channel/fabric-consensus?msg=z5TgJrt6wp7NyhqmE
Yes.
> Or does the user somehow need to create correct images for the desired kafka version that they will be specifying in the orderer.yaml?
Yes.
I understand to set the kafka_version in the orderer.yaml file. But is that the ONLY thing I need to do? I worry that that would only alter the orderer behavior while using the same old kafka images.
On the other hand, the note "(Hyperledger Fabric uses the sarama client library and vendors a version of it that supports Kafka 0.9 and 0.10.)" on page http://hyperledger-fabric.readthedocs.io/en/latest/kafka.html seems to indicate maybe we are always using the 0.10.1.0 kafka software images. Just looking for confirmation...
@scottz: Not sure where the confusion stems from. We are saying: "We can work with Kafka versions 0.9.x and 0.10.x. So _identify the Kafka version that you are running_, and then just set the value on the `orderer.yaml` file before launching the ordering service and you're good to go."
> I worry that that would only alter the orderer behavior while using the same old kafka images.
This is indeed what would happen. We are not claiming otherwise.
> seems to indicate maybe we are always using the 0.10.1.0 kafka software images.
Why would it indicate that?
We write that: `Out of the box the Kafka version defaults to 0.9.0.1.`
Fabric team ships images for kafka 0.9.0.1. An orderer-service-provider that wants to use the timestamps and enhancements in kafka 0.10 would need to set that kafka_version in orderer.yaml *AND* rebuild their own kafka images after editing line 8 of https://github.com/hyperledger/fabric/blob/master/images/kafka/Dockerfile.in. Correct?
The statement "vendors a version of it that supports Kafka 0.9 and 0.10" implies to me that the user does not have to rebuild kafka, but would simply need to choose the version in orderer.yaml and the kafka image would behave accordingly. You are saying that I misinterpreted it. (It appears you are right, because we tried setting orderer.yaml kafka_version to 0.10.0.0, but the kafka broker logs still say 0.9.) So I think we can add a few words to that doc page to make sure nobody else can make the same mistake.
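For what it's worth, the orderer-side change being discussed is just a config value, something like the fragment below (the exact key name and placement may differ across Fabric releases, so check the sample orderer.yaml that ships with your version):

```yaml
# orderer.yaml (fragment) -- illustrative only
Kafka:
    # Must match the version of the Kafka brokers you actually run.
    # Setting this does NOT upgrade the broker images themselves;
    # those must be rebuilt/replaced separately.
    Version: 0.10.1.0
```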
[ ](https://chat.hyperledger.org/channel/fabric-consensus?msg=NNtoLN6LmhTFSXnW7) @kostas And remember this setting is mentioned in orderer.yaml as the default selection, so this statement could be misinterpreted. The Kafka image version is a different thing from the Kafka version selected in the orderer config.
> Correct?
Correct.
> So I think we can add a few words to that doc page to make sure nobody else can make same mistake.
@scottz: I am not against this, and I see how we could be a bit more explicit / less ambiguous. Would you like to submit a PR with this change?
[ ](https://chat.hyperledger.org/channel/fabric-consensus?msg=nhB3YSr44st66L3g5) @jyellick https://gerrit.hyperledger.org/r/#/c/11557/
[ ](https://chat.hyperledger.org/channel/fabric-consensus?msg=8EwEyiL54mc4aowWA) @jyellick yes running inside docker on mac
@jimthematrix I'm assuming I start up an orderer, configure some variables, then run `node test/integration/perf/orderer.js`? What variables (and where) do I need to set them?
@jyellick no need to set any variables, main thing is to make sure the local MSP folder under sampleconfig has the right materials (so that signing etc. can be verified properly), so do
```
cp -R /fabric-sdk-node/test/fixtures/channel/crypto-config/ordererOrganizations/example.com/orderers/orderer.example.com/
```
[ ](https://chat.hyperledger.org/channel/fabric-consensus?msg=LdXk46Yu7HzAGcE9L) @kostas If it does not get addressed by FAB-5252, then I will open another Jira and give it a go.
@jimthematrix This looks like it's simply a difference in our machines/deployments. I was finally able to get your `orderer.js` to run with the following results:
```
** gRPC orderer client low-level API performance **
ok 1 Sent 1000 "MESSAGE" requests to orderer client low-level API in 2 milliseconds, averaging 500000 sent requests per second
ok 2 Completed 1000 "MESSAGE" requests to orderer client low-level API in 1089 milliseconds, averaging 918 requests per second.
ok 3 Sent 1000 "ENDORSER_TRANSACTION" requests to orderer client low-level API in 1 milliseconds, averaging 1000000 sent requests per second
ok 4 Completed 1000 "ENDORSER_TRANSACTION" requests to orderer client low-level API in 1097 milliseconds, averaging 912 requests per second.
#
** Orderer.js class sendBroadcast() API performance **
# Ending the broadcast stream
# Ending the broadcast stream
ok 5 Sent 1000 "MESSAGE" requests to orderer broadcast API in 102 milliseconds, averaging 9804 sent requests per second
ok 6 Completed 1000 "MESSAGE" requests to orderer broadcast API in 1198 milliseconds, averaging 835 requests per second.
ok 7 Sent 1000 "ENDORSER_TRANSACTION" requests to orderer broadcast API in 81 milliseconds, averaging 12346 sent requests per second
ok 8 Completed 1000 "ENDORSER_TRANSACTION" requests to orderer broadcast API in 1180 milliseconds, averaging 847 requests per second.
```
I'm still seeing slightly faster results with the golang client, but only 5-10% faster, which seems well within the margin of gRPC implementation differences
Hello all, I wanted to know when the deliver handler gets called?
More specifically, the function deliver in fabric/orderer/server.go,
```
// Deliver sends a stream of blocks to a client after ordering
func (s *server) Deliver(srv ab.AtomicBroadcast_DeliverServer) error {
logger.Debugf("Starting new Deliver handler")
return s.dh.Handle(srv)
}
```
I wanted to find out where at the orderer this function is called.
When a Deliver call reaches the ordering service:
https://github.com/hyperledger/fabric/blob/master/protos/orderer/ab.proto#L76
This is the implementation of the service definition.
Thanks for the reply @kostas. So a deliver handler should be called after I have written a block, but for some reason at my orderer I am not seeing the log "Starting new Deliver handler", implying that this function is not getting called after I have written the block. It is being called before writing the block, and I can see a lot of messages which indicate that the block had not been written till that point.
You may also see `fabric/orderer/server.go` which routes requests to `fabric/orderer/common/deliver/deliver.go`. The `server.go` structure is registered with the gRPC server in `fabric/orderer/main.go`
@Rachitga The Deliver function is called when a client invokes the `Deliver` API
If there is no client invoking this API, writing a block will not trigger it (or more precisely, writing blocks triggers activity in `deliver.go`, however, the specific message `Starting new Deliver handler` is only invoked when a client connects to the `Deliver` service)
at the client I see the following logs,
```
2017-07-12 12:13:23,465 DEBUG Orderer:161 - Order.sendDeliver name: orderer.example.com, url: grpc://localhost:7050
2017-07-12 12:13:23,473 DEBUG OrdererClient:180 - resp status value: 404, resp: NOT_FOUND, type case: STATUS
2017-07-12 12:13:23,474 WARN OrdererClient:210 - onCompleted
2017-07-12 12:13:23,474 DEBUG Channel:660 - Channel foo getGenesisBlock deliver status: 404
2017-07-12 12:13:23,475 WARN Channel:662 - Bad deliver expected status 200 got 404, Channel foo
```
Yes, what is happening here, is that your SDK has sent in a channel creation request for channel foo
The orderer has accepted this request, but it takes some time for the orderer to create the channel and its resources
The SDK is being a little over-eager and is requesting the genesis block for that channel, before the orderer has had a chance to finish creating it
(Or, perhaps it is your code which is)
Either way, the correct response is to wait a moment, and then try again.
so how long will the sdk keep requesting? Is it hard coded at the sdk side?
do you have an idea about the sdk?
That's a question for the SDK folks.
You can ask in #fabric-sdk-node
(I believe your log output looks like the node SDK, at least)
okay, thanks! I was trying to do exactly that, I was sending a channel creation request but it was failing, thanks!
[ ](https://chat.hyperledger.org/channel/fabric-consensus?msg=CbtWnnQFiDpemWrTQ) @jyellick Am using java-sdk actually
Ah, thanks for that
[ ](https://chat.hyperledger.org/channel/fabric-consensus?msg=6nJSwYtuwk25cwNfk) @jyellick wow thanks for confirming that, and the numbers are against native orderer process or docker?
Those numbers are against the orderer running bare metal on my laptop
ok, so a pretty big hit (~5X) when running docker
I will try to run it on docker on my laptop (Linux) to see if it is possibly more of a problem with the Mac docker
maybe as a result of the native docker daemon for mac/win, I wonder what the difference is when running docker on linux
My recollection is that networking throughput under docker on Mac is known to be quite slow, but I'm uncertain
cool, would be great to find out...
@jimthematrix Testing on my laptop, running the orderer alone in docker, i saw the throughput drop by about half. So, down to about 500 tps
ok, so it's slower than native process but still much faster in docker over linux than docker over mac
although it's kind of difficult to compare since they are done on different hardware (mac on my machine vs. linux on yours)
Hello, i have a question on kafka. I use one orderer that connects to 4 brokers. I am using
`- KAFKA_DEFAULT_REPLICATION_FACTOR=3` and `- KAFKA_MIN_INSYNC_REPLICAS=2` on the brokers.
I noticed that there is one partition created on one broker `kafka2`. The leader ( `kafka0` ) and one more broker ( `kafka3` ) are copying that partition from `kafka2` , while `kafka1` remains pretty much inactive. The orderer is just fetching metadata from 2 brokers, the leader ( `kafka0` ) and the holder of the partition ( `kafka2` ).
I was wondering, does it seem correct?
Also what happens if `kafka2` (holder of the partition) starts missing packets?
> I was wondering, does it seem correct?
It does.
> Also what happens if `kafka2` (holder of the partition) starts missing packets?
One of `kafka0` or `kafka3` will now own this partition, and everything will keep working w/o issues.
When `kafka2`'s connectivity issues are resolved, it shall rejoin the replication set for this partition.
Ah great makes sense, thank you.
https://chat.hyperledger.org/channel/fabric-questions?msg=HrYaxXZF8GXdKLKWm
I'm using the basic-network configtx.yaml file for this but still have the error
@FollowingGhosts This indicates that the profile you used for the channel creation did not specify any members. The orderer will reject a channel creation request that creates a channel with no members, that is the error you are seeing.
I was using the profile provided in the examples, but I think I may be having issues with the configtxgen
I have checked the config file using configtxlator and it does in fact specify members
```
    "read_set": {
      "groups": {
        "Application": {
          "groups": {
            "Org1MSP": {},
            "Org2MSP": {}
          }
        }
      },
      "values": {
        "Consortium": {
          "value": {
            "name": "SampleConsortium"
          }
        }
      }
    },
    "write_set": {
      "groups": {
        "Application": {
          "groups": {
            "Org1MSP": {},
            "Org2MSP": {}
```
@FollowingGhosts How are you sending the channel creation tx?
I realised it was an issue with the way I was sending it
I hadn't specified the channel name in my script
Great, glad you've got it figured out
Hello, I'm looking for code which creates a new MSP and sets it up. Could someone point me to the right place?
@thakkarparth007: Hi, the right place is #fabric-crypto.
yacovm directed me here :P
Dang
This isn't about MSP as such, I wanna know where the MSP is used
An MSP instance is created only on receiving a new config tx, right?
@thakkarparth007 Not exactly
There are two primary kinds of MSPs, "Local MSPs" and "Validating MSPs"
The local MSP is on the filesystem of the orderer or peer, and contains the process's local signing identity, and local CA etc.
The local material is generally used for operations which do not have a network effect. For instance, installing a chaincode onto a peer. This requires the signature of an admin for that peer, but that peer admin is not necessarily a network admin.
On the other hand, there are also "Validating MSPs"; I've also heard them called "Channel MSPs". Regardless, these are encoded in a very similar way to local MSPs, but do not have any private key material or signing certs.
These validating MSP definitions are what are used in the channel configuration, they are set in the consortium definition on the ordering system channel, and then that crypto material becomes the basis for your new channels.
There is a lot more which can be said, but I'll stop here and let you process and or ask questions.
I'll get to the point on what he wants to ask, Jason
when you get a config update
you re-create an MSP instance
and assign that MSP instance
Sorry, was reading all that. Took time to process. :P
or rather assign the instance of the new MSP into the instance of the existing channel MSP
something like this, right? @jyellick
@thakkarparth007 @yacovm Yes, there is the concept of an "MSP Manager" for a channel. This contains all of the MSP definitions as defined in the channel's most recent configuration block. The way this is exposed internally, is as a wrapper which stores a reference to an underlying MSPManager. When a config update comes through, a new manager is constructed, and then the reference is changed to point to the new underlying manager.
Right, so how does a cert revocation work? let's say a cert is revoked, a config-tx would be added, right?
Which would cause a new MSPManager instance to be created
yeah
Where does this happen?
Where does this happen in code?
To revoke a cert, you would construct a configtx which updates the MSP definition which owns that cert, and add it to the CRL. The configtx is submitted, it gets processed, triggering a new MSP instance to be constructed (which has this cert revoked). This new MSP instance is used in the construction of the new MSPManager, which is swapped out atomically when the new configuration commits.
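A minimal sketch of that reference-swap pattern, assuming toy `manager`/`wrapper` types rather than the real MSPManager interfaces:

```go
package main

import (
	"fmt"
	"sync/atomic"
)

// manager is a toy stand-in for an MSPManager: it knows a set of
// revoked certs (a real MSP checks CRLs, policies, etc.).
type manager struct {
	revoked map[string]bool
}

func (m *manager) Validate(cert string) bool { return !m.revoked[cert] }

// wrapper exposes a stable handle whose underlying manager reference
// is swapped atomically when a new config block commits.
type wrapper struct {
	current atomic.Value // holds *manager
}

func (w *wrapper) Validate(cert string) bool {
	return w.current.Load().(*manager).Validate(cert)
}

func (w *wrapper) Update(m *manager) { w.current.Store(m) }

func main() {
	w := &wrapper{}
	w.Update(&manager{revoked: map[string]bool{}})
	fmt.Println(w.Validate("alice-cert")) // true: not revoked

	// A config update revoking alice's cert builds a NEW manager
	// and swaps the reference; readers never see a half-updated state.
	w.Update(&manager{revoked: map[string]bool{"alice-cert": true}})
	fmt.Println(w.Validate("alice-cert")) // false: revoked
}
```

Building a whole new manager and swapping the pointer is what makes the config commit atomic from the point of view of concurrent validators.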
Oh, is there a one-one correspondence between an MSP Manager and an MSP?
`fabric/common/config/msp/config.go`
An MSP Manager is constructed with a list of MSPs
```
mspList := make([]msp.MSP, len(pendingConfig.idMap))
i := 0
for _, pendingMSP := range pendingConfig.idMap {
mspList[i] = pendingMSP.msp
i++
}
pendingConfig.proposedMgr = msp.NewMSPManager()
err := pendingConfig.proposedMgr.Setup(mspList)
```
So even if one MSP changes, all MSPs are recreated?
Correct
Oh. Okay
all MSPs *of that channel*
Why wouldn't we simply recreate that particular msp?
The MSP Manager is treated as a unit. This isn't as important for X.509, but is needed to support more novel crypto schemes
it's an optimization, nothing more. a config update isn't frequent
Thanks for adding that yacovm. Might have confused me in future
Oh, okay
So, if I somehow maintain a cache of, say, DeserializeIdentity inside an msp instance, that won't bite me if a cert is revoked, right?
because anyway a new msp instance is created
Correct
Awesome. Thanks!
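To illustrate why such a cache is safe, here's a toy sketch (hypothetical types, not the real MSP interface): the cache lives inside the instance, so a config update that builds a new instance discards the cache wholesale, and stale, possibly-revoked identities can never leak across configurations:

```go
package main

import (
	"fmt"
	"sync"
)

type identity struct{ name string }

// mspInstance caches deserialized identities. Because a brand-new MSP
// instance is built on every config update (e.g. a revocation), this
// per-instance cache is thrown away along with the instance.
type mspInstance struct {
	mu    sync.Mutex
	cache map[string]*identity
}

func newMSPInstance() *mspInstance {
	return &mspInstance{cache: map[string]*identity{}}
}

func (m *mspInstance) DeserializeIdentity(serialized string) *identity {
	m.mu.Lock()
	defer m.mu.Unlock()
	if id, ok := m.cache[serialized]; ok {
		return id // cache hit: skip the (expensive) parsing
	}
	id := &identity{name: serialized} // stand-in for real parsing
	m.cache[serialized] = id
	return id
}

func main() {
	m := newMSPInstance()
	a := m.DeserializeIdentity("cert-A")
	b := m.DeserializeIdentity("cert-A")
	fmt.Println(a == b) // true: second call served from the cache

	// After a config update, a fresh instance (fresh cache) replaces m.
	m2 := newMSPInstance()
	fmt.Println(m2.DeserializeIdentity("cert-A") == a) // false: new cache
}
```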
Hello All,
I was trying to look in the orderer code and how a block is cut,
in the file https://github.com/hyperledger/fabric/blob/master/orderer/common/blockcutter/blockcutter.go
line number 125, inside the function, func (r *receiver) Ordered(msg *cb.Envelope) ([][]*cb.Envelope, [][]filter.Committer, bool)
```
messageWillOverflowBatchSizeBytes := r.pendingBatchSizeBytes+messageSizeBytes > r.sharedConfigManager.BatchSize().PreferredMaxBytes
```
Is there a bug in this line of code?
Should it not be AbsoluteMaxBytes instead of PreferredMaxBytes? Because in configtx.yaml,
```
# Absolute Max Bytes: The absolute maximum number of bytes allowed for the serialized messages in a batch.
AbsoluteMaxBytes: 99 MB
# Preferred Max Bytes: The preferred maximum number of bytes allowed for the serialized messages in a batch. A message larger than the preferred max bytes will result in a batch larger than preferred max bytes.
PreferredMaxBytes: 512 KB
```
r.pendingBatchSizeBytes is the current size of the batch in bytes, as shown by the code on lines 135-137
```
logger.Debugf("Enqueuing message into batch")
r.pendingBatch = append(r.pendingBatch, msg)
r.pendingBatchSizeBytes += messageSizeBytes
```
okay sorry, I understood my mistake, preferred max bytes is the preferred size for one batch
could someone explain the use of absolute max bytes to me, though?
and is it implemented somewhere in the code?
> could someone explain me the use of absolute max bytes though?
@Rachitga: You'll never get a batch of serialized messages that's larger than `AbsoluteMaxBytes`.
> and is it implemented somewhere in the code?
Of course. Searching for `AbsoluteMaxBytes` should have led you here: https://github.com/hyperledger/fabric/blob/master/orderer/multichain/chainsupport.go#L171
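To make the interaction between the two limits concrete, here is a simplified sketch of a block cutter. This is an illustration of the idea only, not the actual `blockcutter` code (which also handles MaxMessageCount, isolated config messages, committers, etc.):

```go
package main

import "fmt"

// cutter is a simplified block cutter: AbsoluteMaxBytes rejects
// messages outright; PreferredMaxBytes decides when to cut a batch.
type cutter struct {
	preferredMaxBytes int
	absoluteMaxBytes  int
	pending           [][]byte
	pendingBytes      int
}

// Ordered enqueues msg, returning any batches that were cut and
// whether the message was accepted at all.
func (c *cutter) Ordered(msg []byte) (batches [][][]byte, ok bool) {
	if len(msg) > c.absoluteMaxBytes {
		return nil, false // too big to ever fit in a block
	}
	// Cut the pending batch first if adding msg would push it
	// past the preferred size.
	if len(c.pending) > 0 && c.pendingBytes+len(msg) > c.preferredMaxBytes {
		batches = append(batches, c.pending)
		c.pending, c.pendingBytes = nil, 0
	}
	c.pending = append(c.pending, msg)
	c.pendingBytes += len(msg)
	// A message that alone exceeds the preferred size is cut into
	// its own (larger-than-preferred) batch immediately.
	if len(msg) > c.preferredMaxBytes {
		batches = append(batches, c.pending)
		c.pending, c.pendingBytes = nil, 0
	}
	return batches, true
}

func main() {
	c := &cutter{preferredMaxBytes: 10, absoluteMaxBytes: 20}
	if _, ok := c.Ordered(make([]byte, 25)); !ok {
		fmt.Println("rejected: exceeds AbsoluteMaxBytes")
	}
	c.Ordered(make([]byte, 6))               // pending = 6 bytes
	batches, _ := c.Ordered(make([]byte, 6)) // 6+6 > 10: cut pending
	fmt.Println("batches cut:", len(batches))
}
```

So PreferredMaxBytes is a soft threshold that triggers cutting, while AbsoluteMaxBytes is the hard ceiling enforced upstream (the link above) that no single message may exceed.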
Thanks, @kostas
Hello all,
I wanted to make some changes in some of the properties of a channel at the orderer, *dynamically*.
Specifically I wanted to dynamically update my batch size (the max message count, the preferred max bytes) after a channel is created, is that possible?
And if it is, what is the effect during the change or transition? Are some blocks dropped or fail the validation process as a result of this change?
And how to make such a change?
There is a tutorial for that @Rachitga
https://github.com/hyperledger/fabric/tree/release/examples/configtxupdate
@yacovm thanks! This covers on how to make the change.
Can you also tell me about the changes during the transition? Meaning, how does the configtxlator tool work?
Let's say an organisation is added in the middle, or let's say the batch size is reduced while transactions are pending, and the different ordering service nodes get the signed and edited configtx update from the sdk at different times. Will that cause inconsistencies in the ledger, or will that change how the Kafka queue is consumed and how the blocks are cut?
well, every update to the channel has its own sequence since it's a configuration block
and all orderer nodes see the transactions at the same order
so they all cut the same sequenced config block
and apply it, and also the peers apply it too
@yacovm, thanks!
Is it possible to have validators similar to Ripple so that multiple parties are validating the transactions?
Isn't this what endorsers in Fabric are supposed to do?
I am looking for configtx.yaml file for configuring the kafka with three orderers.
In the configtx.yaml file, it is configured for one orderer with solo type. I want to change it to Kafka and provide the list of orderers. What is the syntax of the addresses for providing the orderers? Is it a comma-separated list?
```
Orderer: &OrdererDefaults
    # Orderer Type: The orderer implementation to start
    # Available types are "solo" and "kafka"
    OrdererType: solo
    Addresses:
        - orderer.example.com:7050
```
@gauthampamu It is standard yaml list notation:
```
Addresses:
- orderer1.example.com:7050
- orderer2.example.com:7050
- orderer3.example.com:7050
```
Is it possible to run a crossing network with hyperledger on top doing consensus on the executions, so that it has low latency and maximum transaction throughput?
@jmar42 Implementing a crossing network type system on top of hyperledger fabric should be possible. Without knowing more about your specific goals, it's difficult to say more. You might find others with similar goals in the general #fabric channel
@jyellick Thanks for the response. I have another question regarding generating the certificates for the orderers. What changes should I make to the crypto-config.yaml file to generate certificates for three orderers? I am basically customizing first-network for Kafka. If anyone has already enhanced the first-network example or any example in Fabric Samples with Kafka, can you please share the steps to integrate it? Is this the correct configuration for crypto-config to generate the certs for three orderers? I just ran the script and it generated the files under the orderer folder.
```
OrdererOrgs:
  # ---------------------------------------------------------------------------
  # Orderer
  # ---------------------------------------------------------------------------
  - Name: Orderer
    Domain: example.com
    # ---------------------------------------------------------------------------
    # "Specs" - See PeerOrgs below for complete description
    # ---------------------------------------------------------------------------
    Specs:
      - Hostname: orderer0
      - Hostname: orderer1
      - Hostname: orderer2
```
@gauthampamu Correct, if you define multiple `Hostname`s under Specs, you should get a local MSP dir for each of your orderers.
Simply use the corresponding correct dir for each of your orderers
For the network, you can add new peers for an existing organization or a new organization, but is it possible to add new orderers after the network is running? Also, similarly, is it possible to add new zookeepers or kafka brokers after you start the network?
If you want to add two additional Kafka brokers without stopping the network, is it possible ?
Also, let's say you have an organization for the orderers. Is it possible for an organization that provides peers to also contribute an additional orderer to the network? Can an organization just contribute an orderer, or do they also need to provide a Kafka broker and zookeeper? Thanks in advance.
> is it possible to add new orderers after the network is running ?
Yes, it is possible, but there are some limitations. In particular see https://jira.hyperledger.org/browse/FAB-5157 and https://jira.hyperledger.org/browse/FAB-5288
> Also similarly is it possible to add new zookeepers or kafka brokers after you start the network.
I will defer to @kostas here for specifics, but it should be possible to add new ZK nodes and Kafka brokers after starting the network.
> Also let says you have an organization for the orderers, is it possible for the organization that provides peers to also contribute an additional orderer to the network.
An org may function as an application (peer) org, or an orderer org, or both. When using Kafka, however, the benefits of having multiple ordering orgs usually do not outweigh the costs.
> Can an organization just contribute an orderer or do they also need to provide Kafka broker and zookeeper.
The only requirement is that the org's orderer is authorized and capable of communicating with the backing Kafka/ZK cluster.
[ ](https://chat.hyperledger.org/channel/fabric-consensus?msg=SaCMmvjjGDcYyxsgx) @jyellick Ok thx for response. Also, is it possible to have multiple validators like Ripple?
Hello All,
When a client sends a transaction, does it send a message to one ordering service node, or does it send it to many ordering nodes? (I was looking at the code on the client side, which led me to believe that the request is sent to a collection of orderers.)
In case it sends it to more than one orderer, how is it ensured that duplicates are not published into the kafka queue?
@Rachitga: One ordering node. If that is not available, it switches to another node.
> In case it sends it to more than one orderers, how is it decided that duplicates are not published into the kafka queue?
@Rachitga: Just to follow up on this: if you decided -- for whatever reason -- that you want to send to multiple orderers, you would end up with duplicates in the Kafka queue and the ledger, _but_ only the first transaction would actually be valid, i.e. modify the KV store. All the rest would then refer to invalid key versions.
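A toy model of the version check that invalidates those duplicates (hypothetical types, not the real validation code): validation compares the key version each transaction read at endorsement time against the currently committed version, so the first copy commits and bumps the version, and every later copy fails the check:

```go
package main

import "fmt"

// kv is a committed value plus its MVCC version.
type kv struct {
	value   string
	version int
}

type ledger struct{ state map[string]kv }

// tx records the version of the key it read at endorsement time.
type tx struct {
	key         string
	readVersion int
	newValue    string
}

// commit applies the tx only if its read version is still current.
func (l *ledger) commit(t tx) bool {
	if l.state[t.key].version != t.readVersion {
		return false // stale read: tx is marked invalid
	}
	l.state[t.key] = kv{value: t.newValue, version: t.readVersion + 1}
	return true
}

func main() {
	l := &ledger{state: map[string]kv{"a": {value: "1", version: 0}}}
	duplicate := tx{key: "a", readVersion: 0, newValue: "2"}

	fmt.Println(l.commit(duplicate)) // true: first copy is valid
	fmt.Println(l.commit(duplicate)) // false: duplicate sees a stale version
}
```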
@kostas , thanks!
Is PBFT consensus or a variation called SBFT available in 1.0 GA?
@DennisM330 No, the sbft consensus is not available in v1.0.0
@jyellick Thanks for the response. When you are designing the architecture, what are the different possibilities after the network is up and running? For example, let's say you have a network with two organizations, two peers per organization, an organization for the orderers (Kafka), and five orderers. 1) After you start the network, you want to add a new organization. 2) If you initially have just one peer per organization, the participant might want to add an additional peer for the organization. 3) Once you have 5 orderers, you don't need any more orderers, right? If you have just three orderers, it might make sense to add an additional orderer.
@gauthampamu
> 1) After you start the network, you want to add a new organization
Yes, this is possible
> 2) If you initially have just one peer per organization, the participant might want to add an additional peer for the organization.
Yes, this is possible, and very straightforward. Each member is not restricted in the number of peers; each peer must simply have a validly signed cert from the org CA.
> 3) Once you have 5 orderers, you do not need any more orderers, right? If you have just three orderers, it might make sense to add an additional orderer.
This really depends on your setup. Perhaps you wish to have 3 orderers in each geography. Or, perhaps for scale reasons, you wish to have 7 orderers. There is no real restriction here.
@jyellick Let's say you start with three orderers initially. I am assuming you can easily add additional orderers and increase that from 3 to 5; how should you adjust the number of Kafka brokers and ZooKeepers when you add orderers? I have read that the number of Kafka brokers should be odd. What are the different combinations of orderers, Kafka brokers, and ZooKeepers? Also, when should you add new orderers? Let's say you are getting 10 TPS with three orderers and you have to scale to 40 TPS. What should you change in the architecture to scale the throughput? I understand that there is no magic formula and the answer would depend, but I wanted to know whether throughput would improve by adding additional orderers to the network.
@gauthampamu
> I have read the number of Kafka brokers should be odd.
I believe you have read that the number of _ZooKeeper nodes_ should be odd. The Kafka cluster may safely have an even number of brokers.
> What is the different combination of orderers, kafka and zookeepers.
In general, adding ZK is done to improve high availability and tolerate more crashes. Adding Kafka brokers can also be used to increase availability. Adding brokers will generally not increase throughput within a single channel, but might increase throughput with sufficiently many channels. Adding orderer nodes can be used to increase throughput if the clients distribute their load across them, or can be added to increase availability.
> Let says you getting 10 TPS with three orderers and you have to scale it to 40 TPS. What should you change to the architecture to scale the through put. I understand that there is no magic formula and the answer would depend but I wanted to know whether the scale would improve with adding additional orderers to the network.
As you say, there is no magic formula. In general, I would expect for the orderer processes to be the bottleneck of the system, as they are doing more work generally than Kafka, so I would expect that adding additional ordering nodes is your most likely avenue to scaling. However, there are some known limitations to this in v1, because all orderers end up validating all transactions at least once, so at some point, adding additional orderers will not give you additional scale. There is an issue open for this which will help the system scale better horizontally ( https://jira.hyperledger.org/browse/FAB-5258 ). For now, adding orderers should help somewhat, but deploying the orderer on faster hardware is likely to give you the most direct throughput boost.
If you are building the network for production: I have read that it is recommended to have 5 ZooKeepers. So if we plan to have 5 ZooKeepers, what should be the number of orderers and Kafka brokers?
Can we still use 5 orderers and 5 Kafka brokers for production, or should we have 4 orderers, 4 or 6 Kafka brokers, and 5 ZooKeepers?
Also, how many systems should we use for the ordering service? I have created this topology and will need your feedback. Let me know how I should adjust the number of orderers and Kafka brokers, and how I should distribute them across the VMs. For now I have assigned two VMs for the ordering service. I have been advised to even run the CA on the same systems.
> Can we still use 5 orderers and 5 Kafka brokers for production, or should we have 4 orderers, 4 or 6 Kafka brokers, and 5 ZooKeepers?
Only the ZK should be odd, the others are independent. You should define your fault tolerance characteristics, then use this to define the network topology characteristics.
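For what it's worth, the usual crash-fault arithmetic behind such topology choices can be sketched as follows. This is my reading of the commonly cited sizing rules; the function names and the exact broker formula are assumptions for illustration, not an official Fabric recommendation:

```go
package main

import "fmt"

// Back-of-the-envelope sizing from a crash-fault budget f:
// ZooKeeper needs a strict majority alive, so 2f+1 nodes tolerate f
// crashes; for Kafka, keeping min.insync.replicas = f+1 with a
// replication factor of 2f+1 lets writes continue after f broker
// failures, and one spare broker (2f+2 total) lets the cluster
// re-replicate afterwards.
func zookeeperNodes(f int) int   { return 2*f + 1 }
func kafkaReplication(f int) int { return 2*f + 1 }
func kafkaMinInsync(f int) int   { return f + 1 }
func kafkaMinBrokers(f int) int  { return 2*f + 2 }

func main() {
	f := 1 // tolerate one crashed node of each kind
	fmt.Println(zookeeperNodes(f), kafkaMinBrokers(f), kafkaReplication(f), kafkaMinInsync(f))
	// f=1 gives 3 ZK nodes, 4 brokers, replication factor 3,
	// min.insync.replicas 2 -- matching the defaults the Fabric Kafka
	// guide suggests.
}
```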
> Also, how many systems should we use for the ordering service? I have created this topology and will need your feedback. Let me know how I should adjust the number of orderers and Kafka brokers, and how I should distribute them across the VMs. For now I have assigned two VMs for the ordering service. I have been advised to even run the CA on the same systems.
This really depends on your goals. If you are looking for high availability, I would discourage you from running any of the services colocated on the same VM, as this means that if this one VM fails, you will lose multiple services.
Has joined the channel.
How can I add a new org to an existing channel? I am able to decode the genesis block into JSON using configtxlator from the SDK. However, I am not aware how to edit the JSON (generate org information like readers, writers, and encoded certs) to add a new org.
@n91 I would recommend that you generate a genesis block containing the crypto material for your org. This is done in the ordering system channel genesis block, but you may repeat it. Once decoded to JSON, copy the organization definition (which includes the MSP material as well as org policies) somewhere. Then to add an organization, simply follow the standard configtxlator reconfiguration flow as outlined in https://github.com/hyperledger/fabric/blob/master/examples/configtxupdate/README.md , but use the JSON you saved from the first step to define the new org.
@jyellick I am doing the same thing now but I think it is just a hack
Certainly the reconfiguration flow for v1 is on the hacky side. Reconfiguration is something that SDKs will hopefully expose in the future; however, for v1 this was not realistic to complete. That is why `configtxlator` exists: to give SDK users a language-neutral way to perform reconfiguration. Ideally, the application stores the most up-to-date copy of each organization's definition somewhere, which can be used in the application to construct the change request.
@jyellick Would this new org be requesting the configuration block itself, or permission to join the channel by providing his own certs?
@n91 The new org cannot request the current config block, because it is not yet a member. Instead, the new org should supply its identity as that Organization section to an existing member. The existing member constructs the config update based on this new org definition, then solicits signatures from existing channel members until sufficient signatures have been gathered. The existing member then submits the update tx which adds the new org.
@jyellick What does "supply its identity as that Organization section to an existing member" mean?
How does the new org supply its certs and all information to an existing member?
As I indicated, it is a bit of a hack. But the organization may define a genesis block using `configtx.yaml`, then generate it using `configtxgen`, and decode this using `configtxlator` to extract that section.
then submit this section to existing orgs offline ?
Correct.
Hopefully, these sections change very infrequently, so a section may be generated once and stored, then used for a long period of time before it must be regenerated.
who decides the number of signatures required for an org to enter a channel ?
This is (by default) defined by the `/Channel/Application/Admins` policy
The default value for this policy is the majority of the Admin policies for the orgs in the channel.
So, for instance, if you had a channel with:
`peerOrg0`, `peerOrg1`, and `peerOrg2`
It would require that either the `Admins` policy for `peerOrg0` and `peerOrg1` is satisfied, or the `Admins` policy for `peerOrg0` and `peerOrg2`, or the `Admins` policy for `peerOrg1` and `peerOrg2` is satisfied.
The `Admins` policy of a peer organization is defined by default to be any admin certificate of that org.
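The MAJORITY rule above can be sketched in a few lines. This is a toy model of the ImplicitMetaPolicy evaluation only, not Fabric's actual policy engine:

```go
package main

import "fmt"

// majoritySatisfied models the MAJORITY ImplicitMetaPolicy described
// above: with totalOrgs org-level Admins sub-policies, strictly more
// than half of them must be satisfied for the meta-policy to pass.
func majoritySatisfied(satisfied, totalOrgs int) bool {
	return satisfied > totalOrgs/2
}

func main() {
	// Three orgs: any two Admins signatures suffice, one does not.
	fmt.Println(majoritySatisfied(1, 3)) // false
	fmt.Println(majoritySatisfied(2, 3)) // true
}
```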
Got it ! Thank you very much
@n91 Happy to help!
@n91 Can you explain the steps to add the new org?
@gauthampamu I am also working on it. Will follow this : [ ](https://chat.hyperledger.org/channel/fabric-consensus?msg=ew3QxqL5XcLhbmnpM)
You can generate crypto material for 1 org by editing configtx.yaml and using configtxgen to generate orderer.block. Then use configtxlator to decode it into json
Hey @jyellick I am trying to rename these things from Org1 to different names. What sections of the configs do I need to change when recreating the crypto materials
@Asara I expect you should be mostly fine with a 'search and replace' type strategy. You will obviously want to hit `configtx.yaml` and `crypto-config.yaml`, but you will also need to make sure the MSP ID is specified correctly in your orderer and peer config (either via environment or in the yaml, depending on how you have things set up).
The key to remember when doing this is that the MSP ID is not tied to the crypto contents. You could, theoretically, change the MSP IDs and org names everywhere without regenerating your crypto material.
ah
hm.
(However, assuming you let the org names bleed into things like hostnames, you will end up with TLS problems for instance)
makes sense
So update the Name: entries, but not the ID entries?
You may do this, or you may update both. It will really depend on what parts of your network you are generating, and which parts are hard coded. In the interest of re-usability, I would encourage you to dynamically generate as much as possible, so I would be in the camp of "change them both", this way you can catch anywhere you are not dynamically handling this. But it is of course, up to you.
Thanks!
Does that mean the names in the profile also need to be updated?
The names like `*Org1` are actually just yaml references to the organization section labeled by `&Org1`, and will have no effect on your configuration.
That is what I thought. Cool thanks
However, under the `&Org1` section, there are `Name` and `ID` fields, these will of course affect your config.
What exactly is the Name that is referenced there?
Where is that name referenced elsewhere?
As in, under &OrdererOrg, there is Name, ID. What are those referencing exactly?
Ah, so, `Name` is the key which is used to refer to the org in the configuration. It is restricted to follow the naming conventions of configuration elements. The MSPID is the ID as encoded in the MSP, and its form is far more relaxed.
This is probably a question better asked of the #fabric-crypto folks, but my understanding is that the org name is not tied to the MSP ID because an org does not always administer its own MSP. But really, I am a little fuzzy on this. The examples use different IDs for these fields, I think to help the user recognize that there is a differentiation between them. However, if you desired, as a rule for your network, to require that these IDs always match, I see no problem with this.
I'll mess around and hopefully figure it out :)
Has joined the channel.
Hello,
I want to set up Kafka and am looking at https://github.com/hyperledger/fabric/blob/release/bddtests/dc-orderer-kafka.yml
I have downloaded the fabric-release.zip and have grepped with no success.
But I cannot figure out where the following bash variables for the orderer in dc-orderer-kafka.yml are initialized or what they should be:
- ORDERER_GENERAL_LOCALMSPID=${ORDERER1_ORDERER_GENERAL_LOCALMSPID}
- ORDERER_GENERAL_LOCALMSPDIR=${ORDERER1_ORDERER_GENERAL_LOCALMSPDIR}
- ORDERER_GENERAL_TLS_PRIVATEKEY=${ORDERER1_ORDERER_GENERAL_TLS_PRIVATEKEY}
- ORDERER_GENERAL_TLS_CERTIFICATE=${ORDERER1_ORDERER_GENERAL_TLS_CERTIFICATE}
- ORDERER_GENERAL_TLS_ROOTCAS=${ORDERER1_ORDERER_GENERAL_TLS_ROOTCAS}
Does anyone have an example?
Thanks in advance!
@szoghybe I'm not sure I follow your question. The variables you referenced (those starting with `ORDERER_GENERAL_`) are all overrides for the `orderer.yaml` file. This is configuration unique to each ordering process, and why you see it enumerated by orderer id.
@szoghybe the values are calculated dynamically when you execute the behave features
https://github.com/hyperledger/fabric/tree/release/bddtests#running-a-specific-feature
here is a sample output...
```
services:
  orderer0:
    command: orderer
    environment:
      ORDERER_GENERAL_GENESISFILE: /var/hyperledger/bddtests/volumes/orderer/4fcba86e6b0b11e79d3502a6c0ed0e40/genesis_file
      ORDERER_GENERAL_GENESISMETHOD: file
      ORDERER_GENERAL_LISTENADDRESS: 0.0.0.0
      ORDERER_GENERAL_LOCALMSPDIR: /var/hyperledger/bddtests/volumes/orderer/4fcba86e6b0b11e79d3502a6c0ed0e40/orderer0/localMspConfig
      ORDERER_GENERAL_LOCALMSPID: ordererOrg0
      ORDERER_GENERAL_LOGLEVEL: debug
      ORDERER_GENERAL_TLS_CERTIFICATE: /var/hyperledger/bddtests/volumes/orderer/4fcba86e6b0b11e79d3502a6c0ed0e40/orderer0/tls_config/orderer0Signer-orderer0-ordererOrg0-tls.crt
      ORDERER_GENERAL_TLS_ENABLED: 'true'
      ORDERER_GENERAL_TLS_PRIVATEKEY: /var/hyperledger/bddtests/volumes/orderer/4fcba86e6b0b11e79d3502a6c0ed0e40/orderer0/tls_config/orderer0Signer-orderer0-ordererOrg0-tls.key
      ORDERER_GENERAL_TLS_ROOTCAS: '[/var/hyperledger/bddtests/volumes/orderer/4fcba86e6b0b11e79d3502a6c0ed0e40/orderer0/localMspConfig/cacerts/ordererOrg0.pem]'
      ORDERER_KAFKA_RETRY_SHORTINTERVAL: 1s
      ORDERER_KAFKA_RETRY_SHORTTOTAL: 30s
      ORDERER_KAFKA_VERBOSE: 'true'
    image: hyperledger/fabric-orderer
    ports:
      - '7050'
    volumes:
      - /etc/hyperledger/msp:/etc/hyperledger/msp:rw
      - /opt/gopath/src/github.com/hyperledger/fabric/fabric-explorer/volumes/orderer:/var/hyperledger/bddtests/volumes/orderer:rw
    working_dir: /opt/gopath/src/github.com/hyperledger/fabric/orderer
```
Has joined the channel.
Has joined the channel.
If an OSN crashes, i.e., its data in the filesystem is lost, how can it recover, i.e., resync all the channels and all the ledgers? Or, to ask it a different way: how can a newly added orderer download the ledger, given that ordering nodes are not connected to each other and Kafka keeps messages for only 7 days?
Dear Experts,
Could you please provide me some insight into dynamically expanding an existing network?
I am confused about whether we can do it with cryptogen and configtxgen.
My use case is to add new banks dynamically to the network.
I got a hint that configtxlator could help with this.
Could you please provide me with some insights? I haven't fully understood the concept of configtxlator.
Has joined the channel.
@narayanprusty: Kafka _should not_ keep messages for only 7 days, until we work on pruning support. This is covered in the documentation: https://github.com/hyperledger/fabric/blob/release/docs/source/kafka.rst
@kostas so you mean if an OSN is shut down for a few weeks and then started again, it will read the missed blocks from the Kafka partition. And Kafka is now keeping the ledger forever.
@narayanprusty: Correct.
@kostas Do peers sync only from peers of the same organization and the orderer they are connected to, or do they also sync from peers of other organizations on the same channel?
Basically I want to know if peers of different organizations connect to each other and sync up.
@narayanprusty: They can. More details on that operation can be provided by the folks in #fabric-gossip
@kostas Thanks for the answers. I wanted to know a bit about membership. When I generate crypto files with the cryptogen tool, I don't see that all of the crypto files are generated from a single root CA. They are all just pub/priv keys of different orgs which are put in the genesis block. So how does authentication for adding/removing organisations from the network work? Can any organisation that is part of the network add/remove any other organisation?
> Can any organisation who is part of the network can add/remove any other organisation
@narayanprusty: No, policies on the `Channel` config tree for the channel decide who gets to modify which part of the tree.
For instance, adding orgs would imply that the `/Channel/Application/Admins` policy (which by default is the majority of all channel org admins) needs to agree to that change.
How can I configure who can remove/add or who are the admins? How can I configure policies on the network?
First, some reading is required:
https://github.com/hyperledger/fabric/blob/release/docs/source/policies.rst
https://github.com/hyperledger/fabric/blob/release/docs/source/configtx.rst
https://github.com/hyperledger/fabric/blob/release/docs/source/configtxlator.rst
If you're still left with questions, post here.
@kostas Thanks for all the help. I will go through all the docs and post if I have any questions.
Has joined the channel.
Can anyone tell me how to change genesis.block, which is in ".block" format, to ".pb" format, so that I can follow the steps given in this link to reconfigure my channel: http://hyperledger-fabric.readthedocs.io/en/latest/configtxlator.html
Kindly let me know any possible methods.
@jyellick I feel you can help me.
@rohitrocket that distinction is in name only, as both should be protobuf structures
Sorry I didn't get you :(
You mean to say both should work ?
whether it is .block or .pb, both should be serialized protobuf structures
cat them to console and verify they are binary
and not json
okay. one more help
I have set up the network using BYFN. I haven't yet tried the node SDK. Would this transformation thing still work? I mean if I cURL the thing (configtxlator)
within some container which has this .block file?
@rohitrocket Yes, the `configtxlator` is not network specific. It simply translates between proto and JSON. with no dependency on your network configuration.
Okay :) thanks
.block format would work, right, instead of .pb format?
`.block` could more appropriately be named `.block.pb`. When the `.block` name was conceived, there was no notion of blocks which were JSON encoded instead of protobuf encoded, so no distinction was made.
They are both protobuf encoded.
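Since both are protobuf encoded, the "cat it to console and verify it is binary, not JSON" check above can also be done programmatically. A heuristic sketch (`looksLikeJSON` is a made-up helper for illustration, not a Fabric or configtxlator API):

```go
package main

import (
	"bytes"
	"fmt"
)

// looksLikeJSON is the programmatic version of "cat it and check":
// a JSON-encoded config block starts with '{' after optional
// whitespace, whereas a serialized protobuf block starts with a
// binary field-tag byte. (Heuristic only; arbitrary proto bytes
// could in principle begin with '{'.)
func looksLikeJSON(data []byte) bool {
	trimmed := bytes.TrimLeft(data, " \t\r\n")
	return len(trimmed) > 0 && trimmed[0] == '{'
}

func main() {
	fmt.Println(looksLikeJSON([]byte(`{"header": {}}`))) // true: JSON-encoded
	fmt.Println(looksLikeJSON([]byte{0x0a, 0x12, 0x04})) // false: protobuf-style bytes
}
```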
.block is automatically created while following BYFN.
anyways thanks a lot
Has joined the channel.
Has joined the channel.
Apart from the solo consensus algo, do we support PBFT in v1?
no
So we have only solo ?
Hi guys, I am trying to modify a .block file into an updated version using configtxlator and the jq tool. The objective is to add a whole new organization to the channel. I am not able to figure out where all the changes should be made in order to meet my objective, since the converted JSON file is very hard to interpret. Can @jyellick @yacovm or anyone else help me? I am posting my converted JSON file here.
please upload to pastebin
or a github gist
okay
https://pastebin.com/26egadmp
@yacovm any info ?
@jyellick is infinitely more proficient than I am in these configuration issues
@yacovm when will he be active here ?
I think in 6 hours or so
but cannot promise anything
@vu3mmg - solo is not consensus, it is just one centralized node. but there is crash-tolerant consensus with Kafka. see here http://hyperledger-fabric.readthedocs.io/en/latest/kafka.html
for more background on consensus in permissioned blockchains, read this https://arxiv.org/abs/1707.01873
OK, thanks. One more query: with solo, do we do blind replication?
So we need kafka , if we move to production ?
if you want a notion of consensus that goes beyond running a single node, then it's kafka, yes.
(running = trusting a single node)
if I understood correctly, here we are differentiating consensus from replication?
i dont understand you here, sorry.
if we run a network with consensus as solo and create a transaction
the transaction will be endorsed and propagated to the network
?
so that if peer0 created the txn
the data will be replicated across peer2
?
So we are doing just network replication then ?
sorry, still don't understand. the model of fabric is much richer than related platforms where all data is replicated to all nodes and "validated" by all nodes. can you please read the architecture sections of the doc and ask w.r.t. to that? http://hyperledger-fabric.readthedocs.io/en/latest/
@yacovm but have you tried your hands in any kind of configuration of json file ?
what do you mean tried my hands?
have you tried configuring any such json file and made an update ?
I tried... but I ended up using a prepared script, which made my life easier
https://github.com/sandp125/FabricNodeAPI_V1
Oh, you used a ready-made API for updating the channel configuration.
yes
Just a few more insights.
I won't go and use that ready-made API myself, but what kind of changes were you able to make? I mean, was adding a whole new organisation possible?
or some different kind of changes?
@cca88 Thank you, I read that (http://hyperledger-fabric.readthedocs.io/en/latest/fabric_model.html#consensus). Please correct me if I am wrong: you were hinting at the endorsement model for reaching consensus for committing a txn. (From the doc: "In a nutshell, consensus is defined as the full-circle verification of the correctness of a set of transactions comprising a block.")
> I mean was adding a whole new organisation was possible ?
That is what the script does
adds org3 to a channel
and then removes it
cool :)
thanks @yacovm
Has joined the channel.
Has joined the channel.
@yacovm one naive question: can we use cryptogen to generate certificates for a new peer in an org? How is this done? Could you please give me some conceptual idea? How do the other existing and running peers recognize this new peer?
no, it doesn't support that. You need to manually make a certificate with openSSL and sign it with the private key of the CA
@yacovm I am confused between MSP and fabric CA
can you please help me get the concepts crystal clear? I mean, is fabric CA part of the MSP?
There is a good doc about MSP
https://hyperledger-fabric.readthedocs.io/en/latest/msp.html
Yeah I went through both the docs
Unfortunately confused.
begging for help here particularly you to throw some light on the differences.
so the MSP is a module inside the peer or the ordering service
it consumes certificates and policies
and processes the rules dictated by the configuration to say if something is allowed or not
the fabric CA is... just a CA, i.e it generates certificates, etc.
so, in short, their functions are mutually exclusive?
but somehow they are interdependent because of one thing: "certificates"?
pardon me if I am very wrong.
Has joined the channel.
Hi guys, I'm working on a Fabric 1.0 POC and my manager asks if we can demo what happens when consensus is not reached, to prove that the data on the ledger is immutable. Is there any way to change the ledger on one peer and get a consensus error?
@rolo Consensus on transaction order occurs in the ordering service. Consensus on the validity of the transaction occurs in the endorsement phase. My suspicion is that you would be best off writing a non-deterministic chaincode, installing it on two different peers with an endorsement policy requiring 2 endorsements. Then you could show that if these endorsements do not match, the transaction is not committed even if ordered.
Hello, I have an issue trying to create 3 ordrerers: https://stackoverflow.com/questions/45191760/how-to-define-3-orderers-in-hyperledger-fabric-1-0
@szoghybe: Left a comment.
@kostas In the docker-compose file called in the networkUp() of byfn.sh, `docker ps` lists the following: 1x fabric-tools:1.0.0, 4x fabric-kafka:1.0.0, 4x fabric-peer:1.0.0, 2x fabric-ca:1.0.0, 4x fabric-couchdb:1.0.0, 3x fabric-zookeeper:1.0.0
```
(...)
  kafka0:
    container_name: kafka0.example.com
    extends:
      file: dc-orderer-kafka-base.yml
      service: kafka
    environment:
      - KAFKA_BROKER_ID=0
      - KAFKA_MIN_INSYNC_REPLICAS=2
      - KAFKA_DEFAULT_REPLICATION_FACTOR=3
      - KAFKA_ZOOKEEPER_CONNECT=zookeeper0:2181,zookeeper1:2181,zookeeper2:2181
      - GODEBUG=netdns=cgo
      - CORE_LOGGING_LEVEL=debug
    depends_on:
      - zookeeper0
      - zookeeper1
      - zookeeper2
  kafka1:
    container_name: kafka1.example.com
    extends:
      file: dc-orderer-kafka-base.yml
      service: kafka
    environment:
      - KAFKA_BROKER_ID=1
      - KAFKA_DEFAULT_REPLICATION_FACTOR=3
      - KAFKA_MIN_INSYNC_REPLICAS=2
      - KAFKA_ZOOKEEPER_CONNECT=zookeeper0:2181,zookeeper1:2181,zookeeper2:2181
      - GODEBUG=netdns=cgo
      - CORE_LOGGING_LEVEL=debug
    depends_on:
      - zookeeper0
      - zookeeper1
      - zookeeper2
(...)
  zookeeper0:
    extends:
      file: dc-orderer-kafka-base.yml
      service: zookeeper
    environment:
      - ZOO_MY_ID=1
      - ZOO_SERVERS=server.1=zookeeper0:2888:3888 server.2=zookeeper1:2888:3888 server.3=zookeeper2:2888:3888
      - GODEBUG=netdns=cgo
      - CORE_LOGGING_LEVEL=debug
  zookeeper1:
    extends:
      file: dc-orderer-kafka-base.yml
      service: zookeeper
    environment:
      - ZOO_MY_ID=2
      - ZOO_SERVERS=server.1=zookeeper0:2888:3888 server.2=zookeeper1:2888:3888 server.3=zookeeper2:2888:3888
      - GODEBUG=netdns=cgo
      - CORE_LOGGING_LEVEL=debug
  zookeeper2:
    extends:
      file: dc-orderer-kafka-base.yml
      service: zookeeper
    environment:
      - ZOO_MY_ID=3
      - ZOO_SERVERS=server.1=zookeeper0:2888:3888 server.2=zookeeper1:2888:3888 server.3=zookeeper2:2888:3888
      - GODEBUG=netdns=cgo
      - CORE_LOGGING_LEVEL=debug
```
Please use a service like pastebin or a GitHub Gist for these files. Paste the link here.
When I bring the network down, it lists the orderers:
```
Removing orderer1.example.com ... done
Removing orderer2.example.com ... done
Removing orderer0.example.com ... done
```
Hello All,
I wanted to know if we could find out whether a kafka consumer has unconsumed messages, without having to explicitly consume them. Are there functions available for that? Am referring to the code in
https://github.com/hyperledger/fabric/blob/master/orderer/kafka/chain.go
line number: 224,
```
case in, ok := <-chain.channelConsumer.Messages():
```
The code here uses a for-select format that runs when there are messages to be consumed, but it also consumes the messages. I was simply looking for a check that tells if there are unconsumed messages in a topic without actually consuming the messages. I was trying to look at this file:
https://github.com/hyperledger/fabric/blob/master/vendor/github.com/Shopify/sarama/consumer.go , to write such a code.
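For reference, sarama's `PartitionConsumer` does expose a `HighWaterMarkOffset()` method, which returns the offset that will be assigned to the next message the broker appends; comparing it against the offset you would read next tells you whether anything is pending, without consuming. A broker-free sketch of just that comparison (the function name here is illustrative, not part of Fabric or sarama):

```go
package main

import "fmt"

// hasUnconsumed reports whether a topic/partition still holds messages the
// consumer has not read yet. In sarama terms, highWaterMark would come from
// PartitionConsumer.HighWaterMarkOffset() (the offset that will be assigned
// to the next message produced), and nextOffset is the offset the consumer
// would read next.
func hasUnconsumed(highWaterMark, nextOffset int64) bool {
	return highWaterMark > nextOffset
}

func main() {
	// Broker will produce at offset 12 next; consumer will read 9 next,
	// so offsets 9, 10, 11 are still pending.
	fmt.Println(hasUnconsumed(12, 9))  // true
	fmt.Println(hasUnconsumed(12, 12)) // false (fully caught up)
}
```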
Message Attachments
Tar file of my folder containing all yaml files and byfn.sh
Trying to create 3 orderers, getting 2017-07-19 20:12:14.282 UTC [grpc] Printf -> DEBU 003 grpc: addrConn.resetTransport failed to create client transport: connection error: desc = "transport: Error while dialing dial tcp: lookup orderer0.example.com: no such host"; Reconnecting to {orderer0.example.com:7050
Looks like the creation of orderers is failing silently; they are not listed when I do `docker ps | awk '{ print $1 " " $2 }'`:
```
CONTAINER ID
6b575771e116 hyperledger/fabric-tools
4bc44ee9a393 hyperledger/fabric-kafka
8ef99fcd56b1 hyperledger/fabric-peer
dcf90f9275b9 hyperledger/fabric-kafka
01e2f1bf23c6 hyperledger/fabric-kafka
241361566a78 hyperledger/fabric-kafka
8c9adb6744af hyperledger/fabric-peer
3b2981c8cb5e hyperledger/fabric-couchdb
c3d774829fed hyperledger/fabric-zookeeper
26922d7ce677 hyperledger/fabric-couchdb
9a7dd067d5a9 hyperledger/fabric-couchdb
d272b24fc12b hyperledger/fabric-ca
f2f02e5e06f3 hyperledger/fabric-ca
1c71c44a347b hyperledger/fabric-zookeeper
625e5a0499a6 hyperledger/fabric-zookeeper
afea69995f21 hyperledger/fabric-couchdb
```
TIA!
@Rachitga: This question belongs to the sarama repo: https://github.com/Shopify/sarama
@szoghybe: In the comment I left in S/O I asked for the output of `docker ps -a` (notice the `-a`). Can you please provide it here? (Again use a service like Pastebin or GitHub Gist and just provide the link containing the snippet here.)
Welcome to #fabric-consensus. Questions here should be related to either the ordering service code and its APIs (Broadcast/Deliver), configuration transactions, or the ordering service consensus plugins (Solo/Kafka/SBFT). Before posting your question, please take time to ensure that your question is precise and concise, and use a service like Pastebin or GitHub Gist for all log outputs that you wish to reference. For example: Bad question: Why do I get the error `BAD_REQUEST`? Good question: Using `fabric-examples/first-network/byfn.sh`, when submitting the channel creation as `Admin@org1.example.com` it succeeds, but when using `User1@org1.example.com` it fails with `BAD_REQUEST`. (Full log can be found here: https://pastebin.com/LFGNB88a) Why does this second request fail?
hi @kostas correct me if i am wrong... while bootstrapping the orderer i had to create the genesis block that contains information about all the consortiums/channels (e.g. Consortiums: SampleConsortium). So when the application creates a new channel using the channel artifacts (mychannel.tx generated from SampleConsortium), the orderer identifies the channel via the consortium group defined in the genesis block. Is there a way i can create a new channel whose consortium is not defined in the genesis block? Or a way to edit the genesis block of the orderer so i can add the new consortium to the consortiums group? Or is it necessary to define all the consortiums in the Consortiums group while bootstrapping the orderer?
@shubhamvrkr You may define a channel with any subset of the consortium definition. For instance, if you define consortium `foo` with members `A`, `B`, `C`, you may create channels with `A` and `B`; with just `B`; with `A`, `B`, and `C`; or any other combination.
You may always modify the consortium definition after bootstrap via a config transaction. You may also define new consortiums through the same mechanism.
You can see in the `bddtests` that an orderer is bootstrapped with no consortium, then a new one is defined, and channels created based on it.
thanks @jyellick ..can u please provide me the file link under bddtest?
@shubhamvrkr https://github.com/hyperledger/fabric/blob/release/bddtests/features/bootstrap.feature#L93-L97
thanks a lot @jyellick
@jyellick I have a serious issue here and I think you can help me.
I want to solve a problem of reconfiguring channel by adding a channel dynamically.....on the BYFN sample.
*by adding a new organization
Could you please let me know whether the following procedure is the right one
I have to update genesis.block using configtxlator
Genesis.block was created by configtxgen earlier
configtxlator should be run from my linux host machine where the containers for peers and orderers are running
So that the existing peers and orderers need not worry about new org addition
I will send the genesis block binary from the directory where genesis.block is located to configtxlator
Using configtxlator I have to decode the genesis.block to a human readable format @jyellick
Are there any API's in shim layer to create Index for couchDB queries or it has to be done using Curl only?
@rohitrocket This is not correct. The key is to send a configuration update transaction. You may see examples here: https://github.com/hyperledger/fabric/blob/master/examples/configtxupdate/README.md#reconfiguration-example
@niteshsolanki This is not a #fabric-consensus related question, you might try #fabric-peer-endorser-committer
@jyellick I went through the reconfiguration example... but I am not able to figure out where in the human-readable .json file the changes should be made to add a whole new organization.
Has joined the channel.
Has joined the channel.
Hello All,
I was trying to find out what it would take to have two consumers instead of the one consumer present in the current implementation.
Code is in: https://github.com/hyperledger/fabric/blob/master/orderer/consensus/kafka/chain.go
```
type chainImpl struct {
consenter commonConsenter
support multichain.ConsenterSupport
channel channel
lastOffsetPersisted int64
lastCutBlockNumber uint64
producer sarama.SyncProducer
parentConsumer sarama.Consumer
channelConsumer sarama.PartitionConsumer
halted bool // For the Enqueue() calls
exitChan chan struct{} // For the Chain's Halt() method
startCompleted bool // For testing
}
```
I can create two channel consumers in the above structure to do so, would I need to create two parent consumers? What is the relation between a parent consumer and a channel consumer?
Also I wanted that the consumers work independent of each other, that is if i consume a message in a particular topic from one consumer, then i can read the same message in the same topic from the second consumer.
Can anyone help me regarding this?
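On the independence point: Kafka consumers never remove messages; each one only advances its own offset, so two consumers started at the same offset on the same topic/partition will each see every message. (In sarama, as far as I can tell, a single parent `Consumer` refuses to open the same topic/partition twice, so you would likely need a second parent consumer for the second read.) A minimal sketch of the independent-cursor model, with toy types standing in for the real Kafka machinery:

```go
package main

import "fmt"

// topicLog is a toy append-only log standing in for one Kafka topic/partition.
type topicLog struct{ msgs []string }

// cursor models an independent consumer position on the log. Because reads
// only advance the cursor's own offset, two cursors over the same log each
// see every message.
type cursor struct {
	log    *topicLog
	offset int
}

// next returns the message at the cursor's offset and advances the cursor;
// ok is false once the cursor has caught up with the log.
func (c *cursor) next() (msg string, ok bool) {
	if c.offset >= len(c.log.msgs) {
		return "", false
	}
	m := c.log.msgs[c.offset]
	c.offset++
	return m, true
}

func main() {
	l := &topicLog{msgs: []string{"tx1", "tx2"}}
	a := &cursor{log: l} // both cursors start at offset 0
	b := &cursor{log: l}

	ma, _ := a.next() // consumer A reads tx1 ...
	mb, _ := b.next() // ... and consumer B independently reads tx1 as well
	fmt.Println(ma, mb)
}
```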
@kostas pastebin of 3 orderers issue at https://pastebin.com/MQcCiF6b and the tar attached above in rocketchat which contains all my yaml and sh files. I see the orderers failed. How to debug that?
@Rachitga: All of these questions now move deep into `sarama` territory and as such, the sarama repo is a much better venue for them.
@kostas , thanks for your help, I will ask in the sarama repo for more knowledge about this.
Message Attachments
Has joined the channel.
Message Attachments
@jyellick & others any info would be highly appreciated
I followed this tutorial https://github.com/hyperledger/fabric/blob/master/examples/configtxupdate/reconfig_membership/script.sh
Do you have TLS enabled on your peer?
Do you have TLS enabled on your orderer?
172.19.0.4:7050 is the ip of orderer container and port....I am executing this peer channel update command from cli container
I have to check it...I mean this sample network is purely BYFN on official doc.
Yes, the sample network has TLS on by default.
You will need to pass in the orderer TLS CA to that command
See the `--cafile` option
okay.
sorry but how to know orderer TLS CA ?
@jyellick
you mean some path to .pem file ?
Yes. For the BYFN setup, it will be a file named `tlsca.example.com-cert.pem`
You may look at the `peer channel create` command which takes the same flag
(and was run by the byfn.sh script earlier)
Message Attachments
yeah I did what you told me to do
I ran this command
```
peer channel update -f firstchannelconf_update_env.pb -c firstchannel -o 172.19.0.4:7050 --cafile ./crypto/ordererOrganizations/example.com/tlsca/tlsca.example.com-cert.pem
```
@jyellick and above screenshot shows the error
Is it something to do with maybe wrong IP address ?
I do not think so, this looks like a TLS problem to me still
Oh
But I haven't made any changes to such configuration
I wonder if you may not use the IP address; you may need to use the hostname for TLS hostname auth
Please look at how the `peer channel create` command is invoked
Message Attachments
```
peer channel update -f firstchannelconf_update_env.pb -c firstchannel -o orderer.example.com:7050 --cafile ./crypto/ordererOrganizations/example.com/tlsca/tlsca.example.com-cert.pem
```
@jyellick in the peer channel create it was "orderer.example.com:7050"
so I made that change to my command still some error ^
What does the tail of the orderer log indicate?
@kostas Found a couple of errors, the e2e.yaml wasn't being replaced. Also a typo or two.
let me check @jyellick
Message Attachments
@jyellick
I have shared the screenshot here.
The log indicates that this is indeed a TLS problem
Has joined the channel.
But as far as I know I haven't made any changes anywhere. Still can you please let me know the steps to resolve TLS problem ?
Please pass `--tls true` to that command
@szoghybe: My plan was to have a look at this later today, please keep me posted if you make any progress so that we don't duplicate effort. I suspect a configuration error that causes the orderers to crash, or sth along these lines.
Hey all, is there any documentation on how Kafka consensus works?
Message Attachments
Message Attachments
@rishabh1102: Do you mean how Kafka does it internally, or how the Kafka-based orderer uses Kafka?
@jyellick above was the result of adding --tls true :)
Both
But more on how kafka does it internally
@rohitrocket This indicates that you have not collected sufficient signatures for your config update. It requires a signature from an admin of each org
(You have only signed it with one)
@rishabh1102: OK, for Kafka did you have a look at the Kafka website? They have *one* page with documentation and the answer is in there.
please tell me how to resolve this problem @jyellick ? I have followed that github reconfigmembership link anyways.
Actually I am a whole newbie pardon if I am taking much of your time.
@rohitrocket The example is for a channel which has only one member, so only requires one signature to modify
The peer CLI does not have the ability to sign with more than one signature
okay. so inorder to run how do I take other signatures ?
*inorder to resolve
@jyellick
You would need to do this using one of the SDKs
If you'd like, you can open an enhancement request for the peer CLI to support this workflow
@jyellick okay just one just out of topic question ? When are you most active in this chat community ? I think I won't get your second line unless you elaborate it to me :(
@rohitrocket I am in the ET US time zone (New York time), and tend to be most active here 10am-5pm and 9pm-2am, but it varies
We track issues in the JIRA system https://jira.hyperledger.org/browse/
You may create a new issue, requesting an improvement to the peer CLI, to support a workflow where multiple signatures are required on a configuration transaction
You may paste it here and I will double check that it is in a condition that someone can implement a fix for you
okay huge thanks for the help. I will see :)
In byfn.sh, function replacePrivateKey() does:
```
# Copy the template to the file that will be modified to add the private key
cp ./docker-compose-e2e-template.yaml ./docker-compose-e2e.yaml
```
Is docker-compose-e2e.yaml implicitly used? I can't see any reference to it afterwards and there is some repetition between the docker-compose -f file I am using and docker-compose-e2e.yaml...
^^ @Ratnakar
@szoghybe No, docker-compose-e2e.yaml is not used anywhere in byfn.sh. It contains the fabric-ca section and was intended for use against the node-sdk.
Message Attachments
Message Attachments
Message Attachments
Message Attachments
```docker ps -a``` at https://pastebin.com/cxzq4gJ5
@Ratnakar that might be my issue then ? the docker-compose-myfile.yaml should contain the CA enviroment with the `CA1_PRIVATE_KEY` and `CA2_PRIVATE_KEY` string that needs to be substituted?
By docker-compose-myfile.yaml I am referring to the docker-compose -f I am using in the byfn.sh function networkUp() -- my zip contains a tar of all my yaml and sh files BTW. Any help greatly appreciated, 3 resources have been banging away at this for days.
So I guess in the byfn.sh function replacePrivateKey () I should comment out line
```cp docker-compose-e2e-template.yaml docker-compose-e2e.yaml```
and replace docker-compose-e2e.yaml with docker-compose-myfile.yaml in
```sed $OPTS "s/CA1_PRIVATE_KEY/${PRIV_KEY}/g" docker-compose-e2e.yaml```
and
```sed $OPTS "s/CA2_PRIVATE_KEY/${PRIV_KEY}/g" docker-compose-e2e.yaml```
and remove the commented-out environment lines of ca0 and ca1 in docker-compose-myfile.yaml
and remove the commented-out environment lines of `ca0` and `ca1` in docker-compose-myfile.yaml
???
@szoghybe Could you please use the backtick notation when posting? It is very difficult to parse as normal text. Text with a backtick on the left and then the right will appear like `this`. If you have a full line of terminal output or script quote, please put three backticks on a line by themselves, then the code, then three more backticks on their own line. That will make the output look like
```
this
```
You may edit your messages by mousing over them, clicking on the gear icon, and then clicking the pencil icon.
@jyellick understood.
@szoghybe You can see in the `docker ps -a` that you have three ordering containers which started, but immediately exited:
```
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
6a52709bc762 hyperledger/fabric-orderer "orderer" 14 minutes ago Exited (1) 14 minutes ago orderer2.example.com
343280f88987 hyperledger/fabric-orderer "orderer" 14 minutes ago Exited (1) 14 minutes ago orderer0.example.com
d5f00eb8c944 hyperledger/fabric-orderer "orderer" 14 minutes ago Exited (1) 14 minutes ago
```
yes, unsure why.
Please do `docker logs 6a52709bc762` and put the result into a pastebin
@jyellick https://pastebin.com/wiFWKFkV
As you can see, there is your problem:
```
2017-07-20 18:56:15.342 UTC [orderer/main] initializeLocalMsp -> CRIT 002 Failed to initialize local MSP: Could not load a valid signer certificate from directory /var/hyperledger/orderer/msp/signcerts, err stat /var/hyperledger/orderer/msp/signcerts: no such file or directory
```
You must be failing to properly include the crypto material from the `byfn.sh` `crypto-config` folder into the orderer image.
I have a generated file
```crypto-config/ordererOrganizations/example.com/orderers/orderer.example.com/msp/signcerts/orderer.example.com-cert.pem```
can't see how it would make its way into the orderer image (?)
Message Attachments
How can I use a different user name other than User1 ?
Has joined the channel.
Hi, suppose that we have 3 orgs on one channel, and we defined that org2 and org3 should endorse the tx. When a user from org1 sends a query proposal to the channel (only reading the data), would it only be sent to the peers of org1 or to all the peers on the channel?
endorsing will be done by the peers assigned to the chaincode; you only need to send the query to your peer, I think it's OK
Has joined the channel.
users
getting an error `trailing args detected` when trying to execute command
```
./peer channel fetch config config_block.pb -c testchainid --cafile /fabric-test/crypto-config/ordererOrganizations/example.com/orderers/orderer.example.com/tls/ca.crt -o orderer.example.com:7050 --tls /fabric-test/crypto-config/ordererOrganizations/example.com/users/Admin@example.com/tls/server.crt
```
Has joined the channel.
Has joined the channel.
Has joined the channel.
[ ](https://chat.hyperledger.org/channel/fabric-consensus?msg=k5HCEbYZ4RikxEbh2) @xinpei8 hi, when you execute a query there is no endorsement because the peer only reads its stateDB and returns the result. But if you execute an Invoke function many peers will endorse the transaction (depending on the endorsement policy).
@szoghybe [ ](https://chat.hyperledger.org/channel/fabric-consensus?msg=Puh49wMmWEFmb63e7)
Per your docker output, you are trying to start 3 orderers. You should have defined 3 orderers in your crypto config with appropriate names (like orderer0, orderer1, etc.). These certificates are generally passed to the docker containers via shared volumes in the compose.
@n91 In the `crypto-config.yaml` file you may define the number of users created, but they following a naming convention. If you wish to define other usernames, you may use #fabric-ca [ ](https://chat.hyperledger.org/channel/fabric-consensus?msg=67fvv7gJzbjD8wcn4)
@shubhamvrkr Pass `--tls true --cafile
@jyellick my crypto-config.yaml mentions orderer0, orderer1, orderer2 as per https://pastebin.com/VYi4TxEH Am I doing something wrong?
Did you run the `./byfn.sh -m generate` after modifying this file?
You said you have a single generated file, for `orderer.example.com`, I would expect if you modified the `crypto-config.yaml` and regenerated the artifacts, you would have 3 sign-certs, at
```
crypto-config/ordererOrganizations/example.com/orderers/orderer{0,1,2}.example.com/msp/signcerts/orderer.example.com-cert.pem
```
yes, I always generate, up, down. I have:
$ ls crypto-config/ordererOrganizations/example.com/orderers/orderer0.example.com/msp/signcerts/
orderer0.example.com-cert.pem
$ ls crypto-config/ordererOrganizations/example.com/orderers/orderer1.example.com/msp/signcerts/
orderer1.example.com-cert.pem
$ ls crypto-config/ordererOrganizations/example.com/orderers/orderer2.example.com/msp/signcerts/
orderer2.example.com-cert.pem
Ah, that is what I would expect. Please check your compose file to make sure the orderers are correctly binding in these dirs as volume mounts
For orderer0 I have ```volumes:
- ../channel-artifacts/genesis.block:/var/hyperledger/orderer/orderer.genesis.block
- ./volumes/orderer0:/var/hyperledger/bddtests/volumes/orderer
- ../crypto-config/ordererOrganizations/example.com/orderers/orderer0.example.com/msp:/var/hyperledger/orderer/msp
- ../crypto-config/ordererOrganizations/example.com/orderers/orderer0.example.com/tls/:/var/hyperledger/orderer/tls
```
orderer1 and orderer2 are analogous
This looks correct. The error you pasted was:
```
err stat /var/hyperledger/orderer/msp/signcerts: no such file or directory
```
Which implies that there is no cert in the `orderer/msp/signcerts` directory
Either the volume path is not being specified correctly, or the contents of the volume path are wrong. From what you've pasted, both look correct, so you will need slowly step through the compose to see which it is.
@jyellick Should all three orderers have this config line in volumes? I have it for orderer0 only (?)
```- ../channel-artifacts/genesis.block:/var/hyperledger/orderer/orderer.genesis.block```
(They should.)
@kostas thanks, I updated the config. Still same problem tho.
(mumble, mumble)
BTW, how do you "step through the compose" ?
What is the correct syntax to declare `Orderer.Kafka.Brokers` and in which part of which yaml file should I do that? I seem to be missing that.
@szoghybe: https://github.com/hyperledger/fabric/blob/master/sampleconfig/configtx.yaml#L173...L176
Basically, edit `configtx.yaml` as shown in the link above.
OK, thanks, I had that. I thought it was something else.
Message Attachments
any1 here having issues with fabric-peer loading msp/tlscacerts into trust store?
Hi @kostas and @jyellick
Hi @kostas and @jyellick , what's this configuration on line 107 in configtx.yaml used for? https://github.com/hyperledger/fabric/blob/release/examples/e2e_cli/configtx.yaml#L107
@Glen To encode the addresses of the ordering service nodes into the genesis block.
@Glen In this way, if a new orderer is added to the network, the channel configuration may be updated so that all network members are aware of the new address
Hi everyone, I am wondering how SBFT fits in Fabric? In other words, which Fabric components will use SBFT? Thanks!
@qizhang SBFT is intended to be one of the consensus plugin types for the fabric ordering service
@thanks
Thanks @jyellick. My understanding is that SBFT kicks in only when there are multiple orderers, correct?
@qizhang In order to use SBFT, you would need to use multiple orderers. However, you may run multiple orderers without SBFT (in fact, the only supported way to run multiple orderers at the moment is via the Kafka consensus backend)
I see. Do multiple orderers use SBFT to reach agreement on what the next block to be broadcast to the committing peers will be?
@qizhang Please see http://hyperledger-fabric.readthedocs.io/en/latest/arch-deep-dive.html for a more thorough discussion.
In brief however, the peers simulate proposals, producing proposal results which can be applied deterministically. These results are wrapped into a transaction and sent to the ordering service. The ordering service consents upon message order (this could be Solo, Kafka, or in time, SBFT) and batches the transactions together into a block. The peer then receives these blocks, and applies the transaction results.
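The batching step described above can be sketched as a toy block cutter. This mirrors only the message-count rule, not Fabric's actual `blockcutter` package (which also cuts on byte size and on a batch timeout):

```go
package main

import "fmt"

// cutBlocks batches an ordered stream of transactions into blocks of at most
// maxMessageCount transactions, preserving arrival order. The ordering
// service agrees only on order; the block boundaries then follow
// deterministically from the batching rules.
func cutBlocks(txs []string, maxMessageCount int) [][]string {
	var blocks [][]string
	for len(txs) > 0 {
		n := maxMessageCount
		if len(txs) < n {
			n = len(txs)
		}
		blocks = append(blocks, txs[:n])
		txs = txs[n:]
	}
	return blocks
}

func main() {
	blocks := cutBlocks([]string{"tx1", "tx2", "tx3", "tx4", "tx5"}, 2)
	fmt.Println(len(blocks)) // 3: [tx1 tx2], [tx3 tx4], [tx5]
}
```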
Got it, thanks @jyellick
Has joined the channel.
Hi all, is it possible to plug in your own consensus algorithm for the orderers? And where would that fit in the existing code? I would like to try some simple alternatives for the kafka ordering service. Thanks in advance!
Hey all, is there any way of specifying the policy besides using ANDs and ORs?
For example, I have 7 organisations, and I want the policy to be the following:
Any 4 out of 7
@jyellick @kostas Just updated https://gerrit.hyperledger.org/r/#/c/11613/ . Note that I added one more test which is *NOT* skipped, and it runs with only 10 tx. The purpose is to catch code changes in the orderer that require the benchmark test to be updated. pls take a look
[ ](https://chat.hyperledger.org/channel/fabric-consensus?msg=eqMcFJBb9cwEJixNi) @DaanN518 please take a look at `orderer/consensus`, particularly you probably want to take a look at `chain interface` here: https://github.com/hyperledger/fabric/blob/release/orderer/multichain/chainsupport.go#L52
Ok I will! Thanks a lot
[ ](https://chat.hyperledger.org/channel/fabric-consensus?msg=L5CZ3qfQL22e7PKKT) @rishabh1102 I'm not an expert on policies but a quick scan through http://hyperledger-fabric.readthedocs.io/en/latest/policies.html?highlight=policy suggests that AND and OR are actually expressed by NOutOf, so I think your scenario is definitely possible to implement.
Hello All,
Is there any use of the `lastOffsetPersisted` field when the deliver API is called? For now I have found this field being used in starting the consumers in `setupConsumerForChannel` function. So what I figure is that this field is used for starting up the consumers in the orderer with the correct offset. But is it being used by the committers to get blocks if it is out of sync?
@Rachitga The `Deliver` call never interacts directly with the Kafka consumer. The loose architecture is that `Broadcast` hands messages to the consenter (in this case Kafka); the consenter orders transactions via a Kafka producer and gets a list of totally ordered messages back from the consumer. It then batches these messages into blocks and writes them to the ledger. The `Deliver` call only queries the ledger.
@rishabh1102 Yes, @guoger is absolutely correct. The policy construct is based on the idea of NOutOf. The AND and ORs are actually expressed as "2 out of 2" and "1 out of 2" respectively. You may actually express very powerful policies by nesting these structures.
Consider: (2 out of [(1 out of [org1.Admin]), (2 out of [org2.Admin, org3.Admin, org4.Admin, org5.Admin, org6.Admin])])
This policy requires that org1.Admin always sign, and 2 of the remaining orgs (org{2,3,4,5,6}) have their admins sign as well.
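As a sketch of how such nested NOutOf policies evaluate, here is a hypothetical, simplified model (these are illustrative types, not Fabric's actual `cauthdsl`/`SignaturePolicy` structures):

```go
package main

import "fmt"

// Simplified model of a nested NOutOf signature policy: a node is either
// a leaf principal, or an "n out of" threshold over its children.
type policy struct {
	principal string   // non-empty for a leaf
	n         int      // threshold for an inner node
	children  []policy // sub-policies for an inner node
}

func signedBy(p string) policy { return policy{principal: p} }

func nOutOf(n int, children ...policy) policy {
	return policy{n: n, children: children}
}

// evaluate reports whether the given set of signers satisfies the policy.
func evaluate(p policy, signers map[string]bool) bool {
	if p.principal != "" {
		return signers[p.principal]
	}
	satisfied := 0
	for _, c := range p.children {
		if evaluate(c, signers) {
			satisfied++
		}
	}
	return satisfied >= p.n
}

func main() {
	// The example above: org1.Admin must sign, plus 2 of org{2..6}.Admin.
	pol := nOutOf(2,
		nOutOf(1, signedBy("org1.Admin")),
		nOutOf(2, signedBy("org2.Admin"), signedBy("org3.Admin"),
			signedBy("org4.Admin"), signedBy("org5.Admin"), signedBy("org6.Admin")),
	)
	fmt.Println(evaluate(pol, map[string]bool{"org1.Admin": true, "org2.Admin": true, "org3.Admin": true})) // true
	fmt.Println(evaluate(pol, map[string]bool{"org2.Admin": true, "org3.Admin": true}))                     // false: org1 missing
}
```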
@jyellick , thanks! So the last Offset Persisted exists only to restart the orderer with the correct consumer offset then? Or is there any other purpose of the last Offset Persisted.
^^ That's its only purpose.
@jyellick I am aware of this architecture, the last offset persisted field is explicitly written into the ledger with this statement,
```
encodedLastOffsetPersisted := utils.MarshalOrPanic(&ab.KafkaMetadata{LastOffsetPersisted: receivedOffset})
support.WriteBlock(block, committers, encodedLastOffsetPersisted)
```
Hence I was wondering if the deliver call is made for a block based on block number, or is it based on the last offset persisted seen till that point. Thanks @kostas for clearing that up.
Has left the channel.
@jyellick I followed the documentation to add an organization. How do I execute `peer channel update -f config_update_as_envelope.pb -c testchainid -o 127.0.0.1:7050` from the Java SDK?
Hey guys, how is the orderer's ssl hostname set up in the docker containers?
I keep getting `2017-07-26 00:27:21.492 UTC [grpc] Printf -> DEBU f87 grpc: Server.Serve failed to complete security handshake from` on my orderer logs
@n91 I wouldn't want to guide you wrong, so I will point you to the #fabric-sdk-java channel. I actually saw a snippet today of @rickr doing a full reconfiguration flow from inside java. I'm guessing he intends to publish this shortly.
@Asara In the standard e2e/byfn network, the orderer's hostname is `orderer.example.com`
You can see this defined via `crypto-config.yaml` as the hostname is `orderer` for the domain `example.com`
@jyellick Okay and there is no other place where this hostname/domain is set?
@Asara The error you are seeing is in the TLS negotiation. The client receives the server's TLS cert, then checks to make sure that the hostname it attempted to connect to matches the hostname in the TLS cert.
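That hostname check can be sketched with Go's standard `crypto/x509` (the throwaway self-signed certificate minted here is purely illustrative; the real client does this as part of the TLS handshake):

```go
package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"time"
)

// makeCert mints a throwaway self-signed certificate whose SAN list
// contains the given DNS name, standing in for the orderer's TLS cert.
func makeCert(dnsName string) *x509.Certificate {
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: dnsName},
		DNSNames:     []string{dnsName},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(time.Hour),
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	cert, err := x509.ParseCertificate(der)
	if err != nil {
		panic(err)
	}
	return cert
}

func main() {
	cert := makeCert("orderer.example.com")
	// The client compares the hostname it dialed against the names in the cert.
	fmt.Println(cert.VerifyHostname("orderer.example.com") == nil) // true: names match
	fmt.Println(cert.VerifyHostname("orderer.other.com") != nil)   // true: mismatch rejected
}
```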
I ask because I keep seeing this error:
```
2017-07-26 02:56:28.003 UTC [deliveryClient] StartDeliverForChannel -> DEBU 318 This peer will pass blocks from orderer service to other peers for channel mychannel
2017-07-26 02:56:31.004 UTC [ConnProducer] NewConnection -> ERRO 319 Failed connecting to orderer.example.com:7050 , error: context deadline exceeded
2017-07-26 02:56:31.004 UTC [deliveryClient] connect -> ERRO 31a Failed obtaining connection: Could not connect to any of the endpoints: [orderer.example.com:7050]
```
Hm. So the peer is essentially saying the TLS cert for the orderer is wrong?
Well, I would expect a different error from the peer
Are you certain your peer is attempting to connect to the orderer with TLS?
```
- CORE_PEER_TLS_ENABLED=true
```
The orderer log message indicates that the TLS handshake was not completed. I would have expected an error on the peer indicating that it broke off the connection for lack of a matching hostname, but instead the peer connection seems to timeout
is set for the peer, so I'd have to assume so.
yeah
and the weirder part is
```
# telnet orderer.example.com 7050
Trying 172.30.2.4...
Connected to orderer.example.com.
```
So its not a networking issue...
Alright, i'm going to head to bed. Hopefully I figure out a solution tomorrow :)
Thanks @jyellick
Sure thing, sorry we didn't get it worked out tonight, I will keep thinking on it
Has joined the channel.
https://chat.hyperledger.org/channel/fabric-questions?msg=EE3KNv3B63H9S63Tc
@Asara telnet "works" because you haven't sent any data. If you'd send data it'd terminate the connection.
Can you please try to do on your peer: `tcpdump port 7050 -w out.log` and then upload the output here?
Also what you can do is turn on the gRPC logging in the peer https://github.com/hyperledger/fabric/blob/release/sampleconfig/core.yaml#L54
Has joined the channel.
Hello, I want to know
how does orderer work? how do multiple orderers work together?
and if it is protected by attack?
and if it is protected against attack?
@akdj: In short, as the name implies, the ordering service receives transactions from clients that wish to _write_ to a channel (a shared log between the peers of the channel, basically), and comes up with the one single order in which all of these transactions (now packaged into blocks) should be applied to the channel. The ordering service also maintains an access control list to make sure that the channel can be read, written, and have its properties modified only by those who are supposed to. See http://hyperledger-fabric.readthedocs.io/en/latest/arch-deep-dive.html#ordering-service-nodes-orderers for more info.
In the Kafka case, multiple orderers work together by routing the incoming transactions to a Kafka cluster for ordering; see https://docs.google.com/document/d/1vNMaM7XhOlu9tB_10dKnlrhy5d7b1u8lSY8a-kVjCO4/edit and the included diagrams towards the end for more info.
In general though (and this will be the case for the BFT ordering service coming next), the orderers use a protocol (a set of rules) to collaborate and agree on how the incoming transactions will be ordered.
https://chat.hyperledger.org/channel/fabric-consensus?msg=79uQ3gEH6u4noTLYG
Define "attack".
@kostas An attack like denial of service?
If bad requests come in, the orderer will drop them, but that alone is not enough. Even in that case, for example, a stream has been opened, the request has been received and inspected, etc. These are all time-consuming operations, and if a node were to be flooded with tons of them, I suspect there is a valid chance it would crawl to a halt. Ultimately, this would probably come down to the ordering service node (OSN) administrator to prevent, by placing a load balancer or what-have-you before the OSN, or whitelisting IP ranges in a firewall to mitigate such attacks. (But we are now getting quickly outside my domain.)
ok :D thank you for all explanation
@yacovm Doesn't seem like `out.log` has any info in it
How would I update the core.yaml if I'm using byfn?
update the docker-compose
And change? `CORE_LOGGING_LEVEL=DEBUG` is set. Would I need to add a section?
is there a list of environment variables that the different components consume?
Grpc logging
Look at the core yaml
And uppercase all characters
Accordingly
Alright I figured out the TLS issue, apparently my docker instance couldn't access DNS.
But I now see this error. Can't seem to figure out whats causing it.
`2017-07-26 17:04:21.035 UTC [blocksProvider] DeliverBlocks -> ERRO 31a [mychannel] Error verifying block with sequnce number 1, due to Failed to reach implicit threshold of 1 sub-policies, required 1 remaining`
@Asara Great, glad you have made progress
In general, this error indicates that the submitter of your block request is not authorized to read from that channel
Can you paste more of your orderer log (ideally to a service like pastebin)?
Sure will do in ~5 min
As an aside, I've gotten byfn working before. What I'm doing now is I have taken that setup of compose files, split them to different servers (2 orgs, 1 peer for each org, and 1 orderer) so the same layout as byfn minus 1 peer each.
And am trying to get these services to communicate on different servers
Ah, got it, this would certainly explain your hostname resolution troubles
Yeap :)
I'm sure there are better patterns these days (Docker Swarm? Kubernetes?) but back in the day, I would follow the ambassador pattern with some success https://docs.docker.com/engine/admin/ambassador_pattern_linking/
Will fabric always be bound to docker? Will peer/orderer/ca never be standalone binaries?
Because that is the path I hope to move towards
Which is why this is my intermediate step
I had 0.6.5 running on multiple servers with a single instance in each. Just trying to get to a similar state with 1.0
I can't think of anything that limits us to running these as Docker containers. Am I missing something? (I may well be.)
I mean I know the peer needs docker in order to run chaincode, but are binaries provided? Or do we still have to compile them ourselves?
There are release binaries out there that are posted
For my personal development, I rarely use docker, always run the orderer and peer binaries locally
@jyellick I feel like they should be posted as a link on github/the website or something. Because I can't say they are advertised very well...
I personally had no idea
https://nexus.hyperledger.org/content/repositories/releases/org/hyperledger/fabric/hyperledger-fabric/
I agree, they aren't advertised terribly well. To find it I had to look it up in the samples bootstrap script
Crazy. That is awesome. Thanks @jyellick I'll make that next week's project :)
Is there also a binary for the ca out there?
@Asara not to my knowledge, but it seems like a reasonable request, maybe try asking in #fabric-release and see if this is in the roadmap?
I'll give it a go, thanks
Alright... On a deploy of chaincode I am now getting:
```
2017-07-26 19:14:58.490 UTC [dockercontroller] deployImage -> DEBU 3a5 Created image: dev-peer0.org1.example.com-test-1.0.0
2017-07-26 19:14:58.491 UTC [dockercontroller] Start -> DEBU 3a6 start-recreated image successfully
2017-07-26 19:14:58.492 UTC [dockercontroller] createContainer -> DEBU 3a7 Create container: dev-peer0.org1.example.com-test-1.0.0
2017-07-26 19:14:59.896 UTC [dockercontroller] Start -> ERRO 3a8 start-could not recreate container post recreate image: no such image
2017-07-26 19:14:59.896 UTC [container] unlockContainer -> DEBU 3a9 container lock deleted(dev-peer0.org1.example.com-test-1.0.0)
2017-07-26 19:14:59.896 UTC [chaincode] Launch -> ERRO 3aa launchAndWaitForRegister failed Error starting container: no such image
2017-07-26 19:14:59.896 UTC [endorser] callChaincode -> DEBU 3ab Exit
2017-07-26 19:14:59.896 UTC [endorser] simulateProposal -> ERRO 3ac failed to invoke chaincode name:"lscc" on transaction 3577f46d3cc73bbb8f0b5cb1b80e57d31ebe354997989dae1f144bd677cb8b5a, error: Error starting container: no such image
2017-07-26 19:14:59.896 UTC [endorser] simulateProposal -> DEBU 3ad Exit
2017-07-26 19:14:59.896 UTC [lockbasedtxmgr] Done -> DEBU 3ae Done with transaction simulation / query execution [7062aa76-d532-4f6e-ad56-46a516018de6]
2017-07-26 19:14:59.896 UTC [endorser] ProcessProposal -> DEBU 3af Exit
```
Or is this not a consensus issue? In which case i'll remove that and bring it elsewhere
Unrelated Ignore
In general, chaincode issues should go to #fabric-peer-endorser-committer (I would reply here/now, but looking at the log, nothing jumps out to me)
Thanks I'll move to there
Has joined the channel.
Is it possible to endorse a transaction outside of the chaincode, e.g. by checking the arguments or the payload?
@sqwerrels I don't follow the question. An endorsement is fundamentally a data structure, which could be constructed manually, but I'm not certain of the value.
Has joined the channel.
Hi all, I am running the _balance-transfer_ example and tried to modify _configtx.yaml_ by changing _BatchTimeout_ from 2s to 3600s, then used _configtxgen_ to generate _genesis.block_ and _mychannel.tx_. When running the _testAPIs.sh_ script, everything seems fine except that it cannot instantiate chaincode (response message: Failed to order the transaction. Error code: undefined) because of a timeout in the server. Has anyone tried something like this, and is there any solution? Thanks.
@Long To resolve the instantiation timeout, you can try to increase the *request-timeout* in the _node_modules/fabric-client/config/default.json_ file (as big as 300000 -- 5min).
file: "node_modules/fabric-client/config/default.json"
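For reference, the relevant fragment of that file looks roughly like this (a sketch only; the surrounding keys and the shipped default value may differ between SDK versions):

```json
{
  "request-timeout": 300000
}
```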
Thanks @vigneswaran.r. I changed the request-timeout in the config, but the error seems the same:
```
[2017-07-27 06:36:55.615] [ERROR] instantiate-chaincode - Failed to send instantiate transaction and get notifications within the timeout period. undefined
[2017-07-27 06:36:55.616] [ERROR] instantiate-chaincode - Failed to order the transaction. Error code: undefined
```
Has joined the channel.
@Long Please share a few more lines before this [ERROR] which may have some clue. Also share the lines from the peer side log corresponding to the instantiation time.
@jyellick So currently to endorse a transaction, the chaincode simulates the transaction and then returns a success or error. Is there a way to have an external check of either the arguments or the payload of the transaction and reject the proposal?
The chaincode simulates the proposal, encodes the proposal results, signs them (as an endorsement) and returns them to the client. The client aggregates endorsements from as many peers as required, then packages them into a transaction and sends it for ordering.
I'm not sure what sort of external check you wish. You may check in the chaincode whatever you like to determine whether or not the chaincode should endorse the proposal.
I'm wondering if it's possible to reject the proposal outside of the chaincode
@sqwerrels But where? At the client?
After rereading your comment, I think I've had a misconception about how endorsement works. I assumed that it was possible to check the results before signing them, and I didn't want to check the data via the chaincode because it was verifying that an address would exist. I think I will revise my model to check the data before sending out the proposal
Assuming you are acting as a client, you may inspect the proposal result returned by the peer. If you do not like the result you may choose not to send the transaction and the proposal will have no effect on the ledger. This is not a recommended mode of operation however, as it requires the client to gain knowledge of the proposal result structure
Has joined the channel.
Why do we have a constraint of a minimum of 4 Kafka nodes?
@dinesh.rivankar To ensure that no transaction which is acknowledged is lost, we need in sync replicas = 2 (ISR=2). To ensure there can be two ISRs available even when some broker crashes, we require that the replication factor = 3 (RF = 3). Additionally, because constructing a new channel requires creating a new Kafka partition (and all replicas must be available) we also require 1 additional node, so that RF=3 may be satisfied. Hence, 4 kafka brokers.
Has joined the channel.
@jyellick Thanks for the reply... why do we set replication factor as 3 ? Is it based on CFT which is 2f+1 to tolerate 1 node failure ?
@dinesh.rivankar: No, this formula does not apply here. Assume brokers B0, B1, B2, B3. The candidate ISR sets are subsets of the RF set. If you set RF = 2, then the Kafka controller picks 2 brokers (say B0, B1) as the RF set for a channel. The only possible ISR set here is (B0,B1), i.e. the RF set itself. Now imagine that B0 or B1 crashes. You no longer have an ISR set with |ISR|=2, so you cannot write to that channel anymore.
On the other hand, if you set RF=3, and the Kafka controller picks (B0, B1, B2) as the RF set, then the candidate ISR sets are all possible 2-of-3 combinations from the RF set, i.e. (B0, B1), (B0, B2), (B1, B2).
Now if any one broker crashes (say, B0 as in the example before), your cluster can still maintain an ISR set for this channel (the one with B1 and B2) and thus keep the channel writeable.
This is why:
- you need, at a minimum, an RF of 3 to exhibit crash-fault tolerance
- as I wrote above, the 2f+1 formula does not apply
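The arithmetic above can be sketched as a toy model (this is not Kafka code; `writable` and its parameters are illustrative names for the RF / min.insync.replicas reasoning):

```go
package main

import "fmt"

// writable models whether a channel's partition still accepts writes:
// the partition is assigned to rf brokers, writes require at least
// minISR of those replicas to be in sync, and crashedInRFSet brokers
// from the RF set are down.
func writable(rf, minISR, crashedInRFSet int) bool {
	alive := rf - crashedInRFSet
	return alive >= minISR
}

func main() {
	// RF=2, minISR=2: a single crash in the replica set blocks writes.
	fmt.Println(writable(2, 2, 1)) // false
	// RF=3, minISR=2: one crash is tolerated, the channel stays writable.
	fmt.Println(writable(3, 2, 1)) // true
}
```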
Hi team, sorry to ask, in 1.0.X will we be introducing PBFT or will it be SBFT? Is there a JIRA link I can read? Thanks!
@tennenjl 1.0.x is to be bug fixes only and introduce no new features
@jyellick Thanks, so if I assume it will be in 1.X, still looking for the same information if it is available.
@tennenjl Yes, I would expect it to be scheduled for some 1.X release. We are working to finalize the content for 1.1, and SBFT will definitely not fit within the scope of the other work already scheduled. We don't know which release SBFT will be included in yet
@Jyellick Thanks again.. I appreciate the help.
Has joined the channel.
when using kafka, do the orderers also keep a copy of complete ledger (all blocks) indefinitely? I thought I read somewhere that is configurable, i.e. to set a limit in the orderer - based on size or number of blocks - so the orderer could keep a number of blocks for quick responses rather than retrieving them from KB again should a peer request a certain range of blocks.
@scottz You are thinking of the RAM ledger option, which retains only a configurable number of blocks. This is useful for testing, but should never be used in production. At some point in the future we will need to implement pruning of the blocks in the orderer ledger so that they are not retained indefinitely, but for now, all blocks generated are retained on the local filesystem of the orderer indefinitely.
oh, yep. and that eliminates my followup questions. thanks!
so for one channel, there is a copy of the ledger maintained in each of the KBs in the replica set (3 in most of our configurations), and every orderer, and every peer that joined the channel. Right?
Correct apart from the KB part.
The brokers don't hold a ledger. Just a log of the transactions that then get used to construct a ledger at the orderers and peers.
@scottz To be more precise, every KB in the replica set maintains a list of all transactions in the channel. Each OSN deterministically converts the kafka partitions into a blockchain for that channel and stores the blocks locally in the ledger. Each peer which joins a channel retrieves and stores all blocks for that channel locally in their own ledger.
yes, with regards to the KB, I knew I was hand-waving a bit; you guys are too good! In any case, you have confirmed my understanding so I can provide guidance expectations for disk space usage.
@kostas Thanks for the explanation :)
Has left the channel.
Has joined the channel.
I am trying to run the make configtxlator command
I got the latest code for fabric from master branch
I get the below error
https://pastebin.com/ekVHCfRs
Has joined the channel.
@smita0709 Please do not paste large blocks like this to the channel. If you have a long section of code, please use a service like pastebin. I have modified your post to use pastebin, but please do this in the future yourself.
How did you get the code? The errors above indicate that perhaps you have a tar of the code, and did not retrieve it via a mechanism like `git clone`. The build depends on the availability of `git` and assumes the source tree was retrieved via `git`.
Has joined the channel.
For the Fabric-supplied Kafka and ZooKeeper nodes, are the container mount points that need to be made persistent /kafka/logs and /var/zookeeper, respectively?
@rsherwood: I don't quite get the question. Rephrase?
@kostas When defining a Kafka container I will need to make some storage an external volume. What directory should be the container mount point? E.g., what goes to the right of the colon in the following container definition, assuming I used the defaults for Kafka: `-v /vol1/xxx:/var/lib/kafka/data`. Then the same question for ZooKeeper.
@rsherwood: This should be entirely up to you no?
@kostas I think @rsherwood is asking "To achieve data persistence across container destruction and recreation, what directories must be stored as shared volumes?"
Ah, I was reading it the opposite way (i.e. where do I store the data in my system), which is why the question seemed odd.
This is pure Kafka/ZK question, and as such, there is thankfully a bunch of information available online already. For instance: http://docs.confluent.io/current/cp-docker-images/docs/operations/external-volumes.html
For the Fabric-supplied images, note that ZK uses these directories: https://github.com/hyperledger/fabric/blob/release/images/zookeeper/Dockerfile.in#L16...L18 (so: `/data` and `/datalog`)
For Kafka, `log.dirs` by default points to `/tmp/kafka-logs` so that's what you should persist. But in a production-level environment you should really set the `log.dir` (ATTN: no `s` now) value to a path, and map that path to a R+W external volume.
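Putting that together, a docker-compose sketch might look like this (the host paths and the `KAFKA_LOG_DIRS` variable are assumptions: the Fabric sample compose files use the `KAFKA_*` env-to-property convention for other broker settings, so the same pattern is assumed here):

```yaml
# Sketch: persisting state for the Fabric-supplied ZK and Kafka images.
# Host paths (/vol1/...) are placeholders; adjust to your environment.
zookeeper0:
  image: hyperledger/fabric-zookeeper
  volumes:
    - /vol1/zk0/data:/data
    - /vol1/zk0/datalog:/datalog
kafka0:
  image: hyperledger/fabric-kafka
  environment:
    # Point log.dirs at an explicit path instead of /tmp/kafka-logs...
    - KAFKA_LOG_DIRS=/var/kafka-logs
  volumes:
    # ...and map that path to a R+W external volume.
    - /vol1/kafka0/logs:/var/kafka-logs
```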
@kostas Thanks, that was the information that I was after.
@jyellick I have a question on production topology. If we are not creating many channels, I was informed to just have three ZK nodes and three Kafka brokers. If we are unable to create channels, I was thinking we could restart the Kafka broker. Also, I was told that having four Kafka brokers and having three is the same. Is that true? If we really need HA for channel creation, is it sufficient to have four Kafka brokers, or do we need five? You had reviewed my topology and suggested four Kafka brokers, so I wanted to confirm it.
> If we are unable to create channels, I was thinking we could restart the kafka broker.
But then you don't exhibit crash fault tolerance.
> Also I was told that having four and three Kafka brokers is the same? Is it true?
No.
> If we really need HA for channel creation, is it sufficient to have four kafka brokers or do we need to have five kafka brokers?
Four. This is explained here: http://hyperledger-fabric.readthedocs.io/en/latest/kafka.html#steps
I have some questions on the network connection requirements for the Ordering Service.
http://hyperledger-fabric.readthedocs.io/en/latest/kafka.html#steps
1) Zookeepers should have network connection with the Kafka brokers
but do they need network access to the Orderers ?
2) Orderers should have network connection with the Kafka broker and all the peers in the network.
3) Also the Orderers should have a network connection to the Fabric CA of the Orderers.
4) We also need communication between the Peer and Orderer, as a Peer might connect to the orderer to join the channel.
@gauthampamu (1) yes to KB, and no to orderers. (2) yes. (3) not sure about this one. (4) yes.
@gauthampamu
> 3) Also the Orderers should have a network connection to the Fabric CA of the Orderers.
This is not necessary. You may run fabric with no fabric-ca at all
> 4) We also need communication between the Peer and Orderer, as a Peer might connect to the orderer to join the channel.
This seems like a subset of (2), do you mean the application/SDK?
For 4), to join the peers to a channel, you definitely need communication between the Application/SDK and Peer, but I thought the peers would also connect to the Orderer when you submit the request to join the channel. Can you explain the communication between the different nodes when the join request is submitted by the application SDK?
3) I was recommended to have separate Fabric CA for Orderer.
Yes, I have seen configurations or sample fabric networks started without a CA, but I was recommended that you should have a Fabric CA for a production environment, and it was recommended not to use the cryptogen tool for certificate generation.
@gauthampamu
Yes, you must allow the peer network to make connections to the orderer. I was simply pointing out that you had already stated this in (2)
> yes, I have seen configuration or sample fabric networks started without CA but I was recommended that you should have Fabric CA for production environment and it was recommended not to use the cryptogen tool for certificate generation.
I meant simply to point out, that the orderers do not depend on the fabric-ca, so there is no need for connectivity from the orderer to the CA. You would have to check if the CA needs connectivity to the orderers (but I suspect not)
As a more general rule: the orderers need no outbound connections, other than to the Kafka cluster. The peers and SDK must be able to make inbound connections to the orderers.
@jyellick I will use pastebin in the future. Yes I had downloaded the zip file instead of cloning it
@jyellick I downloaded the latest fabric code from the github master branch. I sent the configuration block to configtxlator to convert it to JSON. The generated JSON looks a little different, i.e. it has "type" and "version" tags after root_certs; however I still do not see a revocation_list tag.
I added an empty crls folder in ./crypto-config/ordererOrganizations/OrdererOrg/orderers/orderer.OrdererOrg/msp/
Is this the correct location to add the crls folder so that we see a revocation_list tag?
Has joined the channel.
I've a question on how long a transaction is kept in Kafka / the orderer with R1, to help with space calculations. Assuming three orderers and the minimum of 4 Kafka instances with the default replication of 3, how many copies will be stored and for how long? Reading the docs, it says `log.retention.ms = -1`, so I think that means that copies need to be kept on Kafka forever with this release. With the minimum recommended setup I think that a transaction is kept on Kafka three times. I thought I heard that once a block is formed the block is placed back on Kafka to distribute through to the orderer. Did I mishear that bit? This would be an additional 3 copies. In addition I assume that each orderer instance will keep a copy in a flat file ledger. Are my numbers correct?
Has joined the channel.
Has joined the channel.
Hi, guys! I'm trying to setup a blockchain consisting of one orderer with a kafka cluster (4 kafkas, 3 zookeepers).
It seems that the orderer was connected to the kafka cluster, but when I tried to create a new channel, I got the following errors in the orderer log.
2017-08-01 10:35:01.696 UTC [orderer/common/deliver] Handle -> WARN 8a2 [channel: mychannel] Rejecting deliver request because of consenter error
I got this more than ten times, and finally got 'Error: timeout waiting for channel creation' in cli log.
Can I get any help? Or can I get some instructions to set up orderer(s) with kafka?
Is there a reason the ordering service gets the entire transaction? It would seem to make more sense to just submit something like a transaction ID for ordering, thus the ordering service would be completely blind to the transaction contents/participants/etc. Well I guess not completely blind as it would know the ID of the submitting client, although with Tcerts even that's probably not much of a concern.
@toddinpal: For transaction types other than config, I think this could work, yes. If memory serves me right this was even suggested sometime during the design phase. (I am guessing we opted for the current design as it makes things simpler, though less ideal than the scheme that you describe.) If there's enough bandwidth in the future, I'd be open to tackling this.
@KSLee: The instructions are here: https://hyperledger-fabric.readthedocs.io/en/latest/kafka.html If the issue persists, please provide a few more details about your setup, and logs for the orderer at the DEBUG level. See this channel's description for info on how to paste these here.
@kostas Right, config updates for the ordering service would obviously have to be seen completely by the orderers. :-)
@rsherwood:
> Reading the docs, it says `log.retention.ms = -1`, so I think that means that copies need to be kept on kafka for ever with this release
That is correct.
@toddinpal This sounds very much like the sideDB concept. The biggest problem with not distributing the transaction contents through consensus, is that you must replicate the transaction across the peer network. If a transaction commits but the peers cannot find the backing data, this is a serious problem.
@rsherwood:
> With the minimum recommended setup I think that a transaction is kept on kafka three times.
Correct.
> I thought I heard that once a block is formed the block is placed back on kafka to distribute through to the orderer. Did I miss hear that bit?
I believe you misheard. That is not the case.
> In addition I assume that each order instance will keep a copy in a flat file ledger. Are my numbers correct ?
Correct.
@jyellick I was under the impression sideDB dealt with maintaining information off-chain. What I'm suggesting is that the ordering service has no need for the transaction contents, although at the moment the ordering service also handles the delivery of transactions, which could instead be done by gossip
I would argue that is exactly the same concept. The transactions are stored (verifiable by the blockchain) in an out of (consensus) band way.
Stored but not committed
Why do you say in an out of (consensus) band way? The consensus is only on the order, not the contents
@jyellick:
> If a transaction commits but the peers cannot find the backing data, this is a serious problem.
I remember this concern; I think it may be less grave than we make it out to be. Ultimately, if the transaction cannot be found _anywhere_ (using the gossip mechanism), it won't be replaced with anything in the block. If only some of the peers have it, but won't release it for whatever reason, then OK, let these peers go on with a fork. What it gets down to, I think, is that you can tackle the "no backing data" problem at the protocol level.
> I remember this concern; I think it may be less grave than we make it out to be.
I need to review the design of the sideDB, if it does not address this problem, I would still be very worried.
Under what conditions would the backing data be unavailable?
Hmm... so the client submitting the transaction hasn't provided it to the peers yet?
Exactly.
Or, the client or peers are byzantine, and deliberately trying to break the network
seems like an ordering problem... :-)
@jyellick good point
Alice submits hash "foo" to the OS, with the implicit assumption that "hey, I have spread it around already and everybody knows what we're talking about" but she hasn't done that.
right
Basically, the core problem is, the backing transaction data may be lost. So, at some point, the peer network must decide to abandon processing that transaction and proceed with the chain. But, how do they come to that decision? This comes back to a consensus/ordering problem which is very hard to handle correctly under the peer network.
This is not an ordering problem though.
what I meant by ordering, was ordering of submitting the backing data to the other peers vs the transaction id to the ordering service... But you're right in that a byzantine peer or client could cause havoc
As Kostas said, it's something we've talked about supporting, but there are some real and hard problems in dealing with byzantine clients/peers, so for v1, we opted for full transactions through the orderer. Maybe this is something that can be supported in the future, but it is not as easy as it looks.
Right, I get it... seems like something perhaps the endorsing peers could help address...
Agreed, with a strong enough endorsement policy (say, require endorsement from 2 peers in every org on the network), with endorsing peers retaining the transaction cache through the current epoch (where the tx becomes invalid if submitted after the epoch), I think this would be okay.
I am having trouble unmarshaling a `processedTransaction` protobuf. If I look at the raw data, I can see the original JSON string that was endorsed. But it seems impossible to extract this string.
@passkit How are you trying to unmarshal it? What error do you get?
I've tried half a dozen or so different encapsulations - ProcessedTransaction, Transaction, ProposalResponse, SignedTransaction, all on both the raw data and the envelope. I am trying to access the original proposal response, but cannot find any way to do so. There are typically no errors, I get either an empty object, the original payload, or what looks like a signature.
Yes, protobuf is not a fail fast encoding, so long as there are no conflicting sequence numbers, you generally will have no problems.
I promise it can be done! I'm not entirely sure off the top of my head which proto structure would be near `processedTransaction`, presumably this is an SDK call?
My use case is that I have issues with the Fabric GO SDK occasionally not catching event broadcasts - the transactions are submitted and endorsed, but I have no confirmation. In these cases, I want to pull the transaction by txID, check it, and then use the response as I would have, had I received it via broadcast.
I'm not entirely following. The standard path should be:
1. Create proposal
2. Send to peer(s)
3. Get proposal response(s)
4. Package proposal responses into a transaction
5. Submit transaction via Broadcast
6. Listen for block deliver event for txid
So where in this path are you?
Step 6 fails
I therefore have the txid, but I missed the event broadcast even though the event WAS successfully broadcast
https://jira.hyperledger.org/browse/FAB-5557 the SDK team raised this issue after they were able to validate that block deliver events may go unheard.
Note step 6 succeeds on the peer, but the SDK does not catch the broadcast.
Ah, okay, I was thrown off by the word 'broadcast', as this is usually reserved for the orderer API
So you have a structure, presumably from step 4, which you wish to use to simulate the reception of a block event?
Correct
Have you tried unmarshaling it as a `common.Envelope`?
The thing which is sent to ordering is a `common.Envelope`, whose `payload` field is a marshaled `common.Payload`, and whose `data` field is I believe a `peer.Transaction`, though I would need to double check the last one (will wait to see if you find the other structure there)
`common.Envelope` only unmarshals a signature
unmarshaling as common.Payload has a data field
Could you point me to a line in the go-sdk you are using?
Could you point me to the line in the go-sdk which is giving you this data structure?
https://github.com/hyperledger/fabric-sdk-go/blob/master/pkg/fabric-client/channel/query.go#L110
Assuming you have seen:
```
// ProcessedTransaction wraps an Envelope that includes a transaction along with an indication
// of whether the transaction was validated or invalidated by committing peer.
// The use case is that GetTransactionByID API needs to retrieve the transaction Envelope
// from block storage, and return it to a client, and indicate whether the transaction
// was validated or invalidated by committing peer. So that the originally submitted
// transaction Envelope is not modified, the ProcessedTransaction wrapper is returned.
message ProcessedTransaction {
// An Envelope which includes a processed transaction
common.Envelope transactionEnvelope = 1;
// An indication of whether the transaction was validated or invalidated by committing peer
int32 validationCode = 2;
}
```
?
@passkit Note below, I use proto field names, not the go version which will be upper cased.
Your `transactionEnvelope` contains a `payload` field. This is a marshaled `common.Payload`. Inside the `common.Payload` is a `data` field. This is a marshaled `peer.Transaction` message.
So, your code should look something like this:
```
payload := &common.Payload{}
err := proto.Unmarshal(pt.TransactionEnvelope.Payload, payload)
// handle err
tx := &peer.Transaction{}
err = proto.Unmarshal(payload.Data, tx)
// handle err
```
[ ](https://chat.hyperledger.org/channel/fabric-consensus?msg=ErChpc5bSdrjNr54f) @jyellick Orderers need outbound connections to all the peers and the Kafka cluster. I had been recommended to have a Fabric CA for the Ordering Service, and since you have to enroll the userid from the ordering node, you will also need a connection to the Fabric CA if you are running the fabric-ca client on the orderer.
@jyellick What are some limitations of an ordering service with 3 ZK, and what level of fault tolerance would you get when you use 5 ZK? When you move from 3 to 5 ZK, should you also consider increasing the number of orderers, or should we increase that number only to improve transaction volume and performance?
This is a classic fault tolerance issue. Increasing the number of ZK nodes means you can handle more ZK node failures, but you really shouldn't go over 7 ZK nodes.
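As a quick sketch of the arithmetic behind that (a hypothetical helper, not from the fabric codebase): a majority-quorum system like ZK needs floor(n/2)+1 nodes up, so it tolerates floor((n-1)/2) crashes.

```go
package main

import "fmt"

// crashTolerance returns how many node crashes an n-node
// majority-quorum ensemble (e.g. ZooKeeper) can survive: a quorum of
// floor(n/2)+1 nodes must remain, so f = floor((n-1)/2).
func crashTolerance(n int) int {
	return (n - 1) / 2
}

func main() {
	for _, n := range []int{3, 4, 5, 7} {
		fmt.Printf("%d ZK nodes tolerate %d crash fault(s)\n", n, crashTolerance(n))
	}
}
```

Note that 4 nodes tolerate the same single crash as 3, which is why ensembles are kept at odd sizes.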
The number of ZK nodes is completely orthogonal to the number of OSNs.
@jyellick - I had previously got that far, and am left with a transaction with an array of actions containing a header and a payload.
```
resp := &common.Payload{}
err = proto.Unmarshal(txn.TransactionEnvelope.Payload, resp)
pl := &peer.Transaction{}
err = proto.Unmarshal(resp.Data, pl)
pa := &peer.TransactionAction{}
err = proto.Unmarshal(pl.Actions[0].Payload, pa)
pd := &peer.ChaincodeProposalPayload{}
err = proto.Unmarshal(pa.Payload, pd)
```
I continue to dig further down the rabbit hole, but still cannot find a way to cleanly extract the JSON. Honestly, it feels like it really should not be this convoluted to get hold of usable information from querying a transaction ID.
The above gets me closest, but the JSON is still encapsulated
Sorry @passkit, in the ordering service, we (deliberately) never drill that deep into transactions, so I'm having to look up everything in the code as I reply.
Looking at the code, it looks like the bytes in the `input` field of the `ChaincodeProposalPayload` are what's being sent to your chaincode.
Are you using `chaintool`?
The `input` bytes do contain the JSON, but it is still encapsulated
I'm using the go SDK to submit the original transaction.
Right, but which chaincode are you submitting it to?
Basically, I am wondering if the chaincode takes bytes, or if it actually takes some proto structure, and something like chaintool is doing some clever mapping
(I will volunteer that this is getting out of my depth, I have written some chaincodes, and used the go SDK as exercises, but will not claim deep knowledge, just trying to be helpful)
I am submitting it to my own chaincode that takes 2 arguments (a record Id and a transaction value). This chaincode returns a JSON response containing the new record status (3 or more fields will have changed in the record as it is processed by the chaincode). This JSON is returned in step 3 of your list above and then continues onward to be endorsed.
It is the response from the proposal that I am trying to extract
Ah, I thought you were attempting to retrieve the proposal contents itself. I see, let me take another look at the protos.
Okay, so I think you have the wrong type here
`TransactionAction.payload` is a marshaled `ChaincodeActionPayload`
according to:
```
// TransactionAction binds a proposal to its action. The type field in the
// header dictates the type of action to be applied to the ledger.
message TransactionAction {
// The header of the proposal action, which is the proposal header
bytes header = 1;
// The payload of the action as defined by the type in the header For
// chaincode, it's the bytes of ChaincodeActionPayload
bytes payload = 2;
}
```
```
00000000 0a 20 1f be 8f 2a 4c f6 f4 cf 81 b4 f5 80 ec c9 |. ...*L.........|
00000010 11 52 21 72 a1 54 c1 84 e6 ae 59 da 94 ac e7 75 |.R!r.T....Y....u|
00000020 85 1b 12 a4 12 0a af 09 12 93 09 0a 06 6c 6c 32 |.............ll2|
00000030 2d 63 63 12 88 09 0a 15 0a 0e 6c 73 6f 54 4d 56 |-cc.......lsoTMV|
00000040 6a 79 33 76 37 65 62 64 12 03 08 f1 05 1a ee 08 |jy3v7ebd........|
00000050 0a 0e 6c 73 6f 54 4d 56 6a 79 33 76 37 65 62 64 |..lsoTMVjy3v7ebd|
00000060 1a db 08 7b 22 69 64 22 3a 22 6c 73 6f 54 4d 56 |...{"id":"lsoTMV|
00000070 6a 79 33 76 37 65 62 64 22 2c 22 75 69 64 22 3a |jy3v7ebd","uid":|
00000080 22 35 79 50 6d 36 69 44 64 47 77 4c 58 70 33 48 |"5yPm6iDdGwLXp3H|
00000090 62 72 30 43 6c 65 4b 22 2c 22 63 61 6d 70 61 69 |br0CleK","campai|
.... more JSON
00000490 74 65 54 69 6d 65 22 3a 22 32 30 31 37 2d 30 38 |teTime":"2017-08|
000004a0 2d 30 31 54 31 33 3a 34 36 3a 33 37 2e 36 30 36 |-01T13:46:37.606|
000004b0 30 38 36 38 36 38 2b 30 37 3a 30 30 22 7d 12 17 |086868+07:00"}..|
000004c0 0a 04 6c 73 63 63 12 0f 0a 0d 0a 06 6c 6c 32 2d |..lscc......ll2-|
000004d0 63 63 12 03 08 b9 04 1a e1 08 08 c8 01 1a db 08 |cc..............|
000004e0 7b 22 69 64 22 3a 22 6c 73 6f 54 4d 56 6a 79 33 |{"id":"lsoTMVjy3|
000004f0 76 37 65 62 64 22 2c 22 75 69 64 22 3a 22 35 79 |v7ebd","uid":"5y|
00000500 50 6d 36 69 44 64 47 77 4c 58 70 33 48 62 72 30 |Pm6iDdGwLXp3Hbr0|
00000510 43 6c 65 4b 22 2c 22 63 61 6d 70 61 69 67 6e 49 |CleK","campaignI|
.... more JSON
00000900 65 44 61 79 22 3a 31 2c 22 63 72 65 61 74 65 54 |eDay":1,"createT|
00000910 69 6d 65 22 3a 22 32 30 31 37 2d 30 38 2d 30 31 |ime":"2017-08-01|
00000920 54 31 33 3a 34 36 3a 33 37 2e 36 30 36 30 38 36 |T13:46:37.606086|
00000930 38 36 38 2b 30 37 3a 30 30 22 7d 22 0c 12 06 6c |868+07:00"}"...l|
00000940 6c 32 2d 63 63 1a 02 32 39 |l2-cc..29|
```
```
// ChaincodeActionPayload is the message to be used for the TransactionAction's
// payload when the Header's type is set to CHAINCODE. It carries the
// chaincodeProposalPayload and an endorsed action to apply to the ledger.
message ChaincodeActionPayload {
// This field contains the bytes of the ChaincodeProposalPayload message from
// the original invocation (essentially the arguments) after the application
// of the visibility function. The main visibility modes are "full" (the
// entire ChaincodeProposalPayload message is included here), "hash" (only
// the hash of the ChaincodeProposalPayload message is included) or
// "nothing". This field will be used to check the consistency of
// ProposalResponsePayload.proposalHash. For the CHAINCODE type,
// ProposalResponsePayload.proposalHash is supposed to be H(ProposalHeader ||
// f(ChaincodeProposalPayload)) where f is the visibility function.
bytes chaincode_proposal_payload = 1;
// The list of actions to apply to the ledger
ChaincodeEndorsedAction action = 2;
}
```
So once you have the `ChaincodeActionPayload` you should be able to look at the `ChaincodeEndorsedAction`
Above was the result of the `ChaincodeProposalPayload` `input`
Let me try `ChaincodeActionPayload`
Once you have that message, you can look at the `ChaincodeEndorsedAction`. It has a field `proposal_response_payload` which is a marshaled `ProposalResponsePayload`, whose `extension` field is a marshaled `ChaincodeAction`, which _finally_ has a field `results` which is I believe what you are looking for
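A toy sketch of why this takes one unmarshal per level (hand-built wire bytes with made-up `ldField`/`unwrap` helpers, not the real fabric protos): each layer's field is just the *marshaled bytes* of the next layer, so every level needs its own decode.

```go
package main

import (
	"encoding/binary"
	"fmt"
)

// ldField encodes inner as field #1, wire type 2 (length-delimited),
// i.e. it wraps one "message" inside another, protobuf-style.
func ldField(inner []byte) []byte {
	var lenBuf [binary.MaxVarintLen64]byte
	n := binary.PutUvarint(lenBuf[:], uint64(len(inner)))
	out := append([]byte{0x0a}, lenBuf[:n]...) // 0x0a = field 1, wire type 2
	return append(out, inner...)
}

// unwrap decodes field #1 of one layer and returns the inner bytes.
func unwrap(buf []byte) []byte {
	l, n := binary.Uvarint(buf[1:]) // skip the 0x0a tag byte
	return buf[1+n : 1+n+int(l)]
}

func main() {
	results := []byte(`{"id":"example"}`)
	// Wrap five layers deep, like Envelope{Payload{Transaction{...{results}}}}.
	wrapped := results
	for i := 0; i < 5; i++ {
		wrapped = ldField(wrapped)
	}
	// Peeling requires one unmarshal per layer, just like
	// Envelope -> Payload -> Transaction -> TransactionAction -> ... -> results.
	for i := 0; i < 5; i++ {
		wrapped = unwrap(wrapped)
	}
	fmt.Println(string(wrapped)) // the original JSON comes back out
}
```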
@passkit proto is very cleverly designed, to be both forwards and backwards compatible. The idea is that if someone else has a newer version of the message, they may encode fields which you, with an older version of the message do not have. This means, that if you try to unmarshal some bytes into a message, if there are extra fields, those fields get silently ignored. Similarly, if there are too few, the assumption is that your proto is newer, and those fields are populated with default zero values. So, when unmarshaling, so long as the fields in the messages are not mis-typed (ie, you do not end up trying to decode one field type as another type) the unmarshaling will succeed (even though the marshaled message is not of the type you are attempting to unmarshal to). Hope this helps explain why this is such a painful exercise. The proto files in `fabric/protos/peer/*.proto` are actually fairly well documented as to what opaque byte fields are actually marshaled versions of, though I'll be the first to admit, the transaction format may be expressive, but it is not simple.
I'm working my way through the hierarchy and can get the `ProposalResponsePayload`. Struggling now to get the bytes from the payload to unmarshal into a `ChaincodeAction`. But edging forward.
Can appreciate that from the perspective of the orderer, it is not necessary to look at the component parts. Just thinking more in terms of overall usability. Auditors in particular will have a need to pull transactions by ID, so it’s surprising to me that a method does not already exist to view the user provided input.
I actually have a JIRA item out there for this. I implemented a proto bytes -> nothing but JSON (no opaque fields) viewer for the configtx format. It would not be that much work to extend this to the endorsertx format, but as always, it is a matter of competing time commitments.
https://jira.hyperledger.org/browse/FAB-4370
Has joined the channel.
@jyellick thanks for your help so far. I feel close, but this is also confusing the hell out of me.
If I look at the raw `ChaincodeEndorsedAction` - I see the following:
```
proposal_response_payload:"\242_#\017\315\340|\020\014\023+X\033\"\231\320\234J\212T\300\033=\025uZ\334e\023\254\376X" endorsements:
```
Inside, I have my encapsulated data. However, there seems to be no method to get at the `proposal_response_payload`.
the `.ProposalResponsePayload` and `.GetProposalResponsePayload()` property/method only return the hash:
```
00000000 a2 5f 23 0f cd e0 7c 10 0c 13 2b 58 1b 22 99 d0 |._#...|...+X."..|
00000010 9c 4a 8a 54 c0 1b 3d 15 75 5a dc 65 13 ac fe 58 |.J.T..=.uZ.e...X|
```
The only other property or method is `.Endorsements/.GetEndorsements()` but these return another `proposal_response_payload` and I am stuck in a loop.
@jyellick
With reference to the issue: I am unable to see a revocation_list tag in the channel configuration JSON after adding an empty crls folder.
If I want to test by actually including CRL information in the crls folder, what should I add to this folder? I meant, should we add certificates or serial numbers?
This is the folder location:
./crypto-config/ordererOrganizations/OrdererOrg/orderers/orderer.OrdererOrg/msp/
@YashGanthe
Is there a way of backing up a running orderer / kafka / zookeeper cluster which is still receiving transactions, such that it can all be restored and restarted and reach a consistent point? If so, are there any restrictions, such as backing up the orderer first?
@passkit I'm a little confused:
> the `.ProposalResponsePayload` and `.GetProposalResponsePayload()` property/method only return the hash:
but above you pasted the field with much more data in it? If you could send me a file containing the binary and the type, I could try unmarshaling it for you, then give you a code snippet which does it
@jyellick yes, you are right. The data shown above was from objects one level up (`ChaincodeActionPayload` and `ChaincodeEndorsedAction`), I think, but it was the value shown for `"proposal_response_payload"`, if that makes any sense.
I tried unmarshalling the Extension as a `ChaincodeAction` but the action only contained a signature and the results contained 'lscc'.
```
err = proto.Unmarshal(txn.TransactionEnvelope.Payload, resp)
pl := &peer.Transaction{}
err = proto.Unmarshal(resp.Data, pl)
pa := &peer.TransactionAction{}
err = proto.Unmarshal(pl.Actions[0].Payload, pa)
pd := &peer.ChaincodeActionPayload{}
err = proto.Unmarshal(pa.Payload, pd)
pe := &peer.ChaincodeEndorsedAction{}
err = proto.Unmarshal(pd.ChaincodeProposalPayload, pe)
pr := &peer.ChaincodeActionPayload{}
err = proto.Unmarshal(pd.ChaincodeProposalPayload, pr)
```
`pd.Action.ProposalResponsePayload` contains the data above.
I'll try and export the binary and pm to you.
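For reference, and assuming the message definitions in fabric's `protos/common` and `protos/peer` packages, the decode path from the envelope down to the endorsed results would look roughly like this sketch (untested here). Note that `ChaincodeActionPayload.Action` is already a `*peer.ChaincodeEndorsedAction`, so no extra `Unmarshal` is needed at that step; the 32 bytes seen earlier are most likely the `ProposalHash` field, with the interesting data in `Extension`:
```
env := &common.Envelope{}
_ = proto.Unmarshal(envBytes, env) // envBytes: the serialized transaction envelope
payload := &common.Payload{}
_ = proto.Unmarshal(env.Payload, payload)
tx := &peer.Transaction{}
_ = proto.Unmarshal(payload.Data, tx)
ccActionPayload := &peer.ChaincodeActionPayload{}
_ = proto.Unmarshal(tx.Actions[0].Payload, ccActionPayload)
// ccActionPayload.Action is a *peer.ChaincodeEndorsedAction already
prp := &peer.ProposalResponsePayload{}
_ = proto.Unmarshal(ccActionPayload.Action.ProposalResponsePayload, prp)
// prp.ProposalHash is the hash; the results live in the extension:
ccAction := &peer.ChaincodeAction{}
_ = proto.Unmarshal(prp.Extension, ccAction)
// ccAction.Results, ccAction.Events, ccAction.Response
```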
> Is there a way of backing up a running orderer / Kafka / ZooKeeper cluster which is still receiving transactions, such that it can all be restored, restarted, and reach a consistent point? If so, are there any restrictions, such as backing up the orderer first?
@rsherwood This is not a good strategy for fabric. I do not know about the Kafka backup solutions, but this amounts essentially to rewriting history. Not only all orderers, but all peers would need to be restored to a point in time before the kafka backup occurred.
Theoretically, if you were able to reset the whole network, so long as your Kafka backup were at least as recent as your orderer backups, and your orderer backups were at least as recent as your peer backups, then this could work, but I hope this sounds like a bad strategy to you as well.
[ ](https://chat.hyperledger.org/channel/fabric-consensus?msg=CgGRXxdfq3jSLnLcE) @jyellick Thanks, I agree it's certainly not something I would want to do except in a dire emergency, but what I was trying to work out is whether there is any point in taking an online backup at all. In nearly every scenario I can imagine, a policy of using redundant copies of data (i.e. multiple orderers / Kafka) would be used, with a fix-and-go-forward policy for data corruption. However, I could imagine a scenario, such as a malicious party gaining logical control of a network, where the collective network decision is to restore from a backup. In the ordering you outline, is it important to have each orderer node backed up at the same point, or, for example, can I take a single orderer node's ledger backup and apply it to all orderer nodes?
@rsherwood Generally speaking, so long as the backup time for each orderer is older than the current state of Kafka (with no requirement that any orderer have been backed up at the same time), then you should be okay.
Reference: http://hyperledger-fabric.readthedocs.io/en/latest/kafka.html
` Each channel maps to a separate single-partition topic in Kafka ` - Is there a guideline for ensuring active partition (leader) of the channel is distributed uniformly across kafka-cluster?
My understanding: with RF = 3 and N = 4, if a broker hosting one of the replicas of a log partition fails, the Kafka controller does not re-replicate this partition automatically to ensure RF = 3 is retained for the partition?
The document refers to using the bdd YAML file; however, we will have to run configtxgen to create the n-orderer organization setup (crypto material, genesis block as indicated) = OSN. My understanding: the quickest way to verify with the existing example/e2e_cli present inside the fabric repo is using 1 orderer, 4 Kafka, 3 ZooKeeper. We are trying to finally host a 3-orderer, 4-Kafka, 3-ZooKeeper setup.
> Is there a guideline for ensuring active partition (leader) of the channel is distributed uniformly across kafka-cluster?
@rahulhegde: Kafka takes care of that automatically for you. Kafka tries to (a) spread replicas evenly among brokers, and (b) make sure that for each partition each replica is on a different broker. If you make use of the `rack` field available in versions >= 0.10, it even tries to assign replicas to different racks if possible.
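As a concrete illustration of the rack awareness mentioned above, each broker's `server.properties` carries a `broker.rack` entry (the broker ID and rack name below are made up); Kafka then tries to place a partition's replicas on brokers in different racks:
```
# server.properties for one broker (Kafka >= 0.10)
broker.id=1
broker.rack=rack-a
```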
> My understanding by having RF = 3, N = 4, If a broker hosting one of the replication of Log Partition fails, Kafka controller does not replicate this partition automatically to ensure RF=3 for partition is retained?
@rahulhegde: That is correct.
> Document refers to using bdd yaml file however we will have to run configgentx to create n-orderer organization setup (crypto-material, genesis block as indicated) = OSN.
@rahulhegde: The document doesn't suggest you necessarily *use* the YAML files under `fabric/bddtests`, it points to them as sample config files inline with the suggested configuration settings.
> however we will have to run configgentx to create n-orderer organization setup (crypto-material, genesis block as indicated) = OSN. My understanding, quickest way to verify with existing example/e2e-cli that is present inside fabric repo is using 1 Orderer-4-kafka-3-zookeeper. we are trying to host finallly 3-orderer-4kafka-3Zookerper setup.
@rahulhegde: Not sure I got that. Could you rephrase the question?
[ ](https://chat.hyperledger.org/channel/fabric-consensus?msg=yaaMDSnbRwkD6u9s6) @kostas
Scenario: I have 10 channels and 4 Kafka brokers (RF = 3). As per our use case, we know channels [1-6] will have, say, N times more ledger transactions compared to channels [7-10]. My question: is distributing channels [1-6] round-robin across brokers preferred, or is explicitly assigning channels/topics per broker preferred, considering scalability?
Does the Kafka controller (Kafka question) also change the leader depending upon the load seen per partition?
If yes, will the OSN node as producer also need to be aware of the new leader/partition?
[ ](https://chat.hyperledger.org/channel/fabric-consensus?msg=8ThPooxtrLQZo8bNq) @kostas
I meant that if I want to try the Kafka-ZooKeeper setup using the existing https://github.com/hyperledger/fabric/tree/release/examples/e2e_cli setup, then the quickest way is to run a 1-orderer-4-Kafka-3-ZNode setup by changing the compose file as per the sample available in fabric/bddtests. This was more for my understanding; I will try this and see if I hit some blocker.
> Does Kafka Controller (Kafka Question) also change leader depending upon the load seen per partition?
@rahulhegde: It does not. If you want to play around with that, you may look into the logic Kafka applies when spreading replicas with `rack.id` awareness, and play around with the `rack.id`'s on your brokers accordingly. I would however advise against that. Have you done any measurements that show your broker is struggling to catch up with the traffic? A properly tuned Kafka broker will usually swallow whatever you throw at it.
> I meant if I want to try Kafka-Zookeeper working using the existing https://github.com/hyperledger/fabric/tree/release/examples/e2e_cli setup than the quickest way is to run 1orderer-4kafka-3ZNode setup by changing composer file as per the sample available in fabric/bddtest. This was more for my understanding, I will try this and see if i get some blocker.
@rahulhegde: Ah, I got it now. Please do. @Ratnakar is the resident expert when it comes to the E2E CLI setup (and is in fact the one who reworked it for Kafka recently), so he should be able to help with any script modifications.
[ ](https://chat.hyperledger.org/channel/fabric-consensus?msg=x8BnAaJRKmotyHKYt) @kostas
Thanks - we haven't done any measurements on this setup but were checking whether that flexibility exists. Because as of today, we have the load estimate expected per channel and were checking if topics can be aligned per broker.
@sklump
(https://chat.hyperledger.org/channel/general?msg=6gEBiqYqhD3RfD6JW)
`If an ordering service outputs deliver(seqno, prevhash, blob) at a correct peer p` - this means that if the output of the ordering service (in our specific case a batch/block) is delivered at a peer (meaning it was received from the orderer, accepted, and processed), then the peer must also have delivered the previous output in the sequence
Put another way, the guarantee is that the ordering service outputs messages (blocks) in order by sequence and that the peer will process those messages in order by seqno. So let's say that the peer delivers block 10 and then dies/loses communication with the orderer. When the peer reconnects to the orderer, it might now receive block 20. It now knows that it is missing blocks 11-19, so it must get those blocks and process them in order before it will deliver block 20
@kostas @jyellick feel free to correct / amend ;)
@mastersingh24 @sklump Yes. I agree the wording is a little vague. But the simple idea is that, the ordering service produces blocks sequentially, and will deliver them to the peers sequentially. The blocks will all form a hash chain, and the peer should process them in sequence order.
* No skipping. At a correct peer p, if an ordering service outputs [deliver(seqno, prevhash, blob)], such that seqno>0, then p already processed delivered event [deliver(seqno-1, prevhash0, blob0)]
I reordered the clauses without adding too many words in case it might be clearer. Onward!
* No skipping. At a correct peer p, to process ordering service output [deliver(seqno, prevhash, blob)], such that seqno>0, then p already must have processed [deliver(seqno-1, prevhash0, blob0)]
Yes, that reads more nicely
@sklump I encourage you to submit those edits as changesets/PRs when you find some time. The documentation needs love in general, and it's great that you're going over it.
How?
https://hyperledger-fabric.readthedocs.io/en/latest/CONTRIBUTING.html
thanks
In short, sign up for a Linux Foundation account, git clone the repo and set up the Gerrit hook, create an issue in JIRA ("Improve documentation"), and submit a changeset to Gerrit referencing that issue. Feel free to add folks here as reviewers to your changeset.
If you're stuck anywhere during the onboarding process, feel free to ask for help.
An HSBN one if you get a chance https://gerrit.hyperledger.org/r/#/c/12135/
In our current fabric network, we have one ordering node and one Kafka-ZooKeeper. We want to scale this up, especially the number of ordering nodes, given a fixed Kafka-ZooKeeper cluster.
1. Do we have instructions to set up multiple ordering nodes in an ordering service? `bddtests/dc-orderer-kafka.yml`?
2. How do we make a fabric network use multiple ordering nodes? When we have only one ordering node, we can create a channel by `peer channel create -o $ORDERER0:7050 -c $CHANNEL_NAME -f $CHANNEL_NAME.tx`. When the peer joins the channel, it can find the orderer IP address from the channel genesis block so that gossip can connect to the orderer for receiving blocks. When we have multiple ordering nodes, how does the peer connect to them?
> 1. Do we have instructions to setup multiple ordering node in an ordering service? `bddtests/dc-orderer-kafka.yml`?
Setting up multiple ordering nodes is identical to setting up just one, but multiple times. Simply generate your genesis block using `configtxgen`, then bootstrap as many ordering nodes as you would like with this block.
> 2. How to make a fabric network use multiple ordering node? When we have only one ordering node, we can create channel by `peer channel create -o $ORDERER0:7050 -c $CHANNEL_NAME -f $CHANNEL_NAME.tx`. When the peer joins the channel, it can find the orderer IP address from the channel genesis block such that gossip can connect to orderer for receiving block. When we have multiple ordering nodes, how peer connects to the ordering nodes?
When you bootstrap your ordering network with `configtxgen`, modify the `Orderer.Addresses` list to include all of the orderers you are going to deploy. Your new channels will inherit this list.
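To illustrate, the relevant fragment of `configtx.yaml` would list every ordering node you plan to deploy (the hostnames below are placeholders):
```
Orderer: &OrdererDefaults
    OrdererType: kafka
    Addresses:
        - orderer0.example.com:7050
        - orderer1.example.com:7050
        - orderer2.example.com:7050
```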
@Senthil1 ^
@jyellick @kostas Is it possible for each channel to use a different set of orderers? e.g. orderers from the same organizations as the channel membership.
@dave.enyeart Like many questions, the short answer is "no", with the long answer being rather more complicated.
Every orderer processes every channel for the ordering service.
However, there is no requirement that you have only one ordering service.
Assuming that you configured two ordering services, created a channel in each, you _should_ be able to call join channel for each on the same peer, and have that peer gain access to both channels, each of which is processed by a different set of orderers.
This isn't something that's been tested, and I suspect there are some gotchas in there around the peer, where certain pieces of config, like the TLS support is geared towards only one ordering service. But, it has always been our goal architecturally, that there be a plurality of ordering services, with peers connecting to as many as needed.
Actually, there should be a way to have a separate "ordering service" for each channel... there might have been a JIRA issue created for this once. But until this happens, I believe one can get there as well by configuring multiple fabric instances and having only one channel per instance.
thanks, good points
thanks @jyellick
Consider that I have one MSP instance for an organization. Is it possible to define my own MSP rules for identity validation?
@shubhamvrkr: #fabric-crypto is your channel for this Q.
@kostas ohh okay
1. We have been running an orderer solo setup for quite a few weeks; do we have migration steps to an OSN-Kafka-ZK setup?
2. Currently, the same orderer genesis block can work with each OSN in a Kafka-ZooKeeper setup; is the recommendation to use a different localMSP (signing material especially) if the OSNs are hosted in the same organization?
3. With an OSN-Kafka-ZK setup, it seems feasible to have orderers hosted in a different organization, and at the same time the Kafka cluster and ZK ensemble in yet another organization. Is my understanding correct?
@rahulhegde
1. Changing the consensus type is explicitly not supported
2. It is never good practice to share the same private key for multiple hosts/processes. Please generate a unique localMSP for each orderer
3. Feasible, yes, but remember Kafka is not a BFT service. Having multiple ordering organizations does not increase security, it can be argued that with Kafka, the fewer orderer organizations the better
@rahulhegde ^
Just to expand a bit on Jason's point number 3 above, you need to ask yourself why you're spreading the Kafka cluster over different orgs.
If it's for HA reasons, and what you're basically looking for is to spread brokers over different datacenters (not orgs), cool. (Though you'd probably still be better off running MirrorMaker.)
But if you're spreading over different orgs as a means to distribute consensus/ordering powers over these orgs, do note that _for any given channel_, at any point in time, there is only one broker that decides what goes in the blockchain of that channel; the broker that is the leader replica of that channel.
To make things a bit more concrete, assume that you're using org X and org Y for your Kafka cluster. Org X owns brokers B1 and B2, while Org Y owns brokers B3 and B4.
For `channel-one`, the RF set is `B2, B3, B4` and the current leader replica is `B2`.
That effectively means that Org X controls what goes in the ledger of `channel-one`.
If they're malicious and censor the requests that come to `B2`, the other brokers (`B3`, `B4`) wouldn't know.
So, spreading Kafka over different orgs in order to protect yourself from byzantine faults (the censorship example above is one such fault) does not make sense.
Thanks @kostas and @jyellick for the notes.
I wanted to understand whether there was any limitation on the fabric side, but I see there isn't.
This was more from the perspective of sharing orderer compute by having each organization host its own OSN. However, sharing Kafka brokers across organizations is not recommended, due to the channel/topic relation with the partition leader and the censorship problem. Our setup will host everything (O-K-Z) in one organization.
[ ](https://chat.hyperledger.org/channel/fabric-consensus?msg=3Lwki4rdepYvwEQ4L) @kostas This is an interesting point. Assuming we want to do it for whole-network redundancy purposes, is it possible to create architectures that span different DCs? I.e. Org1 will host its own set of peers/OSNs/Kafka brokers in DC1 and Org2 will host a different set in DC2. Synchronization of the blockchain between peers belonging to the same channel but hosted in DC1 and DC2 would happen through MirrorMaker. Will that work theoretically?
@frbrkoala: I kind of lost you on the setup here.
- DC1: Org1's peers/OSNs/KBs.
- DC2: Org2's peers/OSNs/KBs.
As best as I can tell, this is your setup.
Question for you before we move any further: which Kafka brokers back the ordering service? dc1.KBs? dc2.KBs? Both?
[ ](https://chat.hyperledger.org/channel/fabric-consensus?msg=FokpJhWjoCLqvMxui) @kostas Good question. I guess since all the peers in DC1 and DC2 are in the same channel, both DC1.KBs and DC2.KBs should be backing the ordering service. If I understand correctly, we can't split them in the channel's genesis/config block and route different orderers to different KBs within one channel.
Then I'm not sure where MirrorMaker comes in?
Does the orderer sign a block before storing it in its own raw ledger? Or does it attach the signature only when sending the block to a peer? In the latter case, all the orderers would store identical blocks and their ledgers would be identical to each other (without the orderers' various signatures to differentiate), but a bit smaller than the peer ledgers. (2) Is the answer the same for solo and Kafka? (If not, I am looking for the Kafka answer.)
@scottz The orderer signs blocks, then stores them. The primary reason for this is because signing is expensive, and the orderer may need to deliver the same block hundreds or thousands of times. By signing before commit, we do that work only once.
I'm not certain what you mean about identical ledgers. The orderer has block storage, but no state database. The blocks the orderer stores will have the same block header and block data, but the block metadata will vary from orderer to orderer, and will always be different from the peer's, as the peer adds the invalid-tx bitmask to the metadata.
https://chat.hyperledger.org/channel/fabric-gossip?msg=g5pHTJDBHD7DPZjQc
This is essentially saying that if the OSNs tried to cut blocks based on a local timer, they might each end up with a different number of messages in their blocks, and the blockchains must obviously be the same. So, Kafka uses a "time-to-cut" message to synchronize the time-based cutting of blocks.
Hello, All.
I am testing the integrity of fabric 1.0 with CouchDB. I modified the CouchDB data using a curl command, but a query returns the modified data. I expected an error. How can I check the integrity of the state DB?
@jslee99a: Ask in #fabric-ledger
[ ](https://chat.hyperledger.org/channel/fabric-consensus?msg=uxdSMGK6QRoHWnMZg) @kostas We would need to use MirrorMaker in order to achieve HA across DCs by replicating Kafka's topics, as you mentioned here: [ ](https://chat.hyperledger.org/channel/fabric-consensus?msg=3Lwki4rdepYvwEQ4L)
How do we do an atomic increment of a value? If we do a get and a set in the chaincode, is there a race condition between simultaneous invocations?
@sampath06 This is a better question for #fabric-ledger, but in short: all peer ledger state keys have versions, and each transaction carries an MVCC read set and a post-image write set. If the read versions do not match at commit time, the commit will not succeed.
https://en.wikipedia.org/wiki/Multiversion_concurrency_control
@sampath06 Some simple thoughts: I bet you were thinking you could do a query, and then pass "X" and "Y" when invoking a chaincode function that does something like "if the value is still X, then set to Y". That would prevent concurrent invocations that use the same base value for X. But that is clunky, because it does not leverage the blockchain to enforce the preconditions (value of X), which it can do. Another way would be to design a chaincode function that applies a delta (e.g. subtract Y from X) that does not care so much what the previous value is exactly - although the function could always do at least a little error checking (first ensure the current value of X is greater than Y).
To put it more succinctly, if your chaincode reads a value, and makes a decision based on that value, the transaction will only commit if the values your chaincode read were not changed between execution and commit. There is no race.
@jyellick @scottz Thanks for the explanation. Also signed on to #fabric-ledger. Will post further queries there.
IIUC, `LastOffsetPersisted` should be the offset of *last* tx in a block. Looking at https://github.com/hyperledger/fabric/blob/master/orderer/consensus/kafka/chain.go#L437-L440
consider following case:
- `MaxMessageCount = 5, PreferredMaxBytes = 10KB`
- `receiver` has *2* tx in queue, total size is *8KB*, offset of last tx is `x`
- a new tx comes in, receivedOffset: `x+1`, size: `5KB`
- `Ordered` will cut all pending tx into *one* batch due to size overflow, and enqueue the latest tx
- in this case, `LastOffsetPersisted = receivedOffset - int64(len(batches)-i-1)`, which is `(x+1) - (1 - 0 - 1) = x+1`. However, it should be `x` in this case.
In order to obtain correct `LastOffsetPersisted`, we need to precisely know if the newly received tx is wrapped in the batch or not. cc @jyellick @kostas
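For what it's worth, the off-by-one described above can be sketched in Go; `computeOffset` and the `pending` flag are illustrative names, not the actual chain.go code:

```go
package main

import "fmt"

// computeOffset returns the offset of the last tx in batch i. `pending`
// reports whether the newly received tx stayed behind in the blockcutter's
// pendingBatch instead of being included in one of the cut batches.
func computeOffset(receivedOffset int64, numBatches, i int, pending bool) int64 {
	offset := receivedOffset - int64(numBatches-i-1)
	if pending {
		// The newest tx is not in any batch, so every batch's last
		// offset is one lower than the naive calculation suggests.
		offset--
	}
	return offset
}

func main() {
	// The scenario from the discussion: 2 pending txs ending at offset x,
	// a new tx arrives at x+1 and overflows PreferredMaxBytes, so one
	// batch is cut containing only the two older txs.
	x := int64(41)
	fmt.Println(computeOffset(x+1, 1, 0, true))  // x (41): new tx stayed pending
	fmt.Println(computeOffset(x+1, 1, 0, false)) // x+1 (42): new tx was batched
}
```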
> shouldn't this https://github.com/hyperledger/fabric/blob/master/orderer/consensus/kafka/chain.go#L407 be using `ProcessConfigUpdateMsg`?
Also I submitted https://gerrit.hyperledger.org/r/#/c/12309/ for this. I temporarily associated it to FAB-5284 (only story I'm working on now). But if you could confirm it's a bug, I'll file a proper JIRA for it.
Has joined the channel.
@guoger: The `LastOffsetPersisted` observation looks like a bug. You are correct that we need to know if the newly-received TX ends up in the batch or not. The code neglects the case where the newly-received transaction remains hidden in the blockcutter.receiver's pendingBatch. Will file a JIRA and fix this -- thanks for reporting this, great catch.
@guoger: Regarding `ProcessConfigUpdateMsg`, note that in line 407, as things stand right now and in order to allow the old message flow, you actually get a message of header type `CONFIG`, not `CONFIG_UPDATE`. See: https://github.com/hyperledger/fabric/blob/master/orderer/common/msgprocessor/standardchannel.go#L70 So calling `ProcessConfigUpdateMsg` on this envelope is incorrect. Essentially what you wish to do here is apply the filters, mimicking the behavior in: https://github.com/hyperledger/fabric/blob/release/orderer/common/blockcutter/blockcutter.go#L89...L93 (which is what the current code does)
@frbrkoala: This doesn't strike me as clear-cut MirrorMaker use case. MirrorMaker would work great if there was _no overlap_ between what gets written in DC.A and DC.B, and you then used a MirrorMaker installation @ DC.A to replicate DC.B's contents, and vice versa for the MirrorMaker installation @ DC.B. With the setup that you describe, the right thing to do is take advantage of rack awareness (in with v0.10.0), rely on Kafka to prepare a rack-alternating broker list for your partitions, and increase the crash fault tolerance of your system (by adding KBs and adjusting the replication factor and minimum ISR sizes accordingly) so that the ordering service can work without issues if DC.A or DC.B goes down. https://chat.hyperledger.org/channel/fabric-consensus?msg=kgkqykHasL563pAGD
[ ](https://chat.hyperledger.org/channel/fabric-consensus?msg=SgsbRtd9MTzX7QpPB) @kostas I see. Just to clarify: is it fair to say that by overlap in writing you mean the situation where two transactions were submitted simultaneously by clients in both DCs, and therefore the Kafka brokers ended up with different versions of the last record in the same topic?
[ ](https://chat.hyperledger.org/channel/fabric-consensus?msg=u6TkvSxSmHLGefrdK) @kostas I see, thx. I'll update https://gerrit.hyperledger.org/r/#/c/12309 to only improve UT coverage then. Also, `LastOffsetPersisted` is not inserted in the config path, neither is `lastCutBlockNumber` incremented. They are done in the patch I submitted, should I keep them?
@frbrkoala: No, by overlap here I mean that inevitably you will end up with brokers in DC.A replicating partition leaders in DC.B and vice-versa (otherwise DC.A and DC.B's brokers are not part of the same ordering service -- this is why I asked you about your setup a couple of days ago). So there is an overlap because you already have backups of DC.A in DC.B and vice-versa.
@guoger: Ah, that is something that we definitely missed. Go for it :) https://chat.hyperledger.org/channel/fabric-consensus?msg=hsfbAgY7z3hYsuD72
[ ](https://chat.hyperledger.org/channel/fabric-consensus?msg=sb9LamBzsh7n8JPGB) @kostas Ok. In that case we can use MirrorMaker only for syncing into a "hot standby" Kafka cluster, correct?
@frbrkoala: Right, but again -- you need to make sure that if DC.A goes down the Kafka cluster is writeable.
If it is not, then you will not be able to update the configuration of the ordering service so that it points every channel to brokers only in DC.B -- the ones in DC.B that you had there originally, plus the new ones that you wish to expose now as replacements to the DC.A ones.
But then the question is -- if your cluster _is_ writeable when DC.A goes down, why do you need MirrorMaker to begin with?
Which is why I asked the question 2 days ago.
And which is why I suggested that MirrorMaker is the wrong approach here.
Just use rack.id (where rack == datacenter in your case) and have enough redundancy so that the cluster is writeable when either DC goes down, and then you don't have to worry about updating the ordering service configuration or any such process.
Hello, if I modify some protobuff files in fabric like ab.proto, How can I generate the corresponding pb.go file, thanks
@Glen Run `make protos` from the base `fabric` directory
does this update all proto files by default?
It does
Ok got it
this target has no dependency on gotools, I need to make that first..
[ ](https://chat.hyperledger.org/channel/fabric-consensus?msg=bYQaGTYmjax4nZ5Kj) @kostas Great! Thanks a lot for the detailed explanations!
Has joined the channel.
is there an out of the box kafka example somewhere that works and uses the cli peer ?
@rangak Yes, please see `examples/e2e_cli`
(This was converted to use Kafka recently, so you will need to check for the current version on the master branch)
^^ https://github.com/hyperledger/fabric/blob/master/examples/e2e_cli/end-to-end.rst
ok thanks.
i used the 1.0.0 release cryptogen and configtxgen tools and it fails
do i need to build these for these tools for the current master in order for this to work ?
Yes, you must upgrade your version of `configtxgen`
thanks.
In our use case, the approval for a change must come from a specific user. If we do this through a REST API invoking a chaincode to update a field, how do we guarantee it has come only from that user? How do we protect this part?
Has joined the channel.
@sampath06 The chaincode api allows you to retrieve the identity of a proposal.
```
// GetCreator returns `SignatureHeader.Creator` (e.g. an identity)
// of the `SignedProposal`. This is the identity of the agent (or user)
// submitting the transaction.
GetCreator() ([]byte, error)
```
So, you may invoke this within your chaincode to verify if the user is authorized to do a particular action.
Alternatively (although it does nothing for your question at the moment), there is currently work being done to allow specific function ACLs to be defined outside of system chaincodes https://jira.hyperledger.org/browse/FAB-3621
I expect that at a future date, these ACLs will be extended to cover user chaincodes as well.
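As a toy illustration of the `GetCreator` check (the `creatorStub` interface and raw byte comparison are simplifications; real chaincode would use the stub passed to `Invoke` and unmarshal the `msp.SerializedIdentity` proto to inspect the certificate):

```go
package main

import (
	"bytes"
	"fmt"
)

// creatorStub models the one method of the chaincode stub interface we
// need here; in real chaincode you would use the stub passed to Invoke.
type creatorStub interface {
	GetCreator() ([]byte, error)
}

// authorized returns true if the proposal creator matches the expected
// identity bytes. In practice you would unmarshal the msp.SerializedIdentity
// proto and compare the MSP ID / certificate, not the raw bytes.
func authorized(stub creatorStub, expected []byte) (bool, error) {
	creator, err := stub.GetCreator()
	if err != nil {
		return false, err
	}
	return bytes.Equal(creator, expected), nil
}

// fakeStub stands in for the real stub so the sketch is self-contained.
type fakeStub struct{ creator []byte }

func (f fakeStub) GetCreator() ([]byte, error) { return f.creator, nil }

func main() {
	approver := []byte("serialized-identity-of-approver")
	ok, _ := authorized(fakeStub{creator: approver}, approver)
	fmt.Println(ok) // true
}
```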
@jyellick Thanks. If I have a portal/app where the user checks a document and then endorses it, how do I secure that communication. Would that still remain how it is done currently with web apps or is there a workflow or setup within hyperledger to cover that?
There is nothing built into hyperledger specifically for this. I would encourage you to use a standard user session management technique like Oauth
So I would maintain a mapping between my webapp user and the hyperledger client identity and have the chaincode verify the identity, right?
Correct
Thanks
Hi @jyellick @kostas, if I have multiple orderers and peers, how do I know which orderer delivers blocks to which peer, and how do I control that? Thanks
Ok, I've tested this with configtx.yaml, it works
@Glen The peers will randomly select among the orderers configured via `configtx.yaml`, and will fail over between them in the event that something goes wrong. [ ](https://chat.hyperledger.org/channel/fabric-consensus?msg=iyBpndKRPZcNBeZnD)
@jyellick Ok got it
Has joined the channel.
@jyellick When you have 4 Kafka nodes and 3 Zookeepers, if two Kafka nodes are down, can we still process transactions as long as one of the orderers is available? We need a minimum of three Kafka nodes for creating new channels or joining channels. Is that correct?
Hello. Is there a downside when decreasing the BatchTimeout, and increasing the MaxMessageCount? It seems to speed up the ordering service for big transaction throughputs.
[ ](https://chat.hyperledger.org/channel/fabric-consensus?msg=RSyCR8FmYSm2aYLha) @jyellick I was under the impression that the client SDK sends the request to the orderer and it is the responsibility of the client to select the orderer. Someone told me that the SDK sends it to the first orderer in the list and you have to resend the request if the orderer is down. I never thought the peer made an outbound connection to the orderer.
@gauthampamu This is really not sufficient information to determine. I suspect the answer is no though. Assuming you are using RF=3, ISR=2, you are only guaranteed that a single replica may crash while retaining 100% availability. [ ](https://chat.hyperledger.org/channel/fabric-consensus?msg=Jnu2NnX6Pri8C3BPH)
@SotirisAlfonsos The only real downside is that latency (the time between submitting a tx and having it appear in a block) will increase when the transaction throughput is lower. But yes, for a high load production system, I would expect that these parameters would be tuned up.
[ ](https://chat.hyperledger.org/channel/fabric-consensus?msg=TkuCxLTsX8Xyv8Tr2)
@gauthampamu The SDK generally invokes the `Broadcast` function of the orderer to inject transactions into the system. The peer generally calls the `Deliver` function of the orderer to receive blocks from the orderer.[ ](https://chat.hyperledger.org/channel/fabric-consensus?msg=BgyET3qcy6TiCKNYf)
But if we assume that your BatchTimeout=0.1sec and MaxMessageCount=200, wouldn't it be good for both cases? Depending on how fast the server can process transactions, those could be even lower or higher, respectively. Is this correct?
@SotirisAlfonsos Yes, your batch timeout will effectively limit `MaxMessageCount` at some point. (But in an ideal world, you would run on a fast server, with your MaxMessageCount high and your BatchTimeout low)
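For reference, the knobs being discussed live in the `Orderer` section of `configtx.yaml`; the values below are the hypothetical ones from this exchange, not recommendations:

```yaml
Orderer:
    OrdererType: kafka
    # Cut a block after this long even if MaxMessageCount isn't reached.
    BatchTimeout: 100ms
    BatchSize:
        MaxMessageCount: 200
        AbsoluteMaxBytes: 10 MB
        PreferredMaxBytes: 512 KB
```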
[ ](https://chat.hyperledger.org/channel/fabric-consensus?msg=6DJpauCRYMLyhsgWC) @jyellick If RF=3 and ISR=2, is that sufficient information
@gauthampamu Yes:
> If RF=3 and ISR=2, is that sufficient information
In this case, you may lose up to 1 Kafka broker and still be guaranteed that the ordering service remains 100% functional.
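The rule of thumb being applied here is simply replication factor minus minimum ISR size; a trivial sketch:

```go
package main

import "fmt"

// tolerableFailures: with a replication factor rf and a minimum in-sync
// replica set of minISR, up to rf-minISR brokers may crash while the
// topic partitions remain writeable.
func tolerableFailures(rf, minISR int) int {
	return rf - minISR
}

func main() {
	fmt.Println(tolerableFailures(3, 2)) // 1 broker may be lost
}
```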
Has joined the channel.
Has joined the channel.
I am trying to understand endorsements a little better. How is it actually used? In what case will or can a peer node refuse to endorse a transaction?
@sampath06 The #fabric-peer-endorser-committer is probably a better venue for this question. But, endorsements occur as the result of executing a chaincode. A client submits a proposal to a peer, a peer validates the proposal, executes to determine the results of the proposal, and returns an endorsed (signed) copy of the results. The client seeks endorsements from as many different peers as required (as dictated by the trust model for the network), then packages them up as a transaction. When it comes time for the transaction to be committed, all peers check to make sure that enough endorsements have been gathered for that transaction before modifying the world state with the effect of that transaction. Perhaps more concisely, an endorsement indicates that a peer vouches for the result of some smart contract, because of the results of executing a chaincode.
Hi everyone, is the consensus in Fabric used for orderers to agree on the orders of the transactions in each block, or used for orderers to agree on the fact the all the transactions in a block are valid?
@qizhang There are different levels of validity. The fabric ordering service ensures that only transactions from members authorized to transact on the channel make it into the blockchain. This is a first and most basic level of validity. Determining whether the transaction is valid to modify the world state is a (deterministic) decision left up to the peers.
This balance was struck for performance and scale. The basic validation by the orderer ensures that all transactions are at least auditable (their source can be identified), so if a client is sending invalid transactions to spam the network, it can be identified, and removed.
thanks @jyellick , but consensus means the orderers need to agree on something, so I am wondering what they are trying to agree on?
@qizhang They are agreeing on the order of transactions, and that the transactions are authorized for the channel
You may see under https://en.wikipedia.org/wiki/Consensus_(computer_science) that one of the common applications of consensus is for https://en.wikipedia.org/wiki/Atomic_broadcast
The orderer implements an atomic broadcast service for fabric transactions. (In fact, you'll notice that we even use the terms `Broadcast` and `Deliver` for the API names per the standard distributed systems notion of atomic broadcast)
@qizhang Note that if you are looking for consensus on transaction output, this is actually accomplished via the peer endorsement mechanism. The peers execute chaincode, to produce a proposal response (containing the effect of the transaction). Enough peers must agree on the result of the proposal to form a valid transaction. You can think at this point, that the result of the transaction has been consented on by the peers. It must then be ordered, and then with both consented order, and result, the peers may apply the transaction deterministically.
@jyellick So the consensus protocol will be used by both the endorsing peers and the orderers?
@qizhang There is a rule (endorsement policy) associated with each chaincode which determines the number and identity/role of peers in the network which must agree on the output of a transaction for this transaction to be considered valid. I would hesitate to apply the term "consensus protocol" to this procedure, but it is a way for the network to reach consensus about the output of a transaction.
Then, to achieve total order of the transactions, a different form of consensus is performed among the orderers. This consensus is very pluggable, and may change depending on your deployment. For v1.0, the only officially supported ordering consensus protocol is Kafka, which leverages a backing Kafka cluster for the total ordering of messages. We intend to officially support more consensus options in the future.
So, there is consensus on transaction output (endorsement), and consensus on transaction order (ordering).
@jyellick thanks
- in a Kafka-consensus orderer service setup - we say 3 OSN nodes, but since the decision to cut a block is performed by each OSN, we should be good to run with 2 OSNs (1 for fault tolerance) - Thanks.
@rahulhegde Yes, we recommend three OSNs. This is better for facilitating things like upgrade (you may take one OSN down for upgrade, while still having CFT), but strictly speaking, 2 OSNs achieves CFT of 1.
[ ](https://chat.hyperledger.org/channel/fabric-consensus?msg=2j9h7qXgE44Ak5wig) @jyellick
the recent file-sizing estimates may call the case for 3 OSNs into question (no prune support yet). Cost vs. value - upgrade will be a time-bound activity; is that the only factor behind the recommendation?
Consider that you have 2 OSNs, and you stop one to upgrade it. You now only have one OSN running. If that OSN crashes, your fabric network becomes unavailable.
[ ](https://chat.hyperledger.org/channel/fabric-consensus?msg=ZyZv2knyvzs22uu32) @jyellick
Thanks - can you help on these questions - I have posted also on fabric-peer-endorser-committer channel
- on starting the peer - how does peer get the orderer host information? I assume the connection is initiated by Peer to Orderer.
- in kafka-consensus orderer service setup - how do we know or control which peer-OSN communicates and I assume there would be switching to the available OSN to support fault tolerance
@rahulhegde
> - on starting the peer - how does peer get the orderer host information? I assume the connection is initiated by Peer to Orderer.
The orderer addresses are encoded into the genesis block, which are ultimately derived from `configtx.yaml`
> - in kafka-consensus orderer service setup - how do we know or control which peer-OSN communicates and I assume there would be switching to the available OSN to support fault tolerance
Correct, the peer will randomly pick an OSN, and on failure, connect to a different one
So when a peer is booted for the first time - it never makes a connection to the orderer. So my understanding is that the Peer Join operation (which passes the .block file as an argument) is the first time a connection is initiated to the orderer.
My understanding as of today - v1.0.0, we have support to add new OSN or Kafka-broker to the existing channel configuration block + system channel configuration block.
> So when a peer is booted for the first time - it never makes a connection to the orderer. So my understanding, the Peer Join operation (which passes .block) as a argument to it, is the first time where the connection is initiated to orderer.
Yes, a peer does not connect to the orderer until it receives a JoinChannel. The idea is that in the future, a single peer may connect to multiple ordering services.
> My understanding as of today - v1.0.0, we have support to add new OSN or Kafka-broker to the existing channel configuration block + system channel configuration block.
Yes, the configuration may be modified at runtime to add new OSNs or Brokers
[ ](https://chat.hyperledger.org/channel/fabric-consensus?msg=Euvoz5onPwBv77Doa) @jyellick So how often does the peer request the orderer's deliver function? Does the orderer make any connections to the peers? I mean: do we need an outbound connection from the orderer to the peer, or is it sufficient to have the outbound connection from the peer to the orderer?
@gauthampamu The peer connects to the orderer `Deliver` gRPC service. It holds a HTTP2 stream open, and receives blocks delivered asynchronously as they are produced.
What is the recommended system configuration for running the Kafka cluster. Let say we are running the Orderer, Zookeeper and Kafka broker docker container on the VM, what should be the CPU and Memory.
@gauthampamu There is no easy answer for a question like this. You will have to experiment with your particular workload and requirements.
Has joined the channel.
How do you download the .block file from the orderer?
Is there any way to get it without copying it onto the host box (assume we're not using the cli image)
@tom.appleyard you can use SDK to request a current config
@Vadim This being the .block to join channel?
What is the command for that?
I suppose you could repackage it into a block with some tool
not so straightforward at this point, actually
[ ](https://chat.hyperledger.org/channel/fabric-consensus?msg=TmxA2WveEY9qAv4td) @jyellick there is a solution being offered for https://jira.hyperledger.org/browse/FAB-5793 which introduces a blocking mechanism in the peer, because the task writing to the ledger takes longer than the initial processing of messages from the orderer when there are tens of thousands or more msgs from the orderer. Being asynchronous delivery, could the orderer deliveries overrun the peer (deliver client)?
Hello, has anyone attempted to use Fabric with other consensus mechanisms besides the solo and Kafka options? I want to see if I can get Fabric to use Swirlds as a consensus mechanism.
@Vadim Basically could I join a channel using hte SDK with whatever this function returns/
@tom.appleyard you don't need an orderer block to join the channel
you need a channel transaction
and you probably should ask this on #fabric-sdk-node
How come this is the cli command for it then? `peer channel join -b mychannel.block`
When you issue `peer channel create` it makes a .block file
but this is not an orderer block
anyway, ask on #fabric-sdk-node ... usually SDKs are more flexible than cli commands
@scottz No, it is not possible for the peer to be overrun by new blocks. The HTTP2 flow controls prevent it.[ ](https://chat.hyperledger.org/channel/fabric-consensus?msg=K4BigdM4ECcLEKPax)
@rbv Yes, I have seen some work using other consensus methods with fabric. The architecture is meant to be quite pluggable [ ](https://chat.hyperledger.org/channel/fabric-consensus?msg=KNiwczpCChxrzyYiq)
@Vadim there is no repackaging necessary. Simply get a `common.Block` proto structure back from your SDK implementation of choice, serialize it with protobuf, and write it to disk. [ ](https://chat.hyperledger.org/channel/fabric-consensus?msg=JjyZBkcpsYMfsijZ4)
@tom.appleyard `peer channel create` allocates the channel resources at the orderer. When the orderer does this, it creates a genesis block for your channel. Since the peer needs the genesis block to join a channel, as a matter of convenience `peer channel create` also fetches the block from the orderer. [ ](https://chat.hyperledger.org/channel/fabric-consensus?msg=2Aoy3nNM9YH7rPkKc)
are these works publicly accessible? I have some ideas of where I'd need to modify things but wanted to see other examples
@rbv I've encouraged the authors to publish, but they are still doing some polishing. Adding some other consensus plugin examples have been on my list, but I have not had time. You are certainly welcome to post questions here and we will do our best to help you along.
Ah, perfect thank you !! Let me do some more poking around in the HF code so I can know what exactly to ask.
Has joined the channel.
@jyellick
whenever an Orderer receives a channel configuration block changing the CRL, is the CRL a complete list for the MSP or is it a delta?
https://github.com/hyperledger/fabric/blob/release/msp/mspimpl.go#L775 - is the code that gets called and indicates it is the complete list always?
@rahulhegde Yes, it is a complete list.
I had a look at how new Organizations can be added to an existing Consortium and read that the network has to get stopped and that a new genesis block has to get created and distributed to all participating Orderers. Is there any timeline for when we can expect to be able to add Organizations to an existing Consortium without restarting the network with a new genesis block?
@LeoKotschenreuther
> and read that the network has to get stopped and that a new genesis block has to get created and distributed to all participating Orderers
This is patently false. Where did you read this?
> Is there any timeline for when we can expect to be able to add Organizations to an existing Consortium without restarting the network with a new genesis block?
You may do this today, it is well supported and tested, and done in our bddtests.
@jyellick There was a comment in the fabric channel here in the Rocket Chat: https://chat.hyperledger.org/channel/fabric?msg=PWeYWmANXBsjbzYpo
[ ](https://chat.hyperledger.org/channel/fabric-consensus?msg=K3oetiYo56zu5ysF7) @jyellick
[GEN] For my understanding - is this a complete list because the CA can decide in the future to remove a certificate from the CRL?
Sorry I missed that @LeoKotschenreuther but it is simply not true. You may modify the consortium definition at any point after bootstrapping. This has always been the case.
@jyellick How would that work? I had a look at the hyperledger docs about the configtxlator at http://hyperledger-fabric.readthedocs.io/en/latest/configtxlator.html and they explain how to add an organization to a channel after it has already been created. I think this is also part of the BDD tests. What about creating a new Organization (creating a new set of keys) and adding that to an existing Consortium?
@LeoKotschenreuther The process is virtually identical. Simply fetch the current ordering system channel configuration. Modify the consortium definition just as you would if you were adding an org to a channel (but instead of adding to the "Application" section, add it to the "Consortiums" section). Then compute the update and submit. Your consortium definition will be updated and you may use it, no shutting down of any OSNs required :slight_smile:
Thanks a lot @jyellick, I will give it a try. Are there plans to simplify the whole process? It's really easy to understand how to add another Organization to the configtx.yaml file. However, the JSON file you get from decoding the current genesis block looks a lot more complex and it's not immediately obvious where to adjust the file for adding an Organization.
@LeoKotschenreuther Yes, hopefully in the future, the SDKs will add some helpful functions. I'd also love to see someone add a simple UI to `configtxlator` which could be used for inspecting and editing. We are always looking for help if someone is interested!
What are the hard disk requirements for the Kafka brokers when you have a Kafka cluster with 4 nodes and 3 ZK? Let's say it is 10GB per year for CouchDB, it is 2KB per transaction, and we have close to 5 million transactions per year.
Let's say we are using the default replication factor of 3 and an in-sync replica value of 2.
@gauthampamu Based on the information you provided. There is a replication factor of 3. This means the data will be replicated 3 times. So, if you are sending 2Kb per transaction, with 5 million tx per year, the lower end of your required disk space would be:
```
5 Million Tx * 2048 bytes per TX * 3 replicas = roughly 30 billion bytes.
```
Or, roughly 30 GB of storage. This would of course be on the low end, there will be some other metadata costs etc. Add whatever buffer you feel comfortable with. For my back of the envelope math I would call 30 GB to 40 GB for the purposes of giving each replica 10 GB of storage. I'd then double it or so as a margin of safety, and conclude that about 20GB of disk space per broker is appropriate.
Then again, I'm a software developer, not a deployment architect, I have no idea what real industry standard margins of error are when it comes to right sizing storage, and it will likely come down to your risk tolerance.
@gauthampamu Based on the information you provided. There is a replication factor of 3. This means the data will be replicated 3 times. So, if you are sending 2Kb per transaction, with 5 million tx per year, the lower end of your required disk space would be:
```
5 Million Tx * 2048 bytes per TX * 3 replicas = roughly 30 billion bytes.
```
Or, roughly 30 GB of storage. This would of course be on the low end; there will be some other metadata costs, etc. Add whatever buffer you feel comfortable with. For my back-of-the-envelope math I would round up from 30 GB, giving each replica roughly 10 GB of storage. I'd then double that as a margin of safety, and conclude that about 20 GB of disk space per broker is appropriate.
Then again, I'm a software developer, not a deployment architect, I have no idea what real industry standard margins of error are when it comes to right sizing storage, and it will likely come down to your risk tolerance.
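To make the arithmetic above concrete, here is a minimal sketch of the same estimate (the transaction volume, per-transaction size, replication factor, and 2x safety margin are the figures assumed in this thread):

```go
package main

import "fmt"

func main() {
	const (
		txPerYear  = 5000000 // assumed transaction volume from the thread
		bytesPerTx = 2048    // ~2 KB per transaction
		replicas   = 3       // Kafka default replication factor
	)
	total := int64(txPerYear) * bytesPerTx * replicas
	perReplica := total / replicas
	perBroker := 2 * perReplica // double as a margin of safety
	fmt.Printf("total on disk: ~%.1f GB\n", float64(total)/1e9)
	fmt.Printf("per broker (2x margin): ~%.1f GB\n", float64(perBroker)/1e9)
}
```

Running it reports roughly 30 GB total and about 20 GB per broker, matching the estimate above.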
I have seen TLS configuration for the Peer, Orderer, etc., but do we have a similar option for the Kafka brokers and ZooKeeper?
@jyellick Thanks for the response
> I have seen TLS configuration for the Peer, Orderer, etc., but do we have a similar option for the Kafka brokers and ZooKeeper?
@gauthampamu Yes, please see https://github.com/hyperledger/fabric/blob/master/docs/source/kafka.rst step 7.
Has joined the channel.
1. A channel is initially configured with certificates for organizations P1 and P2. The channel configuration has 2 sections, one for the P1 MSP and another for the P2 MSP. The certificate for P1 gets revoked and appears in the CRL published by the CA. Should we update both MSP sections with the contents of the CRL, or only the P1 MSP section?
2. As an extension to the above question, if the certificate of P3 is revoked, is it acceptable to add the revocation list in the configuration update of channels that were not initially configured with the certificate?
Hello. I noticed that ordering a transaction takes around 300 milliseconds in both Kafka and solo (from the time the request leaves the client until the block is added on a peer). This is with a batch timeout of 0.01 seconds. I was wondering where this delay comes from? Is there a way to further reduce the ordering time?
@YashGanthe
> Should we update both the MSP sections with the revocation list with the contents of the CRL or should we update on the P1 MSP section?
No, you need only update the CRL for the issuing MSP.
> if the certificate of P3 is revoked, is it acceptable to add the revocation list in the configuration update of channels that were not initially configured with the certificate?
The certificates are not generally embedded in the config. Instead, the CAs are embedded. So I'm not sure this question makes sense.
@SotirisAlfonsos
> Hello. I noticed that ordering a transaction takes around 300 milliseconds in both Kafka and solo (from the time the request leaves the client until the block is added on a peer). This is with a batch timeout of 0.01 seconds. I was wondering where this delay comes from? Is there a way to further reduce the ordering time?
You are describing a network latency problem. The flow you describe requires:
```
Client -> Orderer (50 ms)
Orderer -> Kafka (50 ms)
Kafka -> Orderer (50 ms)
BatchTimerStarted (100 ms)
BatchTimerPops, create block
Orderer -> Peer (50 ms)
```
I am making up the 50 ms number, but it seems reasonable for network latency between components. You may of course use pipelining, so that many requests are in flight at the same time. Thus, throughput is not constrained by latency.
@jyellick Thank you for the reply. Everything is set up locally, and at the moment I am not queuing transactions. Each communication between Docker containers takes less than 10 milliseconds (6 on average) and the batch timeout is 10 ms. But I see a delay of almost 100 ms right before the "Indexing block" print, on the orderer and also on the peers, which accounts for 200 ms of the total time.
@SotirisAlfonsos Locally, on my system for the orderer with Solo and a batch timeout of 10ms
```
2017-08-17 10:09:00.521 EDT [orderer/server/main] Broadcast -> DEBU 115 Starting new Broadcast handler
...
2017-08-17 10:09:00.522 EDT [orderer/common/broadcast] Handle -> DEBU 119 [channel: testchainid] Broadcast has successfully enqueued message of type MESSAGE
2017-08-17 10:09:00.522 EDT [orderer/common/blockcutter] Ordered -> DEBU 11a Enqueuing message into batch
2017-08-17 10:09:00.522 EDT [orderer/solo] main -> DEBU 11b Batch timer expired, creating block
...
2017-08-17 10:09:00.528 EDT [fsblkstorage] indexBlock -> DEBU 12c Indexing block [blockNum=1, blockHash=[]byte{0x1c, 0xaf, 0xb9, 0xf, 0x5c, 0xb6, 0xec, 0x41, 0x30, 0x3a, 0xe8, 0xc0, 0xb8, 0xc8, 0x1b, 0x43, 0x33, 0xf7, 0x5b, 0xa7, 0xd9, 0x8e, 0x47, 0x3f, 0x3e, 0x9b, 0xb4, 0x76, 0x67, 0x2a, 0x91, 0x2f} txOffsets=
txId= locPointer=offset=70, bytesLength=2099
...
2017-08-17 10:09:00.528 EDT [orderer/multichannel] commitBlock -> DEBU 12e [channel: testchainid] Wrote block 1
```
So, from the time of initial gRPC connection, to the time the block is committed to disk, I see 7ms of elapsed time.
```
2017-08-17 13:00:24.689 UTC [orderer/main] Broadcast -> DEBU 1a83 Starting new Broadcast handler
2017-08-17 13:00:24.689 UTC [orderer/common/broadcast] Handle -> DEBU 1a84 Starting new broadcast loop
...
2017-08-17 13:00:24.712 UTC [orderer/solo] main -> DEBU 1aad Batch timer expired, creating block
...
2017-08-17 13:00:24.845 UTC [fsblkstorage] indexBlock -> DEBU 1ac1 Indexing block [blockNum=30, blockHash=[]byte{0x7, 0x5c, 0x38, 0xad, 0x73, 0xf7, 0x4f, 0x1a, 0xb5, 0x9a, 0x96, 0x1, 0x23, 0x36, 0x72, 0xdf, 0xb8, 0x4, 0x81, 0x94, 0xd4, 0xcd, 0x53, 0x5, 0xd4, 0x74, 0x4a, 0xb7, 0x49, 0xab, 0x98, 0x3a} txOffsets=
txId=52f01f0e60a5f1b45a6644fd0403f3dccb31ae904d1aee42c51e73007d173740 locPointer=offset=70, bytesLength=6022
]
2017-08-17 13:00:24.845 UTC [fsblkstorage] updateCheckpoint -> DEBU 1ac2 Broadcasting about update checkpointInfo: latestFileChunkSuffixNum=[0], latestFileChunksize=[247848], isChainEmpty=[false], lastBlockNumber=[30]
2017-08-17 13:00:24.845 UTC [orderer/multichain] WriteBlock -> DEBU 1ac3 [channel: mychannel] Wrote block 30
```
Was there a restructuring of that part of the code from beta to rc? Or is my system too slow?
Ah, I did not realize that you were on such an old level of code.
Still, this seems like an environment problem to me.
I am running the orderer locally on my laptop to see those low latencies. It might be overhead introduced by docker as well
What platform are you executing on?
Ubuntu 14.04 server with 4 CPUs and 8 GB of memory.
Is this a VM or bare metal?
bare metal. But it easily handles the transactions. Seemingly at least.
Those numbers are very surprising to me for bare metal. 100+ ms of latency from receiving the broadcast to the batch timer expiring seems excessive, even if your hardware were very slow.
I was expecting perhaps that it was VM which was sharing CPU time and being swapped out for some period of time
Can you try running locally instead of inside docker? Can you build fabric binaries locally?
[ ](https://chat.hyperledger.org/channel/fabric-consensus?msg=BKQB4iFc4qcg7epLD) @jyellick Also, do we need to account for Kafka logs? I have heard that Kafka logs will keep growing indefinitely when we don't set any age limit for the messages. How much disk should we allocate if the size of the CouchDB is around 10 GB per year?
The idea was that a laptop would not be able to handle it, that is why we decided to build on the server. What do you mean "instead of inside docker?"
@SotirisAlfonsos If you pull the fabric code, you may pull and run the `orderer` binary as a local process as well as a sample client. We can then try to replicate the experiment I ran on my laptop to identify whether it is something related to running inside docker, or perhaps something else.
@jyellick Thanks, I will try to do that. Getting some errors for now. As soon as i manage to do it i will get back to you.
Has joined the channel.
Should I be able to see my TLS ca certs in the configtxgen -inspectBlock output if they existed in the tlscacerts directory when configtxgen was executed?
@jmcnevin Which version of `configtxgen`?
i'm using the fabric-tools 1.0.1 docker container
it says "development build"
@jmcnevin Unfortunately that version of `configtxgen` is much less verbose in its output than the current one. You may instead use `configtxlator` from the same version, or build `configtxgen` from master. Any of these options should allow you to see the TLS certs.
ok, I can see the cert with the configtxlator output, thanks :)
Has joined the channel.
@jyellick i have some questions about the fabric channel config
i have seen the page
http://hyperledger-fabric.readthedocs.io/en/latest/configtx.html
but i still have some questions about it
the config_group is recursive
when the orderer starts
it uses the genesis block for its config
however the client (sdk) still passes a configtx to the orderer
?
i don't know why
i used the configtxlator tool to view the genesis block in the e2e (i have run the network_setup.sh)
@asaningmaxchain Please see http://hyperledger-fabric.readthedocs.io/en/latest/configtx.html#channel-creation
Essentially, the channel creation tx sets the "Application" section of the new config. This is composited with the configuration from the ordering system channel genesis block.
Message Attachments
https://chat.hyperledger.org/channel/fabric?msg=aCYcm8wATCXtvvNug
i used the tool you provided (configtxlator) to view the channel.tx and genesis.block
i will send two pictures to you
wait a moment
Message Attachments
i think you are familiar with it
@vikas_hada
> Can someone please confirm whether multi-orderer(kafka-based) channel creation is allowed thorugh cli bash provided in the `fabric-samples` ->`first network`
Yes, the channel creation process is the same whether you are using solo or Kafka.
Has joined the channel.
but i find it difficult to understand
could you tell me how fabric builds a channel
what components are included
and how i could modify it the way i want to
@jyellick
@asaningmaxchain This question you ask is very broad
http://hyperledger-fabric.readthedocs.io/en/latest/configtx.html
http://hyperledger-fabric.readthedocs.io/en/latest/configtxgen.html
http://hyperledger-fabric.readthedocs.io/en/latest/configtxlator.html
http://hyperledger-fabric.readthedocs.io/en/latest/policies.html
What specifically would you like to change in the channel configuration?
like add a peer
To add a peer organization, you need to add a section which defines that organization to the JSON at the same level as your other application orgs.
You may generate this JSON using `configtxgen`, then copy it into your config, and use `configtxlator` to generate an update, which will modify your channel
You may find an example https://github.com/hyperledger/fabric/blob/release/examples/configtxupdate/reconfig_membership/script.sh
Note that this example simply copies an org definition and renames it using the `jq` command on line https://github.com/hyperledger/fabric/blob/release/examples/configtxupdate/reconfig_membership/script.sh#L63
Instead you would need to copy the correct org definition
it just produces the config, so how can i use it in fabric
to make it take effect in fabric
The example I sent you does this. It submits the configuration update to the channel and modifies the membership.
thx
i try it
it uses the `peer channel fetch/update` cmd to finish the task
so i must execute the `make peer` cmd in the fabric folder
@asaningmaxchain It depends on how you have configured your network. If you are using the e2e/byfn.sh networks, there is a peer cli container
the byfn.sh provides the cli container via docker-compose-cli.yaml
?
I thought so, but I have not used it in some time, I could be incorrect.
ok
i try it
@jyellick thx
Has joined the channel.
[ ](https://chat.hyperledger.org/channel/fabric-questions?msg=Gh2hDKk8GtwNu7KZD) @CodeReaper This will be the better channel to address your question.
Has joined the channel.
the current system (kafka) is crash fault tolerant (not byzantine).
there is work planned to implement SBFT at some point in the future
@jyellick i decoded the genesis.block produced in the e2e and i have seen the page (http://hyperledger-fabric.readthedocs.io/en/latest/configtx.html#channel-creation)
Message Attachments
the picture shows the config from the genesis.block
which defines a channel_group which is a ConfigGroup
and then it contains values and policies
the policies govern who has the right to modify the config
?
take an example: if i want to modify the channel_group, does that mean i must have the admin right?
Each config element has a `mod_policy` field
By default, these are all set to `Admins`
This means, look at the `policies` map _for the current group_ to find the policy which governs its modification
You will see some, such as the `OrdererAddresses` whose `mod_policy` is actually `/Channel/Orderer/Admins`
Because it begins with `/Channel`, you look up the policy by finding the `Orderer` group, then looking up its `Admins` policy
```
"HashingAlgorithm": {
    "mod_policy": "Admins",
    "value": {
        "name": "SHA256"
    }
},
"OrdererAddresses": {
    "mod_policy": "/Channel/Orderer/Admins",
    "value": {
        "addresses": [
            "orderer.example.com:7050"
        ]
    }
}
```
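The relative-versus-absolute lookup just described can be sketched like this (the `Group` type and `lookup` helper are hypothetical simplifications, not Fabric's real policy manager):

```go
package main

import (
	"fmt"
	"strings"
)

// Group is a minimal stand-in for a config group: named sub-groups
// plus a policies map (policy bodies reduced to strings here).
type Group struct {
	Groups   map[string]*Group
	Policies map[string]string
}

// lookup resolves a mod_policy string relative to `current`, or from
// the root `channel_group` when the path begins with "/Channel".
func lookup(root, current *Group, modPolicy string) (string, bool) {
	g := current
	path := modPolicy
	if strings.HasPrefix(modPolicy, "/") {
		g = root
		path = strings.TrimPrefix(modPolicy, "/Channel/")
	}
	parts := strings.Split(path, "/")
	for _, p := range parts[:len(parts)-1] {
		g = g.Groups[p]
		if g == nil {
			return "", false
		}
	}
	pol, ok := g.Policies[parts[len(parts)-1]]
	return pol, ok
}

func main() {
	orderer := &Group{Policies: map[string]string{"Admins": "MAJORITY of orderer org Admins"}}
	channel := &Group{
		Groups:   map[string]*Group{"Orderer": orderer},
		Policies: map[string]string{"Admins": "MAJORITY sub_policy Admins"},
	}
	fmt.Println(lookup(channel, channel, "Admins"))                  // relative: found in the current group
	fmt.Println(lookup(channel, orderer, "/Channel/Orderer/Admins")) // absolute: walked from the root
}
```

So `HashingAlgorithm`'s `mod_policy` of `Admins` resolves against the `channel_group` it sits in, while `OrdererAddresses`' `/Channel/Orderer/Admins` walks from the root.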
i know the type=1
yes
the config shows me
but why is the HashingAlgorithm's mod_policy just Admins
?
I'm not sure what the question is?
not /Channel/Orderer/Admins
as you see
the HashingAlgorithm's mod_policy is Admins, not /Channel/Orderer/Admins
Yes, this is the default configuration, though it obviously could be changed
so if i want to modify the HashingAlgorithm i must have the /Channel/Admins permission?
Correct
You must collect signatures which satisfy the policy defined for `/Channel/Admins`
ok, that means each element in the config has a mod_policy which is evaluated in the context of its group
unless it starts with /
Yes, if you were to modify the `HashingAlgorithm` and the `OrdererAddresses`, the signatures which accompanied the update would have to satisfy both `/Channel/Admins` and `/Channel/Orderer/Admins` (though by default, satisfying `/Channel/Admins` also means satisfying `/Channel/Orderer/Admins`)
And yes, the `mod_policy` requires the context of the group to know which policy to look up.
(unless it starts with `/`)
ok, let me take an example: if i want to modify the HashingAlgorithm then i need the /Channel/Admins permission
how can the orderer judge that what the client passes to it satisfies /Channel/Admins?
First, look up the definition of the policy called `Admins` defined in the `channel_group`'s `policies` map
```
"policies": {
    "Admins": {
        "mod_policy": "Admins",
        "policy": {
            "type": 3,
            "value": {
                "rule": "MAJORITY",
                "sub_policy": "Admins"
            }
        }
    }
}
```
@jyellick i will write an email to you about how i understand fabric's channel config
let me take this for example
@asaningmaxchain You may use three backticks in a row, then paste the code, then three more backticks to get output
```
like
this
that
is easier to read
```
yes
I just edited your post to do so, the indenting is helpful
yes
how can i do that
i don't know how to edit
Click the 'gear' icon when you hover over your post, then click the 'pencil' icon to edit it. Or just hit 'up' on your keyboard
In any case. There is your policy definition for `/Channel/Admins`. Note that it is type `3`. From `fabric/common/policies.proto` you may see the policy type defined:
```
enum PolicyType {
UNKNOWN = 0; // Reserved to check for proper initialization
SIGNATURE = 1;
MSP = 2;
IMPLICIT_META = 3;
}
```
You will generally either see policies of type 3 or 1, but we may add additional types in the future.
`IMPLICIT_META` policies look at the policy definitions in the sub-groups when they evaluate.
So, in your case, you can see that the `/Channel/Admins` policy requires a majority of the `Admins` policies from `/Channel/*/Admins`.
This is because of:
```
{
"rule": "MAJORITY",
"sub_policy": "Admins"
}
```
So, look up the `Admins` policy definitions for each of the entries in the `channel_group` `groups` map
(hint: there are only two of them)
the only two are the Consortium admin and the Orderer admin
?
Ah, so you are looking at an ordering system channel genesis block, not a normal channel genesis block. It would probably be more helpful to look at one for a normal channel, but we can do either.
should i use the configtxgen tool and sampleconfig to produce a normal genesis block
?
This would be to follow the normal channel creation process, as seen in the end-to-end tests or samples. You will find a file called `mychannel.block` which is the result of creating a new channel.
because i have been reading the orderer source code and debugging it
You may see a sample JSON from a normal channel here https://github.com/jyellick/fabric-gerrit/blob/json-structures/examples/configtxupdate/reconfig_membership/example_output/config.json
yes
We can proceed to analyze the `/Channel/Admins` policy from there if you like
yes
i like
So, find the two sub-policies of name `Admins`
ok
They are `/Channel/Orderer/Admins` and `/Channel/Application/Admins`, do you see?
yes
So, for `/Channel/Admins` to be satisfied, both of these policies must be satisfied (because it requires `MAJORITY`). So, look at the definition for `/Channel/Application/Admins` and paste it here?
too long
It should be short, I mean:
```
"Admins": {
"mod_policy": "Admins",
"policy": {
"type": 1,
"value": {
"identities": [
{
"principal": {
"msp_identifier": "DEFAULT"
}
}
],
"rule": {
"n_out_of": {
"n": 1,
"rules": [
{
"signed_by": 0
}
]
}
}
}
}
},
```
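To tie the two policy types together, here is a toy model (a hypothetical helper, not Fabric's real evaluator) of how an `IMPLICIT_META` rule aggregates the results of the sub-groups' policies, which is why `MAJORITY` over two sub-groups requires both:

```go
package main

import "fmt"

// evalImplicitMeta models rule evaluation over a named sub-policy:
// `satisfied` holds, per sub-group, whether that group's sub-policy
// (e.g. its "Admins" signature policy) was satisfied.
func evalImplicitMeta(rule string, satisfied []bool) bool {
	n := 0
	for _, s := range satisfied {
		if s {
			n++
		}
	}
	switch rule {
	case "ANY":
		return n >= 1
	case "ALL":
		return n == len(satisfied)
	case "MAJORITY":
		return n > len(satisfied)/2
	}
	return false
}

func main() {
	// Two sub-groups (Application, Orderer): MAJORITY of 2 means both.
	fmt.Println(evalImplicitMeta("MAJORITY", []bool{true, false})) // false
	fmt.Println(evalImplicitMeta("MAJORITY", []bool{true, true}))  // true
}
```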
```
"policies": {
"Admins": {
"policy": {
"type": 3,
"value": {
"rule": "MAJORITY",
"sub_policy": "Admins"
}
}
},
```
yes
See that this definition is the same as for `channel_group`, but, now it is at a new group
So, we need the majority of the sub-policies named `Admins` for the groups defined under `/Channel/Application`. There should be two.
oh no
that's too complex
my brain is boom
This is the last step. The bottom-level policies are evaluated directly against MSP principals. You picked a top-level config element, so it requires many steps. If you had picked a lower one, it would be more direct.
Let me digest it
my brain is small
No problem, it is slightly complex, but very powerful, and I think you will find it easy once you understand.
yes, i also find it's most important for understanding fabric
@jyellick i will write an email to you today about how i understand the fabric channel config
ok
?
my email is cxa13241930467@163.com
Okay, I will look for it
your email?
jyellick@us.ibm.com
@jyellick are you still there
?
Message Attachments
let me give you an example
and then you can point out where i went wrong
as the picture shows
the genesis block contains a channel_group
because channel_group is a ConfigGroup, which has a recursive definition
```
message ConfigGroup {
    uint64 version = 1;
    map<string,ConfigGroup> groups = 2;
    map<string,ConfigValue> values = 3;
    map<string,ConfigPolicy> policies = 4;
    string mod_policy = 5;
}
```
because the channel_group is a ConfigGroup
[ ](https://chat.hyperledger.org/channel/fabric-consensus?msg=56xP3DHt29Fzi83S7)
I know a lot
@jyellick
?
i re-read our discussion
It is late in my time zone (01:18 right now) but please feel free to leave comments and I will respond
my english is not good, so i will try my best to describe my question, so you can take a little time to read it and answer it
@jyellick thx a lot
@asaningmaxchain Please take your time. I understand the difficulty a second language must pose, but taking time to ask good questions will help everyone. I really would like to convey knowledge about the configtx to more parties; I think it is important that more people understand.
@jyellick ok
good night
[ ](https://chat.hyperledger.org/channel/fabric-consensus?msg=nKLNQeEoRJSqiQd8p) @jyellick There define one
i think the channel config is like a tree: the leaves define the policies which judge the msp_principal, and the higher levels use the leaves to build permissions
@jyellick
Has joined the channel.
Has left the channel.
@asaningmaxchain Yes, the config is very much a tree structure, and leaves must define policies in terms of principals, while higher elements may define their policies in terms of lower policies.
@jyellick i have some question about the source code in the orderer module
```
chain := newChainSupport(createStandardFilters(ledgerResources),
	ledgerResources,
	consenters,
	signer)
```
let me give you an example
the above shows creating a chain
the method createStandardFilters(ledgerResources)
builds some filters
```
filter.EmptyRejectRule,
sizefilter.MaxBytesRule(ledgerResources.SharedConfig().BatchSize().AbsoluteMaxBytes),
sigfilter.New(policies.ChannelWriters, ledgerResources.PolicyManager()),
configtxfilter.NewFilter(ledgerResources),
filter.AcceptRule,
```
the filter order is important
```
func (rs *RuleSet) Apply(message *ab.Envelope) (Committer, error) {
	for _, rule := range rs.rules {
		action, committer := rule.Apply(message)
		switch action {
		case Accept:
			return committer, nil
		case Reject:
			return nil, fmt.Errorf("Rejected by rule: %T", rule)
		default:
		}
	}
	return nil, fmt.Errorf("No matching filter found")
}
```
if one rule meets the requirement
it returns the committer
so if the client sends a message to the orderer
it uses the filters to judge it
when the first one matches
it returns a committer
and the other filters will not be used
do you understand what i am saying?
I understand what you say, but I don't understand the question?
my mistake
i looked at the source code again
it's right
what's the relationship between a channel and a chain?
@jyellick
@asaningmaxchain There is a one to one relationship between channels and blockchains.
You will see in the code many places the word `chain` is used, when probably `channel` should be used. This is because much of the code was written before the word `channel` was finalized.
yes
ok i understand
so the normal process is: start the orderer (which provides two services, broadcast and deliver), then use the sdk to send a channel config message to the orderer (broadcast service); the peer receives blocks from the orderer (deliver service) and commits them to its local ledger
and then if the client wants to make a transaction, the client should go to the peer for endorsement and send the endorsed transaction to the orderer
Yes, config updates go directly to the orderer. Standard transactions go first to the peer for endorsement, then for ordering. In either case, once the transaction goes to ordering, it will be committed to all peers.
the orderer provides the deliver service, so does the peer pull blocks from the orderer, or does the orderer push blocks to the peer?
The peer initiates a connection to the orderer via `Deliver`, then waits on the stream and receives blocks asynchronously as they are created, so long as the connection is established.
ok
i have a question about the orderer ledger
i see the orderer module source code
i use the solo consensus for example
```
batches, committers, ok := ch.support.BlockCutter().Ordered(msg)
```
the above shows that once the orderer receives a msg, the chain needs to order it
```
type receiver struct {
	sharedConfigManager   config.Orderer
	filters               *filter.RuleSet
	pendingBatch          []*cb.Envelope
	pendingBatchSizeBytes uint32
	pendingCommitters     []filter.Committer
}
```
the receiver contains a pendingBatch and pendingCommitters
if the pendingBatch grows past r.sharedConfigManager.BatchSize().MaxMessageCount
it should be cut:
take the pendingBatch and pendingCommitters from the receiver
and then clear the receiver's pendingCommitters and pendingBatch
however the pendingCommitters are produced by the filters
`committer, err := r.filters.Apply(msg)`
```
func (rs *RuleSet) Apply(message *ab.Envelope) (Committer, error) {
	for _, rule := range rs.rules {
		action, committer := rule.Apply(message)
		switch action {
		case Accept:
			return committer, nil
		case Reject:
			return nil, fmt.Errorf("Rejected by rule: %T", rule)
		default:
		}
	}
	return nil, fmt.Errorf("No matching filter found")
}
```
once a rule accepts, it returns the committer
so i took a look at the code which sets the filters
```
logger.Debugf("Starting chain: %s", chainID)
chain := newChainSupport(createStandardFilters(ledgerResources),
	ledgerResources,
	consenters,
	signer)
```
```
return filter.NewRuleSet([]filter.Rule{
	filter.EmptyRejectRule,
	sizefilter.MaxBytesRule(ledgerResources.SharedConfig().BatchSize().AbsoluteMaxBytes),
	sigfilter.New(policies.ChannelWriters, ledgerResources.PolicyManager()),
	configtxfilter.NewFilter(ledgerResources),
	filter.AcceptRule,
})
```
the last one is filter.AcceptRule
while the committer is NoopCommitter
so the generated block isn't committed to the orderer ledger
?
@jyellick
I am reading
```for i, batch := range batches {
	block := ch.support.CreateNextBlock(batch)
	ch.support.WriteBlock(block, committers[i], nil)
}```
ok
The `Committer` is a bit of a bad name for this. It is simply code which executes when the block commits. This notion is changed in the v1.1 codebase.
ok
```func (cs *chainSupport) WriteBlock(block *cb.Block, committers []filter.Committer, encodedMetadataValue []byte) *cb.Block {
	for _, committer := range committers {
		committer.Commit()
	}
	// Set the orderer-related metadata field
	if encodedMetadataValue != nil {
		block.Metadata.Metadata[cb.BlockMetadataIndex_ORDERER] = utils.MarshalOrPanic(&cb.Metadata{Value: encodedMetadataValue})
	}
	cs.addBlockSignature(block)
	cs.addLastConfigSignature(block)
	err := cs.ledger.Append(block)
	if err != nil {
		logger.Panicf("[channel: %s] Could not append block: %s", cs.ChainID(), err)
	}
	logger.Debugf("[channel: %s] Wrote block %d", cs.ChainID(), block.GetHeader().Number)
	return block
}```
the orderer writes the block to the local ledger
my mistake
```for _, committer := range committers {
	committer.Commit()
}```
No problem, the name is misleading
however this code should be removed
yes
so the block is generated in the orderer
?
Correct, the orderer is the only entity which generates blocks
another question: the ordererOrg contains two orderers, which we define as orderer0 and orderer1. client1 sends 100 tx to orderer0 and client2 sends 100 tx to orderer1. each orderer cuts batches, so how is the order ensured?
we use kafka for consensus
Yes. Each orderer validates the transactions, then forwards them to the Kafka broker for ordering. Once the transactions have a total order, the orderers deterministically cut them into blocks.
so each orderer has a block
?
Yes, each orderer computes the block independently, but the code is designed carefully so that they will be the same block.
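To illustrate why independently computed blocks come out identical: given the same total order from Kafka and the same batch parameters, cutting is a pure function of its input. A toy sketch (illustrative names, not the real Kafka consenter):

```go
package main

import (
	"crypto/sha256"
	"fmt"
)

// cutBatches deterministically splits a totally ordered message
// stream into batches of at most maxMessageCount messages.
func cutBatches(ordered []string, maxMessageCount int) [][]string {
	var batches [][]string
	var pending []string
	for _, msg := range ordered {
		pending = append(pending, msg)
		if len(pending) >= maxMessageCount {
			batches = append(batches, pending)
			pending = nil
		}
	}
	if len(pending) > 0 {
		batches = append(batches, pending)
	}
	return batches
}

// batchHash stands in for the block hash computed over a batch.
func batchHash(batch []string) [32]byte {
	h := sha256.New()
	for _, msg := range batch {
		h.Write([]byte(msg))
	}
	var out [32]byte
	copy(out[:], h.Sum(nil))
	return out
}

func main() {
	// The total order as delivered by the Kafka partition to both orderers.
	stream := []string{"tx1", "tx2", "tx3", "tx4", "tx5"}

	orderer0 := cutBatches(stream, 2)
	orderer1 := cutBatches(stream, 2)

	for i := range orderer0 {
		fmt.Printf("block %d identical: %v\n", i, batchHash(orderer0[i]) == batchHash(orderer1[i]))
	}
}
```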
the peer may receive the same block?
The peer may retrieve the block from either orderer, and the blocks will be identical
ok
i have a better understanding of fabric now, @jyellick thx a lot
i am going to bed
You are welcome @asaningmaxchain happy to help
When using kafka for consensus with the default configuration, is there any security to restrict who can publish to a kafka topic? Is there anything to stop a non-orderer container on the docker network from publishing rubbish to a topic?
Has joined the channel.
hi @jyellick i have a question. i want to use orderer/sample_clients/broadcast_timestamp to test the performance and to test what i've learned
to start the orderer i set some parameters in sampleconfig/orderer.yaml: genesisMethod=file and genesisFile=genesis.block, where the genesis.block was generated by the e2e test and put into the sampleconfig folder
and i copied the orderer msp to the sampleconfig folder
and i started ./broadcast_timestamp
and it tells me the signing failed
i realized i should have the /channel/Writers permission
so i parsed the genesis block
```k=[Groups] /Channel
k=[Policy] /Channel/Admins
k=[Policy] /Channel/Readers
k=[Policy] /Channel/Writers
k=[Values] /Channel/BlockDataHashingStructure
k=[Values] /Channel/HashingAlgorithm
k=[Values] /Channel/OrdererAddresses
k=[Groups] /Channel/Consortiums
k=[Policy] /Channel/Consortiums/Admins
k=[Groups] /Channel/Consortiums/SampleConsortium
k=[Values] /Channel/Consortiums/SampleConsortium/ChannelCreationPolicy
k=[Groups] /Channel/Consortiums/SampleConsortium/Org1MSP
k=[Policy] /Channel/Consortiums/SampleConsortium/Org1MSP/Admins
k=[Policy] /Channel/Consortiums/SampleConsortium/Org1MSP/Readers
k=[Policy] /Channel/Consortiums/SampleConsortium/Org1MSP/Writers
k=[Values] /Channel/Consortiums/SampleConsortium/Org1MSP/MSP
k=[Groups] /Channel/Consortiums/SampleConsortium/Org2MSP
k=[Policy] /Channel/Consortiums/SampleConsortium/Org2MSP/Admins
k=[Policy] /Channel/Consortiums/SampleConsortium/Org2MSP/Readers
k=[Policy] /Channel/Consortiums/SampleConsortium/Org2MSP/Writers
k=[Values] /Channel/Consortiums/SampleConsortium/Org2MSP/MSP
k=[Groups] /Channel/Orderer
k=[Policy] /Channel/Orderer/Admins
k=[Policy] /Channel/Orderer/BlockValidation
k=[Policy] /Channel/Orderer/Readers
k=[Policy] /Channel/Orderer/Writers
k=[Values] /Channel/Orderer/BatchSize
k=[Values] /Channel/Orderer/BatchTimeout
k=[Values] /Channel/Orderer/ChannelRestrictions
k=[Values] /Channel/Orderer/ConsensusType
k=[Groups] /Channel/Orderer/OrdererOrg
k=[Policy] /Channel/Orderer/OrdererOrg/Admins
k=[Policy] /Channel/Orderer/OrdererOrg/Readers
k=[Policy] /Channel/Orderer/OrdererOrg/Writers
k=[Values] /Channel/Orderer/OrdererOrg/MSP```
Please set `ORDERER_GENERAL_LOCALMSPID=OrdererMSP` or similar
let me go step by step
you are familiar with fabric
i am a newcomer
@asaningmaxchain There is an MSP dir defined by `ORDERER_GENERAL_LOCALMSPDIR` which I believe you have replaced, but there is also the MSP id set by `ORDERER_GENERAL_LOCALMSPID` which must be updated as well.
The former defines where to find the certificates. The latter defines which org MSP to claim issued the certificates.
are the envs ```ORDERER_GENERAL_LOCALMSPDIR``` and ```ORDERER_GENERAL_LOCALMSPID``` set in the orderer.yaml?
Yes, these environment variables override the values in `orderer.yaml`. You may alternatively simply modify the `orderer.yaml` file.
@asaningmaxchain It is 02:30 in my time zone, and I need to go to bed. I will answer any questions you have in my morning
ok
@jyellick thx
i will write up my question in detail
good night
@jyellick i found what's wrong, let me show the problem: i use broadcast_timestamp to test the orderer, and i configured orderer.yaml with GenesisMethod=file and GenesisBlock=genesis.block (the genesis.block produced by the e2e test). then i ran ./broadcast_timestamp, and the ide shows me the error: Rejecting broadcast message because of filter error: Rejected by rule: *sigfilter.sigFilter
i see the filters are set by the method ```func createSystemChainFilters(ml *multiLedger, ledgerResources *ledgerResources) *filter.RuleSet {
	return filter.NewRuleSet([]filter.Rule{
		filter.EmptyRejectRule,
		sizefilter.MaxBytesRule(ledgerResources.SharedConfig().BatchSize().AbsoluteMaxBytes),
		sigfilter.New(policies.ChannelWriters, ledgerResources.PolicyManager()),
		newSystemChainFilter(ledgerResources, ml),
		configtxfilter.NewFilter(ledgerResources),
		filter.AcceptRule,
	})
}```
it adds the sigfilter
it means i must have the channel/Writers permission
i parsed the genesis.block and extracted the channel/Writers policy
```"Writers": {
	"mod_policy": "Admins",
	"policy": {
		"type": 3,
		"value": {
			"sub_policy": "Writers"
		}
	}
}```
the type=3 so it's an ImplicitMetaPolicy
that means it requires the channel/*/Writers permission
but if the Rule (ANY, ALL, MAJORITY) is not defined and i print configResult.JSON(), it gives the default value for each
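For reference, the ANY/ALL/MAJORITY rules of an ImplicitMetaPolicy can be sketched like this (a hypothetical simplification; the real policy recursively evaluates the named sub-policy in each child group):

```go
package main

import "fmt"

// Rule mirrors the ImplicitMetaPolicy rule values ANY, ALL, MAJORITY.
type Rule int

const (
	ANY Rule = iota
	ALL
	MAJORITY
)

// evaluate returns true if enough of the gathered sub-policy
// results are satisfied according to the rule.
func evaluate(rule Rule, subPolicyResults []bool) bool {
	satisfied := 0
	for _, ok := range subPolicyResults {
		if ok {
			satisfied++
		}
	}
	switch rule {
	case ANY:
		return satisfied >= 1
	case ALL:
		return satisfied == len(subPolicyResults)
	case MAJORITY:
		return satisfied > len(subPolicyResults)/2
	}
	return false
}

func main() {
	// e.g. /Channel/Writers with sub_policy "Writers", collected from
	// the Writers policy of each child group of /Channel
	results := []bool{true, false, true}
	fmt.Println("ANY:", evaluate(ANY, results))
	fmt.Println("MAJORITY:", evaluate(MAJORITY, results))
	fmt.Println("ALL:", evaluate(ALL, results))
}
```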
when the orderer starts up
the method ```configMap, err := MapConfig(configEnv.Config.ChannelGroup)``` is invoked
and then i print the configMap's keys
```[Groups] /Channel
[Policy] /Channel/Admins
[Policy] /Channel/Readers
[Policy] /Channel/Writers
[Values] /Channel/BlockDataHashingStructure
[Values] /Channel/HashingAlgorithm
[Values] /Channel/OrdererAddresses
[Groups] /Channel/Consortiums
[Policy] /Channel/Consortiums/Admins
[Groups] /Channel/Consortiums/SampleConsortium
[Values] /Channel/Consortiums/SampleConsortium/ChannelCreationPolicy
[Groups] /Channel/Consortiums/SampleConsortium/Org1MSP
[Policy] /Channel/Consortiums/SampleConsortium/Org1MSP/Admins
[Policy] /Channel/Consortiums/SampleConsortium/Org1MSP/Readers
[Policy] /Channel/Consortiums/SampleConsortium/Org1MSP/Writers
[Values] /Channel/Consortiums/SampleConsortium/Org1MSP/MSP
[Groups] /Channel/Consortiums/SampleConsortium/Org2MSP
[Policy] /Channel/Consortiums/SampleConsortium/Org2MSP/Admins
[Policy] /Channel/Consortiums/SampleConsortium/Org2MSP/Readers
[Policy] /Channel/Consortiums/SampleConsortium/Org2MSP/Writers
[Values] /Channel/Consortiums/SampleConsortium/Org2MSP/MSP
[Groups] /Channel/Orderer
[Policy] /Channel/Orderer/Admins
[Policy] /Channel/Orderer/BlockValidation
[Policy] /Channel/Orderer/Readers
[Policy] /Channel/Orderer/Writers
[Values] /Channel/Orderer/BatchSize
[Values] /Channel/Orderer/BatchTimeout
[Values] /Channel/Orderer/ChannelRestrictions
[Values] /Channel/Orderer/ConsensusType
[Groups] /Channel/Orderer/OrdererOrg
[Policy] /Channel/Orderer/OrdererOrg/Admins
[Policy] /Channel/Orderer/OrdererOrg/Readers
[Policy] /Channel/Orderer/OrdererOrg/Writers
[Values] /Channel/Orderer/OrdererOrg/MSP
```
the /Channel/*/Writers will match /Channel/Orderer/Writers, /Channel/Orderer/OrdererOrg/Writers, /Channel/Consortiums/SampleConsortium/Org1MSP/Writers, and /Channel/Consortiums/SampleConsortium/Org2MSP/Writers
@asaningmaxchain could you try `broadcast_msg` on `master` branch? I don't think `broadcast_timestamp` on `v1.0.0` actually signs the messages...
(Otherwise you need to configure the orderer to let any message writes through)
```s.broadcast([]byte(fmt.Sprintf("Testing %v", time.Now())))``` the message isn't signed
not sure about the history here, but `broadcast_msg` on `master` branch does work
i try it
https://github.com/hyperledger/fabric/tree/master/orderer/sample_clients it doesn't provide the broadcast_timestamp
?
it provides the broadcast_msg
i try it
Has joined the channel.
@jyellick i need your help
i want to know how the orderer manages the policy
i see the code result, err := cm.processConfig(configEnv.Config.ChannelGroup) in common/configtx/manager
it's hard to understand
when the orderer deals with the genesis.block
I am seeing an error when attempting to create a channel: `Error: Got unexpected status: SERVICE_UNAVAILABLE -- Could not enqueue` This is a non-tls kafka based orderer network with 2 orgs and 2 peers per org. I do see a warning message in the orderer that was used when attempting to create the channel: `Error reading from 172.18.0.12:38146: rpc error: code = Canceled desc = context canceled`. I am also using the cli command: `CORE_PEER_MSPCONFIGPATH=/var/hyperledger/configs/peerOrganizations/org1.example.com/users/Admin@org1.example.com/msp CORE_PEER_LOCALMSPID=org1.example.com CORE_PEER_ID=peer0.org1.example.com CORE_PEER_ADDRESS=peer0.org1.example.com:7051 peer channel create --file /var/hyperledger/configs/behavesystest.tx --channelID behavesystest --timeout 120 --orderer orderer2.example.com:7050`
Any insight into what may be causing this problem would be appreciated. What's strange is I'm not able to replicate this problem consistently every time, but whenever it happens it has the same signature.
@asaningmaxchain The code for v1.0.x for processing config, though valid, is definitely difficult to parse. If you are especially interested in v1.0.x, I can help you, but, the v1.1 code found here: https://gerrit.hyperledger.org/r/#/c/12563/ will likely be much easier to follow
@latitiah The warning you see is because your client did not appropriately close the stream before disconnecting. The `SERVICE_UNAVAILABLE` generally indicates that Kafka has not finished initializing or is otherwise misconfigured. Try the request again in a few minutes and see if there is a difference.
@jyellick i like the code for v1.0.x and it took me a long time to study
so i don't give up
the method ```ledgerResources := ml.newLedgerResources(configTx)
chainID := ledgerResources.ChainID()```
is very important to parse the genesis.block
and go to the method ```configManager, err := configtx.NewManagerImpl(configTx, initializer, nil)```
@asaningmaxchain Yes, the `configtx.NewManagerImpl` produces a structure which on update causes invocations to the policy manager to validate the update.
yes, but i read the methods ```configMap, err := MapConfig(configEnv.Config.ChannelGroup)``` and ```result, err := cm.processConfig(configEnv.Config.ChannelGroup)```
my brain is boom
Yes, you'll see in v1.1, the `processConfig` method goes away in favor of a simpler flow. In 1.0.x it is quite a complex path.
so should i go to the master branch
?
It is up to you. As I said, the v1.0.x branch is valid and works. I do not understand your intent with reading the code.
because i want to use the broadcast_timestamp to test the orderer and to test whether my understanding is right or not
but the process went wrong
so i decided to read the source code
Oh, so, as @guoger indicated, `broadcast_timestamp` in `v1.0.x` only works if signature validation is disabled.
You must use the `SampleInsecureSolo` profile for `broadcast_timestamp` to work.
Alternatively, you can pull `broadcast_msg` from master, and use this, as it supports signing transactions.
the sent message isn't signed
in v1.0.x
i changed my branch to master
the change is large
It is
But you only need the `broadcast_msg` sample client, you may build it, then switch back
@jyellick ok i try it
2017-08-24 00:04:47.331 CST [cauthdsl] func2 -> ERRO 005 Principal deserialization failure (MSP DEFAULT is unknown) for identity 0a0744454641554c54129a072d2d2d2d2d424547494e202d2d2d2d2d0a4d494943697a4343416a4b6741774942416749554245567773537830546d7164627a4e776c654e42427a6f4954307777436759494b6f5a497a6a3045417749770a667a454c4d416b474131554542684d4356564d78457a415242674e5642416754436b4e6862476c6d62334a7561574578466a415542674e564241635444564e680a62694247636d467559326c7a59323878487a416442674e5642416f54466b6c7564475679626d5630494664705a47646c64484d7349456c75597934784444414b0a42674e564241735441316458567a45554d4249474131554541784d4c5a586868625842735a53356a623230774868634e4d5459784d5445784d5463774e7a41770a5768634e4d5463784d5445784d5463774e7a4177576a426a4d517377435159445651514745774a56557a45584d4255474131554543424d4f546d3979644767670a5132467962327870626d45784544414f42674e564241635442314a68624756705a326778477a415a42674e5642416f54456b6835634756796247566b5a3256790a49455a68596e4a70597a454d4d416f474131554543784d44513039514d466b77457759484b6f5a497a6a3043415159494b6f5a497a6a304441516344516741450a4842754b73414f34336873344a4770466669474d6b422f7873494c54734f766d4e32576d77707350485a4e4c36773848576533784350517464472f584a4a765a0a2b433735364b457355424d337977355054666b7538714f42707a43427044414f42674e56485138424166384542414d4342614177485159445652306c424259770a464159494b7759424251554841774547434373474151554642774d434d41774741315564457745422f7751434d414177485159445652304f42425945464f46430a6463555a346573336c746943674156446f794c66567050494d42384741315564497751594d4261414642646e516a32716e6f492f784d55646e3176446d6447310a6e4567514d43554741315564455151654d427943436d31356147397a6443356a62323243446e6433647935746557687663335175593239744d416f47434371470a534d343942414d43413063414d4551434944663948626c34786e337a3445774e4b6d696c4d396c58324671346a5770416152564239374f6d564565794169416b0a61587a422f6a6e6c5533394237577773394249723963386d534f455046365659317547502b644b5630673d3d0a2d2d2d2d2d454e44202d2d2d2d2d0a
2017-08-24 00:04:47.331 CST [orderer/common/broadcast] Handle -> WARN 006 [channel: testchainid] Rejecting broadcast message because of filter error: Rejected by rule: *sigfilter.sigFilter
it still has an error
Yes
You are using the `first-network`?
Or how are you starting the orderer
i use the ide to start the orderer
Okay
When you bootstrapped the orderer, you defined an orderer org in your `configtx.yaml`
i used the default values
i didn't modify anything
What profile did you use?
SampleInsecureSolo
Are you certain? That error would indicate you bootstrapped your orderer in some other way. Can you delete the ledger for the orderer (`rm -Rf /var/hyperledger/production/orderer/` ) and try again?
```GenesisMethod: provisional
# Genesis profile: The profile to use to dynamically generate the genesis
# block to use when initializing the orderer system channel and
# GenesisMethod is set to "provisional". See the configtx.yaml file for the
# descriptions of the available profiles. Ignored if GenesisMethod is set to
# "file".
GenesisProfile: SampleInsecureSolo
# Genesis file: The file containing the genesis block to use when
# initializing the orderer system channel and GenesisMethod is set to
# "file". Ignored if GenesisMethod is set to "provisional".
GenesisFile: genesisblock```
Understood. This should be fine.
But it sounds like there is other bootstrap information, please try removing the ledger and starting again.
ok
i try it now
it's ok
but the console shows me a warning
2017-08-24 00:10:07.323 CST [orderer/common/broadcast] Handle -> WARN 004 Error reading from stream: rpc error: code = Canceled desc = context canceled
This is benign, you may ignore it.
ok
If you wish, you may re-bootstrap and use the `SampleSingleMSPSolo` profile instead
I also encourage you to look at the help of `broadcast_msg`
You may send multiple messages and send them in parallel by using the flags
./broadcast_msg --help
it tells how to set the parameters
can i use sample_clients/deliver_stdout to get the blocks from the orderer
?
Message Attachments
Yes, but you will need to get the one from master, like you did with `broadcast_msg`
Note that the transaction rate you are seeing does not involve signature validation because you are using the `SampleInsecureSolo` profile.
yes
i know
To see more realistic performance, you will want to run `SampleSingleMSPSolo`
i use the SampleSingleMSPSolo
the data in the picture was with the ```SampleSingleMSPSolo``` profile
Remember, you must delete your ledger before the profile change will take effect
i forgot
```if len(lf.ChainIDs()) == 0 {
	initializeBootstrapChannel(conf, lf)
} else {
	logger.Info("Not bootstrapping because of existing chains")
}```
because i use GenesisMethod=provisional, how can i get the genesis.block
?
peer channel fetch
?
Yes, this is one way to do that
You may also like `deliver_stdout` from master as it will print the block nicely for you
it should require permission to deliver the block from the orderer
otherwise anyone can get the block
When running these commands in the dev environment, all commands use the key and certificate in `fabric/sampleconfig/msp`
do i need to set the env
?
This is the default, it should 'just work'
In production, you would have multiple orgs and multiple certs and would need to set these parameters manually, but running it like you are, you do not need to.
```time ./deliver_stdout -seek 0```
i use that cmd to get the first block
or genesis.block
i got it
and then i used peer channel fetch config to get the genesis.block
Hi, is anyone familiar with the PBFT code in Fabric 0.6?
@Glen I have not actively worked on this code in some time, but yes, I am.
i'm working on porting PBFT to Fabric 1.0 :grin:
Is that feasible?
I found the execution arrangements are different
I won't say it is impossible, but I think this is an undertaking which might not be the most efficient use of your time
Let me give you a link
also I need the checkpoint mechanism and I'm trying to find where it's triggered
https://github.com/hyperledger/fabric/tree/75294a99eda00371208ec03411784816fa4a19c6/orderer/sbft
This is the `sbft` directory which was in development under fabric v1.0
i don't know why sbft removes checkpointing
`sbft` does not really remove checkpointing, instead, it fixes the checkpoint interval to 1 so that every round is a checkpoint.
as it may lead to getting out of sync
yes every procedure is followed by a checkpoint
The original PBFT protocol specification is based on the idea of UDP links
Thus, messages could arrive out of order
This was interesting from an academic perspective, but in reality, TCP links have high throughput even over WANs these days. So, the original idea to checkpoint periodically to increase throughput as the messages arrived out of order is not so necessary today.
Since with a TCP stream, messages always arrive in order (from a non-byzantine replica), there is no harm in checkpointing more frequently.
In fact we want to first implement PBFT on Fabric 1.0 and then improve it as our own business solution
yes
so checkpoints are just more frequent?
Yes, `sbft` simply has frequent (always) checkpoints, rather than 'no checkpoints'
although it's simpler
> In fact we want to first implement PBFT and then improve it on Fabric 1.0 as our own bussiness solution
As always, we welcome high quality code contributions to hyperledger fabric. An additional BFT consensus implementation would be wonderful.
yeah, we have discussed some solutions for BFT algorithms and we think Kafka combined with Fabric 1.0 in fact meets most needs
Yes, @Glen , we learned a great deal doing PBFT for v0.5/v0.6, and although we intend to bring BFT ordering to fabric v1.x, we think that Kafka handles a large percentage of the use cases already.
Has joined the channel.
@jyellick i'm trying to read the orderer module source for v1.0.x. i see that the method ```func NewManagerImpl(envConfig *cb.Envelope, initializer api.Initializer, callOnUpdate []func(api.Manager)) (api.Manager, error)``` parses the genesis.block and returns a configManager object to operate on the fabric config
when i read the method ```configMap, err := MapConfig(configEnv.Config.ChannelGroup)``` in ```NewManagerImpl```
the method parses each ConfigGroup, ConfigValue, and ConfigPolicy
so i use the code
```for k, v := range configMap {
	fmt.Printf("key=%s,value=%+v\n", k, v)
}```
to verify my understanding
Message Attachments
i think it's enough to judge who has permission to operate on the channel
```result, err := cm.processConfig(configEnv.Config.ChannelGroup)``` the method is hard to read
```type policyConfig struct {
	policies map[string]Policy
	managers map[string]*ManagerImpl
	imps     []*implicitMetaPolicy
}

// ManagerImpl is an implementation of Manager and configtx.ConfigHandler
// In general, it should only be referenced as an Impl for the configtx.ConfigManager
type ManagerImpl struct {
	parent        *ManagerImpl
	basePath      string
	fqPrefix      string
	providers     map[int32]Provider
	config        *policyConfig
	pendingConfig map[interface{}]*policyConfig
	pendingLock   sync.RWMutex

	// SuppressSanityLogMessages when set to true will prevent the sanity checking log
	// messages. Useful for novel cases like channel templates
	SuppressSanityLogMessages bool
}```
can you explain each field in the struct @jyellick
i know that the method ```func proposeGroup(result *configResult) error``` in common/configtx/config
parses each group (ConfigGroup, ConfigPolicy, ConfigValue) recursively
so i want to know how fabric parses the policy
because it is important for modifying the channel config and sending messages
Yes, each `common.ConfigGroup` contains a `Policies` map, from string to `ConfigPolicy`. The `ConfigPolicy` contains a `Policy` which contains a `Value` which is marshaled bytes. These bytes are decoded by the `ManagerImpl.providers`
```
// Provider provides the backing implementation of a policy
type Provider interface {
	// NewPolicy creates a new policy based on the policy bytes
	NewPolicy(data []byte) (Policy, proto.Message, error)
}
```
This parses the marshaled bytes into a policy which may be evaluated.
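A minimal sketch of that Provider pattern (illustrative interfaces only; the real fabric Provider also returns a proto.Message and unmarshals a SignaturePolicyEnvelope from the bytes):

```go
package main

import "fmt"

// Policy is an evaluatable policy, as in the description above.
type Policy interface {
	Evaluate(signedBy string) error
}

// Provider turns marshaled policy bytes into a Policy.
type Provider interface {
	NewPolicy(data []byte) (Policy, error)
}

// signaturePolicy accepts only an exact MSP identifier, standing in
// for a real signature policy (type=1).
type signaturePolicy struct{ requiredMSP string }

func (p *signaturePolicy) Evaluate(signedBy string) error {
	if signedBy != p.requiredMSP {
		return fmt.Errorf("signature by %s does not satisfy policy requiring %s", signedBy, p.requiredMSP)
	}
	return nil
}

type signatureProvider struct{}

func (signatureProvider) NewPolicy(data []byte) (Policy, error) {
	// A real provider would proto-unmarshal data; here the bytes
	// are simply the required MSP id.
	return &signaturePolicy{requiredMSP: string(data)}, nil
}

func main() {
	// The manager keeps a map from policy type to Provider,
	// like ManagerImpl.providers.
	providers := map[int32]Provider{
		1: signatureProvider{}, // cb.Policy_SIGNATURE
	}
	policy, _ := providers[1].NewPolicy([]byte("OrdererMSP"))
	fmt.Println("OrdererMSP:", policy.Evaluate("OrdererMSP"))
	fmt.Println("Org1MSP:", policy.Evaluate("Org1MSP"))
}
```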
yes i know
but i still don't understand ```type policyConfig struct {
	policies map[string]Policy
	managers map[string]*ManagerImpl
	imps     []*implicitMetaPolicy
}

// ManagerImpl is an implementation of Manager and configtx.ConfigHandler
// In general, it should only be referenced as an Impl for the configtx.ConfigManager
type ManagerImpl struct {
	parent        *ManagerImpl
	basePath      string
	fqPrefix      string
	providers     map[int32]Provider
	config        *policyConfig
	pendingConfig map[interface{}]*policyConfig
	pendingLock   sync.RWMutex

	// SuppressSanityLogMessages when set to true will prevent the sanity checking log
	// messages. Useful for novel cases like channel templates
	SuppressSanityLogMessages bool
}

// NewManagerImpl creates a new ManagerImpl with the given CryptoHelper
func NewManagerImpl(basePath string, providers map[int32]Provider) *ManagerImpl {
	_, ok := providers[int32(cb.Policy_IMPLICIT_META)]
	if ok {
		logger.Panicf("ImplicitMetaPolicy type must be provider by the policy manager")
	}

	return &ManagerImpl{
		basePath:  basePath,
		fqPrefix:  PathSeparator + basePath + PathSeparator,
		providers: providers,
		config: &policyConfig{
			policies: make(map[string]Policy),
			managers: make(map[string]*ManagerImpl),
		},
		pendingConfig: make(map[interface{}]*policyConfig),
	}
}```
the policyConfig and ManagerImpl structs have many fields
that i don't understand
i know that the method ```result, err := cm.processConfig(configEnv.Config.ChannelGroup)``` parses the config and sets the data on objects like the channel, orderer, and consortium
Mostly, it is the `config` field, which will be set to a map with current policy names. The other fields are simply used in support.
and then
```type policyConfig struct {
	policies map[string]Policy
	managers map[string]*ManagerImpl
	imps     []*implicitMetaPolicy
}```
let me take an example and we can analyze it
Message Attachments
as you see, the channel has a policy attribute
i choose the Writers policy as an example
the type=3
means it's an implicitMetaPolicy with rule = MAJORITY
so it needs more than half of the channel/*/Admins sub-policies to be satisfied
Message Attachments
as you see, the /channel/orderer/admin is the same as the /channel/admins
Message Attachments
but the /channel/orderer/ordererOrg/admin has type=1, which means the admins policy requires a signature from OrdererMSP
Has joined the channel.
[ ](https://chat.hyperledger.org/channel/fabric-consensus?msg=FQfYebdPvTT9aTRY3) @jyellick ```func (pm *ManagerImpl) ProposePolicy(tx interface{}, key string, configPolicy *cb.ConfigPolicy) (proto.Message, error) {
	pm.pendingLock.RLock()
	pendingConfig, ok := pm.pendingConfig[tx]
	pm.pendingLock.RUnlock()
	if !ok {
		logger.Panicf("Serious Programming error: called Propose without Begin")
	}

	policy := configPolicy.Policy
	if policy == nil {
		return nil, fmt.Errorf("Policy cannot be nil")
	}

	var cPolicy Policy
	var deserialized proto.Message

	if policy.Type == int32(cb.Policy_IMPLICIT_META) {
		imp, err := newImplicitMetaPolicy(policy.Value)
		if err != nil {
			return nil, err
		}
		pendingConfig.imps = append(pendingConfig.imps, imp)
		cPolicy = imp
		deserialized = imp.conf
	} else {
		provider, ok := pm.providers[int32(policy.Type)]
		if !ok {
			return nil, fmt.Errorf("Unknown policy type: %v", policy.Type)
		}

		var err error
		cPolicy, deserialized, err = provider.NewPolicy(policy.Value)
		if err != nil {
			return nil, err
		}
	}

	pendingConfig.policies[key] = cPolicy

	logger.Debugf("Proposed new policy %s for %s", key, pm.basePath)
	return deserialized, nil
}```
@ysadek each configGroup is nested too deeply
@asaningmaxchain This is necessary complexity to safely support multiple types of configuration data
but it's very hard to understand
can you draw a picture to explain the process
or take an example
or i'll take an example and you can correct me
@asaningmaxchain I am now working on a tool to present an HTML view of the configuration
but i think you could take an example to explain it, so the process becomes clear
because the go language is similar to the c language
i advise you to take an example and then explore the channel config
and i think it's very hard to explain even though it's well designed
@asaningmaxchain Please look at https://github.com/jyellick/fabric-gerrit/blob/channelconfig/common/channelconfig/channel.go#L75
Starting from there, you may see how the config is parsed. This is new code for v1.1. It has the same results as v1.0.x, but it is much easier to read
I have read how fabric parses the ConfigValue
and i'll show the process and you can correct me if i am wrong
```type resources struct {
	policyManager    *policies.ManagerImpl
	configRoot       *config.Root
	mspConfigHandler *configtxmsp.MSPConfigHandler
}```
the structure above is an entry point for parsing the channel config
the configRoot is used to parse the ConfigValue in each ConfigGroup recursively
```func (r *Root) BeginValueProposals(tx interface{}, groups []string) (ValueDeserializer, []ValueProposer, error) {
	if len(groups) != 1 {
		return nil, nil, fmt.Errorf("Root config only supports having one base group")
	}
	if groups[0] != ChannelGroupKey {
		return nil, nil, fmt.Errorf("Root group must have channel")
	}
	r.mspConfigHandler.BeginConfig(tx)
	return failDeserializer{}, []ValueProposer{r.channel}, nil
}```
```result, err := cm.processConfig(configEnv.Config.ChannelGroup)```
```func processConfig(channelGroup *cb.ConfigGroup, proposer api.Proposer) (*configResult, error) {
helperGroup := cb.NewConfigGroup()
helperGroup.Groups[RootGroupKey] = channelGroup
configResult := &configResult{
group: helperGroup,
valueHandler: proposer.ValueProposer(),
policyHandler: proposer.PolicyProposer(),
}
err := proposeGroup(configResult)
if err != nil {
return nil, err
}
return configResult, nil
}```
```valueHandler: proposer.ValueProposer(),```
```valueDeserializer, subValueHandlers, err := result.valueHandler.BeginValueProposals(result.tx, subGroups)```
`ConfigGroup`s are handled recursively, yes
the valueDeserializer object is used to Deserialize each ConfigValue
```for key, value := range result.group.Values {
msg, err := valueDeserializer.Deserialize(key, value.Value)
if err != nil {
result.rollback()
return fmt.Errorf("Error deserializing key %s for group %s: %s", key, result.groupName, err)
}
result.deserializedValues[key] = msg
}```
the subValueHandlers are objects to parse the ConfigValues in the descendant ConfigGroups
i understand how the method proposeGroup(configResult) parses the ConfigValues
but i don't understand how the ConfigPolicy is parsed in proposeGroup
Config policies are parsed separately from config values
https://github.com/hyperledger/fabric/blob/release/common/configtx/config.go#L219-L226
i know
```type policyConfig struct {
policies map[string]Policy
managers map[string]*ManagerImpl
imps []*implicitMetaPolicy
}
// ManagerImpl is an implementation of Manager and configtx.ConfigHandler
// In general, it should only be referenced as an Impl for the configtx.ConfigManager
type ManagerImpl struct {
parent *ManagerImpl
basePath string
fqPrefix string
providers map[int32]Provider
config *policyConfig
pendingConfig map[interface{}]*policyConfig
pendingLock sync.RWMutex
// SuppressSanityLogMessages when set to true will prevent the sanity checking log
// messages. Useful for novel cases like channel templates
SuppressSanityLogMessages bool
}
```
but these two structures are hard to read
You may find the more recent implementation easier to read https://github.com/jyellick/fabric-gerrit/blob/channelconfig/common/policies/policy.go#L86-L90
```
type ManagerImpl struct {
path string // The group level path
policies map[string]Policy
managers map[string]*ManagerImpl
}
```
the definition in the master branch
?
It is in a CR waiting to be merged into the master branch
https://gerrit.hyperledger.org/r/#/c/12563/ and related changes
how can i see the new changes
?
```type ManagerImpl struct {
path string // The group level path
policies map[string]Policy
managers map[string]*ManagerImpl
}``` so in this structure,
i'll take an example and you point out whether my understanding is right or not
```Channel
  Admins (policy)
  Writers (policy)
  Readers (policy)
  Orderer
    Admins (policy)
    Writers (policy)
    Readers (policy)
  Consortium
    Admins (policy)
    Writers (policy)
    Readers (policy)
    SampleConsortium
      Admins (policy)
      Writers (policy)
      Readers (policy)```
I pushed a branch to my github with all the changes https://github.com/jyellick/fabric-gerrit/tree/channelconfig so that you can see
ok
the path is one of (Channel, Orderer, Consortium, SampleConsortium)
the policies field is a map, i.e. map[Admins|Writers|Readers]Policy
the managers field holds the sub-nodes
e.g. for the Channel the sub-nodes are Orderer and Consortium
so managers is a map[Orderer|Consortium]*ManagerImpl
for the channel
so the managers field is the element that maintains the parent-child relationship
is that all?
Yes, you see that `mod_policy` sometimes begins with a `/` like `/Channel/Orderer/Admins`, in this case, we ask the root policy manager for the policy. Sometimes, `mod_policy` is just plain like `Admins`, in this case, we find the policy manager which corresponds to the group in the tree, and ask that manager for the policy.
In that way `Admins` could refer to `/Channel/Admins` or `/Channel/Orderer/Admins` etc. depending on the context
yes
```type ManagerImpl struct {
path string // The group level path
policies map[string]Policy
managers map[string]*ManagerImpl
}```
is easier to understand
Yes, this is why I wrote the change :slight_smile:
than before
to be honest, fabric is a huge project
the entire process is hard to understand
"github.com/hyperledger/fabric/common/flogging"
i think i should rename fabric-gerrit to fabric
`fabric-gerrit` is simply what I named the repository on my private github account. This is because `fabric` used to be hosted on github and not on gerrit, so I already had a repository named `fabric`. There is no need to use the `fabric-gerrit` name anywhere
ok
So yes, if you cloned the link I sent, you must rename it to `github.com/hyperledger/fabric` not `github.com/jyellick/fabric-gerrit`
it's ok
what's the relationship between channel, orderer, application, consortiums, and organization
@jyellick
```type ChannelConfig struct {
protos *ChannelProtos
hashingAlgorithm func(input []byte) []byte
mspManager msp.MSPManager
appConfig *ApplicationConfig
ordererConfig *OrdererConfig
consortiumsConfig *ConsortiumsConfig
}
```
@asaningmaxchain Channel config is always the top level config group, and one of its subgroups is the orderer config group. For the ordering system channel there is also a sub-group called the consortiums config group. For standard channels, there is also a sub-group called the application config group.
so what are the consortiums config group and the application config group
the channel is used to define a channel for confidentiality, and the ordererConfig is used to order the messages, but what are the consortiums config group and the application config group for?
The application config group defines how the peer network behaves; things like how peers can gossip with each other (via the anchor peers configuration). The consortiums config group defines how an ordering network should allow the creation of new channels.
The consortiums group only appears in the ordering system channel. The application group appears in all standard channels.
what's the difference between the ordering system channel and standard channels
The ordering system channel is reserved for use by the orderers, to orchestrate the creation of channels. It is the channel that the ordering system is bootstrapped with.
For all other channels, these are standard channels, they are created by consortium members, and are used to perform application transactions.
ok
that means the ordering system channel is the manager that controls how standard channels are created
Correct
Hi, one quick question regarding PBFT. Why do we need PBFT if there is only one organization running the consensus?
@aberfou For v1.0 we are using Kafka, so we generally encourage only one org to run consensus. For PBFT or PBFT-like consensus algorithms, the assumption would be that multiple organizations would be involved in consensus.
@jyellick so that means it is possible to run the orderer by multiple organizations? Is this also possible in the 1.0 release?
it is possible for the orderer to be run by multiple organizations in v1.0, but with Kafka, there is only CFT, not BFT, so we discourage this configuration. There is no BFT support in v1.0. There was a preliminary implementation of BFT before v1.0 (sbft), but we did not feel we could get it into production shape prior to release so we removed it. We will add it back in the future.
@jyellick thx
@jyellick i think using ```configMap, err := MapConfig(configEnv.Config.ChannelGroup)``` to parse the channel config may be easier to understand
because the configMap contains all the config information
```for key, config := range configMap {}```
@asaningmaxchain Have you looked at the more recent version of the config parsing I sent you?
the change is large
Yes, it is, but it is simpler and easier to understand
I think it is better than `MapConfig`
will the new config approach be added in a future version?
in your gerrit
Yes, some of these have already started to merge
I expect it to be in master by the end of the week
ok
I am also working on a way to display the JSON as html. It is not done yet, but you may see a preview at https://rawgit.com/jyellick/fabric-gerrit/html/common/tools/configtxlator/ui/test.html
Do not try to upload protobuf files, but the sample files or JSON files should work
It includes some descriptive text you may find helpful
i'll take a look
because the recursive method is very hard to understand when the nesting level is deep
Hopefully this HTML makes it easier to understand, it is only a draft and we can improve it to make it better as well
ok
i can't visit it
I don't know why you are having problems, the link works for me. Alternatively, you may download the two files in this directory https://github.com/jyellick/fabric-gerrit/tree/html/common/tools/configtxlator/ui and open `test.html` with your web browser
ok
thx
```conf := config.Load()
bytes, _ := json.Marshal(conf)
logger.Infof("config dir is %s, config value is %s in json format", filepath.Dir(viper.ConfigFileUsed()), string(bytes))```
@jyellick i think adding the log info may be a good choice to show the information which the orderer module uses
Hi @jyellick as i read the previous sbft code, I didn't find where the logs of preprepare and commit are cleared, so the log files will store all transactions since the sbft instance is up, right?
@jyellick in https://github.com/jyellick/fabric-gerrit/tree/channelconfig
i started the orderer node and found this error
```panic: Error unmarshaling config into struct: 1 error(s) decoding:
* 'General' has invalid keys: LogFormat```
but i find that the General type doesn't define LogFormat
and LogFormat doesn't exist in orderer.yaml
@jyellick the method ```func NewManagerImpl(path string, providers map[int32]Provider, root *cb.ConfigGroup) (*ManagerImpl, error)``` makes it easier to understand how fabric parses the policies
```type ManagerImpl struct {
path string // The group level path
policies map[string]Policy
managers map[string]*ManagerImpl
}```
you just need to understand the struct
i think you should add more comments on the ManagerImpl struct
like the path field
it is the full path where the policy is located
@asaningmaxchain You are using an older version of the orderer binary with a newer config file
yes i know
and the policies and managers fields
@asaningmaxchain When this new code merges, you are welcome to submit a CR which improves the documentation
ok
Has joined the channel.
When I run ./configtxgen -profile SampleSingleMSPSolo I do not get a genesis.block file. I have modified the orderer.yaml and also set the env vars. The only thing I get is [common/configtx/tool] main -> INFO 001 Loading configuration. I read the exchange from a few days ago. I have deleted the ledgers. It works fine using all the defaults. I have sent a time broadcast and see it in the broadcast receive client (when all defaults). How do I get a genesis block?
@jworthington Please add the `-outputBlock` flag
Thx. I started to try that, but it seemed counter to the docs and I figured you must have something set to do that when the flag was null. I see the block. thx
@jyellick I am looking to build the architecture for DR (Active - Passive). I was thinking of having exact same configuration of the kafka cluster and peers on the secondary site and replicate the data to other site. When you bring up the env on the secondary site, I thought you will bring up the VM. I have heard the DR works only when you can have the same IP address and hostname on the secondary site.
If you have a restriction that you cannot have the same IP address on the secondary site, what are the options ?
@gauthampamu This is really not a fabric question, but a Kafka deployment question. I'd refer you to their excellent documentation: https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=27846330
@jyellick the broadcast_msg should print the rate of progress
@asaningmaxchain I agree, this would be a nice feature. Feel free to open a JIRA for it and if you are feeling ambitious, submit a patch.
@jyellick i try it
FYI http://hyperledger-fabric.readthedocs.io/en/latest/CONTRIBUTING.html
@jyellick ok
what's rscc? what's the full name?
Resource System Chaincode
like lscc, the lifecycle system chaincode
ok
i got it
Hi @jyellick, as SBFT uses f+1 checkpoint messages to complete a checkpoint while PBFT needs 2f+1, does that mean the checkpoint implementation in SBFT is weakened?
@Glen SBFT still requires 2f+1 to commit
```for csrc, c := range s.cur.checkpoint {
sum := fmt.Sprintf("%x", c.Digest)
sums[sum] = append(sums[sum], csrc)
if len(sums[sum]) >= s.oneCorrectQuorum() {
max = sum
}
}

func (s *SBFT) oneCorrectQuorum() int {
return int(s.config.F + 1)
}```
this part indicates only F+1
This is for a weak checkpoint, same as PBFT
https://jira.hyperledger.org/browse/FAB-5949
@jyellick
Thanks, do you intend to implement it yourself, or should I look into it?
@jyellick i try it
Hi @jyellick I have one question about implementing the checkpoint in Fabric 1.0. I scanned the code of PBFT in Fabric 0.6 and SBFT in Fabric 1.0; the former used rocksdb to store state and the latter uses files under /tmp to store only one copy of the last checkpoint
as the orderer may have no rocksdb support, should I still use rocksdb, or keep the file approach in Fabric 1.0 as it is?
@Glen in Fabric 0.6 the orderer and peer were combined and maintained all world state, so a DB was needed. For v1.0, the orderer only maintains orderer state which is small and simple, no need for a DB.
yes
In fact I wonder which would be a better way of maintaining the state, possibly writing more than one copy since checkpoints are done periodically; the Pset and Qset may also need to be stored, so which method do you recommend?
@jyellick should i contribute the code to fabric or to your fabric-gerrit?
@asaningmaxchain What is the link to the gerrit CR?
https://jira.hyperledger.org/browse/FAB-5949
I do not see the code?
Oh, i see, you were asking where to submit the code
yes
Please use the normal process of pushing a CR to gerrit as described in the link I sent
ok
Trying to confirm my thinking: the orderer service can be composed of orderers which each belong to different member orgs, correct?
@pschnap Yes, however, this is not a recommended configuration, especially for v1.0/v1.1.
Since the backing consensus type is Kafka for production deployments, and Kafka is not BFT, adding more ordering members is usually not desirable.
ok, but the goal in future is to have the orderer nodes hosted by multiple parties, right?
Once we add other BFT consensus types, then multi-org ordering makes more sense. However, keep in mind, any of the ordering orgs will have visibility into what channels exist and their membership.
ok; good to keep in mind
As the SideDB support merges, there will be less and less information flowing through the orderer.
So, I suspect a few 'more trusted' orgs will run a BFT ordering service.
So that no single org has the power to choose order, but ultimately, their visibility will be limited to the opaque data representations presented via sideDB
thanks @jyellick !
@jyellick
https://gerrit.hyperledger.org/r/#/c/12953/
please take a look
I will
can you explain the MSP?
i think the MSP is a CA
each org has an MSP
@asaningmaxchain Thank you for the CR. There are a couple comments I have left on it.
Yes, MSPs are basically CAs
The difference is, that it does not have to be X.509 cryptography. There are some other cryptographic schemes which are being developed which will also satisfy the MSP interface.
you can work on your job and we can talk later if you have time
Please feel free to leave questions here and I will do my best to respond to them
ok
@jyellick i reply you
```func (m *localMSPPrincipalGetter) Get(role string) (*msp.MSPPrincipal, error) {
mspid, err := GetLocalMSP().GetIdentifier()
if err != nil {
return nil, fmt.Errorf("Could not extract local msp identifier [%s]", err)
}
switch role {
case Admins:
principalBytes, err := proto.Marshal(&msp.MSPRole{Role: msp.MSPRole_ADMIN, MspIdentifier: mspid})
if err != nil {
return nil, err
}
return &msp.MSPPrincipal{
PrincipalClassification: msp.MSPPrincipal_ROLE,
Principal: principalBytes}, nil
case Members:
principalBytes, err := proto.Marshal(&msp.MSPRole{Role: msp.MSPRole_MEMBER, MspIdentifier: mspid})
if err != nil {
return nil, err
}
return &msp.MSPPrincipal{
PrincipalClassification: msp.MSPPrincipal_ROLE,
Principal: principalBytes}, nil
default:
return nil, fmt.Errorf("MSP Principal role [%s] not recognized.", role)
}
}
```
the method is defined in msp/mgmt/principal
i find the roles are not enough for fabric
@asaningmaxchain We agree, we are working to add additional roles
There are CRs out there for this, but it is not backwards compatible, so we are deciding how to introduce.
ok
`Role.CLIENT`, `Role.ORDERER` and `Role.PEER` will be added
You may also use the OU support to create your own custom groups.
``` enum Classification {
ROLE = 0; // Represents the one of the dedicated MSP roles, the
// one of a member of MSP network, and the one of an
// administrator of an MSP network
ORGANIZATION_UNIT = 1; // Denotes a finer grained (affiliation-based)
// groupping of entities, per MSP affiliation
// E.g., this can well be represented by an MSP's
// Organization unit
IDENTITY = 2; // Denotes a principal that consists of a single
// identity
}```
yes
the Classification is defined in protos/msp/msp_principal
the method ```func (m *localMSPPrincipalGetter) Get(role string) (*msp.MSPPrincipal, error)```
is not enough for the enum Classification
as you say, Role.CLIENT, Role.ORDERER, Role.PEER will be added; does that mean the type = IDENTITY?
No, the type is `ROLE`, the type of `IDENTITY` means literal certificate bytes.
`Get(role string)` only returns principals of `Classification` `ROLE`
ok
For more powerful evaluation, you should use policies.
`fabric/common/cauthdsl`
yes
i have a question about the orderer startup
as you see
```func initializeLocalMsp(conf *config.TopLevel) {
// Load local MSP
err := mspmgmt.LoadLocalMsp(conf.General.LocalMSPDir, conf.General.BCCSP, conf.General.LocalMSPID)
if err != nil { // Handle errors reading the config file
logger.Fatal("Failed to initialize local MSP:", err)
}
}```
when the orderer starts
it loads the msp
using the config in orderer.yaml
``` LocalMSPDir: msp
# LocalMSPID is the identity to register the local MSP material with the MSP
# manager. IMPORTANT: The local MSP ID of an orderer needs to match the MSP
# ID of one of the organizations defined in the orderer system channel's
# /Channel/Orderer configuration. The sample organization defined in the
# sample configuration provided has an MSP ID of "DEFAULT".
LocalMSPID: DEFAULT
```
that means the orderer has an MSP
what about the org?
let me give you an example
i set ```GenesisProfile: SampleSingleMSPSolo``` in the orderer.yaml
and start the orderer
i use the peer channel fetch config
to get the genesis block
and parse the genesis block using configtxlator
wait a moment
i'll send the json data to you
i use the command ```make peer``` to build a peer binary so wait a moment
```"Orderer": {
"groups": {
"SampleOrg": {
```
that means the Orderer group contains a sub-group named SampleOrg
so SampleOrg has an MSP
?
what's the difference between the orderer MSP and the SampleOrg MSP
```Organizations:
# SampleOrg defines an MSP using the sampleconfig. It should never be used
# in production but may be used as a template for other definitions.
- &SampleOrg
# DefaultOrg defines the organization which is used in the sampleconfig
# of the fabric.git development environment.
Name: SampleOrg
# ID to load the MSP definition as.
ID: DEFAULT
# MSPDir is the filesystem path which contains the MSP configuration.
MSPDir: msp
# AdminPrincipal dictates the type of principal used for an
# organization's Admins policy. Today, only the values of Role.ADMIN and
# Role.MEMBER are accepted, which indicates a principal of role type
# ADMIN and role type MEMBER respectively.
AdminPrincipal: Role.ADMIN
AnchorPeers:
# AnchorPeers defines the location of peers which can be used for
# cross-org gossip communication. Note, this value is only encoded
# in the genesis block in the Application section context.
- Host: 127.0.0.1
Port: 7051```
we can look at the configuration for the SampleOrg
@asaningmaxchain There is the local MSP, and the channel MSPs
The local MSP contains the process's signing identity and private key, as well as information about the MSP which administers the process. This is configured in the local config as you see.
The channel config is stored on the blockchain. You see that `GenesisProfile: SampleSingleMSPSolo`, and `GenesisMethod: Provisional`
This is a useful default for development, but should never be used in production.
`Provisional` causes the orderer to basically invoke `configtxgen` under the covers
It basically runs `configtxgen -profile SampleSingleMSPSolo -outputBlock`
Then takes the output, and makes it block 0 of the ordering system channel.
All orderers must start with the same block 0, and running that command multiple times will create different results, so it should be run only once, then `GenesisMethod` should be set to `file`
@jyellick i know the process you describe
The configuration blocks on the blockchain contain all of the information needed to validate transactions for the blockchain. It contains all organizations' MSPs, channel policies, and other information.
yes
Over time, the channel config changes via configuration update transactions.
So, local MSP is only used for signing and local admin actions.
Channel MSPs are used for validating transactions in the channel.
(And everyone has the same Channel MSPs, so everyone makes the same validation decision)
ok
i have a basic question
the first one is
the orderer can contain many orgs
take an example
the orderer contains sampleOrg1 and sampleOrg2
sampleOrg1 and sampleOrg2 each have an MSP
the orderer also has its own MSP
so if i want to change the channel config
must i satisfy the orderer MSP requirement
?
@jyellick i think i need to make sense of it
No
Channel config is always authorized via the `fabric/common/configtx` code. (Which uses the policies and `mod_policy` values to determine whether the update is satisfied)
The update must arrive as a transaction for the channel which is being updated
Then, the transaction will be validated, ordered, and when it commits to the blockchain, all orderers (and all peers) get the updated channel MSP definitions
For local admin actions, think about shutting down the process
For local admin actions, think about shutting down the process, or in the case of a peer, installing a new chaincode.
so if the client sends the tx to the peer for endorsement, does that mean it uses the channel msp to sign the tx
so when the orderer receives the endorsed tx, it can verify it
?
The client has a signing cert. The peer uses the channel MSP to check if the client's signing cert is allowed to invoke. Then, the client sends it to the orderer, the orderer checks the /Channel/Writers policy using the channel MSPs to make sure the client's cert is authorized to `Broadcast`, finally, the peer receives the block and uses the channel MSP to check the client's signature again.
The peer also uses the channel MSPs to check the endorsement signatures from the other peers.
Local MSP is only ever used to check signatures outside of a channel context
so the local MSP is per component (like each peer or orderer) but the channel MSP is a shared configuration across components
Yes, channel configuration (including channel MSPs) is always shared among all channel members. Local MSP is not shared.
like the peer, if i want to install or upgrade a chaincode, i must satisfy the local msp
?
Yes. Otherwise, an admin from Org2 could cause a peer from Org1 to install a chaincode Org1 did not want. The chaincode binaries are all installed using the local MSP. Then, the channel agrees to use a chaincode which has been installed via an instantiate transaction.
ok, but the orderer can contain orgs
each org has an MSP
```type OrganizationConfig struct {
protos *OrganizationProtos
mspConfigHandler *MSPConfigHandler
msp msp.MSP
mspID string
name string
}```
the above shows the config
Peers and orderers both have this config
yes
so what's the function of the org msp
because the orderer contains the org msp
I do not understand. Channels may contain many orgs
so the msp is either the channel msp or the local msp
Each org has an MSP definition. The channel configuration contains all the MSP definitions for all the orgs in the channel.
The local MSP is one of the MSP definitions in the channel.
i got it
@jyellick ```type ChannelConfig struct {
protos *ChannelProtos
hashingAlgorithm func(input []byte) []byte
mspManager msp.MSPManager
appConfig *ApplicationConfig
ordererConfig *OrdererConfig
consortiumsConfig *ConsortiumsConfig
}```
the mspManager msp.MSPManager contains the msp for each org
when parsing the channel config
Yes, that is correct.
i don't see how the orderer uses the msp; when the orderer starts, the method ```
func initializeLocalMsp(conf *config.TopLevel) {
// Load local MSP
err := mspmgmt.LoadLocalMsp(conf.General.LocalMSPDir, conf.General.BCCSP, conf.General.LocalMSPID)
if err != nil { // Handle errors reading the config file
logger.Fatal("Failed to initialize local MSP:", err)
}
}``` loads the msp for the orderer
The orderer only uses this local MSP for signing
ok, so the broadcast_msg shows the local MSP's function
``` // Load local MSP
err := mspmgmt.LoadLocalMsp(config.General.LocalMSPDir, config.General.BCCSP, config.General.LocalMSPID)
if err != nil { // Handle errors reading the config file
fmt.Println("Failed to initialize local MSP:", err)
os.Exit(0)
}
signer := localmsp.NewSigner()
```
this seems not to verify the message
@jyellick
```chain.Processor = msgprocessor.NewSystemChannel(chain, r.templator, msgprocessor.CreateSystemChannelFilters(r, chain))```
the chain.Processor uses the filters to process the message
Yes, the filter performs the signature check
```NewSigFilter(policies.ChannelWriters, ledgerResources.PolicyManager()),```
i have a question
the client uses the signer to create the signed envelope
```env, err := utils.CreateSignedEnvelope(cb.HeaderType_MESSAGE, s.channelID, s.signer, &cb.ConfigValue{Value: transaction}, 0, 0)```
and the orderer receives the msg
```msg, err := srv.Recv()```
why doesn't it verify the msg
```chdr, isConfig, processor, err := bh.sm.BroadcastChannelSupport(msg)```
it just gets the channel header and the processor from the chain
```func (sf *sigFilter) Apply(message *cb.Envelope) error {
signedData, err := message.AsSignedData()
if err != nil {
return fmt.Errorf("could not convert message to signedData: %s", err)
}
policy, ok := sf.policyManager.GetPolicy(sf.policyName)
if !ok {
return fmt.Errorf("could not find policy %s", sf.policyName)
}
err = policy.Evaluate(signedData)
if err != nil {
return errors.Wrap(errors.WithStack(ErrPermissionDenied), err.Error())
}
return nil
}
```
my mistake
the msg is converted to signedData and then evaluated
the sampleconfig doesn't provide the tls config
@jyellick i advise adding the tls config
i submitted an issue
Thanks, we will look into it, or as always, you are welcome to submit a CR yourself
i did it
@jyellick i ran into a question about the fabric broadcast service
```// Handle starts a service thread for a given gRPC connection and services the broadcast connection
func (bh *handlerImpl) Handle(srv ab.AtomicBroadcast_BroadcastServer) error {
addr := util.ExtractRemoteAddress(srv.Context())
logger.Debugf("Starting new broadcast loop for %s", addr)
for {
msg, err := srv.Recv()
if err == io.EOF {
logger.Debugf("Received EOF from %s, hangup", addr)
return nil
}
if err != nil {
logger.Warningf("Error reading from %s: %s", addr, err)
return err
}
chdr, isConfig, processor, err := bh.sm.BroadcastChannelSupport(msg)
if err != nil {
logger.Warningf("[channel: %s] Could not get message processor for serving %s: %s", chdr.ChannelId, addr, err)
return srv.Send(&ab.BroadcastResponse{Status: cb.Status_INTERNAL_SERVER_ERROR, Info: err.Error()})
}
if !isConfig {
logger.Debugf("[channel: %s] Broadcast is processing normal message from %s with txid '%s' of type %s", chdr.ChannelId, addr, chdr.TxId, cb.HeaderType_name[chdr.Type])
configSeq, err := processor.ProcessNormalMsg(msg)
if err != nil {
logger.Warningf("[channel: %s] Rejecting broadcast of normal message from %s because of error: %s", chdr.ChannelId, addr, err)
return srv.Send(&ab.BroadcastResponse{Status: ClassifyError(err), Info: err.Error()})
}
err = processor.Order(msg, configSeq)
if err != nil {
logger.Warningf("[channel: %s] Rejecting broadcast of normal message from %s with SERVICE_UNAVAILABLE: rejected by Order: %s", chdr.ChannelId, addr, err)
return srv.Send(&ab.BroadcastResponse{Status: cb.Status_SERVICE_UNAVAILABLE, Info: err.Error()})
}
} else { // isConfig
logger.Debugf("[channel: %s] Broadcast is processing config update message from %s", chdr.ChannelId, addr)
config, configSeq, err := processor.ProcessConfigUpdateMsg(msg)
if err != nil {
logger.Warningf("[channel: %s] Rejecting broadcast of config message from %s because of error: %s", chdr.ChannelId, addr, err)
return srv.Send(&ab.BroadcastResponse{Status: ClassifyError(err), Info: err.Error()})
}
err = processor.Configure(msg, config, configSeq)
if err != nil {
logger.Warningf("[channel: %s] Rejecting broadcast of config message from %s with SERVICE_UNAVAILABLE: rejected by Configure: %s", chdr.ChannelId, addr, err)
return srv.Send(&ab.BroadcastResponse{Status: cb.Status_SERVICE_UNAVAILABLE, Info: err.Error()})
}
}
if logger.IsEnabledFor(logging.DEBUG) {
logger.Debugf("[channel: %s] Broadcast has successfully enqueued message of type %s from %s", chdr.ChannelId, cb.HeaderType_name[chdr.Type], addr)
}
err = srv.Send(&ab.BroadcastResponse{Status: cb.Status_SUCCESS})
if err != nil {
logger.Warningf("[channel: %s] Error sending to %s: %s", chdr.ChannelId, addr, err)
return err
}
}
}```
as you can see
the above is the code where the broadcast service deals with the envelope
it's an endless loop
```chain.start()```
```func (ch *chain) main() {
var timer <-chan time.Time
var err error
for {
seq := ch.support.Sequence()
err = nil
select {
case msg := <-ch.sendChan:
if msg.configMsg == nil {
// NormalMsg
if msg.configSeq < seq {
_, err = ch.support.ProcessNormalMsg(msg.initialMsg)
if err != nil {
logger.Warningf("Discarding bad normal message: %s", err)
continue
}
}
batches, _ := ch.support.BlockCutter().Ordered(msg.initialMsg)
if len(batches) == 0 && timer == nil {
timer = time.After(ch.support.SharedConfig().BatchTimeout())
continue
}
for _, batch := range batches {
block := ch.support.CreateNextBlock(batch)
ch.support.WriteBlock(block, nil)
}
if len(batches) > 0 {
timer = nil
}
} else {
// ConfigMsg
if msg.configSeq < seq {
msg.configMsg, _, err = ch.support.ProcessConfigUpdateMsg(msg.initialMsg)
if err != nil {
logger.Warningf("Discarding bad config message: %s", err)
continue
}
}
batch := ch.support.BlockCutter().Cut()
if batch != nil {
block := ch.support.CreateNextBlock(batch)
ch.support.WriteBlock(block, nil)
}
block := ch.support.CreateNextBlock([]*cb.Envelope{msg.configMsg})
ch.support.WriteConfigBlock(block, nil)
timer = nil
}
case <-timer:
//clear the timer
timer = nil
batch := ch.support.BlockCutter().Cut()
if len(batch) == 0 {
logger.Warningf("Batch timer expired with no pending requests, this might indicate a bug")
continue
}
logger.Debugf("Batch timer expired, creating block")
block := ch.support.CreateNextBlock(batch)
ch.support.WriteBlock(block, nil)
case <-ch.exitChan:
logger.Debugf("Exiting")
return
}
}
}```
is also an endless loop
so when a message comes
the first endless loop is invoked first?
how does the second endless loop get the envelope?
@asaningmaxchain they are running on separate goroutines
yes, how do the two communicate?
i got it
it uses Order
```
// Order accepts normal messages for ordering
func (ch *chain) Order(env *cb.Envelope, configSeq uint64) error {
select {
case ch.sendChan <- &message{
configSeq: configSeq,
initialMsg: env,
}:
return nil
case <-ch.exitChan:
return fmt.Errorf("Exiting")
}
}```
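The hand-off between the two loops can be sketched as a minimal producer/consumer pair over a channel (hypothetical, simplified names; not the actual Fabric types):

```go
package main

import "fmt"

// message mirrors the shape of the solo chain's message struct (simplified).
type message struct {
	payload string
}

// order mimics chain.Order: the Broadcast handler goroutine pushes the
// message into sendChan, or bails out if the chain is shutting down.
func order(sendChan chan<- *message, exitChan <-chan struct{}, m *message) error {
	select {
	case sendChan <- m:
		return nil
	case <-exitChan:
		return fmt.Errorf("exiting")
	}
}

func main() {
	sendChan := make(chan *message)
	exitChan := make(chan struct{})
	cut := make(chan string)

	// The main-loop goroutine, like chain.main(), blocks on sendChan and
	// "cuts a block" (here: just forwards the payload) when a message arrives.
	go func() {
		select {
		case m := <-sendChan:
			cut <- m.payload
		case <-exitChan:
		}
	}()

	// The "Broadcast handler" goroutine enqueues via order().
	if err := order(sendChan, exitChan, &message{payload: "tx1"}); err != nil {
		panic(err)
	}
	fmt.Println(<-cut)
}
```

So the Broadcast handler never touches the block cutter directly; it only hands the envelope to the main loop's channel and returns.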
the message seems to be processed twice?
@jyellick
```func (bh *handlerImpl) Handle(srv ab.AtomicBroadcast_BroadcastServer) error {```
```
configSeq, err := processor.ProcessNormalMsg(msg)```
and ```// NormalMsg
if msg.configSeq < seq {
_, err = ch.support.ProcessNormalMsg(msg.initialMsg)
if err != nil {
logger.Warningf("Discarding bad normal message: %s", err)
continue
}
}```
yes, and re-validation will be eliminated (for most of time) when FAB-5258 is done
got it
Has joined the channel.
With Kafka and TLS, it appears that you can configure both the Kafka client (the orderer) and the broker with TLS certificates. This provides authentication of the TLS keys at each end of the connection.
Is there any authorization provided by Fabric in the Kafka image based on the identity presented, or is it simply that if the orderer can communicate with Kafka then it has Kafka super-user privileges?
Similarly, does the orderer just accept any certificate from Kafka whose root of trust is verifiable?
@rsherwood: Not sure I get the question, but if you elaborate/rephrase I'll do my best to help. I will say this: there is nothing Fabric-related/specific about the way TLS authentication happens between the orderers and the Kafka brokers. Not sure if that helps?
Asking the question in a different way, and not being a Kafka expert:
Once Kafka has authenticated the orderer using the TLS certificate, has anything been done to restrict what that orderer can do in Kafka?
As an aside, if we don't use TLS, is it simply that any container with access to the docker network Kafka is listening on can do whatever Kafka will allow?
Has joined the channel.
Has left the channel.
> Once using the tls certificate kafka has authenticated the orderer, has anything been done to restrict what that orderer can do in kafka?
@rsherwood: No, you'll need to set up an `Authorizer` for that, see: http://docs.confluent.io/current/kafka/authorization.html
> As an aside, if we don't use tls , is it simply that any container that has access to the same docker network that kafka is listening on can the container do whatever kafka will allow.
@rsherwood: Correct.
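For reference, the `Authorizer` is a broker-side switch in `server.properties`; a minimal, hedged fragment (class name from the Kafka 0.9/0.10 line that Fabric v1.0 targets, principal names are placeholders):

```
# Enable Kafka's built-in ACL authorizer; without this, any authenticated
# (or, without TLS, any connected) client can do whatever Kafka allows.
authorizer.class.name=kafka.security.auth.SimpleAclAuthorizer
# Deny by default when no ACL matches
allow.everyone.if.no.acl.found=false
# Keep the orderer principals fully privileged (placeholder DNs)
super.users=User:CN=orderer0;User:CN=orderer1
```

ACLs for individual topics/operations would then be granted with `bin/kafka-acls.sh`, per the Confluent documentation linked above.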
@jyellick i don't understand why the
```type Manager interface {
// GetPolicy returns a policy and true if it was the policy requested, or false if it is the default policy
GetPolicy(id string) (Policy, bool)
// Manager returns the sub-policy manager for a given path and whether it exists
Manager(path []string) (Manager, bool)
}```
interface provides the ```Manager(path []string) (Manager, bool)``` method
i know the method returns the sub-manager
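The path-based lookup amounts to a recursive walk over nested managers; a toy sketch under simplified types (hypothetical, not the real `policies.ManagerImpl`):

```go
package main

import "fmt"

// manager is a simplified stand-in for policies.ManagerImpl.
type manager struct {
	policies map[string]string // policy name -> rule, simplified
	managers map[string]*manager
}

// subManager walks the path one segment at a time, like Manager(path []string).
func (m *manager) subManager(path []string) (*manager, bool) {
	if len(path) == 0 {
		return m, true
	}
	child, ok := m.managers[path[0]]
	if !ok {
		return nil, false
	}
	return child.subManager(path[1:])
}

func main() {
	sampleOrg := &manager{policies: map[string]string{"Admins": "MAJORITY"}}
	orderer := &manager{managers: map[string]*manager{"SampleOrg": sampleOrg}}
	channel := &manager{managers: map[string]*manager{"Orderer": orderer}}

	// Resolve /Channel/Orderer/SampleOrg, then look up its Admins policy.
	if sub, ok := channel.subManager([]string{"Orderer", "SampleOrg"}); ok {
		fmt.Println(sub.policies["Admins"])
	}
}
```

This is why the interface exposes `Manager(path)`: callers holding the channel-level manager can descend to any group's policies without knowing the tree layout up front.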
```2017-08-31 22:37:45.862 CST [policies] NewManagerImpl -> INFO 002 path:Channel/Orderer/SampleOrg
2017-08-31 22:37:45.862 CST [policies] NewManagerImpl -> INFO 003 policy name: Readers
2017-08-31 22:37:45.862 CST [policies] NewManagerImpl -> INFO 004 policy name: Writers
2017-08-31 22:37:45.862 CST [policies] NewManagerImpl -> INFO 005 policy name: Admins
2017-08-31 22:37:45.862 CST [policies] NewManagerImpl -> INFO 006 path:Channel/Orderer
2017-08-31 22:37:45.862 CST [policies] NewManagerImpl -> INFO 007 policy name: Writers
2017-08-31 22:37:45.862 CST [policies] NewManagerImpl -> INFO 008 policy name: Admins
2017-08-31 22:37:45.862 CST [policies] NewManagerImpl -> INFO 009 policy name: SampleOrg/Readers
2017-08-31 22:37:45.862 CST [policies] NewManagerImpl -> INFO 00a policy name: SampleOrg/Writers
2017-08-31 22:37:45.862 CST [policies] NewManagerImpl -> INFO 00b policy name: SampleOrg/Admins
2017-08-31 22:37:45.862 CST [policies] NewManagerImpl -> INFO 00c policy name: BlockValidation
2017-08-31 22:37:45.862 CST [policies] NewManagerImpl -> INFO 00d policy name: Readers
2017-08-31 22:37:45.862 CST [policies] NewManagerImpl -> INFO 00e path:Channel/Consortiums/SampleConsortium/SampleOrg
2017-08-31 22:37:45.862 CST [policies] NewManagerImpl -> INFO 00f policy name: Writers
2017-08-31 22:37:45.862 CST [policies] NewManagerImpl -> INFO 010 policy name: Admins
2017-08-31 22:37:45.862 CST [policies] NewManagerImpl -> INFO 011 policy name: Readers
2017-08-31 22:37:45.862 CST [policies] NewManagerImpl -> INFO 012 path:Channel/Consortiums/SampleConsortium
2017-08-31 22:37:45.862 CST [policies] NewManagerImpl -> INFO 013 policy name: SampleOrg/Writers
2017-08-31 22:37:45.862 CST [policies] NewManagerImpl -> INFO 014 policy name: SampleOrg/Admins
2017-08-31 22:37:45.862 CST [policies] NewManagerImpl -> INFO 015 policy name: SampleOrg/Readers
2017-08-31 22:37:45.862 CST [policies] NewManagerImpl -> INFO 016 path:Channel/Consortiums
2017-08-31 22:37:45.862 CST [policies] NewManagerImpl -> INFO 017 policy name: Admins
2017-08-31 22:37:45.862 CST [policies] NewManagerImpl -> INFO 018 policy name: SampleConsortium/SampleOrg/Writers
2017-08-31 22:37:45.862 CST [policies] NewManagerImpl -> INFO 019 policy name: SampleConsortium/SampleOrg/Admins
2017-08-31 22:37:45.862 CST [policies] NewManagerImpl -> INFO 01a policy name: SampleConsortium/SampleOrg/Readers
2017-08-31 22:37:45.862 CST [policies] NewManagerImpl -> INFO 01b path:Channel
2017-08-31 22:37:45.862 CST [policies] NewManagerImpl -> INFO 01c policy name: Admins
2017-08-31 22:37:45.863 CST [policies] NewManagerImpl -> INFO 01d policy name: Orderer/SampleOrg/Readers
2017-08-31 22:37:45.863 CST [policies] NewManagerImpl -> INFO 01e policy name: Orderer/Writers
2017-08-31 22:37:45.863 CST [policies] NewManagerImpl -> INFO 01f policy name: Consortiums/Admins
2017-08-31 22:37:45.863 CST [policies] NewManagerImpl -> INFO 020 policy name: Consortiums/SampleConsortium/SampleOrg/Writers
2017-08-31 22:37:45.863 CST [policies] NewManagerImpl -> INFO 021 policy name: Consortiums/SampleConsortium/SampleOrg/Admins
2017-08-31 22:37:45.863 CST [policies] NewManagerImpl -> INFO 022 policy name: Orderer/BlockValidation
2017-08-31 22:37:45.863 CST [policies] NewManagerImpl -> INFO 023 policy name: Orderer/SampleOrg/Admins
2017-08-31 22:37:45.863 CST [policies] NewManagerImpl -> INFO 024 policy name: Consortiums/SampleConsortium/SampleOrg/Readers
2017-08-31 22:37:45.863 CST [policies] NewManagerImpl -> INFO 025 policy name: Readers
2017-08-31 22:37:45.863 CST [policies] NewManagerImpl -> INFO 026 policy name: Writers
2017-08-31 22:37:45.863 CST [policies] NewManagerImpl -> INFO 027 policy name: Orderer/Readers
2017-08-31 22:37:45.863 CST [policies] NewManagerImpl -> INFO 028 policy name: Orderer/Admins
2017-08-31 22:37:45.863 CST [policies] NewManagerImpl -> INFO 029 policy name: Orderer/SampleOrg/Writers```
the log above is from using GenesisMethod=provisional and GenesisProfile=SampleSingleMSPSolo
the channel manager contains all the policies
each manager contains its sub-managers' policies recursively
so the channel manager contains all the policies defined everywhere
i also don't understand the common/config/resources folder
there are very few comments in it
I'm trying to set up a Kafka ordering service with the Kafka cluster containers on one machine and the orderer containers on a second machine. On trying to create a channel, I keep getting a `service_unavailable` error. Checking the docker logs, there is a message `found some partitions to be leaderless`. After the maximum number of retries, it fails with the following message in the orderer log:
```
[sarama] 2017/08/31 15:02:04.336627 client.go:599: client/metadata fetching metadata for [testchainid] from broker 9.167.110.93:32800
[sarama] 2017/08/31 15:02:04.484268 client.go:610: client/metadata found some partitions to be leaderless
2017-08-31 15:02:04.484 UTC [orderer/kafka] startThread -> CRIT 595 [channel: testchainid] Cannot post CONNECT message = kafka server: In the middle of a leadership election, there is currently no leader for this partition and hence it is unavailable for writes.
panic: [channel: testchainid] Cannot post CONNECT message = kafka server: In the middle of a leadership election, there is currently no leader for this partition and hence it is unavailable for writes.
```
For more details please see the issue https://jira.hyperledger.org/browse/FAB-6002
any pointers are appreciated.
@htyagi90 This indicates a misconfiguration of your Kafka cluster and is not a fabric issue
Please check your Kafka and Zookeeper logs to identify the problem
well, my brokers are set up correctly, the logs don't show any errors. 1 out of the three zookeeper container logs shows no error,
but the other 2 zookeeper containers show different errors.
one ZK
```
2017-08-31 19:56:04,741 [myid:3] - INFO [LearnerHandler-/10.0.0.5:37162:LearnerHandler@518] - Received NEWLEADER-ACK message from 2
2017-08-31 19:56:13,116 [myid:3] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxnFactory@192] - Accepted socket connection from /10.0.0.13:58646
2017-08-31 19:56:13,128 [myid:3] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:ZooKeeperServer@928] - Client attempting to establish new session at /10.0.0.13:58646
2017-08-31 19:56:13,138 [myid:3] - INFO [SyncThread:3:FileTxnLog@203] - Creating new log file: log.100000001
2017-08-31 19:56:13,188 [myid:3] - INFO [CommitProcessor:3:ZooKeeperServer@673] - Established session 0x35e39dd64560000 with negotiated timeout 6000 for client /10.0.0.13:58646
2017-08-31 19:56:13,314 [myid:3] - INFO [ProcessThread(sid:3 cport:-1)::PrepRequestProcessor@649] - Got user-level KeeperException when processing sessionid:0x35e39dd64560000 type:create cxid:0x5 zxid:0x100000003 txntype:-1 reqpath:n/a Error Path:/brokers Error:KeeperErrorCode = NoNode for /brokers
2017-08-31 19:56:13,383 [myid:3] - INFO [ProcessThread(sid:3 cport:-1)::PrepRequestProcessor@649] - Got user-level KeeperException when processing sessionid:0x35e39dd64560000 type:create cxid:0xb zxid:0x100000007 txntype:-1 reqpath:n/a Error Path:/config Error:KeeperErrorCode = NoNode for /config
2017-08-31 19:56:13,441 [myid:3] - INFO [ProcessThread(sid:3 cport:-1)::PrepRequestProcessor@649] - Got user-level KeeperException when processing sessionid:0x35e39dd64560000 type:create cxid:0x13 zxid:0x10000000c txntype:-1 reqpath:n/a Error Path:/admin Error:KeeperErrorCode = NoNode for /admin
2017-08-31 19:56:13,872 [myid:3] - INFO [ProcessThread(sid:3 cport:-1)::PrepRequestProcessor@649] - Got user-level KeeperException when processing sessionid:0x35e39dd64560000 type:setData cxid:0x21 zxid:0x100000012 txntype:-1 reqpath:n/a Error Path:/controller_epoch Error:KeeperErrorCode = NoNode for /controller_epoch
2017-08-31 19:56:14,003 [myid:3] - INFO [ProcessThread(sid:3 cport:-1)::PrepRequestProcessor@649] - Got user-level KeeperException when processing sessionid:0x35e39dd64560000 type:delete cxid:0x30 zxid:0x100000014 txntype:-1 reqpath:n/a Error Path:/admin/preferred_replica_election Error:KeeperErrorCode = NoNode for /admin/preferred_replica_election
2017-08-31 19:56:14,219 [myid:3] - INFO [ProcessThread(sid:3 cport:-1)::PrepRequestProcessor@649] - Got user-level KeeperException when processing sessionid:0x35e39dd64560000 type:create cxid:0x37 zxid:0x100000016 txntype:-1 reqpath:n/a Error Path:/brokers Error:KeeperErrorCode = NodeExists for /brokers
2017-08-31 19:56:14,221 [myid:3] - INFO [ProcessThread(sid:3 cport:-1)::PrepRequestProcessor@649] - Got user-level KeeperException when processing sessionid:0x35e39dd64560000 type:create cxid:0x38 zxid:0x100000017 txntype:-1 reqpath:n/a Error Path:/brokers/ids Error:KeeperErrorCode = NodeExists for /brokers/ids
2017-08-31 19:56:14,684 [myid:3] - INFO [ProcessThread(sid:3 cport:-1)::PrepRequestProcessor@649] - Got user-level KeeperException when processing sessionid:0x15e39dd64440000 type:create cxid:0x13 zxid:0x100000019 txntype:-1 reqpath:n/a Error Path:/brokers Error:KeeperErrorCode = NodeExists for /brokers
2017-08-31 19:56:14,685 [myid:3] - INFO [ProcessThread(sid:3 cport:-1)::PrepRequestProcessor@649] - Got user-level KeeperException when processing sessionid:0x15e39dd64440000 type:create cxid:0x14 zxid:0x10000001a txntype:-1 reqpath:n/a Error Path:/brokers/ids Error:KeeperErrorCode = NodeExists for /brokers/ids
2017-08-31 19:56:15,353 [myid:3] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxnFactory@192] - Accepted socket connection from /10.0.0.18:32982
2017-08-31 19:56:15,356 [myid:3] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:ZooKeeperServer@928] - Client attempting to establish new session at /10.0.0.18:32982
2017-08-31 19:56:15,362 [myid:3] - INFO [CommitProcessor:3:ZooKeeperServer@673] - Established session 0x35e39dd64560001 with negotiated timeout 6000 for client /10.0.0.18:32982
2017-08-31 19:56:15,692 [myid:3] - INFO [ProcessThread(sid:3 cport:-1)::PrepRequestProcessor@649] - Got user-level KeeperException when processing sessionid:0x35e39dd64560001 type:create cxid:0x13 zxid:0x10000001d txntype:-1 reqpath:n/a Error Path:/brokers Error:KeeperErrorCode = NodeExists for /brokers
2017-08-31 19:56:15,692 [myid:3] - INFO [ProcessThread(sid:3 cport:-1)::PrepRequestProcessor@649] - Got user-level KeeperException when processing sessionid:0x35e39dd64560001 type:create cxid:0x14 zxid:0x10000001e txntype:-1 reqpath:n/a Error Path:/brokers/ids Error:KeeperErrorCode = NodeExists for /brokers/ids
```
the other shows something like this.
```
2017-08-31 19:56:03,692 [myid:2] - INFO [QuorumPeer[myid=2]/0:0:0:0:0:0:0:0:2181:Follower@61] - FOLLOWING - LEADER ELECTION TOOK - 235
2017-08-31 19:56:03,694 [myid:2] - INFO [QuorumPeer[myid=2]/0:0:0:0:0:0:0:0:2181:QuorumPeer$QuorumServer@149] - Resolved hostname: zookeeper2 to address: zookeeper2/10.0.0.6
2017-08-31 19:56:03,695 [myid:2] - WARN [QuorumPeer[myid=2]/0:0:0:0:0:0:0:0:2181:Learner@236] - Unexpected exception, tries=0, connecting to zookeeper2/10.0.0.6:2888
java.net.ConnectException: Connection refused (Connection refused)
at java.net.PlainSocketImpl.socketConnect(Native Method)
at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:350)
at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206)
at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
at java.net.Socket.connect(Socket.java:589)
at org.apache.zookeeper.server.quorum.Learner.connectToLeader(Learner.java:228)
at org.apache.zookeeper.server.quorum.Follower.followLeader(Follower.java:69)
at org.apache.zookeeper.server.quorum.QuorumPeer.run(QuorumPeer.java:846)
2017-08-31 19:56:03,768 [myid:2] - INFO [zookeeper1/10.0.0.5:3888:QuorumCnxManager$Listener@541] - Received connection request /10.0.0.4:48900
2017-08-31 19:56:03,772 [myid:2] - INFO [WorkerReceiver[myid=2]:FastLeaderElection@600] - Notification: 1 (message format version), 1 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 1 (n.sid), 0x0 (n.peerEpoch) FOLLOWING (my state)
2017-08-31 19:56:03,774 [myid:2] - INFO [WorkerReceiver[myid=2]:FastLeaderElection@600] - Notification: 1 (message format version), 3 (n.leader), 0x0 (n.zxid), 0x1 (n.round), LOOKING (n.state), 1 (n.sid), 0x0 (n.peerEpoch) FOLLOWING (my state)
2017-08-31 19:56:04,720 [myid:2] - INFO [QuorumPeer[myid=2]/0:0:0:0:0:0:0:0:2181:Learner@329] - Getting a snapshot from leader
2017-08-31 19:56:04,725 [myid:2] - INFO [QuorumPeer[myid=2]/0:0:0:0:0:0:0:0:2181:FileTxnSnapLog@240] - Snapshotting: 0x100000000 to /data/version-2/snapshot.100000000
2017-08-31 19:56:13,139 [myid:2] - WARN [QuorumPeer[myid=2]/0:0:0:0:0:0:0:0:2181:Follower@116] - Got zxid 0x100000001 expected 0x1
2017-08-31 19:56:13,140 [myid:2] - INFO [SyncThread:2:FileTxnLog@203] - Creating new log file: log.100000001
```
the third zk shows no exception
Also, if I remove the advertised port, I keep getting the following error in orderer logs:
```
[sarama] 2017/08/31 20:08:04.857291 broker.go:96: Failed to connect to broker 9.167.110.93:32800: dial tcp 9.167.110.93:32800: getsockopt: no route to host
[sarama] 2017/08/31 20:08:04.857315 client.go:620: client/metadata got error from broker while fetching metadata: dial tcp 9.167.110.93:32800: getsockopt: no route to host
[sarama] 2017/08/31 20:08:04.857321 client.go:626: client/metadata no available broker to send metadata request to
[sarama] 2017/08/31 20:08:04.857327 client.go:428: client/brokers resurrecting 3 dead seed brokers
[sarama] 2017/08/31 20:08:04.857332 client.go:187: Closing Client
2017-08-31 20:08:04.857 UTC [orderer/kafka] startThread -> CRIT 58a [channel: testchainid] Cannot set up producer = kafka: client has run out of available brokers to talk to (Is your cluster reachable?)
panic: [channel: testchainid] Cannot set up producer = kafka: client has run out of available brokers to talk to (Is your cluster reachable?)
goroutine 9 [running]:
panic(0xb31bc0, 0xc420902440)
/opt/go/src/runtime/panic.go:500 +0x1a1
github.com/hyperledger/fabric/vendor/github.com/op/go-logging.(*Logger).Panicf(0xc4201f5050, 0xc6ca2d, 0x29, 0xc4207de5c0, 0x2, 0x2)
/opt/gopath/src/github.com/hyperledger/fabric/vendor/github.com/op/go-logging/logger.go:194 +0x127
github.com/hyperledger/fabric/orderer/kafka.startThread(0xc420888630)
/opt/gopath/src/github.com/hyperledger/fabric/orderer/kafka/chain.go:153 +0xfca
created by github.com/hyperledger/fabric/orderer/kafka.(*chainImpl).Start
/opt/gopath/src/github.com/hyperledger/fabric/orderer/kafka/chain.go:94 +0x3f
```
@htyagi90: This does sound like a Kafka issue as @jyellick noted. Please try out the following. Once you have your Kafka cluster up and running, use the shell scripts that come with the tarball that correspond to your version of Kafka: `bin/kafka-topics.sh`, `bin/kafka-console-producer.sh`, `bin/kafka-console-consumer.sh` and try to create a topic, post to it, and read from it.
The instructions for these operations can be found here: http://kafka.apache.org/090/documentation.html#quickstart (they are short and clear)
*If* that works without issues, please post here again. If it doesn't, you are most definitely dealing with a pure Kafka configuration issue.
```Hi all,
I'd like to announce that LedgerDomain is publishing an example Fabric app that may help others in their development.
https://github.com/LedgerDomain/FabricWebApp
Some features/aspects:
- Based on Fabric node.js SDK
- Uses fabric-ca-cryptogen.sh script to generate crypto materials
- Uses configtxgen to generate configuration TX and orderer's genesis block
- TLS is enabled on all components (with a caveat; see the repo's README.md)
- Uses docker (must use v1.13 or newer) and docker-compose to simulate all the different services that would run on separate servers in a production environment
- Uses docker volumes to demonstrate how the cryptographic/configuration materials would be segregated and distributed to "real" servers
- Easy to run the whole system via make rules that invoke docker-compose services to manage everything
- Chaincode that demonstrates using the transactor cert to do basic permissions checking
- Minimal web client which allows invoking any transaction on behalf of a specified user, which may be used to demonstrate the permissions checking done in the chaincode
More info is available in the README.md file. I'm available on the Hyperledger RocketChat server as vdods if there are any questions!
```
@jyellick i suggest moving common/channelconfig/*util.go to common/channelconfig/util/
the *util.go files are used to produce the sample config for configtxgen
@asaningmaxchain Yes, I have considered doing something like this. There are some other options with go 1.9's type aliases as well
the code is much easier to understand
but the policy code is still a little difficult to understand
and i don't understand why /Channel/*/Admins is equal to /Channel/Orderer/*/Admins
let me take an example:
Channel, Orderer, SampleOrg
the channel, orderer and sampleorg groups each have three policies (Admins, Writers, Readers)
sampleorg is a leaf
so it uses the MSP manager to evaluate the signed data
for the sampleorg:
```type ManagerImpl struct {
path string
policies map[string]Policy
managers map[string]*ManagerImpl
}```
the ManagerImpl is used to build the data
the path is Channel/Orderer/SampleOrg
there are three policies:
Admins, Readers, Writers
and the managers map is nil
for the orderer
the path is Channel/Orderer
the policies are:
SampleOrg/Admins, SampleOrg/Readers, SampleOrg/Writers
Admins, Writers, Readers
the Admins, Writers, Readers policies map to SampleOrg/Admins, SampleOrg/Writers, SampleOrg/Readers
so for the orderer
Admins and SampleOrg/Admins map to the same policy
the same for Readers and Writers
For the implicit policies, a sub-policy name is given (usually the same name), along with a sub-policy rule: either 'ANY', 'ALL', or 'MAJORITY'
For the Admins policies the default is 'MAJORITY'; for Readers and Writers it is 'ANY'
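The threshold math behind those ImplicitMeta rules can be sketched like this (a simplification; the real evaluator works over signed data, not pre-computed booleans):

```go
package main

import "fmt"

// satisfied reports whether an ImplicitMeta-style rule is met, given how
// many of the n sub-policies evaluated to true.
func satisfied(rule string, satisfiedCount, n int) bool {
	switch rule {
	case "ANY":
		return satisfiedCount >= 1
	case "ALL":
		return satisfiedCount == n
	case "MAJORITY":
		return satisfiedCount > n/2 // strict majority
	default:
		return false
	}
}

func main() {
	// With 3 sub-policies (e.g. three orgs' Admins policies):
	fmt.Println(satisfied("MAJORITY", 2, 3)) // 2 of 3 is a majority
	fmt.Println(satisfied("MAJORITY", 1, 3)) // 1 of 3 is not
	fmt.Println(satisfied("ANY", 1, 3))
	fmt.Println(satisfied("ALL", 2, 3))
}
```

With a single org, as in the SampleOrg logs above, all three rules collapse to "that one org's policy is satisfied", which is why Admins and SampleOrg/Admins behave identically there.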
ok
Has joined the channel.
@kostas hey thanks for the pointers. So I did that, and on running the console producer script, I get the error `{test=LEADER_NOT_AVAILABLE} (org.apache.kafka.clients.NetworkClient)`.
Does this have something to do with the server properties, or is Kafka not able to connect to zookeeper?
@kostas thanks for the pointers. So I did a little experimenting and what you suggested is true. Since I'm running the Kafka cluster on one machine and the orderers on a different machine, I have `KAFKA_ADVERTISED_HOST_NAME` and `KAFKA_ADVERTISED_PORT` set to the public IP and port of the host machine. Now, once inside the container, with these settings the broker couldn't be reached.
I also tried removing these settings and then running the shell scripts. They worked fine.
Now, my conundrum is how to expose the Kafka cluster over the internet to the orderers' docker host.
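For context on the advertised-address settings discussed above, a hedged docker-compose fragment (assuming a wurstmeister-style Kafka image; the IP/port come from the logs earlier in the thread, substitute your own). The key constraint is that whatever address the broker advertises must be reachable by every client, including ones running inside the docker network:

```
environment:
  # The broker registers this address in Zookeeper; all clients
  # (orderers and the console scripts alike) will dial it.
  - KAFKA_ADVERTISED_HOST_NAME=9.167.110.93
  - KAFKA_ADVERTISED_PORT=32800
ports:
  # Publish the advertised port on the host so external clients can reach it
  - "32800:9092"
```

If clients inside the docker network cannot route to the advertised host address, they will fail exactly as described, even though the broker itself is healthy.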
> I also tried by removing these settings and then run the shell scripts. They worked fine.
@htyagi90: That is great, thank you for the update.
[ ](https://chat.hyperledger.org/channel/fabric-consensus?msg=TJyoCJywwphPs94D9) @jyellick ```bs := &bootstrapper{
channelGroups: []*cb.ConfigGroup{
// Chain Config Types
channelconfig.DefaultHashingAlgorithm(),
channelconfig.DefaultBlockDataHashingStructure(),
// Default policies
policies.TemplateImplicitMetaAnyPolicy([]string{}, channelconfig.ReadersPolicyKey),
policies.TemplateImplicitMetaAnyPolicy([]string{}, channelconfig.WritersPolicyKey),
policies.TemplateImplicitMetaMajorityPolicy([]string{}, channelconfig.AdminsPolicyKey),
},
}```
@asaningmaxchain Yes, this is where the defaults are set.
but i have a problem with the configtxgen tool, specifically the files in common/channelconfig/*util.go
those files build a template for each component (channel, orderer, application)
but each method defined in those files returns a ConfigGroup
i think we could just set the config values for each component and build a channel, e.g.
```func SetConfigValueForChannelGroup(keys []string, values [][]byte) *cb.ConfigGroup {
}```
```func SetConfigValueForChannelGroup(keys []string, values [][]byte) (*cb.ConfigGroup, error) {
	channelGroup := cb.NewConfigGroup()
	for index, key := range keys {
		switch key {
		case ConsortiumKey:
			channelGroup.Values[ConsortiumKey] = &cb.ConfigValue{Value: utils.MarshalOrPanic(&cb.Consortium{Name: string(values[index])})}
		case HashingAlgorithmKey:
			if values[index] == nil {
				values[index] = []byte(defaultHashingAlgorithm)
			}
			channelGroup.Values[HashingAlgorithmKey] = &cb.ConfigValue{Value: utils.MarshalOrPanic(&cb.HashingAlgorithm{Name: string(values[index])})}
		case BlockDataHashingStructureKey:
			width := uint64(defaultBlockDataHashingStructureWidth)
			if values[index] != nil {
				// the width arrives as a decimal string, so it must be parsed, not cast
				var err error
				width, err = strconv.ParseUint(string(values[index]), 10, 32)
				if err != nil {
					return nil, err
				}
			}
			channelGroup.Values[BlockDataHashingStructureKey] = &cb.ConfigValue{Value: utils.MarshalOrPanic(&cb.BlockDataHashingStructure{Width: uint32(width)})}
		case OrdererAddressesKey:
			if values[index] == nil {
				values[index] = []byte("127.0.0.1:7050")
			}
			// a comma-separated list; strings.Split avoids an endless ReadString loop
			addrs := strings.Split(string(values[index]), ",")
			channelGroup.Values[OrdererAddressesKey] = &cb.ConfigValue{Value: utils.MarshalOrPanic(&cb.OrdererAddresses{Addresses: addrs})}
		default:
			return nil, fmt.Errorf("disallowed key: %s", key)
		}
	}
	return channelGroup, nil
}
```
@jyellick can you provide the fabric design docs, e.g. why the orderer system channel, org, applicationOrg, and consortiums are defined?
Has joined the channel.
Has joined the channel.
Can someone please explain the types of consensus supported in Hyperledger Fabric v1.0, and which is the best one to use?
[ ](https://chat.hyperledger.org/channel/fabric-consensus?msg=JZGjSKFoJKzcnSnnF) @ynkumar143 there are two types available in v1.0, solo and kafka. Only kafka (which is Crash Fault Tolerant) is recommended for production. Folks are still working on pbft consensus
While I'm still working on adding more tests for FAB-5720, I want to check if following compatibility schema fulfills the need:
- a new configuration field is added as
```
Compatibility:
Resubmission: false
```
it is _false_ by default. However, starting a v1.0 orderer with a genesis file containing this field would fail, so it is not _forward compatible_ (we would be using the old genesis file to start the v1.1 orderer while upgrading anyway)
A possible upgrade procedure looks like:
- starting from a cluster consisting of v1.0 peers and orderers
- (re)starting v1.1 orderers with old genesis file, where `Compatibility: resubmission` is missing, therefore _false_
- upgrading peers to v1.1
- update config to turn on `Resubmission`
Note that if we join a v1.0 peer to a channel, whose config contains key `Compatibility`, peer would not recognize it and fail to join.
I read the compatibility design doc https://docs.google.com/document/d/1CbB8dR0GNnHi7UekIDpsySCBXkPTtKk0m53CIzJbU-A/edit# but failed to find detailed description on how to handle this. So I felt we should have consensus on this upgrading schema so it aligns with the bigger picture. cc @jyellick @kostas
I have a CA server. I have an Orderer that runs fine as SampleInsecureSolo. Trying to get it to use SampleSingleMSPSolo with my msp. So I enrolled my orderer with my ca server using -M to get the msp. I placed the msp on the orderer and try to configtxgen. What goes in msp/admincerts/? I tried the admincert.pem from sampleconfig but get "certificate signed by unknown authority".
@guoger With respect to the compatibility stuff, please simply assume that there is something provided by the orderer channel config (often referenced as `SharedConfig` in the code). You may wish to modify this interface and mock the behavior. There will be a general mechanism to address getting this method to work.
@jworthington This is a better question for #fabric-ca
k, thx
@jyellick i see the source code for configtxgen tool
i think it's a little difficult to understand
like channel_util
it just set the configvalue for channelgroup
```
func SetConfigValueForChannelGroup(keys []string, values [][]byte) (*cb.ConfigGroup, error) {
}```
why not use this method to set the configvalue for the configgroup?
@asaningmaxchain Yes, that code could probably use some cleanup, though as it is a tool, and not executed in normal runtime, it is slightly lower priority.
@jyellick i'll try to modify it myself
Great, good luck!
Let me know if I can be of any assistance
ok
```package template

import (
	"fmt"
	"math"
	"strconv"
	"strings"

	"github.com/hyperledger/fabric/bccsp"
	cb "github.com/hyperledger/fabric/protos/common"
	"github.com/hyperledger/fabric/protos/utils"
)

const (
	// ConsortiumKey is the key for the cb.ConfigValue for the Consortium message
	ConsortiumKey = "Consortium"
	// HashingAlgorithmKey is the cb.ConfigItem type key name for the HashingAlgorithm message
	HashingAlgorithmKey = "HashingAlgorithm"
	// BlockDataHashingStructureKey is the cb.ConfigItem type key name for the BlockDataHashingStructure message
	BlockDataHashingStructureKey = "BlockDataHashingStructure"
	// OrdererAddressesKey is the cb.ConfigItem type key name for the OrdererAddresses message
	OrdererAddressesKey = "OrdererAddresses"
)

var (
	defaultConsortium                     = "SimpleConsortium"
	defaultHashingAlgorithm               = bccsp.SHA256
	defaultBlockDataHashingStructureWidth = uint32(math.MaxUint32)
	defaultOrdererAddresses               = []string{"127.0.0.1:7050"}
)

func DefaultConfigValueForChannelGroup() (*cb.ConfigGroup, error) {
	return SetConfigValueForChannelGroup([]string{ConsortiumKey, HashingAlgorithmKey, BlockDataHashingStructureKey, OrdererAddressesKey}, [][]byte{nil, nil, nil, nil})
}

func SetConfigValueForChannelGroup(keys []string, values [][]byte) (*cb.ConfigGroup, error) {
	if len(keys) != len(values) {
		return nil, fmt.Errorf("the number of keys (%d) does not equal the number of values (%d)", len(keys), len(values))
	}
	channelGroup := cb.NewConfigGroup()
	for index, key := range keys {
		switch key {
		case ConsortiumKey:
			if values[index] == nil {
				values[index] = []byte(defaultConsortium)
			}
			channelGroup.Values[ConsortiumKey] = &cb.ConfigValue{Value: utils.MarshalOrPanic(&cb.Consortium{Name: string(values[index])})}
		case HashingAlgorithmKey:
			if values[index] == nil {
				values[index] = []byte(defaultHashingAlgorithm)
			}
			channelGroup.Values[HashingAlgorithmKey] = &cb.ConfigValue{Value: utils.MarshalOrPanic(&cb.HashingAlgorithm{Name: string(values[index])})}
		case BlockDataHashingStructureKey:
			if values[index] == nil {
				values[index] = []byte(strconv.FormatUint(uint64(defaultBlockDataHashingStructureWidth), 10))
			}
			width, err := strconv.ParseUint(string(values[index]), 10, 32)
			if err != nil {
				return nil, fmt.Errorf("invalid BlockDataHashingStructure width '%s': %s", string(values[index]), err)
			}
			channelGroup.Values[BlockDataHashingStructureKey] = &cb.ConfigValue{Value: utils.MarshalOrPanic(&cb.BlockDataHashingStructure{Width: uint32(width)})}
		case OrdererAddressesKey:
			if values[index] == nil {
				values[index] = []byte(strings.Join(defaultOrdererAddresses, ","))
			}
			addresses := strings.Split(string(values[index]), ",")
			channelGroup.Values[OrdererAddressesKey] = &cb.ConfigValue{Value: utils.MarshalOrPanic(&cb.OrdererAddresses{Addresses: addresses})}
		default:
			return nil, fmt.Errorf("the key '%s' is not supported", key)
		}
	}
	return channelGroup, nil
}
```
@jyellick it's for channel config
in configtxgen, one template nests another
[ ](https://chat.hyperledger.org/channel/fabric-consensus?msg=mshTgYGYHYCeBZQNq) @jyellick not sure I understand you correctly.. I modified the `config.Orderer` interface to include one more method, `Resubmission()`, so that `chain.SharedConfig().Resubmission()` would tell us the status of the switch. Also, `configuration.proto` is augmented with
```
// Compatibility contains configurations for backward compatibility
message Compatibility {
    // The consenter should re-submit a transaction if it's deemed to be valid during re-validation.
    // However, a v1.0.x orderer couldn't differentiate an original tx from a re-submitted tx,
    // therefore this switch is added to control resubmission behavior.
    bool resubmission = 1;
}
```
where I hope future compatibility-related config could go in there.
```type Registrar struct {
	chains          map[string]*ChainSupport
	consenters      map[string]consensus.Consenter
	ledgerFactory   ledger.Factory
	signer          crypto.LocalSigner
	systemChannelID string
	systemChannel   *ChainSupport
	templator       msgprocessor.ChannelConfigTemplator
}```
@jyellick i think the field templator should be removed
because i think it acts as a container
it just gets the chain from the container for each message
@guoger I'd suggest you not bother with hacking on the config code or defining new message types. As the mechanism for handling non-backwards compatible stuff is still under development. Instead, you may simply modify the interface definition for channelconfig.Orderer to expose some method like `ShouldResubmit()` which informs you of your decision. We will worry about how that is populated in the future.
hey, I'm trying to run kafka based ordering service with 3zk-3kafka-3orderers.
I have setup an overlay network using docker swarm.
my zk-kafka cluster is on one docker host and orderers are on a second docker host .
on running the network, my kafka brokers are able to register to the orderers fine (as shown in the logs)
also, kafka-zk cluster is communicating fine (no error in the logs.)
But on channel creation, I get timeout error as follows
```
2017-09-05 20:23:55.746 UTC [channelCmd] readBlock -> DEBU 106 Got status:*orderer.DeliverResponse_Status
2017-09-05 20:23:55.746 UTC [msp] GetLocalMSP -> DEBU 107 Returning existing local MSP
2017-09-05 20:23:55.746 UTC [msp] GetDefaultSigningIdentity -> DEBU 108 Obtaining default signing identity
2017-09-05 20:23:55.746 UTC [channelCmd] InitCmdFactory -> INFO 109 Endorser and orderer connections initialized
Error: timeout waiting for channel creation
Usage:
peer channel create [flags]
Flags:
-c, --channelID string In case of a newChain command, the channel ID to create.
-f, --file string Configuration transaction file generated by a tool such as configtxgen for submitting to orderer
-t, --timeout int Channel creation timeout (default 5)
Global Flags:
--cafile string Path to file containing PEM-encoded trusted certificate(s) for the ordering endpoint
--logging-level string Default logging level and overrides, see core.yaml for full syntax
-o, --orderer string Ordering service endpoint
--test.coverprofile string Done (default "coverage.cov")
--tls Use TLS when communicating with the orderer endpoint
-v, --version Display current version of fabric peer server
```
in the orderer logs, I am posting the snippets of brokers registering and the warning I keep getting.
```
2017-09-05 20:23:54.941 UTC [orderer/main] Deliver -> DEBU a8f Starting new Deliver handler
2017-09-05 20:23:54.941 UTC [orderer/common/deliver] Handle -> DEBU a90 Starting new deliver loop
2017-09-05 20:23:54.941 UTC [orderer/common/deliver] Handle -> DEBU a91 Attempting to read seek info message
[sarama] 2017/09/05 20:23:55.055863 client.go:397: client/brokers registered new broker #0 at dace2ed499f5:9092
[sarama] 2017/09/05 20:23:55.055879 client.go:397: client/brokers registered new broker #1 at 13cb75ae1eb5:9092
[sarama] 2017/09/05 20:23:55.055882 client.go:397: client/brokers registered new broker #2 at 9adf353186f5:9092
[sarama] 2017/09/05 20:23:55.055898 client.go:154: Successfully initialized new client
2017-09-05 20:23:55.055 UTC [orderer/kafka] try -> DEBU a92 [channel: mychannel] Error is nil, breaking the retry loop
2017-09-05 20:23:55.055 UTC [orderer/kafka] startThread -> INFO a93 [channel: mychannel] Parent consumer set up successfully
2017-09-05 20:23:55.055 UTC [orderer/kafka] setupChannelConsumerForChannel -> INFO a94 [channel: mychannel] Setting up the channel consumer for this channel (start offset: -2)...
2017-09-05 20:23:55.055 UTC [orderer/kafka] try -> DEBU a95 [channel: mychannel] Retrying every 1s for a total of 30s
2017-09-05 20:23:55.142 UTC [orderer/common/deliver] Handle -> WARN a96 [channel: mychannel] Rejecting deliver request because of consenter error
2017-09-05 20:23:55.142 UTC [orderer/main] func1 -> DEBU a97 Closing Deliver stream
```
the three steps described in the docs (producer setup, connect message, and consumer setup) are completed. But channel create is failing.
After a certain number of tries, the logs show that it is connected to one of the brokers and keeps sending connect messages.
```
2017-09-05 20:23:55.947 UTC [orderer/main] func1 -> DEBU aab Closing Deliver stream
2017-09-05 20:23:56.056 UTC [orderer/kafka] try -> DEBU aac [channel: mychannel] Connecting to the Kafka cluster
[sarama] 2017/09/05 20:23:56.056170 config.go:329: ClientID is the default of 'sarama', you should consider setting it to something application-specific.
[sarama] 2017/09/05 20:23:56.198812 broker.go:144: Connected to broker at dace2ed499f5:9092 (registered as #0)
[sarama] 2017/09/05 20:23:56.489769 consumer.go:648: consumer/broker/0 added subscription to mychannel/0
2017-09-05 20:23:56.489 UTC [orderer/kafka] try -> DEBU aad [channel: mychannel] Error is nil, breaking the retry loop
2017-09-05 20:23:56.489 UTC [orderer/kafka] startThread -> INFO aae [channel: mychannel] Channel consumer set up successfully
2017-09-05 20:23:56.489 UTC [orderer/kafka] startThread -> INFO aaf [channel: mychannel] Start phase completed successfully
2017-09-05 20:23:56.629 UTC [orderer/kafka] processMessagesToBlocks -> DEBU ab0 [channel: mychannel] Successfully unmarshalled consumed message, offset is 0. Inspecting type...
2017-09-05 20:23:56.629 UTC [orderer/kafka] processConnect -> DEBU ab1 [channel: mychannel] It's a connect message - ignoring
```
also, I tried running the setup again with a different channel name, but when kafka containers are spun up, they always create partitions with name `mychannel`
@htyagi90 Are you tearing down the Kafka containers and backing storage before trying again? You can try tuning up the timeout as well, by passing `--timeout 30` or similar
I'm tearing down the kafka containers, not sure about the storage part. Is there some parameter in the kafka server properties that does that? Also, I tried increasing the channel create timeout to 20 but the error persisted.
One more observation: if the network (orderers and kafka containers) is left to sit for a while, I can see in the logs that they are still connected. I tried running the channel create command again in the cli and got the error "channel already created, version incorrect", but the channel block wasn't created.
@jyellick
The storage would be related to how you have configured your docker containers. If you do not have shared volumes, and you destroy the container, this should re-initialize the storage.
How long after starting the Kafka cluster before you begin trying to create the channel?
I do that kind of immediately.
i'll test with applying a wait between kafka containers and creating channel
After around 5 minutes, I can see in the orderer logs that it is fetching metadata from the brokers.
Yes, basically, when Kafka first starts up, it takes a while for leader election etc.
It's best to simply wait and let the Kafka cluster finish standing up before proceeding
alright. i'll give it a shot.
Thanks for the pointers @jyellick
@htyagi90 You might want to take a look at this: https://github.com/hyperledger/fabric/blob/master/examples/e2e_cli/scripts/script.sh#L57-L84
This is how our end to end verifies that ordering has started and successfully made contact with Kafka prior to kicking off the rest of the tests
I just tried spinning up my docker containers but got the following error
`Error reading field 'responses': Error reading array of size 750693, only 31 bytes available (kafka.server.ReplicaFetcherThread)`
I suppose this has something to do with the container storage..?
Would be my best guess, but nothing I have ever seen personally
Has joined the channel.
Hello, to calculate the hash of a block, a Merkle tree is used over the hashes of all transactions in the block. However, in createNextBlock I cannot find the Merkle code. Did I find the right code, or is the hash not calculated in the Merkle way?
@qsmen The channel configuration defines the width of the Merkle tree
For the time being, this width is fixed to `MAX_UINT32` or similar
This effectively degrades the Merkle tree into a flat hash
This is why you do not see any Merkle tree code
ok, I see. When it is set to 2, that would be the Merkle way.
Well, that would be a binary Merkle tree, yes.
Thank you
@jyellick the master branch e2e doesn't work
wait a moment, i'll try it again and show the error
Is this locally, or via CI?
locally
my mistake
it's right
@jyellick i use the broadcast_config tool to build a channel; the orderer genesis channel profile is SampleSingleMSPSolo, and then i start the orderer
as the profile SampleSingleMSPSolo describes
```SampleSingleMSPSolo:
    Orderer:
        <<: *OrdererDefaults
        Organizations:
            - *SampleOrg
    Consortiums:
        SampleConsortium:
            Organizations:
                - *SampleOrg```
and then i set the ```genConf = genesisconfig.Load(genesisconfig.SampleSingleMSPChannelProfile)```
to start the broadcast_config
```SampleSingleMSPChannel:
    Consortium: SampleConsortium
    Application:
        Organizations:
            - *SampleOrg```
i modified the profile as below
```SampleSingleMSPChannel:
    Consortium: SampleConsortium
    Orderer:
        <<: *OrdererDefaults
        Organizations:
            - *SampleOrg
    Application:
        Organizations:
            - *SampleOrg```
but i get the error from the method ```func (dt *DefaultTemplator) NewChannelConfig(envConfigUpdate *cb.Envelope) (configtxapi.Manager, error) {```
and then i debug the method
``` // If the consortium group has no members, allow the source request to have no members. However,
// if the consortium group has any members, there must be at least one member in the source request
if len(systemChannelGroup.Groups[channelconfig.ConsortiumsGroupKey].Groups[consortium.Name].Groups) > 0 &&
len(configUpdate.WriteSet.Groups[channelconfig.ApplicationGroupKey].Groups) == 0 {
return nil, fmt.Errorf("Proposed configuration has no application group members, but consortium contains members")
}```
the error is 'Proposed configuration has no application group members, but consortium contains members'
i read the comment and debug
it goes into the error
but i set the profile to
```SampleSingleMSPChannel:
    Consortium: SampleConsortium
    Orderer:
        <<: *OrdererDefaults
        Organizations:
            - *SampleOrg
    Application:
        Organizations:
            - *SampleOrg```
the application contains the org
so i think the method ```env, err := channelconfig.MakeChainCreationTransaction(newChannelId, genesisconfig.SampleConsortiumName, signer)``` has a bug
the method is defined as ```func MakeChainCreationTransaction(channelID string, consortium string, signer msp.SigningIdentity, orgs ...string) (*cb.Envelope, error) {```
and the orgs parameter is nil
so in the method ```func (cct *channelCreationTemplate) Envelope(channelID string) (*cb.ConfigUpdateEnvelope, error) {```
```for _, org := range cct.orgs {
	rSet.Groups[ApplicationGroupKey].Groups[org] = cb.NewConfigGroup()
	wSet.Groups[ApplicationGroupKey].Groups[org] = cb.NewConfigGroup()
}```
the cct.orgs is nil
```for key, value := range systemChannelGroup.Values {
	channelGroup.Values[key] = value
	if key == channelconfig.ConsortiumKey {
		// Do not set the consortium name, we do this later
		continue
	}
}```
i think the code should be modified
```for key, value := range systemChannelGroup.Values {
	channelGroup.Values[key] = value
}```
```for key, value := range systemChannelGroup.Values {
	if key == channelconfig.ConsortiumKey {
		// Do not set the consortium name, we do this later
		continue
	}
	channelGroup.Values[key] = value
}```
```flag.StringVar(&cmd.args.chainID, "chainID", "NewChannelId", "In case of a newChain command, the chain ID to create.")``` the default chain name is not legal
so i think it's better to change the default 'chainID' to a legal new channel id
@asaningmaxchain I think you are right about the _missing OrgNames_ that causes this tool to fail. However it needs more modifications to become a more general tool. If you just want to create a new channel with _SampleSingleMSPChannel_ profile, you could just modify the `newchain.go` to pass in org names. I don't see why you want to touch `systemchannel.go`
[ ](https://chat.hyperledger.org/channel/fabric-consensus?msg=GsHEcygBN9fupnmQ8) @asaningmaxchain you are right here, you are welcome to submit a PR for this. otherwise I could do it tomorrow
[ ](https://chat.hyperledger.org/channel/fabric-consensus?msg=MfJy56ZaRTDANjcAg) @guoger later i will submit a PR
[ ](https://chat.hyperledger.org/channel/fabric-consensus?msg=FqzLQYiyS8DH3WQWe) @guoger i add the sampleOrg parameter
```env, err := channelconfig.MakeChainCreationTransaction(newChannelId, genesisconfig.SampleConsortiumName, signer,genesisconfig.SampleOrg)```
i pass the SampleOrg parameter
and then it's right
so i think i should modify the code
So, I got an orderer up using my msp (SampleSingleMSPSolo). I decided to try to create a channel (SampleSingleMSPChannel). Created the block from configtxgen, but then the doc says "This will output a marshaled Envelope message which may be sent to broadcast to create a channel.". Went looking at broadcast_config, tried it as is and got your error above: "Proposed configuration has no application group members, but consortium contains members". Looks like I need to wait for your fix to work as is?
So then I said, well, I have not run broadcast_timeout to see if it still works. No, the orderer says
2017-09-06 12:00:30.735 UTC [policies] GetPolicy -> DEBU 0f0 Returning policy Writers for evaluation
2017-09-06 12:00:30.735 UTC [cauthdsl] func1 -> DEBU 0f1 0xc42002a058 gate 1504699230735339677 evaluation starts
2017-09-06 12:00:30.735 UTC [cauthdsl] func2 -> DEBU 0f2 0xc42002a058 signed by 0 principal evaluation starts (used [false])
2017-09-06 12:00:30.735 UTC [cauthdsl] func2 -> DEBU 0f3 0xc42002a058 processing identity 0 with bytes of
2017-09-06 12:00:30.735 UTC [cauthdsl] func2 -> ERRO 0f4 Principal deserialization failure (MSP is unknown) for identity
I realized I used a custom LocalMSPDir for the orderer (orderer/msp). Copied my msp to the default location hoping broadcast-timeout needed it there and rebuilt, but still get the same error. I am guessing that since it looks like null bytes for identity 0, it's not actually reading anything from an MSP?
[ ](https://chat.hyperledger.org/channel/fabric-consensus?msg=nZzQgSPCkQNHkta99) @jworthington if you start orderer binary, it uses _SampleInsecureProfile_ by default, defined in `sampleconfig/orderer.yaml`. And you could use `broadcast_config` as is, no modification needed.
[ ](https://chat.hyperledger.org/channel/fabric-consensus?msg=74oDdzibByJEHB2od) @jworthington I'm not aware of the tool `broadcast_timeout`, where do you see it?
Yes, thx. I did that. And then I ran broadcast_timeout again to see if it still works. No, the orderer shows the same "Principal deserialization failure (MSP is unknown)" errors as above.
sorry broadcast_timestamp
man, copy and paste
I did SampleInsecureProfile as is, then did SampleSingleMSPSolo as is. All fine. Now trying to use my custom msp.
@jworthington ah, in that case, I don't think `broadcast_timestamp` in *v1.0.x* actually loads any MSP. You may want to try `broadcast_msg` on *master* branch if you want to use profiles other than _SampleInsecure_
great. thx. meetings the rest of the day, so you are off the hook. But I'll be back tomorrow. ;)
always welcome
Has joined the channel.
In a Fabric network with multiple orderers, these orderers use the consensus protocol to reach agreement on the order of the transactions in each block. Does each orderer build its own block first and then participate in the consensus protocol to adjust the order of the transactions in that block?
@qizhang: No. In the Kafka option, the orderers route the incoming transactions to a Kafka partition, and then read that (ordered) partition to construct the blocks that are added to their local copy of the ledger.
In the BFT option, the orderers are to route the transactions to the orderer that is acting as the leader for that round/view, and it is the leader node that decides which transactions go into the block and in which order.
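A toy sketch of the Kafka path described above (stdlib only; the fixed batch size is an assumption, since the real orderer also cuts blocks on byte limits and a timer): transactions come back from the partition already totally ordered, and each orderer just slices them into blocks locally:

```go
package main

import "fmt"

// cutBlocks groups an already-ordered stream of transactions into
// blocks of at most batchSize messages, roughly how each ordering
// service node cuts blocks from the ordered Kafka partition it reads.
func cutBlocks(orderedTxs []string, batchSize int) [][]string {
	var blocks [][]string
	for len(orderedTxs) > 0 {
		n := batchSize
		if len(orderedTxs) < n {
			n = len(orderedTxs)
		}
		blocks = append(blocks, orderedTxs[:n])
		orderedTxs = orderedTxs[n:]
	}
	return blocks
}

func main() {
	blocks := cutBlocks([]string{"tx1", "tx2", "tx3", "tx4", "tx5"}, 2)
	fmt.Println(len(blocks)) // 3
}
```

Because every orderer consumes the same ordered partition, they all cut identical blocks and append them to their local copies of the ledger.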
[ ](https://chat.hyperledger.org/channel/fabric-consensus?msg=aXhm6cXvzbmr5ELWf) @guoger i think the broadcast_timestamp can support any MSP
@jeffgarratt did you publish the git repo?
https://github.com/jeffgarratt/fabric-explorer
Has joined the channel.
Has joined the channel.
Has joined the channel.
@jyellick if /var/hyperledger/production/chains contains the folder and i start the orderer, will it use the last config block to configure the channel?
@jyellick i met a problem, the following are my steps
i use kafka for consensus and map the data to the host for each component (peer, orderer, couchdb), then i start the fabric network, build a channel, and make transactions; it's ok
and then i restart the fabric network
the fabric version i use is v1.0.0
Message Attachments
@asaningmaxchain: Can you set the orderer logging level to DEBUG and post the output here? Use a service like Pastebin.
@asaningmaxchain
> if the /var/hyperledger/production/chains contains the folder,and i start the orderer,it will the last config block to config the channel
When the orderer starts, it looks at the ledger which is stored there, finds the latest config block for each chain, and uses this to start them.
> i met a problem, the following are my steps
It's not obvious what your problem is from the description. It appears that your orderer is having trouble connecting to Kafka, but it would require more investigation.
@binhn @kostas bft-smart based ordering service prototype should be available in September. We will also publish an accompanying technical report. Will post here as soon as this happens
Responding to maintainers channel msgs as i have read only there
Hi everyone! Where can we find the instructions for Hyperledger SBFT?
@luckydogchina: I'm afraid there are no instructions to be found, as SBFT is not an option yet.
[ ](https://chat.hyperledger.org/channel/fabric-consensus?msg=H7J66rzfsANdHG3gj) @kostas Ok, when will sbft consensus be usable?
I'm hesitant to make a prediction. Development of it is set to resume in a couple of months at the latest, and it'll take a few months till it's good to go, so I'd say check back in 6 months from now?
[ ](https://chat.hyperledger.org/channel/fabric-consensus?msg=BQQ5hhbn2CyduMXy2) @kostas the log is too long, do you need the peer/orderer/couchdb logs?
@asaningmaxchain: I understand that but it would still be useful. I just need the orderer's log.
wait a moment
the consensus i use is kafka
with four orderers
so i just pick one to post to you
ok
?
Message Attachments
as you see, i mount the data to the host
and then i use the java sdk to build a channel and make transactions, it's ok
@asaningmaxchain: A couple of things.
and now i restart the fabric network
1. In general, using Kafka as a consensus option doesn't mean you have to use more than one orderer. One ordering service node is good enough.
2. I just need the logs of the ordering service node that you are targeting when you send the create channel transaction.
Message Attachments
@asaningmaxchain: I will need you to pipe this output to a text file and attach it here. Or better yet, copy and paste it to a service like Pastebin, and send me the resulting link. I will not inspect a screenshot.
ok
please wait a moment
@kostas will the peer join the channel named mychannel?
when i use the cmd 'docker exec -it peer0.org1.example.com bash' to go into the peer
and run the cmd 'peer channel list'
it tells me the peer has joined mychannel
the chaincode is installed too
how can i prove it?
@jyellick i think the peer should add the cmd `peer chaincode list`
@asaningmaxchain: If I get you right then, you suggest that there is no longer an issue?
@kostas i am sorry,i don't understand
Ah, let me rephrase. Is there still a problem, or are you good now?
there is still a problem
later, i will post the detailed steps
please wait a moment
the process is too long
Has joined the channel.
Hi guys, I'm very new to fabric so please bear with me. We have 3 users (A, B, C), each connected to its own peer node. When user A invokes a chaincode, how can user B and C be notified about the event?
@jgabuya This depends on how you are accessing fabric. The SDKs each provide an event interface for detecting when transactions commit.
@jyellick Thanks for your response. We are using the Node.js SDK. When you say "when transactions commit", is it during the endorsement or after the orderer writes to the blockchain?
@jgabuya Commit always happens after ordering. The orderer determines the order of the transactions, then the peers deterministically validate them based on that order.
You will see in the NodeSDK examples that after submitting a transaction, that a callback occurs during commit. Though #fabric-sdk-node is the better place for that discussion
@jyellick I see. So using the SDK, is it possible to capture the actual payload from the event? Here's the scenario: We have an app where we'd like to display a real-time history of user transactions. We believe that extracting that data directly from the blockchain itself might have some latency issues. So one idea we had is to use events, and store the payload values into something like MongoDB for faster retrieval, while keeping the blockchain as the source of truth.
My two cents, would be that it's probably better to go the simplest route first, using the SDK to pull the information you require, and then if you discover latency or other problems, identify the source and optimize from there. If you'd rather pursue the approach directly, there is a transaction ID which makes for a nice key for database storage.
So for example, user A and B both have 100 coins initially. user A fires a transaction proposal to send 10 coins to user B. My questions is, can we extract the payload value (A, B, 10) from the event? Or are we only going to see the resulting values (in this case, A = 90, B = 110)?
@jyellick, thanks for that. I guess the answer would be to try it with the SDK myself
@jgabuya Happy to help, you may find better answers asking in the SDK channel
Great, thank you! @jyellick
@jgabuya I actually have a vague recollection, that when you write a chaincode, the chaincode gets to choose the content to encode in the payload of the event. So, you might be able to leverage this if you are developing both the client and the chaincode.
@jyellick @kostas @sanchezl I think patches for FAB-5284 and FAB-5720 are reviewable now, I updated JIRA status. Also I created FAB-6081 to capture the remaining work of enabling `Resubmission` switch, which should be aligned with the unified effort of addressing compatibility in Fabric (FAB-556)
And sorry for interrupting your discussion.. just don't want to PM every reviewer..
You are a gentleman and a scholar, thanks! (I keep saying I'll review them, and I defer this, but it'll happen very very soon now. Getting there with my queue.)
@guoger Actually, if you have some time now, I can show you a sample capabilities flag one that I've pushed out there
@jyellick I see. So that means "choosing the content to encode" can be done on the chaincode level? e.g. can be written in Go?
@jgabuya This is really not my area of expertise, but, in the chaincode shim, there is:
```
// SetEvent allows the chaincode to propose an event on the transaction
// proposal. If the transaction is validated and successfully committed,
// the event will be delivered to the current event listeners.
SetEvent(name string, payload []byte) error
```
I believe that this `payload` is delivered as part of the event when the transaction commits.
So, if your chaincode calls `SetEvent` (along with its `PutState` etc.) then you may choose to encode whatever data your application would like to have without going back to look it up
@jyellick Thanks! Will take a look at this
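To make the `SetEvent` discussion above concrete, here is a minimal sketch of encoding the proposal values (A, B, 10) into an event payload using only Go's standard library. The `TransferEvent` structure and helper names are our own invention for illustration, not part of the chaincode shim:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// TransferEvent is a hypothetical payload shape; the field names are
// our own choice, not a Fabric-defined structure.
type TransferEvent struct {
	From   string `json:"from"`
	To     string `json:"to"`
	Amount int    `json:"amount"`
}

// encodeTransferEvent builds the []byte payload that chaincode could
// pass to stub.SetEvent("transfer", payload).
func encodeTransferEvent(from, to string, amount int) ([]byte, error) {
	return json.Marshal(TransferEvent{From: from, To: to, Amount: amount})
}

// decodeTransferEvent is what an event listener would run on the
// payload it receives after the transaction commits.
func decodeTransferEvent(payload []byte) (TransferEvent, error) {
	var ev TransferEvent
	err := json.Unmarshal(payload, &ev)
	return ev, err
}

func main() {
	payload, _ := encodeTransferEvent("A", "B", 10)
	ev, _ := decodeTransferEvent(payload)
	fmt.Printf("%+v\n", ev) // {From:A To:B Amount:10}
}
```

With this approach the listener sees the transfer details (A, B, 10) directly, rather than only the resulting balances.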
@kostas the real problem is sdk
i am sorry to bother you
@asaningmaxchain: No worries at all, let me know if you bump into more orderer-related issues.
[ ](https://chat.hyperledger.org/channel/fabric-consensus?msg=rsHN8g76txpbyiQve) Thanks @kostas . So Kafka and BFT are two different options to organize the orderers, right? Where in Fabric can I choose which option to use? Thanks!
@qizhang:
> So Kafka and BFT are two different options to organize the orderers, right?
Right.
> Where in Fabric can I choose which option to use? Thanks!
https://chat.hyperledger.org/channel/fabric-consensus?msg=eHwXgMtrZugrfMatn
Has joined the channel.
Message Attachments
@jyellick as you can see, I use the master branch to start the e2e test
and it reports an error
@kostas I have a question about restarting Fabric,
I use the master branch
I use examples/e2e_cli/ to start the fabric network
it works
and then I map the data directories to the host
```root@chenxuan-ThinkPad-X240:/var/hyperledger# ll
total 36
drwxr-xr-x 9 root root 4096 9月 11 15:52 ./
drwxr-xr-x 15 root root 4096 9月 11 11:11 ../
drwxrwxrwx 4 root root 4096 9月 11 15:52 couchdb0/
drwxrwxrwx 4 root root 4096 9月 11 15:52 couchdb1/
drwxrwxrwx 4 root root 4096 9月 11 15:52 couchdb2/
drwxrwxrwx 4 root root 4096 9月 11 15:50 couchdb3/
drwxr-xr-x 3 root root 4096 9月 11 15:52 orderer/
drwxr-xr-x 4 root root 4096 9月 11 15:52 peer0/
drwxr-xr-x 4 root root 4096 9月 11 15:52 peer1/
root@chenxuan-ThinkPad-X240:/var/hyperledger#
```
and then I create a channel and submit some transactions
and I restart the fabric network
I use `docker logs orderer.example.com` to see the log
Message Attachments
@jyellick
and I posted the log to https://pastebin.com/w8uPXPZ6 @jyellick
``` erroredChan := chain.Errored()
select {
case <-erroredChan:
logger.Warningf("[channel: %s] Rejecting deliver request for %s because of consenter error", chdr.ChannelId, addr)
return sendStatusReply(srv, cb.Status_SERVICE_UNAVAILABLE)
default:
}```
it seems this error occurs
@asaningmaxchain could you post your orderer log to https://pastebin.com/ or https://gist.github.com/ and paste the link here?
ok
@jeffgarratt
[ ](https://chat.hyperledger.org/channel/fabric-consensus?msg=b4KHYLM2JcaervWwN) @asaningmaxchain shoot
yes
https://pastebin.com/w8uPXPZ6
@kostas
```
[sarama] 2017/09/11 08:09:09.574345 broker.go:96: Failed to connect to broker kafka3:9092: dial tcp 172.22.0.17:9092: getsockopt: connection refused
```
@asaningmaxchain It looks like you have connectivity problems to your kafka cluster, is it up and running at that address?
yes
should I post the kafka log?
First please inspect it yourself, and ensure that you can run the kafka sample consumer/producer against your kafka cluster at that address
```CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
f33e87a5aa56 hyperledger/fabric-tools:x86_64-1.0.1 "/bin/bash -c 'sle..." 2 minutes ago Up 2 minutes cli
8fcbd068121a hyperledger/fabric-orderer:x86_64-1.0.1 "orderer" 2 minutes ago Up 2 minutes 0.0.0.0:7050->7050/tcp orderer.example.com
119f4c3d56fe hyperledger/fabric-peer:x86_64-1.0.1 "peer node start" 2 minutes ago Up 2 minutes 0.0.0.0:10051->7051/tcp, 0.0.0.0:10052->7052/tcp, 0.0.0.0:10053->7053/tcp peer1.org2.example.com
d8b1d0198b45 hyperledger/fabric-kafka:x86_64-1.0.1 "/docker-entrypoin..." 2 minutes ago Up 2 minutes 9093/tcp, 0.0.0.0:32780->9092/tcp kafka3
c4f19296e217 hyperledger/fabric-peer:x86_64-1.0.1 "peer node start" 2 minutes ago Up 2 minutes 0.0.0.0:7051-7053->7051-7053/tcp peer0.org1.example.com
e48b19c001c1 hyperledger/fabric-peer:x86_64-1.0.1 "peer node start" 2 minutes ago Up 2 minutes 0.0.0.0:9051->7051/tcp, 0.0.0.0:9052->7052/tcp, 0.0.0.0:9053->7053/tcp peer0.org2.example.com
8c6a99307ad7 hyperledger/fabric-kafka:x86_64-1.0.1 "/docker-entrypoin..." 2 minutes ago Up 2 minutes 9093/tcp, 0.0.0.0:32779->9092/tcp kafka2
f7151c922bb8 hyperledger/fabric-kafka:x86_64-1.0.1 "/docker-entrypoin..." 2 minutes ago Up 2 minutes 9093/tcp, 0.0.0.0:32778->9092/tcp kafka1
a04c6875eda2 hyperledger/fabric-kafka:x86_64-1.0.1 "/docker-entrypoin..." 2 minutes ago Up 2 minutes 9093/tcp, 0.0.0.0:32777->9092/tcp kafka0
34ee840aeea4 hyperledger/fabric-peer:x86_64-1.0.1 "peer node start" 2 minutes ago Up 2 minutes 0.0.0.0:8051->7051/tcp, 0.0.0.0:8052->7052/tcp, 0.0.0.0:8053->7053/tcp peer1.org1.example.com
bbc57ac0a9f3 hyperledger/fabric-couchdb:x86_64-1.0.1 "tini -- /docker-e..." 2 minutes ago Up 2 minutes 4369/tcp, 9100/tcp, 0.0.0.0:8984->5984/tcp couchdb3
4366f0b331fe hyperledger/fabric-couchdb:x86_64-1.0.1 "tini -- /docker-e..." 2 minutes ago Up 2 minutes 4369/tcp, 9100/tcp, 0.0.0.0:7984->5984/tcp couchdb2
c04f0c3dd023 hyperledger/fabric-couchdb:x86_64-1.0.1 "tini -- /docker-e..." 2 minutes ago Up 2 minutes 4369/tcp, 9100/tcp, 0.0.0.0:6984->5984/tcp couchdb1
8e83f13165c7 hyperledger/fabric-zookeeper:x86_64-1.0.1 "/docker-entrypoin..." 2 minutes ago Up 2 minutes 0.0.0.0:32776->2181/tcp, 0.0.0.0:32775->2888/tcp, 0.0.0.0:32774->3888/tcp zookeeper1
fd99265d4eeb hyperledger/fabric-zookeeper:x86_64-1.0.1 "/docker-entrypoin..." 2 minutes ago Up 2 minutes 0.0.0.0:32773->2181/tcp, 0.0.0.0:32772->2888/tcp, 0.0.0.0:32771->3888/tcp zookeeper0
b339069b8f2c hyperledger/fabric-couchdb:x86_64-1.0.1 "tini -- /docker-e..." 2 minutes ago Up 2 minutes 4369/tcp, 9100/tcp, 0.0.0.0:5984->5984/tcp couchdb0
2cf4c9b0fe96 hyperledger/fabric-zookeeper:x86_64-1.0.1 "/docker-entrypoin..." 2 minutes ago Up 2 minutes 0.0.0.0:32770->2181/tcp, 0.0.0.0:32769->2888/tcp, 0.0.0.0:32768->3888/tcp zookeeper2```
```"NetworkSettings": {
"Bridge": "",
"SandboxID": "bad1dd46f3daf6837d6d34bb4b704f638ffb777051b006919f4e60d68ad2f531",
"HairpinMode": false,
"LinkLocalIPv6Address": "",
"LinkLocalIPv6PrefixLen": 0,
"Ports": {
"9092/tcp": [
{
"HostIp": "0.0.0.0",
"HostPort": "32779"
}
],
"9093/tcp": null
},
"SandboxKey": "/var/run/docker/netns/bad1dd46f3da",
"SecondaryIPAddresses": null,
"SecondaryIPv6Addresses": null,
"EndpointID": "",
"Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"IPAddress": "",
"IPPrefixLen": 0,
"IPv6Gateway": "",
"MacAddress": "",
"Networks": {
"e2ecli_default": {
"IPAMConfig": null,
"Links": null,
"Aliases": [
"kafka2",
"8c6a99307ad7"
],
"NetworkID": "16539cc31ec15bddbf008978fdebd4809fbb3170b9cfc5f870ab7a5155acf49e",
"EndpointID": "30491e9a919f3e28d7a300a7b59ab9105b7be1940e464a98875e39d2fdb5ae94",
"Gateway": "172.22.0.1",
"IPAddress": "172.22.0.12",
"IPPrefixLen": 16,
"IPv6Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"MacAddress": "02:42:ac:16:00:0c",
"DriverOpts": null
}
}
}
```
@asaningmaxchain Please do not post such large blocks of text. Can you instead confirm that the Kafka cluster is operational using the Kafka sample clients?
i am doing it
Has joined the channel.
```package main
import (
"github.com/Shopify/sarama"
"time"
"strings"
"strconv"
"fmt"
"os"
"math/rand"
)
func main() {
config := sarama.NewConfig()
	config.Producer.Return.Successes = true // this option is required
config.Producer.Timeout = 5 * time.Second
p, err := sarama.NewAsyncProducer(strings.Split("localhost:32777,localhost:32778,localhost:32779,localhost:32780", ","), config)
defer p.Close()
if err != nil {
return
}
	// this anonymous function is required (drains the Successes/Errors channels)
go func(p sarama.AsyncProducer) {
errors := p.Errors()
success := p.Successes()
for {
select {
case err := <-errors:
if err != nil {
fmt.Println(err)
}
case <-success:
}
}
}(p)
v := "async: " + strconv.Itoa(rand.New(rand.NewSource(time.Now().UnixNano())).Intn(10000))
fmt.Fprintln(os.Stdout, v)
msg := &sarama.ProducerMessage{
Topic: "test12",
Value: sarama.ByteEncoder(v),
}
p.Input() <- msg
}
```
@asaningmaxchain We just requested that you not post large blocks of text
Why are we still pasting these large— what @jyellick said.
i am sorry
@asaningmaxchain: Login to one of your Kafka containers, and proceed similar to what's written here: https://kafka.apache.org/quickstart#quickstart_createtopic
I had a look at your logs and Jason's right, you do seem to have a connectivity issue with your Kafka cluster.
I can go into the kafka container
but I can't find the shell scripts
`docker exec -ti kafka_container_name bash`
I found them, in the /opt/kafka folder
Right.
`bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic test`
I should modify the zookeeper IP, but I don't know how to modify it
Use Pastebin and give me the Docker Compose file that you're using.
ok
which one do you need?
i use the fabric master branch
https://pastebin.com/iuMXukEq
Your zookeeper IP:port is `zookeeper0:2181`.
I will note that we're diving into pure Kafka territory here, so I'll stop here. If you bump into a consensus-specific problem, feel free to post again.
ok
it seems good
i use the cmd `./kafka-topics.sh --list --zookeeper zookeeper0:2181`
to get the topics
```mychannel
test
test12
testchainid```
Has joined the channel.
Has joined the channel.
@jyellick I use the examples/configtxupdate/reconfig_membership tool to check whether the tool works
```2017-09-12 17:21:13.587 CST [orderer/common/broadcast] Handle -> WARN 1d4 [channel: example] Rejecting broadcast of config message from 127.0.0.1:36384 because of error: Error authorizing update: Error validating DeltaSet: invalid mod_policy for element [Policy] /Channel/Application/Readers: mod_policy not set```
i extract the channel_create_tx.pb
https://pastebin.com/n4YPE5xD
the application element doesn't define the mod_policy
Has joined the channel.
@asaningmaxchain The `channel_create_tx.pb` was most likely generated with an old version of `configtxgen`. If you regenerate it with the newer version, you should not see this error.
i try it
@jyellick Where can I find some documentation or explanation about the different policies that we can specify for the `genesis.block` created for the channel? What different rules can we use instead of the default `MAJORITY` and `ANY` for readers and writers?
you can take a look at common/tools/configtxgen in the fabric source code
@htyagi90 As @asaningmaxchain indicates, you may see the source code for `configtxgen` at `common/tools/configtxgen`. To prevent `configtx.yaml` from being extremely complicated, we do not allow custom policies to be specified there. Instead, if you wish to specify custom policies you should do so by editing the output of `configtxgen` using `configtxlator` https://github.com/hyperledger/fabric/tree/master/examples/configtxupdate or you may modify the `configtxgen` tool yourself to express different policies depending on how you are more comfortable working.
@jyellick I used the cmd `make configtxgen` and repeated my steps
it still has the error
I use the master branch
Are you certain that you are using the correct file and correct `configtxgen` binary? I have verified locally that this `mod_policy` is set.
I looked at the shell script; it finds `configtxgen` in build/bin
so i use the cmd `make configtxgen`
let me check it
What is the output of `configtxgen -version`?
`1.1.0-snapshot-833b24a`
@asaningmaxchain Here is the output from my `configtxgen` https://pastebin.com/pM5JgpCk which is valid
ok
i check it
it's ok
you should add a tip
remove the folder (/var/hyperledger/) when the orderer restarts
@jyellick I'd advise that the configtxgen tool provide a flexible way for developers to define policies
Yes, if we can come up with an elegant way to support defining custom policies, we will do this
I am looking forward to it
I was looking at the peer source code; what's gossip?
@kostas @jyellick
@asaningmaxchain Gossip is a way the peers can send blocks and other state information to each other, bypassing the orderer.
that means the peers can sync data with each other?
Yes
what data do the peers sync?
For now they only sync blocks. Later, they will sync "private data" which a block refers to by hash, but is not in the block, so that the ordering service cannot see it.
ok, that means not all peers deliver blocks from the orderer?
Correct
All peers may call `Deliver`, or, only some may, and then the rest receive the blocks from each other.
so if the peer which delivers blocks from the orderer shuts down, what happens?
It depends on the configuration. If leader election is enabled (the default), the other peers will detect the failure and pick a new peer to call `Deliver`.
that means I can set the leader peer when setting up Fabric?
Yes
if I shut down the leader peer, will the other peers detect the failure and pick a new one? (just for testing purposes)
Yes
ok,i will test it
another question
if I delete the ledger data on the peer
can I still query the data from the peer?
If you delete the state database and restart the peer, the peer will rebuild it from the blockchain
I don't restart the peer
I can map the data to the host
and then I delete the ledger data
can't the data be synced back by gossip?
I will note that we are quickly getting outside consensus territory here. @asaningmaxchain, #fabric is a better venue for these questions. (Please also take care to format the messages appropriately, you always break them into 5-6 lines.)
@kostas my english is not good, so I just send a little at a time to express my meaning
Understood, and that is absolutely fine. (Same here actually.) This has nothing to do with your grasp of the language. Take the time to craft your message, and don't press Enter every 4 words is all I'm asking for.
ok, I'll be careful
@jyellick thx a lot
@jyellick If we are not going to create new channels, for Kafka, can we just manage with 3 VMs (3 ZooKeepers, 3 Kafka brokers, and 3 orderers)? If one of the nodes goes down, we can still process transactions but not create new channels. To create new channels, we just need to bring up the failed VM.
@gauthampamu If you disallow new channel creation, then it is not necessary to have `RF + F` brokers, only `ISR + F = RF`
(Where `RF` is the replication factor, `ISR` is the number of required in sync replicas, and `F` is the total number of broker failures you wish to tolerate)
Thanks
If ISR = 2 and F = 1, 3 brokers are sufficient, right?
Yes, assuming you set RF=3, and disallow channel creation
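The broker sizing rule discussed above can be sketched as a tiny Go helper (the `minBrokers` function is illustrative only, not part of Fabric):

```go
package main

import "fmt"

// minBrokers illustrates the sizing rule: with channel creation allowed
// you need RF + F brokers; with it disallowed, ISR + F (which should
// equal RF) suffice. RF is the replication factor, ISR the required
// in-sync replicas, F the broker failures to tolerate.
func minBrokers(rf, isr, f int, allowChannelCreation bool) int {
	if allowChannelCreation {
		return rf + f
	}
	return isr + f
}

func main() {
	// RF=3, ISR=2, tolerate F=1 broker failure:
	fmt.Println(minBrokers(3, 2, 1, true))  // 4 brokers needed
	fmt.Println(minBrokers(3, 2, 1, false)) // 3 brokers suffice
}
```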
@jyellick how can I know whether an MSP identity is a member or an admin?
The admin certificates are enumerated in the MSP definition. All certs issued by the CA count as members.
ok
can you take a look at systemchannel.go
```for key, value := range systemChannelGroup.Values {
channelGroup.Values[key] = value
if key == channelconfig.ConsortiumKey {
// Do not set the consortium name, we do this later
continue
}
}```
I think this segment should be modified:
the `if` check should come before the assignment
Yes, or the check could be removed entirely as the value is overwritten later.
yes
and another error is http://hyperledger-fabric.readthedocs.io/en/latest/policies.html?highlight=policy
in the listed example, the struct field names should be fixed
let me give an example
```SignaturePolicyEnvelope{
version: 0,
policy: SignaturePolicy{
n_out_of: NOutOf{
N: 2,
policies: [
SignaturePolicy{ signed_by: 0 },
SignaturePolicy{ signed_by: 1 },
],
},
},
identities: [mspP1, mspP2],
}```
as you see, the NOutOf should use the `rules` field
Yes, that is correct
Feel free to open an issue and or submit a CR
and I think a tip should be added: if n == len(rules) it expresses 'and', otherwise it expresses 'or'
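For illustration, the n-out-of semantics discussed above can be sketched as a small Go helper (`nOutOfSatisfied` is our own name, not a Fabric API):

```go
package main

import "fmt"

// nOutOfSatisfied mirrors the NOutOf rule semantics: the policy is
// satisfied when at least n of the nested sub-policy results hold.
func nOutOfSatisfied(n int, results []bool) bool {
	count := 0
	for _, ok := range results {
		if ok {
			count++
		}
	}
	return count >= n
}

func main() {
	// n == len(rules) behaves like AND:
	fmt.Println(nOutOfSatisfied(2, []bool{true, false})) // false
	// n == 1 behaves like OR:
	fmt.Println(nOutOfSatisfied(1, []bool{true, false})) // true
}
```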
[ ](https://chat.hyperledger.org/channel/fabric-consensus?msg=iEbtJs7CQ35an3ejL) @jyellick ok
@jyellick when the orderer starts, it parses the genesis block; when parsing it, it should check that the genesis block contains the consortium
@asaningmaxchain I do not understand you
Dear all, we are planning to implement another consensus module to be merged into the fabric 1.0 release version, as a pluggable module. Right now, we are using the official images downloaded from Docker Hub. So, if I plug in the new consensus module, replacing the Kafka module, in fabric 1.0 RC, must I regenerate the orderer images from source code? Is it possible to replace the Kafka consensus module directly without regenerating the orderer image? I ask because my corporate network, behind a proxy and firewall, makes generating the image a big trouble. Thanks for any helpful hints!
The image I mentioned above is the official Fabric's Docker images from Docker Hub.
@chifalcon Can you share any details about the consensus plugin? Presently, there is no way to dynamically link new code into the orderer or peer binaries, so you would likely have to regenerate the orderer image with your modified code.
Fabric v1.1 should be adding support for go 1.9, which does support runtime linking between go code. However, we will have to consider whether this is a path we would encourage for consensus plugins going forward.
There have also been some architectural changes to the common code for the orderer which are in master, bound for v1.1, I would encourage you to build your plugin off of this latest source code.
Has joined the channel.
[ ](https://chat.hyperledger.org/channel/fabric-consensus?msg=cfuuuDB4yCAQpXcL4) @jyellick Thanks for your info. The new consensus would be supported by trusted execution environment (TEE) as schemed. A dynamic consensus port for replacing different modules in real-time would be definitely helpful. So may I know whether there is any timeline that v1.1 be released?
@chifalcon Nothing is written in stone, but I have heard November 15th as a release date. The orderer code in master should be fairly stable at this point, so I think it would be safe to work from master.
Has joined the channel.
Has joined the channel.
Has joined the channel.
Hey peeps! I want to write an application where a subset of peers do a multisignature for each transaction which generates a coin. In which phase is this possible? Can the peers communicate directly? Would I need to implement this as a consensus plugin which checks the validity of the multi signature?
Has joined the channel.
[ ](https://chat.hyperledger.org/channel/fabric-consensus?msg=vk477vFoyhYoTeHfk) @jyellick when the orderer starts, it just parses the configValue but does not validate the value
@hpurmann
> I want to write an application where a subset of peers do a multisignature for each transaction which generates a coin. In which phase is this possible?
This is actually the standard path for a fabric application. Install a chaincode on multiple peers with an endorsement policy which requires some subset (up to and including all of them) to endorse. Their endorsements are gathered by the client application by asking each required peer directly. Once collected, they are bundled together, and sent as part of a single transaction.
> Would I need to implement this as a consensus plugin which checks the validity of the multi signature?
No, the standard validation path ensures the endorsement policy you set is satisfied.
> when the orderer starts,it just parse the configValue,but not judge the value
@asaningmaxchain The orderer most definitely validates the configuration at startup. If there is no 'consortiums' definition for the ordering system channel, the orderer will panic on startup.
[ ](https://chat.hyperledger.org/channel/fabric-consensus?msg=ySWdeEhfvNk5NpaSL) @jyellick it's my fault
@jyellick why not use a database to store the channel information?
There is no state database at the orderer.
I know
I mean storing the channel information, not the state database
@asaningmaxchain There is no dependency on a database for the orderer, so the blockchain is the only storage option. What is wrong with the current approach?
no
it's no problem
can you take a look the protos/peer/proposal.proto
I think the comment doesn't make the transaction flow easy to understand
@jyellick is there a better way to configure the policy when the orderer starts? I think the policy is like a tree, so using a tree to describe the policy may be a good way to display it
Has joined the channel.
Whatever I do when using a channel I get some variation of this error from `configtxgen -inspectBlock`. Creating the genesis block reports fine and the orderer starts fine. Creating the channel block says fine. Channel create on the peer says fine and shows fine in the orderer. Join channel gives: Error: proposal failed (err: rpc error: code = Unknown desc = chaincode error (status: 500, message: Failed to reconstruct the genesis block, proto: bad wiretype for field common.BlockHeader.Number: got wiretype 2, want 0))
configtxgen -inspectBlock CRIT 004 Error on inspectBlock: Error unmarshaling block: proto: bad wiretype for field common.BlockHeader.Number: got wiretype 2, want 0
@jworthington It sounds to me like you are probably trying to join with the config update transaction which created the channel, not with the block which was produced by the orderer.
If you are using the peer CLI, it will automatically create a file like `mychannel.block` as a result of the `peer channel create` command. This is the genesis block which should be passed as a parameter to `JoinChannel`
i get the error just using configtxgen on the orderer. configtxgen is fine for genesis block but not for channel block
configtxgen inspectBlock fine for genesis but not channel
@jworthington There are two types of artifacts which are generated by `configtxgen`. One is the genesis block which is used to bootstrap the orderer. The other is a channel creation transaction which is used to define a channel. Once the transaction is submitted, the orderer constructs a new genesis block based on this transaction.
I suspect if you instead run `configtxgen -inspectChannelCreateTx` you will get valid output
yes, that works on the orderer for inspecting a channel block
> for inspecting a channel block
Please keep in mind that if `inspectChannelCreateTx` works, then you are not in fact inspecting a block, but a channel creation transaction. They are different structures.
So I thought that on the peer I needed mychannel.block in the folder when I call peer channel join mychannel
Yes, `mychannel.block` should be a block, created by `peer channel create` (or more exactly, it should be a block, created by the orderer when the channel creation transaction was submitted, and fetched by `peer channel create` )
I'll try that. I have copied it from the orderer
How would you copy a block from the orderer?
well, maybe it was the definition then? scp mychannel.block from orderer to peer
I'm still unsure how `mychannel.block` would have been written at the orderer, unless you were invoking the `peer channel create` command from there.
(Typically, the CLI commands are run from inside of a dedicated `cli` container)
configtxgen/configtxgen -profile SampleSingleMSPChannel -channelID test3 -outputCreateChannelTx test3
@jworthington I don't understand. This would not create a `mychannel.block` file, only a `test3` file
I put test3 on the peer and used it to create the channel.
But you said:
> scp mychannel.block from orderer to peer
If you ran `peer create channel` on the peer, why would it create the `mychannel.block` on the orderer?
well, I was just using short names, trying hard to avoid genesis.block and mychannel.block. Lazy
peer channel create uses the output file from configtxgen on the orderer.
so then I call peer channel join on the same file.
This is what I said earlier: [ ](https://chat.hyperledger.org/channel/fabric-consensus?msg=o9CawxghcioJFDYfs)
You have created a channel creation transaction, this is not a block, and it cannot be used to join the peer to a channel.
The `peer create channel` command fetches the correct block, and in your case, would create a file called `test3.block`
so i need that file for peer channel create, but some other name for the peer channel join. specifically, the block
is the block in var/hyperledger/...?
Yes, but it is encoded into a database type log file, you should not try to retrieve it directly from the filesystem.
Instead, you can use `peer channel fetch 0 -c test3`
```
if [ -z "$CORE_PEER_TLS_ENABLED" -o "$CORE_PEER_TLS_ENABLED" = "false" ]; then
  peer channel fetch 0 -o orderer.example.com:7050 -c "testchainid" >&log.txt
else
  peer channel fetch 0 -o orderer.example.com:7050 -c "testchainid" --tls $CORE_PEER_TLS_ENABLED --cafile $ORDERER_CA >&log.txt
fi
```
^ for example (from `examples/e2e_cli/scripts/script.sh` )
let me play for a bit and stop bothering you for a bit. thx
We are trying to generate crypto-material as per our use-case requirement and facing a problem at the time of creating the orderer genesis block using configtxgen on GA-release
```
[rhegde@dusd1devrhap040 scripts]$ configtxgen -profile profile_orderer -outputBlock orderer.block
2017-09-19 20:14:03.366 CEST [common/configtx/tool] main -> INFO 001 Loading configuration
2017-09-19 20:14:03.391 CEST [configvalues/msp] TemplateGroupMSPWithAdminRolePrincipal -> CRIT 002 Setting up the MSP manager failed, err The supplied identity is not valid, Verify() returned x509: certificate is not authorized to sign other certificates
```
What does the error signify? We are able to create the channel tx file successfully.
@rahulhegde This comes from the golang X.509 implementation
```
if len(c.PermittedDNSDomains) > 0 {
ok := false
for _, domain := range c.PermittedDNSDomains {
if opts.DNSName == domain ||
(strings.HasSuffix(opts.DNSName, domain) &&
len(opts.DNSName) >= 1+len(domain) &&
opts.DNSName[len(opts.DNSName)-len(domain)-1] == '.') {
ok = true
break
}
}
if !ok {
return CertificateInvalidError{c, CANotAuthorizedForThisName}
}
}
```
```
if certType == intermediateCertificate && (!c.BasicConstraintsValid || !c.IsCA) {
return CertificateInvalidError{c, NotAuthorizedToSign}
}
```
So, it looks to me like you have defined an intermediate CA certificate, but, it does not have the `IsCA` field defined in the ASN1 of the PEM?
It doesn't have the isCA field defined.
Then I believe this is your error, if a cert is to act as an intermediate CA, I believe it must have `IsCA` set to true.
I am just thinking - with this reason - even channel configuration (tx) should have failed. This is the same iCA.
Ah, so, no. For channel creation, the creation transaction does not actually encode the MSP definitions.
Instead, it encodes the org names, and the orderer looks up the MSP definitions from the consortium definition.
You can see the difference if you inspect the channel creation transaction vs the genesis block.
(via `-inspectChannelCreateTx` and `-inspectBlock` respectively. If you use a more recent version of `configtxgen`, the output of these commands will be more verbose)
okay - separate q: does it mean - we should define all the organizations (that are part of the channel definition in configtx.yaml) in the orderer's consortium section?
If you wish for an organization to be able to create channels (and or be included in the initial membership of a channel) then yes, they must be defined as part of a consortium definition for the orderer.
we have opened a JIRA to have documentation around consortium - but i will not touch this topic now. I will try adding isCA=true to iCA.
we generate CSR using fabric-ca-client - do you know which option will help me define isCA=True.
Sadly my knowledge of the #fabric-ca is quite limited, you might try asking in that channel though
I'd second that, but this might help too: https://hyperledger-fabric-ca.readthedocs.io/en/latest/users-guide.html#enrolling-an-intermediate-ca
@rahulhegde That link from @muralisr looks like it has your answer
Thanks @jyellick and @muralisr - Let me check on documentation but this looks to be coupled with fabric-ca-server.
@rahulhegde the intermediate CA can be used to generate the msps for you
Has joined the channel.
Has joined the channel.
Has joined the channel.
Hi @jyellick, as the orderer builds and delivers the blocks and also stores them locally, will the orderer keep the whole ledger like a peer does, or will it be cleared periodically?
Hello, from analysis of the orderer code, it seems that orderers maintain a blockchain for each channel. If orderers maintain a complete blockchain for each channel, that would be a big burden on them. So should orderers only maintain a partial blockchain for each channel? Thank you.
@qsmen how would we achieve that? Do we have a configuration parameter for the threshold, and where is the relevant code?
@Glen @qsmen For the time being, all orderers retain all blocks forever. The ability to prune chains will be added in the future
@Glen $hyperledger/fabric/orderer/solo/consensus.go :
func (ch *chain)main{ ... ch.support.WriteBlock(block, committers[i], nil) ... }
the code you want
```go
func (cs *chainSupport) WriteBlock(block *cb.Block, committers []filter.Committer, encodedMetadataValue []byte) *cb.Block {
	for _, committer := range committers {
		committer.Commit()
	}
	// Set the orderer-related metadata field
	if encodedMetadataValue != nil {
		block.Metadata.Metadata[cb.BlockMetadataIndex_ORDERER] = utils.MarshalOrPanic(&cb.Metadata{Value: encodedMetadataValue})
	}
	cs.addBlockSignature(block)
	cs.addLastConfigSignature(block)
	err := cs.ledger.Append(block)
	if err != nil {
		logger.Panicf("[channel: %s] Could not append block: %s", cs.ChainID(), err)
	}
	logger.Debugf("[channel: %s] Wrote block %d", cs.ChainID(), block.GetHeader().Number)
	return block
}
```
thanks @jyellick and @luckydogchina , I got it.
[ ](https://chat.hyperledger.org/channel/fabric-consensus?msg=sDHyz5ZkGoY5YikXa) @rahulhegde
To mark this thread complete - after adding basic constrains - CA:TRUE to Intermediate Certificate and SKI to admin certificate, I was able to create the genesis block.
Hi everyone! When installing chaincode, do we need to send the install tx to the orderers to commit it to the ledger, or only send the SignedChaincodeDeploymentSpec to the endorsers?
@luckydogchina the install tx does not need to be ordered
then will the chaincode install process be recorded in the ledger?
@jyellick, if the install tx does not need to be ordered, then this process will not appear in any block?
@qsmen correct. The chaincode install is global for the peer process and not for any particular channel, so it does not make sense that it would be in a block. A chaincode must be instantiated for a given channel before it may be used, and this instantiation is recorded in the blockchain
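A CLI sketch of that distinction (the chaincode name, path, channel, and orderer address are illustrative):

```shell
# Install: copies the chaincode package to this peer only -- nothing is
# ordered and nothing is written to any channel's blockchain.
peer chaincode install -n mycc -v 1.0 -p github.com/chaincode/mycc

# Instantiate: goes through the ordering service and is recorded in the
# target channel's blockchain.
peer chaincode instantiate -o orderer.example.com:7050 -C mychannel \
    -n mycc -v 1.0 -c '{"Args":["init"]}'
```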
Thank you very much
Does the system channel have its own channel ledger? Where is it kept, and what does it record? @jyellick
@luckydogchina The ordering system channel is used to orchestrate channel creation. Typically, only orderers retain a copy of this channel's blockchain
[ ](https://chat.hyperledger.org/channel/fabric-consensus?msg=DnoHW2doyqWvY9oze) @jyellick Thanks!
@jyellick how does the peer validate that a tx is valid? i see that `common/ledger` uses leveldb to store the blocks, but not where the tx is validated. can you tell me how the peer validates a tx?
i know the orderer node just orders the envelopes which come from peers
https://github.com/hyperledger/fabric/blob/master/core/scc/vscc/validator_onevalidsignature.go#L64-L73
thx
@jyellick i think fabric should provide a web UI where a user can set the channel config, like the orderer, application, and consortium sections
I agree it would be a nice feature
if that's accepted, i want to join
hi. did anyone find something wrong with solo ordering? i changed `BatchTimeout: 2s` and found that chaincode invokes were executed every 2 seconds, and with `BatchTimeout: 10s` every 10 seconds accordingly.
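For reference, `BatchTimeout` sits next to `BatchSize` in the `Orderer` section of `configtx.yaml`; a block is cut when whichever limit is hit first (the values here are illustrative):

```yaml
Orderer:
  OrdererType: solo
  BatchTimeout: 2s           # cut a block after this long, even if not full
  BatchSize:
    MaxMessageCount: 10      # ...or as soon as this many messages arrive
    AbsoluteMaxBytes: 99 MB
    PreferredMaxBytes: 512 KB
```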
@jyellick i use the `configtxlator` tool to parse the channel.tx, and the console tells me `proto: bad wiretype for field common.BlockHeader.Number: got wiretype 2, want 0`
the message is short,so i post here
i use the master branch, the configtxlator version info is ```configtxlator:
Version: 1.1.0-snapshot-3145da5
Go version: go1.8.3
OS/Arch: linux/amd64```
i recommend removing the default when creating a channel, because when i tried to create a channel without setting the -f option, it told me `proposed configuration has no application group members, but consortium contains members`
however, when i parse the channel.tx i find that it contains the group, so i think the `peer channel create` command should make the -f option mandatory
Has joined the channel.
how do i run the configtxgen tool for this kind of profile?
```yaml
SampleInsecureKafka:
  Orderer:
    <<: *OrdererDefaults
    OrdererType: kafka
    Addresses:
      - orderer0.example.com:7050
      - orderer1.example.com:7050
      - orderer2.example.com:7050
    Organizations:
      - *ExampleCom
  Application:
    <<: *ApplicationDefaults
    Organizations:
      - *ExampleCom
  Consortiums:
    SampleConsortium:
      Organizations:
        - *ExampleCom
        - *Org1ExampleCom
        - *Org2ExampleCom
```
It is my pleasure to announce the first (proof-of-concept maturity level) Byzantine fault-tolerant (BFT) ordering service for Hyperledger Fabric v1 (after PBFT implementation was deprecated along with v0.6). This BFT ordering service is a wrapper around BFT-SMaRt Java library (https://github.com/bft-smart/library) which is one of the oldest actively maintained open source BFT libraries, maintained by University of Lisbon.
The code is available at https://github.com/jcs47/hyperledger-bftsmart whereas the technical report describing the implementation is available at http://arxiv.org/abs/1709.06921.
@vukolic do you mean BFT is ready for fabric?
it seems not seamlessly compatible with fabric, since it is implemented in java. i think golang would be preferable.
@vukolic to make the implementation easier to study, i suggest providing a docker compose file to build the example network.
@bh4rtp I'm not sure what the problem is? This sounds like things are working properly? [ ](https://chat.hyperledger.org/channel/fabric-consensus?msg=GpWMYZMukZ3vhThei)
Has joined the channel.
@jyellick i'm not sure whether it is a problem of solo ordering or of the node sdk. i noticed that every block only had one single transaction for BatchTimeout = 2s. so i changed BatchTimeout to 10s, which i thought would yield at least 5 transactions in every block, but the result was no different: still one single transaction in every block. at the client side, every invoke takes 2s for BatchTimeout = 2s and 10s for BatchTimeout = 10s. i think this is not working properly.
[ Thank you, @jyellick ](https://chat.hyperledger.org/channel/fabric-consensus?msg=5X8ejezoFFEy3aa7k)
@jyellick Let's say I want to check a condition that a certain threshold of generated coins is kept for that particular peer, where would I do this? Is this something I need to have in the consensus because I want it checked on all peers?
@hpurmann: This also sounds like logic that you would write into your chaincode application. (And maybe something that you could address with custom VSCC as well.)
Thanks. I'll try to do a PoC and get back to you :)
Has joined the channel.
If you use unspent transaction outputs (i.e. like in bitcoin) you need a custom vscc
[ ](https://chat.hyperledger.org/channel/fabric-consensus?msg=sRrx88ATYDRjLjkZe)
@bh4rtp Are you using the SDK's eventing mechanism to wait for one transaction to commit before sending the next? This would explain the behavior you are observing.
@bh4rtp how many threads do you start?
if you want to test the tps, you can use the PTE tool
Has joined the channel.
@jyellick that sounds reasonable. could you tell me how to confirm the use of the sdk's eventing mechanism? is there any way to avoid waiting for one transaction to commit before sending the next?
@asaningmaxchain just a single client.
@bh4rtp so you use a for loop to send the txs?
@asaningmaxchain for the example business session, i wrote 23 invokes and 20 query tests by hand, and found there were 24 blocks created.
maybe your query actually uses an invoke, please check
query is very fast, while every invoke keeps 2s latency (for BatchTimeout = 2s).
@bh4rtp Are you certain the invoke is not waiting for the transaction to commit before attempting the next invoke?
@jyellick let me check it.
@jyellick i see the source code adds another msp implementation
can you provide doc how to use it?
@asaningmaxchain The new MSP implementation is not yet ready to be used
@jyellick ok,i am looking for it
@jyellick i see in the source code in common/cauthdsl `cauthdsl_builder.go` that it assigns whether the msp principal is admin or member, and this is controlled by the user?
it means that i can set the msp principal to admin or member
@asaningmaxchain: This MSP discussion doesn't seem to be consensus-related?
i am sorry, i understand
i will pay more attention to where i discuss questions
Has joined the channel.
@kostas can you please help me with this issue
Running an orderer
`Amjads-MacBook-Pro:fabric amjad$ ./build/bin/orderer`
Gets the following error
```
2017-09-27 17:04:24.141 GST [orderer/main] main -> INFO 001 Starting orderer:
Version: 1.0.3-snapshot-d54542f
Go version: go1.8
OS/Arch: darwin/amd64
2017-09-27 17:04:24.154 GST [orderer/localconfig] Load -> CRIT 002 Error unmarshaling config into struct: 2 error(s) decoding:
* '' has invalid keys: genesis, sbftlocal
* 'Kafka.Retry' has invalid keys: Period, Stop
panic: Error unmarshaling config into struct:2 error(s) decoding:
* '' has invalid keys: genesis, sbftlocal
* 'Kafka.Retry' has invalid keys: Period, Stop
goroutine 1 [running]:
github.com/hyperledger/fabric/vendor/github.com/op/go-logging.(*Logger).Panic(0xc420258090, 0xc4202cdb00, 0x2, 0x2)
/tts/official/src/github.com/hyperledger/fabric/vendor/github.com/op/go-logging/logger.go:188 +0xc7
github.com/hyperledger/fabric/orderer/localconfig.Load(0xc420258b40)
/tts/official/src/github.com/hyperledger/fabric/orderer/localconfig/config.go:237 +0x583
main.main()
/tts/official/src/github.com/hyperledger/fabric/orderer/main.go:69 +0x2f6```
Took the latest copy to reconfigure a new machine to be used in "DEV" mode
@Amjadnz: I see references to `sbftlocal` in the log that you provided. This suggests that you're using old artifacts.
I c
But I got the latest code from github this morning.
If it helps I can get the copy again and retry.
maybe something is not clean.
@Amjadnz: What does your `orderer.yaml` look like? Use Pastebin.
Sure. https://pastebin.com/KfxMxvuj
This is an outdated file. See: https://github.com/hyperledger/fabric/blob/master/sampleconfig/orderer.yaml for the orderer.yaml that's inline with what your orderer binary expects.
Cool - I'll check this out
Thanks for pointing it out. I would refresh and try again.
[ ](https://chat.hyperledger.org/channel/fabric-consensus?msg=7MzPp2974rL5RHK3s) @kostas - this worked. My orderer is now up.
Thanks.
Has joined the channel.
I have 3 peers, 3 CouchDBs, an orderer, and a CA. I have deployed a hyperledger composer .bna file to all 3 peers and spun up a composer-rest-server. How do I configure consensus between these 3 peers?
@t_stephens67: I'm not well-versed in Composer (and this is really a question for #composer) but as long as your peers have a genesis block that points them to the same (set of) orderer(s), and their respective peer orgs belong to the same consortium, they're ready to do business with each other in the context of a channel.
@jyellick i have asked in `fabric-sdk-node` and was told that the sdk does not have an eventing mechanism to wait for one transaction to commit before sending the next.
@jyellick here are the sdk, peer and orderer logs. please see https://pastebin.com/tiBE1mSv. thanks.
@bh4rtp I am certain that the SDKs all support the peer eventing mechanism, and many of the tests use this mechanism to wait for a transaction to commit. If you built your application based off of these tests, you might be performing this behavior yourself?
Can you share your application code? or a simplified version which reproduces the problem?
@jyellick no problem. let me have the application code simplified.
@jyellick i have reproduced the problem using `fabric-samples/balance-transfer`. to do it:
1. clone `fabric-sdk-node` and `fabric-samples`
2. install `jq` for the jwt shell script
3. in fabric-samples, run `npm install`
4. change `request-timeout` to 90000 in node_modules/fabric-client/config/default.json
5. run ./runApp.sh in the root directory
6. open another terminal and run ./mytestAPIs.sh
copy mytestAPIs.sh from https://pastebin.com/gkYmQsq1.
@bh4rtp Please see https://github.com/hyperledger/fabric-samples/blob/release/balance-transfer/app/invoke-transaction.js#L90-L123
I am not a javascript expert, but I read this portion of the invoke helper as blocking until the event from the transaction commit arrives.
Hence, the app is waiting for the previous transaction to commit before sending the subsequent one
Has joined the channel.
Other than partitioning, how does specifying an endorsement policy help? Does it matter which peer is selected as an endorser for a transaction?
@sampath06 The endorsement policy typically specifies which organizations must endorse a transaction. So, for instance, if two organizations develop a chaincode to trade assets between them, the endorsement policy would require that both organizations agree on the chaincode result. Endorsement policy is a way to enforce that the required parties agree with the result of a transaction.
@jyellick yes, but can the sdk avoid using the event and just return the txId quickly?
@bh4rtp Of course, simply examine that code, and modify it not to wait for the event future to return.
Keep in mind, that the balance transfer example may develop MVCC conflicts if you do not wait for the transaction to commit before executing a new one
@jyellick it seems that spreading the same number of transactions across more clients would be more efficient.
In general, you should design your chaincode to minimize key contention. Consider for instance a chaincode which tallies votes. One option, is to have a key which records the number of votes, which each transaction reads, and increments by one. This is a bad chaincode architecture however, because any two concurrent transactions will result in a conflict, because they are each reading the same key. Instead, consider a chaincode design where each vote is recorded as a key, and to find the total number of votes is a query which iterates over all the votes and totals them. In this second design, as many parallel votes may be cast as desired without any possibility for conflict.
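The vote-tallying point above can be sketched as a toy simulation. This is illustrative Go, not Fabric code: the `txn` type, the `commit` function, and the simplified version check are my own stand-ins for the peer's MVCC validation. With one shared counter key, only the first of two concurrent transactions survives; with per-vote keys, both do.

```go
// Toy model of MVCC validation: a transaction is invalid if any key it
// read has changed version since it was endorsed.
package main

import "fmt"

// txn models a transaction's read set: key -> version read at endorsement.
type txn struct {
	name    string
	readSet map[string]int
}

// commit applies txns in order, keeping only those whose read set still
// matches the current state; committed txns bump the version of the keys
// they touched (for simplicity, writes target the same keys as reads).
func commit(state map[string]int, txns []txn) []string {
	var valid []string
	for _, t := range txns {
		ok := true
		for k, v := range t.readSet {
			if state[k] != v {
				ok = false // version moved underneath us: MVCC conflict
			}
		}
		if ok {
			valid = append(valid, t.name)
			for k := range t.readSet {
				state[k]++
			}
		}
	}
	return valid
}

func main() {
	// Design 1: both concurrent votes read-modify-write one "total" key.
	counter := commit(map[string]int{"total": 0}, []txn{
		{"voteA", map[string]int{"total": 0}},
		{"voteB", map[string]int{"total": 0}}, // endorsed against the old version
	})
	// Design 2: each vote writes its own key; no shared reads, no conflict.
	perKey := commit(map[string]int{}, []txn{
		{"voteA", map[string]int{"vote/A": 0}},
		{"voteB", map[string]int{"vote/B": 0}},
	})
	fmt.Println(len(counter), len(perKey)) // 1 2
}
```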
[ ](https://chat.hyperledger.org/channel/fabric-consensus?msg=dKjzCzPQE8os8fXLY) @jyellick Since all nodes have the same data, in what case would one node fail a transaction while another node endorses it?
@sampath06 e.g. if one organization decides to cheat, or got hacked or chaincode produces non-deterministic results, etc.
@Vadim Got it. Thanks. Can we change the endorser during invocation of the chaincode? E.g. in a multi-tenant situation, where multiple organisations are part of the channel, can we specify the endorser based on the organisations involved in the transaction?
@sampath06 you can specify any endorsers you want using the SDK
But that is at the chaincode level, right? So if I want the endorsing peers to change based on the parties involved in the transaction, do I need multiple chaincodes?
no, this is at SDK level
https://fabric-sdk-node.github.io/global.html#ChaincodeInvokeRequest
there you see targets, which you can set to any peers (endorsers) you want
at chaincode level you have an endorsement policy which says how many signatures needed for the transaction to be valid
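For illustration, such a policy is attached at instantiation time with `-P` (the chaincode name, channel, and org names are illustrative):

```shell
# Require one signature from a member of each of two orgs; which specific
# peers produce those signatures does not matter.
peer chaincode instantiate -o orderer.example.com:7050 -C mychannel \
    -n mycc -v 1.0 -c '{"Args":["init"]}' \
    -P "AND('Org1MSP.member','Org2MSP.member')"
```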
So how do the peers receiving the request relate to the ones specified in the endorsement policy? Does the receiving peer also evaluate the transaction, or only the endorsing peers?
@sampath06 in the endorsement policy you normally specify not a concrete peer, but a number of signatures from an org
from which peers they come does not matter
@Vadim I was equating peers with Orgs. So if we want to make sure that peers from particular organisations approve a transaction based on which orgs are involved in the transaction, how do we do that? Will the peers parameter in the SDK ensure that?
@sampath06 I guess you need to know that out of band for now. In case you make a mistake, the endorsement policy will ensure that the transaction is invalid.
I've seen some JIRA items on peer discovery, so perhaps you can look it up there
Sure. Thanks. I was hoping endorsing peers could be used for this. But I guess not
e.g. https://jira.hyperledger.org/browse/FAB-5451
@jyellick i thought mvcc validated the membership. what are mvcc conflicts?
@bh4rtp https://en.wikipedia.org/wiki/Multiversion_concurrency_control
Folks, this is a reminder that this channel is for discussions related to the ordering service and its APIs. None of the latest discussions here are relevant to consensus. If in doubt, please post in #fabric.
Perhaps part of the confusion stems from the fact that this channel is named #fabric-consensus, and we used to have this marketing-speak blurb out there in the docs that said consensus in Fabric happens throughout the entire stack, starting from endorsement and ending up with the validation check. This may be accurate, depending on how generous you feel with the definitions, but this is really not the place for endorsement or validation questions. I'll ping @rjones and see if we can change to #fabric-orderer
Has joined the channel.
@kostas that would be good. I have been putting my endorser queries here since that seems to be part of consensus
@sampath06: Understood, thanks for confirming my theory.
I remember there is a doc describing how the orderer works, but cannot find it. Can anyone send it again? Thanks!
Room name changed to: fabric-orderer by rjones
Welcome to #fabric-orderer. Questions here should be related to either the ordering service code and its APIs (Broadcast/Deliver), configuration transactions, or the ordering service consensus plugins (Solo/Kafka/SBFT). Before posting your question, please take time to ensure that your question is precise and concise, and use a service like Pastebin or GitHub Gist for all log outputs that you wish to reference. For example: Bad question: Why do I get the error `BAD_REQUEST`? Good question: Using `fabric-examples/first-network/byfn.sh`, when submitting the channel creation as `Admin@org1.example.com` it succeeds, but when using `User1@org1.example.com` it fails with `BAD_REQUEST`. (Full log can be found here: https://pastebin.com/LFGNB88a) Why does this second request fail?
@jyellick Hi, how do I use `peer channel fetch` to fetch the config block? I executed the command on the cli (i.e. the client), but I can't find the block. Thanks
can you show the cmd that you used?
peer channel fetch config config_block.pb -o orderer.example.com:7050 --tls true --cafile /opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/ordererOrganizations/example.com/orderers/orderer.example.com/msp/cacerts/ca.example.com-cert.pem -c mychannel
on cli container, but didn't get the config_block.pb
`./peer channel fetch config config_block.pb -o 127.0.0.1:7050 -c testchainid` works with the orderer running locally
Has left the channel.
but not for fetching the block from a remote orderer container, in my test
did you create the channel named mychannel?
@kostas i think https://gerrit.hyperledger.org/r/#/c/12953/ can be merged
Has joined the channel.
Hi
Can we use a Kafka-based orderer to improve the performance of my Fabric network?
Right now I am working with the SOLO orderer.
@Ashish how do you measure the performance when you use SOLO?
we are not using any tools as of now.
we have a client who is sending in transactions
and we just measure the request to response timings
you can try it
the master branch on github provides a kafka e2e_cli
Hi, we can use configtxgen to produce a channel genesis block. We can also ask the orderer to produce a channel genesis block by calling a certain api. However, in the fabric orderer code I cannot find the code that produces the channel genesis block. Why? Thank you.
please take a look at the source code in `common/tool`; it provides many tools to produce the config
Has left the channel.
@asaningmaxchain, thank you. would you please tell me the exact function or directory? then by finding the function, I can locate the code that produces the channel genesis block.
Hi, I am facing an issue. I am sending a channel creation tx and on the orderer side I get following error:
Rejecting CONFIG_UPDATE because: Error authorizing update: Error validating ReadSet: Existing config does not contain element for [Groups] /Channel/Application but was in the read set
can someone explain the meaning of this error?
@aberfou `$fabric/common/configtx/update.go:32`:
```go
func (c *configSet) verifyReadSet(readSet map[string]comparable) error {
	for key, value := range readSet {
		existing, ok := c.configMap[key]
		if !ok {
			return fmt.Errorf("Existing config does not contain element for %s but was in the read set", key)
		}
		if existing.version() != value.version() {
			return fmt.Errorf("Readset expected key %s at version %d, but got version %d", key, value.version(), existing.version())
		}
	}
	return nil
}
```
thx, but can you please elaborate?
your configuration set does not contain all the configuration options which are necessary
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=2kSXcHtaSvisizRy7) @kostas the 'create channel' api is in fabric-client/lib/Client.js based nodejs.
is it ok for all orderers to run on one port, :7050?
I am getting a
```
error : Error: SERVICE_UNAVAILABLE
```
when trying to send requests from client side
```
error: [Orderer.js]: sendBroadcast - on error: "Error: Connect Failed\n at ClientDuplexStream._emitStatusIfDone
```
any help on this ?
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=px4tg8Z2pEFcDszid) @asaningmaxchain [ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=maenScgJnYsuzuJJs) @asaningmaxchain
Thank you @asaningmaxchain - will try it :)
@Ashish: Comparing solo and Kafka performance-wise is comparing apples to oranges, as solo is only meant for development purposes and testing. You should switch to the Kafka-based orderer if you're running an actual application.
> We can also ask orderer to produce channel genesis block by calling certain api.
@qsmen: What is the API that you, as a user, have access to on the orderer to produce a channel genesis block?
`configtxgen` allows you to create channel creation transaction that you then broadcast to the ordering service, and the OS parses it to create a new channel. For channel creation purposes your only means of interaction with the ordering service, is your broadcasting this transaction. That's it.
@gentios: I assume you mean different VMs/machines. The answer is yes.
When I run `orderer version` from the pre-built binaries or a binary I build myself, it shows `Version: development build`, but the other binaries show 1.0.0, etc. All of the docker containers I have tried (from downloaded images) also show 1.0.0, etc. Is that expected/desired behavior for pre-built or self-built binaries?
I had expected to run orderer as a custom user, but I either have to grant write to custom user on /var/hyperledger or run as root. Is it expected to run as root or that I should grant write to custom user?
@jworthington I just checked the v1.0.2 release binaries and I indeed see the version as `development build`. I'm guessing this was an oversight in the release process. I'll move this to #fabric-release
> but I either have to grant write to custom user on /var/hyperledger or run as root
@jworthington I would very much encourage you to `chown` that folder, rather than run as root. You may alternatively reconfigure the storage path by editing `orderer.yaml` to set the `FileLedger.Location`, or, alternatively, you may override it using the environment variable `ORDERER_FILELEDGER_LOCATION`
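A minimal sketch of the environment-variable override (the path is illustrative; use any directory the orderer's user owns):

```shell
# Point the orderer's file ledger at a directory owned by the orderer user,
# instead of running as root or chowning /var/hyperledger.
export ORDERER_FILELEDGER_LOCATION=/tmp/hyperledger-orderer-ledger
mkdir -p "$ORDERER_FILELEDGER_LOCATION"
```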
Thx
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=2kSXcHtaSvisizRy7) @kostas
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=2kSXcHtaSvisizRy7) @kostas Thanks a Ton. Needed some confidence before I got my hands wet with it.
@kostas, thank you. By running `configtxgen -profile TwoOrgsChannel -outputCreateChannelTx ./channel-artifacts/channel.tx -channelID $CHANNEL_NAME` we get the channel configuration transaction. By running `peer channel create -o orderer.example.com:7050 -c $CHANNEL_NAME -f ./channel-artifacts/channel.tx --tls $CORE_PEER_TLS_ENABLED --cafile /opt/gopath/src/github.` we get the genesis block for a channel. Do you mean that by running this command we broadcast the transaction to the ordering service and the orderer creates a channel from it?
yes, it should be this. the first command does not contain orderer information; the second command contains orderer information. About the api, I am searching
the second command returns the channel genesis block, so it should be produced by the peer or by the orderer.
the first command is used to create the channel config, and the second command is used to send the config to the orderer and then get the genesis block for the channel named mychannel
@qsmen
@asaningmaxchain, so the orderer produces the block from the received config.
yes
ok, thank you ,
@asaningmaxchain, however, I cannot find the code that produces the block in the orderer code. do you know where it is?
please take a look at `orderer/common/multichannel/blockwriter.go`
@asaningmaxchain , thank you. I will read that.
@asaningmaxchain , what's your fabric version? in fabric 1.0, I can not find the file
master
please checkout to the master branch
yeah, I get it. Thank you very much
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=PNyLp8JtAhopct82R) @kostas the 'create channel' api is in fabric-client/lib/Client.js based nodejs.
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=Ycw3FxENbyZ8ikjHm) @asaningmaxchain thanks, the create-channel flow based on the apis:
1. Client sends the `createchannel.tx` to the orderer service;
2. orderer service returns Success or Fail;
3. Client sends `getGenesisBlock` to the orderer service;
4. orderer service returns the `Genesis Block`;
5. Client sends `Join Channel` including the `Genesis Block` to the peer
@luckydogchina
@luckydogchina do you use the node sdk to create the channel?
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=WFdXxRQrbfcYcPh2e) @asaningmaxchain Yes
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=shEfx3qKH6ehGyn7T) @asaningmaxchain why is `get` ?
the peer gets the genesis block
1. the sdk sends the `createChannel.tx` to the orderer
2. the peer will get the genesis block based on step 1
shouldn't the genesis block the peer received be sent by the client, per luckydogchina's description?
I think both ways are right: one is the command-line way, the other is the API way.
Does anyone know how two different orderers communicate within the same channel? Like this scenario: I have one system channel including two different orderers. A peer wants to create an application channel by sending the orderer a config.tx. My question is: after the peer sends config.tx to one of the orderers, what will the orderers do next?
`I have one system channel including two different orderers` - it seems that isn't supported, let me check it
https://gerrit.hyperledger.org/r/#/c/13999/ @kostas please take a look
@here - can anyone point out where the orderer's storage directory is.
is it in /var/hyperledger?
yea its in /var/hyperledger/production/orderer
you can config in the orderer.yaml
Yep just found out - thanks anyway.
One more question - If I have 4 orderers how do I connect them to one network. Can anyone provide help?
@here Let's say I have 1 orderer for each organisation, and I have 4 such orgs.
Now I need to make sure that they are part of the network when communicating.
Where do we specify the "IP/PORT" of the other orderers?
@Amjadnz let's not use @here for every message
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=yxr25DXb5Wbx38MnQ) @wy Sure
@asaningmaxchain Let's say I have 1 orderer for each organisation, and I have 4 such orgs.
Now I need to make sure that they are part of the network when communicating.
Where do we specify the "IP/PORT" of the other orderers?
Any link to the respective doc would also help.
@Amjadnz - take a look at https://github.com/hyperledger/fabric/blob/release/sampleconfig/configtx.yaml#L140 - you can specify multiple orderer addresses / endpoints
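For reference, the linked section looks roughly like this (hostnames and ports are placeholders):

```yaml
Orderer: &OrdererDefaults
    # All orderer endpoints clients may connect to; these are encoded into
    # the genesis block and read by peers joining a channel.
    Addresses:
        - orderer0.example.com:7050
        - orderer1.example.com:7050
        - orderer2.example.com:7050
        - orderer3.example.com:7050
```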
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=pHFfZb743mY6n6ACk) @mastersingh24 - Great. Thanks
@mastersingh24 - one more question if two orgnaizations want to use the same orderer from a different org - any clues?
I mean - two peers of different organisations want to use the same orderer - what are the certs that we have to share.
orderer.yaml - Server Cert, ClientRootCAs (should specify all the client CAs planning to connect)
peer (core.yaml) - MSP location; and when running channel create, pass the CAFILE of the organization listed with the orderer (ClientRootCAs), and the MSPID
Somehow all my combinations are giving me bad-certificate issues.
on the connection or on the proposal?
on the connection they are TLS certs, on the proposal they are e-certs
just to create a channel - It is a proposal right?
Usually the orderer is from a different org than the peers.
well it's proposal but you need to get through the TLS first
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=e5bmWQofBL5AoPaoW) @yacovm Ok - that was new info for me.
Ok so I have to specify these arguments to the peer to create channel
```CORE_PEER_MSPCONFIGPATH=peerOrganizations/adx.ubn.ae/peers/peer0.adx.ubn.ae/msp
CORE_PEER_ADDRESS=peer0.adx.ubn.ae:7051 CORE_PEER_LOCALMSPID="AdxOrg"
CORE_PEER_TLS_ROOTCERT_FILE=peerOrganizations/adx.ubn.ae/tlsca/tlsca.adx.ubn.ae-cert.pem```
As command arguments:
`--tls true --cafile ordererOrganizations/orderer.ubn.ae/orderers/orderer.orderer.ubn.ae/tls/ca.crt`
but am always bumping into this error - somehow :-(
```Error: Error connecting due to rpc error: code = Internal desc = connection error: desc = "transport: authentication handshake failed: x509: certificate signed by unknown authority"```
yep that's a TLS cert problem
Aha - so it's not signed by the proper CA?
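That "certificate signed by unknown authority" failure is just x509 chain verification against the wrong root pool. A minimal Go sketch (toy in-memory certs, not Fabric code; all names are made up) reproduces both outcomes:

```go
package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"time"
)

// makeCert issues a certificate from tmpl, signed by parent/parentKey,
// or self-signed when parent is nil.
func makeCert(tmpl, parent *x509.Certificate, parentKey *ecdsa.PrivateKey) (*x509.Certificate, *ecdsa.PrivateKey) {
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		panic(err)
	}
	signer, par := key, tmpl
	if parent != nil {
		signer, par = parentKey, parent
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, par, &key.PublicKey, signer)
	if err != nil {
		panic(err)
	}
	cert, err := x509.ParseCertificate(der)
	if err != nil {
		panic(err)
	}
	return cert, key
}

// demo builds a toy "orderer org TLS CA" plus an orderer cert signed by it,
// then verifies the orderer cert against the right CA pool and a wrong one.
func demo() (rightCA, wrongCA error) {
	now := time.Now()
	ca, caKey := makeCert(&x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "tlsca.orderer-org.example.com"},
		NotBefore:             now.Add(-time.Hour),
		NotAfter:              now.Add(time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}, nil, nil)
	orderer, _ := makeCert(&x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "orderer.example.com"},
		NotBefore:    now.Add(-time.Hour),
		NotAfter:     now.Add(time.Hour),
	}, ca, caKey)

	right := x509.NewCertPool()
	right.AddCert(ca)
	_, rightCA = orderer.Verify(x509.VerifyOptions{Roots: right})

	// Verifying against some other org's CA pool (here: an empty pool)
	// reproduces the "unknown authority" error.
	_, wrongCA = orderer.Verify(x509.VerifyOptions{Roots: x509.NewCertPool()})
	return rightCA, wrongCA
}

func main() {
	ok, bad := demo()
	fmt.Println("with orderer org CA:", ok)
	fmt.Println("with wrong CA:", bad)
}
```

The takeaway: verification only succeeds when the orderer org's TLS CA cert is in the pool the client trusts, which is why `--cafile` has to point at that file.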
but what about- `CORE_PEER_TLS_ENABLED=true` ?
I guess you also have that
yea using as `--tls true` in the command line - isn't that same?
nope. the `--tls true` is for orderer, the CORE_PEER one is for the peer
Ok - that is very interesting.
interesting isn't the word I would pick ;)
:-) for me I meant
Let me give another round - and update the group.
so you're trying to create a channel?
yea with TLS enabled
and it is giving 2 errors
`Error: Error connecting due to rpc error: code = Internal desc = connection error: desc = "transport: authentication handshake failed: remote error: tls: bad certificate"`
hmmm what do you pass to `-o` ?
-o is the hostname of the orderer:7050
The exact command is like this
```CORE_PEER_MSPCONFIGPATH=peerOrganizations/adx.ubn.ae/peers/peer0.adx.ubn.ae/msp CORE_PEER_ADDRESS=peer0.adx.ubn.ae:7051 CORE_PEER_LOCALMSPID="AdxOrg" CORE_PEER_TLS_ROOTCERT_FILE=peerOrganizations/adx.ubn.ae/peers/peer0.adx.ubn.ae/tls/ca.crt peer channel create -o orderer:7050 -c $CHANNEL_NAME -f ./channel-artifacts/channel.tx --tls true --cafile peerOrganizations/adx.ubn.ae/ca/ca.adx.ubn.ae-cert.pem```
aha this is the problem
`--cafile peerOrganizations/adx.ubn.ae/ca/ca.adx.ubn.ae-cert.pem` - you need here the orderer org CA file
Ok - so --tls and --cafile always are associated with orderer. Let me test it.
@yacovm - This is the command I'm trying now
```CORE_PEER_TLS_ENABLED=true CORE_PEER_MSPCONFIGPATH=peerOrganizations/adx.ubn.ae/users/Admin@adx.ubn.ae/msp CORE_PEER_ADDRESS=peer0.adx.ubn.ae:7051 CORE_PEER_LOCALMSPID="AdxOrg" CORE_PEER_TLS_ROOTCERT_FILE=peerOrganizations/adx.ubn.ae/tlsca/tlsca.adx.ubn.ae-cert.pem peer channel create -o orderer:7050 -c $CHANNEL_NAME -f ./channel-artifacts/channel.tx --cafile /tts/official/src/tts/ubn/test/sampleconfig/ordererOrganizations/orderer.ubn.ae/ca/ca.orderer.ubn.ae-cert.pem ```
```Error: Error connecting due to rpc error: code = Internal desc = connection error: desc = "transport: authentication handshake failed: x509: certificate signed by unknown authority"```
In the orderer logs I get the following errors:
```2017-09-30 15:49:47.647 GST [grpc] Printf -> DEBU 873 grpc: Server.Serve failed to complete security handshake from "127.0.0.1:61483": remote error: tls: bad certificate```
Case #2: If I try with a TLS cert for the --cafile option
```CORE_PEER_TLS_ENABLED=true CORE_PEER_MSPCONFIGPATH=peerOrganizations/adx.ubn.ae/users/Admin@adx.ubn.ae/msp CORE_PEER_ADDRESS=peer0.adx.ubn.ae:7051 CORE_PEER_LOCALMSPID="AdxOrg" CORE_PEER_TLS_ROOTCERT_FILE=peerOrganizations/adx.ubn.ae/tlsca/tlsca.adx.ubn.ae-cert.pem peer channel create -o orderer:7050 -c $CHANNEL_NAME -f ./channel-artifacts/channel.tx --tls true --cafile /tts/official/src/tts/ubn/test/sampleconfig/ordererOrganizations/orderer.ubn.ae/orderers/orderer.orderer.ubn.ae/tls/ca.crt ```
I get a different error:
```Error: got unexpected status: FORBIDDEN -- Failed to reach implicit threshold of 1 sub-policies, required 1 remaining: permission denied```
so fabric has different TLS root CAs and different e-cert root CAs
the 2nd attempt gets you through the TLS!
and that is what you need to do
We have to use TLS certs only
yes
now you're missing, I think, another env var
`CORE_PEER_TLS_LOCALMSPID`
`CORE_PEER_LOCALMSPID`
@Amjadnz ^
https://github.com/hyperledger/fabric/blob/release/sampleconfig/core.yaml#L252
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=ATq9KNLNeamJMMRGf) @yacovm - Thanks as always very helpful. Would do one round of test and confirm if all is fine.
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=vuLWCuFDtmdFcvPXG) @yacovm - After applying all the changes plus one more of my own - it all seems to be working. Thanks @yacovm
Regarding my change
``` ADXNetworkDuo:
Orderer:
<<: *OrdererDefaults
Organizations:
Consortiums:
SampleConsortium:
Organizations:
Application:
Organizations:
Consortium: SampleConsortium```
I had earlier specified a lot of orgs in the configtx file.
Something like this
``` ADXNetworkDuo:
Orderer:
<<: *OrdererDefaults
Organizations:
- <<: *AdxOrg
AdminPrincipal: Role.ADMIN
- <<: *ScaOrg
AdminPrincipal: Role.ADMIN
- <<: *BrokerOrg
AdminPrincipal: Role.ADMIN
- <<: *OnlineOrg
AdminPrincipal: Role.ADMIN
- <<: *AuditorOrg
AdminPrincipal: Role.ADMIN
Consortiums:
SampleConsortium:
Organizations:
- <<: *AdxOrg
AdminPrincipal: Role.ADMIN
- <<: *ScaOrg
AdminPrincipal: Role.ADMIN
- <<: *BrokerOrg
AdminPrincipal: Role.ADMIN
- <<: *OnlineOrg
AdminPrincipal: Role.ADMIN
- <<: *AuditorOrg
AdminPrincipal: Role.ADMIN
Application:
Organizations:
- <<: *AdxOrg
AdminPrincipal: Role.ADMIN
- <<: *ScaOrg
AdminPrincipal: Role.ADMIN
- <<: *BrokerOrg
AdminPrincipal: Role.ADMIN
- <<: *OnlineOrg
AdminPrincipal: Role.ADMIN
- <<: *AuditorOrg
AdminPrincipal: Role.ADMIN
Consortium: SampleConsortium
```
There is some issue in my working copy of the example.
If I remove all orgs - then it's kind of okay. However I would like to have some degree of control over which orgs are part of my channel list.
And what access privileges are given to them when I'm generating the GENESIS block per client per channel.
`configtxgen -profile ADXNetworkDuo -outputAnchorPeersUpdate ./channel-artifacts/ADXanchors.tx -channelID $CHANNEL_NAME -asOrg AdxOrg`
Is there any documentation for the structure of configtx.yaml file?
I see there are a lot of hidden items that are there which are extremely useful for production systems.
It may not be used as it is - but we can build on top of it and save a lot of time.
@Amjadnz The profile used to generate your ordering system channel defines the organizations in the consortium definition. The organizations in the consortium definition may create channels on this ordering service. When creating a channel, the members defined in the application section of the profile used to define the new channel are the organizations which will be able to participate in that particular channel. More sophisticated configuration is possible by modifying the underlying proto structures (possibly using a tool like `configtxlator` )
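A sketch of how that maps onto `configtx.yaml` profiles (the org anchors like `*Org1` and profile names are placeholders assumed to be defined elsewhere in the file):

```yaml
Profiles:
    # Used to generate the ordering system channel genesis block; the
    # consortium members listed here may create channels on this orderer.
    SampleGenesis:
        Orderer:
            <<: *OrdererDefaults
            Organizations:
                - *OrdererOrg
        Consortiums:
            SampleConsortium:
                Organizations:
                    - *Org1
                    - *Org2
    # Used to generate a channel creation tx; the Application members are
    # the consortium orgs that participate in this particular channel.
    SampleChannel:
        Consortium: SampleConsortium
        Application:
            Organizations:
                - *Org1
```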
@kostas i will notice
https://gerrit.hyperledger.org/r/#/c/13999/
@jyellick I don't know why https://gerrit.hyperledger.org/r/#/c/12953/ can't be merged?
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=Z3DCDim2Y8pEm3wYB) @jyellick Thanks.
@kostas, @yacovm or @jyellick - can you please clarify this point
Orderer - Start looks fine
Peer - Start Looks fine
Channel - Create Looks fine
Channel - Join - the REST response from the server is fine. But the logs of the PEER show the following error
```CORE_PEER_TLS_LOCALMSPID="AdxOrg" CORE_PEER_TLS_ENABLED=true CORE_PEER_MSPCONFIGPATH=/tts/official/src/tts/ubn/test/sampleconfig/peerOrganizations/adx.ubn.ae/users/Admin@adx.ubn.ae/msp CORE_PEER_ADDRESS=peer0.adx.ubn.ae:7051 CORE_PEER_LOCALMSPID="AdxOrg" CORE_PEER_TLS_ROOTCERT_FILE=/tts/official/src/tts/ubn/test/sampleconfig/peerOrganizations/adx.ubn.ae/tlsca/tlsca.adx.ubn.ae-cert.pem peer channel join -b ./adx-agm-channel.block```
```2017-10-01 23:41:20.228 GST [ConnProducer] NewConnection -> ERRO 34d Failed connecting to orderer:7050 , error: x509: certificate signed by unknown authority
2017-10-01 23:41:20.228 GST [deliveryClient] connect -> DEBU 34e Connected to
2017-10-01 23:41:20.228 GST [deliveryClient] connect -> ERRO 34f Failed obtaining connection: Could not connect to any of the endpoints: [orderer:7050]
```
The above is the PEER Logs
I know it's the peer log
:-)
so my bet is that the orderer org's TLS-CA certs aren't in the config block
you can test that using the `configtxgen --outputblock` or something like that
Ok
Can we set these parameters as part of environment variables when generating the channel transaction?
As `configtxgen` does not have parameters to input the orderer crypto details.
Ok - so here's what I did:
`configtxgen -profile ADXNetworkDuo -inspectBlock adx-agm-channel.block`
and as you suggested - I don't have the OrdererOrg registered there.
Let me check that - Thanks a lot.
I could find the orderer info in the block def. But the CA info is missing.
and you don't see the tlsca there?
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=yGN7FYks9yfbeHdpS) @yacovm - no. just the definition of the orderer and it listening on 7050.
```"Orderer": {
"mod_policy": "Admins",
"policies": {
"Admins": {
"mod_policy": "Admins",
"policy": {
"type": 3,
"value": {
"rule": "MAJORITY",
"sub_policy": "Admins"
}
},
"version": "0"
},
"BlockValidation": {
"mod_policy": "Admins",
"policy": {
"type": 3,
"value": {
"rule": "ANY",
"sub_policy": "Writers"
}
},
"version": "0"
},
"Readers": {
"mod_policy": "Admins",
"policy": {
"type": 3,
"value": {
"rule": "ANY",
"sub_policy": "Readers"
}
},
"version": "0"
},
"Writers": {
"mod_policy": "Admins",
"policy": {
"type": 3,
"value": {
"rule": "ANY",
"sub_policy": "Writers"
}
},
"version": "0"
}
},
"values": {
"BatchSize": {
"mod_policy": "Admins",
"value": {
"absolute_max_bytes": 10485760,
"max_message_count": 10,
"preferred_max_bytes": 524288
},
"version": "0"
},
"BatchTimeout": {
"mod_policy": "Admins",
"value": {
"timeout": "2s"
},
"version": "0"
},
"ChannelRestrictions": {
"mod_policy": "Admins",
"version": "0"
},
"ConsensusType": {
"mod_policy": "Admins",
"value": {
"type": "solo"
},
"version": "0"
}
},
"version": "0"
}
},```
Sorry for the entire text - but just to give you more info.
and in one more location
```"OrdererAddresses": {
"mod_policy": "/Channel/Orderer/Admins",
"value": {
"addresses": [
"orderer:7050"
]
},
"version": "0"
}
```
Is there a way we can specify Orderer TLSCA in configtx.yaml?
@Amjadnz when you post large segments of text, please use a service like pastebin
The TLS certs which are encoded by `configtxgen` come out of the directory specified by `MSPDir` in `configtxyaml`
The TLS certs which are encoded by `configtxgen` come out of the directory specified by `MSPDir` in `configtx.yaml`
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=zELwgFQzqFFTnTfN6) @jyellick ok
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=KzWKJBmYgMoiWzh82) @jyellick you mean the MSPDir for orderer config?
i tried MSPDir In ordererDefaults Section but it says unrecognized MSPDir setting.
Can you guide me to a config sample with orderer MSP defined?
@Amjadnz the MSP dir is encoded as part of the organization definition. The organization specified in the orderer section defines the TLS certs for ordering
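Roughly, in `configtx.yaml` (names and paths are placeholders):

```yaml
Organizations:
    - &OrdererOrg
        Name: OrdererOrg
        ID: OrdererMSP
        # configtxgen encodes the certs found under this directory
        # (including tlscacerts/) into the channel config, so point it
        # at the orderer org's MSP, not a peer org's.
        MSPDir: crypto-config/ordererOrganizations/example.com/msp
```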
Hey guys, I am deploying Fabric on K8S. For one channel with multiple orderers, I want to set up a LB for those orderers; however, since I don't know how different orderers coordinate with each other within the same channel, I wonder if the LB would interfere with the normal workflow.
Any ideas about this ?
why do you want to LB the orderers @LordGoodman ?
The peer itself can handle multiple orderers per channel so that you won't have to LB yourself
Anyway... assuming you really, really want to LB the orderers you'll have to give them a TLS certificate with the right subject alternative name as the external host of the LB otherwise the TLS handshake won't work
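The subject-alternative-name point can be illustrated with Go's stdlib hostname verification (toy self-signed certs; all hostnames are placeholders):

```go
package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"time"
)

// selfSigned builds a throwaway self-signed cert whose SANs are the
// given DNS names.
func selfSigned(dnsNames ...string) *x509.Certificate {
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: dnsNames[0]},
		NotBefore:    time.Now().Add(-time.Hour),
		NotAfter:     time.Now().Add(time.Hour),
		DNSNames:     dnsNames, // subject alternative names
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	cert, err := x509.ParseCertificate(der)
	if err != nil {
		panic(err)
	}
	return cert
}

func main() {
	// An orderer cert that only names the orderer itself:
	cert := selfSigned("orderer0.example.com")
	fmt.Println("direct:", cert.VerifyHostname("orderer0.example.com"))
	fmt.Println("via LB:", cert.VerifyHostname("lb.example.com")) // mismatch

	// Re-issuing with the LB's external hostname as an extra SAN fixes it:
	fixed := selfSigned("orderer0.example.com", "lb.example.com")
	fmt.Println("fixed :", fixed.VerifyHostname("lb.example.com"))
}
```

Same idea at the TLS layer: a client connecting to the LB's hostname rejects the handshake unless that hostname appears in the orderer cert's SANs.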
As far as I'm concerned, all orderers within the same channel have the same functionality; they should not look different to clients (on the same channel), thus I want to combine those orderers behind one entrypoint so clients only need to know that entrypoint. Of course, your concern about the TLS handshake must be taken into consideration.
so the reason is for clients?
yes
I am getting a timeout error when trying with --tls true --cafile
```2017-10-02 07:21:40.778 UTC [container] lockContainer -> DEBU 396 waiting for container(cove-peer0.org1.example.com-fabcar-1.0) lock
2017-10-02 07:21:40.778 UTC [container] lockContainer -> DEBU 397 got container (cove-peer0.org1.example.com-fabcar-1.0) lock
2017-10-02 07:21:40.790 UTC [dockercontroller] stopInternal -> DEBU 398 Stop container cove-peer0.org1.example.com-fabcar-1.0(Container not running: cove-peer0.org1.example.com-fabcar-1.0)
2017-10-02 07:21:40.792 UTC [dockercontroller] stopInternal -> DEBU 399 Kill container cove-peer0.org1.example.com-fabcar-1.0 (API error (500): {"message":"Cannot kill container cove-peer0.org1.example.com-fabcar-1.0: Container aa955cf22ac32c0b2f9a7f4e8b2e4c451e1ed2aec02b743ce7a7b7356ef90d67 is not running"}
)
2017-10-02 07:21:40.812 UTC [dockercontroller] stopInternal -> DEBU 39a Removed container cove-peer0.org1.example.com-fabcar-1.0
2017-10-02 07:21:40.812 UTC [container] unlockContainer -> DEBU 39b container lock deleted(cove-peer0.org1.example.com-fabcar-1.0)
2017-10-02 07:21:40.812 UTC [chaincode] Launch -> ERRO 39c launchAndWaitForRegister failed Timeout expired while starting chaincode fabcar:1.0(networkid:cove,peerid:peer0.org1.example.com,tx:a55e7ca5e857de077fbe882a84590e898df62151b4caecf189cf82a45e1e4847)
2017-10-02 07:21:40.812 UTC [endorser] callChaincode -> DEBU 39d Exit
2017-10-02 07:21:40.812 UTC [endorser] simulateProposal -> ERRO 39e failed to invoke chaincode name:"lscc" on transaction a55e7ca5e857de077fbe882a84590e898df62151b4caecf189cf82a45e1e4847, error: Timeout expired while starting chaincode fabcar:1.0(networkid:cove,peerid:peer0.org1.example.com,tx:a55e7ca5e857de077fbe882a84590e898df62151b4caecf189cf82a45e1e4847)
2017-10-02 07:21:40.813 UTC [endorser] simulateProposal -> DEBU 39f Exit
2017-10-02 07:21:40.814 UTC [lockbasedtxmgr] Done -> DEBU 3a0 Done with transaction simulation / query execution [de0bc056-97ba-4ee9-8edb-e3b5290b1a11]
2017-10-02 07:21:40.814 UTC [endorser] ProcessProposal -> DEBU 3a1 Exit
```
I am using this command
```docker exec -e "CORE_PEER_LOCALMSPID=org1.example.com" -e "CORE_PEER_MSPCONFIGPATH=/var/hyperledger/configs/peerOrganizations/org1.example.com/users/Admin@org1.example.com/msp" cli peer chaincode instantiate -o orderer0.example.com:7050 --tls true --cafile /var/hyperledger/configs/ordererOrganizations/example.com/orderers/orderer0.example.com/msp/tlscacerts/tlsca.example.com-cert.pem -C mychannel -n fabcar -v 1.0 -c '{"Args":[""]}' -P "OR ('Org1ExampleCom.admin','Org2ExampleCom.admin')"
```
@yacovm do you have any idea ?
why this is happening ?
check the container logs
I have pasted above
these are the logs of peer
right. Check the container logs.
I don't have any logs
in cli
do that again, then quickly do `docker ps -a` and then see the container instance. then do quickly `docker logs ` on the container
yes no logs
there are no logs in cli
@yacovm
I am executing these commands
```# Create the channel
docker exec -e "CORE_PEER_LOCALMSPID=org1.example.com" -e "CORE_PEER_MSPCONFIGPATH=/var/hyperledger/msp/users/Admin@org1.example.com/msp" peer0.org1.example.com peer channel create -o orderer0.example.com:7050 -c mychannel -t 10 -f /var/hyperledger/configs/channel.tx --tls $CORE_PEER_TLS_ENABLED --cafile /var/hyperledger/configs/ordererOrganizations/example.com/orderers/orderer0.example.com/msp/tlscacerts/tlsca.example.com-cert.pem
# Join peer0.org1.example.com to the channel.
docker exec -e "CORE_PEER_LOCALMSPID=org1.example.com" -e "CORE_PEER_MSPCONFIGPATH=/var/hyperledger/msp/users/Admin@org1.example.com/msp" peer0.org1.example.com peer channel join -b mychannel.block
```
```docker exec -e "CORE_PEER_LOCALMSPID=org1.example.com" -e "CORE_PEER_MSPCONFIGPATH=/var/hyperledger/configs/peerOrganizations/org1.example.com/users/Admin@org1.example.com/msp" cli peer chaincode install -n fabcar -v 1.0 -p github.com/hyperledger/fabric/examples/chaincode
docker exec -e "CORE_PEER_LOCALMSPID=org1.example.com" -e "CORE_PEER_MSPCONFIGPATH=/var/hyperledger/configs/peerOrganizations/org1.example.com/users/Admin@org1.example.com/msp" cli peer chaincode instantiate -o orderer0.example.com:7050 --tls $CORE_PEER_TLS_ENABLED --cafile /var/hyperledger/configs/ordererOrganizations/example.com/orderers/orderer0.example.com/msp/tlscacerts/tlsca.example.com-cert.pem -C mychannel -n fabcar -v 1.0 -c '{"Args":[""]}' -P "OR ('Org1ExampleCom.admin','Org2ExampleCom.admin')"
sleep 10
docker exec -e "CORE_PEER_LOCALMSPID=org1.example.com" -e "CORE_PEER_MSPCONFIGPATH=/var/hyperledger/configs/peerOrganizations/org1.example.com/users/Admin@org1.example.com/msp" cli peer chaincode invoke -o orderer0.example.com:7050 --tls $CORE_PEER_TLS_ENABLED --cafile /var/hyperledger/configs/ordererOrganizations/example.com/orderers/orderer0.example.com/msp/tlscacerts/tlsca.example.com-cert.pem -C mychannel -n fabcar -c '{"function":"initLedger","Args":[""]}'
```
right, so right after you do the `instantiate` - do `docker ps -a`
and then `docker logs` on the container
@yacovm I do, and there are no logs
```CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
d2960a54261e hyperledger/fabric-tools "/bin/bash -c 'sle..." 16 minutes ago Up 16 minutes 0.0.0.0:32784->9092/tcp cli
1a7cab2d472e hyperledger/fabric-peer "peer node start" 16 minutes ago Up 16 minutes 0.0.0.0:7056->7051/tcp, 0.0.0.0:7058->7053/tcp peer1.org1.example.com
d0f17cae9f73 hyperledger/fabric-peer "peer node start" 16 minutes ago Up 16 minutes 0.0.0.0:8056->7051/tcp, 0.0.0.0:8058->7053/tcp peer1.org2.example.com
8bf38912e328 hyperledger/fabric-peer "peer node start" 16 minutes ago Up 16 minutes 0.0.0.0:7051->7051/tcp, 0.0.0.0:7053->7053/tcp peer0.org1.example.com
22c3faaac740 hyperledger/fabric-peer "peer node start" 16 minutes ago Up 16 minutes 0.0.0.0:8051->7051/tcp, 0.0.0.0:8053->7053/tcp peer0.org2.example.com
b47fd2cbef73 hyperledger/fabric-orderer "orderer" 16 minutes ago Up 16 minutes 0.0.0.0:32783->7050/tcp orderer1.example.com
07b3ae569cff hyperledger/fabric-orderer "orderer" 16 minutes ago Up 16 minutes 0.0.0.0:32782->7050/tcp orderer2.example.com
14d05ef4d34c hyperledger/fabric-orderer "orderer" 16 minutes ago Up 16 minutes 0.0.0.0:32781->7050/tcp orderer0.example.com
fb702e649285 hyperledger/fabric-kafka "/docker-entrypoin..." 17 minutes ago Up 16 minutes 9093/tcp, 0.0.0.0:32780->9092/tcp kafka3
a55b637a09fa hyperledger/fabric-kafka "/docker-entrypoin..." 17 minutes ago Up 16 minutes 9093/tcp, 0.0.0.0:32779->9092/tcp kafka0
5d1ae3309575 hyperledger/fabric-kafka "/docker-entrypoin..." 17 minutes ago Up 16 minutes 9093/tcp, 0.0.0.0:32778->9092/tcp kafka2
098de03dc9db hyperledger/fabric-kafka "/docker-entrypoin..." 17 minutes ago Up 16 minutes 9093/tcp, 0.0.0.0:32777->9092/tcp kafka1
769531639fb1 hyperledger/fabric-couchdb "tini -- /docker-e..." 17 minutes ago Up 17 minutes 4369/tcp, 9100/tcp, 0.0.0.0:5984->5984/tcp couchdb
b45f2d96921c hyperledger/fabric-zookeeper "/docker-entrypoin..." 17 minutes ago Up 17 minutes 0.0.0.0:32776->2181/tcp, 0.0.0.0:32775->2888/tcp, 0.0.0.0:32774->3888/tcp zookeeper0
1b6ce9bd9a40 hyperledger/fabric-zookeeper "/docker-entrypoin..." 17 minutes ago Up 17 minutes 0.0.0.0:32773->2181/tcp, 0.0.0.0:32772->2888/tcp, 0.0.0.0:32771->3888/tcp zookeeper1
eeb4808d7471 hyperledger/fabric-zookeeper "/docker-entrypoin..." 17 minutes ago Up 17 minutes 0.0.0.0:32770->2181/tcp, 0.0.0.0:32769->2888/tcp, 0.0.0.0:32768->3888/tcp zookeeper2
acf9a0d1090a hyperledger/fabric-ca "sh -c 'fabric-ca-..." 17 minutes ago Up 17 minutes 0.0.0.0:7054->7054/tcp ca.example.com
```
I don't know what to tell you then
do you think that this command could cause any problem
```command: /bin/bash -c 'sleep 6000000000000000000'```
it is in the docker-compose cli
no
hmm
this is bad then
@Vadim do you have any idea why is this happening ?
I think @lclclc had a similar issue but don't know if he managed to fix it
Has joined the channel.
Has joined the channel.
Hi, one quick question. Where do I define the ip address of the orderer for the peers?
@aberfou When you bootstrap your ordering service, you should specify the orderers external IPs in the `configtx.yaml` `Orderer` section. When a peer is joined to a channel, it reads these addresses from the genesis block and uses them to connect to the ordering service.
@jyellick and what about the IP addresses of the anchor peers, is it the same?
also in configtx.yaml?
@aberfou The addresses of the anchor peers are generally configured after the creation of a channel. You may use `configtx.yaml` to specify the anchor peer addresses for the initial anchor peer update. You can see an example here: https://github.com/hyperledger/fabric-samples/blob/release/first-network/byfn.sh#L278-L286
but what should i do if i dont know the ip addresses at that time?
You may generate that update after the addresses have been allocated
The `byfn.sh` script generates them in advance, but this is not a requirement. You may edit `configtx.yaml` and regenerate those artifacts once the IPs are known.
but this will have no impact on the genesis.block which has been delivered to the orderer?
when i start the orderer i define a genesis.block
For the channel creation flow:
1. User creates the channel creation transaction using `configtxgen` and `configtx.yaml`
2. User submits the channel creation transaction via `peer channel create`, the orderer processes the transaction, generates a genesis block for the new channel, and the `peer channel create` command retrieves it, writing it to `
The genesis block you created directly is for the ordering system channel. Each channel has its own genesis block however, which is created during the channel creation process (step 2 above)
The `byfn.sh` script orders the above steps as:
1, 4, 2, 3, 5, 6, 7
because it knows the addresses of the anchor peers at the beginning of the process, and it is easier to script all of the `configtxgen` commands together. However, the more sensible flow is what I wrote above, and would allow you to set the addresses after provisioning the peers.
that means when I start the orderer I don't need the genesis.block?
i can send it anytime?
No, you need a genesis block to bootstrap the ordering system. This is for the ordering system channel, and it is what tells the orderer how to authorize and construct new channels.
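For example, roughly (profile name is a placeholder; the env vars point the orderer at the bootstrap block):

```shell
# Generate the ordering system channel genesis block once the orderer
# hostnames are known, then start the orderer from it:
configtxgen -profile SampleGenesis -outputBlock ./genesis.block
ORDERER_GENERAL_GENESISMETHOD=file \
ORDERER_GENERAL_GENESISFILE=./genesis.block \
orderer
```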
ok, so at that time I don't need the IP addresses of the anchor peers, nor of the orderer?
You should have the addresses of your orderers, but the anchor peers are not needed.
It is possible to reconfigure the orderer addresses, but this is not something we recommend as it can have other negative effects.
but how should I know the IP address of the orderer when my machines get created, for instance in AWS
Simply wait until the orderer machines have been created in AWS, then generate the genesis block, and then start the processes, bootstrapping them with the genesis block you just created.
You may of course (and I would recommend that you do) use hostnames, instead of IPs
yes that was also my plan
In that event, you may simply update the DNS to point those hostnames at the correct orderer IPs
sure
thanks a lot !
Has left the channel.
Has joined the channel.
Hi, simple question. If I lose all of my kafka and/or zookeeeper servers (the docker containers die and cannot be recovered) is it possible to recover the fabric network to a usable state?
You're in for a world of hurt. You still have the blockchain data in your ordering service nodes, so things could be worse, but you need to recreate the datastore in the Kafka brokers so that they get the right offsets, etc.
ok thanks!
What about if I have 1 orderer, is there a way to persist its data such that it can be recovered if it were to die?
The above is not a function of the number of orderers on your network.
So is there any way to persist the orderers data (e.g. mount as a docker volume)?
Of course.
is it /var/hyperledger/production/orderer?
By default, yes
great thanks
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=KtWNS34SARkzHNhQF) @bsteinfeld
As long as the mount does not involve vboxfs (VirtualBox) volumes. For example, if using Vagrant, make sure the docker volume mount does not point all the way to a folder on the host machine.
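The default location comes from the `FileLedger` section of `orderer.yaml` - roughly:

```yaml
General:
    LedgerType: file
FileLedger:
    # Default block store location; mount this path as a persistent
    # volume so the orderer's data survives container restarts.
    Location: /var/hyperledger/production/orderer
```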
What is the recommended log level for Orderer, should we set to info or debug ?
especially for Production env
@gauthampamu For production, I would recommend `INFO`
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=hABZ7zAiohKnL8j4w) @jyellick - This seems to work now. Thanks man and to @yacovm for the lead and info.
@jyellick and @yacovm I have another question - if you may. Now I'm trying to install the chaincode in dev-mode.
Channel Creation - Fine
Channel Join - Fine
Chaincode Starting - Issue
Command to start chaincode
```CORE_PEER_MSPCONFIGPATH=/tts/official/src/tts/ubn/test/sampleconfig/peerOrganizations/adx.ubn.ae/users/Admin@adx.ubn.ae/msp CORE_PEER_ADDRESS=peer0.adx.ubn.ae:7051 CORE_PEER_TLS_LOCALMSPID="AdxOrg" CORE_PEER_TLS_ENABLED=true CORE_PEER_TLS_ROOTCERT_FILE=/tts/official/src/tts/ubn/test/sampleconfig/peerOrganizations/adx.ubn.ae/tlsca/tlsca.adx.ubn.ae-cert.pem CORE_CHAINCODE_LOGLEVEL=debug CORE_CHAINCODE_ID_NAME=checkcert:0 ./checkcert```
Error is shown in the logs
```2017-10-03 11:31:28.658 GST [shim] SetupChaincodeLogging -> INFO 001 Chaincode log level not provided; defaulting to: INFO
2017-10-03 11:31:28.658 GST [shim] SetupChaincodeLogging -> INFO 002 Chaincode (build level: ) starting up ...
2017-10-03 11:31:28.658 GST [shim] userChaincodeStreamGetter -> ERRO 003 Error trying to read file content : open : no such file or directory
Error starting LegalCompanyMasters04 chaincode: Error trying to read file content : open : no such file or directory```
If I run the command at the prompt `./checkcert` without any parameters, with only the PEER env vars set, then this error shows up.
@Amjadnz does your chaincode read any files?
Hmm. Nothing of that sort.
Just a generic code to output the current cert.
let me check
check that the
CORE_PEER_MSPCONFIGPATH and other paths are correct
also, it seems as if some var is not set
the paths are fine, as you pointed out - I'm trying the sample example_02 code too.
Same issue.
Let me just post that snippet too.
`pwd` `/tts/official/src/github.com/hyperledger/fabric/examples/chaincode/go/chaincode_example02`
```CORE_PEER_MSPCONFIGPATH=/tts/official/src/tts/ubn/test/sampleconfig/peerOrganizations/adx.ubn.ae/users/Admin@adx.ubn.ae/msp CORE_PEER_ADDRESS=peer0.adx.ubn.ae:7051 CORE_PEER_TLS_LOCALMSPID="AdxOrg" CORE_PEER_TLS_ENABLED=true CORE_PEER_TLS_ROOTCERT_FILE=/tts/official/src/tts/ubn/test/sampleconfig/peerOrganizations/adx.ubn.ae/tlsca/tlsca.adx.ubn.ae-cert.pem CORE_CHAINCODE_LOGLEVEL=debug CORE_CHAINCODE_ID_NAME=checkcert:0 ./chaincode_example02```
Result:
```2017-10-03 11:41:55.825 GST [shim] SetupChaincodeLogging -> INFO 001 Chaincode log level not provided; defaulting to: INFO
2017-10-03 11:41:55.825 GST [shim] SetupChaincodeLogging -> INFO 002 Chaincode (build level: ) starting up ...
2017-10-03 11:41:55.825 GST [shim] userChaincodeStreamGetter -> ERRO 003 Error trying to read file content : open : no such file or directory
Error starting Simple chaincode: Error trying to read file content : open : no such file or directory```
I'm also checking the doc, the vars CORE_PEER_MSPCONFIGPATH, CORE_PEER_TLS_LOCALMSPID and others are not set
http://hyperledger-fabric.readthedocs.io/en/latest/peer-chaincode-devmode.html
`CORE_CHAINCODE_LOGLEVEL=debug CORE_PEER_ADDRESS=127.0.0.1:7052 CORE_CHAINCODE_ID_NAME=mycc:0 ./chaincode_example02`
yeah, tried that too. But just to point out, I have set up another set of orderers and peers from configtx.
My peer has TLS enabled.
So I have to provide the TLS details
try without tls
ok - let me check
notice the port is 7052 not 7051
@Amjadnz
also I don't think dev mode would even work with TLS @Amjadnz
https://gerrit.hyperledger.org/r/#/c/14113/
@yacovm in non-dev mode, which peer port does chaincode use to connect to peer?
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=6cciZpPngGxyiXqun) @yacovm I'm trying to debug timeout issues in docker swarm between peer and chaincode and I see only a connection to port 7051 from the chaincode.
or perhaps I'm missing something
is that the master branch?
or release?
images from dockerhub 1.0.2
ah so it is configurable
it may be 7051
@yacovm ok, thanks. You wouldn't by any chance be aware of any timeout problems in docker swarm? It runs flawlessly in plain containers, but as soon as it's on Swarm, everything times out (in the SDK-peer direction and the peer-chaincode direction). I think it's because of IPVS load balancer timeouts, but I'd expect that gRPC connections use keepalives? I, however, don't see any traffic with tcpdump. Really strange.
so we have lowered the keepalives to 1m in 1.0.3
stay tuned ;)
yes, but even in 1.0.2 it's 300 sec, right? The ipvs timeout is 900sec, so it should be quite enough.
what bothers me is that I observe no traffic unless it's triggered by me.
@Vadim - https://github.com/hyperledger/fabric/blob/release/sampleconfig/core.yaml#L372
There has always been a keepalive setting for peer-chaincode communication (outside of gRPC keepalives). As a matter of fact, the gRPC keepalives are not enabled between the peer and chaincode since this other keepalive mechanism already existed
@mastersingh24 thanks!
@mastersingh24 what about SDK? I see that sdk seems to keep connections to peer and orderer open and they also timeout eventually.
I'm talking about node-sdk
The SDKs do use the gRPC keepalive mechanisms
quick searching in fabric-sdk shows SDK limits set in EventHub, but I don't see the same done for peer and orderer connections
https://github.com/hyperledger/fabric-sdk-node/search?utf8=%E2%9C%93&q=keepalive&type=
perhaps some more background: whenever I send a TX to my Fabric network in Swarm, I see two more connections opened in IPVS: to the peer and to the orderer. When those time out, the first time I try to send a transaction I get "endpoint read failure"; the second time it recovers, but then, since the orderer connection is not working, I see "SERVICE_UNAVAILABLE", so the third time everything works.
@mastersingh24 appreciate the answer :)
so I thought that the SDK reuses the peer and orderer connection and after timeout, I get these errors.
@Vadim - You should also be able to use the keepalive for the peer / orderer connections as well - but I think you will also have to set the `grpc.keepalive_permit_without_calls` property in the SDK as well. 0 (false) and 1 (true). You want true ;)
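For the node SDK side of this, the gRPC channel arguments are passed as connection options when creating the peer and orderer objects. A sketch of what such an options map might contain (the option names are standard gRPC channel arguments; the exact values here are illustrative, not recommendations):

```json
{
  "grpc.keepalive_time_ms": 60000,
  "grpc.keepalive_timeout_ms": 20000,
  "grpc.keepalive_permit_without_calls": 1
}
```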
@mastersingh24 is there any generic method to set this for all gRPC connections?
also, do you know if there is any env var responsible for the chaincode keepalive?
`CORE_CHAINCODE_KEEPALIVE`
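For reference, `CORE_CHAINCODE_KEEPALIVE` overrides the `chaincode.keepalive` key in the peer's `core.yaml`. A docker-compose sketch of where it would be set (the service name and value are illustrative):

```yaml
services:
  peer0:
    image: hyperledger/fabric-peer
    environment:
      # overrides chaincode.keepalive in core.yaml (seconds; 0 disables it)
      - CORE_CHAINCODE_KEEPALIVE=60
```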
Has joined the channel.
Has joined the channel.
Is there any good tutorial over how to start a kafka based orderer
I have followed dc-orderer-kafka and dc-orderer-kafka-base, but I cannot seem to get the configs right
@t_stephens67: Have you brought up a Kafka cluster and ZK ensemble up w/o issues? (Forget about Fabric for a second.)
Has joined the channel.
Has joined the channel.
@kostas I think so I have 4 hyperledger/fabric-kafka containers and 3 hyperledger/fabric-zookeeper containers configured exactly like the dc-orderer-kafka.yml and dc-orderer-kafka-base.yml
Ah, that is not what I'm talking about.
Can you get a Kafka cluster and ZK ensemble up and have them communicate w/ each other w/o issues?
https://kafka.apache.org/quickstart
Once you've reached that stage, then you just follow the instructions here for the additional, Fabric-related steps and you're good to go: http://hyperledger-fabric.readthedocs.io/en/latest/kafka.html
I haven't done that, I'll give it a look over.
@kostas yes I have been able to follow that quickstart guide and everything seems to be working I will configure my cluster and then follow that readthedocs link
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=6cciZpPngGxyiXqun) @yacovm - thanks I would try that now.
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=KwScdtPfqEiKjiJLi) @yacovm -I would try disabling TLS on DEV MODE
Yeah, it is said in the documentation that we don't support dev mode with TLS
Don't you read the docs? :slight_smile:
:wink:
@kostas help....
So I followed the "bringing up a Kafka-based ordering service" guide to a T. I have 4 Kafka servers and 1 ZooKeeper (I figured 1 would be OK for now). When I run docker-compose up, my orderer says [channel: testchainid] connecting to the Kafka cluster and just keeps retrying over and over
@t_stephens67: Use a service like Pastebin and provide logs for the orderer. Set the log level to debug.
@t_stephens67 Have you tried using the sample cli producer and consumer apps? It is always a good idea to ensure that your Kafka cluster is functioning with the Kafka provided utilities prior to attempting to deploy fabric on top of it
@kostas can't access Pastebin at work
gist.github.com? Any Pastebin-like service will work.
nope
Save to a text file and upload here then.
http://pasted.co/
https://hastebin.com/
Are two additional services that might work
Nope those are all blocked
Message Attachments
Those are all the logs for orderer/kafka
Got it. Will review shortly. (I am honestly wondering who on earth would block `github.com` BTW.)
@t_stephens67: Can you please set the verbosity of the sarama logger to true?
I'm also interested in the output you get when you do `docker ps -a`.
@kostas sarama logger? how do I go about that? sorry I am a bit in over my head here.
@t_stephens67: My bad for not being more specific. https://github.com/hyperledger/fabric/blob/master/docs/source/kafka.rst#debugging
docker ps -a shows 3 peers 3 couchdb's an orderer and a ca
Where are your Kafka brokers and ZooKeeper node?
They are running in other putty windows :laughing:
Different PuTTY windows as in different VMs / ssh sessions?
ssh sessions
Alright, I'll need a few more details about your setup, and for you to enable the sarama debugger as per the link above.
My goodness that gives me a lot more logs let me look them over and see if I can get anything from it.
@t_stephens67: The logs repeat themselves because of attempts to reconnect.
As long as you give me something from the beginning we should be able to tell what's wrong.
@kostas it's a connection refused. If I had a dollar for every time I have encountered that on this project....
Are you trying to set up TLS connections?
No I do not have TLS set
Which profile are you using for the ordering service's genesis block?
Also, I will still need that log, and some details on your setup.
Profile is ComposerOrdererGenesis
Can you upload your `configtx.yaml` here?
I'll cut to the punchline and guess that you are most likely dealing with a Kafka configuration issue, as in you are not setting up your Kafka cluster correctly. This is what I'm trying to figure out via that log that I'm asking.
(Heads up that I'll go offline in 25m.)
It's the same configtx from https://github.com/hyperledger/fabric-samples/blob/release/first-network/configtx.yaml with solo changed to Kafka, 3 additional Kafka brokers, and only 1 org
I will also be offline in 25m
Wait https://github.com/hyperledger/composer-tools/blob/master/packages/fabric-dev-servers/fabric-scripts/hlfv1/composer/configtx.yaml
that was the wrong link
1. Upload your YAML file
2. Upload the first 300 lines or so (?) of the orderer log.
3. Can you describe your setup? For instance, do you spin up the containers in different VMs?
Yeah, I'll get that worked up for you tomorrow and DM you
Sounds like a plan.
I think SOLO will be fine considering it's a POC, but it would be nice to get some kind of consensus working
Understood. If you were able to setup a multi-broker Kafka cluster using the Apache Kafka tutorial w/o issues, there is no reason for the Kafka-based ordering service to fail. I am guessing we are dealing with a silly configuration issue that we should be able to figure out with the artifacts that you'll send over tomorrow.
You may need to set `advertised.host.name` and `advertised.port` in your Kafka brokers if you're running them on a Docker host that's a different machine, but you should have bumped into this issue when I suggested you try the plain Kafka tutorial.
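A sketch of the relevant broker settings in Kafka's `server.properties` (the hostname is a placeholder; newer Kafka versions express the same thing via `advertised.listeners`):

```properties
# What the broker tells clients to connect back to. Must be resolvable
# and reachable from the machine the orderer runs on, not the
# container-local address.
advertised.host.name=kafka0.example.com
advertised.port=9092
```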
Probably so. 5 weeks ago I had no idea what blockchain was and had never even heard of any of these tools, and now I'm building an entire fabric network and composer network.
and they do want this distributed across 3 VMs - not sure if my team will get that far in the next few weeks.
we are trying to get the network running in 1 VM for now
But you said you were running the Kafka containers in a different ssh session (which I took as a reference to a different VM).
oh no, I just logged into the same VM - I never spun up Kafka containers, I only ran the server in the terminal itself
Alright, let's see the logs tomorrow.
Has joined the channel.
Has joined the channel.
Has joined the channel.
Has joined the channel.
@kostas good morning. I took a step back from yesterday and attempted running step 6 with all the configurations from "Bringing up a Kafka-based ordering service". Looks like the min.insync.replicas config is messing up the "my-replicated-topic" topic
not sure if that will affect anything, and it still doesn't explain the connection refused error.
@t_stephens67: Good morning. I won't be able to help w/o those artifacts.
> not sure if that will affect anything and still doesn't explain the connection refused error.
Indeed, this won't affect the connectivity issue.
@kostas it connected.....
im soooooooooo confused
I just spun everything up to get you some fresh logs and it worked
I cant right now lol
Works for me.
I'm at a loss here, thanks for all the help
I am getting SERVICE_UNAVAILABLE while making a transaction against the orderer, with Kafka brokers
my channel looks like this when executing the transaction
https://hastebin.com/mijeqezama.js
```Channel {
_name: 'mychannel',
_peers:
[ Peer {
_options: [Object],
_url: 'grpc://localhost:7051',
_endpoint: [Object],
_request_timeout: 45000,
_endorserClient: [Object],
_name: null } ],
_anchor_peers: [],
_orderers:
[ Orderer {
_options: [Object],
_url: 'grpc://localhost:7050',
_endpoint: [Object],
_request_timeout: 45000,
_ordererClient: [Object] } ],
_kafka_brokers: [],
_clientContext:
Client {
_cryptoSuite:
CryptoSuite_ECDSA_AES {
_hashAlgo: 'SHA2',
_keySize: 256,
_cryptoKeyStore: [Object],
_curveName: 'secp256r1',
_ecdsaCurve: [Object],
_hashFunction: [Function],
_hashFunctionKeyDerivation: [Object],
_hashOutputSize: 32,
_ecdsa: [Object] },
_channels: { mychannel: [Circular] },
_stateStore: FileKeyValueStore { _dir: '/tmp/fabric-client-kvs_Org1ExampleCom' },
_userContext:
User {
_name: 'admin',
_roles: null,
_affiliation: '',
_enrollmentSecret: '',
_identity: [Object],
_signingIdentity: [Object],
_mspId: 'org1.example.com',
_cryptoSuite: [Object] },
_msps: Map {},
_devMode: false },
_msp_manager: MSPManager { _msps: {} } }```
does anyone know why this happens to me ?
@gentios Please use a service like http://hastebin.com and do not post such large snippets in this channel.
I have edited your post. The information you posted is not sufficient to make a diagnosis. I recommend you look at the conversation above in the channel between @t_stephens67 and @kostas for some tips on how to perform Kafka debugging. In short, `SERVICE_UNAVAILABLE` usually means there is something wrong with your Kafka cluster (or it is still being initialized). Ensure that you can successfully connect to the Kafka cluster using the Kafka sample cli producer/consumer, and only then attempt to use fabric with it.
I am reading the following document about the orderer design, and have some questions, please advise. Thanks! https://docs.google.com/document/d/1vNMaM7XhOlu9tB_10dKnlrhy5d7b1u8lSY8a-kVjCO4/edit
1. Please let me know whether my following understanding is correct. An OSN in this document represents an orderer. Each orderer receives transactions from one or multiple peers. Then, it batches the transactions it has received into blocks, and sends each block to a Kafka partition. Is that correct?
2. There is an example in `Problem 3`, saying
```Consider now the case where you have a batchTimeout of 1 second, and two OSNs. A batch has just been cut and a new transaction comes in via, say, OSN1. It gets posted to the partition.```
Why does the new transaction get posted to the partition by the OSN immediately, instead of waiting for a batch or the timeout?
Thanks again!
> Each orderer receives transactions from one or multiple peers
Generally transactions are submitted by clients, not peers.
Has joined the channel.
> Then, it batches the transactions it has received into blocks, and sends each block to a Kafka partition. Is that correct?
No. For Kafka, the OSNs send the transactions to Kafka to receive a total order first; then each OSN deterministically cuts a batch and forms a block locally
Hi, is it possible to enable GRPC trace at docker container level?
yep. just turn on the CORE_LOGGING_GRPC env var, @jy that should do the trick.
> Why the new transaction gets posted to the partition by OSN immediately, instead of waiting for a batch or the timeout?
This is not the architecture. Messages are ordered first, then blocks are cut.
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=JnE7bJHKgbZGKQnxi) @yacovm thank you!
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=iWCoPPHaK5vdomj3t) @jyellick each OSN will generate the block?
@asaningmaxchain Correct, the block generation is deterministic, to guarantee that the same block is generated at each OSN.
@jyellick so in the genesis block, can it define many orderer configs?
so how do you define many orderer configs?
One ordering service configuration for the time being.
You've updated the original message, so my response now doesn't make sense.
@kostas i am wrong
> so how define many orderer config
What do you wish to achieve precisely?
Have a peer connect to multiple ordering services?
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=TCtdmm6YwFKgyx3j4) @kostas no
because I see the doc https://docs.google.com/document/d/1vNMaM7XhOlu9tB_10dKnlrhy5d7b1u8lSY8a-kVjCO4/edit - it draws many OSNs, so I want to know how to define many OSNs when the orderer starts
Have a peer connect to multiple orderers within the same ordering service?
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=Y2bYKuK2koqu6m7sq) @kostas no
@kostas I got it - it means the OSNs can share the Kafka cluster?
How does Fabric guarantee that different orderers will cut the same blocks? The description in the design doc is somewhat confusing to me; can you please illustrate the key points? Thanks
@kostas https://gerrit.hyperledger.org/r/#/c/12953/ I don't know why it can't be merged? Please take a look
> i got it, it means the OSNs can share the Kafka cluster?
Correct.
> https://gerrit.hyperledger.org/r/#/c/12953 i don't know why can't be merge?please take a look
I will review again later today. You haven't addressed this yet: https://gerrit.hyperledger.org/r/#/c/12953/5/orderer/sample_clients/broadcast_msg/client.go#85
```
Not Found
The page you requested was not found, or you do not have permission to view this page.
```
@asaningmaxchain: Try again, updated the link.
> How Fabric guarantees that different orderers will cut the same blocks? The description in the design doc is somehow confusing to me, can you please illustrate the key points? Thanks
@qizhang: In short, the OSNs send a "time to cut block 9" message, and once they read that message from the partition they know it marks the end of block 9.
Subsequent identical messages will be ignored.
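The mechanism above can be sketched as follows. This is an illustrative model of the time-to-cut (TTC) logic, not the actual orderer code: every OSN reads the same totally-ordered stream, the first TTC for the pending block cuts it, and later TTCs for the same block are ignored:

```go
package main

import "fmt"

// A message read from the ordered partition: either a normal transaction
// or a "time-to-cut" (TTC) marker naming the block it proposes to close.
type msg struct {
	tx       string
	ttcBlock int // 0 means a normal transaction; >0 is a TTC for that block
}

// consume replays the ordered stream. The first TTC for the next block
// number cuts a block; later TTCs for the same (or an older) block number
// are duplicates from slower OSNs and are ignored. Every OSN sees the
// same stream, so every OSN cuts identical blocks.
func consume(stream []msg) [][]string {
	var blocks [][]string
	var pending []string
	next := 1 // number of the block we are currently filling
	for _, m := range stream {
		if m.ttcBlock == 0 {
			pending = append(pending, m.tx)
			continue
		}
		if m.ttcBlock == next { // first TTC for this block: cut it
			blocks = append(blocks, pending)
			pending = nil
			next++
		} // m.ttcBlock < next: stale duplicate, ignore
	}
	return blocks
}

func main() {
	stream := []msg{
		{tx: "tx1"},
		{ttcBlock: 1}, // one OSN's timer fired
		{ttcBlock: 1}, // another OSN's timer fired too: ignored
		{tx: "tx2"},
		{ttcBlock: 2},
	}
	fmt.Println(consume(stream))
}
```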
Which part of the doc was confusing to you exactly? I will gladly edit it.
@qizhang
> How Fabric guarantees that different orderers will cut the same blocks? The description in the design doc is somehow confusing to me, can you please illustrate the key points? Thanks
There are rules established for a channel in the channel configuration. These rules indicate the max number of messages in a block, the maximum size of a block, the preferred size of a block, as well as the batch timeout. In the Kafka case, the first 3 rules may be applied deterministically with no real effort. For the batch timeout, it works as @kostas describes.
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=CraiKefGiAW5qe2Gi) @kostas I'll modify it right now
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=ru8F5JcqEA8NvBXmj) Thanks @kostas I am now stuck at the description of "Problem 3", especially the example illustrated there. I do not quite catch the point why `We are now in a situation where the OSN has cut a block with both of these transactions, whereas OSN2 cut block with just the first of them`.
Thanks @jyellick what does it mean by 'with no real effort'?
@qizhang I mean that, for instance, if the rule is that "Blocks may contain at most 10 messages", then if all orderers see 10 ordered messages, they may trivially cut those 10 into a block. There is no need to synchronize between them. This contrasts with the batch timeout, where one's timer might fire when 8 transactions have arrived, but another's might fire when 9 have arrived. In this case, the OSNs must coordinate to pick one so that the block contents are the same.
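The count-based rule can be sketched like this (an illustrative model, not Fabric's actual block cutter): since every OSN reads the same totally-ordered stream, slicing it into fixed-size batches needs no coordination at all:

```go
package main

import "fmt"

// cutByCount splits an ordered stream of transactions into blocks of at
// most maxCount messages. Because every OSN reads the same totally-ordered
// stream from the Kafka partition, this rule alone yields identical blocks
// on every node, with no coordination.
func cutByCount(txs []string, maxCount int) [][]string {
	var blocks [][]string
	for len(txs) >= maxCount {
		blocks = append(blocks, txs[:maxCount])
		txs = txs[maxCount:]
	}
	// Any remainder stays pending until more messages arrive or the
	// batch timeout fires (which needs the "time-to-cut" coordination).
	return blocks
}

func main() {
	ordered := []string{"tx1", "tx2", "tx3", "tx4", "tx5"}
	for i, b := range cutByCount(ordered, 2) {
		fmt.Printf("block %d: %v\n", i, b)
	}
}
```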
> Thanks @kostas I am now stuck at the description of "Problem 3", especially the example illustrated there. I do not quite catch the point why `We are now in a situation where the OSN has cut a block with both of these transactions, whereas OSN2 cut block with just the first of them`.
@qizhang: How many transactions do you think OSN1 should end up with? How many for OSN2? I can walk you through this example step-by-step.
@jyellick I see, but usually, we have the "max number of messages in a block" and "batch timeout" configured simultaneously, right?
@qizhang Correct. If messages are arriving fast enough, then blocks will always be cut simply based on "max count of messages per block". If messages are not arriving fast enough, then the batch timeout mechanism described by @kostas will cause the block to be cut. In the former case, there is no need for orderers to communicate with each other, because they will all reach the same decision by themselves. In the latter case, communication occurs via the "time to cut" message as described by @kostas
@kostas my understanding is that, at t=5s, OSN2 cut a block containing tx1, and at t=5.6s, OSN1 cut a block that also contains tx1. Then, tx2 gets posted to the partition. But why is `it is read by OSN2 @t=6.2 and OSN1 @t=6.5`? Since the timeout is 1s, and OSN2 cut a block at t=5s, it should cut the next block at t=6s; similarly, OSN1 should cut the next block at t=6.6s
Why would OSN2 cut a block at t=5s?
That's when tx1 first came in, kicking off the timer.
Until then the timer was idle.
The example says `OSN2 reads it at time t=5s and sets a timer that will fire at t=6s`, thus I am assuming OSN2 cuts a block at t=5s
The block will be cut when the timer expires no?
At time t=5s the OSN reads the transaction.
And adds it to the pool for the next, upcoming block.
So 'read' means the OSN gets a transaction from the client and posts it to the Kafka?
According to the doc:
> A batch has just been cut and a new transaction comes in via, say, OSN1. It gets posted to the partition. OSN2 reads it at time t=5s
So OSN1 got the transaction from a client, posted it to Kafka, and then OSN2 read it from the Kafka partition.
I see. Does an OSN read transactions from Kafka partition only when it tries to cut a block?
What other uses do you have in mind?
I do not have other uses in mind, but want to be clear about the 'read' operation. My understanding is that an OSN reads transactions from the Kafka partition, and tries to cut a block from there whenever necessary. Then, after a block is cut, the OSN will read more transactions, cut another block, and so on and so forth. Right?
Correct.
@kostas @jyellick https://gerrit.hyperledger.org/r/#/c/12953/ I have fixed the problem, can you take a look if you have time?
(Will do.)
Just replied with some notes
@jyellick ok, I replied to you
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=vYEw4DnfjPTpcHEvk) @jyellick @kostas can you suggest some method to inspect where the problem is?
@asaningmaxchain Follow the topic creation and message production/consumption steps from https://kafka.apache.org/quickstart to ensure the cluster is working properly
@jyellick it means that I can use the command `docker exec -it kafka bash` to get a shell and test that the Kafka cluster was set up successfully?
@asaningmaxchain If you are doing all of your networking locally in docker, yes. Otherwise, it should be done from whatever remote host the OSN will be executing on.
i got it
@asaningmaxchain: You're almost good to go with the changeset, just a minor fix needed.
Checking in on this again... any news on the PBFT consensus backend, or other backends as far as release dates?
We’re looking at raft for 1.2 and BFT for 1.3.
@kostas Not only the peer, but also the orderer contains a copy of the ledger for each channel they participate in - is that correct? If yes, is there any difference between the ledger maintained by an orderer and the ledger maintained by a peer, if the orderer and the peer belong to the same channel? Thanks
@qizhang Perhaps more accurately, the orderer and the peer both contain a copy of the blockchain. The orderer does not maintain a copy of the corresponding state database, nor does it make any attempt to validate whether a transaction would successfully modify the state database (the peer obviously does both)
@jmcnevin if they stick to the release schedule proposed at the chicago hackfest it will be end of Q3 2018
@jrosmith thanks
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=3rBczux9cv36k4ADr) @kostas ok, I am sorry about that - it was too late to fix it
@kostas with Raft and BFT , will there be any change in the existing procedure of endorsement and ordering ?
@vu3mmg In the case of Raft, I would expect no changes at all. In the case of BFT, the client may need to submit to more than one OSN, depending on the trust model between the client and the OSNs
Thank you @jyellick
@jyellick for raft, will the current orderer be the leader? And will we have multiple orderers?
@vu3mmg Raft is a leader based consensus algorithm. The non-leader orderers would forward transactions to the leader, and the leader would order them into blocks
OK. Then we need multiple orderers, out of which one will be the leader? Or will the current peers have the ordering capability?
I am trying to build a mental model of deployment. The reason is, if I deploy v1 Fabric and then change to Raft, how can I upgrade the network?
I have a question on endorsement. If a peer is down for a couple of hours and it is a member of the endorsement policy, does that mean it will not process or approve any transactions until it synchronizes the ledger state with the rest of the peers in the channel?
For example, peer1, peer2 and peer3 are in the same channel and peer2 is down for 2 hours. When peer2 is restarted, is it safe to say peer2 will not endorse until it synchronizes, or will it return old read and write sets in the endorsement response?
@gauthampamu If a peer is out of sync, it will return endorsements over old key versions, which will not match the endorsements of the other peers
@vu3mmg From a deployment perspective, hopefully Raft will be no different from Kafka, the clients should not need to know the details of the consensus implementation in this case
@jyellick one more naive query. This means today I am deploying a Kafka-based orderer, and I have a growing world state DB. After 3 months, if I want to change to Raft, will my peers and state DB be transparent to this, and will the network keep using today's genesis.block when I upgrade to Raft?
@gauthampamu I know this has been answered already but please consider #fabric-peer-endorser-committer or #fabric for these questions in the future.
@vu3mmg: Switching ordering types on an existing blockchain network is not in the works. (Yet at least.)
@kostas This means we need to reset the network and reload the data when changing the ordering type?
Correct.
Thank you.
I am getting this error in orderer
```
failed to order the transaction. Error codeundefined
error : failed to order the transaction. Error codeundefined
```
I am using kafka clusters
and don't know why this happens
any help ?
I have used this yaml to deploy the network
https://github.com/hyperledger/fabric/blob/release/test/feature/docker-compose/docker-compose-kafka.yml
can you paste the log of the orderer node?
@asaningmaxchain yes I am using orderer0.example.com
and here are my logs
https://hastebin.com/ufeferonev.rb
@asaningmaxchain I just made a new invoke and here are the new logs
https://hastebin.com/yirabufeji.hs
@gentios i can't open the url due to the network,can you use the github gist
@asaningmaxchain yes just a second
@asaningmaxchain here https://gist.github.com/gentios/5f1eae0a6af63794b552d7f299d65636
you can use the fabric master branch, it provides a sample of a kafka cluster
can you send me a link
@asaningmaxchain
ok,https://github.com/hyperledger/fabric/tree/master/examples/e2e_cli
@asaningmaxchain ok thank you
I am getting an error of `ENDORSEMENT_POLICY_FAILURE`
@asaningmaxchain my network config looks similar to that what you provided
and still got this problem
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=QnTYTqb3xABRBxR2y) @gentios are you sure?
yes I have checked
anyone ?
@asaningmaxchain how do I bring up the network for the example that you sent me?
please follow http://hyperledger-fabric.readthedocs.io/en/latest/getting_started.html
@asaningmaxchain but that is not the same as the script
@gentios take some time to look at it, and then you will learn a lot
@asaningmaxchain thank you for the help
guys I am trying to deploy
the e2e_cli
and I get this error
```
Channel name : mychannel
Check orderering service availability...
Attempting to fetch system channel 'testchainid' ...3 secs
Attempting to fetch system channel 'testchainid' ...7 secs
Attempting to fetch system channel 'testchainid' ...10 secs
Attempting to fetch system channel 'testchainid' ...14 secs
Attempting to fetch system channel 'testchainid' ...17 secs
Attempting to fetch system channel 'testchainid' ...21 secs
Attempting to fetch system channel 'testchainid' ...24 secs
Attempting to fetch system channel 'testchainid' ...27 secs
Attempting to fetch system channel 'testchainid' ...30 secs
Attempting to fetch system channel 'testchainid' ...33 secs
Attempting to fetch system channel 'testchainid' ...36 secs
Attempting to fetch system channel 'testchainid' ...39 secs
Attempting to fetch system channel 'testchainid' ...42 secs
Attempting to fetch system channel 'testchainid' ...46 secs
Attempting to fetch system channel 'testchainid' ...49 secs
Attempting to fetch system channel 'testchainid' ...52 secs
Attempting to fetch system channel 'testchainid' ...55 secs
Attempting to fetch system channel 'testchainid' ...58 secs
Attempting to fetch system channel 'testchainid' ...61 secs
2017-10-05 12:21:56.819 UTC [msp] GetLocalMSP -> DEBU 001 Returning existing local MSP
2017-10-05 12:21:56.819 UTC [msp] GetDefaultSigningIdentity -> DEBU 002 Obtaining default signing identity
2017-10-05 12:21:56.825 UTC [channelCmd] InitCmdFactory -> INFO 003 Endorser and orderer connections initialized
2017-10-05 12:21:56.825 UTC [msp] GetLocalMSP -> DEBU 004 Returning existing local MSP
2017-10-05 12:21:56.825 UTC [msp] GetDefaultSigningIdentity -> DEBU 005 Obtaining default signing identity
2017-10-05 12:21:56.825 UTC [msp] GetLocalMSP -> DEBU 006 Returning existing local MSP
2017-10-05 12:21:56.825 UTC [msp] GetDefaultSigningIdentity -> DEBU 007 Obtaining default signing identity
2017-10-05 12:21:56.825 UTC [msp/identity] Sign -> DEBU 008 Sign: plaintext: 0AB3060A1708021A0608E4C7D8CE0522...6C40AE0AA64012080A021A0012021A00
2017-10-05 12:21:56.825 UTC [msp/identity] Sign -> DEBU 009 Sign: digest: 288EA23BAFAEA8DCB43E33AF84D295CA6A60B1351A64FAFA827BD6E4D2FE8188
2017-10-05 12:21:56.828 UTC [channelCmd] readBlock -> DEBU 00a Received block:0
2017-10-05 12:21:56.828 UTC [main] main -> INFO 00b Exiting.....
!!!!!!!!!!!!!!! Ordering Service is not available, Please try again ... !!!!!!!!!!!!!!!!
================== ERROR !!! FAILED to execute End-2-End Scenario ==================
```
https://hastebin.com/oyozefagej.sql
@gentios Please do not post large chunks of logs like this.
@jyellick I think that this is needed, otherwise nobody will know why the error is happening
@gentios Use a service like http://hastebin.com http://gist.github.com or http://pastebin.com , then post a link to the logs
ok
anyway do you know why this error occurs
The ordering service is not successfully starting, usually because the Kafka cluster is not configured correctly, or the orderers are not configured to connect to the cluster correctly.
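For reference, the ordering service type and the Kafka brokers the orderers connect to come from the `Orderer` section of the `configtx.yaml` used to generate the genesis block (the broker hostnames here are illustrative, matching the compose files above):

```yaml
Orderer: &OrdererDefaults
    # "solo" for development, "kafka" for production
    OrdererType: kafka
    Addresses:
        - orderer.example.com:7050
    Kafka:
        # Must list brokers actually reachable from the orderer
        # containers; a wrong list is a common cause of this failure.
        Brokers:
            - kafka0:9092
            - kafka1:9092
```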
I have just used `git clone`
nothing changed
from the master
I am trying for a long time to configure kafka clusters but without success
Are you certain nothing at all has changed? The `e2e_cli` runs many times a day in the CI infrastructure, it does work.
from the docker files nothing
just changed this
FABRIC_CFG_PATH=C:/Users/.../Desktop/e2e_cli
because it doesn't work with export FABRIC_CFG_PATH=$PWD
and instead of download-dockerimages.sh
have used this
https://hastebin.com/uladezihuf.bash
to download the configtxgen and docker images of 1.0.0
I see you are running on Windows. This is much more challenging and temperamental than Linux. Did you follow the guide here http://hyperledger-fabric.readthedocs.io/en/latest/prereqs.html precisely? The versions of the tools you have installed matter. If at all possible, I would recommend you work inside of Virtualbox with the devenv image provided by `hyperledger/fabric/devenv`
```
$ docker --version
Docker version 17.09.0-ce, build afdb6d4
$ docker-compose --version
docker-compose version 1.16.1, build 6d1ac219
```
@jyellick I think I have installed all the prerequisites because
I have used the fabric-samples/basic network before
and everything worked fine
I do not have access to any Windows machines, so I cannot attempt to reproduce. I can say that the `e2e_cli` works consistently on Linux. The error you pasted indicates something wrong with the state of the Kafka cluster. I recommend that you inspect the container logs for the Kafka brokers and Zookeepers to see if anything looks wrong. If you can find nothing in the container logs, I recommend that you start a Virtualbox devenv image, and run the `e2e_cli` in there. Then you can compare the steps between the two to identify your problem.
hi, when I am getting the transaction history using the transaction id, I see the block number with fields `low` and `high` - what do these fields mean?
> I am trying for a long time to configure kafka clusters but without success
@wy This is a better question for #fabric-ledger or #fabric-peer-endorser-committer
Has joined the channel.
What consensus protocols are there in Fabric, and how do I configure Fabric to switch from one protocol to another? Thanks!
@qizhang There is the PoC consensus 'solo', and the production ready 'kafka'. You may not switch between them.
Hey @jyellick , I wanted to know the pros and cons of using a solo orderer versus multiple orderers for multiple organizations. Where and how does kafka come into the picture? Don't normal data pipelines with grpc used in between peers (gossip) do the job?
@CodeReaper Solo should only ever be used for development/PoC. Kafka should be used for any production scenarios. Running multiple solo orderers is equivalent to running multiple ordering services. This is a scenario which ultimately is planned to be supported, but it is nothing I can recommend at the moment.
> Normal data piplines with grpc used in between peers(gossip) doesnt do the job?
Gossip will replicate data, but if an ordering service becomes unavailable, new transactions cannot commit.
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=SdygyE7EuxQ9JFxZ9) @jyellick Any specific roadblock we might encounter while building the kafka orderer service at this moment?
@CodeReaper No, Kafka should be good to go as a production ordering system.
Thanks @jyellick , any documentaion we can go through?
https://github.com/hyperledger/fabric/blob/master/docs/source/kafka.rst is a good place to start
@jyellick since I am running windows I do have a lot of problems with kafka
do you have any info on how to start the virtualbox devenv
I get this in kafka cluster trying to run the e2e_cli example
`OpenJDK 64-Bit Server VM warning: INFO: os::commit_memory(0x00000000c0000000, 1073741824, 0) failed; error='Cannot allocate memory' (errno=12)`
Whatever you're running this on doesn't have enough memory.
I am running in ubuntu virtualbox
@kostas but I got this error in windows too
Understood, but I don't see what this has to do with the memory that is assigned to the executing process.
Can you provide a few more details on your setup?
yes I have just cloned the e2e_cli
@gentios How much RAM does your system have?
the virtualbox ?
1GB
Right.
And you're trying to bring up, what, 4 Kafka brokers and 3 ZK nodes?
yes
This is absolutely not enough RAM for all of these processes.
I just increased the ram to 3gb
now I will retry
Before you retry this specific experiment --
@kostas should I increase the docker ram ?
Based on your questions here yesterday, and your StackOverflow thread from a couple of weeks ago (to which I responded, only to see the thread deleted), it's key to ask this:
Have you tried experimenting with a multi-broker Kafka setup as described in https://kafka.apache.org/quickstart ?
not really just some basic stuff
I made a network from scratch
for hyperledger with kafka etc..
Not sure I follow.
anyway, I just want to set up a network with kafka clusters for production purposes
That's the thing. It's not an "anyway" kind of situation.
Let me rephrase the question:
yes please
Have you attempted to run a multi-broker Kafka/ZK setup? (Forget about Fabric now.)
no
As @jyellick did yesterday, as I did in that S/O thread that you deleted two weeks ago, and as I did today as well -- we're all explaining to you that unless you attempt to build this in an iterative fashion you'll fail.
There's a reason why this exists: https://github.com/hyperledger/fabric/blob/master/docs/source/kafka.rst#caveat-emptor
At the very least, spend some time on this: https://chat.hyperledger.org/channel/fabric-orderer?msg=LfW5a7xSZdbfbaGFK
@kostas I thank you for the help really
I am not new to fabric - I have been working with it for 2 months now
so I am learning step by step
since there is not much community I have found this chat for help
This doesn't address my point however. I get _missing_ an instruction. But if we point you to an instruction repeatedly and you ignore it, then we end up running in circles.
and trying to help others
@kostas I haven't ignored your instructions or anybody else's
those have helped me a lot to get through this
I am trying to get help and help others as much as I can
that's all
since there is not a big community and a lot of docs for hyperledger fabric
I have come here to this chat for support
GM @jyellick @muralisr
As part of resilience verification, we brought down the orderer node (running in solo) with data-persistence enabled. Now
1. we are receiving following warning message in the orderer log.
`Handle -> WARN 195 Error reading from stream: rpc error: code = Canceled desc = context canceled`
2. Peer node has joined 4 channels however it is receiving block on only 1 channel.
Can you help me troubleshoot this error? I haven't restarted the Peer Node, but the Orderer node was restarted multiple times, and the error is the same.
@rahulhegde I would begin by asserting that there is no formal support for solo and crash resiliency. I see no explicit reason why it should not work (with downtime), but it is untested.
The error you specify in (1) is caused when a client hangs up before appropriately closing the stream.
Is this for `Broadcast` or `Deliver`?
This is Broadcast `[orderer/common/broadcast]` and client is the Peer CLI.
Can you confirm that this error is new? I expect you would have seen this before the restart as well?
I am sure I have seen this error before - not to blame this error. I see 16 lines of this message getting printed, which corresponds to 16 channels. However, out of the 16 channels (or 16 ledger transactions, 1 per channel), only a few are passed to the Peer (as said in Point 2).
Have you looked at the log for the peer which is only receiving some blocks?
Unfortunately the Peer is running at INFO; the only print I receive is ` Created block [4134] with 1 transaction(s) `. I can plan to restart the peer node, but I worry it will hide the problem.
` 2017-10-06 15:49:37.646 CEST [kvledger] Commit -> INFO 63e6 Channel [ChannelX]: Created block [4134] with 1 transaction(s) `
And the orderer is running at debug?
(Which release version of orderer and peer?)
Orderer is running in INFO and can be bumped up. This is GA Release.
GA v1.0.0? (or v1.0.1, v1.0.2, v1.0.3?)
1.0.0
Do I restart the orderer in DEBUG? Can this give a hint?
I was hoping to confirm that the new blocks were being created at the orderer
This would require `DEBUG` at the orderer
I can check the file-system block-date-time?
w/o restarting
Yes, this is a good idea
yes, they are getting created for all 16 channels.
@yacovm For v1.0.0, at INFO level, is there a way to determine whether a peer is a leader for a channel. Per @rahulhegde's problem above, his peer is a member of four channels, but is only getting blocks for one. The blocks are being created at the orderer. My first thought is that perhaps the channel leader for the 3 non-functioning channels is broken and not being re-elected or similar?
Actually, @rahulhegde what are your `core.yaml` settings for `peer.gossip.useLeaderElection` and `peer.gossip.orgLeader` for your peer which is missing blocks?
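For context, the two settings in question live under `peer.gossip` in `core.yaml`; a sketch of one valid combination (dynamic election on, static leader off):

```yaml
peer:
    gossip:
        # Dynamically elect one peer per org to pull blocks from ordering.
        useLeaderElection: true
        # Statically mark this peer as the org leader (it connects to the
        # orderer unconditionally). Don't set both of these to true.
        orgLeader: false
```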
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=TDkhpTuec9Ee4NF3Q) ‘solo’ means only one orderer, without any consensus protocol, right?
Correct.
@jyellick IIRC we made it info at v1.0.x, so for v1.0 you need debug :disappointed:
But you can use tcpdump
Or netstat
To see if the peer has any connections
We dont share connections across channels
So each channel is a connection of its own
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=AsfDBjXuhwyY3hgXs) @jyellick
Sorry - had to run through meetings - Here is our current setup
There is only 1 Peer per Organization
Every Peer in the Organization is set with ` CORE_PEER_GOSSIP_ORGLEADER=true `
Can you use netstat or similar to see the number of connections from the peer to the ordering service?
and ` useLeaderElection: false ` is the default setting picked up from ` /etc/hyperledger/fabric/core.yaml `
and this is a multi-host setup using docker swarm. Let me check netstat - do you want me to run it from the orderer container?
as per yacov's comment - we should see 1 connection per channel, but there are many connections not in the ESTABLISHED state
I thought that using gRPC we have 1 connection per Peer, with multiplexing across all channels subscribed to by that peer :(
@rahulhegde gRPC may multiplex streams over a single socket, though the peer opens a new socket per channel today
I'm not sure what the benefit to running it against the orderer would be, the peer should be sufficient
how will the peer initiate and restore these channel connections with the Orderer - without restarting the Peer? Or have we not configured our system correctly?
I see some peers have established connections for some channels (but not all channels).
@muralisr ^^
If `peer.gossip.orgLeader` is set to true, that peer should unconditionally be trying to connect to the orderer. If the peer has no open connections to the orderer, then this indicates a bug in the peer to me.
Do you want us to capture logs, as we want to release the environment to the testers?
Even with debug enabled, the peer logs around the `Deliver` call were unfortunately sparse. I would hate for you to go through the trouble of reproducing at a higher log level if the logs will not be helpful. I believe logging enhancements were pushed into 1.0.2 (though I will double check). Any chance you can use a more recent fix version?
I see another ERROR in peer logs ` 2017-10-05 18:34:59.855 CEST [eventhub_producer] Chat -> ERRO 618b error during Chat, stopping handler: rpc error: code = Canceled desc = context canceled ` - this looks like eventhub and is it benign too?
Yes, I believe that is benign.
I just confirmed, there is a commit:
```
commit 0c01aaa83d2bd588d70b227b6f52dc4ce85e9647
Merge: aa7883fad 9d558532a
Author: David Enyeart
```
Ah, @rahulhegde actually, looking at the code, it looks like if the peer cannot make contact with any of the ordering nodes for a long enough period of time, it gives up
Maybe @yacovm or @C0rWin can confirm what the behavior for the peer is when all OSNs are unavailable for a prolonged period of time
You said it yourself... it gives up.
@yacovm Yes, I saw that it gave up, and returned an error. I did not know if somewhere else in the stack the deliver client is re-initialized and started again, or if the only remedy is to restart the peer.
So - in the case of leader election, it will remedy itself as a side effect.
Would you recommend that peers run with leader election disabled? This seems to be the default we give in `core.yaml`, but enabling leader election seems generally to work around many problems?
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=J24EvuuChkEx5phCr) @yacovm
what is the definition of prolonged - a retry count/interval?
When leader election is on, if the peer contacted the ordering service it means it's a leader.
So after it gives up, it gives up leadership for a period of time; then, if it's alone, it will become a leader again
it's 5 minutes
https://github.com/hyperledger/fabric/blob/release/core/deliverservice/deliveryclient.go#L32
> Would you recommend that peers run with leader election disabled?
Depends on the setup / environment
> Depends on the setup / environment
@yacovm I guess more specifically, do you think leaving it disabled is a good default for our sample configuration
what is our sample configuration?
`fabric/sampleconfig/core.yaml`
Well if you recall... if you have leader election *off* and some peers are not leaders, they can't join a channel unless their org is in the genesis block
So this is a pretty critical scenario where leader election saves you, right?
Yes
This is one of the reasons why I thought we should turn it on by default
it's off by default in core.yaml ?
https://github.com/hyperledger/fabric/blob/release/sampleconfig/core.yaml#L123
in the e2e it's on, and no one really deploys fabric without docker (we both know why)
I guess we should make it true then
Want me to make a JIRA and fix it?
if you don't mind... sure. @C0rWin you on board with this?
@yacovm I deploy fabric without docker... what am I missing?
> if you don't mind... sure. @C0rWin you on board with this?
turning leader election on by default?
in core.yaml
` useLeaderElection: true`
?
and `orgLeader: false`, yes
go for it :)
@Asara do you have multiple peers in an org?
do they use leader election?
multiple peers in an org, yes, leader election no
But plan on getting leader election in soon
it's pretty simple you know
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=CCGMX7S4te2azw3cL) @yacovm
I didn't get this - Peers are defined at the organization level. Do we mean to say a Peer defined as leader for Org1 can join a channel even where the channel doesn't have the Org1 MSP in its channel configuration?
@yacovm @C0rWin https://gerrit.hyperledger.org/r/#/c/14285/
@yacovm pretty simple enabling leader election? Its just enabling it in the config and disabling orgLeader right?
> Its just enabling it in the config and disabling orgLeader right?
indeed
Thanks @jyellick
@rahulhegde What I mean is - if you create a channel with org0 and org1, and after 10 blocks you add org2 to the channel, then if the peers of org2 don't use leader election (let's assume you have 1 peer as a leader and 2 non-leaders)
then the non-leader peers will reject blocks received from the leader peer of org2
because the genesis block doesn't list org2 as being in the channel
They will also (if I recall correctly) reject blocks from org0 and org1, because they think they themselves are not in the channel
however, if they are all configured with leader election, they will connect to the orderer and pull blocks
until they get to that config block that tells "org2 is in the channel"
until that time - they will all be leaders and after they get the config block they'll elect 1 leader
what we really need to do (at some point) to solve this problem is just be able to join channel from the latest config block
In our setup we have 1 Peer per organization, and we want to avoid gossip across organizations for SLA reasons - we always want the Peer to pull blocks from the Orderer. Does this mean it is good to set leader election = true (from the default false) and retain the setting `CORE_PEER_GOSSIP_ORGLEADER=TRUE`?
There is only 1 organization in our setup which has 2 Peers - should the same setting also hold true for them?
no, please don't put them both to `true`
put leader election to true, and the org_leader to false
to avoid gossip across orgs - just don't put any anchor peers in the genesis block
or in the later config blocks
if the peers don't know about each other, they can't gossip
Okay - we don't define any anchor peers for the organizations. There is 1 variable we define, `CORE_PEER_GOSSIP_EXTERNALENDPOINT` - should this be set to NULL, or does it have no effect since we don't define anchor peers?
right. it doesn't have any effect in our case
you can omit it
eh, just realized we're polluting the #fabric-orderer space with gossip talk
there is a #fabric-gossip channel where you can ask (for next time) gossip related questions
Sure @yacovm
So as a recommendation, to patch the current Peer-Orderer connection problem, I would make the above changes: leader election = true and gossip org leader = false. And as a side effect of this setting, will the connection be re-established by the Peer to the Orderer?
yes
Do we open a Hyperledger JIRA to get a strategic fix for this issue?
which issue?
This is the reason I started this chat -- not to discuss the gossip issue; the discussion just drifted there.
The fact that once an org leader fails to connect to the orderer for long enough, it gives up forever and you must restart the peer
your problem is that when the orderers are unreachable, after a few minutes - the peer gives up on it, right?
If i do a netstat inside the peer, i see only few connections are opened with Orderer.
1 connection per channel.
> The fact that once an org leader fails to connect to the orderer long enough, he gives up forever and you must restart the peer
@jyellick this is by design TBH
I'm not sure however that 5 min is a good time limit
I personally think that a few hours is better
I understand that Kafka is used as a central hub to order all the transactions so that different orderers can cut blocks from there, but what is the purpose of Zookeeper? Please advise. Thanks
5 mins is the time after which the Peer will again try to reconnect to the orderer if leader election is marked true. Is a lower value better?
5 min is the time until it gives up... completely.
Okay - looks like there is no way we can control it today via a docker environment variable (your reference: https://github.com/hyperledger/fabric/blob/release/core/deliverservice/deliveryclient.go#L32)
@qizhang
> but what is the purpose of Zookeeper? Please advise. Thanks
Zookeeper is required for the Kafka brokers to perform leader election. You may find more by googling "Kafka Zookeeper Architecture" or similar.
@rahulhegde right. I think making it configurable is a good idea.
Another question on the impact of setting ` leader election = true ` - across all our organization nodes, we only have 1 organization with 2 peers. As per my understanding, they will now always talk to the Orderer to fetch blocks or to recover from a back-log, and will never interact with each other at any point in time. Is that correct?
@yacovm - can you please confirm ^ .
@rahulhegde I do not believe this is true. Leader election does exactly what it sounds like, dynamically picks a single leader from an org to fetch blocks from ordering. If that leader fails, a new one is elected. Still, in general, only one peer from each org will connect to ordering and will then disseminate blocks to the others via gossip. @yacovm Please correct me if I am wrong
yep you're correct
but
if he doesn't have any bootstrap peers
or anchor peers
then the peers won't know each other right?
so each peer will think its alone in the org
So it somehow means that for the above case and setup (no bootstrap/anchor peers), these two peers will always connect to the Orderer.
yes
@rahulhegde , @jyellick , @C0rWin I opened https://jira.hyperledger.org/browse/FAB-6515
`Specify deliver service reConnectTotalTimeThreshold in core.yaml`
Tagged it with [help-wanted] in case some groupie would want to pick it up and do it.
Has joined the channel.
@jyellick https://jira.hyperledger.org/browse/FAB-6527
what does `Error reading from stream: rpc error: code = Canceled desc = context canceled` mean in the orderer?
I have cloned the repo from github/master with kafka brokers (e2e_cli example)
and it's been days and I still cannot deploy it on Ubuntu
is there a bug or something I don't know about?
the `checkOSNAvailability()` function fails
also if I remove it and try to create a channel it says
`Failed to connect to broker kafka0:9092: dial tcp 172.18.0.13:9092: getsockopt: connection refused`
I think my issue is the same as this one: https://jira.hyperledger.org/browse/FAB-3787?page=com.atlassian.jira.plugin.system.issuetabpanels%3Aall-tabpanel
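A `connection refused` from `kafka0:9092` usually means the brokers were not up (or not yet ready) when the orderer tried to connect. A few checks from the docker host, sketched under the assumption that the container names come from the e2e_cli compose files:

```shell
# container names assumed from the e2e_cli docker-compose files
docker ps --filter name=kafka          # are the brokers running at all?
docker logs kafka0 2>&1 | tail -20     # did kafka0 finish starting?
# from inside the orderer container, is the broker port reachable?
# (only works if nc is available in the orderer image)
docker exec orderer.example.com sh -c 'nc -z kafka0 9092 && echo reachable'
```

If the brokers are still starting when the orderer first dials them, retries or a startup delay usually resolve it.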
gentios
Has joined the channel.
Hi. I am trying to create a channel on my network. I ran the following command on my peer: ```peer channel create -o orderer.example.com:7050 -c businesschannel -f /channel-artifacts/channel.tx``` which gave me a BAD_REQUEST response. So I checked the logs on my orderer and this is the tail of the logs, where I think the problems start:
``` [policies] GetPolicy -> DEBU 26c Returning policy Writers for evaluation
[policies] GetPolicy -> DEBU 26d Returning dummy reject all policy because Writers could not be found in /Application/Writers
[policies] GetPolicy -> DEBU 26e Returning policy Admins for evaluation
[policies] GetPolicy -> DEBU 26f Returning dummy reject all policy because Admins could not be found in /Application/Admins
[policies] GetPolicy -> DEBU 270 Returning dummy reject all policy because Readers could not be found in /Application/Readers
[policies] GetPolicy -> DEBU 271 Returning policy Readers for evaluation
[common/configtx] addToMap -> DEBU 272 Adding to config map: [Groups] /Channel
[common/configtx] addToMap -> DEBU 273 Adding to config map: [Groups] /Channel/Application
[common/configtx] addToMap -> DEBU 274 Adding to config map: [Groups] /Channel/Application/Org1MSP
[common/configtx] addToMap -> DEBU 275 Adding to config map: [Groups] /Channel/Application/Org2MSP
[common/configtx] addToMap -> DEBU 276 Adding to config map: [Values] /Channel/Consortium
[common/configtx] addToMap -> DEBU 277 Adding to config map: [Groups] /Channel
[common/configtx] addToMap -> DEBU 278 Adding to config map: [Groups] /Channel/Application
[common/configtx] addToMap -> DEBU 279 Adding to config map: [Groups] /Channel/Application/Org1MSP
[common/configtx] addToMap -> DEBU 27a Adding to config map: [Groups] /Channel/Application/Org2MSP
[common/configtx] addToMap -> DEBU 27b Adding to config map: [Policy] /Channel/Application/Admins
[common/configtx] addToMap -> DEBU 27c Adding to config map: [Policy] /Channel/Application/Writers
[common/configtx] addToMap -> DEBU 27d Adding to config map: [Policy] /Channel/Application/Readers
[common/configtx] addToMap -> DEBU 27e Adding to config map: [Values] /Channel/Consortium
[policies] GetPolicy -> DEBU 27f Returning policy ChannelCreationPolicy for evaluation
[cauthdsl] func1 -> DEBU 280 0xc420026f38 gate 1507552827357314371 evaluation starts
[cauthdsl] func2 -> DEBU 281 0xc420026f38 signed by 0 principal evaluation starts (used [false])
[cauthdsl] func2 -> DEBU 282 0xc420026f38 processing identity 0 with bytes of 0a074f7267314d53...
[msp/identity] newIdentity -> DEBU 283 Creating identity instance for ID -----BEGIN CERTIFICATE-----
MIICWDCCAf+gAwIBAgIUHTk4UwXXCm2PTeD7...
-----END CERTIFICATE-----
[cauthdsl] func2 -> DEBU 284 0xc420026f38 identity 0 does not satisfy principal: The identity is a member of a different MSP (expected Org2MSP, got Org1MSP)
[cauthdsl] func2 -> DEBU 285 0xc420026f38 principal evaluation fails
[cauthdsl] func1 -> DEBU 286 0xc420026f38 gate 1507552827357314371 evaluation fails
[cauthdsl] func1 -> DEBU 287 0xc420026f48 gate 1507552827358608236 evaluation starts
[cauthdsl] func2 -> DEBU 288 0xc420026f48 signed by 0 principal evaluation starts (used [false])
[cauthdsl] func2 -> DEBU 289 0xc420026f48 processing identity 0 with bytes of 0a074f7267314d535012d...
[msp/identity] newIdentity -> DEBU 28a Creating identity instance for ID -----BEGIN CERTIFICATE-----
MIICWDCCAf+gAwIBAgIUHTk4UwXXCm2PTeD7...
-----END CERTIFICATE-----
[msp] SatisfiesPrincipal -> DEBU 28b Checking if identity satisfies ADMIN role for Org1MSP
[cauthdsl] func2 -> DEBU 28c 0xc420026f48 identity 0 does not satisfy principal: This identity is not an admin
[cauthdsl] func2 -> DEBU 28d 0xc420026f48 principal evaluation fails
[cauthdsl] func1 -> DEBU 28e 0xc420026f48 gate 1507552827358608236 evaluation fails
[orderer/common/broadcast] Handle -> WARN 28f Rejecting CONFIG_UPDATE because: Error authorizing update: Error validating DeltaSet: Policy for [Groups] /Channel/Application not satisfied: Failed to reach implicit threshold of 1 sub-policies, required 1 remaining```
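The `This identity is not an admin` / wrong-MSP lines in the log above usually mean the channel creation transaction was signed with a non-admin identity. A sketch of pointing the peer CLI at an org admin MSP before retrying; the paths follow the fabric-samples crypto-config layout and are assumptions, not taken from this network:

```shell
# assumes the fabric-samples crypto-config layout; adjust paths to your network
export CORE_PEER_LOCALMSPID=Org1MSP
export CORE_PEER_MSPCONFIGPATH=/etc/hyperledger/crypto-config/peerOrganizations/org1.example.com/users/Admin@org1.example.com/msp

peer channel create -o orderer.example.com:7050 \
  -c businesschannel -f /channel-artifacts/channel.tx
```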
Hi. I am trying to create a channel on my network. I have ran the following command on my peer:
```peer channel create -o orderer.example.com:7050 -c businesschannel -f /channel-artifacts/channel.tx``` which gave me a BAD_REQUEST response. So I checked the logs on my orderer and this is the tail of the logs, where I think the problems start:
``` [policies] GetPolicy -> DEBU 26c Returning policy Writers for evaluation
[policies] GetPolicy -> DEBU 26d Returning dummy reject all policy because Writers could not be found in /Application/Writers
[policies] GetPolicy -> DEBU 26e Returning policy Admins for evaluation
[policies] GetPolicy -> DEBU 26f Returning dummy reject all policy because Admins could not be found in /Application/Admins
[policies] GetPolicy -> DEBU 270 Returning dummy reject all policy because Readers could not be found in /Application/Readers
[policies] GetPolicy -> DEBU 271 Returning policy Readers for evaluation
[common/configtx] addToMap -> DEBU 272 Adding to config map: [Groups] /Channel
[common/configtx] addToMap -> DEBU 273 Adding to config map: [Groups] /Channel/Application
[common/configtx] addToMap -> DEBU 274 Adding to config map: [Groups] /Channel/Application/Org1MSP
[common/configtx] addToMap -> DEBU 275 Adding to config map: [Groups] /Channel/Application/Org2MSP
[common/configtx] addToMap -> DEBU 276 Adding to config map: [Values] /Channel/Consortium
[common/configtx] addToMap -> DEBU 277 Adding to config map: [Groups] /Channel
[common/configtx] addToMap -> DEBU 278 Adding to config map: [Groups] /Channel/Application
[common/configtx] addToMap -> DEBU 279 Adding to config map: [Groups] /Channel/Application/Org1MSP
[common/configtx] addToMap -> DEBU 27a Adding to config map: [Groups] /Channel/Application/Org2MSP
[common/configtx] addToMap -> DEBU 27b Adding to config map: [Policy] /Channel/Application/Admins
[common/configtx] addToMap -> DEBU 27c Adding to config map: [Policy] /Channel/Application/Writers
[common/configtx] addToMap -> DEBU 27d Adding to config map: [Policy] /Channel/Application/Readers
[common/configtx] addToMap -> DEBU 27e Adding to config map: [Values] /Channel/Consortium
[policies] GetPolicy -> DEBU 27f Returning policy ChannelCreationPolicy for evaluation
[cauthdsl] func1 -> DEBU 280 0xc420026f38 gate 1507552827357314371 evaluation starts
[cauthdsl] func2 -> DEBU 281 0xc420026f38 signed by 0 principal evaluation starts (used [false])
[cauthdsl] func2 -> DEBU 282 0xc420026f38 processing identity 0 with bytes of 0a074f7267314d53...
[msp/identity] newIdentity -> DEBU 283 Creating identity instance for ID -----BEGIN CERTIFICATE-----
MIICWDCCAf+gAwIBAgIUHTk4UwXXCm2PTeD7...
-----END CERTIFICATE-----
[cauthdsl] func2 -> DEBU 284 0xc420026f38 identity 0 does not satisfy principal: The identity is a member of a different MSP (expected Org2MSP, got Org1MSP)
[cauthdsl] func2 -> DEBU 285 0xc420026f38 principal evaluation fails
[cauthdsl] func1 -> DEBU 286 0xc420026f38 gate 1507552827357314371 evaluation fails
[cauthdsl] func1 -> DEBU 287 0xc420026f48 gate 1507552827358608236 evaluation starts
[cauthdsl] func2 -> DEBU 288 0xc420026f48 signed by 0 principal evaluation starts (used [false])
[cauthdsl] func2 -> DEBU 289 0xc420026f48 processing identity 0 with bytes of 0a074f7267314d535012d...
[msp/identity] newIdentity -> DEBU 28a Creating identity instance for ID -----BEGIN CERTIFICATE-----
MIICWDCCAf+gAwIBAgIUHTk4UwXXCm2PTeD7...
-----END CERTIFICATE-----
[msp] SatisfiesPrincipal -> DEBU 28b Checking if identity satisfies ADMIN role for Org1MSP
[cauthdsl] func2 -> DEBU 28c 0xc420026f48 identity 0 does not satisfy principal: This identity is not an admin
[cauthdsl] func2 -> DEBU 28d 0xc420026f48 principal evaluation fails
[cauthdsl] func1 -> DEBU 28e 0xc420026f48 gate 1507552827358608236 evaluation fails
[orderer/common/broadcast] Handle -> WARN 28f Rejecting CONFIG_UPDATE because: Error authorizing update: Error validating DeltaSet: Policy for [Groups] /Channel/Application not satisfied: Failed to reach implicit threshold of 1 sub-policies, required 1 remaining```
Hi. I am trying to create a channel on my network. I have ran the following command on my peer:
``` peer channel create -o orderer.example.com:7050 -c businesschannel -f /channel-artifacts/channel.tx ```
which gave me a BAD_REQUEST response. So I checked the logs on my orderer and this is the tail of the logs, where I think the problems start:
``` [policies] GetPolicy -> DEBU 26c Returning policy Writers for evaluation
[policies] GetPolicy -> DEBU 26d Returning dummy reject all policy because Writers could not be found in /Application/Writers
[policies] GetPolicy -> DEBU 26e Returning policy Admins for evaluation
[policies] GetPolicy -> DEBU 26f Returning dummy reject all policy because Admins could not be found in /Application/Admins
[policies] GetPolicy -> DEBU 270 Returning dummy reject all policy because Readers could not be found in /Application/Readers
[policies] GetPolicy -> DEBU 271 Returning policy Readers for evaluation
[common/configtx] addToMap -> DEBU 272 Adding to config map: [Groups] /Channel
[common/configtx] addToMap -> DEBU 273 Adding to config map: [Groups] /Channel/Application
[common/configtx] addToMap -> DEBU 274 Adding to config map: [Groups] /Channel/Application/Org1MSP
[common/configtx] addToMap -> DEBU 275 Adding to config map: [Groups] /Channel/Application/Org2MSP
[common/configtx] addToMap -> DEBU 276 Adding to config map: [Values] /Channel/Consortium
[common/configtx] addToMap -> DEBU 277 Adding to config map: [Groups] /Channel
[common/configtx] addToMap -> DEBU 278 Adding to config map: [Groups] /Channel/Application
[common/configtx] addToMap -> DEBU 279 Adding to config map: [Groups] /Channel/Application/Org1MSP
[common/configtx] addToMap -> DEBU 27a Adding to config map: [Groups] /Channel/Application/Org2MSP
[common/configtx] addToMap -> DEBU 27b Adding to config map: [Policy] /Channel/Application/Admins
[common/configtx] addToMap -> DEBU 27c Adding to config map: [Policy] /Channel/Application/Writers
[common/configtx] addToMap -> DEBU 27d Adding to config map: [Policy] /Channel/Application/Readers
[common/configtx] addToMap -> DEBU 27e Adding to config map: [Values] /Channel/Consortium
[policies] GetPolicy -> DEBU 27f Returning policy ChannelCreationPolicy for evaluation
[cauthdsl] func1 -> DEBU 280 0xc420026f38 gate 1507552827357314371 evaluation starts
[cauthdsl] func2 -> DEBU 281 0xc420026f38 signed by 0 principal evaluation starts (used [false])
[cauthdsl] func2 -> DEBU 282 0xc420026f38 processing identity 0 with bytes of 0a074f7267314d53...
[msp/identity] newIdentity -> DEBU 283 Creating identity instance for ID -----BEGIN CERTIFICATE-----
MIICWDCCAf+gAwIBAgIUHTk4UwXXCm2PTeD7...
-----END CERTIFICATE-----
[cauthdsl] func2 -> DEBU 284 0xc420026f38 identity 0 does not satisfy principal: The identity is a member of a different MSP (expected Org2MSP, got Org1MSP)
[cauthdsl] func2 -> DEBU 285 0xc420026f38 principal evaluation fails
[cauthdsl] func1 -> DEBU 286 0xc420026f38 gate 1507552827357314371 evaluation fails
[cauthdsl] func1 -> DEBU 287 0xc420026f48 gate 1507552827358608236 evaluation starts
[cauthdsl] func2 -> DEBU 288 0xc420026f48 signed by 0 principal evaluation starts (used [false])
[cauthdsl] func2 -> DEBU 289 0xc420026f48 processing identity 0 with bytes of 0a074f7267314d535012d...
[msp/identity] newIdentity -> DEBU 28a Creating identity instance for ID -----BEGIN CERTIFICATE-----
MIICWDCCAf+gAwIBAgIUHTk4UwXXCm2PTeD7...
-----END CERTIFICATE-----
[msp] SatisfiesPrincipal -> DEBU 28b Checking if identity satisfies ADMIN role for Org1MSP
[cauthdsl] func2 -> DEBU 28c 0xc420026f48 identity 0 does not satisfy principal: This identity is not an admin
[cauthdsl] func2 -> DEBU 28d 0xc420026f48 principal evaluation fails
[cauthdsl] func1 -> DEBU 28e 0xc420026f48 gate 1507552827358608236 evaluation fails
[orderer/common/broadcast] Handle -> WARN 28f Rejecting CONFIG_UPDATE because: Error authorizing update: Error validating DeltaSet: Policy for [Groups] /Channel/Application not satisfied: Failed to reach implicit threshold of 1 sub-policies, required 1 remaining```
Hi. I am trying to create a channel on my network. I have ran the following command on my peer:
``` peer channel create -o orderer.example.com:7050 -c businesschannel -f /channel-artifacts/channel.tx ```
which gave me a BAD_REQUEST response. So I checked the logs on my orderer and this is the tail of the logs, where I think the problems start:
``` [policies] GetPolicy -> DEBU 26c Returning policy Writers for evaluation
[policies] GetPolicy -> DEBU 26d Returning dummy reject all policy because Writers could not be found in /Application/Writers
[policies] GetPolicy -> DEBU 26e Returning policy Admins for evaluation
[policies] GetPolicy -> DEBU 26f Returning dummy reject all policy because Admins could not be found in /Application/Admins
[policies] GetPolicy -> DEBU 270 Returning dummy reject all policy because Readers could not be found in /Application/Readers
[policies] GetPolicy -> DEBU 271 Returning policy Readers for evaluation
[common/configtx] addToMap -> DEBU 272 Adding to config map: [Groups] /Channel
[common/configtx] addToMap -> DEBU 273 Adding to config map: [Groups] /Channel/Application
[common/configtx] addToMap -> DEBU 274 Adding to config map: [Groups] /Channel/Application/Org1MSP
[common/configtx] addToMap -> DEBU 275 Adding to config map: [Groups] /Channel/Application/Org2MSP
[common/configtx] addToMap -> DEBU 276 Adding to config map: [Values] /Channel/Consortium
[common/configtx] addToMap -> DEBU 277 Adding to config map: [Groups] /Channel
[common/configtx] addToMap -> DEBU 278 Adding to config map: [Groups] /Channel/Application
[common/configtx] addToMap -> DEBU 279 Adding to config map: [Groups] /Channel/Application/Org1MSP
[common/configtx] addToMap -> DEBU 27a Adding to config map: [Groups] /Channel/Application/Org2MSP
[common/configtx] addToMap -> DEBU 27b Adding to config map: [Policy] /Channel/Application/Admins
[common/configtx] addToMap -> DEBU 27c Adding to config map: [Policy] /Channel/Application/Writers
[common/configtx] addToMap -> DEBU 27d Adding to config map: [Policy] /Channel/Application/Readers
[common/configtx] addToMap -> DEBU 27e Adding to config map: [Values] /Channel/Consortium
[policies] GetPolicy -> DEBU 27f Returning policy ChannelCreationPolicy for evaluation
[cauthdsl] func1 -> DEBU 280 0xc420026f38 gate 1507552827357314371 evaluation starts
[cauthdsl] func2 -> DEBU 281 0xc420026f38 signed by 0 principal evaluation starts (used [false])
[cauthdsl] func2 -> DEBU 282 0xc420026f38 processing identity 0 with bytes of 0a074f7267314d53...
[msp/identity] newIdentity -> DEBU 283 Creating identity instance for ID -----BEGIN CERTIFICATE-----
MIICWDCCAf+gAwIBAgIUHTk4UwXXCm2PTeD7...
-----END CERTIFICATE-----
[cauthdsl] func2 -> DEBU 284 0xc420026f38 identity 0 does not satisfy principal: The identity is a member of a different MSP (expected Org2MSP, got Org1MSP)
[cauthdsl] func2 -> DEBU 285 0xc420026f38 principal evaluation fails
[cauthdsl] func1 -> DEBU 286 0xc420026f38 gate 1507552827357314371 evaluation fails
[cauthdsl] func1 -> DEBU 287 0xc420026f48 gate 1507552827358608236 evaluation starts
[cauthdsl] func2 -> DEBU 288 0xc420026f48 signed by 0 principal evaluation starts (used [false])
[cauthdsl] func2 -> DEBU 289 0xc420026f48 processing identity 0 with bytes of 0a074f7267314d535012d...
[msp/identity] newIdentity -> DEBU 28a Creating identity instance for ID -----BEGIN CERTIFICATE-----
MIICWDCCAf+gAwIBAgIUHTk4UwXXCm2PTeD7...
-----END CERTIFICATE-----
[msp] SatisfiesPrincipal -> DEBU 28b Checking if identity satisfies ADMIN role for Org1MSP
[cauthdsl] func2 -> DEBU 28c 0xc420026f48 identity 0 does not satisfy principal: This identity is not an admin
[cauthdsl] func2 -> DEBU 28d 0xc420026f48 principal evaluation fails
[cauthdsl] func1 -> DEBU 28e 0xc420026f48 gate 1507552827358608236 evaluation fails
[orderer/common/broadcast] Handle -> WARN 28f Rejecting CONFIG_UPDATE because: Error authorizing update: Error validating DeltaSet: Policy for [Groups] /Channel/Application not satisfied: Failed to reach implicit threshold of 1 sub-policies, required 1 remaining```
Hi. I am trying to create a channel on my network. I have ran the following command on my peer:
``` peer channel create -o orderer.example.com:7050 -c businesschannel -f /channel-artifacts/channel.tx ```
which gave me a BAD_REQUEST response. So I checked the logs on my orderer and this is the tail of the logs, where I think the problems start:
``` [policies] GetPolicy -> DEBU 26c Returning policy Writers for evaluation
[policies] GetPolicy -> DEBU 26d Returning dummy reject all policy because Writers could not be found in /Application/Writers
[policies] GetPolicy -> DEBU 26e Returning policy Admins for evaluation
[policies] GetPolicy -> DEBU 26f Returning dummy reject all policy because Admins could not be found in /Application/Admins
[policies] GetPolicy -> DEBU 270 Returning dummy reject all policy because Readers could not be found in /Application/Readers
[policies] GetPolicy -> DEBU 271 Returning policy Readers for evaluation
[common/configtx] addToMap -> DEBU 272 Adding to config map: [Groups] /Channel
[common/configtx] addToMap -> DEBU 273 Adding to config map: [Groups] /Channel/Application
[common/configtx] addToMap -> DEBU 274 Adding to config map: [Groups] /Channel/Application/Org1MSP
[common/configtx] addToMap -> DEBU 275 Adding to config map: [Groups] /Channel/Application/Org2MSP
[common/configtx] addToMap -> DEBU 276 Adding to config map: [Values] /Channel/Consortium
[common/configtx] addToMap -> DEBU 277 Adding to config map: [Groups] /Channel
[common/configtx] addToMap -> DEBU 278 Adding to config map: [Groups] /Channel/Application
[common/configtx] addToMap -> DEBU 279 Adding to config map: [Groups] /Channel/Application/Org1MSP
[common/configtx] addToMap -> DEBU 27a Adding to config map: [Groups] /Channel/Application/Org2MSP
[common/configtx] addToMap -> DEBU 27b Adding to config map: [Policy] /Channel/Application/Admins
[common/configtx] addToMap -> DEBU 27c Adding to config map: [Policy] /Channel/Application/Writers
[common/configtx] addToMap -> DEBU 27d Adding to config map: [Policy] /Channel/Application/Readers
[common/configtx] addToMap -> DEBU 27e Adding to config map: [Values] /Channel/Consortium
[policies] GetPolicy -> DEBU 27f Returning policy ChannelCreationPolicy for evaluation
[cauthdsl] func1 -> DEBU 280 0xc420026f38 gate 1507552827357314371 evaluation starts
[cauthdsl] func2 -> DEBU 281 0xc420026f38 signed by 0 principal evaluation starts (used [false])
[cauthdsl] func2 -> DEBU 282 0xc420026f38 processing identity 0 with bytes of 0a074f7267314d53...
[msp/identity] newIdentity -> DEBU 283 Creating identity instance for ID -----BEGIN CERTIFICATE-----
MIICWDCCAf+gAwIBAgIUHTk4UwXXCm2PTeD7...
-----END CERTIFICATE-----
[cauthdsl] func2 -> DEBU 284 0xc420026f38 identity 0 does not satisfy principal: The identity is a member of a different MSP (expected Org2MSP, got Org1MSP)
[cauthdsl] func2 -> DEBU 285 0xc420026f38 principal evaluation fails
[cauthdsl] func1 -> DEBU 286 0xc420026f38 gate 1507552827357314371 evaluation fails
[cauthdsl] func1 -> DEBU 287 0xc420026f48 gate 1507552827358608236 evaluation starts
[cauthdsl] func2 -> DEBU 288 0xc420026f48 signed by 0 principal evaluation starts (used [false])
[cauthdsl] func2 -> DEBU 289 0xc420026f48 processing identity 0 with bytes of 0a074f7267314d535012d...
[msp/identity] newIdentity -> DEBU 28a Creating identity instance for ID -----BEGIN CERTIFICATE-----
MIICWDCCAf+gAwIBAgIUHTk4UwXXCm2PTeD7...
-----END CERTIFICATE-----
[msp] SatisfiesPrincipal -> DEBU 28b Checking if identity satisfies ADMIN role for Org1MSP
[cauthdsl] func2 -> DEBU 28c 0xc420026f48 identity 0 does not satisfy principal: This identity is not an admin
[cauthdsl] func2 -> DEBU 28d 0xc420026f48 principal evaluation fails
[cauthdsl] func1 -> DEBU 28e 0xc420026f48 gate 1507552827358608236 evaluation fails
[orderer/common/broadcast] Handle -> WARN 28f Rejecting CONFIG_UPDATE because: Error authorizing update: Error validating DeltaSet: Policy for [Groups] /Channel/Application not satisfied: Failed to reach implicit threshold of 1 sub-policies, required 1 remaining
Hi. I am trying to create a channel on my network. I have ran the following command on my peer:
``` peer channel create -o orderer.example.com:7050 -c businesschannel -f /channel-artifacts/channel.tx ```
which gave me a BAD_REQUEST response. So I checked the logs on my orderer and this is the tail of the logs, where I think the problems start:
[policies] GetPolicy -> DEBU 26c Returning policy Writers for evaluation
[policies] GetPolicy -> DEBU 26d Returning dummy reject all policy because Writers could not be found in /Application/Writers
[policies] GetPolicy -> DEBU 26e Returning policy Admins for evaluation
[policies] GetPolicy -> DEBU 26f Returning dummy reject all policy because Admins could not be found in /Application/Admins
[policies] GetPolicy -> DEBU 270 Returning dummy reject all policy because Readers could not be found in /Application/Readers
[policies] GetPolicy -> DEBU 271 Returning policy Readers for evaluation
[common/configtx] addToMap -> DEBU 272 Adding to config map: [Groups] /Channel
[common/configtx] addToMap -> DEBU 273 Adding to config map: [Groups] /Channel/Application
[common/configtx] addToMap -> DEBU 274 Adding to config map: [Groups] /Channel/Application/Org1MSP
[common/configtx] addToMap -> DEBU 275 Adding to config map: [Groups] /Channel/Application/Org2MSP
[common/configtx] addToMap -> DEBU 276 Adding to config map: [Values] /Channel/Consortium
[common/configtx] addToMap -> DEBU 277 Adding to config map: [Groups] /Channel
[common/configtx] addToMap -> DEBU 278 Adding to config map: [Groups] /Channel/Application
[common/configtx] addToMap -> DEBU 279 Adding to config map: [Groups] /Channel/Application/Org1MSP
[common/configtx] addToMap -> DEBU 27a Adding to config map: [Groups] /Channel/Application/Org2MSP
[common/configtx] addToMap -> DEBU 27b Adding to config map: [Policy] /Channel/Application/Admins
[common/configtx] addToMap -> DEBU 27c Adding to config map: [Policy] /Channel/Application/Writers
[common/configtx] addToMap -> DEBU 27d Adding to config map: [Policy] /Channel/Application/Readers
[common/configtx] addToMap -> DEBU 27e Adding to config map: [Values] /Channel/Consortium
[policies] GetPolicy -> DEBU 27f Returning policy ChannelCreationPolicy for evaluation
[cauthdsl] func1 -> DEBU 280 0xc420026f38 gate 1507552827357314371 evaluation starts
[cauthdsl] func2 -> DEBU 281 0xc420026f38 signed by 0 principal evaluation starts (used [false])
[cauthdsl] func2 -> DEBU 282 0xc420026f38 processing identity 0 with bytes of 0a074f7267314d53...
[msp/identity] newIdentity -> DEBU 283 Creating identity instance for ID -----BEGIN CERTIFICATE-----
MIICWDCCAf+gAwIBAgIUHTk4UwXXCm2PTeD7...
-----END CERTIFICATE-----
[cauthdsl] func2 -> DEBU 284 0xc420026f38 identity 0 does not satisfy principal: The identity is a member of a different MSP (expected Org2MSP, got Org1MSP)
[cauthdsl] func2 -> DEBU 285 0xc420026f38 principal evaluation fails
[cauthdsl] func1 -> DEBU 286 0xc420026f38 gate 1507552827357314371 evaluation fails
[cauthdsl] func1 -> DEBU 287 0xc420026f48 gate 1507552827358608236 evaluation starts
[cauthdsl] func2 -> DEBU 288 0xc420026f48 signed by 0 principal evaluation starts (used [false])
[cauthdsl] func2 -> DEBU 289 0xc420026f48 processing identity 0 with bytes of 0a074f7267314d535012d...
[msp/identity] newIdentity -> DEBU 28a Creating identity instance for ID -----BEGIN CERTIFICATE-----
MIICWDCCAf+gAwIBAgIUHTk4UwXXCm2PTeD7...
-----END CERTIFICATE-----
[msp] SatisfiesPrincipal -> DEBU 28b Checking if identity satisfies ADMIN role for Org1MSP
[cauthdsl] func2 -> DEBU 28c 0xc420026f48 identity 0 does not satisfy principal: This identity is not an admin
[cauthdsl] func2 -> DEBU 28d 0xc420026f48 principal evaluation fails
[cauthdsl] func1 -> DEBU 28e 0xc420026f48 gate 1507552827358608236 evaluation fails
[orderer/common/broadcast] Handle -> WARN 28f Rejecting CONFIG_UPDATE because: Error authorizing update: Error validating DeltaSet: Policy for [Groups] /Channel/Application not satisfied: Failed to reach implicit threshold of 1 sub-policies, required 1 remaining
Hi. I am trying to create a channel on my network. I have ran the following command on my peer:
``` peer channel create -o orderer.example.com:7050 -c businesschannel -f /channel-artifacts/channel.tx ```
which gave me a BAD_REQUEST response. So I checked the logs on my orderer and this is the tail of the logs, where I think the problems start:
[policies] GetPolicy -> DEBU 26c Returning policy Writers for evaluation
[policies] GetPolicy -> DEBU 26d Returning dummy reject all policy because Writers could not be found in /Application/Writers
[policies] GetPolicy -> DEBU 26e Returning policy Admins for evaluation
[policies] GetPolicy -> DEBU 26f Returning dummy reject all policy because Admins could not be found in /Application/Admins
[policies] GetPolicy -> DEBU 270 Returning dummy reject all policy because Readers could not be found in /Application/Readers
[policies] GetPolicy -> DEBU 271 Returning policy Readers for evaluation
[common/configtx] addToMap -> DEBU 272 Adding to config map: [Groups] /Channel
[common/configtx] addToMap -> DEBU 273 Adding to config map: [Groups] /Channel/Application
[common/configtx] addToMap -> DEBU 274 Adding to config map: [Groups] /Channel/Application/Org1MSP
[common/configtx] addToMap -> DEBU 275 Adding to config map: [Groups] /Channel/Application/Org2MSP
[common/configtx] addToMap -> DEBU 276 Adding to config map: [Values] /Channel/Consortium
[common/configtx] addToMap -> DEBU 277 Adding to config map: [Groups] /Channel
[common/configtx] addToMap -> DEBU 278 Adding to config map: [Groups] /Channel/Application
[common/configtx] addToMap -> DEBU 279 Adding to config map: [Groups] /Channel/Application/Org1MSP
[common/configtx] addToMap -> DEBU 27a Adding to config map: [Groups] /Channel/Application/Org2MSP
[common/configtx] addToMap -> DEBU 27b Adding to config map: [Policy] /Channel/Application/Admins
[common/configtx] addToMap -> DEBU 27c Adding to config map: [Policy] /Channel/Application/Writers
[common/configtx] addToMap -> DEBU 27d Adding to config map: [Policy] /Channel/Application/Readers
[common/configtx] addToMap -> DEBU 27e Adding to config map: [Values] /Channel/Consortium
[policies] GetPolicy -> DEBU 27f Returning policy ChannelCreationPolicy for evaluation
[cauthdsl] func1 -> DEBU 280 0xc420026f38 gate 1507552827357314371 evaluation starts
[cauthdsl] func2 -> DEBU 281 0xc420026f38 signed by 0 principal evaluation starts (used [false])
[cauthdsl] func2 -> DEBU 282 0xc420026f38 processing identity 0 with bytes of 0a074f7267314d53...
[msp/identity] newIdentity -> DEBU 283 Creating identity instance for ID -----BEGIN CERTIFICATE-----
MIICWDCCAf+gAwIBAgIUHTk4UwXXCm2PTeD7...
-----END CERTIFICATE-----
[cauthdsl] func2 -> DEBU 284 0xc420026f38 identity 0 does not satisfy principal: The identity is a member of a different MSP (expected Org2MSP, got Org1MSP)
[cauthdsl] func2 -> DEBU 285 0xc420026f38 principal evaluation fails
[cauthdsl] func1 -> DEBU 286 0xc420026f38 gate 1507552827357314371 evaluation fails
[cauthdsl] func1 -> DEBU 287 0xc420026f48 gate 1507552827358608236 evaluation starts
[cauthdsl] func2 -> DEBU 288 0xc420026f48 signed by 0 principal evaluation starts (used [false])
[cauthdsl] func2 -> DEBU 289 0xc420026f48 processing identity 0 with bytes of 0a074f7267314d535012d...
[msp/identity] newIdentity -> DEBU 28a Creating identity instance for ID -----BEGIN CERTIFICATE-----
MIICWDCCAf+gAwIBAgIUHTk4UwXXCm2PTeD7...
-----END CERTIFICATE-----
[msp] SatisfiesPrincipal -> DEBU 28b Checking if identity satisfies ADMIN role for Org1MSP
[cauthdsl] func2 -> DEBU 28c 0xc420026f48 identity 0 does not satisfy principal: This identity is not an admin
[cauthdsl] func2 -> DEBU 28d 0xc420026f48 principal evaluation fails
[cauthdsl] func1 -> DEBU 28e 0xc420026f48 gate 1507552827358608236 evaluation fails
[orderer/common/broadcast] Handle -> WARN 28f Rejecting CONFIG_UPDATE because: Error authorizing update: Error validating DeltaSet: Policy for [Groups] /Channel/Application not satisfied: Failed to reach implicit threshold of 1 sub-policies, required 1 remaining
Hi. I am trying to create a channel on my network. I have ran the following command on my peer:
``` peer channel create -o orderer.example.com:7050 -c businesschannel -f /channel-artifacts/channel.tx ```
which gave me a BAD_REQUEST response. So I checked the logs on my orderer and this is the tail of the logs, where I think the problems start:
``` [policies] GetPolicy -> DEBU 26c Returning policy Writers for evaluation
[policies] GetPolicy -> DEBU 26d Returning dummy reject all policy because Writers could not be found in /Application/Writers
[policies] GetPolicy -> DEBU 26e Returning policy Admins for evaluation
[policies] GetPolicy -> DEBU 26f Returning dummy reject all policy because Admins could not be found in /Application/Admins
[policies] GetPolicy -> DEBU 270 Returning dummy reject all policy because Readers could not be found in /Application/Readers
[policies] GetPolicy -> DEBU 271 Returning policy Readers for evaluation
[common/configtx] addToMap -> DEBU 272 Adding to config map: [Groups] /Channel
[common/configtx] addToMap -> DEBU 273 Adding to config map: [Groups] /Channel/Application
[common/configtx] addToMap -> DEBU 274 Adding to config map: [Groups] /Channel/Application/Org1MSP
[common/configtx] addToMap -> DEBU 275 Adding to config map: [Groups] /Channel/Application/Org2MSP
[common/configtx] addToMap -> DEBU 276 Adding to config map: [Values] /Channel/Consortium
[common/configtx] addToMap -> DEBU 277 Adding to config map: [Groups] /Channel
[common/configtx] addToMap -> DEBU 278 Adding to config map: [Groups] /Channel/Application
[common/configtx] addToMap -> DEBU 279 Adding to config map: [Groups] /Channel/Application/Org1MSP
[common/configtx] addToMap -> DEBU 27a Adding to config map: [Groups] /Channel/Application/Org2MSP
[common/configtx] addToMap -> DEBU 27b Adding to config map: [Policy] /Channel/Application/Admins
[common/configtx] addToMap -> DEBU 27c Adding to config map: [Policy] /Channel/Application/Writers
[common/configtx] addToMap -> DEBU 27d Adding to config map: [Policy] /Channel/Application/Readers
[common/configtx] addToMap -> DEBU 27e Adding to config map: [Values] /Channel/Consortium
[policies] GetPolicy -> DEBU 27f Returning policy ChannelCreationPolicy for evaluation
[cauthdsl] func1 -> DEBU 280 0xc420026f38 gate 1507552827357314371 evaluation starts
[cauthdsl] func2 -> DEBU 281 0xc420026f38 signed by 0 principal evaluation starts (used [false])
[cauthdsl] func2 -> DEBU 282 0xc420026f38 processing identity 0 with bytes of 0a074f7267314d53...
[msp/identity] newIdentity -> DEBU 283 Creating identity instance for ID -----BEGIN CERTIFICATE-----
MIICWDCCAf+gAwIBAgIUHTk4UwXXCm2PTeD7...
-----END CERTIFICATE-----
[cauthdsl] func2 -> DEBU 284 0xc420026f38 identity 0 does not satisfy principal: The identity is a member of a different MSP (expected Org2MSP, got Org1MSP)
[cauthdsl] func2 -> DEBU 285 0xc420026f38 principal evaluation fails
[cauthdsl] func1 -> DEBU 286 0xc420026f38 gate 1507552827357314371 evaluation fails
[cauthdsl] func1 -> DEBU 287 0xc420026f48 gate 1507552827358608236 evaluation starts
[cauthdsl] func2 -> DEBU 288 0xc420026f48 signed by 0 principal evaluation starts (used [false])
[cauthdsl] func2 -> DEBU 289 0xc420026f48 processing identity 0 with bytes of 0a074f7267314d535012d...
[msp/identity] newIdentity -> DEBU 28a Creating identity instance for ID -----BEGIN CERTIFICATE-----
MIICWDCCAf+gAwIBAgIUHTk4UwXXCm2PTeD7...
-----END CERTIFICATE-----
[msp] SatisfiesPrincipal -> DEBU 28b Checking if identity satisfies ADMIN role for Org1MSP
[cauthdsl] func2 -> DEBU 28c 0xc420026f48 identity 0 does not satisfy principal: This identity is not an admin
[cauthdsl] func2 -> DEBU 28d 0xc420026f48 principal evaluation fails
[cauthdsl] func1 -> DEBU 28e 0xc420026f48 gate 1507552827358608236 evaluation fails
[orderer/common/broadcast] Handle -> WARN 28f Rejecting CONFIG_UPDATE because: Error authorizing update: Error validating DeltaSet: Policy for [Groups] /Channel/Application not satisfied: Failed to reach implicit threshold of 1 sub-policies, required 1 remaining ```
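[Editor's note] The log lines above show why the orderer returned BAD_REQUEST: the channel-creation transaction was signed by a plain Org1MSP member identity, but the ChannelCreationPolicy requires an org admin ("Checking if identity satisfies ADMIN role for Org1MSP ... This identity is not an admin"). A minimal sketch of a retry signed as the Org1 admin, assuming a fabric-samples-style crypto layout (the MSP path below is an assumption, not taken from the log):

```shell
# Point the peer CLI at the Org1 *admin* MSP before creating the channel.
# CORE_PEER_LOCALMSPID / CORE_PEER_MSPCONFIGPATH are standard peer CLI env vars;
# the directory shown is a hypothetical cryptogen-generated path.
export CORE_PEER_LOCALMSPID=Org1MSP
export CORE_PEER_MSPCONFIGPATH=/etc/hyperledger/crypto-config/peerOrganizations/org1.example.com/users/Admin@org1.example.com/msp

peer channel create -o orderer.example.com:7050 -c businesschannel -f /channel-artifacts/channel.tx
```

The same command then produces a signature that can satisfy the ADMIN sub-policy of /Channel/Application, clearing the "Failed to reach implicit threshold of 1 sub-policies" rejection.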
Hi. I am trying to create a channel on my network. I have ran the following command on my peer:
``` peer channel create -o orderer.example.com:7050 -c businesschannel -f /channel-artifacts/channel.tx ```
which gave me a BAD_REQUEST response. So I checked the logs on my orderer and this is the tail of the logs, where I think the problems start:
``` [policies] GetPolicy -> DEBU 26c Returning policy Writers for evaluation
[policies] GetPolicy -> DEBU 26d Returning dummy reject all policy because Writers could not be found in /Application/Writers
[policies] GetPolicy -> DEBU 26e Returning policy Admins for evaluation
[policies] GetPolicy -> DEBU 26f Returning dummy reject all policy because Admins could not be found in /Application/Admins
[policies] GetPolicy -> DEBU 270 Returning dummy reject all policy because Readers could not be found in /Application/Readers
[policies] GetPolicy -> DEBU 271 Returning policy Readers for evaluation
[common/configtx] addToMap -> DEBU 272 Adding to config map: [Groups] /Channel
[common/configtx] addToMap -> DEBU 273 Adding to config map: [Groups] /Channel/Application
[common/configtx] addToMap -> DEBU 274 Adding to config map: [Groups] /Channel/Application/Org1MSP
[common/configtx] addToMap -> DEBU 275 Adding to config map: [Groups] /Channel/Application/Org2MSP
[common/configtx] addToMap -> DEBU 276 Adding to config map: [Values] /Channel/Consortium
[common/configtx] addToMap -> DEBU 277 Adding to config map: [Groups] /Channel
[common/configtx] addToMap -> DEBU 278 Adding to config map: [Groups] /Channel/Application
[common/configtx] addToMap -> DEBU 279 Adding to config map: [Groups] /Channel/Application/Org1MSP
[common/configtx] addToMap -> DEBU 27a Adding to config map: [Groups] /Channel/Application/Org2MSP
[common/configtx] addToMap -> DEBU 27b Adding to config map: [Policy] /Channel/Application/Admins
[common/configtx] addToMap -> DEBU 27c Adding to config map: [Policy] /Channel/Application/Writers
[common/configtx] addToMap -> DEBU 27d Adding to config map: [Policy] /Channel/Application/Readers
[common/configtx] addToMap -> DEBU 27e Adding to config map: [Values] /Channel/Consortium
[policies] GetPolicy -> DEBU 27f Returning policy ChannelCreationPolicy for evaluation
[cauthdsl] func1 -> DEBU 280 0xc420026f38 gate 1507552827357314371 evaluation starts
[cauthdsl] func2 -> DEBU 281 0xc420026f38 signed by 0 principal evaluation starts (used [false])
[cauthdsl] func2 -> DEBU 282 0xc420026f38 processing identity 0 with bytes of 0a074f7267314d53...
[msp/identity] newIdentity -> DEBU 283 Creating identity instance for ID -----BEGIN CERTIFICATE-----
MIICWDCCAf+gAwIBAgIUHTk4UwXXCm2PTeD7...
-----END CERTIFICATE-----
[cauthdsl] func2 -> DEBU 284 0xc420026f38 identity 0 does not satisfy principal: The identity is a member of a different MSP (expected Org2MSP, got Org1MSP)
[cauthdsl] func2 -> DEBU 285 0xc420026f38 principal evaluation fails
[cauthdsl] func1 -> DEBU 286 0xc420026f38 gate 1507552827357314371 evaluation fails
[cauthdsl] func1 -> DEBU 287 0xc420026f48 gate 1507552827358608236 evaluation starts
[cauthdsl] func2 -> DEBU 288 0xc420026f48 signed by 0 principal evaluation starts (used [false])
[cauthdsl] func2 -> DEBU 289 0xc420026f48 processing identity 0 with bytes of 0a074f7267314d535012d...
[msp/identity] newIdentity -> DEBU 28a Creating identity instance for ID -----BEGIN CERTIFICATE-----
MIICWDCCAf+gAwIBAgIUHTk4UwXXCm2PTeD7...
-----END CERTIFICATE-----
[msp] SatisfiesPrincipal -> DEBU 28b Checking if identity satisfies ADMIN role for Org1MSP
[cauthdsl] func2 -> DEBU 28c 0xc420026f48 identity 0 does not satisfy principal: This identity is not an admin
[cauthdsl] func2 -> DEBU 28d 0xc420026f48 principal evaluation fails
[cauthdsl] func1 -> DEBU 28e 0xc420026f48 gate 1507552827358608236 evaluation fails
[orderer/common/broadcast] Handle -> WARN 28f Rejecting CONFIG_UPDATE because: Error authorizing update: Error validating DeltaSet: Policy for [Groups] /Channel/Application not satisfied: Failed to reach implicit threshold of 1 sub-policies, required 1 remaining
Hi. I am trying to create a channel on my network. I have ran the following command on my peer:
peer channel create -o orderer.example.com:7050 -c businesschannel -f /channel-artifacts/channel.tx ```
which gave me a BAD_REQUEST response. So I checked the logs on my orderer and this is the tail of the logs, where I think the problems start:
``` [policies] GetPolicy -> DEBU 26c Returning policy Writers for evaluation
[policies] GetPolicy -> DEBU 26d Returning dummy reject all policy because Writers could not be found in /Application/Writers
[policies] GetPolicy -> DEBU 26e Returning policy Admins for evaluation
[policies] GetPolicy -> DEBU 26f Returning dummy reject all policy because Admins could not be found in /Application/Admins
[policies] GetPolicy -> DEBU 270 Returning dummy reject all policy because Readers could not be found in /Application/Readers
[policies] GetPolicy -> DEBU 271 Returning policy Readers for evaluation
[common/configtx] addToMap -> DEBU 272 Adding to config map: [Groups] /Channel
[common/configtx] addToMap -> DEBU 273 Adding to config map: [Groups] /Channel/Application
[common/configtx] addToMap -> DEBU 274 Adding to config map: [Groups] /Channel/Application/Org1MSP
[common/configtx] addToMap -> DEBU 275 Adding to config map: [Groups] /Channel/Application/Org2MSP
[common/configtx] addToMap -> DEBU 276 Adding to config map: [Values] /Channel/Consortium
[common/configtx] addToMap -> DEBU 277 Adding to config map: [Groups] /Channel
[common/configtx] addToMap -> DEBU 278 Adding to config map: [Groups] /Channel/Application
[common/configtx] addToMap -> DEBU 279 Adding to config map: [Groups] /Channel/Application/Org1MSP
[common/configtx] addToMap -> DEBU 27a Adding to config map: [Groups] /Channel/Application/Org2MSP
[common/configtx] addToMap -> DEBU 27b Adding to config map: [Policy] /Channel/Application/Admins
[common/configtx] addToMap -> DEBU 27c Adding to config map: [Policy] /Channel/Application/Writers
[common/configtx] addToMap -> DEBU 27d Adding to config map: [Policy] /Channel/Application/Readers
[common/configtx] addToMap -> DEBU 27e Adding to config map: [Values] /Channel/Consortium
[policies] GetPolicy -> DEBU 27f Returning policy ChannelCreationPolicy for evaluation
[cauthdsl] func1 -> DEBU 280 0xc420026f38 gate 1507552827357314371 evaluation starts
[cauthdsl] func2 -> DEBU 281 0xc420026f38 signed by 0 principal evaluation starts (used [false])
[cauthdsl] func2 -> DEBU 282 0xc420026f38 processing identity 0 with bytes of 0a074f7267314d53...
[msp/identity] newIdentity -> DEBU 283 Creating identity instance for ID -----BEGIN CERTIFICATE-----
MIICWDCCAf+gAwIBAgIUHTk4UwXXCm2PTeD7...
-----END CERTIFICATE-----
[cauthdsl] func2 -> DEBU 284 0xc420026f38 identity 0 does not satisfy principal: The identity is a member of a different MSP (expected Org2MSP, got Org1MSP)
[cauthdsl] func2 -> DEBU 285 0xc420026f38 principal evaluation fails
[cauthdsl] func1 -> DEBU 286 0xc420026f38 gate 1507552827357314371 evaluation fails
[cauthdsl] func1 -> DEBU 287 0xc420026f48 gate 1507552827358608236 evaluation starts
[cauthdsl] func2 -> DEBU 288 0xc420026f48 signed by 0 principal evaluation starts (used [false])
[cauthdsl] func2 -> DEBU 289 0xc420026f48 processing identity 0 with bytes of 0a074f7267314d535012d...
[msp/identity] newIdentity -> DEBU 28a Creating identity instance for ID -----BEGIN CERTIFICATE-----
MIICWDCCAf+gAwIBAgIUHTk4UwXXCm2PTeD7...
-----END CERTIFICATE-----
[msp] SatisfiesPrincipal -> DEBU 28b Checking if identity satisfies ADMIN role for Org1MSP
[cauthdsl] func2 -> DEBU 28c 0xc420026f48 identity 0 does not satisfy principal: This identity is not an admin
[cauthdsl] func2 -> DEBU 28d 0xc420026f48 principal evaluation fails
[cauthdsl] func1 -> DEBU 28e 0xc420026f48 gate 1507552827358608236 evaluation fails
[orderer/common/broadcast] Handle -> WARN 28f Rejecting CONFIG_UPDATE because: Error authorizing update: Error validating DeltaSet: Policy for [Groups] /Channel/Application not satisfied: Failed to reach implicit threshold of 1 sub-policies, required 1 remaining
Hi. I am trying to create a channel on my network. I have ran the following command on my peer:
``` peer channel create -o orderer.example.com:7050 -c businesschannel -f /channel-artifacts/channel.tx ```
which gave me a BAD_REQUEST response. So I checked the logs on my orderer and this is the tail of the logs, where I think the problems start:
[policies] GetPolicy -> DEBU 26c Returning policy Writers for evaluation
[policies] GetPolicy -> DEBU 26d Returning dummy reject all policy because Writers could not be found in /Application/Writers
[policies] GetPolicy -> DEBU 26e Returning policy Admins for evaluation
[policies] GetPolicy -> DEBU 26f Returning dummy reject all policy because Admins could not be found in /Application/Admins
[policies] GetPolicy -> DEBU 270 Returning dummy reject all policy because Readers could not be found in /Application/Readers
[policies] GetPolicy -> DEBU 271 Returning policy Readers for evaluation
[common/configtx] addToMap -> DEBU 272 Adding to config map: [Groups] /Channel
[common/configtx] addToMap -> DEBU 273 Adding to config map: [Groups] /Channel/Application
[common/configtx] addToMap -> DEBU 274 Adding to config map: [Groups] /Channel/Application/Org1MSP
[common/configtx] addToMap -> DEBU 275 Adding to config map: [Groups] /Channel/Application/Org2MSP
[common/configtx] addToMap -> DEBU 276 Adding to config map: [Values] /Channel/Consortium
[common/configtx] addToMap -> DEBU 277 Adding to config map: [Groups] /Channel
[common/configtx] addToMap -> DEBU 278 Adding to config map: [Groups] /Channel/Application
[common/configtx] addToMap -> DEBU 279 Adding to config map: [Groups] /Channel/Application/Org1MSP
[common/configtx] addToMap -> DEBU 27a Adding to config map: [Groups] /Channel/Application/Org2MSP
[common/configtx] addToMap -> DEBU 27b Adding to config map: [Policy] /Channel/Application/Admins
[common/configtx] addToMap -> DEBU 27c Adding to config map: [Policy] /Channel/Application/Writers
[common/configtx] addToMap -> DEBU 27d Adding to config map: [Policy] /Channel/Application/Readers
[common/configtx] addToMap -> DEBU 27e Adding to config map: [Values] /Channel/Consortium
[policies] GetPolicy -> DEBU 27f Returning policy ChannelCreationPolicy for evaluation
[cauthdsl] func1 -> DEBU 280 0xc420026f38 gate 1507552827357314371 evaluation starts
[cauthdsl] func2 -> DEBU 281 0xc420026f38 signed by 0 principal evaluation starts (used [false])
[cauthdsl] func2 -> DEBU 282 0xc420026f38 processing identity 0 with bytes of 0a074f7267314d53...
[msp/identity] newIdentity -> DEBU 283 Creating identity instance for ID -----BEGIN CERTIFICATE-----
MIICWDCCAf+gAwIBAgIUHTk4UwXXCm2PTeD7...
-----END CERTIFICATE-----
[cauthdsl] func2 -> DEBU 284 0xc420026f38 identity 0 does not satisfy principal: The identity is a member of a different MSP (expected Org2MSP, got Org1MSP)
[cauthdsl] func2 -> DEBU 285 0xc420026f38 principal evaluation fails
[cauthdsl] func1 -> DEBU 286 0xc420026f38 gate 1507552827357314371 evaluation fails
[cauthdsl] func1 -> DEBU 287 0xc420026f48 gate 1507552827358608236 evaluation starts
[cauthdsl] func2 -> DEBU 288 0xc420026f48 signed by 0 principal evaluation starts (used [false])
[cauthdsl] func2 -> DEBU 289 0xc420026f48 processing identity 0 with bytes of 0a074f7267314d535012d...
[msp/identity] newIdentity -> DEBU 28a Creating identity instance for ID -----BEGIN CERTIFICATE-----
MIICWDCCAf+gAwIBAgIUHTk4UwXXCm2PTeD7...
-----END CERTIFICATE-----
[msp] SatisfiesPrincipal -> DEBU 28b Checking if identity satisfies ADMIN role for Org1MSP
[cauthdsl] func2 -> DEBU 28c 0xc420026f48 identity 0 does not satisfy principal: This identity is not an admin
[cauthdsl] func2 -> DEBU 28d 0xc420026f48 principal evaluation fails
[cauthdsl] func1 -> DEBU 28e 0xc420026f48 gate 1507552827358608236 evaluation fails
[orderer/common/broadcast] Handle -> WARN 28f Rejecting CONFIG_UPDATE because: Error authorizing update: Error validating DeltaSet: Policy for [Groups] /Channel/Application not satisfied: Failed to reach implicit threshold of 1 sub-policies, required 1 remaining
Hi. I am trying to create a channel on my network. I have ran the following command on my peer:
``` peer channel create -o orderer.example.com:7050 -c businesschannel -f /channel-artifacts/channel.tx ```
which gave me a BAD_REQUEST response. So I checked the logs on my orderer and this is the tail of the logs, where I think the problems start:
``` [policies] GetPolicy -> DEBU 26c Returning policy Writers for evaluation
[policies] GetPolicy -> DEBU 26d Returning dummy reject all policy because Writers could not be found in /Application/Writers
[policies] GetPolicy -> DEBU 26e Returning policy Admins for evaluation
[policies] GetPolicy -> DEBU 26f Returning dummy reject all policy because Admins could not be found in /Application/Admins
[policies] GetPolicy -> DEBU 270 Returning dummy reject all policy because Readers could not be found in /Application/Readers
[policies] GetPolicy -> DEBU 271 Returning policy Readers for evaluation
[common/configtx] addToMap -> DEBU 272 Adding to config map: [Groups] /Channel
[common/configtx] addToMap -> DEBU 273 Adding to config map: [Groups] /Channel/Application
[common/configtx] addToMap -> DEBU 274 Adding to config map: [Groups] /Channel/Application/Org1MSP
[common/configtx] addToMap -> DEBU 275 Adding to config map: [Groups] /Channel/Application/Org2MSP
[common/configtx] addToMap -> DEBU 276 Adding to config map: [Values] /Channel/Consortium
[common/configtx] addToMap -> DEBU 277 Adding to config map: [Groups] /Channel
[common/configtx] addToMap -> DEBU 278 Adding to config map: [Groups] /Channel/Application
[common/configtx] addToMap -> DEBU 279 Adding to config map: [Groups] /Channel/Application/Org1MSP
[common/configtx] addToMap -> DEBU 27a Adding to config map: [Groups] /Channel/Application/Org2MSP
[common/configtx] addToMap -> DEBU 27b Adding to config map: [Policy] /Channel/Application/Admins
[common/configtx] addToMap -> DEBU 27c Adding to config map: [Policy] /Channel/Application/Writers
[common/configtx] addToMap -> DEBU 27d Adding to config map: [Policy] /Channel/Application/Readers
[common/configtx] addToMap -> DEBU 27e Adding to config map: [Values] /Channel/Consortium
[policies] GetPolicy -> DEBU 27f Returning policy ChannelCreationPolicy for evaluation
[cauthdsl] func1 -> DEBU 280 0xc420026f38 gate 1507552827357314371 evaluation starts
[cauthdsl] func2 -> DEBU 281 0xc420026f38 signed by 0 principal evaluation starts (used [false])
[cauthdsl] func2 -> DEBU 282 0xc420026f38 processing identity 0 with bytes of 0a074f7267314d53...
[msp/identity] newIdentity -> DEBU 283 Creating identity instance for ID -----BEGIN CERTIFICATE-----
MIICWDCCAf+gAwIBAgIUHTk4UwXXCm2PTeD7...
-----END CERTIFICATE-----
[cauthdsl] func2 -> DEBU 284 0xc420026f38 identity 0 does not satisfy principal: The identity is a member of a different MSP (expected Org2MSP, got Org1MSP)
[cauthdsl] func2 -> DEBU 285 0xc420026f38 principal evaluation fails
[cauthdsl] func1 -> DEBU 286 0xc420026f38 gate 1507552827357314371 evaluation fails
[cauthdsl] func1 -> DEBU 287 0xc420026f48 gate 1507552827358608236 evaluation starts
[cauthdsl] func2 -> DEBU 288 0xc420026f48 signed by 0 principal evaluation starts (used [false])
[cauthdsl] func2 -> DEBU 289 0xc420026f48 processing identity 0 with bytes of 0a074f7267314d535012d...
[msp/identity] newIdentity -> DEBU 28a Creating identity instance for ID -----BEGIN CERTIFICATE-----
MIICWDCCAf+gAwIBAgIUHTk4UwXXCm2PTeD7...
-----END CERTIFICATE-----
[msp] SatisfiesPrincipal -> DEBU 28b Checking if identity satisfies ADMIN role for Org1MSP
[cauthdsl] func2 -> DEBU 28c 0xc420026f48 identity 0 does not satisfy principal: This identity is not an admin
[cauthdsl] func2 -> DEBU 28d 0xc420026f48 principal evaluation fails
[cauthdsl] func1 -> DEBU 28e 0xc420026f48 gate 1507552827358608236 evaluation fails
[orderer/common/broadcast] Handle -> WARN 28f Rejecting CONFIG_UPDATE because: Error authorizing update: Error validating DeltaSet: Policy for [Groups] /Channel/Application not satisfied: Failed to reach implicit threshold of 1 sub-policies, required 1 remaining ```
Hi. I am trying to create a channel on my network. I have ran the following command on my peer:
```peer channel create -o orderer.example.com:7050 -c businesschannel -f /channel-artifacts/channel.tx ```
which gave me a BAD_REQUEST response. So I checked the logs on my orderer and this is the tail of the logs, where I think the problems start:
``` [policies] GetPolicy -> DEBU 26c Returning policy Writers for evaluation
[policies] GetPolicy -> DEBU 26d Returning dummy reject all policy because Writers could not be found in /Application/Writers
[policies] GetPolicy -> DEBU 26e Returning policy Admins for evaluation
[policies] GetPolicy -> DEBU 26f Returning dummy reject all policy because Admins could not be found in /Application/Admins
[policies] GetPolicy -> DEBU 270 Returning dummy reject all policy because Readers could not be found in /Application/Readers
[policies] GetPolicy -> DEBU 271 Returning policy Readers for evaluation
[common/configtx] addToMap -> DEBU 272 Adding to config map: [Groups] /Channel
[common/configtx] addToMap -> DEBU 273 Adding to config map: [Groups] /Channel/Application
[common/configtx] addToMap -> DEBU 274 Adding to config map: [Groups] /Channel/Application/Org1MSP
[common/configtx] addToMap -> DEBU 275 Adding to config map: [Groups] /Channel/Application/Org2MSP
[common/configtx] addToMap -> DEBU 276 Adding to config map: [Values] /Channel/Consortium
[common/configtx] addToMap -> DEBU 277 Adding to config map: [Groups] /Channel
[common/configtx] addToMap -> DEBU 278 Adding to config map: [Groups] /Channel/Application
[common/configtx] addToMap -> DEBU 279 Adding to config map: [Groups] /Channel/Application/Org1MSP
[common/configtx] addToMap -> DEBU 27a Adding to config map: [Groups] /Channel/Application/Org2MSP
[common/configtx] addToMap -> DEBU 27b Adding to config map: [Policy] /Channel/Application/Admins
[common/configtx] addToMap -> DEBU 27c Adding to config map: [Policy] /Channel/Application/Writers
[common/configtx] addToMap -> DEBU 27d Adding to config map: [Policy] /Channel/Application/Readers
[common/configtx] addToMap -> DEBU 27e Adding to config map: [Values] /Channel/Consortium
[policies] GetPolicy -> DEBU 27f Returning policy ChannelCreationPolicy for evaluation
[cauthdsl] func1 -> DEBU 280 0xc420026f38 gate 1507552827357314371 evaluation starts
[cauthdsl] func2 -> DEBU 281 0xc420026f38 signed by 0 principal evaluation starts (used [false])
[cauthdsl] func2 -> DEBU 282 0xc420026f38 processing identity 0 with bytes of 0a074f7267314d53...
[msp/identity] newIdentity -> DEBU 283 Creating identity instance for ID -----BEGIN CERTIFICATE-----
MIICWDCCAf+gAwIBAgIUHTk4UwXXCm2PTeD7...
-----END CERTIFICATE-----
[cauthdsl] func2 -> DEBU 284 0xc420026f38 identity 0 does not satisfy principal: The identity is a member of a different MSP (expected Org2MSP, got Org1MSP)
[cauthdsl] func2 -> DEBU 285 0xc420026f38 principal evaluation fails
[cauthdsl] func1 -> DEBU 286 0xc420026f38 gate 1507552827357314371 evaluation fails
[cauthdsl] func1 -> DEBU 287 0xc420026f48 gate 1507552827358608236 evaluation starts
[cauthdsl] func2 -> DEBU 288 0xc420026f48 signed by 0 principal evaluation starts (used [false])
[cauthdsl] func2 -> DEBU 289 0xc420026f48 processing identity 0 with bytes of 0a074f7267314d535012d...
[msp/identity] newIdentity -> DEBU 28a Creating identity instance for ID -----BEGIN CERTIFICATE-----
MIICWDCCAf+gAwIBAgIUHTk4UwXXCm2PTeD7...
-----END CERTIFICATE-----
[msp] SatisfiesPrincipal -> DEBU 28b Checking if identity satisfies ADMIN role for Org1MSP
[cauthdsl] func2 -> DEBU 28c 0xc420026f48 identity 0 does not satisfy principal: This identity is not an admin
[cauthdsl] func2 -> DEBU 28d 0xc420026f48 principal evaluation fails
[cauthdsl] func1 -> DEBU 28e 0xc420026f48 gate 1507552827358608236 evaluation fails
[orderer/common/broadcast] Handle -> WARN 28f Rejecting CONFIG_UPDATE because: Error authorizing update: Error validating DeltaSet: Policy for [Groups] /Channel/Application not satisfied: Failed to reach implicit threshold of 1 sub-policies, required 1 remaining ```
Hi. I am trying to create a channel on my network. I have ran the following command on my peer:
```peer channel create -o orderer.example.com:7050 -c businesschannel -f /channel-artifacts/channel.tx ```
which gave me a BAD_REQUEST response. So I checked the logs on my orderer and this is the tail of the logs, where I think the problems start:
``` [policies] GetPolicy -> DEBU 26c Returning policy Writers for evaluation
[policies] GetPolicy -> DEBU 26d Returning dummy reject all policy because Writers could not be found in /Application/Writers
[policies] GetPolicy -> DEBU 26e Returning policy Admins for evaluation
[policies] GetPolicy -> DEBU 26f Returning dummy reject all policy because Admins could not be found in /Application/Admins
[policies] GetPolicy -> DEBU 270 Returning dummy reject all policy because Readers could not be found in /Application/Readers
[policies] GetPolicy -> DEBU 271 Returning policy Readers for evaluation
[common/configtx] addToMap -> DEBU 272 Adding to config map: [Groups] /Channel
[common/configtx] addToMap -> DEBU 273 Adding to config map: [Groups] /Channel/Application
[common/configtx] addToMap -> DEBU 274 Adding to config map: [Groups] /Channel/Application/Org1MSP
[common/configtx] addToMap -> DEBU 275 Adding to config map: [Groups] /Channel/Application/Org2MSP
[common/configtx] addToMap -> DEBU 276 Adding to config map: [Values] /Channel/Consortium
[common/configtx] addToMap -> DEBU 277 Adding to config map: [Groups] /Channel
[common/configtx] addToMap -> DEBU 278 Adding to config map: [Groups] /Channel/Application
[common/configtx] addToMap -> DEBU 279 Adding to config map: [Groups] /Channel/Application/Org1MSP
[common/configtx] addToMap -> DEBU 27a Adding to config map: [Groups] /Channel/Application/Org2MSP
[common/configtx] addToMap -> DEBU 27b Adding to config map: [Policy] /Channel/Application/Admins
[common/configtx] addToMap -> DEBU 27c Adding to config map: [Policy] /Channel/Application/Writers
[common/configtx] addToMap -> DEBU 27d Adding to config map: [Policy] /Channel/Application/Readers
[common/configtx] addToMap -> DEBU 27e Adding to config map: [Values] /Channel/Consortium
[policies] GetPolicy -> DEBU 27f Returning policy ChannelCreationPolicy for evaluation
[cauthdsl] func1 -> DEBU 280 0xc420026f38 gate 1507552827357314371 evaluation starts
[cauthdsl] func2 -> DEBU 281 0xc420026f38 signed by 0 principal evaluation starts (used [false])
[cauthdsl] func2 -> DEBU 282 0xc420026f38 processing identity 0 with bytes of 0a074f7267314d53...
[msp/identity] newIdentity -> DEBU 283 Creating identity instance for ID -----BEGIN CERTIFICATE-----
MIICWDCCAf+gAwIBAgIUHTk4UwXXCm2PTeD7...
-----END CERTIFICATE-----
[cauthdsl] func2 -> DEBU 284 0xc420026f38 identity 0 does not satisfy principal: The identity is a member of a different MSP (expected Org2MSP, got Org1MSP)
[cauthdsl] func2 -> DEBU 285 0xc420026f38 principal evaluation fails
[cauthdsl] func1 -> DEBU 286 0xc420026f38 gate 1507552827357314371 evaluation fails
[cauthdsl] func1 -> DEBU 287 0xc420026f48 gate 1507552827358608236 evaluation starts
[cauthdsl] func2 -> DEBU 288 0xc420026f48 signed by 0 principal evaluation starts (used [false])
[cauthdsl] func2 -> DEBU 289 0xc420026f48 processing identity 0 with bytes of 0a074f7267314d535012d...
[msp/identity] newIdentity -> DEBU 28a Creating identity instance for ID -----BEGIN CERTIFICATE-----
MIICWDCCAf+gAwIBAgIUHTk4UwXXCm2PTeD7...
-----END CERTIFICATE-----
[msp] SatisfiesPrincipal -> DEBU 28b Checking if identity satisfies ADMIN role for Org1MSP
[cauthdsl] func2 -> DEBU 28c 0xc420026f48 identity 0 does not satisfy principal: This identity is not an admin
[cauthdsl] func2 -> DEBU 28d 0xc420026f48 principal evaluation fails
[cauthdsl] func1 -> DEBU 28e 0xc420026f48 gate 1507552827358608236 evaluation fails
[orderer/common/broadcast] Handle -> WARN 28f Rejecting CONFIG_UPDATE because: Error authorizing update: Error validating DeltaSet: Policy for [Groups] /Channel/Application not satisfied: Failed to reach implicit threshold of 1 sub-policies, required 1 remaining ```
Hi. I am trying to create a channel on my network. I have ran the following command on my peer:
``` peer channel create -o orderer.example.com:7050 -c businesschannel -f /channel-artifacts/channel.tx ```
which gave me a BAD_REQUEST response. So I checked the logs on my orderer and this is the tail of the logs, where I think the problems start:
``` [policies] GetPolicy -> DEBU 26c Returning policy Writers for evaluation
[policies] GetPolicy -> DEBU 26d Returning dummy reject all policy because Writers could not be found in /Application/Writers
[policies] GetPolicy -> DEBU 26e Returning policy Admins for evaluation
[policies] GetPolicy -> DEBU 26f Returning dummy reject all policy because Admins could not be found in /Application/Admins
[policies] GetPolicy -> DEBU 270 Returning dummy reject all policy because Readers could not be found in /Application/Readers
[policies] GetPolicy -> DEBU 271 Returning policy Readers for evaluation
[common/configtx] addToMap -> DEBU 272 Adding to config map: [Groups] /Channel
[common/configtx] addToMap -> DEBU 273 Adding to config map: [Groups] /Channel/Application
[common/configtx] addToMap -> DEBU 274 Adding to config map: [Groups] /Channel/Application/Org1MSP
[common/configtx] addToMap -> DEBU 275 Adding to config map: [Groups] /Channel/Application/Org2MSP
[common/configtx] addToMap -> DEBU 276 Adding to config map: [Values] /Channel/Consortium
[common/configtx] addToMap -> DEBU 277 Adding to config map: [Groups] /Channel
[common/configtx] addToMap -> DEBU 278 Adding to config map: [Groups] /Channel/Application
[common/configtx] addToMap -> DEBU 279 Adding to config map: [Groups] /Channel/Application/Org1MSP
[common/configtx] addToMap -> DEBU 27a Adding to config map: [Groups] /Channel/Application/Org2MSP
[common/configtx] addToMap -> DEBU 27b Adding to config map: [Policy] /Channel/Application/Admins
[common/configtx] addToMap -> DEBU 27c Adding to config map: [Policy] /Channel/Application/Writers
[common/configtx] addToMap -> DEBU 27d Adding to config map: [Policy] /Channel/Application/Readers
[common/configtx] addToMap -> DEBU 27e Adding to config map: [Values] /Channel/Consortium
[policies] GetPolicy -> DEBU 27f Returning policy ChannelCreationPolicy for evaluation
[cauthdsl] func1 -> DEBU 280 0xc420026f38 gate 1507552827357314371 evaluation starts
[cauthdsl] func2 -> DEBU 281 0xc420026f38 signed by 0 principal evaluation starts (used [false])
[cauthdsl] func2 -> DEBU 282 0xc420026f38 processing identity 0 with bytes of 0a074f7267314d53...
[msp/identity] newIdentity -> DEBU 283 Creating identity instance for ID -----BEGIN CERTIFICATE-----
MIICWDCCAf+gAwIBAgIUHTk4UwXXCm2PTeD7...
-----END CERTIFICATE-----
[cauthdsl] func2 -> DEBU 284 0xc420026f38 identity 0 does not satisfy principal: The identity is a member of a different MSP (expected Org2MSP, got Org1MSP)
[cauthdsl] func2 -> DEBU 285 0xc420026f38 principal evaluation fails
[cauthdsl] func1 -> DEBU 286 0xc420026f38 gate 1507552827357314371 evaluation fails
[cauthdsl] func1 -> DEBU 287 0xc420026f48 gate 1507552827358608236 evaluation starts
[cauthdsl] func2 -> DEBU 288 0xc420026f48 signed by 0 principal evaluation starts (used [false])
[cauthdsl] func2 -> DEBU 289 0xc420026f48 processing identity 0 with bytes of 0a074f7267314d535012d...
[msp/identity] newIdentity -> DEBU 28a Creating identity instance for ID -----BEGIN CERTIFICATE-----
MIICWDCCAf+gAwIBAgIUHTk4UwXXCm2PTeD7...
-----END CERTIFICATE-----
[msp] SatisfiesPrincipal -> DEBU 28b Checking if identity satisfies ADMIN role for Org1MSP
[cauthdsl] func2 -> DEBU 28c 0xc420026f48 identity 0 does not satisfy principal: This identity is not an admin
[cauthdsl] func2 -> DEBU 28d 0xc420026f48 principal evaluation fails
[cauthdsl] func1 -> DEBU 28e 0xc420026f48 gate 1507552827358608236 evaluation fails
[orderer/common/broadcast] Handle -> WARN 28f Rejecting CONFIG_UPDATE because: Error authorizing update: Error validating DeltaSet: Policy for [Groups] /Channel/Application not satisfied: Failed to reach implicit threshold of 1 sub-policies, required 1 remaining ```
Hi. I am trying to create a channel on my network. I ran the following command on my peer:
`peer channel create -o orderer.example.com:7050 -c businesschannel -f /channel-artifacts/channel.tx`
which gave me a BAD_REQUEST response. So I checked the logs on my orderer; this is the tail of the logs, where I think the problem starts:
```[policies] GetPolicy -> DEBU 26c Returning policy Writers for evaluation
[policies] GetPolicy -> DEBU 26d Returning dummy reject all policy because Writers could not be found in /Application/Writers
[policies] GetPolicy -> DEBU 26e Returning policy Admins for evaluation
[policies] GetPolicy -> DEBU 26f Returning dummy reject all policy because Admins could not be found in /Application/Admins
[policies] GetPolicy -> DEBU 270 Returning dummy reject all policy because Readers could not be found in /Application/Readers
[policies] GetPolicy -> DEBU 271 Returning policy Readers for evaluation
[common/configtx] addToMap -> DEBU 272 Adding to config map: [Groups] /Channel
[common/configtx] addToMap -> DEBU 273 Adding to config map: [Groups] /Channel/Application
[common/configtx] addToMap -> DEBU 274 Adding to config map: [Groups] /Channel/Application/Org1MSP
[common/configtx] addToMap -> DEBU 275 Adding to config map: [Groups] /Channel/Application/Org2MSP
[common/configtx] addToMap -> DEBU 276 Adding to config map: [Values] /Channel/Consortium
[common/configtx] addToMap -> DEBU 277 Adding to config map: [Groups] /Channel
[common/configtx] addToMap -> DEBU 278 Adding to config map: [Groups] /Channel/Application
[common/configtx] addToMap -> DEBU 279 Adding to config map: [Groups] /Channel/Application/Org1MSP
[common/configtx] addToMap -> DEBU 27a Adding to config map: [Groups] /Channel/Application/Org2MSP
[common/configtx] addToMap -> DEBU 27b Adding to config map: [Policy] /Channel/Application/Admins
[common/configtx] addToMap -> DEBU 27c Adding to config map: [Policy] /Channel/Application/Writers
[common/configtx] addToMap -> DEBU 27d Adding to config map: [Policy] /Channel/Application/Readers
[common/configtx] addToMap -> DEBU 27e Adding to config map: [Values] /Channel/Consortium
[policies] GetPolicy -> DEBU 27f Returning policy ChannelCreationPolicy for evaluation
[cauthdsl] func1 -> DEBU 280 0xc420026f38 gate 1507552827357314371 evaluation starts
[cauthdsl] func2 -> DEBU 281 0xc420026f38 signed by 0 principal evaluation starts (used [false])
[cauthdsl] func2 -> DEBU 282 0xc420026f38 processing identity 0 with bytes of 0a074f7267314d53...
[msp/identity] newIdentity -> DEBU 283 Creating identity instance for ID -----BEGIN CERTIFICATE-----
MIICWDCCAf+gAwIBAgIUHTk4UwXXCm2PTeD7...
-----END CERTIFICATE-----
[cauthdsl] func2 -> DEBU 284 0xc420026f38 identity 0 does not satisfy principal: The identity is a member of a different MSP (expected Org2MSP, got Org1MSP)
[cauthdsl] func2 -> DEBU 285 0xc420026f38 principal evaluation fails
[cauthdsl] func1 -> DEBU 286 0xc420026f38 gate 1507552827357314371 evaluation fails
[cauthdsl] func1 -> DEBU 287 0xc420026f48 gate 1507552827358608236 evaluation starts
[cauthdsl] func2 -> DEBU 288 0xc420026f48 signed by 0 principal evaluation starts (used [false])
[cauthdsl] func2 -> DEBU 289 0xc420026f48 processing identity 0 with bytes of 0a074f7267314d535012d...
[msp/identity] newIdentity -> DEBU 28a Creating identity instance for ID -----BEGIN CERTIFICATE-----
MIICWDCCAf+gAwIBAgIUHTk4UwXXCm2PTeD7...
-----END CERTIFICATE-----
[msp] SatisfiesPrincipal -> DEBU 28b Checking if identity satisfies ADMIN role for Org1MSP
[cauthdsl] func2 -> DEBU 28c 0xc420026f48 identity 0 does not satisfy principal: This identity is not an admin
[cauthdsl] func2 -> DEBU 28d 0xc420026f48 principal evaluation fails
[cauthdsl] func1 -> DEBU 28e 0xc420026f48 gate 1507552827358608236 evaluation fails
[orderer/common/broadcast] Handle -> WARN 28f Rejecting CONFIG_UPDATE because: Error authorizing update: Error validating DeltaSet: Policy for [Groups] /Channel/Application not satisfied: Failed to reach implicit threshold of 1 sub-policies, required 1 remaining ```
I do not understand where I went wrong. Before calling the channel create, should I do anything else?
@SimonOberzan Please do not post large segments of logs in this channel. Use a service like http://hastebin.com I will edit your post for you, but please do this yourself next time
@jyellick Ok, didn't know that
@SimonOberzan The error you are seeing is because the certificate you are submitting the channel creation request with is not an admin certificate
@jyellick Which one do you mean? The one specified in CORE_PEER_MSPCONFIGPATH?
Correct
Ok, thank you
@SimonOberzan You will want to set that path to something like:
```crypto-config/peerOrganizations/org1.example.com/users/Admin\@org1.example.com/msp/```
@jyellick Yeah I saw something like this in an example, when called from a docker-tools container, but I am executing this command in a peer container and I have the variable already set to peer/msp. Would changing it cause problems somewhere else?
@SimonOberzan Only if you restart the peer binary. Each process in the fabric network needs a cryptographic identity (controlled by that path). Since you are in the peer container, that identity is set to be that peer. Since env variables can't retroactively affect an already started process, you should be safe to modify it.
@jyellick Oh I see, thank you
Has joined the channel.
After reading the Orderer-Kafka technical document, the decision to add a block is hinted using TTC-
@rahulhegde Only the first time-to-cut message for a particular block number is honored
When tm2 was received, it would start a new timer to send the ttcb-block-11 message
Would the timer be started by the OSN only once it receives transactions from the ordering service client, and not from Kafka?
Ah, so perhaps that's the crux of the issue. The timer never starts when receiving directly from clients, only after the message has been ordered by Kafka
So, before either orderer sent `ttcb-block-10` they must have had other pending transactions. When block 10 was actually cut/committed, the OSNs kill the timer, until there is an outstanding batch, then start it again. In your example, which would have happened after tm2 was ordered and received by the OSN
Thanks @jyellick - I was under the impression that the OSN starts the timer upon receiving the first message from the OSN client.
From the doc:
> A batch has just been cut and a new transaction comes in via, say, OSN1. It gets posted to the partition. OSN2 reads it at time t=5s and sets a timer that will fire at t=6s.
Has joined the channel.
@jyellick I see you added a more direct method to configure the configtx: https://github.com/hyperledger/fabric/commit/49e427d7876c9d1eb147b506df986de8edbd586f
so I can use it in my https://jira.hyperledger.org/browse/FAB-6527
Has joined the channel.
Hi, I want to know whether it is possible to generate two channels on one orderer? Currently, when I start orderer node, I provide the info about the genesis.block to it. How can I provide a second genesis block to the same orderer if two channels have to be managed by it. Thanks in advance
Of course, you can set up two channels; the genesis block is used to define the system channel
@chfalak
I think there is a one-to-one correspondence between genesis blocks and channels?
I mean each channel has a unique genesis block
Has left the channel.
@chfalak: When you start the ordering service node, you point it to the genesis block _for the system channel_ (as @asaningmaxchain notes). See: https://github.com/hyperledger/fabric/blob/master/sampleconfig/orderer.yaml#L62
You can then send a channel creation transaction to the ordering service node; this will generate a genesis block for that channel.
Using the peer CLI this is what is returned via the `peer channel create` command, and this is what you use to join the peer on the channel: https://hyperledger-fabric.readthedocs.io/en/latest/build_network.html#create-join-channel
To conclude: yes, you can generate two (in fact as many as `MaxChannels`) on an ordering service: https://github.com/hyperledger/fabric/blob/master/sampleconfig/configtx.yaml#L241
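For reference, the knob mentioned above lives under the `Orderer` section of the sample `configtx.yaml` (fragment; a value of 0 means no limit):

```yaml
# configtx.yaml (fragment, under the Orderer defaults)
Orderer: &OrdererDefaults
  # MaxChannels is the maximum number of channels to allow on the
  # ordering network. When set to 0, this implies no maximum.
  MaxChannels: 0
```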
@kostas I see the orderer.yaml defines a Capabilities element that is used to define the capability for each component?
Yes.
Hi, I have successfully created a channel from a peer, but when I try to join the peer to it I get the following output: ```2017-10-10 11:31:33.665 UTC [msp] GetLocalMSP -> DEBU 001 Returning existing local MSP
2017-10-10 11:31:33.665 UTC [msp] GetDefaultSigningIdentity -> DEBU 002 Obtaining default signing identity
2017-10-10 11:31:33.666 UTC [channelCmd] InitCmdFactory -> INFO 003 Endorser and orderer connections initialized
2017-10-10 11:31:33.667 UTC [msp/identity] Sign -> DEBU 004 Sign: plaintext: 0A8A070A5C08011A0C0895DFF2CE0510...665C47702A6B1A080A000A000A000A00
2017-10-10 11:31:33.667 UTC [msp/identity] Sign -> DEBU 005 Sign: digest: 781ABB6D0D6CA1F05E578291B513EB51CFCCD8399E8F2CD2E34D83F6FD15363C
Error: proposal failed (err: rpc error: code = Unknown desc = chaincode error (status: 500, message: "JoinChain" request failed authorization check for channel [businesschannel]: [Failed verifying that proposal's creator satisfies local MSP principal during channelless check policy with policy [Admins]: [This identity is not an admin]]))
```
The admin cert on CORE_PEER_MSPCONFIGPATH was generated by cryptogen. Shouldn't the settings that were used to create the channel be good enough to join it?
@SimonOberzan Which cert are you using to sign the transaction?
`This identity is not an admin` typically means you used the wrong cert
if you are using the BYFN setup, use the Admin1@orgX certs
The one generated by cryptogen. The path: `peerOrganizations/org1.example.com/users/Admin@org1.example.com/msp/admincerts/Admin@org1.example.com-cert.pem`
I'm creating and joining the channel on peer0.org1.example.com
Has joined the channel.
And when you created the genesis block, you are sure Org1 was a part of that channel? Because it seems like what you are doing 'should' work
@Asara You mean in configtx.yaml? That is the content: https://hastebin.com/fawawisosa.makefile
@Asara what is the MSPDir in configtx.yaml pointing to? Should such path exist on the peer when calling join channel?
@kostas Can you please send me a configtx.yaml file (not a basic one).
@Asara I checked the Org1 admincert on path specified with MSPDir in configtx.yaml and it is the same as in $CORE_PEER_MSPCONFIGPATH/admincert
@chfalak: Not sure what you mean by basic, or what exactly you're after? When you run the E2E CLI test for instance, a valid `configtx.yaml` is generated.
Also admincert and signcerts are the same in $CORE_PEER_MSPCONFIGPATH/. Is that OK?
@jyellick Could you please take a look?
@SimonOberzan Since you created the channel successfully, I expect you are using an admin cert. However, to join a peer to a channel, you must invoke this as the admin _of that peer_. So, first identify which organization that peer belongs to, then use an admin cert for that organization in your request.
@jyellick so in my instance the following cert: `peerOrganizations/org1.example.com/peers/peer0.org1.example.com/admincerts/`?
@SimonOberzan Those are the certificates that peer0 will recognize as 'admins', but they are only the public parts. You would still need to find the user with that cert and use that as the directory. Also keep in mind, that is only for `peer0` of Org1. I'm not certain which peer you are targeting for the JoinChannel, but make sure that you are referring to the correct peer.
@jyellick Yes, I'm targeting peer0.org1, and executing commands on it. I still don't understand which certificates to provide. Could you please elaborate on where I can find them?
Has joined the channel.
Has joined the channel.
Has joined the channel.
@kostas In a Kafka-ZooKeeper setting, we are getting an exception in the orderer logs: `func2 -> DEBU 662a 0xc420970090 signature for identity 0 is invalid: Could not determine the validity of the signature, err Invalid S. Must be smaller than half the order`
Do you have any idea what could be the root cause? (I am a colleague of Rahul Hegde)
@SimonOberzan The admin cert for an org is usually at a path like `peerOrganizations/org1.example.com/users/Admin@org1.example.com/msp/admincerts/Admin@org1.example.com-cert.pem` which you referred to earlier. You can refer to the e2e scripts to see all of the associated variables which are set when a user is switched.
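For orientation, the layout cryptogen produces for that admin user's MSP directory is roughly the following (exact contents can vary by Fabric version):

```
crypto-config/peerOrganizations/org1.example.com/users/Admin@org1.example.com/msp/
├── admincerts/   # certs this MSP recognizes as admins
├── cacerts/      # root CA cert(s) for the org
├── keystore/     # the admin's private signing key
└── signcerts/    # the admin's public signing certificate
```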
@ManjeetGambhir This error seems to be from the go-sdk https://github.com/hyperledger/fabric-sdk-go/blob/master/third_party/github.com/hyperledger/fabric/bccsp/sw/ecdsa.go#L107
~The orderer would not emit this message.~ Actually, I see they vendored fabric, odd that Google sent me to the fabric-go-sdk first
@jyellick Yeah I have used that certificate for creation of the channel, and joining already when I posted the first question and it didn't work. I will look in the example that you have referenced..
@ManjeetGambhir Can you be more specific about how/when this error occurs? This would indicate to me that the orderer is checking a signature, and finding that it is not well formed according to the cryptographic rules for the corresponding MSP.
Trying to visualize how a kafka based orderer works in a diagram for a presentation. Anyone know of any good flowcharts showing this?
@t_stephens67 You might find https://docs.google.com/document/d/1vNMaM7XhOlu9tB_10dKnlrhy5d7b1u8lSY8a-kVjCO4/edit helpful
@jyellick thanks yea I have looked at that.
Does the orderer send the transaction to all the brokers in the cluster, and how does ZooKeeper come into play? I guess that is more of what I am trying to understand
@t_stephens67 The orderer uses the Sarama library which implements the Kafka protocols. From a fabric perspective it makes more sense to talk about the guarantees provided by the APIs. For sending the transaction (or in Kafka parlance, producing it), the orderer will not receive acknowledgement of that send until it has propagated to the number of in-sync replicas required, so if the client receives a `SUCCESS`, there is a guarantee that that transaction will not be lost. If you are interested in how Kafka implements these APIs, there is a multitude of good documentation available on the web, I suggest googling "Kafka Architecture" as a starting place.
@jyellick That document seems to me to be more of an evolution of the design, not the design itself. It would be extremely helpful to have a document that just describes the design IMHO.
So from what I am getting: the orderer creates a producer and a consumer (when the network is created). When a transaction is created, the producer pushes that transaction to each Kafka broker? Then once the batch size is reached (or the batch timer expires) the orderer requests the block to be cut, and consumes the block from the brokers. The orderer then sends that block on to the peers to be added to each ledger copy.
or does the zookeeper maintain the process of cutting the block and letting the orderer know its ready to be consumed
@t_stephens67 There is a topic containing exactly one partition for each channel. The orderers push (produce) transactions sent to this channel to that partition. For each channel, there is a consumer for this partition, which reads ordered transactions and deterministically cuts them into blocks. The consumer thread starts a timer when the first transaction in a block is received, if the block is not full by the time the timer expires, it sends a message to the partition indicating that it is time to cut the block. The first such message to arrive for a particular block cuts it. The important thing to note is that blocks are never pushed to Kafka, only transactions. The transactions are then cut into blocks in a deterministic fashion by each orderer.
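The deterministic cutting described above can be sketched as follows (illustrative Go, not the actual orderer code; `msg`, `cutBlocks`, and the handling of stale TTC messages are simplified assumptions):

```go
package main

import "fmt"

// msg is either a normal transaction or a time-to-cut (TTC) marker;
// blockNum on a TTC names the block it proposes to cut.
type msg struct {
	ttc      bool
	blockNum uint64
	tx       string
}

// cutBlocks replays an ordered stream the way each OSN's consumer thread
// would: transactions accumulate into a pending batch, a full batch
// (maxTxs) is cut immediately, and only the first TTC message for the
// block currently being assembled cuts a partial batch. TTCs for an
// already-cut block number are ignored. Because every OSN reads the same
// ordered stream, every OSN cuts identical blocks.
func cutBlocks(stream []msg, maxTxs int) [][]string {
	var blocks [][]string
	var pending []string
	var nextBlock uint64 // number of the block currently being assembled
	for _, m := range stream {
		if m.ttc {
			if m.blockNum == nextBlock && len(pending) > 0 {
				blocks = append(blocks, pending)
				pending, nextBlock = nil, nextBlock+1
			}
			// Otherwise: stale TTC, another OSN's message already cut it.
			continue
		}
		pending = append(pending, m.tx)
		if len(pending) == maxTxs {
			blocks = append(blocks, pending)
			pending, nextBlock = nil, nextBlock+1
		}
	}
	return blocks
}

func main() {
	stream := []msg{
		{tx: "tx1"}, {tx: "tx2"},
		{ttc: true, blockNum: 0}, // first TTC for block 0: cuts [tx1 tx2]
		{ttc: true, blockNum: 0}, // duplicate TTC from another OSN: ignored
		{tx: "tx3"},
		{ttc: true, blockNum: 1}, // cuts [tx3]
	}
	fmt.Println(cutBlocks(stream, 10))
}
```

Note how the duplicate TTC for block 0 is harmless: whichever OSN's TTC message lands in the partition first wins, and every consumer discards the rest.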
@toddinpal You might find the diagrams attached to https://jira.hyperledger.org/browse/FAB-5258 helpful
@jyellick right I can read the google doc.
@t_stephens67 You wrote:
> and consumes the block from the brokers.
This is why I re-iterated that last point
As well as:
> or does the zookeeper maintain the process of cutting the block and letting the orderer know its ready to be consumed
which contradicts the idea that blocks are cut at the orderer
So what does the zookeeper do?
@t_stephens67 Zookeeper is used internally by Kafka to organize leader election among the brokers
As suggested, if you google "Kafka Architecture" you can see the role that Zookeeper plays
Cross-posting this from #fabric-ledger
https://chat.hyperledger.org/channel/fabric-ledger?msg=iSKKNQoQqxsSrcfp9
@qizhang I _think_ you don't have enough kafka servers configured, or that your minimum in-sync config number is too high and you have only 1
(can you define only 1 in-sync replica? I don't know personally... but it should be possible, i.e. for a single Kafka server)
@yacovm @qizhang This error would indicate that Kafka has been configured to require 2 replicas be in sync, but there is only one available, so Kafka cannot accept any new data until another broker can join the ISR set.
hmm I was close :)
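For reference, the broker-side settings involved in this error are set in Kafka's `server.properties`; these are the values the Fabric docs suggest for a crash-fault-tolerant setup (assuming at least 3-4 brokers, so adjust to your deployment):

```
# server.properties (fragment)
default.replication.factor=3          # replicate each channel's partition to 3 brokers
min.insync.replicas=2                 # a produce is acked only once 2 replicas have it;
                                      # with fewer than 2 ISRs the broker rejects writes
unclean.leader.election.enable=false  # never elect an out-of-sync replica as leader
```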
Should a Service Not Available (503) OSN response also be attributed to the same ISR-not-available reason?
@rahulhegde Certainly, if the OSN cannot properly communicate with the Kafka network, it will return a 503
Is the Endorsement Policy not stored as part of the channel configuration?
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=gx793Qdk7XpkJcakC) @kostas
Only way to modify the endorsement policy - is by chaincode instantiation/upgrade?
and is /application/channel/[writers,admins] allowed to instantiate/upgrade chaincode?
Is the Endorsement Policy stored in the channel configuration?
It isn't.
Only way to modify the endorsement policy - is by chaincode instantiation/upgrade?
and is /application/channel/[writers,admins] allowed to instantiate/upgrade chaincode?
@rahulhegde: I think you accidentally edited the original question, instead of posting a new one. At any rate:
> Only way to modify the endorsement policy - is by chaincode instantiation/upgrade?
Yes you can set the endorsement policy only during those stages.
> and is /application/channel/[writers,admins] allowed to instantiate/upgrade chaincode?
You cannot provide a reference to channel policies.
As part of the work that @jyellick is doing with the resources config tree, I think this _should_ be possible though? @jyellick to confirm.
@rahulhegde (For v1.0.x) the instantiation policy which determines who may invoke instantiate for a chaincode may be specified as part of the signed chaincode package, or, the default is any admin from any org in the channel.
Correct @kostas the new lifecycle management work will allow modifications to endorsement policy to be managed by a policy reference
(Including policies such as `/Channel/Application/Writers` and `/Channel/Application/Admins`)
@rahulhegde You can see https://jira.hyperledger.org/browse/FAB-6042 for more discussion
@kostas @jyellick Thanks for the update.
Has joined the channel.
Hi there, I'm getting `[common/configtx/tool/localconfig] Load -> CRIT 002 Error reading configuration: Unsupported Config Type ""` while trying to start my orderer. Some docs mention this is a result of not setting FABRIC_CFG_PATH correctly, but I have done that. My guess is that it has to do with my configuration options `GenesisMethod: provisional` and `GenesisProfile: SampleInsecureSolo`, which are the ones set in the orderer.yaml in the fabric-orderer docker image. Do I have to use a genesis block instead?
@vdods If you chose to use the `provisional` genesis method, this essentially invokes `configtxgen` for you to dynamically generate a genesis block. You must have a `configtx.yaml` accessible in this path for that to work. We never recommend the `provisional` genesis method for production.
Ahh
that explains the error. My goal is to start the orderer out with an "empty" configuration, and update it later. Would I do that by creating a genesis block using a configtx.yaml profile with itself as the orderer, and no consortiums? I'm trying to keep the node configuration/start separate from network architecture and channel config
@vdods Absolutely. You must define a consortiums section, but need not define any consortiums. If you look at the bdds, this is actually how they function.
Great! Thanks!
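Sketching what that "start empty, update later" bootstrap might look like (illustrative fragments; the profile name `EmptyGenesis` and the file path are assumptions, and `*OrdererDefaults`/`*OrdererOrg` refer to anchors defined elsewhere in a real `configtx.yaml`):

```yaml
# configtx.yaml (fragment): a profile whose Consortiums section exists
# but defines no consortiums, as discussed above.
Profiles:
  EmptyGenesis:
    Orderer:
      <<: *OrdererDefaults
      Organizations:
        - *OrdererOrg
    Consortiums: {}
```

The orderer is then pointed at the block that `configtxgen -profile EmptyGenesis -outputBlock genesis.block` produces, via the `file` genesis method in `orderer.yaml`:

```yaml
# orderer.yaml (fragment)
General:
  GenesisMethod: file
  GenesisFile: /var/hyperledger/orderer/genesis.block
```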
Has joined the channel.
Hi, does anyone know this error: [orderer/common/deliver] Handle -> WARN 01d [channel: coldchain] Rejecting deliver request because of consenter error
[orderer/kafka] Enqueue -> WARN 01e [channel: coldchain] Will not enqueue, consenter for this channel hasn't started yet
I tried to change the orderer image tag from rc1 to 1.0.2, then I got that error!
@WHATISOOP what are you using to query the orderer? cli? or are you trying to run some examples?
I do some queries via the Java SDK.
could you paste full log of your orderer node to pastebin/gist and post link here? normally this error could happen when you didn't give enough time for orderer to bootstrap, you may wait for a little while and query again.
https://chat.hyperledger.org/file-upload/X6utLWnb7GDeCBcrk/orderer.log
this is the orderer log
@WHATISOOP file access denied...
how about downloading from the Files List? I uploaded to the files list...
@WHATISOOP alright, if I'm looking at correct log file you uploaded, it's repeatedly trying to connect to Kafka cluster. Could you check your Kafka connection? you can follow http://kafka.apache.org/quickstart to quickly diagnose that. Meanwhile, you could also increase the verbosity of Sarama log and post result again (you could find `verbose` option in orderer.yaml)
ok, thank you !
Hi everyone, do different channels share orderers, or usually each channel has its own orderers? Thanks
Share.
@kostas I am thinking that since each channel has its own set of blocks, would it be more scalable to assign each channel its own orderers?
Of course it would be in theory, but once you go down that path you'll see that it gets hairy really fast. Do the clients know which orderer to target? If yes, how is this information communicated to them? What happens if the assignment changes? If not, then the orderer receiving the client request needs to relay it to the appropriate owner, wait for the response, then pipe it back to the client? What happens if the assignment changes halfway through? That's just a couple of issues that jump out at me right off the bat with this suggestion. We'd be adding yet another layer of indirection and a disproportionate amount of complexity.
@kostas Some of my thoughts below, please correct me if anything is wrong. According to the description here http://hyperledger-fabric.readthedocs.io/en/latest/txflow.html, I think the clients should know which orderer they are going to send a transaction to. There is no static assignment between a client and an orderer, since a client can send different transactions to different orderers. I am not sure whether there is any "relay" mechanism, but as far as I know, there is not.
You're suggesting that channel "foo" is assigned to orderers OSN1 and OSN5.
The current design works because _any_ orderer can work with _any_ channel.
Now in your design, I am a client and I wish to broadcast a transaction to channel "foo".
Do I know that OSN1 and OSN5 are responsible for this channel? If so, how? If I don't, how is this taken care of in the background?
There are answers to these questions, but as I wrote above: once you go down that path you'll see that it gets hairy really fast.
I see, good points, I will dig out the answers to the questions you raised. By "hairy really fast", do you mean that currently no bottleneck has been observed from sharing the orderers among multiple channels?
No, I mean that the solution will become really complex, really fast.
You are welcome to explore this though, and if you think you have a design that can work, let us know.
Sure, will do. Currently, all the channels share the same set of orderers, and in this case, how does a client decide which orderer a transaction needs to be sent to?
They don't have to decide precisely because any OSN will do the job.
But a client needs to pick one orderer to send a transaction anyway, otherwise, there is no destination for this send operation.
So, just randomly pick an orderer?
Correct.
I see
(By "they don't have to", I meant that there's no reason in picking a _specific_ orderer. As long as it's available, it'll do the job.)
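To make that concrete, here's a minimal Go sketch of the client-side choice (the helper and addresses are made up for illustration; this is not a Fabric SDK API):

```go
package main

import (
	"fmt"
	"math/rand"
)

// pickOrderer returns a randomly chosen OSN address from the list of known
// orderers. Any OSN will do, since every orderer serves every channel.
// (Hypothetical helper for illustration -- not part of the Fabric SDK.)
func pickOrderer(osns []string) string {
	return osns[rand.Intn(len(osns))]
}

func main() {
	osns := []string{"orderer0:7050", "orderer1:7050", "orderer2:7050"}
	fmt.Println(pickOrderer(osns)) // any of the three addresses
}
```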
Is it the OSN's job to tell Kafka what chain to write to? Or does Kafka discern this on its own?
@5igm4: The OSN does it. It can infer the topic/partition from the channel name.
@kostas What is the difference between an OSN and an orderer?
They're the same thing. OSN = Ordering Service Node.
(which is an instance of the `orderer` binary executing)
@kostas Does that mean there is one chain/ledger per channel?
@5igm4: Correct.
For every channel there is a corresponding single-partition topic in Kafka (1 to 1 mapping).
For every channel there is a corresponding ledger instance (again, 1 to 1 mapping) on every OSN.
Thanks! Clears up a few of my questions, but I'm sure I'll have more though :)
Any time.
hello - which policy is used from channel configuration to authorize performing ` peer channel fetch ` action - is it ` /channel/Readers `?
@rahulhegde Correct. Invocations of `Deliver` (which `peer channel fetch` uses) is controlled by `/Channel/Readers`
Note that by default, `/Channel/Readers` is satisfied if either `/Channel/Application/Readers` or `/Channel/Orderer/Readers` is satisfied.
(And similarly, `/Channel/Application/Readers` is satisfied if any of `/Channel/Application/Org1/Readers`, ..., `/Channel/Application/OrgN/Readers` is satisfied.)
Correct. This recurses into each sub-group if it is an implicit meta policy.
Exactly
Now for ` peer channel update ` - is it ` /Channel/Writers` or ` /Channel/Admins ` that will be honored?
You may of course use an explicit policy type and not use this recursive behavior, but the recursion makes for a nice default, and allows each org to define what it means for their org to be a Reader, without requiring approval from the rest of the network.
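The recursive default described above can be sketched in Go like this (a simplified model of implicit meta policies, not the actual Fabric policy engine):

```go
package main

import "fmt"

// Policy is a simplified model of a channel policy: either a leaf that is
// directly satisfied (or not), or an implicit meta policy over sub-policies.
// (Illustrative sketch only -- not the real Fabric policy code.)
type Policy struct {
	Satisfied bool      // for leaf policies
	Rule      string    // "ANY", "ALL", or "MAJORITY" for meta policies
	Sub       []*Policy // sub-group policies the meta policy recurses into
}

// Evaluate applies the implicit meta policy rule recursively.
func Evaluate(p *Policy) bool {
	if len(p.Sub) == 0 {
		return p.Satisfied
	}
	met := 0
	for _, s := range p.Sub {
		if Evaluate(s) {
			met++
		}
	}
	switch p.Rule {
	case "ANY":
		return met >= 1
	case "ALL":
		return met == len(p.Sub)
	case "MAJORITY":
		return met > len(p.Sub)/2
	}
	return false
}

func main() {
	// /Channel/Readers is ANY over Application/Readers and Orderer/Readers;
	// Application/Readers is in turn ANY over each org's Readers policy.
	channelReaders := &Policy{Rule: "ANY", Sub: []*Policy{
		{Rule: "ANY", Sub: []*Policy{{Satisfied: true}, {Satisfied: false}}}, // Application
		{Rule: "ANY", Sub: []*Policy{{Satisfied: false}}},                    // Orderer
	}}
	fmt.Println(Evaluate(channelReaders)) // true: satisfied via one org's Readers
}
```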
The submitter of `peer channel update` must satisfy `/Channel/Writers` as it is an invocation of the `Broadcast` API. However, this is only the first check. The update itself carries a set of signatures (usually including one from the submitter) which is then evaluated against the set of triggered `mod_policies` in the config. For instance, modifying the membership of a channel means modifying the `/Channel/Application` group which has a `mod_policy` of `/Channel/Application/Admins`. So, this policy must be satisfied. Modifying Org1's anchor peers triggers evaluation of the `/Channel/Application/OrgName/Admins` policy, etc.
So, for a config update, first, the config framework detects all of the modified elements of the configuration. It then takes the set of `mod_policy`s for these modified elements, and evaluates them each in turn against the attached signature set. Depending on the modification to the config, the set of `mod_policy`s will be different.
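That flow can be sketched roughly as follows (a toy model; the names and types here are illustrative, not the real configtx code):

```go
package main

import "fmt"

// ConfigElement models one element of the channel config with its version
// and mod_policy. (Illustrative sketch, not the actual configtx types.)
type ConfigElement struct {
	Version   uint64
	ModPolicy string
}

// validateUpdate computes the delta between current and proposed config,
// then checks that every modified element's mod_policy is satisfied by the
// attached signature set (modeled here as a set of satisfied policy names).
func validateUpdate(current, proposed map[string]ConfigElement, satisfied map[string]bool) error {
	for key, prop := range proposed {
		cur, ok := current[key]
		if ok && cur.Version == prop.Version {
			continue // unmodified element: no policy check needed
		}
		if !satisfied[prop.ModPolicy] {
			return fmt.Errorf("mod_policy %q not satisfied for %q", prop.ModPolicy, key)
		}
	}
	return nil
}

func main() {
	current := map[string]ConfigElement{
		"/Channel/Application": {Version: 1, ModPolicy: "/Channel/Application/Admins"},
	}
	proposed := map[string]ConfigElement{
		"/Channel/Application": {Version: 2, ModPolicy: "/Channel/Application/Admins"},
	}
	satisfied := map[string]bool{"/Channel/Application/Admins": true}
	fmt.Println(validateUpdate(current, proposed, satisfied)) // <nil>
}
```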
Okay - for this note below
```
For instance, modifying the membership of a channel means modifying the `/Channel/Application` group which has a `mod_policy` of `/Channel/Application/Admins `
```
The other way of representing this specific path's `mod_policy` could be `Admins` (relative path representation) which is same as `/Channel/Application/Admins` (absolute path representation)
And where is `/Channel/Admins` used currently?
@rahulhegde Right, when a `mod_policy` does not begin with a `/`, it uses the current group as the root and treats the `mod_policy` as a relative path. You are correct there.
The `/Channel/Admins` policy is referenced by any of the elements in the `/Channel` level configuration, you can see it specified as the `mod_policy` for those elements. Generally though, these elements do not currently support modification. (For instance, there is only one allowed hashing algorithm at present, eventually we hope to add more)
Is fetching of the configuration block only possible from the orderer node, or is it also possible from a peer node that has joined that channel?
I see an org1 peer node is allowed to join a channel using the `peer channel join` command with channel block tx information which has the org2 definition only; however, it is not able to get the ledger blocks. Is this something that should be prevented in the first place?
> Is fetching of the configuration block only possible from the orderer node, or is it also possible from a peer node that has joined that channel?
Certainly, you may use the normal peer APIs to query blocks from the peer.
> I see an org1 peer node is allowed to join a channel using the `peer channel join` command with channel block tx information which has the org2 definition only; however, it is not able to get the ledger blocks. Is this something that should be prevented in the first place?
This is a known limitation. If you enable leader election, the peer will go to the orderer and fetch the blocks until it gets a configuration block containing its org, then gossip will resume normally. If leader election is disabled, then non-leader peers will not be able to retrieve new blocks.
There has been some discussion on how to fix this, see: https://jira.hyperledger.org/browse/FAB-5288
^ @rahulhegde
When we have a Fabric network with 3 Kafka brokers and 3 ZooKeepers shared by all the channels, are they working in a “1 worker + 2 backups” mode? (1 Kafka broker is actually providing service, while the other 2 are backups, and likewise for the three ZooKeepers)
No. It really depends on how you configure your Kafka cluster.
It comes down to what value you give to the "replication factor" and the "ISR" configuration settings.
At any point in time a channel (a.k.a. partition) is _owned_ by a single Kafka broker, but the number of replicas depends on the replication factor which is up to the user to define.
The Kafka documentation should answer all of your questions there.
Hey all - is there a link to the consensus interface for fabric?
e.g. is there a common interface that both Kafka and PBFT will connect through?
@kelly_: There is.
A consensus plugin needs to implement the `Consenter` and `Chain` interfaces defined here: https://github.com/hyperledger/fabric/blob/master/orderer/consensus/consensus.go
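As a rough illustration of the plugin shape (the real interfaces in `orderer/consensus/consensus.go` take envelopes, config sequences, and a `ConsenterSupport`; the signatures below are deliberately simplified):

```go
package main

import "fmt"

// Simplified stand-ins for the Fabric consensus interfaces. The actual
// signatures differ -- this sketch only shows the plugin structure:
// a Consenter hands out one Chain per channel, and each Chain orders
// the transactions submitted to it.
type Chain interface {
	Order(tx string) error // enqueue a transaction for ordering
	Start()
	Halt()
}

type Consenter interface {
	HandleChain(channelID string) (Chain, error)
}

// soloChain is a toy single-node chain: it simply records arrival order.
type soloChain struct {
	channelID string
	ordered   []string
}

func (c *soloChain) Order(tx string) error { c.ordered = append(c.ordered, tx); return nil }
func (c *soloChain) Start()                {}
func (c *soloChain) Halt()                 {}

type soloConsenter struct{}

func (soloConsenter) HandleChain(channelID string) (Chain, error) {
	return &soloChain{channelID: channelID}, nil
}

func main() {
	var consenter Consenter = soloConsenter{}
	chain, _ := consenter.HandleChain("foo")
	chain.Start()
	chain.Order("tx1")
	fmt.Println("chain for channel foo created")
}
```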
Thanks!
@kostas can you tell me what the chaintool is?
@kostas In fact, I want to have three organisations in my system (Org1, Org2 and Org3). Org1 and Org2 communicate on one channel (ch12), and Org2 and Org3 communicate on another channel (ch23). Can you please help me with a configtx.yaml file by which I can generate genesis block and channel transactions for ch12 and ch23. Thanks in advance
Hi @kostas @jyellick, it seems the orderer reads the consensus type from the genesis block. Can we modify it by updating the config? As I tested, it couldn't be modified. Also, the channels on the same set of orderers all use the same consensus type and can't be customized, right?
> can you tell me the what's the chaintool?
@asaningmaxchain: http://fabric-chaintool.readthedocs.io/en/latest/ and #fabric-chaintool should get you started
ok @kostas
> Can you please help me with a configtx.yaml file by which I can generate genesis block and channel transactions for ch12 and ch23. Thanks in advance
@chfalak: Use https://github.com/hyperledger/fabric/blob/release/examples/e2e_cli/configtx.yaml as your guide and proceed as follows:
1. Add a new org definition under the organizations section, similar to how `Org1` is defined here: https://github.com/hyperledger/fabric/blob/release/examples/e2e_cli/configtx.yaml#L58
2. Add this organization to your consortium, so instead of `*Org1` and `*Org2`, that list also includes `*Org3` now: https://github.com/hyperledger/fabric/blob/release/examples/e2e_cli/configtx.yaml#L24
3. Duplicate the `TwoOrgsChannel` profile: https://github.com/hyperledger/fabric/blob/release/examples/e2e_cli/configtx.yaml#L27 -- in the new profile, replace `Org1` with `Org3`.
You now have the required material so as to bootstrap an ordering service with a consortium definition consisting of orgs 1 to 3 (step 1), and then you can create a channel creation transaction for `ch12` (the `TwoOrgsChannel` profile) and for `ch23` (the duplicate/modified profile you created in Step 3).
Is there a specific OSN that's responsible for alerting other OSNs with a TTC message? If not, how is this handled? Furthermore, are there any other instances where OSNs must communicate with each other?
@5igm4 https://docs.google.com/document/d/1vNMaM7XhOlu9tB_10dKnlrhy5d7b1u8lSY8a-kVjCO4/edit
Take a look at that, should help you out.
Thanks!
> it seems the orderer reads the consensus type from the genesis block. Can we modify it by updating the config? As I tested, it couldn't be modified.
@Glen Correct. The consensus type is fixed across all channels in the ordering service. This is deliberate. Consider for instance a set of 3 OSNs that all receive an update to change from the Kafka consensus type to the Solo consensus type. There is no way to make this change without forking the blockchain.
If you wish to have different channels using different consensus types, you should simply start two different ordering services.
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=LFKbApumZhu6byt8J) @jyellick
Just wanted to check the impact of this limitation - does it mean the orderer will restrict connections with the peer, since it is not an authorized organization per the channel configuration `/Channel/Application/Readers`?
@rahulhegde No, the orderer will use the most recent configuration for authorizing clients. Since the org has been added to the channel, with the default policies in place, the new org will be granted read access. Because the peer which joined has an older version of the configuration, its connection to other peers will fail, but the orderer will still authorize it to receive blocks.
@kostas Is there only one consortium in a network? Or is there a one-to-one correspondence between consortiums and channels in a network?
@chfalak There may be many consortiums, and each consortium may have many channels defined
Channels are always created in the context of a consortium, to be able to authorize the channel creation
For instance, if consortium ABC has members A, B, and C, and consortium BCD has members B, C, and D, and member B wants to create a channel with C, the orderer must know which consortium's channel creation rules (ABC's or BCD's) to apply in evaluating the request.
@jyellick and how do we tell the orderer which consortium's channel creation rules to apply?
In `configtx.yaml` you will see the consortium name specified in the channel creation profiles. For instance https://github.com/hyperledger/fabric/blob/master/sampleconfig/configtx.yaml#L147
Ok. Can you please tell me what Application Organizations means in a ChannelCreationProfile
https://github.com/hyperledger/fabric/blob/master/sampleconfig/configtx.yaml#L148
The application organizations are those organizations which may connect peers and/or clients to transact on this channel
Is there any organization which is a part of the network, but does not connect its peer to a channel?
An organization can decide to have clients only, or orderers only.
@yacovm Can you please suggest me some document where I can read this stuff about organizations, channels, etc
I'm not exactly the best person to ask for documents. There is https://hyperledger-fabric.readthedocs.io/en/latest/getting_started.html I guess https://hyperledger-fabric.readthedocs.io/en/latest/msp.html is also a good idea.
If anyone else has a recommendation please add
@chfalak: http://hyperledger-fabric.readthedocs.io/en/latest/configtx.html and https://hyperledger-fabric.readthedocs.io/en/latest/configtxgen.html and https://hyperledger-fabric.readthedocs.io/en/latest/policies.html are necessary reading.
hey @here, does anyone have a sample Kafka docker-compose configuration that includes peers, orderers, and Kafka instances?
@antoniovassell: https://github.com/hyperledger/fabric/blob/master/examples/e2e_cli/docker-compose-cli.yaml
@kostas oh thanks, was looking through the bddtests, https://github.com/hyperledger/fabric/tree/release/bddtests
Will try this out,
Right, `dc-orderer-*.yml` files under `bddtests` do a good job of explaining the orderer setup, complete with comments, etc. but this covers the orderer only.
What are the main advantages of using ZooKeeper in your Fabric networks?
@kostas okay
@ascatox: Kafka cannot actually run w/o Zookeeper. It's not a question of advantages. It's a matter of necessity.
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=oZdQfoDsD8TZvW9xS) @kostas Ok I didn't understand this
Could you help me understand what is missing
```
2017-10-15 02:58:28.329 UTC [common/configtx] addToMap -> DEBU cf9 Adding to config map: [Policy] /Channel/Application/Admins
2017-10-15 02:58:28.329 UTC [policies] GetPolicy -> ERRO cfa Returning dummy reject all policy because no policy ID supplied
2017-10-15 02:58:28.329 UTC [orderer/common/broadcast] Handle -> WARN cfb Rejecting CONFIG_UPDATE because: Error authorizing update: Error validating DeltaSet: Unexpected missing policy for item [Policy] /Channel/Application/Admins
2017-10-15 02:58:28.329 UTC [orderer/main] func1 -> DEBU cfc Closing Broadcast stream
```
This is something I tried to change in the configuration block update to the channel
```
"write_set": {
"groups": {
"Application": {
"policies": {
"Admins": {
"mod_policy": "Admins",
"policy": {
"type": 1,
"value": {
"identities": [
{
"principal": {
"msp_identifier": "clsbgb65",
"role": "ADMIN"
}
}
],
"rule": {
"n_out_of": {
"n": 1,
"rules": [
{
"signed_by": 0
}
]
}
}
}
},
"version": "1"
}
},
"version": "2"
}
}
}
```
Initially this configuration had ADMINS, MAJORITY as the policy and 2 organizations. Is it getting mapped to the following error in the orderer log: ` Error authorizing update `?
@rahulhegde How are you generating the config update? (Based on the output, it looks like you might be using v1.0.0 binaries?)
yes GA 1.0.0. binaries.
using ConfigTxLator - documented approach of using CURL.
I was able to successfully add new organization with the same approach.
(@jyellick: Just so that it's easier for me to spot as well, what tipped you off on the usage of 1.0.0 binaries here?)
@rahulhegde There is a bug in the v1.0.0 `configtxgen` binary which omits the `mod_policy` on the `/Channel/Application/{Admins,Readers,Writers}` policies. There is a corresponding bug in the v1.0.0 `orderer` which allows this omission. Both are fixed in v1.0.1. For channels generated with the v1.0.0 `configtxgen`, the policies for the application cannot be modified. There is a workaround for this coming in v1.1.0. When the system is upgraded to v1.1.0, there is a special upgrade path which repairs these broken channels. However, for the time being, I recommend you simply update your `configtxgen` and `orderer` versions to at least v1.0.1 and work from there.
> Just so that it's easier for me to spot as well, what tipped you off on the usage of 1.0.0 binaries here?
@kostas The output of `configtxlator` there does not include any `version: 0`. This is the default for protobuf marshaling, to not emit fields which are set to their default. This was a little confusing though for human readability, so the behavior was changed in v1.0.1 to emit the defaults.
@rahulhegde See FAB-6449 and FAB-5309
@jyellick: You'd expect to see `version: 0` on the `Application` config group?
Actually I'd expect to see it on the top level channel group for the write set.
I see _config_ groups here but I don't see a channel group?
The channel group is the root config group. It is implicitly the `write_set`.
That is, the `write_set` is a `*common.ConfigGroup`, and the root of a config update for a channel is the channel config group
A-ha. So you would expect this then?
``` "version": "2"
}
}
"version": "0"
}```
Exactly
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=JrCsAuRgoBKTWdSz9) @jyellick
Thanks - in that case we will move to next version - 1.0.3.
We see frequent ISR shrinks, and hence the partition append fails on the orderer, giving a 503. After that, there is an ISR expand once communication is re-established. Could you let me know what could be the reason for this error?
I don't see any Kafka broker container going down, however I do see a ReplicaFetcherThread shutting down message.
I'd check your cluster for network partitions and general connectivity issues. Sounds like the replicas are not reaching out to the partition leader often enough for them to be labeled as in-sync.
Kostas - do we know if the ReplicaFetcherThread shutdown log message indicates this, or if there is any pattern in the Kafka logs that would confirm it is a network issue?
Could it be a case where keep-alives should be used between the brokers and the ZooKeeper nodes?
Not off the top of my head, I'd have to dive deeper into it. The good thing about Kafka is that it's practically guaranteed that someone in the community has bumped into this before and there is a documented fix out there.
@rahulhegde: Just one last thing to facilitate the troubleshooting: remember that a replica is considered in-sync if it has an active session with Zookeeper (i.e. it has sent a heartbeat within the last 6 seconds -- a configurable value), and fetched the most recent messages from the leader in the last 10 seconds (also a configurable value).
And one more thing. The Kafka Definitive Guide book states:
> Seeing one or more replicas rapidly flip between in-sync and out-of-sync status is a sure sign that something is wrong with the cluster. The cause is often a misconfiguration of Java's garbage collection on a broker. Misconfigured garbage collection can cause the broker to pause for a few seconds, during which it will lose connectivity to Zookeeper. When a broker loses connectivity to Zookeeper, it is considered out-of-sync with the cluster, which causes the flipping behavior.
So keep that in mind during your debugging as well.
@jyellick @yacovm Can I reasonably and confidently advise a customer that if they deploy their application with a solo orderer to an operational (production) deployment, they could reasonably expect IBM not to provide production-quality support (best effort perhaps, but no more)? I cannot find any support statement that indicates this position unambiguously - other than this URL on Stack Overflow - https://stackoverflow.com/questions/41104102/can-the-hyperledger-fabric-consensus-service-be-distributed/41226222#41226222. Thanks
@pvrbharg Hyperledger Fabric is an open source Linux Foundation project. You would need to discuss any specifics of an IBM support contract with the IBM representative selling such a contract to you. However, solo ordering creates a single point of failure, and no support contract can provide guarantees against this. The choice of ordering service is ultimately up to the deployer, but I find it hard to imagine a production deployment which would be satisfied with the guarantees offered by solo. As @baohua points out below, if your solo orderer crashes, your whole network will cease functioning. If your solo orderer crashes badly (corrupted disk for instance), your network may not be salvageable.
@pvrbharg that will mean that once the single orderer node (which is all solo provides) is down, the whole network won't work
@jyellick @baohua I agree with both of your responses and guidance. I would convey the same. On another related but separate follow-up note - if the solo orderer becomes unavailable for some time, during this period would the network not work for adding new transactions to a block, not work for querying the ledger (read-only or rich queries if CouchDB is in play), or not work for both query and commit? If the solo orderer comes back up later, would the network pick up where it left off and work, or would it not work at all? Is this kind of test performed in our internal regression? All these questions are coming from a customer, and I am performing due diligence for our customer and IBM in my engagement. Thanks.
@pvrbharg if your ordering service is unavailable, the blockchain becomes effectively read-only. Queries to peers should still be possible, but no new transactions can commit
@jyellick Yes Jason and thank you for confirming my understanding - this is documented in our GIT document for Ordering Service delivered by Kafka. I am making sure this is true even for SOLO orderer. In the documentation - we do not say this level of detail if ordering service is provided by SOLO. Hence this question seeking clarification. I am good. Regards
` configtxgen compiled on v1.0.3` says signcerts folder needs to be populated. any reason this has changed - i thought admin, ca would have been good enough?
@rahulhegde No, this sounds like a possible regression to me. Signcerts should not be required for generating validating MSP definitions
Can you post the exact error?
Putting the logs out on a 1-1 chat window
```
func getMspConfig(dir string, ID string, sigid *msp.SigningIdentityInfo) (*msp.MSPConfig, error) {
cacertDir := filepath.Join(dir, cacerts)
signcertDir := filepath.Join(dir, signcerts)
admincertDir := filepath.Join(dir, admincerts)
```
Could it be bad code - and a bad compilation in my workspace? This is not in https://github.com/hyperledger/fabric/blob/v1.0.3/msp/configbuilder.go#L165. Let me clone afresh.
Yes, I was looking for that error string and could not find it
[ ](https://chat.hyperledger.org/channel/fabric?msg=Yu38iCFuwsddoiJbF) @mikykey this is the place for this question to be answered
@kostas @jyellick ^^^
@mikykey Please google "Kafka Architecture", you will find a multitude of good references
@mikykey also i found this design doc very useful https://docs.google.com/document/d/1vNMaM7XhOlu9tB_10dKnlrhy5d7b1u8lSY8a-kVjCO4/edit?usp=sharing
Need further understanding on these points during `Channel Creation` (http://hyperledger-fabric.readthedocs.io/en/latest/configtx.html)
```
Point 5
Because the CONFIG_UPDATE applies modifications to the Application group (its version is 1), the config code validates these updates against the ChannelCreationPolicy. If the channel creation contains any other modifications, such as to an individual org’s anchor peers, the corresponding mod policy for the element will be invoked.
```
If I do a peer config fetch on the system channel, I don't see any `Application` named group in it, like we see in an application channel configuration - so what are these validations mentioned in this note?
And `6. The new CONFIG transaction with the new channel config is wrapped and sent for ordering on the ordering system channel. After ordering, the channel is created.`
Is this a new channel configuration block on the system channel or application channel (genesis block for the application channel)?
What is the process for confirming that the `Channel Creation` was a success - is this a synchronous call, as I see from the peer CLI (considering a Kafka setup)? Does the same apply for Channel Update too?
> If I do a peer config fetch on the system channel, I don't see any `Application` named group in it, like we see in an application channel configuration - so what are these validations mentioned in this note?
The system channel does not have an `Application` config group, and rightly so, since [the documentation states](http://hyperledger-fabric.readthedocs.io/en/latest/configtx.html#application-channel-configuration) that:
> Application configuration is for channels which are designed for application type transactions.
And the system channel is not a channel meant for application type transactions.
> so what are these validations mentioned in this note?
This description applies to every other channel.
(The validation being described actually explicitly refers to the channel creation process. Since the ordering system channel must be bootstrapped before any channels can be created, this section cannot apply)
https://chat.hyperledger.org/channel/fabric-orderer?msg=zHWrvBALebbk32a2X
> This description applies to every other channel.
^ Ah, missed that bit above. The message needs editing then. Proper way to put it: "The concept of an application group applies to every other channel."
> Is this a new channel configuration block on the system channel or application channel (genesis block for the application channel)?
The config for the new channel is first written in the ordering system channel, then, it is copied into a genesis block for the new channel
> What is the process for confirming the `Channel Creation` is successful - is this a synchronous call, as I see from the Peer CLI (considering a kafka setup)? Does the same apply for Channel Update too?
Yes, to verify channel creation you must effectively poll the orderer. For configuration updates, you may use normal eventing mechanisms for block creation at the peer to detect a new block with a configuration update in it.
what does this mean: ` creating an Application group with the newly specified members and --> specifying its mod_policy to be the ChannelCreationPolicy <--- as specified in the consortium config `
I see in `src/fabric/protos/common/configuration.go` that ChannelCreationPolicy returns Policy -> where is this assigned?
In the Consortium definition.
The logical flow for this is: a bunch of orgs get together, they decide they want to form a consortium and they need to agree on the `ChannelCreationPolicy` that will apply.
If you Ctrl+F [this doc](http://hyperledger-fabric.readthedocs.io/en/latest/configtx.html) for "ChannelCreationPolicy" you'll see where it's set in the config tree, and also a description about its usage.
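For reference, the default `ChannelCreationPolicy` that configtxgen produces is an IMPLICIT_META policy (policy type 3). A decoded system-channel config would contain something roughly like the following (the consortium name here is illustrative):

```json
"Consortiums": {
  "groups": {
    "SampleConsortium": {
      "values": {
        "ChannelCreationPolicy": {
          "mod_policy": "/Channel/Orderer/Admins",
          "value": {
            "type": 3,
            "value": { "rule": "ANY", "sub_policy": "Admins" }
          }
        }
      }
    }
  }
}
```

Type 3 (IMPLICIT_META) with rule ANY over the `Admins` sub-policy means: any one consortium member admin may sign a channel creation request.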
Trying to understand why it is mentioned as `mod_policy` - the organizations that are part of the channel must be part of the consortium, and hence their signature policy needs to be met (IMPLICIT_META as defined in the System Channel).
So it is all to say who creates Channel?
Can I have a setup having a Consortium with 4 Organizations and having the ChannelCreationPolicy as SIGNATURE of Organization1 - what does this mean now for the channel creation process?
> So it is all to say who creates Channel?
Yes. Also, since the `Application` `ConfigGroup` of the newly-created channel has its `mod_policy` set to the ChannelCreationPolicy, this means that if you wish to modify the `Values`, `Policies`, or `Groups` maps that it contains, you'll need to make sure that your config update satisfies this policy. Recall the documentation stating that: "For Groups, modification is adding or removing elements to the Values, Policies, or Groups maps (or changing the mod_policy)." (And `Application` is just that, a Group.)
> Can i have setup having Consortium with 4 Organization and having ChannelCreationPolicy as SIGNATURE of Organization1
Yes, you are not restricted to the implicit meta type.
> what does this mean now for channel creation process?
That your channel creation request will go through as long as it's signed by org1.
(@rahulhegde: Edited my response to add a bit more detail.)
I tried this exercise - and it looks like changing the ChannelCreationPolicy to SIGNATURE doesn't allow channel creation even if signed by that organization, and it checks that the channel organizations are a subset of the consortium list. I was expecting this part.
Also after changing the ChannelCreationPolicy of the system channel, followed by a new application channel creation, I don't see any change in the application sub-group policies. Literal meaning: `mod_policy` is a string and `policy` is a structure. I'm not sure how this will be assigned during channel creation.
> I don't see any change in the application sub-group policies. Literal meaning `mod_policy` is string and `policy` is structure.
@rahulhegde: I am not exactly sure why/where _sub_-group _policies_ (plural) come into play? What you should be seeing is:
(a) the "mod_policy" of the application config group set to the string "ChannelCreationPolicy": https://github.com/hyperledger/fabric/blob/master/orderer/common/msgprocessor/systemchannel.go#L277
(b) a policy under the "Application" group with key "ChannelCreationPolicy" and value equal to the channel creation policy of the corresponding consortium: https://github.com/hyperledger/fabric/blob/master/orderer/common/msgprocessor/systemchannel.go#L274
The string value of "mod_policy" in (a) references the policy definition in (b).
> I tried this exercise - and it looks changing the ChannelCreationPolicy to SIGNATURE - doesn't allow channel creation even if signed by that organization and it checks channel organizations to be subset of consortium list.
Interesting. Could you describe how you went about changing the ChannelCreationPolicy?
Could you help me? Kafka + 2 orderers + 4 peers on 4 VMs (a: orderer1, peer0.org1; b: peer1.org1; c: orderer1, peer0.org2; d: peer1.org2)
`peer chaincode instantiate` has an error in the logs:
2017-10-18 05:02:13.720 UTC [orderer/common/broadcast] Handle -> WARN 1261 Rejecting CONFIG_UPDATE because: Error authorizing update: Error validating ReadSet: Readset expected key [Groups] /Channel/Application/Org1MSP at version 0, but got version 1
Hi, do you have a development plan for SBFT? I can't find related issues in JIRA.
@yoheiueda I did not open an issue in JIRA; I cannot confirm whether it's a bug or a fault in my setup.
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=oq4ywpiupkLaPAw9p) @jyellick Ok
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=bEdhJAYuR4iZLaBe8) @scott_xu My question is to everyone in this channel, not specific to your question. Sorry for the confusion.
Hi Everyone.. I am reading this document for channel policies. https://hyperledger-fabric.readthedocs.io/en/latest/policies.html ... Can any one tell me where we set the policies for any channel? How can we make a policy for any channel? Where this information is stored??
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=WQoAKJhHmcjxwg8y2) @kostas
```
"write_set": {
"groups": {
"Consortiums": {
"groups": {
"CLSConsortium": {
"values": {
"ChannelCreationPolicy": {
"mod_policy": "/Channel/Orderer/Admins",
"value": {
"type": 1,
"value": {
"identities": [
{
"principal": {
"msp_identifier": "clsbgb65",
"role": "ADMIN"
}
}
],
"rule": {
"n_out_of": {
"n": 1,
"rules": [
{
"signed_by": 0
}
]
}
}
}
},
"version": "1"
}
}
}
}
}
}
}
```
The `https://github.com/hyperledger/fabric/blob/master/orderer/common/msgprocessor/systemchannel.go#L277` mentioned is not present in v1.0.3 (the version I'm currently using).
@yoheiueda: No date set for SBFT yet. We should pick up development of it again early next year though.
@ahmadzafar: I suggest you read the configtx document: http://hyperledger-fabric.readthedocs.io/en/latest/configtx.html
You'll see that each channel has its configuration stored in a `ConfigGroup` structure.
You would store application-related policies for that channel under the "/Channel/Application" config group.
```
message ConfigGroup {
    uint64 version = 1;
    map<string,ConfigGroup> groups = 2;
    map<string,ConfigValue> values = 3;
    map<string,ConfigPolicy> policies = 4;
    string mod_policy = 5;
}
```
Of course you can add them under an org's config group as well, by modifying for example the "Policies" map in "/Channel/Application/Org1".
You create these policies via a configuration update transaction as the document describes (set the readset to identify the current path to what you're about to modify, then set the writeset by including your modification and updating the version fields accordingly).
You need to make sure that the configuration update transaction is signed so that the mod_policy of the config group that contains the policies map that you're editing is satisfied. (Again, this is also in the doc.)
In the end, this information is stored in a configuration block on that channel's ledger. The most recent configuration block carries the current configuration for the channel. So if you wanted to add a policy, you would fetch it, read the config to build the readset and the writeset (as above), push the config_update, and if everything is signed properly the channel would end up with a new configuration block and references to the new policy that you defined.
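To sketch the flow just described, a decoded `config_update` that adds one policy to the `Application` group might look roughly like this (channel and policy names are illustrative; the readset pins the current version, the writeset bumps the version of the modified group):

```json
{
  "channel_id": "mychannel",
  "read_set": {
    "groups": {
      "Application": { "version": "1" }
    }
  },
  "write_set": {
    "groups": {
      "Application": {
        "version": "2",
        "policies": {
          "MyNewPolicy": {
            "mod_policy": "Admins",
            "policy": { "type": 1, "value": "(a SignaturePolicyEnvelope, as in the ChannelCreationPolicy JSON earlier)" }
          }
        }
      }
    }
  }
}
```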
@rahulhegde: RE: the modification of ChannelCreationPolicy in that snippet above -- was the configuration updated successfully? i.e. When you read the latest configuration block for the system channel, do you see the "ChannelCreationPolicy" that you just defined under the "CLSConsortium"?
> The `https://github.com/hyperledger/fabric/blob/master/orderer/common/msgprocessor/systemchannel.go#L277` mentioned is not present in v1.0.3 (currently been used by me).
Correct. This bit was refactored recently. The functionality in 1.0.3 is the same though.
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=kBQTqNsPrx8EP8AN9) @kostas
- yes, the configuration was updated successfully on the system channel, and I can see from the next fetch of the system channel (`testchainid`) that it got that update - is this not expected?
No, this is me making sure we haven't missed anything obvious.
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=3Jdz9336BLrwJpQiS) @kostas
But you mentioned yesterday - we are allowed to modify `ChannelCreationPolicy` from IMPLICIT to SIGNATURE.
Correct, and I haven't rescinded that statement.
> Also after changing ChannelCreationPolicy of system channel and followed with a new application channel creation, I don't see any change in the application sub-group policies. Literal meaning `mod_policy` is string and `policy` is structure. I m not sure - how this will be assigned during channel creation.
@rahulhegde The `mod_policy` and application policies are defined by the channel creation transaction. The channel creation policy is used to check if the submitter is authorized to define these.
Before the channel creation, you may think that you have an ephemeral template configuration for the channel. One whose `mod_policy` is set to the value from `ChannelCreationPolicy`, with no other policies defined.
The channel creation transaction modifies the Application group to define a new `mod_policy` and new `Readers/Writers/Admins` policies; this means the `mod_policy` of `ChannelCreationPolicy` must be satisfied.
And +1 to @kostas's reply, a signature policy should be just fine for the channel creation policy. The config update you posted looks good, assuming it was applied successfully. You should see the failure reason for the creation in the logs.
That bit then:
> (a) the "mod_policy" of the application config group set to the string "ChannelCreationPolicy": https://github.com/hyperledger/fabric/blob/master/orderer/common/msgprocessor/systemchannel.go#L277
> (b) a policy under the "Application" group with key "ChannelCreationPolicy" and value equal to the channel creation policy of the corresponding consortium: https://github.com/hyperledger/fabric/blob/master/orderer/common/msgprocessor/systemchannel.go#L274
would be incorrect in that it captures only the interim state.
That is an accurate description of the state of the ephemeral configuration which the channel creation transaction modifies to produce the genesis configuration.
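Sketching that description: before the channel creation transaction modifies it, the ephemeral template for the new channel's `Application` group would look roughly like this in decoded form (details illustrative, not a literal dump):

```json
"Application": {
  "version": "0",
  "mod_policy": "ChannelCreationPolicy",
  "policies": {
    "ChannelCreationPolicy": "(copied from the consortium's ChannelCreationPolicy value)"
  },
  "groups": "(the member orgs named in the channel creation transaction)"
}
```

The channel creation transaction then replaces the `mod_policy` and defines the real `Readers/Writers/Admins` policies, which is why the ephemeral `ChannelCreationPolicy` never shows up when you later fetch the application channel's config.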
Seems obvious in hindsight. My recollection of that code path was fuzzy, and I looked it up last night, but it looks like I didn't take it all the way to the end. Sorry for any misdirection @rahulhegde.
If I have to summarize -
1. I would really not see a newly created application channel policy (aka `ChannelCreationPolicy`, once we fetch the application channel configuration block), and this is an ephemeral policy to honor who can create the channel.
2. `a signature policy should be just fine for the channel creation policy` - does this mean a Policy Value modification from the current IMPLICIT_META to SIGNATURE would mean a single signature from that SIGNATURE policy organization should be good to create the channel? This is currently not happening. Now this statement contradicts Kostas's answer - `we cannot modify` - putting me in confusion :). As of 1.0.3 - I have successfully changed the `ChannelCreationPolicy` from IMPLICIT_META -> SIGNATURE. So to put this in terms of a scenario -
Conclusion - Channel creation fails if the channel creation transaction is signed by org1, with message `Attempted to include a member which is not in the consortium`
Consortium (CLSConsortium) contains only org1
ChannelCreationPolicy - Modified to SIGNATURE org1/Admins
Channel Profile Definition -
```
profile_org2_org3:
    Consortium: CLSConsortium
    Application:
        <<: *ApplicationDefaults
        Organizations:
            - *org1
            - *org2
            - *org3
```
Right, so the error message explains what's going on, right? You can only include orgs in your channel that are members of the consortium. You wish to create a channel with orgs 1, 2, and 3, but your consortium contains only org1.
> Now this statement contradict again with Kosta answer - `we cannot modify` - putting me in confusion
I think I missed your point here, can you rephrase?
+1 @kostas. @rahulhegde The error is unrelated to the ChannelCreationPolicy. I suspect that when you modified your channel configuration, you not only changed the policy, but also removed the other org definitions from the consortium. All of the initial channel members must be in the consortium. Once a channel has been created you may reconfigure its membership arbitrarily.
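In other words, a sketch of the genesis-profile side (profile name illustrative; org anchors follow the snippet above): the consortium definition would need to list all three orgs before that channel profile can succeed:

```yaml
Profiles:
  SampleSystemGenesis:            # illustrative profile name
    Orderer:
      <<: *OrdererDefaults
    Consortiums:
      CLSConsortium:
        Organizations:            # every org a channel may be created with
          - *org1
          - *org2
          - *org3
```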
Sorry, stepped out - So ChannelCreationPolicy, if changed to SIGNATURE, means the only accepted signature that must be present during the channel creation request is that of the organization defined in `ChannelCreationPolicy`; however, the consortium sub-set definition must also be met.
@rahulhegde Exactly
Thanks Jason and Kostas. I will re-paste my other question.
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=Q7df5YSzWDNzPqCrr) @guoger Thank you very much :) this fits my needs :) Thanks
I am trying to understand how easily this scenario is possible:
- Orderer is defined in org1
- Requirement --> All channel updates to the system and application channel configuration must be allowed only using the org2 Admins policy.
My understanding - I will have to modify the genesis block before the orderer service nodes are bootstrapped with it:
- Do I change the system channel configuration so that
- all `mod_policy` under `/Channel/Orderer/org1/` must be changed to value = `/Channel/Consortiums/ConsortiumId/org2/Admins`
Q1 => Is this correct?
And as per the `channel creation` process described at the orderer node, the `/Channel/Orderer/org1/` group is copied to the channel application configuration. This means their `mod_policy` will also be copied, thus putting it into a state which is non-editable henceforth, since there exists no path like `/Channel/Consortiums/ConsortiumId/org2/Admins` in the application channel configuration.
Q2 => Is this correct?
Is there a way out here - we want only org2's Admins to have permission to change the system/application channel configuration?
> - Requirement --> All channel update to system and application channel configuration must be allowed only using org2 Admins policy.
What modifications specifically? Would you like for Org1 to be able to update its own CRL, for instance? Or should Org2 be the only one allowed to modify any channel configuration? To control membership changes, and who has access to the orderer Broadcast/Deliver functions, control over the `/Channel/Application` group is sufficient. If you wish to control all of the organization's crypto material, anchor peers, etc. then you will want to more broadly change the `mod_policy` values.
> My understanding - I will have to modify the genesis block before orderer service nodes are bootstrapped with it:
This is the easiest way to do things, but you may reconfigure the ordering system channel using configuration update transactions.
> - Do i change in the system channel channel configuration
Yes, I would expect for you to do this.
> - all `mod_policy` under `/Channel/Orderer/org1/` must be changed to value = `/Channel/Consortiums/ConsortiumId/org2/Admins`
> Q1 => Is this correct?
It depends on what you would like to accomplish. If you simply want control over the membership and read/write access to the channel, then you do not need to modify these. If you want to control the certificate material, I would recommend a different approach.
> And as per `channel creation` process described at the orderer node, the `/Channel/Orderer/org1/` group is copied to the channel application configuration. This means there `mod_policy` will also be copied thus putting it to a state which non-editable henceforth. Since there exist no path like `/Channel/Consortiums/ConsortiumId/org2/Admins` in channel application configuration.
Q2=> Is this correct?
Correct. Using absolute `mod_policy` paths in the consortium definition is not recommended for this reason.
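To illustrate the pitfall: a relative `mod_policy` is resolved within whatever subtree the group ends up in, while an absolute one is anchored at the channel root and dangles if the group is copied into a channel where that path does not exist:

```
"mod_policy": "Admins"                       <-- relative: resolved within the enclosing group's subtree
"mod_policy": "/Channel/Orderer/Admins"      <-- absolute: dangles if the group is copied where this path is absent
```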
> Is there a way out here - we want only org2's Admins to have permission to change system/application channel configuration?
Yes, it's possible, but as stated above, I'll need to know your goals more specifically.
Is it possible to have multiple 'solo' orderers? It sounds like a silly question in my head, but I'm not sure. As in, can I have 2 'solo' orderers on a single channel, one belonging to each organization that is on the channel?
@Asara Each solo orderer would represent a distinct ordering network. (So, there would be no enforced relationship between the two. You could for instance create a channel with the same ID on both networks, but they would be distinct channels.)
So in order to have multiple orderers in a channel, they must have a kafka backend?
Correct
Thanks!
I have a little concern with regards to kafka-based ordering on multiple servers with data persistence enforced. If I bring down my network completely, how do I recover? I noticed that the orderer keeps adding new blocks but the peers are not receiving these blocks. How do I make the peers receive these new blocks?
`What modifications specifically? Would like for Org1 to be able to update its own CRL for instance? Or should Org2 be the only way allowed to modify any channel configuration?`
Yes - any update to the MSP elements of Org1 like CRL, could be iCA in long run. This is managed by Org2 completely.
@rahulhegde Then I recommend that you modify the `Admins` policy for each of the org definitions to be `1 of Org2.Admin` rather than `1 of OrgN.Admin` as they are currently defined. Also set the `/Channel/Application/Admins` in this manner when you create the channel. This will give `Org2` complete control over the channel and will give the other orgs zero administrative rights.
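The `1 of Org2.Admin` rule suggested here can be pictured as a minimal signature-policy check. The sketch below is illustrative only: the `signaturePolicy` type and its fields are invented for this example and are not Fabric's actual `cauthdsl` policy types.

```go
package main

import "fmt"

// signaturePolicy is a toy stand-in for Fabric's n-of-m signature policies.
// The type and field names are illustrative, not the real cauthdsl API.
type signaturePolicy struct {
	threshold  int      // how many matching signatures are required ("1 of ...")
	principals []string // accepted principals, e.g. "Org2.Admin"
}

// evaluate reports whether the given signers satisfy the policy.
func (p signaturePolicy) evaluate(signers []string) bool {
	matched := 0
	for _, s := range signers {
		for _, principal := range p.principals {
			if s == principal {
				matched++
				break
			}
		}
	}
	return matched >= p.threshold
}

func main() {
	// "1 of Org2.Admin": only Org2 admins may modify the org definitions.
	admins := signaturePolicy{threshold: 1, principals: []string{"Org2.Admin"}}

	fmt.Println(admins.evaluate([]string{"Org2.Admin"})) // satisfied
	fmt.Println(admins.evaluate([]string{"Org1.Admin"})) // not satisfied
}
```

Setting every org's `Admins` policy to this rule is what gives `Org2` exclusive administrative control in the recommendation above.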
@eetti
> I have a little concern with regards to kafka-based ordering on multiple servers with data persistence enforced. If I bring down my network completely, how do I recover? I noticed that the orderer keeps adding new blocks but the peers are not receiving these blocks. How do I make the peers receive these new blocks?
The peers should recover automatically in this scenario. How are you performing the restart? If you are using something like docker you will need to be careful that the addresses of the ordering service do not change on restart.
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=iPfM4vCaKTFHkperP) @jyellick The addresses are the server ip addresses. I think you are correct it should automatically recover. However if I bring down one orderer (I have 3), when I add the orderer back to the network, it doesn't work properly. I might have to check the addresses, that could be the issue.
@eetti Best of luck. This is a scenario that has certainly been tested and worked for others.
Hi everyone, I got some problem with changing the block size, please advise. When I increase the `MaxMessageCount` from 10 to 100, I can see the block size increases also from 10 tx/block to 100 tx/block. However, when I further increase the `MaxMessageCount` from 100 to 200, the block size only increases to 104 tx/block, and it stays the same even when I increase the `BatchTimeout` from 2s to 5s. BTW, I am using a client that sends 600 tx/sec to the Fabric, so I think the tx rate from the client should be high enough to create blocks with 200 tx/block.
```# Batch Timeout: The amount of time to wait before creating a batch
BatchTimeout: 5s
# Batch Size: Controls the number of messages batched into a block
BatchSize:
# Max Message Count: The maximum number of messages to permit in a batch
MaxMessageCount: 200
# Absolute Max Bytes: The absolute maximum number of bytes allowed for
# the serialized messages in a batch.
AbsoluteMaxBytes: 99 MB
# Preferred Max Bytes: The preferred maximum number of bytes allowed for
# the serialized messages in a batch. A message larger than the preferred
# max bytes will result in a batch larger than preferred max bytes.
PreferredMaxBytes: 512 KB```
@qizhang You may turn on debug level logs in the orderer and look for:
```logger.Debugf("[channel: %s] Proper time-to-cut received, just cut block %d", chain.ChainID(), chain.lastCutBlockNumber)
```
and
```logger.Debugf("[channel: %s] Batch filled, just cut block %d - last persisted offset is now %d", chain.ChainID(), chain.lastCutBlockNumber, offset)
```
To see whether the block is being cut by the timeout or by the batch size being reached respectively.
Since your update worked once, I expect that you are actually hitting the batch timeout. You may try increasing the batch timeout further.
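The cut conditions being debugged here can be sketched as a small simulation. This is a simplified, illustrative model of the rules in the orderer's block cutter (`MaxMessageCount`, `PreferredMaxBytes`, with `BatchTimeout` as the fallback), not the real `orderer/common/blockcutter` implementation.

```go
package main

import "fmt"

// Simplified cut thresholds, mirroring the BatchSize settings above.
const (
	maxMessageCount   = 200        // BatchSize.MaxMessageCount
	preferredMaxBytes = 512 * 1024 // BatchSize.PreferredMaxBytes (512 KB)
)

type batch struct {
	count int
	bytes int
}

// add appends one message of msgBytes to the batch and reports whether the
// batch should be cut immediately afterwards. If neither size condition
// fires, the batch is eventually cut by the BatchTimeout timer instead
// (the "time-to-cut" path in the debug logs).
func (b *batch) add(msgBytes int) (cut bool, reason string) {
	b.count++
	b.bytes += msgBytes
	switch {
	case b.count >= maxMessageCount:
		return true, "batch filled: MaxMessageCount reached"
	case b.bytes >= preferredMaxBytes:
		return true, "batch filled: PreferredMaxBytes reached"
	}
	return false, ""
}

func main() {
	var b batch
	for i := 0; i < 300; i++ {
		if cut, reason := b.add(1024); cut { // 1 KB messages
			fmt.Printf("cut block after %d messages: %s\n", b.count, reason)
			b = batch{} // start a new batch
		}
	}
}
```

With small 1 KB messages the count limit fires first; with large messages the byte limit fires; with a slow client neither fires and the timeout cuts the block, which is consistent with the observed ~104 tx/block when the timeout wins.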
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=bvAmESgoyXEszytL3) @jyellick
Right - complete management to be done by Organization-1. Your suggestion of changing Admins Policy of each and every organization to Organization1.Admin looks good.
Has joined the channel.
Is there any way to remove the spaces between these? - ZOO_SERVERS=server.1=zookeeper0:2888:3888 server.2=zookeeper1:2888:3888 server.3=zookeeper2:2888:3888
I tried adding commas but once deployed the zookeepers kept restarting and wouldn't stay up in docker
@t_stephens67 If you are building the zookeeper image yourself with the fabric provided Dockerfile, you may see
https://github.com/hyperledger/fabric/blob/release/images/zookeeper/docker-entrypoint.sh#L29-L31
is where this variable is used. If you want to use a different delimiter, you could edit this processing, but then you would be committing to maintaining your own fork of the Dockerfile.
great, we are bringing up a swarm and instead of using a docker compose file we are running these commands one at a time, so it wasn't thrilled with the spaces
we tried using single quotes and commas but each gave us errors in the log files
Yes, essentially, that for loop adds each space-separated thing as a line into the zookeeper config file
So, by adding a different delimiter, you are causing them all to be a single line in the config, which will obviously cause you problems
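That entrypoint loop effectively splits the space-separated `ZOO_SERVERS` value and writes one line per entry into the zookeeper config. A sketch of that transformation in Go (the real logic is the shell for-loop at the linked path; this is an illustrative re-implementation):

```go
package main

import (
	"fmt"
	"strings"
)

// zooServerLines mirrors what the zookeeper image's entrypoint loop does:
// it splits the space-separated ZOO_SERVERS value into individual entries,
// each of which becomes its own line in zoo.cfg.
func zooServerLines(zooServers string) []string {
	return strings.Fields(zooServers)
}

func main() {
	env := "server.1=zookeeper0:2888:3888 server.2=zookeeper1:2888:3888 server.3=zookeeper2:2888:3888"
	for _, line := range zooServerLines(env) {
		fmt.Println(line) // one config line per server entry
	}
}
```

This is why swapping the delimiter to commas collapses all three servers into a single (invalid) config line and the zookeepers keep restarting.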
If anyone has ever set up hyperledger fabric using docker swarm along with kafka based consensus help would be greatly appreciated
@t_stephens67 You might be interested in https://github.com/hyperledger/cello I believe their RC channel is #cello
yea one of my team members looked into cello and abandoned it pretty quickly
Because?
not sure I've been working on other parts of the tool I only recently started helping out with distributing the containers
It does look like Cello has a visual interface is there any way to use a terminal? We are doing everything through putty.
As an open source group, I'd encourage everyone to support each other. If Cello doesn't satisfy your deployment needs, I'm sure you're not alone, and we should see what we can do as a community to address those needs, rather than everyone rolling their own solution.
I'm by no means a Cello expert, but I'm rather certain it does not require a GUI
Yeah, that's what I was thinking; it just had a bunch of screenshots. I'll look into it.
I'd encourage you to check out #cello. My exposure to it has been limited, but I recall that @tongli had contributed some scripts around automating deployment on different cloud architectures.
Has joined the channel.
yeah, I'd have to go through and write my own scripts: the struggles of integrating outside software with internal software
Going back to this for a second: https://chat.hyperledger.org/channel/fabric-orderer?msg=fgJMW6TxboTgyWJ6i
The bundle here contains the interim state, correct? https://github.com/hyperledger/fabric/blob/master/orderer/common/msgprocessor/systemchannel.go#L98
Correct
Hi all, I'm working on setting up my workflow for configuring, deploying, and running a fabric network on remote servers, and one of my goals is to decouple the configuration+deployment concern from the channel creation concern. Concretely, I want to get all the CAs, peers, and orderers running on my remote servers first, and then do channel creation, joining, etc later. For now I'm using a "trivial" orderer genesis block that has only a single org for the orderers, and has a single consortium with no members, and I'd like to use config tx updates to add members, create channels, etc. Is that possible?
Is the modification of the Application group happening here? https://github.com/hyperledger/fabric/blob/master/orderer/common/msgprocessor/systemchannel.go#L103 I don't see it...
> For now I'm using a "trivial" orderer genesis block that has only a single org for the orderers, and has a single consortium with no members, and I'd like to use config tx updates to add members, create channels, etc. Is that possible?
@vdods: That is certainly possible.
> Is the modification of the Application group happening here?
Correct, this creates the proposed new channel configuration
You can see how the consortium is added after the ordering service instantiation here: https://github.com/jeffgarratt/fabric-prototype/blob/master/features/bootstrap.feature#L103
@kostas I'm looking at the docs for configtxgen ( https://hyperledger-fabric.readthedocs.io/en/latest/configtxgen.html ) right now -- are there other tools or concepts I'll need to use?
As long as you know how configuration transactions work, you're good to go I think, in the sense that you know what needs to be edited and how: https://hyperledger-fabric.readthedocs.io/en/latest/configtx.html
And can this all be done via the peer CLI? e.g. config updates
I know I've seen channel creation and joining via CLI
Yes.
It's not the ideal way to go about it, but yes.
What's the preferred way?
SDK?
Ah.. gotcha. My goal is to make configuration+deployment of nodes, channel configuration, channel joining, etc the responsibility of the sysadmin, not the app developer
Understood.
I'm looking at CAs/peers/orderers as metaphorical hardware, the channel is a hard drive, and then the chaincode is software to be installed and used by the dev
Thanks for the help! :)
> Correct, this creates the proposed new channel configuration
Ah, I see it now.
Was expecting an additional step where you'd set these policies, etc. in the writeset, but you've been carrying these in the `envConfigUpdate` all along.
@kostas Can an org be added to a channel later solely via configtxgen and CLI commands? Or does that require some SDK work?
You can fetch the latest config using the peer CLI, then using the work @jyellick has done around configtxgen you can construct the required update and send it back to the ordering service.
Cool
At some point I encountered a problem when running the orderer indicating that I should have configtx.yaml in the orderer base dir. I forget exactly why this is or if this was a temporary requirement. Is it really needed? It seems like the genesis block and initial channel config tx should be sufficient.
@qizhang what client are you using? and what is your network configuration? 600 tx/sec is too fast for me to reach.
@jyellick @kostas https://gist.github.com/asaningmaxchain/863ce32ab06f5b78920dacfe259a0846
replace with `startingBlockNumber = fl.Height() - 1`, is that ok?
@bh4rtp you can shoot 600 tx/s using nodesdk if you fork multiple instances of the code. You'll have to take care of synchronization in case your transactions depend upon one another.
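The fan-out idea above can be sketched with goroutines. `submitTx` below is a hypothetical stub standing in for an SDK call that submits one transaction; the point is the parallel structure and the final synchronization, not a real client.

```go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

// submitTx is a placeholder for an SDK call that submits one transaction;
// here it only counts submissions atomically.
func submitTx(sent *int64) {
	atomic.AddInt64(sent, 1)
}

// sendParallel fans txTotal submissions out over `workers` goroutines,
// the same idea as forking multiple SDK client instances to reach a
// higher aggregate rate, then waits for all of them to finish.
func sendParallel(workers, txTotal int) int64 {
	var sent int64
	var wg sync.WaitGroup
	perWorker := txTotal / workers
	for w := 0; w < workers; w++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for i := 0; i < perWorker; i++ {
				submitTx(&sent)
			}
		}()
	}
	wg.Wait() // synchronize: all workers finished
	return sent
}

func main() {
	fmt.Println(sendParallel(6, 600)) // 600 tx split across 6 workers
}
```

As noted above, if transactions depend on one another you would need ordering guarantees between workers, which this sketch deliberately omits.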
> At some point I encountered an problem when running the orderer indicating that I should have configtx.yaml in the orderer base dir. I forget exactly why this is or if this was a temporary requirement. Is it really needed? It seems like the genesis block and initial channel config tx should be sufficient.
@vdods The only time that `configtx.yaml` is required during orderer start is if the genesis method is `provisional` (which effectively invokes `configtxgen` under the covers). This is not recommended for production. If you are supplying a genesis block via the `file` genesis method, there is no need to have a `configtx.yaml`
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=2XMfuCrFfdogza2Xg) @kostas message ConfigEnvelope {
Config config = 1;
Envelope last_update = 2;
} Where is this information stored in the channel transaction? My channel transaction file looks like this:
Channel creation for channel: mychannel
Read Set:
{
"Channel": {
"Values": {
"Consortium": {
"Version": "0",
"ModPolicy": "",
"Value": {
"name": "SampleConsortium"
}
}
},
"Policies": {},
"Groups": {
"Application": {
"Values": {},
"Policies": {},
"Groups": {
"Org2MSP": {
"Values": {},
"Policies": {},
"Groups": {}
},
"Org1MSP": {
"Values": {},
"Policies": {},
"Groups": {}
}
}
}
}
}
}
Write Set:
{
"Channel": {
"Values": {
"Consortium": {
"Version": "0",
"ModPolicy": "",
"Value": {
"name": "SampleConsortium"
}
}
},
"Policies": {},
"Groups": {
"Application": {
"Values": {},
"Policies": {
"Writers": {
"Version": "0",
"ModPolicy": "Admins",
"Policy": {
"PolicyType": "3",
"Policy": {
"subPolicy": "Writers",
"rule": "ANY"
}
}
},
"Readers": {
"Version": "0",
"ModPolicy": "Admins",
"Policy": {
"PolicyType": "3",
"Policy": {
"subPolicy": "Readers",
"rule": "ANY"
}
}
},
"Admins": {
"Version": "0",
"ModPolicy": "Admins",
"Policy": {
"PolicyType": "3",
"Policy": {
"subPolicy": "Admins",
"rule": "MAJORITY"
}
}
}
},
"Groups": {
"Org1MSP": {
"Values": {},
"Policies": {},
"Groups": {}
},
"Org2MSP": {
"Values": {},
"Policies": {},
"Groups": {}
}
}
}
}
}
}
Delta Set:
[Groups] /Channel/Application
[Policy] /Channel/Application/Admins
[Policy] /Channel/Application/Writers
[Policy] /Channel/Application/Readers
@ahmadzafar Please do not post long snippets of files, logs, or code to this channel. Use a service like hastebin.com and post the link here. Please edit your post to do this. I would do it for you, but the formatting of your snippet was lost when you did not use the backtick syntax supported by rocketchat.
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=MWMhi4Cm5yYxAWMmm) @jyellick Okay fine..
Hi @jyellick I have made sbft run on my servers, can it be a commercial solution? does it have any known defects?
@thakkarparth007 I take balance-transfer as an example. I wonder why every block contains only one transaction, and the transaction latency is about `BatchTimeout` seconds.
@bh4rtp because the orderer waits for BatchTimeout seconds before constructing a block and if you don't send any more transactions before the timer expires, the block will contain only one tx
@Vadim but I sent 50 transactions continuously without sleep.
perhaps your code waited for tx to be included in the block before sending the next one?
maybe, I am using `fabric-samples/balance-transfer`.
because it seems to be the issue
I am not familiar with node.js. I just copied the code from balance-transfer.
> Hi @jyellick I have made sbft run on my servers, can it be a commercial solution? does it have any known defects?
@Glen I do not believe SBFT supports creating channels, or channel reconfiguration.
@bh4rtp By default, the examples use an event listener to wait for a transaction to commit, this behavior is expected.
Moving the discussion to the right channel (copy-paste):
GM - I am getting an error fetching the system channel using `peer channel fetch`. I have regenerated the orderer genesis block by adding a `/Channel/Consortiums/Readers`, `/Channel/Consortiums/Writers`. There is IMPLICIT_META set for `/Channel/Readers`, which as per the last discussion should now recursively identify all Readers in the subgroup.
Looking at the error - does it skip the `/Channel/Consortiums/` and only looks at `/Channel/Orderer/`? Is this special case or defect in my setup/code?
[ ](https://chat.hyperledger.org/channel/fabric?msg=BTAM225cXkeJCrdE8) @jyellick
Let me do that. Do you want any DEBUG flag to be on other than `ORDERER_GENERAL_LOGLEVEL`
> Do you want any DEBUG flag to be on other than `ORDERER_GENERAL_LOGLEVEL`
@rahulhegde That's all I need
hi guys, does anyone know how i can add a new org into a network? will this require any downtime in the network?
@wy An example of how to do this is being actively developed. You may add orgs without any downtime.
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=5cp4uEbxZHPzbNdmy) @jyellick
This is resolved. The `Policies` object was missing from `/Channel/Consortiums/ConsortiumId`, and hence the hierarchical processing didn't pick up that organization. Thanks Jason!!!
I want to understand what `/Channel/Orderer/` is used for. It looks to me like it is for the peer verifying blocks signed by the orderer.
@rahulhegde Yes, that is one aspect of it. It also stores the assorted shared configuration required by the ordering service, like the batch size, batch timeout, consensus type, etc.
Hi all, I'm getting an error involving MSP ID "DEFAULT", even after I've purged that from all apparent places in my config files. This error message is from the orderer after I attempt to create the channel via peer CLI:
Orderer logs:
```2017-10-19 15:05:22.856 PDT [cauthdsl] func2 -> ERRO 004 Principal deserialization failure (MSP DEFAULT is unknown) for identity 0a0744454641554c54129a072d2d2d2d2d424547494e202d2d2d2d2d0a4d494943697a4343416a4b6741774942416749554245567773537830546d7164627a4e776c654e42427a6f4954307777436759494b6f5a497a6a3045417749770a667a454c4d416b474131554542684d4356564d78457a415242674e5642416754436b4e6862476c6d62334a7561574578466a415542674e564241635444564e680a62694247636d467559326c7a59323878487a416442674e5642416f54466b6c7564475679626d5630494664705a47646c64484d7349456c75597934784444414b0a42674e564241735441316458567a45554d4249474131554541784d4c5a586868625842735a53356a623230774868634e4d5459784d5445784d5463774e7a41770a5768634e4d5463784d5445784d5463774e7a4177576a426a4d517377435159445651514745774a56557a45584d4255474131554543424d4f546d3979644767670a5132467962327870626d45784544414f42674e564241635442314a68624756705a326778477a415a42674e5642416f54456b6835634756796247566b5a3256790a49455a68596e4a70597a454d4d416f474131554543784d44513039514d466b77457759484b6f5a497a6a3043415159494b6f5a497a6a304441516344516741450a4842754b73414f34336873344a4770466669474d6b422f7873494c54734f766d4e32576d77707350485a4e4c36773848576533784350517464472f584a4a765a0a2b433735364b457355424d337977355054666b7538714f42707a43427044414f42674e56485138424166384542414d4342614177485159445652306c424259770a464159494b7759424251554841774547434373474151554642774d434d41774741315564457745422f7751434d414177485159445652304f42425945464f46430a6463555a346573336c746943674156446f794c66567050494d42384741315564497751594d4261414642646e516a32716e6f492f784d55646e3176446d6447310a6e4567514d43554741315564455151654d427943436d31356147397a6443356a62323243446e6433647935746557687663335175593239744d416f47434371470a534d343942414d43413063414d4551434944663948626c34786e337a3445774e4b6d696c4d396c58324671346a5770416152564239374f6d564565794169416b0a61587a422f6a6e6c5533394237577773394249723963386d534f455046365659317547502b644b5630673d3d0a2d2d2d2d2d454e44202d2d2d2d2d0a
2017-10-19 15:05:22.856 PDT [orderer/common/broadcast] Handle -> WARN 005 Rejecting CONFIG_UPDATE because: Error authorizing update: Error validating DeltaSet: Policy for [Groups] /Channel/Application not satisfied: Failed to reach implicit threshold of 1 sub-policies, required 1 remaining
2017-10-19 15:05:22.857 PDT [orderer/common/deliver] Handle -> WARN 006 Error reading from stream: rpc error: code = Canceled desc = context canceled
```
Peer channel creation commandline:
```peer channel create --cafile root-ca/root.cert.pem --orderer localhost:7050 --logging-level debug --channelID simple-channel --file generated-artifacts/channels/simple-channel/deployment-package/channel.tx
```
@vdods: While the error message is from the orderer, this is not an orderer issue per se. What is happening is that you are presenting to the orderer an MSP (with the ID of "default") whose ID is not present in the MSP map that the orderer maintains.
Have you edited this one: https://github.com/hyperledger/fabric/blob/release/sampleconfig/core.yaml#L252
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=pQfDtW6j6JzBNabkS) In fact, I can run the whole e2e_cli example with sbft, channel reconfiguration is not tested, but it may also be supported @jyellick
hello, anyone knows how i can suspend an organisation from a running fabric network?
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=HiKzuJeD9uABstCmD) @kostas Yeah, I changed them all to Org0MSP (for now, I only have a single org)
@wy: To prevent an org from participating in an existing channel do a config update to remove its org group from the `Groups` map in `/Channel/Application/`.
what do you mean by `/Channel/Application`?
The next logical step is to also remove that org from the consortium definition. Assume this org is part of the consortium `Bar`, you'll need to submit a config update transaction that removes that org group from the `Groups` map of `/Channel/Consortiums/Foo` in the system channel.
> what do you mean by `/Channel/Application`?
Ah, this is where I refer you to: http://hyperledger-fabric.readthedocs.io/en/latest/configtx.html
In short every channel maintains a tree structure for its configuration, where `/Channel` can be thought of as the root node. The `/Channel/Application` part of the config holds all the orgs participating on the channel.
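That tree structure can be modeled with a few lines of Go. This is a toy model of the configuration tree from the configtx documentation, not Fabric's actual `ConfigGroup` protobuf types; only the nesting of groups is represented.

```go
package main

import (
	"fmt"
	"strings"
)

// configGroup is a toy model of the channel configuration tree: every
// group holds named sub-groups, and paths like /Channel/Application/Org1MSP
// address nodes in that tree. (Real config groups also carry values and
// policies, omitted here for brevity.)
type configGroup struct {
	groups map[string]*configGroup
}

func newGroup(children ...string) *configGroup {
	g := &configGroup{groups: map[string]*configGroup{}}
	for _, c := range children {
		g.groups[c] = &configGroup{groups: map[string]*configGroup{}}
	}
	return g
}

// lookup walks a /Channel/... path from the root and reports whether the
// addressed group exists.
func lookup(root *configGroup, path string) bool {
	node := root
	for _, part := range strings.Split(strings.Trim(path, "/"), "/")[1:] {
		next, ok := node.groups[part]
		if !ok {
			return false
		}
		node = next
	}
	return true
}

func main() {
	// /Channel with Application and Orderer sub-groups; orgs live under Application.
	channel := &configGroup{groups: map[string]*configGroup{
		"Application": newGroup("Org1MSP", "Org2MSP"),
		"Orderer":     newGroup("OrdererOrg"),
	}}
	fmt.Println(lookup(channel, "/Channel/Application/Org1MSP"))
	fmt.Println(lookup(channel, "/Channel/Application/Org3MSP"))
}
```

Removing an org from a channel, as discussed above, amounts to deleting its entry from the `Groups` map under `/Channel/Application` via a config update transaction.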
@vdods: That is weird. What is the value of your `CORE_PEER_LOCALMSPID` env var?
@kostas how about CRLs, are they needed?
Has joined the channel.
Assuming a standard setup with a 1-to-1 mapping between MSPs and orgs, I don't see where a CRL would be useful here. The CRL is meant as a means for an MSP provider to say "these identities should be revoked". But for your case, you're getting rid of the MSP provider completely.
Hello
So I've been working on hyperledger for some time now and described a network for development.
I used information gotten from the existing examples on github like fabcar and byfn
however I have noticed that all the examples currently have the orderer as solo
but I would like to know if any examples exist where the order was implemented based on a kafka cluster
also with multiple orderers
I would like to implement this in my development network
any help would be appreciated
@daygee check this out: https://blockchain-fabric.blogspot.in/2017/09/underconstruction-setting-up-blockchain.html
It has instructions for setting up a single kafka based orderer. With some digging around you may be able to get multiple orderers running.
thank you @thakkarparth007
@daygee: The E2E CLI test in the `master` branch uses Kafka: https://github.com/hyperledger/fabric/tree/master/examples/e2e_cli
The documentation also points you to the minimum acceptable settings for a Kafka-based ordering service here: http://hyperledger-fabric.readthedocs.io/en/latest/kafka.html#example
@kostas I did not know that, thanks!
@kostas what will be the implication here if the organisation is a required endorser for a chaincode in the channel?
@wy This channel is for orderer related questions, I will cross post this to #fabric-peer-endorser-committer for you
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=Fdqqw4hXcyFkesfsh) @kostas @jyellick i was referring to this
If the Org is not defined in the channel, then any policy requiring that org's signature can never be satisfied.
can the chaincode be upgraded with another policy if that happens?
Yes, it may
Or rather, depending on the instantiation policy of the chaincode, it may
what do you mean by that? isnt the policy also defined during upgrade?
You may redefine the instantiation and endorsement policies during upgrade.
You may redefine the instantiation and endorsement policies during upgrade. (The old one must still be satisfied though)
so if a particular org is already a required endorser defined in the previous instantiation/upgrade, i will not be able to change the policy if i suspend the org?
The policy checked on upgrade is the instantiation policy. Note, we are modifying how the chaincode lifecycle functions in v1.1 to be more powerful/flexible, which can be used to eliminate problems like an instantiation policy that can no longer be satisfied.
ah so in the current implementation, you can never suspend an org which is an endorser defined in the instantiation policy?
else you would risk causing the channel to stop functioning
Well, it would not stop the channel from functioning, it would stop that chaincode from being invoked anymore
understood, but that would mean that whatever is in the world state for that chaincode can no longer be changed right?
That's correct
@jyellick so when should CRLs be used?
@wy CRLs should be used when an organization wishes to blacklist specific certificates. Perhaps one of their peers was hacked and its private key was stolen. That peer's certificate should be revoked so that it can no longer transact on the network.
@jyellick so as per the way to suspend a bank described by @kostas , do i need to update all channels which the org is part of? or is there a way to revoke all rights globally?
Once created, channels are managed independently. You should send an update to each channel making the modification you desire. Of course, you will likely want to script or automate this process somehow.
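Since channels are managed independently, a removal has to be applied per channel. The sketch below shows only the iteration structure of such an automation script; `applyUpdate` is a hypothetical stub standing in for fetching a channel's latest config, computing the delta, and submitting it to the ordering service (a real script would call the peer CLI or an SDK).

```go
package main

import "fmt"

// applyUpdate is a hypothetical stand-in for: fetch the channel's latest
// config, remove the org's group from /Channel/Application, and submit
// the signed config update to the ordering service.
func applyUpdate(channelID string, updated map[string]bool) {
	updated[channelID] = true
}

func main() {
	// Every channel the org participates in must be updated separately;
	// channel names here are made up for illustration.
	channels := []string{"syschannel", "trade-channel", "audit-channel"}
	updated := map[string]bool{}
	for _, ch := range channels {
		applyUpdate(ch, updated)
		fmt.Printf("submitted config update to %s\n", ch)
	}
}
```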
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=2ru3EaaphgcGnTzji) @jyellick kostas also mentioned a system channel, what is this and who can update it?
Has joined the channel.
The 'ordering system channel' is the channel which the orderers are bootstrapped with initially. It is used to facilitate channel creation. The ordering admin generally has the rights to modify this channel.
Anybody have any experience pairing Orderers with Jocko(https://github.com/travisjeffery/jocko) instead of Kafka?
No. I’ve been following this project since the beginning. Still a bit too early (they don’t have replication implemented yet, right?) but I like where this is going.
I do think it’d be a mistake to invest any more resources in CFT solutions though. (Raft is just an experiment for the APIs.) BFT is what we should be targeting.
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=7gL6CzG2Z9N2kLF7s) @kostas I've set `localMspId: Org0MSP` in core.yaml for the peer, but haven't set CORE_PEER_LOCALMSPID. This should have the same effect, presumably. I made sure that all the "DEFAULT" were changed to "Org0MSP" for localMspID
I've set localMspID to Org0MSP in both orderer.yaml and core.yaml
And I also have
```Organizations:
- &Org0
Name: Org0
# ID to load the MSP definition as
ID: Org0MSP
...
@vdods: _If_ that env var is not set, and you're still seeing this issue after modifying the `core.yaml` file, I'm afraid I'm all out of helpful options.
@vdods Can you please check the value of that variable? For instance:
```
echo $CORE_PEER_LOCALMSPID
```
I believe the docker containers set it automatically
@jyellick I'm running peer, orderer, fabric-ca-server, etc outside of docker, and that env var is not set in the shell.
Which version of `peer` are you executing?
@kostas I'll try setting the var, and see if it changes anything
1.0.3
compiled from the v1.0.3 tag of github repo
Ah, okay, I had a suggestion to test if you were on master, but it does not apply to 1.0.3.
I'm curious what it is, anyway
(Out of curiosity, what's the suggestion?)
(Heh.)
:)
In master, there is a `peer channel signconfigtx` which will simply take a config update transaction (of which a channel creation transaction is one), add a signature to the config update signature set, and write it back to disk.
So, you could run this command, then inspect it using `configtxgen` or `configtxlator` to see the identity which was embedded in the transaction.
Hmm.. could be a useful debugging method
thanks
No problem. As another thought, you might try turning up the `CORE_PEER_LOGGING` to debug
I think the MSP will emit some information about the identity it is loading
Oh I think I may have an idea... I was running `peer channel create` without bothering to set FABRIC_CFG_PATH, thinking that because it's creating a channel, it doesn't need to be fully configured as a peer on the network. That's probably not the case.
So it's probably using all the configuration defaults
Aha, yes, that could very well be it
Hmm, ok, well it got further -- I now have in the orderer logs:
```2017-10-20 14:30:17.864 PDT [cauthdsl] func2 -> DEBU 170 0xc42000e358 identity 0 does not satisfy principal: This identity is not an admin
2017-10-20 14:30:17.864 PDT [cauthdsl] func2 -> DEBU 171 0xc42000e358 principal evaluation fails
2017-10-20 14:30:17.864 PDT [cauthdsl] func1 -> DEBU 172 0xc42000e358 gate 1508535017831076962 evaluation fails
2017-10-20 14:30:17.864 PDT [orderer/common/broadcast] Handle -> WARN 173 Rejecting CONFIG_UPDATE because: Error authorizing update: Error validating DeltaSet: Policy for [Groups] /Channel/Application not satisfied: Failed to reach implicit threshold of 1 sub-policies, required 1 remaining
```
The cert it's talking about is for the peer that's issuing the 'create channel' request. Does that peer have to be an admin on the orderer?
Also, I'm issuing the 'peer create channel' request using a peer on my blockchain network -- one of the peers I'll later invite to join the channel. Is this correct? Or should I be using some differently-configured peer that uses the orderer-admin cert that I've placed in the orderer's msp/admincerts dir?
It seems that the latter is probably appropriate, but I'm really not sure based on how general Fabric's architecture is
@jyellick To summarize my question -- should channel creation be done by a separate peer configured to use the separate "orderer admin" enrollment cert? It seems like a responsibility of the orderer admin and not that of any peer that will join the channel.
Call it a "channel creator peer" configuration, which uses orderer-admin.cert (which is present in the orderer's admincerts dir)
> Does that peer have to be an admin on the orderer?
It has to be an admin of the org that's referenced in the about-to-be-created channel.
So if there are four peers, peer0.orgA, peer1.orgA, peer0.orgB, and peer1.orgB, all of which will be invited to channelX, then the channel creator must be an admin in orgA or orgB, defined by the certs found in $FABRIC_CFG_PATH/msp/admincerts when configtxgen is run? All the interrelationships between the various MSPs are super confusing and not very clearly documented :-/
And I suppose this all applies when the genesis block is created for the orderer
I keep going back to this doc but there's a good reason: https://hyperledger-fabric.readthedocs.io/en/latest/configtx.html
The overall concept here may not be immediately obvious, but once you get the hang of it, you'll hopefully see it's a really powerful configuration mechanism.
Armed with the knowledge from that doc, you should inspect the genesis block and channel creation transaction and pay attention to the policies defined automatically there.
You'll see that configtxgen defaults to a ChannelCreationPolicy that allows any org admin from the consortium to create a channel.
So in order to create a channel using those defaults, you'll need to sign the channel creation request with an admin cert that corresponds to your peer's org MSP as defined in the genesis block in the orderer.
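As a sketch, the environment for that channel-creation call might look like the following. Paths, IDs, and the channel name are hypothetical; the key point is pointing `CORE_PEER_MSPCONFIGPATH` at an org *admin* MSP rather than the peer's own. The leading `echo` makes the last line a dry run:

```shell
export FABRIC_CFG_PATH=/etc/hyperledger/peer-config        # dir containing core.yaml
export CORE_PEER_LOCALMSPID=Org0MSP                        # must match the org MSP in the genesis block
export CORE_PEER_MSPCONFIGPATH=/etc/hyperledger/admin-msp  # an org ADMIN identity, not the peer's MSP
# Drop the leading echo to actually create the channel.
echo peer channel create -o orderer.example.com:7050 -c mychannel -f channel.tx
```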
@kostas Ok, thanks. I think I'm understanding more -- time to try it all out and see if I really do :)
Does it sound reasonable to register a "channel-admin" account for each org, and make that the only admin for the channel? I.e. restrict channel creation to a single account per org, which would be used by e.g. a sysadmin who's in charge of setting up the blockchain network ahead of time.
I suppose I should also say that I intend on having a fixed number of channels per app, and they would all be created at the beginning.
Also, is there any way to specify the path to configtx.yaml directly to configtxgen, instead of setting FABRIC_CFG_PATH?
I'd rather not have to copy configtx.yaml into a different place
I have non-generated configuration files and generated crypto materials kept separate
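To my knowledge the v1.0.x `configtxgen` has no flag for the config path, but `FABRIC_CFG_PATH` can be scoped to a single invocation, so the source directory never has to be copied. A dry-run sketch with hypothetical paths and profile name (drop the leading `echo` to execute):

```shell
CFG_DIR=/path/to/source-config   # directory holding configtx.yaml
# Drop the leading echo to actually run configtxgen.
echo FABRIC_CFG_PATH="$CFG_DIR" configtxgen -profile TwoOrgsChannel \
  -outputCreateChannelTx /path/to/generated/channel.tx -channelID mychannel
```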
Has joined the channel.
Has joined the channel.
I'm trying to run bft-smart, but stumbled at the very beginning. I did:
1. clone fabric and hyperledger-bftsmart repos
2. check out zeromq branch
3. run `ant` in hyperledger-bftsmart
4. run `./launch4Replicas.sh`,
which gave me 'Could not find or load main class bft.BFTNode'.
What else should I do before running launch4Replicas?
How do we set whether a node is an endorser in the network? Do we set some environment variable for it?
Has joined the channel.
@outis: I suggest you reach out to the authors of the bft-smart plugin for Fabric.
@chfalak: This belongs to #fabric-peer-endorser-committer
Has joined the channel.
What is the purpose of the 4th additional kafka broker in the Kafka setup?
Has joined the channel.
Hi, currently have 4 kafka and 3 zookeepers across three VM's and getting this recurring message in our orderer
2017-10-23 14:20:03.591 UTC [orderer/kafka] try -> DEBU 149 [channel: testchainid] Attempting to post the CONNECT message...
[sarama] 2017/10/23 14:20:03.591171 client.go:599: client/metadata fetching metadata for [testchainid] from broker kafka3:9092
[sarama] 2017/10/23 14:20:03.592896 client.go:610: client/metadata found some partitions to be leaderless
Does anyone know what we might be configuring wrong and why this isn't working?
@niteshsolanki: Kafka replicates a partition (channel) in `default.replication.factor` replicas. It also won't write something to the channel until `min.insync.replicas` have acked it. With what I've written so far, a 3-broker setup with `default.rf` set to 3 and `min.isr` set to 2 should work just fine.
However the problem is that Kafka doesn't allow you to create a new topic (which is what happens when we create a channel) unless `default.rf` brokers are up.
So with the 3-broker setup you are unable to create channels if you have one broker down.
You therefore need to have 4 brokers so that you can have 1 broker go down and all operations (read/write/create channel) can go on without issues.
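The knobs above map to Kafka broker settings; assuming an image that translates `KAFKA_*` environment variables into `server.properties` entries (as the fabric-kafka docker-compose samples do), each broker service might carry something like the following sketch. The unclean-leader setting follows the Fabric Kafka guide so a lagging replica can never be elected leader:

```yaml
# Per-broker settings for the crash-tolerant 4-broker setup discussed above:
environment:
  - KAFKA_DEFAULT_REPLICATION_FACTOR=3          # replicate each channel on 3 brokers
  - KAFKA_MIN_INSYNC_REPLICAS=2                 # a write needs 2 acks before it counts
  - KAFKA_UNCLEAN_LEADER_ELECTION_ENABLE=false  # never elect an out-of-sync leader
```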
@rbulgarelli: We'll need logs to figure out what's up. You'll need to set the logging levels to DEBUG (see: https://hyperledger-fabric.readthedocs.io/en/latest/kafka.html#debugging) and use Hastebin for the resulting orderer logs. Post the links here.
@kostas thanks for the answer. Another question I had was why are channels mapped to single partition topic only and not with multiple partitions? More partitions would have higher throughput right?
Kafka guarantees ordering only per partition.
If you were to map a channel to multiple partitions, all the ordering guarantees would go out the window.
ohk.. so what is the recommended number of OSN for kafka set up?
No recommended number, this depends on your application really and whether your OSNs can handle the load.
Ok.. thanks @kostas
orderer-logs.txt
Can you also please post the logs from your Kafka brokers?
Something's definitely off in your Kafka cluster setup.
@kostas any advantage of setting ISR to 2 and not just 1?
@niteshsolanki: Yes. If it's set to 1 and the leader broker dies, your blockchain is broken because none of the replicas are guaranteed to be at the same height. ISR = 2 is the bare minimum you can roll with.
Ohk..so in Hyperledger this config is same throughout for all channels (partitions)?
The configuration applies to the Kafka cluster so it affects all channels.
Ohk.
Hi, here are our 4 kafka logs.
kafka0-logs.txt
kafka2-logs.txt
kafka3-logs.txt
kafka1-logs.txt
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=hdu7ehEWPcF6rjdjw) @kostas thanks for the pointer.
@jyellick Why was the sbft consensus removed after v1.0.0-alpha2?
It wasn't production-ready.
@honeyc: https://lists.hyperledger.org/pipermail/hyperledger-fabric/2017-May/001047.html
When will development of sbft resume? Are there any plans for it?
@honeyc I hope it will release soon. But (everything before `but` is bullshit)... Seems it is a NO after checking https://wiki.hyperledger.org/projects/fabric/proposedv1_1
Haha, maybe we can try to implement it ourselves, referring to the previous unfinished version? Do you have more information about the architecture that we can refer to?
@honeyc: We're looking at around 1.3 or so but nothing set in stone yet.
@sanchezl: I'm looking at the logs that @rbulgarelli has posted above and they look interesting. The OSN reaches out to broker #2 (`kafka2`) which seems to always return stale metadata, pointing to `kafka3` (broker #3) as the leader of the partition. As suggested [here](https://github.com/Shopify/sarama/issues/595#issuecomment-171269672), this most likely indicates that ZK is out of sync with the Kafka broker (broker #2 in our case). Do you happen to have any idea how we can confirm if that's the case? I've been playing around with the metadata stored in ZK but can't seem to find anything: https://cwiki.apache.org/confluence/display/KAFKA/Kafka+data+structures+in+Zookeeper I am probably missing something though.
@rbulgarelli , can you provide zookeeper logs also? Also, any information on how these processes are running (you mention 3 VMs, but I see at least 7 hostnames in use).
@kostas, @rbulgarelli Here is a script that starts a GUI tool that might be useful: https://jira.hyperledger.org/secure/attachment/12406/manager.sh
you might have to adjust the script for your particular situation:
• the script assumes you are using docker
• the script assumes one of your zookeeper containers is named `zookeeper0`
Hi everyone.. Can anybody tell me the difference between an Application Org and a Client Org? How can we set it in the configtx.yaml file? I am using this file https://hastebin.com/amofizuxej.coffeescript . I want to make Org1 and Org2 Application Orgs and Org3 a Client Org. And how can we check the ledger on any peer?
@ahmadzafar where do you see a client org in configtx.yaml? I see only application orgs.
which are orgs from a consortium that establish a channel
@Vadim Application:
<<: *ApplicationDefaults
Organizations:
- *Org1
- *Org2
- *Org3
Org1,2,3 are in the same channel so that they can communicate with each other. At the end of file
Application: &ApplicationDefaults
Organizations:
- *Org1
- *Org2
I only set Org1 and Org2 in ApplicationDefaults to make these Application Orgs..
Is anything wrong in this?
@ahmadzafar the first part overrides the second part
@Vadim So what can I do to make org3 a client organisation?
what is "client organization"?
@Vadim It may be an application which is communicating with an organisation. A client org has no ledger on it..
you mean a client app maybe?
yes a client app
then it must have an identity issued by one of the channel orgs
Can you explain how? How can a client communicate with an organisation?
@ahmadzafar if by "communicate" you mean invoking/querying a chaincode, I recommend you to check balance-transfer or fabcar examples from https://github.com/hyperledger/fabric-samples
Can anyone explain what type of parameters can be stored in ApplicationDefaults in the configtx.yaml file?
# SECTION: Application
#
# - This section defines the values to encode into a config transaction or
# genesis block for application related parameters
#
################################################################################
Application: &ApplicationDefaults
The Sbft is a simplified PBFT or speculative BFT?
simplified
what does simplified mean?
you can find a decent explanation in the following post: http://sammantics.com/blog/2016/7/27/chain-1
here is relevant publication: https://www.microsoft.com/en-us/research/publication/practical-byzantine-fault-tolerance-proactive-recovery/?from=http://research.microsoft.com/en-us/um/people/mcastro/publications/p398-castro-bft-tocs.pdf
Very grateful
Has joined the channel.
Hello, I've been thinking about adding a "Proof of Work" like algorithm to Hyperledger. I think it could be of some help in some very specific cases. For instance, in the case of sensors, which are not able to directly communicate with the blockchain on their own ... There might be an intermediate node, which sends transactions for the sensors instead. How can the sensor make sure the transaction has been truly sent? Either by proof of work -- the sensor checks at a given pace proofs such as block, hash+nonce -- if the difficulty is large enough, a single node cannot achieve such a proof in such a small amount of time. It follows that the intermediate node cannot hold transactions without being spotted :-) Another way would be to sign blocks, but it would imply sensors knowing the public keys of other nodes beforehand.
PS : consensus setup should be customized depending on the needs, I think.
Has joined the channel.
can anyone give any thoughts on changing the consensus in fabric?
@JeremyMet fabric orderers are signing the blocks
@Vadim Yes, indeed! Actually, it really depends on the use case. For very specific cases (such as the one I previously described), PoW could be interesting. As consensus is easily pluggable, this is not really an issue for Hyperledger. However, shouldn't PoW be provided natively? I don't know.
@sanchezl, @kostas I have attached the three zookeeper logs. I am not sure what you mean by several host names, could you clarify what you mean?
zookeeper0-logs.txt
zookeeper1-logs.txt
zookeeper2-logs.txt
We have one VM hosting our docker swarm and serving as our manager and then three other VMs that are connected to the swarm that are running the containers. We have tried to distribute the zookeepers and kafkas to all three machines but were running into issues and were unsure if they all needed to be on one machine only.
@JeremyMet: You absolutely do _not_ need a proof of work algorithm in Fabric, or in private/permissioned blockchains in general. PoW is there to take care of Sybil attacks, an attack vector you do not have.
We inspected the zookeeper container and found no broker information like in this link (https://cwiki.apache.org/confluence/display/KAFKA/Kafka+data+structures+in+Zookeeper) any suggestions as to why we would be missing this info in our ZK container?
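One way to check the broker registrations directly, assuming the standard ZooKeeper CLI (`zkCli.sh`) is available in the container; the ensemble address is hypothetical, and the leading `echo` makes it a dry run (drop it to execute):

```shell
# Each live broker should show up as an id under /brokers/ids; an empty list
# means the brokers never registered with this ZooKeeper ensemble.
ZK=zookeeper0:2181
echo zkCli.sh -server "$ZK" ls /brokers/ids
```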
Hi... we have been working on it, and now our orderer is saying "found some partitions to be leaderless". We got rid of the stale metadata issue and now have this.
I still suspect something is wrong with the zookeeper replication.
Try running zk-smoketest against your zookeeper cluster:
```
# build zk-smoketest image
docker build --tag zk-smoketest https://github.com/phunt/zk-smoketest/raw/master/Dockerfile
# get the network being used by the zookeeper containers
ZOOKEEPER_CONTAINER_NAME=server_zookeeper0_1
NETWORK_ID=$(docker inspect $ZOOKEEPER_CONTAINER_NAME --format '{{range $i,$v := .NetworkSettings.Networks}}{{$i}}{{end}}')
# run zk-smoketest
docker run \
--network $NETWORK_ID \
zk-smoketest \
--cluster=zookeeper0:2181,zookeeper1:2181,zookeeper2:2181
```
(of course, edit the above for your environment)
Also, here is a kafka cluster setup docker-compose file that you can compare to your own:
https://github.com/hyperledger/fabric/blob/master/orderer/common/server/docker-compose.yml
[myid:3] - INFO [ProcessThread(sid:3 cport:-1)::PrepRequestProcessor@649] - Got user-level KeeperException when processing sessionid:XXXXtype:create cxid:0x17 zxid:0x100000055 txntype:-1 reqpath:n/a Error Path:/brokers/ids Error:KeeperErrorCode = NodeExists for /brokers/ids
Hi, this is what we get when we check the leader zookeeper log. Any suggestions on where to go to fix this?
This is not an error.
(Notice the `INFO` log level.)
Hello, how can I join 2 orderers to the same channel?
@t_stephens67 you need to use Kafka for that
yes we have 4 kafka brokers and 3 zookeepers
You don't need to join them to a channel. As long as they communicate via the same Kafka cluster they have access to all channels.
@kostas so the following command is not necessary? "docker exec vp0 peer channel create -o orderer.example.com:7050 -c composerchannel -f /etc/hyperledger/configtx/composer-channel.tx"
No, it is. This command refers to joining a _peer_ to a channel.
oh ok I see thanks
@kostas do we need to specify orderer 1 and orderer 2 in the -o flag?
No.
ok we are trying to get everything working with swarm and its quite the process
:frowning2:we got the whole project working off 1 VM and now we are trying to rip all the containers apart..... we are not network people....
Has joined the channel.
Has joined the channel.
[sarama] 2017/10/26 08:30:49.516017 client.go:601: client/metadata fetching metadata for all topics from broker kafka1:9092
[sarama] 2017/10/26 08:30:59.518064 broker.go:96: Failed to connect to broker kafka1:9092: dial tcp: i/o timeout
[sarama] 2017/10/26 08:30:59.518195 client.go:620: client/metadata got error from broker while fetching metadata: dial tcp: i/o timeout
[sarama] 2017/10/26 08:30:59.518258 config.go:329: ClientID is the default of 'sarama', you should consider setting it to something application-specific.
[sarama] 2017/10/26 08:30:59.518328 client.go:601: client/metadata fetching metadata for all topics from broker kafka0:9092
[sarama] 2017/10/26 08:31:09.520304 broker.go:96: Failed to connect to broker kafka0:9092: dial tcp: i/o timeout
[sarama] 2017/10/26 08:31:09.520441 client.go:620: client/metadata got error from broker while fetching metadata: dial tcp: i/o timeout
[sarama] 2017/10/26 08:31:09.520632 config.go:329: ClientID is the default of 'sarama', you should consider setting it to something application-specific.
[sarama] 2017/10/26 08:31:09.520708 client.go:601: client/metadata fetching metadata for all topics from broker kafka2:9092
I ran an example from "https://github.com/asararatnakar/FabricNodeApp1.0"; the logs of orderer0 are above
What are the transactions ordered by? Timestamp?
@srongzhe: You are not configuring your Kafka cluster correctly. Your first step —at a minimum— should be to set up a Kafka cluster as the Apache Kafka quick start documentation suggests. Only after you got this running should you move towards setting up a Kafka-based orderer.
The transactions are ordered according to the order with which the Kafka replica that owns the partition corresponding to the channel writes them to disk.
I deleted your other question since it's cross-posted in #fabric and answered it there.
@kostas I only start "docker-compose -f XXXXX.yaml up -d", are there other things to be done? Create topics? Create a producer?
https://chat.hyperledger.org/channel/fabric-orderer?msg=roZE8tiZu9Xh6zr7A
You don't have to do any of those things. As I said though, something's up with your Kafka cluster/environment. https://chat.hyperledger.org/channel/fabric-orderer?msg=roZE8tiZu9Xh6zr7A
Has joined the channel.
hi, I think @t_stephens67 asked the questions about multiple orderers earlier yesterday... just wanna confirm, can we have more than 1 orderer serve the same channel?
I have a sample app registered with an orderer org serving a channel and multiple peers, the minute I configure more than 1 orderer, I get an error
Yes, you may have as many orderers as you would like if you use the Kafka consensus mechanism.
What sort of error?
I have a Kafka/zookeeper cluster configured
in docker-compose.yaml, I configured orderer0.example.com, orderer1.example.com and orderer2.example.com , all three have depends_on kafka0, kafka1, kafka2 and kafka3
but when I start my network and run my bdd test, it fails when trying to load one of my CCs.. if I comment out orderer1 and orderer2, my bdd test pass with no issues
@Baha-sk I suggest you look at the example:
https://github.com/hyperledger/fabric/tree/master/examples/e2e_cli
This shows how to configure fabric to use a Kafka cluster as a backend
Note particularly, the way the `configtx.yaml` is modified to bootstrap the ordering service. If you add additional ordering nodes, you will want to update the list of orderers in `configtx.yaml` _before_ bootstrapping the network
thanks @jyellick, I think fabric/examples has a similar example, my more specific question is, the docker compose in these examples has only 1 instance of the orderer "orderer.example.com" with a Kafka/ZK cluster.. can we not have more than 1 orderer serving the same channel?
You may, but you will need to modify `configtx.yaml` to enumerate the additional orderer addresses
https://hyperledger-fabric.readthedocs.io/en/latest/kafka.html#example
I'll have a look at configtx.yaml .. but I did update my copy of configtx.yaml... I'm just gonna compare your example with mine..
Note here: https://github.com/hyperledger/fabric/blob/master/examples/e2e_cli/configtx.yaml#L106-L107
There is only a single orderer address. To use multiple orderers, you should specify their addresses here as well
correct, that's exactly what I modified to add multiple orderer hosts in configtx.yaml
Addresses:
- orderer0.example.com:7050
- orderer1.example.com:7050
- orderer2.example.com:7050
and added 3 entries in docker compose
but I suspect that for 1 channel/peer consortium, only 1 orderer can manage their transactions, is this true?
No, this is not true
You may have as many orderers as you wish, @kostas linked to perhaps a better example above ( https://hyperledger-fabric.readthedocs.io/en/latest/kafka.html#example )
You may have as many orderers as you wish (all of which may service any channel), @kostas linked to perhaps a better example above ( https://hyperledger-fabric.readthedocs.io/en/latest/kafka.html#example )
so far the example you have shown above and the Fabric e2e cli example has only 1 orderer with Kafka/Zk cluster
That's correct, but it could have defined more, perhaps that's an enhancement we can add in the future
if you can show me a real example with more than 1 orderer container serving the same channel/peers consortium that would be great...
so far, in my example, only 1 orderer with a cluster of Kafka/ZooKeeper works
the minute I add the other 2 orderer containers in my docker compose, I get a failure
Hi @Baha-sk, did you check this out by any chance? https://hyperledger-fabric.readthedocs.io/en/latest/kafka.html#example
It refers to the following compose files
https://github.com/hyperledger/fabric/blob/master/bddtests/dc-orderer-kafka.yml
https://github.com/hyperledger/fabric/blob/master/bddtests/dc-orderer-kafka-base.yml
Error: QueryChaincode return error: CreateAndSendTransactionProposal returned error: invoke Endorser returned error: Transaction processor (localhost:9051) returned error 'rpc error: code = Unknown desc = chaincode error (status: 500, message: Calling mySnap (endorseTransaction) through stub.InvokeChaincode("mysnap") return status 500 with message Error selecting endorsers: Error getting peer group resolver for chaincodes [[mybrokercc]] on channel [consortium]: unable to create new peer group resolver for chaincode(s) [[mybrokercc]] on channel [consortium]: error retrieving signature policy for chaincode [mybrokercc] on channel [consortium]: error querying chaincode [mybrokercc] on channel [consortium]: error querying chaincode data for chaincode [mybrokercc] on channel [consortium]: Error querying peers on channel consortium: Error initializing new channel: Unable to retrieve channel configuration from orderer service: error returned from orderer service: Error creating NewAtomicBroadcastClient rpc error: code = Unavailable desc = grpc: the connection is unavailable)' for txID '5a2a3cc2a226cb2cc5481b8221f3131e9c50b67b968aec4764fe18fae0e8a98d'
`Error creating NewAtomicBroadcastClient rpc error: code = Unavailable desc = grpc: the connection is unavailable)'`
That error sounds like your client cannot make a network connection to the orderer
This would imply that there is a network connectivity issue between your client and the orderer, not necessarily that there is anything wrong with the composition. Did you verify that all of the orderers started?
I noticed in the Kafka example under bddtests you added the zookeeper containers to the orderer's `depends_on`; I only added the kafka containers as dependencies in my orderer...
yes, all 3 orderers were started.. let me add the zookeeper to my orderer's dependencies and retry..
@Baha-sk Again, this looks to me like a network connectivity issue between the client and the orderer, I do not believe diagnosing the connection between the orderer and Kafka is likely to be fruitful
hmmm.. I'm running the full example network on my laptop.. so there should be no connectivity issue
I just hope it's not a Mac vs Windows vs Linux issue
Perhaps you changed a host name somewhere and forgot to change it elsewhere? Perhaps renaming `orderer.example.com` to `orderer0.example.com` for instance?
This is routinely tested on Mac, Linux, and Windows.
ok.. I see that orderer1 and orderer2 extend orderer0 in the kafka example.. I made orderer0, 1 and 2 all extend orderer.example.com
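For anyone following along, here is a minimal sketch of the extension pattern used in the bddtests compose files; the base file name, service names, ports and broker count are illustrative, not the exact contents of dc-orderer-kafka.yml:

```
# Hedged sketch only; names, ports and the base file are illustrative.
version: '2'
services:
  orderer0.example.com:
    extends:
      file: dc-orderer-base.yml   # hypothetical shared base definition
      service: orderer-base
    ports:
      - "7050:7050"
    depends_on: &deps [zookeeper0, kafka0, kafka1, kafka2, kafka3]
  orderer1.example.com:
    extends:
      file: dc-orderer-base.yml
      service: orderer-base
    ports:
      - "8050:7050"
    depends_on: *deps
  orderer2.example.com:
    extends:
      file: dc-orderer-base.yml
      service: orderer-base
    ports:
      - "9050:7050"
    depends_on: *deps
```

Note that compose's `extends` does not carry over `depends_on`, which is why each orderer lists the brokers (and, per the bddtests example, the zookeepers) explicitly.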
But I meant, did you fix your application to refer to the new addresses you defined?
my app has all three orderers registered with their respective addresses
I will check again in a bit.. I have another urgent issue to fix right away.. thanks @jyellick and @kostas for your help...
Has joined the channel.
If all orderers including Kafka fail due to network disconnections, and then the network comes up again, all the channels seem not to work. What is the recovery process?
@nate94305: Can you rephrase? Not sure I follow. Feel free to describe this step-by-step.
I ran the file "https://github.com/hyperledger/fabric/blob/master/orderer/common/server/docker-compose.yml" with the command "docker-compose up -d"; the 3 kafka containers keep restarting frequently. Why?
@jyellick https://gist.github.com/c3922717ef26b2c40ce772b94b47f724 can you take a look?
@asaningmaxchain: What does this have to do with the ordering service?
i use the cmd go run orderer/main.go to start the orderer
in the master branch
i don't know what the problem is
my go version is 1.8
@asaningmaxchain: Have you played around with `govendor` by any chance?
Can you tell me what you see when you run `ls -l $GOPATH/src/github.com/hyperledger/fabric/vendor/golang.org/x/net/context`?
Use hastebin.com to paste the output.
@srongzhe: Will be impossible to tell unless you provide the Kafka broker logs.
@kostas wait a moment
@kostas it's my mistake, please ignore
Hi All, is the sPBFT consensus algorithm available in 1.0? Please guide me on how to enable/configure it
It is not.
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=NNKxaACohs8dru7nc) @kostas Thank you, curious to know when it will be available
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=Q47PqcMTMcD4ab4da) @kostas Hi, Kostas. We are testing Hyperledger Fabric's availability under network failure during a BMT in Korea, with 5 orderers (along with 5 Kafkas and 5 ZooKeepers) on 5 servers (each server running 1 orderer, 1 Kafka and 1 ZooKeeper). The client simulated the network failure by changing the IP table (blocking ports) one by one, going from all 5 orderers' networks unblocked to all 5 blocked. Then the network came back by restoring the IP table (unblocking ports). However, we have trouble restoring the ordering service. The orderers were not working at all. So we restarted the orderers/Kafkas/ZooKeepers; the orderers/Kafkas are still not working, showing a 503 error in the logs (service not available)
I will need access to the Kafka logs.
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=o5fnSYmzFd7yvohCn) @kostas We are out of BMT test room now. We will try to get the logs.
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=dTnenn5XeewNaHghp) @kostas benchmarking test.
What is BMT to begin with?
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=dTnenn5XeewNaHghp) @kostas We are out of BMT test room now. I am not sure I can get the logs but I will try.
A benchmarking test is for comparing the functional and non-functional features between suppliers' solutions.
In fact, I think there’s no need for the logs. If I get the scenario you are testing correctly, you are exceeding the fault tolerance of the Kafka cluster.
Yes, we have exceeded the fault tolerance. So the ordering service stopped. After that, how do we restore the service?
If you exceed the fault tolerance of the network, all bets are off. There is no automated way to restore the service.
What is manual way?
This is not a Fabric issue per se. You’d bump into the same issue if you were running a Kafka cluster.
So, Kafka should never be stopped?
No, Kafka’s fault tolerance should not be exceeded.
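To make "fault tolerance" concrete: with `acks=all`, the number of broker failures a topic can absorb while still accepting writes follows directly from the replication settings. A small illustrative sketch (the function name and the RF/min-ISR numbers are ours, not from Fabric or Kafka):

```go
package main

import "fmt"

// tolerableFaults is an illustrative helper (not a Fabric API): with
// acks=all, a topic with replication factor rf and min.insync.replicas
// minISR keeps accepting writes while at most rf-minISR of the brokers
// holding its replicas are down.
func tolerableFaults(rf, minISR int) int {
	return rf - minISR
}

func main() {
	// Example numbers often seen in Kafka-based orderer setups:
	fmt.Println(tolerableFaults(3, 2)) // 3 brokers, min ISR 2: survives 1 fault
	fmt.Println(tolerableFaults(4, 2)) // 4 brokers, min ISR 2: survives 2 faults
}
```

Exceed that budget (as in the test scenario above, blocking all 5 nodes) and the cluster can no longer form a valid ISR, which is why the service does not come back on its own.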
Sure, that's the 1st target for service operation. Our issue, per the client's request, is what the manual way is to restore the orderer in an extreme situation.
The manual way would probably involve finding the orderer with the longest ledger, copying it over manually to all other ordering service nodes, and modifying the logic in the orderer binary so that it doesn't seek to the latest recorded offset but instead starts from scratch.
There may be details that I’m missing or things that can go wrong here. This is not a path I suggest you take.
No, no, no..... we are not even checking the transactions or anything.... Orderers/Kafka can't connect to each other, including the nodes.
The 503 error is "service not available."
Let’s do this right. Post a detailed write-up with Kafka logs when you get a chance. Use Hastebin for all of these things.
Tag myself and @sanchezl
how can I use hastebin?
hastebin.com?
Can't I use this chat room? I am not sure how to forward the messages to you and sanchez using hastebin.
I will tag you and Sanchezl
No, you paste the logs in Hastebin, then paste the links to your Hastebin snippets here.
See the channel’s description.
Ah... okay, so as not to fill up too much space on this board.
Exactly.
Thanks Kostas.
One more question. When the minimum number of orderers is set to 2 and the others die due to network failure or something, those two orderers are working hard. Then the other orderers become active again. How do we pass some of the load to the others? Is it automatic?
I think this depends on the implementation of the peer/SDK. IIRC, they would stick to the currently alive orderer, so it is not load-balanced
Hi @kostas . What is the value of the `request.required.acks` flag for the Producer in the Kafka-based ordering service?
@niteshsolanki it's set here: https://github.com/hyperledger/fabric/blob/release/orderer/kafka/config.go#L70
`sarama.WaitForAll`
@guoger thanks.. Is it recommended to use WaitForAll? What will happen if it is set to WaitForLocal and both of the other ISRs are down? Will the kafka leader still write data?
@kostas @jyellick have you considered smooth upgrades? if i deploy with v1.0, how can i upgrade in the future?
@niteshsolanki I could be wrong here, but I think we wanna make sure that we consider a tx successfully produced *iff* all ISRs commit it, so that other orderers reading from non-leader ISRs are in sync
@guoger ok. so if it is set to WaitForLocal, then the followers will not fetch logs (data) from the leader? does that flag really control the replication?
@niteshsolanki followers still fetch data from leader. However, if it's set to `waitForLocal` AND this fetch failed, producer would still consider tx to be successful as long as it's committed in leader
ok. and let's say the follower never fetches the logs and the ISR requirement is not satisfied, will the message still be considered committed?
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=pHaD8EX9epDwqYKpi) @niteshsolanki that's how I interpreted the kafka doc
so, if a message is committed at leader, but not followers, and the leader crashes. The tx is lost
and we even risk a fork in this case, if orderers are consuming data from different ISRs (i.e. one orderer consumed a new tx from the leader right before it crashed, but other orderers who are consuming from followers never got this tx)
ohh.. so how will the fork be resolved in this case?
that's why we use `WaitForAll` so that if tx is considered committed from orderer's perspective, it's committed at every ISR
but again, I could be wrong here.. need @jyellick or @kostas to confirm
I'm just trying to reverse-engineer the reason after all :P
@guoger thanks
@kostas, @sanchezl, @guoger Hi, Kostas. I linked all the logs of the orderers, zookeepers and kafkas as a zipped file on Google Drive ~ https://drive.google.com/open?id=0Bz2qhnN8N1h3amxvWlZhN1luSWc
Has joined the channel.
@niteshsolanki @guoger I will let @kostas confirm, but I believe the `request.required.acks` is actually about not falsely acknowledging a transaction to the client. Essentially, it prevents us from replying with a success to a client until the message is far enough into the process that there is fault tolerance. With respect to state forking on leader crash, the setting which protects us is actually `unclean.leader.election.enable=false`, which requires that if the fault tolerance of Kafka is exceeded, only members of the most recent ISR are considered as candidates for leader election.
@niteshsolanki FYI, here is a link http://cloudurable.com/blog/kafka-architecture-low-level/index.html which discusses this; in summary, `request.required.acks` is about durability, while `unclean.leader.election.enable` is about consistency
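In broker-configuration terms, the two settings discussed above, plus the replication settings they interact with, look roughly like this in `server.properties`; the numeric values are the ones commonly suggested for the sample 3-broker setups, not a mandate for every deployment:

```
# Hedged sketch of broker-side settings; adjust for your cluster size.
unclean.leader.election.enable=false   # consistency: never elect an out-of-ISR leader
min.insync.replicas=2                  # with acks=all, at least 2 replicas must ack
default.replication.factor=3           # replicate each channel's partition to 3 brokers
```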
@jyellick I posted a message regarding the recovery of kafka service to Kostas and Sanchez. Can you kindly help me on the issue? https://chat.hyperledger.org/channel/fabric-orderer?msg=dvzEH5dSkC5Wu6RsP I put the logs of orderer/kafka/zookeeper here: https://drive.google.com/open?id=0Bz2qhnN8N1h3amxvWlZhN1luSWc
@nate94305 As @kostas indicates, Kafka is a crash fault tolerant system, which tolerates a finite number of faults. If you have violated the number of faults it can tolerate, then recovery may be quite difficult. I'd recommend first that you focus on restoring your Kafka cluster, looking through its logs and eliminating errors. Keep in mind, if you do any sort of destructive recovery, you will very likely fork the blockchain.
@jyellick thanks for the useful information provided. Actually I had a Kafka setup of 3 broker nodes with RF=3 and min.insync.replicas=2. I had shut down two of my broker nodes and run the producer with request.acks=1, and Kafka could commit the message. 2) For the same setup I ran the producer with request.acks=all, and then Kafka didn't commit the message. So my question is: if we specify min ISR=2 and RF=3 and I have only one broker running, how could Kafka commit the message with producer request.acks=1?
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=MNRHHsFdkMz2xaMKa) @jyellick In this case, none of orderer/kafka/zookeeper was down. The network around this were down and up.
@nate94305 If the Kafka cluster did not suffer any faults, but you are receiving 503 errors, then it is more likely that the orderer simply cannot contact the Kafka service. If you are using some sort of network virtualization like docker, then perhaps the hostname to IP mapping or similar was hurt during your failure simulation. Losing network connectivity between the orderers and Kafka cluster should recover automatically.
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=kzHckNZ4vdqZsF6a2) @jyellick Is there any recovery time to set in the configuration?
@nate94305 Yes, please see `orderer.yaml`'s Kafka section; there are many configurable timeouts/retries there
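For reference, the knobs in question live under the `Kafka` key of `orderer.yaml`; below is a hedged excerpt modeled on the v1.0-era sample config (check your own copy for the authoritative defaults):

```
Kafka:
  Retry:
    ShortInterval: 5s      # retry every 5s...
    ShortTotal: 10m        # ...for the first 10 minutes
    LongInterval: 5m       # then every 5m...
    LongTotal: 12h         # ...for up to 12 hours total
    NetworkTimeouts:
      DialTimeout: 10s
      ReadTimeout: 10s
      WriteTimeout: 10s
  Verbose: false           # set true to log the sarama client's output
```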
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=KdeJ3MpgCCT3MGLpQ) @jyellick Thanks. I will try a few tests and get back here. BTW, can you check the logs I posted for any indication of other failures?
@nate94305 In your logs, I see network errors with Kafka attempting to connect to zookeeper, as well as failures about the ISR set being too small. This does not support your assertion that the Kafka cluster was not faulted
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=ehEKf5pgYytgdQjQh) @jyellick If the Kafkas and zookeepers can't communicate with each other, the ISR set would be reduced too, right? No instance of Kafka/orderer/ZK was down; they just couldn't talk to each other due to network failure. So, how can we restore the kafka cluster service? Or could the solution be to have two kafkas sit on the same server, so that the ISR set can't be reduced below two?
In any case, is there a way to do disaster recovery? There must be a way..... for any kind of system, as long as the data on the disk is there.
You may find numerous resources on the internet around repairing a crashed Kafka cluster
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=9bsJsZxQwNR3HM8jG) @jyellick not really.. I used search terms like "disaster recovery of kafka" but couldn't find any useful information.
Could anyone give a reference document for disaster recovery processes? We will test the scenarios and give feedback over here.
I ran the example "https://github.com/hyperledger/fabric/tree/master/examples/e2e_cli" by starting network_setup.sh. It failed to execute the command "peer channel fetch", and I got these errors from the orderer logs:
```
2017-10-27 05:36:01.887 UTC [orderer/main] Deliver -> DEBU 11c Starting new Deliver handler
2017-10-27 05:36:01.887 UTC [orderer/common/deliver] Handle -> DEBU 11d Starting new deliver loop
2017-10-27 05:36:01.888 UTC [orderer/common/deliver] Handle -> DEBU 11e Attempting to read seek info message
2017-10-27 05:36:01.889 UTC [orderer/common/deliver] Handle -> WARN 11f [channel: testchainid] Rejecting deliver request because of consenter error
2017-10-27 05:36:01.889 UTC [orderer/main] func1 -> DEBU 120 Closing Deliver stream
2017-10-27 05:36:04.963 UTC [orderer/main] Deliver -> DEBU 121 Starting new Deliver handler
2017-10-27 05:36:04.963 UTC [orderer/common/deliver] Handle -> DEBU 122 Starting new deliver loop
2017-10-27 05:36:04.963 UTC [orderer/common/deliver] Handle -> DEBU 123 Attempting to read seek info message
2017-10-27 05:36:04.964 UTC [orderer/common/deliver] Handle -> WARN 124 [channel: testchainid] Rejecting deliver request because of consenter error
2017-10-27 05:36:04.964 UTC [orderer/main] func1 -> DEBU 125 Closing Deliver stream
2017-10-27 05:36:08.049 UTC [orderer/main] Deliver -> DEBU 126 Starting new Deliver handler
2017-10-27 05:36:08.049 UTC [orderer/common/deliver] Handle -> DEBU 127 Starting new deliver loop
2017-10-27 05:36:08.049 UTC [orderer/common/deliver] Handle -> DEBU 128 Attempting to read seek info message
2017-10-27 05:36:08.049 UTC [orderer/common/deliver] Handle -> WARN 129 [channel: testchainid] Rejecting deliver request because of consenter error
2017-10-27 05:36:08.050 UTC [orderer/main] func1 -> DEBU 12a Closing Deliver stream
[sarama] 2017/10/27 05:36:10.636650 broker.go:96: Failed to connect to broker kafka2:9092: dial tcp: i/o timeout
[sarama] 2017/10/27 05:36:10.638179 client.go:620: client/metadata got error from broker while fetching metadata: dial tcp: i/o timeout
[sarama] 2017/10/27 05:36:10.638308 client.go:626: client/metadata no available broker to send metadata request to
[sarama] 2017/10/27 05:36:10.638368 client.go:428: client/brokers resurrecting 4 dead seed brokers
[sarama] 2017/10/27 05:36:10.638418 client.go:590: client/metadata retrying after 250ms... (3 attempts remaining)
[sarama] 2017/10/27 05:36:10.889141 config.go:329: ClientID is the default of 'sarama', you should consider setting it to something application-specific.
[sarama] 2017/10/27 05:36:10.889359 client.go:601: client/metadata fetching metadata for all topics from broker kafka3:9092
```
[channel: testchainid] Rejecting deliver request because of consenter error
who can help me?
@srongzhe I would suspect it's a kafka connection issue. Could you turn on the sarama log and post the orderer log? pls do not post it directly here, but use a tool like gist or pastebin/hastebin, thx.
@guoger Thank you very much! How do I turn on the sarama log? Does the sarama log belong to the orderer or to kafka? I am new to rocket chat. I failed to access gist, because I am in China, behind the firewall.
@guoger Isn't my previous post "[sarama] 2017/10/27 ......." the sarama log?
```
[sarama] 2017/10/27 05:36:10.636650 broker.go:96: Failed to connect to broker kafka2:9092: dial tcp: i/o timeout
[sarama] 2017/10/27 05:36:10.638179 client.go:620: client/metadata got error from broker while fetching metadata: dial tcp: i/o timeout
```
I am running the master branch, and noticed that the orderer reported the warning message shown below. What does this mean? How can I disable compatibility mode?
```[orderer/consensus/kafka] processRegular -> WARN 031 [channel: testchainid] This orderer is running in compatibility mode```
@srongzhe Sorry for the late reply. I'm seeing this error in the log you posted. I guess it's a kafka connection issue. Could you try to verify that first?
```
[sarama] 2017/10/27 05:36:10.638179 client.go:620: client/metadata got error from broker while fetching metadata: dial tcp: i/o timeout
```
Also, hastebin.com should work in china
@yoheiueda If you are running *all orderers* from master branch, that warning is safe to ignore. However, if you are running a mixed cluster, where v1.0.x and v1.1 orderers (master branch in your case) co-exist, you want to set orderer capability `v1.1` in your `configtx.yaml` to `false`:
```
# Orderer capabilities apply only to the orderers, and may be safely
# manipulated without concern for upgrading peers. Set the value of the
# capability to true to require it.
Orderer: &OrdererCapabilities
    # V1.1 for Orderer is a catchall flag for behavior which has been
    # determined to be desired for all orderers running v1.0.x, but the
    # modification of which would cause incompatibilities. Users should
    # leave this flag set to true.
    "V1.1": true
```
@guoger I am using a single OSN, but my configtx.yaml is for v1.0, so the capabilities setting is not included. I'd like to try some performance enhancements in orderer v1.1, so I am wondering whether I can benefit from the performance improvements even in compatibility mode.
I tried to add the capabilities setting to my configtx.yaml, but I got the following error when I tried to join a new channel. ```[orderer/common/deliver] deliverBlocks -> ERRO 07a [channel: mychannel] Error reading from channel, cause was: NOT_FOUND```
Please ignore my previous post. I forgot to clean up some files in my environment.
@guoger Thx, I will verify the connection between orderer and kafka today. I will first launch a container with a ping tool to test kafka0:9092.
@yoheiueda regarding performance improvements in compatibility mode, I'm afraid not..
@srongzhe you probably wanna try the steps here: http://kafka.apache.org/quickstart
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=34BM3SoYuA4MXg22j) @guoger I see. I am still trying to enable V1.1 capabilities, but ` peer channel join` shows the following error message. ```Error: proposal failed (err: rpc error: code = Unknown desc = chaincode error (status: 500, message: Error deserializing key Capabilities for group /Channel: Unexpected key Capabilities))```
@jyellick i recently read the configtxgen tool source code, and i found some code that should be modified,
1,
```
// ChannelRestrictionsValue returns the config definition for the orderer channel restrictions.
// It is a value for the /Channel/Orderer group.
func KafkaBrokersValue(brokers []string) *StandardConfigValue {
	return &StandardConfigValue{
		key: KafkaBrokersKey,
		value: &ab.KafkaBrokers{
			Brokers: brokers,
		},
	}
}
```
the comment should be modified
2,
@asaningmaxchain: That is a good catch. Could you submit a CR modifying this comment when you get a sec? Feel free to add Jason or me as reviewers.
@kostas @jyellick ok
```
// GenesisBlock produces a genesis block for the default test chain id
func (bs *Bootstrapper) GenesisBlock() *cb.Block {
	block, err := genesis.NewFactoryImpl(bs.channelGroup).Block(genesisconfig.TestChainID)
	if err != nil {
		logger.Panicf("Error creating genesis block from channel group: %s", err)
	}
	return block
}

// GenesisBlockForChannel produces a genesis block for a given channel ID
func (bs *Bootstrapper) GenesisBlockForChannel(channelID string) *cb.Block {
	block, err := genesis.NewFactoryImpl(bs.channelGroup).Block(channelID)
	if err != nil {
		logger.Panicf("Error creating genesis block from channel group: %s", err)
	}
	return block
}
```
i think it's a good choice to modify it like below
```
// GenesisBlock produces a genesis block for the default test chain id
func (bs *Bootstrapper) GenesisBlock() *cb.Block {
	return bs.GenesisBlockForChannel(genesisconfig.TestChainID)
}

// GenesisBlockForChannel produces a genesis block for a given channel ID
func (bs *Bootstrapper) GenesisBlockForChannel(channelID string) *cb.Block {
	block, err := genesis.NewFactoryImpl(bs.channelGroup).Block(channelID)
	if err != nil {
		logger.Panicf("Error creating genesis block from channel group: %s", err)
	}
	return block
}
```
https://jira.hyperledger.org/browse/FAB-6800 @kostas @jyellick
@asaningmaxchain: Responded on that JIRA link.
@kostas i got it
Has joined the channel.
Hi all, I'm looking at consensus algorithms (we are currently just running noop)
I've read these (thanks @jyellick ):
https://github.com/diegomasini/hyperledger-fabric/blob/master/docs/FAQ/consensus_FAQ.md
https://hyperledger-fabric.readthedocs.io/en/latest/kafka.html#example
https://docs.google.com/document/d/1vNMaM7XhOlu9tB_10dKnlrhy5d7b1u8lSY8a-kVjCO4/edit?usp=sharing
Is there any more detail (or a link to the github repo) for PBFT or Sieve for fabric?
@JohnWhitton PBFT and Sieve were consensus options available in v0.5/v0.6. With Fabric v1.0, the first production-ready consensus algorithm is based on the CFT messaging system Kafka. We are actively working towards returning some PBFT-like consensus algorithm to fabric, but it is not currently ready.
Thanks @jyellick so this sounds like the best read then https://docs.google.com/document/d/1vNMaM7XhOlu9tB_10dKnlrhy5d7b1u8lSY8a-kVjCO4/edit?usp=sharing
One more question: if I have multiple orderers (or OSNs), is the main benefit just redundancy, or does having one orderer per organization add validity to the consensus model? (i.e. by each organization running an orderer, is it similar to, say, multiple miners in Ethereum, in that each organization's orderer (like each Ethereum miner) is contributing blocks to the ledger, thus safeguarding against one orderer being a malicious player?)
or is that really covered by endorsement policies and having multiple peers and validators?
Under any CFT scheme, multiple nodes just give you redundancy.
Many warnings occurred when I created a channel, and then there was an error when I tried to join a peer to the channel. Not sure whether they are related, or how to fix them. The error and the warning are as follows:
```info: [PTE 0 main]: [joinChannel:org4] Successfully enrolled orderer 'admin'
error: [Orderer.js]: sendDeliver - rejecting - status:NOT_FOUND
error: [PTE 0 main]: Error: Invalid results returned ::NOT_FOUND
at ClientDuplexStream.
```
`orderer0.example.com | 2017-10-31 00:07:15.281 UTC [cauthdsl] deduplicate -> WARN 00f De-duplicating identity 0a0a4f7264657265724......d2d2d2d2d0a at index 3 in signature set`
By turning on the DEBUG info, I saw the following from the orderer log
```2017-10-31 00:44:03.791 UTC [orderer/common/server] initializeSecureServerConfig -> INFO 11a Starting orderer with TLS enabled
2017-10-31 00:44:03.791 UTC [orderer/common/server] Start -> INFO 11b Beginning to serve requests
2017-10-31 00:44:47.025 UTC [orderer/common/server] Deliver -> DEBU 11c Starting new Deliver handler
2017-10-31 00:44:47.025 UTC [orderer/common/deliver] Handle -> DEBU 11d Starting new deliver loop for 172.19.0.22:52074
2017-10-31 00:44:47.025 UTC [orderer/common/deliver] Handle -> DEBU 11e Attempting to read seek info message from 172.19.0.22:52074
2017-10-31 00:44:47.025 UTC [orderer/common/deliver] Handle -> WARN 11f Error reading from 172.19.0.22:52074: rpc error: code = Canceled desc = context canceled
2017-10-31 00:44:47.025 UTC [orderer/common/server] func1 -> DEBU 120 Closing Deliver stream
2017-10-31 00:44:50.922 UTC [orderer/common/server] Deliver -> DEBU 121 Starting new Deliver handler
2017-10-31 00:44:50.922 UTC [orderer/common/deliver] Handle -> DEBU 122 Starting new deliver loop for 9.47.152.36:53952
2017-10-31 00:44:50.922 UTC [orderer/common/deliver] Handle -> DEBU 123 Attempting to read seek info message from 9.47.152.36:53952
2017-10-31 00:44:50.922 UTC [orderer/common/deliver] deliverBlocks -> DEBU 124 Rejecting deliver for 9.47.152.36:53952 because channel testorgschannel1 not found
2017-10-31 00:44:50.923 UTC [orderer/common/deliver] Handle -> DEBU 125 Waiting for new SeekInfo from 9.47.152.36:53952
2017-10-31 00:44:50.923 UTC [orderer/common/deliver] Handle -> DEBU 126 Attempting to read seek info message from 9.47.152.36:53952
2017-10-31 00:44:50.925 UTC [orderer/common/deliver] Handle -> DEBU 127 Received EOF from 9.47.152.36:53952, hangup
2017-10-31 00:44:50.925 UTC [orderer/common/server] func1 -> DEBU 128 Closing Deliver stream```
@qizhang can you disable the TLS?
@asaningmaxchain why would disabling TLS solve the problem?
@qizhang did you try it?
not yet :-)
I created a network with 1 channel, 8 orgs, and each org has 2 peers. Joining the peers from org1 and org2 to the channel worked well, the error occurred when I tried to join peers in org3
i know you use the PTE tool to test the performance
yes
@jyellick can you take a look at `common/channelconfig/channel.go`? i think the method named `validateOrdererAddresses` should be modified
https://gist.github.com/asaningmaxchain/ade9b07549041600f84694d582d97cc1
@qizhang Are you either running `peer channel signconfigtx` or using a version of `configtxlator` which is older than about a week?
@asaningmaxchain Because channel config validation must be deterministic, it is not safe to change the `validateOrdererAddresses` function like that, the orderers must all agree on validation rules.
@jyellick ok
Hi all! Is there a chance to modify the genesis block on the orderer via the SDK?
Hi all !!
I want to know how to add a new Organisation to an existing channel.
I have an existing fully-working 2 Organisation setup ( according to the Hyperledger Fabric documentation )
Now, if I want to add a new Organisation,
What are the changes I need to do ?
I did these changes :
1. Created crypto-materials using cryptogen
2. Made a new genesis.block using configtx (with changes like, Orderer Genesis containing all 3 Orgs now)
3. Replaced this genesis block with the Old genesis Block of Orderer
4. Shared the mychannel.block with new Org-Peers
5. Tried to make a query, but it FAILED!
@UtkarshSingh http://hyperledger-fabric.readthedocs.io/en/latest/configtxlator.html
@Vadim I have gone through that as well
But, instead of making modifications in the genesis block, can't we make a new genesis block and swap it in for the older genesis block (present in the orderer)?
no, as you have a blockchain and you cannot modify existing blocks
How are these 2 methods different?
Why can't I change the genesis block?
(On replacing the block, what are the things that get affected?)
@UtkarshSingh as @Vadim points out, the entire purpose of blockchain is to prevent the modification of older blocks. If you were to modify the genesis block, you are in effect creating a new blockchain, and you would lose all data.
You may reconfigure the blockchain, which will generate a new configuration block. The genesis block is the first configuration block.
@Ratnakar and @rohitadivi are putting together a `fabric-samples` example of network membership reconfiguration which will hopefully be available soon
@UtkarshSingh we are planning to submit a sample with documentation by the end of this week.
for now you can refer to the steps here https://github.com/asararatnakar/dynamic_add_org3/blob/master/scripts/script.sh#L250-L345
Hi, does anyone know how to configure the maximum message size in a Fabric network? We are trying to send somewhat large data to a Fabric network. I understand that a Fabric network uses the gRPC protocol, which has a default limit of 4MB. https://godoc.org/google.golang.org/grpc#MaxRecvMsgSize Is this value configurable in Peers and Orderers? This question is related to this thread about fabric-sdk-java: https://chat.hyperledger.org/channel/fabric-sdk-java?msg=MKCom94jBRewXX4j4
I took a look at core.yaml sample but I could not find it. https://github.com/hyperledger/fabric/blob/v1.0.1/examples/cluster/config/core.yaml
I have set Orderer.BatchSize.AbsoluteMaxSize: 99MB in configtx.yaml to generate channel artifacts.
I also set Kafka as follows:
- KAFKA_MESSAGE_MAX_BYTES=103809024 # 99 * 1024 * 1024 B
- KAFKA_REPLICA_FETCH_MAX_BYTES=103809024 # 99 * 1024 * 1024 B
@ryokawajp https://github.com/hyperledger/fabric/blob/release/core/comm/config.go#L22-L24 are the fabric defaults. These are not exposed through the configuration and you would need to tweak this manually. But the gRPC streams should be accepting up to 100 MB messages by default. I suspect that your problem is that your client is limiting the maximum send message size, and you will need to modify this value in your client as well. From the conversation snippet you pasted, it appears you are setting the maximum inbound message size, but I would expect a corresponding outbound message size which would need to be bumped as well.
https://github.com/grpc/grpc-java/issues/2563 appears to discuss the default 4MB limit on gRPC's java implementation and has some discussion on how others have worked around this default
(That issue I found linked to from here: https://github.com/grpc/grpc-java/blob/master/core/src/main/java/io/grpc/CallOptions.java#L347-L356 )
Hi everybody, doing a network update right now i get the following error ```orderer0.test.com | 2017-10-31 22:40:07.863 UTC [msp] SatisfiesPrincipal -> DEBU 310 Checking if identity satisfies ADMIN role for OrdererTestMSP
orderer0.test.com | 2017-10-31 22:40:07.863 UTC [cauthdsl] func2 -> DEBU 311 0xc420284690 principal matched by identity 0
orderer0.test.com | 2017-10-31 22:40:07.863 UTC [msp/identity] Verify -> DEBU 312 Verify: digest = 00000000 a2 fe 89 3e c3 49 14 64 ae 67 34 f4 bf 25 55 0b |...>.I.d.g4..%U.|
orderer0.test.com | 00000010 77 2a 6f 11 ce b6 8f 4f a0 de 80 75 06 58 05 eb |w*o....O...u.X..|
orderer0.test.com | 2017-10-31 22:40:07.863 UTC [msp/identity] Verify -> DEBU 313 Verify: sig = 00000000 30 44 02 20 2d cf 85 09 a7 b9 0e 11 aa 17 e7 73 |0D. -..........s|
orderer0.test.com | 00000010 4b 78 ad c4 27 d9 c0 6b 25 a4 b0 2f 28 71 8d d5 |Kx..'..k%../(q..|
orderer0.test.com | 00000020 45 52 0e bd 02 20 37 14 ee 52 dc cd 47 a3 ff 6a |ER... 7..R..G..j|
orderer0.test.com | 00000030 26 cd d2 a9 16 67 68 c1 e3 b7 88 01 bb 1e 17 6a |&....gh........j|
orderer0.test.com | 00000040 ae 1f 83 19 52 c2 |....R.|
orderer0.test.com | 2017-10-31 22:40:07.863 UTC [cauthdsl] func2 -> DEBU 314 0xc420284690 signature for identity 0 is invalid: The signature is invalid
orderer0.test.com | 2017-10-31 22:40:07.864 UTC [cauthdsl] func2 -> DEBU 315 0xc420284690 principal evaluation fails
orderer0.test.com | 2017-10-31 22:40:07.864 UTC [cauthdsl] func1 -> DEBU 316 0xc420284690 gate 1509489607862248167 evaluation fails
orderer0.test.com | 2017-10-31 22:40:07.864 UTC [orderer/common/broadcast] Handle -> WARN 317 Rejecting CONFIG_UPDATE because: Error authorizing update: Error validating DeltaSet: Policy for [Groups] /Channel/Consortiums/TestConsortium not satisfied: Failed to reach implicit threshold of 1 sub-policies, required 1 remaining```
the final transaction is signed with the ordereradmin priv key - how can the signature then be wrong?
@david_dornseifer is this the channel config admin(i.e. registered as an admin within the MSP config as part of the orderer system channel)? that is different than the orderer admin (which is in the local msp config)
@david_dornseifer there is a difference between an orderer admin, and an orderer config admin. The former is relative to a specific node and is denoted within the local MSP config (i.e. on the file system of the node). The orderer config admin is defined within the configuration block of the orderer system channel. This is the identity that you will need to use wrt to changing configuration of the orderer (in this case, creation or modification of a consortium).
@jeffgarratt thx for the fast answer - I'm using the OrdererTestMSP user as the signing identity - that one is also given in the config/admins section of the old genesis.block
@jeffgarratt do you know if there is an example implementation somewhere not using a peer to inject the update but the sdk?
@jyellick Thank you so much. So the default max message size in Fabric is large enough in our case. I will read through the links and configure fabric-sdk-java.
@david_dornseifer Could you make sure that you are using the very latest version of `configtxlator` from master? There is a bug in older versions which can create a corrupted signature in your channel update transaction.
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=8pvDQqyub6WctzLA4) @jyellick that means that the server sets its max message size to 100 MB and the client should also set its max message size, otherwise a large message can't be sent?
Correct @asaningmaxchain, the server and the client each set their maximum message sizes when negotiating the gRPC stream. Only messages satisfying the intersection of these restrictions will be allowed.
@jyellick thx
BTW, I want to know how the peer pulls messages from the orderer. The orderer provides Broadcast/Deliver; the client sends a tx to a peer for endorsing, and then sends the endorsed tx to the orderer via the Broadcast service. But how does the peer get the ordered block?
Does the peer pull blocks from the orderer periodically, or does the orderer push blocks to the peer?
If the peer is a 'leader', it connects to the ordering service `Deliver` service and requests blocks indefinitely. Once the orderer has delivered all existing blocks, the peer holds the network connection open, waiting to receive the next block. If the peer is not a leader, then it receives blocks via gossip
@jyellick ok. Another question: I don't know when the orderer should accept the type `HeaderType_ORDERER_TRANSACTION`, under what condition should it be used?
@asaningmaxchain: Users cannot send transactions of this type to the ordering service; [as the proto definition suggests](https://github.com/hyperledger/fabric/blob/master/protos/common/common.proto#L43), it is used internally by the ordering service for transactions that are bound for the system channel. For instance, consider a configuration update transaction that calls for the creation of a new channel; this transaction needs to be ordered on the system channel. It is therefore [wrapped into a message](https://github.com/hyperledger/fabric/blob/master/orderer/common/msgprocessor/systemchannel.go#L113) of type `ORDERER_TRANSACTION` and gets pushed by the receiving OSN to that channel.
@kostas i don't understand. I'll tell you what I think: when the orderer bootstraps, it needs to load the genesis block (which defines each component value and the policies), and then the user can send a `config` message to the orderer to build a new channel. So once the new channel is built, shouldn't the config msg generate a block?
(@asaningmaxchain: As we have asked you in the past, can you please not break each message into 5 lines? I'll edit the messages above for you, but please stop doing this.)
@kostas ok, i am sorry
@asaningmaxchain: So in your quote above, I'd note that the user actually sends a `CONFIG_UPDATE` message (not a `CONFIG`), other than that I'd say it more-or-less sounds correct.
Not sure what this has to do with your original question and my answer though?
@kostas i'll type it again: when the orderer bootstraps, it needs to load the genesis block (which defines each component value and the policies), and then the user can send a `CONFIG_UPDATE` to build a new channel, so the orderer should generate a block with that one message?
Correct. User sends a configuration update calling for the creation of a new channel, assuming all the checks go through, this transaction is transformed into a configuration envelope, which is then wrapped into an `ORDERER_TRANSACTION`, which ends up in the system channel, on a block of its own.
Since you are familiarized with the codebase, this fragment may be of use: https://github.com/hyperledger/fabric/blob/master/orderer/common/msgprocessor/systemchannel.go#L94..L113
@kostas i'll take a look, thx
(When you get a chance, perhaps take a look at this as well? https://gerrit.hyperledger.org/r/c/14979/)
@kostas Not Found, please check: "The page you requested was not found, or you do not have permission to view this page."
It's because I'm using the new Gerrit UI, try this URL: https://gerrit.hyperledger.org/r/#/c/14979/
@kostas wait a moment, i'll fix it now
@kostas please refresh the previous link, i pushed it
Has joined the channel.
@jyellick thx - configtxlator from 1.0.4 works :)
Hi All, I have created a network with Kafka as consensus, and when I try to create a channel through the CLI I am getting the below error; can you please help me fix the issue?
```
/opt/gopath/src/github.com/hyperledger/fabric/peer# peer channel create -o orderer.example.com:7050 -c $CHANNEL_NAME -f ./channel-artifacts/channel.tx --tls $CORE_PEER_TLS_ENABLED --cafile /opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/ordererOrganizations/example.com/orderers/orderer.example.com/msp/tlscacerts/tlsca.example.com-cert.pem --timeout 60
2017-11-02 07:33:57.322 UTC [msp] GetLocalMSP -> DEBU 001 Returning existing local MSP
2017-11-02 07:33:57.322 UTC [msp] GetDefaultSigningIdentity -> DEBU 002 Obtaining default signing identity
2017-11-02 07:33:57.325 UTC [channelCmd] InitCmdFactory -> INFO 003 Endorser and orderer connections initialized
2017-11-02 07:33:57.325 UTC [msp] GetLocalMSP -> DEBU 004 Returning existing local MSP
2017-11-02 07:33:57.325 UTC [msp] GetDefaultSigningIdentity -> DEBU 005 Obtaining default signing identity
2017-11-02 07:33:57.325 UTC [msp] GetLocalMSP -> DEBU 006 Returning existing local MSP
2017-11-02 07:33:57.326 UTC [msp] GetDefaultSigningIdentity -> DEBU 007 Obtaining default signing identity
2017-11-02 07:33:57.326 UTC [msp/identity] Sign -> DEBU 008 Sign: plaintext: 0AA1060A0C4B50726F76696465724D53...725061796572436F6E736F727469756D
2017-11-02 07:33:57.326 UTC [msp/identity] Sign -> DEBU 009 Sign: digest: 47DB2C48F622B090658D6A61AD32C11BCAFDE416731171AECB8432110ABFB311
2017-11-02 07:33:57.326 UTC [msp] GetLocalMSP -> DEBU 00a Returning existing local MSP
2017-11-02 07:33:57.326 UTC [msp] GetDefaultSigningIdentity -> DEBU 00b Obtaining default signing identity
2017-11-02 07:33:57.326 UTC [msp] GetLocalMSP -> DEBU 00c Returning existing local MSP
2017-11-02 07:33:57.326 UTC [msp] GetDefaultSigningIdentity -> DEBU 00d Obtaining default signing identity
2017-11-02 07:33:57.326 UTC [msp/identity] Sign -> DEBU 00e Sign: plaintext: 0AD8060A1508021A0608E594EBCF0522...169E4E32192A5347B58A2C5EBF8BD3CD
2017-11-02 07:33:57.326 UTC [msp/identity] Sign -> DEBU 00f Sign: digest: 5383A0ECFF4668D7CEA57EFCC6AE2F8D55B3AB21C9B4190C7E1A54CAC57E8166
Error: Got unexpected status: SERVICE_UNAVAILABLE
```
In the orderer log I see the below message:
```
2017-11-02 07:33:57.384 UTC [orderer/kafka] Enqueue -> DEBU 877 [channel: testchainid] Enqueueing envelope...
2017-11-02 07:33:57.384 UTC [orderer/kafka] Enqueue -> WARN 878 [channel: testchainid] Will not enqueue, consenter for this channel hasn't started yet
2017-11-02 07:33:57.384 UTC [orderer/main] func1 -> DEBU 879 Closing Broadcast stream
2017-11-02 07:33:57.386 UTC [orderer/common/deliver] Handle -> WARN 87a Error reading from stream: rpc error: code = Canceled desc = context canceled
2017-11-02 07:33:57.386 UTC [orderer/main] func1 -> DEBU 87b Closing Deliver stream
```
Hi. Since v0.6 there is no PBFT consensus available. Is it in work already? Are there any plans to include it in some version? I would need info about when it will be available so any help appreciated.
yes, 0.6 is working with PBFT
1.0 has no PBFT implemented yet
Error: Got unexpected status: SERVICE_UNAVAILABLE indicates the consensus plugin may not be working correctly
@jyellick As in the blockchain, if we make any changes in a block, then due to hash-chaining, the blocks after that block will be corrupted.
So, on configuring the first block (genesis block), shouldn't the whole blockchain get affected or corrupted?
@UtkarshSingh you can send config-updates and this won't affect the very first genesis block, but rather apply updates on top of it. The resulting config will be a combination of the very first block and chronologically applied config updates on top of it. This is similar to how source code version control systems like git work, as you don't modify existing commits but apply new commits with changes to get the final state of your source code.
Hi @jyellick May I ask a question about SBFT? Though it's removed already, I found the instance maintains a message queue for every replica, and each time only one batch can be processed. I wonder if this can be parallelized to improve efficiency; maybe the previous block hash can be filled in at the deliver stage, like in 0.6?
can anyone provide any steps to change consensus from solo to kafka?
```
services:
  orderer.example.com:
    container_name: orderer.example.com
    image: hyperledger/fabric-orderer
    environment:
      - ORDERER_GENERAL_LOGLEVEL=debug
      - ORDERER_GENERAL_LISTENADDRESS=0.0.0.0
      - ORDERER_GENERAL_GENESISMETHOD=file
      - ORDERER_GENERAL_GENESISFILE=/var/hyperledger/orderer/orderer.genesis.block
      - ORDERER_GENERAL_LOCALMSPID=OrdererMSP
      - ORDERER_GENERAL_LOCALMSPDIR=/var/hyperledger/orderer/msp
      # enabled TLS
      - ORDERER_GENERAL_TLS_ENABLED=true
      - ORDERER_GENERAL_TLS_PRIVATEKEY=/var/hyperledger/orderer/tls/server.key
      - ORDERER_GENERAL_TLS_CERTIFICATE=/var/hyperledger/orderer/tls/server.crt
      - ORDERER_GENERAL_TLS_ROOTCAS=[/var/hyperledger/orderer/tls/ca.crt]
    working_dir: /opt/gopath/src/github.com/hyperledger/fabric
    command: orderer
    volumes:
      - ../channel-artifacts/genesis.block:/var/hyperledger/orderer/orderer.genesis.block
      - ../crypto-config/ordererOrganizations/example.com/orderers/orderer.example.com/msp:/var/hyperledger/orderer/msp
      - ../crypto-config/ordererOrganizations/example.com/orderers/orderer.example.com/tls/:/var/hyperledger/orderer/tls
    ports:
      - 7050:7050
  orderer1.example.com:
    container_name: orderer1.example.com
    image: hyperledger/fabric-orderer
    environment:
      - ORDERER_GENERAL_LOGLEVEL=debug
      - ORDERER_GENERAL_LISTENADDRESS=0.0.0.0
      - ORDERER_GENERAL_GENESISMETHOD=file
      - ORDERER_GENERAL_GENESISFILE=/var/hyperledger/orderer/orderer.genesis.block
      - ORDERER_GENERAL_LOCALMSPID=OrdererMSP
      - ORDERER_GENERAL_LOCALMSPDIR=/var/hyperledger/orderer/msp
      # enabled TLS
      - ORDERER_GENERAL_TLS_ENABLED=true
      - ORDERER_GENERAL_TLS_PRIVATEKEY=/var/hyperledger/orderer/tls/server.key
      - ORDERER_GENERAL_TLS_CERTIFICATE=/var/hyperledger/orderer/tls/server.crt
      - ORDERER_GENERAL_TLS_ROOTCAS=[/var/hyperledger/orderer/tls/ca.crt]
    working_dir: /opt/gopath/src/github.com/hyperledger/fabric
    command: orderer1
    volumes:
      - ../channel-artifacts/genesis.block:/var/hyperledger/orderer/orderer.genesis.block
      - ../crypto-config/ordererOrganizations/example.com/orderers/orderer1.example.com/msp:/var/hyperledger/orderer/msp
      - ../crypto-config/ordererOrganizations/example.com/orderers/orderer1.example.com/tls/:/var/hyperledger/orderer/tls
    ports:
      - 8050:8050
```
my base yaml file
@risabhsharma71: Please edit that message above. Use Hastebin for long snippets of text.
@MadhavaReddy: Your Kafka cluster hasn't been initialized yet. You're sending the channel creation transaction too early. (Please use Hastebin for those snippets of text next time.)
@SimonOberzan: We'll be resuming work on a BFT consensus plugin soon. I'd expect it around the 1.3 release or so, but nothing's set in stone yet, and timetables change.
@MadhavaReddy i have actually changed the configtx and cryptoconfig.yaml
but i don't know what to do next?
ohh sure @MadhavaReddy @kostas
> Error: Got unexpected status: SERVICE_UNAVAILABLE indicates the consensus plugin may not be working correctly
@Glen: You don't provide enough background info about your setup but this most likely indicates that your Kafka cluster is inaccessible.
@UtkarshSingh: Not sure I follow. Can you please rephrase? What is the concern here?
@Glen: Regarding your one batch at a time comments, see: https://jira.hyperledger.org/browse/FAB-897
@risabhsharma71: This should generally be posted in #fabric -- but before doing that I'd look at your compose file, starting from line 10 to line 34 as the error log suggests.
https://hastebin.com/faxegeqivu.cs
@kostas this is the compose file
@risabhsharma71: http://www.yamllint.com
valid yaml it says
@kostas i've seen it, and i'm not too clear about the FAB. Did you implement the pipelining, or if not, why not? thanks
We didn't implement it because we haven't resumed work on BFT yet.
so that's an optimized solution you have taken into account, and it's feasible, right?
Too early to tell. Let's deliver the unoptimized version first.
I think if it adopts this, the batches will be modified to be independent of each other; one-batch-one-checkpoint may also need to be adjusted, as now the batches are sequential and each depends on the last batch.
@kostas also SBFT seems to have no block synchronization mechanism, is that a problem?
@kostas could you tell me if my compose file is wrong?
Yes. It is wrong.
where? could you tell me please
Did you try the site I pointed you to?
yes it said valid yaml
Screen Shot 2017-11-02 at 07.30.37.png
I don't think so?
yes yes my mistake i pasted other yaml
thanks @kostas
one more noob question..can you tell me how to add hastebin snippets here
You did it already no? https://chat.hyperledger.org/channel/fabric-orderer?msg=2AXNgtyeYuBndAc3Z
it just shows me a link..i guess you could see the snippet
@kostas do you think we need to implement the block synchronization if sbft is used, if not why?
Yes, the link is what we're after so you're good.
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=k6qPYy5t2hw7X3QFK) @Vadim So, this update will act as a transaction, and a newly created block will get appended to the blockchain. From now onwards, all transactions will refer to this updated_genesis_block.
GenesisBlock<-Block2<-Block3 (all txns refer to GenesisBlock for validation)
After making changes over the GenesisBlock, UpdatedGenesisBlock will get appended to the blockchain:
GenesisBlock<-Block2<-Block3<-UpdatedGenesisBlock (all txns now refer to UpdatedGenesisBlock for validation)
Am I right?
@UtkarshSingh UpdatedGenesisBlock - it's not the full block, it's only updates
@Vadim won't the updates act as a transaction?
it is a transaction of type CONFIG_UPDATE
http://hyperledger-fabric.readthedocs.io/en/latest/configtx.html?highlight=config_update#configuration-updates
How many types of transactions are there in Hyperledger Fabric ?
And, how many of these will be hashed ?
@UtkarshSingh I guess message types are listed here: https://github.com/hyperledger/fabric/blob/release/protos/common/common.proto#L38-L46, the second question I did not understand, be more specific.
My second question was:
As in blockchain, while creating any block, all txns + the prev block's hash + some information get hashed.
In Hyperledger Fabric, do all types of txns get hashed?
I see no reason why some transactions in the block would not be hashed
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=ciG53gMqfTg3qocJx) @Vadim
GenesisBlock<-Block2<-Block3<-Block4(having a CONFIG_UPDATE txn)
Is that right ?
right
Is the GenesisBlock just used to validate the txns?
At what steps does a peer or a txn need to check with the GenesisBlock?
(I just found that, at the time of Instantiation, the GenesisBlock is checked)
@UtkarshSingh the genesis block contains the whole blockchain config, such as orderer addresses and configurations, participating orgs, their certs, their admins, anchor peers and other things. Without this info the network will not function properly.
Yes, mainly for validation purposes
My question is :
At what steps does a peer or a txn need to check with the GenesisBlock?
(I just found that, at the time of instantiation, the GenesisBlock is checked)
At the time of creation of a channel, mychannel.block was generated using the `peer channel create` command
Using this, participating peers can join the channel
mychannel.block also has the information about the organisations
So, on adding a new Org,
do we need to update mychannel.block? And will updated_mychannel.block be used by Org3's peers to join the channel?
@UtkarshSingh to update the channel you should get the latest config_update from the channel. However, subsequent joining peers will still use the genesis block (which is a config_update block itself).
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=hP9RY2otu7Lu4gm5f) @jeffgarratt
So to add a new Organisation, I need to do these things :
1. Update the GenesisBlock (by doing a CONFIG_UPDATE txn)
2. Update the channel (updating mychannel.block->updated_mychannel.block
passing updated_mychannel.block to new Org's peers to join the channel)
Right ??
@UtkarshSingh at any point in time the peer utilizes the latest config_update block information for processing (the genesis block is a config_update; it just so happens it is also the first block).
you update the latest config, which happen to also be the genesis... but you always update the latest config in a channel
so in your case.... 1) get the latest config_update from the channel 2) modify it per your desired new org (and collect sigs as needed). 3) submit the mod as a TX 4) You will currently always send the genesis block for new peers to join the channel
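A sketch of that four-step flow with the standard CLI tools (the channel name `mychannel`, the file names, and the orderer address are placeholders):

```shell
# 1. Fetch the latest config block for the channel
peer channel fetch config config_block.pb -o orderer:7050 -c mychannel

# 2. Decode it, add the new org's material to the JSON, re-encode,
#    and compute the delta between the original and modified config
configtxlator proto_decode --type common.Block --input config_block.pb --output config_block.json
configtxlator compute_update --channel_id mychannel \
  --original original_config.pb --updated modified_config.pb \
  --output config_update.pb

# 3. Wrap the update in an envelope, collect the required admin
#    signatures, and submit it as a transaction
peer channel signconfigtx -f config_update_in_envelope.pb
peer channel update -f config_update_in_envelope.pb -o orderer:7050 -c mychannel

# 4. New peers still join with the genesis block and replay the chain
#    until they reach the latest config
peer channel join -b mychannel.block
```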
Is config_update a kind of Header, present in every block ?
it is a special type of TX (which currently will always exist by itself in a block)
of which the genesis block is the first in a channel (chain)
your subsequent modification is also a config_update (just like the genesis), but will then be used by all nodes in channel for processing
however, any new join will continue to use the genesis block
and will process through the chain's blocks, and eventually get to your latest config
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=ciG53gMqfTg3qocJx)
Actually, the `CONFIG_UPDATE` is processed by the orderer into a `CONFIG`, so there is a full config block
hm, ok
and what happens to the CONFIG_UPDATE transaction?
It is embedded in the new config here: https://github.com/hyperledger/fabric/blob/master/protos/common/configtx.proto#L47
I get confused :neutral_face:
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=WcEiMLcpu3eeRWB8D) @kostas Thank you for the response. I restarted the network and all the Kafka logs say the broker is started; however, in the orderer log I see "Failed to connect to broker kafka1:9092: dial tcp: lookup kafka1 on 127.0.0.11:53: no such host" for all Kafka brokers (kafka0, 1, 2 and 3)
@MadhavaReddy: That means you're not encoding the proper addresses for the Kafka brokers in your genesis block.
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=A4DoGXJD2T2kLM2Xi) @kostas in configtx file i defined below brokers for Kafka and see same broker names in genesis block, am i missing anything? can you please guide me
```
Brokers:
  - kafka0:9092
  - kafka1:9092
  - kafka2:9092
  - kafka3:9092
```
Use Hastebin and paste here the link to the Docker Compose file you use to bring up the network.
sure
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=bYwazkhXiGSY3nnmC) @kostas please find below the link
https://hastebin.com/soneguwime.php -- docker-compose-cli.yaml
https://hastebin.com/acifohemuk.bash --peer-base.yaml
https://hastebin.com/hiqitunagu.bash -- kafka-base.yaml
https://hastebin.com/yuhewereve.cs -- docker-compose-base.yaml
Is there any recommended ratio between the number of orderer nodes, kafka nodes, and zookeeper nodes in a cluster?
@qizhang: No. It all depends on the application requirements. At a minimum 4 brokers and a 3-node ZooKeeper ensemble (ZK node counts should be odd, e.g. 3, 5, or 7).
Thanks @kostas . Broker means Kafka?
Yes.
@MadhavaReddy: Can you use Hastebin for the orderer logs as well? Use these settings: https://hyperledger-fabric.readthedocs.io/en/latest/kafka.html#debugging
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=wHGfB5Jc3rDsBHEdM) @kostas please find the orderer log
https://hastebin.com/cejaqajace.vbs
@MadhavaReddy: Have you tried running a network of peers with the solo orderer?
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=TQgAkRdK5SkedkFaF) @kostas yes, i tried with solo and its working, in fact i tested both example02 and marble with solo
@MadhavaReddy: Can you send me the output of `docker ps -a` when you get a chance? (I know you're going to be offline for a while, take your time.)
https://chat.hyperledger.org/channel/fabric-sdk-node?msg=G922jRWjssCDJAoJM
^^ This is still an SDK-related question.
^^ This is still an SDK-related question as best as I can tell.
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=vCRWNWMdaDsboAv6q) @kostas Please find below the output of docker ps -a
https://hastebin.com/ujilasatij.nginx
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=bnPEqqXhCak4RGgfa) @kostas Yes, I had posted on the SDK channel but wanted to find out whether anyone on this channel implemented retry logic with Orderer
@Ratnakar I am glad to see you in rocketchat. I run the examples that you commit on github. One example is two Orgs and solo mode, I run it very well. But another example is two Orgs and kafka mode, I always fail. The orderer logs display the orderer can’t connect to kafka. I asked many people , but they can’t handle it. The logs is below: https://hastebin.com/usofirahoc.vbs
```
[sarama] 2017/11/02 05:43:49.400087 client.go:601: client/metadata fetching metadata for all topics from broker kafka1.example.com:9092
[sarama] 2017/11/02 05:43:49.402529 broker.go:96: Failed to connect to broker kafka1.example.com:9092: dial tcp 172.19.0.12:9092: getsockopt: connection refused
[sarama] 2017/11/02 05:43:49.415586 client.go:620: client/metadata got error from broker while fetching metadata: dial tcp 172.19.0.12:9092: getsockopt: connection refused
[sarama] 2017/11/02 05:43:49.415635 config.go:329: ClientID is the default of 'sarama', you should consider setting it to something application-specific.
[sarama] 2017/11/02 05:43:49.415665 client.go:601: client/metadata fetching metadata for all topics from broker kafka3.example.com:9092
[sarama] 2017/11/02 05:43:49.416748 broker.go:96: Failed to connect to broker kafka3.example.com:9092: dial tcp 172.19.0.9:9092: getsockopt: connection refused
```
I think something may be wrong with my environment; my software versions are below:
the example is : https://github.com/hyperledger/fabric/tree/master/examples/e2e_cli
Docker version:17.06.0-ce
Docker images:
```
REPOSITORY TAG IMAGE ID CREATED SIZE
hyperledger/fabric-ccenv x86_64-1.0.2-snapshot-64c06f2 fc4c93d5794d 3 weeks ago 1.29GB
busybox latest 54511612f1c4 7 weeks ago 1.13MB
hyperledger/fabric-ccenv latest 3db26b5a2c65 8 weeks ago 1.29GB
hyperledger/fabric-ccenv x86_64-1.0.1-snapshot-3abe144 3db26b5a2c65 8 weeks ago 1.29GB
hyperledger/fabric-tools latest cab266cf7e0a 8 weeks ago 1.34GB
hyperledger/fabric-tools x86_64-1.0.1-snapshot-3abe144 cab266cf7e0a 8 weeks ago 1.34GB
hyperledger/fabric-couchdb latest e5454bd9231f 8 weeks ago 1.48GB
hyperledger/fabric-couchdb x86_64-1.0.1-snapshot-3abe144 e5454bd9231f 8 weeks ago 1.48GB
hyperledger/fabric-kafka latest 94d9ca4c6e7e 8 weeks ago 1.3GB
hyperledger/fabric-kafka x86_64-1.0.1-snapshot-3abe144 94d9ca4c6e7e 8 weeks ago 1.3GB
hyperledger/fabric-zookeeper latest 71761bc0f676 8 weeks ago 1.31GB
hyperledger/fabric-zookeeper x86_64-1.0.1-snapshot-3abe144 71761bc0f676 8 weeks ago 1.31GB
hyperledger/fabric-testenv latest a0cffbfa3b39 8 weeks ago 1.37GB
hyperledger/fabric-testenv x86_64-1.0.1-snapshot-3abe144 a0cffbfa3b39 8 weeks ago 1.37GB
hyperledger/fabric-buildenv latest 3332b2123a41 8 weeks ago 1.28GB
hyperledger/fabric-buildenv x86_64-1.0.1-snapshot-3abe144 3332b2123a41 8 weeks ago 1.28GB
hyperledger/fabric-orderer latest acaf05246e17 8 weeks ago 179MB
hyperledger/fabric-orderer x86_64-1.0.1-snapshot-3abe144 acaf05246e17 8 weeks ago 179MB
hyperledger/fabric-peer latest a09f37e850d6 8 weeks ago 182MB
hyperledger/fabric-peer x86_64-1.0.1-snapshot-3abe144 a09f37e850d6 8 weeks ago 182MB
hyperledger/fabric-javaenv latest 01ab6da6779d 8 weeks ago 1.42GB
hyperledger/fabric-javaenv x86_64-1.0.1-snapshot-3abe144 01ab6da6779d 8 weeks ago 1.42GB
hyperledger/fabric-tools x86_64-1.0.0 0403fd1c72c7 3 months ago 1.32GB
hyperledger/fabric-couchdb x86_64-1.0.0 2fbdbf3ab945 3 months ago 1.48GB
hyperledger/fabric-kafka x86_64-1.0.0 dbd3f94de4b5 3 months ago 1.3GB
hyperledger/fabric-zookeeper x86_64-1.0.0 e545dbf1c6af 3 months ago 1.31GB
hyperledger/fabric-orderer x86_64-1.0.0 e317ca5638ba 3 months ago 179MB
hyperledger/fabric-peer x86_64-1.0.0 6830dcd7b9b5 3 months ago 182MB
hyperledger/fabric-javaenv x86_64-1.0.0 8948126f0935 3 months ago 1.42GB
hyperledger/fabric-ccenv x86_64-1.0.0 7182c260a5ca 3 months ago 1.29GB
hyperledger/fabric-ca latest a15c59ecda5b 3 months ago 238MB
hyperledger/fabric-ca x86_64-1.0.0 a15c59ecda5b 3 months ago 238MB
hyperledger/fabric-baseimage x86_64-0.3.1 9f2e9ec7c527 5 months ago 1.27GB
hyperledger/fabric-baseos x86_64-0.3.1 4b0cab202084 5 months ago 157MB
```
@Ratnakar Could you tell me your environment, or point out where my problem is?
@srongzhe Did you run the sample as-is, and it's not working for you? Or did you make any modifications?
from your logs I see *kafka1.example.com:9092* instead of *kafka1:9092*
```
[sarama] 2017/11/02 05:43:49.400087 client.go:601: client/metadata fetching metadata for all topics from broker kafka1.example.com:9092
[sarama] 2017/11/02 05:43:49.402529 broker.go:96: Failed to connect to broker kafka1.example.com:9092: dial tcp 172.19.0.12:9092: getsockopt: connection refused
```
Can you share the modified docker-compose files if that is possible ?
@Ratnakar Yes, I modified it, but if I don't modify this, it gets worse. The logs from the orderer are:
is there a way I can see your modifications or can you explain what changes you made in your `configtx.yaml` ?
the logs from the orderer are below: https://hastebin.com/xolejoduyu.vbs
```
2017-10-27 05:36:01.887 UTC [orderer/main] Deliver -> DEBU 11c Starting new Deliver handler
2017-10-27 05:36:01.887 UTC [orderer/common/deliver] Handle -> DEBU 11d Starting new deliver loop
2017-10-27 05:36:01.888 UTC [orderer/common/deliver] Handle -> DEBU 11e Attempting to read seek info message
2017-10-27 05:36:01.889 UTC [orderer/common/deliver] Handle -> WARN 11f [channel: testchainid] Rejecting deliver request because of consenter error
2017-10-27 05:36:01.889 UTC [orderer/main] func1 -> DEBU 120 Closing Deliver stream
2017-10-27 05:36:04.963 UTC [orderer/main] Deliver -> DEBU 121 Starting new Deliver handler
2017-10-27 05:36:04.963 UTC [orderer/common/deliver] Handle -> DEBU 122 Starting new deliver loop
2017-10-27 05:36:04.963 UTC [orderer/common/deliver] Handle -> DEBU 123 Attempting to read seek info message
2017-10-27 05:36:04.964 UTC [orderer/common/deliver] Handle -> WARN 124 [channel: testchainid] Rejecting deliver request because of consenter error
2017-10-27 05:36:04.964 UTC [orderer/main] func1 -> DEBU 125 Closing Deliver stream
2017-10-27 05:36:08.049 UTC [orderer/main] Deliver -> DEBU 126 Starting new Deliver handler
2017-10-27 05:36:08.049 UTC [orderer/common/deliver] Handle -> DEBU 127 Starting new deliver loop
2017-10-27 05:36:08.049 UTC [orderer/common/deliver] Handle -> DEBU 128 Attempting to read seek info message
2017-10-27 05:36:08.049 UTC [orderer/common/deliver] Handle -> WARN 129 [channel: testchainid] Rejecting deliver request because of consenter error
2017-10-27 05:36:08.050 UTC [orderer/main] func1 -> DEBU 12a Closing Deliver stream
[sarama] 2017/10/27 05:36:10.636650 broker.go:96: Failed to connect to broker kafka2:9092: dial tcp: i/o timeout
[sarama] 2017/10/27 05:36:10.638179 client.go:620: client/metadata got error from broker while fetching metadata: dial tcp: i/o timeout
[sarama] 2017/10/27 05:36:10.638308 client.go:626: client/metadata no available broker to send metadata request to
[sarama] 2017/10/27 05:36:10.638368 client.go:428: client/brokers resurrecting 4 dead seed brokers
[sarama] 2017/10/27 05:36:10.638418 client.go:590: client/metadata retrying after 250ms... (3 attempts remaining)
[sarama] 2017/10/27 05:36:10.889141 config.go:329: ClientID is the default of 'sarama', you should consider setting it to something application-specific.
[sarama] 2017/10/27 05:36:10.889359 client.go:601: client/metadata fetching metadata for all topics from broker kafka3:9092
```
[channel: testchainid] Rejecting deliver request because of consenter error
I have run all kinds of examples with Kafka; the result is similar
@Ratnakar I run the e2e_cli_kafka from you, the logs of orderer is below: https://hastebin.com/norakamasi.pas
```
2017-11-03 02:03:03.502 UTC [fsblkstorage] retrieveBlockByNumber -> DEBU 0de retrieveBlockByNumber() - blockNum = [0]
2017-11-03 02:03:03.502 UTC [fsblkstorage] newBlockfileStream -> DEBU 0df newBlockfileStream(): filePath=[/var/hyperledger/production/orderer/chains/testchainid/blockfile_000000], startOffset=[0]
2017-11-03 02:03:03.502 UTC [fsblkstorage] nextBlockBytesAndPlacementInfo -> DEBU 0e0 Remaining bytes=[6666], Going to peek [8] bytes
2017-11-03 02:03:03.502 UTC [fsblkstorage] nextBlockBytesAndPlacementInfo -> DEBU 0e1 Returning blockbytes - length=[6664], placementInfo={fileNum=[0], startOffset=[0], bytesOffset=[2]}
2017-11-03 02:03:03.502 UTC [orderer/multichain] newChainSupport -> DEBU 0e2 [channel: testchainid] Retrieved metadata for tip of chain (blockNumber=0, lastConfig=0, lastConfigSeq=0):
2017-11-03 02:03:03.503 UTC [orderer/kafka] newChain -> INFO 0e3 [channel: testchainid] Starting chain with last persisted offset -3 and last recorded block 0
2017-11-03 02:03:03.505 UTC [orderer/multichain] NewManagerImpl -> INFO 0e4 Starting with system channel testchainid and orderer type kafka
2017-11-03 02:03:03.505 UTC [orderer/main] main -> INFO 0e5 Beginning to serve requests
2017-11-03 02:03:03.506 UTC [orderer/kafka] setupProducerForChannel -> INFO 0e6 [channel: testchainid] Setting up the producer for this channel...
2017-11-03 02:03:03.506 UTC [orderer/kafka] try -> DEBU 0e7 [channel: testchainid] Connecting to the Kafka cluster
```
@Ratnakar Is it OK?
@Ratnakar But the e2e_cli_kafka example prompt me the error: https://hastebin.com/sefekeduhi.vbs
```
CORE_LOGGING_LEVEL=DEBUG
CORE_PEER_ADDRESS=peer0.org1.example.com:7051
2017-11-03 02:03:03.621 UTC [main] main -> ERRO 001 Cannot run peer because error when setting up MSP from directory /opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1.example.com/users/Admin@org1.example.com/msp: err admin 0 is invalid, validation error Could not obtain certification chain, err A CA certificate cannot be used directly by this MSP
!!!!!!!!!!!!!!! Channel creation failed !!!!!!!!!!!!!!!!
================== ERROR !!! FAILED to execute End-2-End Scenario ==================
```
@Ratnakar the CA certificate error indicates that you seem to be running the peer with a LOCAL MSP signing cert that has the CA attribute (which is NOT allowed)
this is indicated by an X509 extension of 'CA:TRUE'
verify that you are NOT using a CA cert for the signing cert of the Peer
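One way to check this (a sketch using OpenSSL; the demo cert below is a throwaway self-signed CA cert generated just to show what the *bad* case looks like, and the real path to check would be the peer's `.../msp/signcerts/` cert):

```shell
# Generate a throwaway self-signed cert for demonstration; `openssl req -x509`
# marks it as a CA by default, which is exactly what a signing cert must NOT be.
openssl req -x509 -newkey ec -pkeyopt ec_paramgen_curve:prime256v1 -nodes \
  -keyout /tmp/demo-key.pem -out /tmp/demo-cert.pem -days 1 -subj "/CN=demo" 2>/dev/null

# For a real peer, point this at its local MSP signcerts file instead.
# A valid signing cert must NOT show "CA:TRUE" in Basic Constraints.
openssl x509 -in /tmp/demo-cert.pem -noout -text | grep -A1 'Basic Constraints'
```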
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=vCRWNWMdaDsboAv6q) @kostas have shared the docker ps -a output can you please help why am not able to create channel https://hastebin.com/ujilasatij.nginx
@Ratnakar the docker-compose file is: https://hastebin.com/zunigiwije.http
```yaml
version: '2'

services:

  zookeeper0.example.com:
    container_name: zookeeper0.example.com
    extends:
      file: base/docker-compose-base.yaml
      service: zookeeper
    environment:
      - ZOO_MY_ID=1
      - ZOO_SERVERS=server.1=zookeeper0.example.com:2888:3888 server.2=zookeeper1.example.com:2888:3888 server.3=zookeeper2.example.com:2888:3888

  zookeeper1.example.com:
    container_name: zookeeper1.example.com
    extends:
      file: base/docker-compose-base.yaml
      service: zookeeper
    environment:
      - ZOO_MY_ID=2
      - ZOO_SERVERS=server.1=zookeeper0.example.com:2888:3888 server.2=zookeeper1.example.com:2888:3888 server.3=zookeeper2.example.com:2888:3888

  zookeeper2.example.com:
    container_name: zookeeper2.example.com
    extends:
      file: base/docker-compose-base.yaml
      service: zookeeper
    environment:
      - ZOO_MY_ID=3
      - ZOO_SERVERS=server.1=zookeeper0.example.com:2888:3888 server.2=zookeeper1.example.com:2888:3888 server.3=zookeeper2.example.com:2888:3888

  kafka0.example.com:
    container_name: kafka0.example.com
    extends:
      file: base/docker-compose-base.yaml
      service: kafka
    environment:
      - KAFKA_BROKER_ID=0
      - KAFKA_MIN_INSYNC_REPLICAS=2
      - KAFKA_DEFAULT_REPLICATION_FACTOR=3
      - KAFKA_ZOOKEEPER_CONNECT=zookeeper0.example.com:2181,zookeeper1.example.com:2181,zookeeper2.example.com:2181
    depends_on:
      - zookeeper0.example.com
      - zookeeper1.example.com
      - zookeeper2.example.com

  kafka1.example.com:
    container_name: kafka1.example.com
    extends:
      file: base/docker-compose-base.yaml
      service: kafka
    environment:
      - KAFKA_BROKER_ID=1
      - KAFKA_MIN_INSYNC_REPLICAS=2
      - KAFKA_DEFAULT_REPLICATION_FACTOR=3
      - KAFKA_ZOOKEEPER_CONNECT=zookeeper0.example.com:2181,zookeeper1.example.com:2181,zookeeper2.example.com:2181
    depends_on:
      - zookeeper0.example.com
      - zookeeper1.example.com
      - zookeeper2.example.com

  kafka2.example.com:
    container_name: kafka2.example.com
    extends:
      file: base/docker-compose-base.yaml
      service: kafka
    environment:
      - KAFKA_BROKER_ID=2
      - KAFKA_MIN_INSYNC_REPLICAS=2
      - KAFKA_DEFAULT_REPLICATION_FACTOR=3
      - KAFKA_ZOOKEEPER_CONNECT=zookeeper0.example.com:2181,zookeeper1.example.com:2181,zookeeper2.example.com:2181
    depends_on:
      - zookeeper0.example.com
      - zookeeper1.example.com
      - zookeeper2.example.com

  kafka3.example.com:
    container_name: kafka3.example.com
    extends:
      file: base/docker-compose-base.yaml
      service: kafka
    environment:
      - KAFKA_BROKER_ID=3
      - KAFKA_MIN_INSYNC_REPLICAS=2
      - KAFKA_DEFAULT_REPLICATION_FACTOR=3
      - KAFKA_ZOOKEEPER_CONNECT=zookeeper0.example.com:2181,zookeeper1.example.com:2181,zookeeper2.example.com:2181
    depends_on:
      - zookeeper0.example.com
      - zookeeper1.example.com
      - zookeeper2.example.com

  orderer.example.com:
    extends:
      file: base/docker-compose-base.yaml
      service: orderer.example.com
    container_name: orderer.example.com
    links:
      - kafka0.example.com
      - kafka1.example.com
      - kafka2.example.com
      - kafka3.example.com
    depends_on:
      - zookeeper0.example.com
      - zookeeper1.example.com
      - zookeeper2.example.com
      - kafka0.example.com
      - kafka1.example.com
      - kafka2.example.com
      - kafka3.example.com

  peer0.org1.example.com:
    container_name: peer0.org1.example.com
    extends:
      file: base/docker-compose-base.yaml
      service: peer0.org1.example.com

  peer1.org1.example.com:
    container_name: peer1.org1.example.com
    extends:
      file: base/docker-compose-base.yaml
      service: peer1.org1.example.com

  peer0.org2.example.com:
    container_name: peer0.org2.example.com
    extends:
      file: base/docker-compose-base.yaml
      service: peer0.org2.example.com

  peer1.org2.example.com:
    container_name: peer1.org2.example.com
    extends:
      file: base/docker-compose-base.yaml
      service: peer1.org2.example.com

  cli:
    container_name: cli
    image: hyperledger/fabric-tools
    tty: true
    environment:
      - GOPATH=/opt/gopath
      - CORE_VM_ENDPOINT=unix:///host/var/run/docker.sock
      - CORE_LOGGING_LEVEL=DEBUG
      - CORE_PEER_ID=cli
      - CORE_PEER_ADDRESS=peer0.org1.example.com:7051
      - CORE_PEER_LOCALMSPID=Org1MSP
      - CORE_PEER_TLS_ENABLED=true
      - CORE_PEER_TLS_CERT_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/tls/server.crt
      - CORE_PEER_TLS_KEY_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/tls/server.key
      - CORE_PEER_TLS_ROOTCERT_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/tls/ca.crt
      - CORE_PEER_MSPCONFIGPATH=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1.example.com/users/Admin@org1.example.com/msp
    working_dir: /opt/gopath/src/github.com/hyperledger/fabric/peer
    command: /bin/bash -c './scripts/script.sh ${CHANNEL_NAME}; sleep $TIMEOUT'
    volumes:
      - /var/run/:/host/var/run/
      - ../chaincode/go/:/opt/gopath/src/github.com/hyperledger/fabric/examples/chaincode/go
      - ./crypto-config:/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/
      - ./scripts:/opt/gopath/src/github.com/hyperledger/fabric/peer/scripts/
      - ./channel-artifacts:/opt/gopath/src/github.com/hyperledger/fabric/peer/channel-artifacts
    depends_on:
      - orderer.example.com
      - peer0.org1.example.com
      - peer1.org1.example.com
      - peer0.org2.example.com
      - peer1.org2.example.com

  busybox:
    container_name: busybox
    image: busybox
    tty: true
    depends_on:
      - orderer.example.com
      - kafka0.example.com
```
@Ratnakar I did not modify the configtx.yaml file.
https://github.com/hyperledger/fabric/blob/master/examples/e2e_cli/configtx.yaml#L132-L135
please change this section as follows:
```
Brokers:
  - kafka0.example.com:9092
  - kafka1.example.com:9092
  - kafka2.example.com:9092
  - kafka3.example.com:9092
```
If you don't modify configtx.yaml, the orderer's genesis block will have the wrong Kafka broker addresses, and hence you see those connectivity issues
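To double-check which broker addresses actually landed in a generated genesis block, one can decode it with configtxlator (a sketch; it assumes the block file produced by configtxgen is named `genesis.block`):

```shell
# Decode the genesis block to JSON and look at the KafkaBrokers config value
configtxlator proto_decode --type common.Block --input genesis.block --output genesis.json
grep -A6 '"brokers"' genesis.json   # should list kafka0.example.com:9092, etc.
```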
@Ratnakar Sorry, I forgot. I did modify this place in configtx.yaml. If I do not append ".example.com" to kafka0, the orderer cannot resolve the address of the Kafka broker.
@Ratnakar I modified ......
Ah, my bad. I didn't notice Jeff's message.
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=oEjj8XLYwMj9q2CLo) @srongzhe As @jeffgarratt pointed can you please verify your certs ?
@Ratnakar In the folder "crypto-config/peerOrganizations/org1.example.com/users/Admin@org1.example.com/msp", there are only four sub-folders: admincerts, cacerts, keystore, signcerts
@Ratnakar When I run "./generateArtifacts.sh" I got the prompt: https://hastebin.com/ewefurebod.coffeescript
```
##########################################################
##### Generate certificates using cryptogen tool #########
##########################################################
org1.example.com
2017-11-02 23:23:21.985 EDT [bccsp] GetDefault -> WARN 001 Before using BCCSP, please call InitFactories(). Falling back to bootBCCSP.
2017-11-02 23:23:22.008 EDT [bccsp] GetDefault -> WARN 002 Before using BCCSP, please call InitFactories(). Falling back to bootBCCSP.
2017-11-02 23:23:22.010 EDT [bccsp] GetDefault -> WARN 003 Before using BCCSP, please call InitFactories(). Falling back to bootBCCSP.
2017-11-02 23:23:22.011 EDT [bccsp] GetDefault -> WARN 004 Before using BCCSP, please call InitFactories(). Falling back to bootBCCSP.
2017-11-02 23:23:22.012 EDT [bccsp] GetDefault -> WARN 005 Before using BCCSP, please call InitFactories(). Falling back to bootBCCSP.
org2.example.com
2017-11-02 23:23:22.015 EDT [bccsp] GetDefault -> WARN 006 Before using BCCSP, please call InitFactories(). Falling back to bootBCCSP.
2017-11-02 23:23:22.016 EDT [bccsp] GetDefault -> WARN 007 Before using BCCSP, please call InitFactories(). Falling back to bootBCCSP.
2017-11-02 23:23:22.017 EDT [bccsp] GetDefault -> WARN 008 Before using BCCSP, please call InitFactories(). Falling back to bootBCCSP.
2017-11-02 23:23:22.019 EDT [bccsp] GetDefault -> WARN 009 Before using BCCSP, please call InitFactories(). Falling back to bootBCCSP.
2017-11-02 23:23:22.020 EDT [bccsp] GetDefault -> WARN 00a Before using BCCSP, please call InitFactories(). Falling back to bootBCCSP.
2017-11-02 23:23:22.022 EDT [bccsp] GetDefault -> WARN 00b Before using BCCSP, please call InitFactories(). Falling back to bootBCCSP.
2017-11-02 23:23:22.023 EDT [bccsp] GetDefault -> WARN 00c Before using BCCSP, please call InitFactories(). Falling back to bootBCCSP.
2017-11-02 23:23:22.024 EDT [bccsp] GetDefault -> WARN 00d Before using BCCSP, please call InitFactories(). Falling back to bootBCCSP.
2017-11-02 23:23:22.026 EDT [bccsp] GetDefault -> WARN 00e Before using BCCSP, please call InitFactories(). Falling back to bootBCCSP.
2017-11-02 23:23:22.027 EDT [bccsp] GetDefault -> WARN 00f Before using BCCSP, please call InitFactories(). Falling back to bootBCCSP.
2017-11-02 23:23:22.030 EDT [bccsp] GetDefault -> WARN 010 Before using BCCSP, please call InitFactories(). Falling back to bootBCCSP.
Using configtxgen -> /opt/gopath/src/github.com/hyperledger/fabric/examples/e2e_cli_kafka/bin/configtxgen
##########################################################
######### Generating Orderer Genesis block ##############
##########################################################
2017-11-02 23:23:22.046 EDT [common/configtx/tool] main -> INFO 001 Loading configuration
2017-11-02 23:23:22.051 EDT [msp] getMspConfig -> INFO 002 intermediate certs folder not found at [/opt/gopath/src/github.com/hyperledger/fabric/examples/e2e_cli_kafka/crypto-config/ordererOrganizations/example.com/msp/intermediatecerts]. Skipping.: [stat /opt/gopath/src/github.com/hyperledger/fabric/examples/e2e_cli_kafka/crypto-config/ordererOrganizations/example.com/msp/intermediatecerts: no such file or directory]
2017-11-02 23:23:22.052 EDT [msp] getMspConfig -> INFO 003 crls folder not found at [/opt/gopath/src/github.com/hyperledger/fabric/examples/e2e_cli_kafka/crypto-config/ordererOrganizations/example.com/msp/intermediatecerts]. Skipping.: [stat /opt/gopath/src/github.com/hyperledger/fabric/examples/e2e_cli_kafka/crypto-config/ordererOrganizations/example.com/msp/crls: no such file or directory]
2017-11-02 23:23:22.052 EDT [msp] getMspConfig -> INFO 004 MSP configuration file not found at [/opt/gopath/src/github.com/hyperledger/fabric/examples/e2e_cli_kafka/crypto-config/ordererOrganizations/example.com/msp/config.yaml]: [stat /opt/gopath/src/github.com/hyperledger/fabric/examples/e2e_cli_kafka/crypto-config/ordererOrganizations/example.com/msp/config.yaml: no such file or directory]
2017-11-02 23:23:22.074 EDT [msp] getMspConfig -> INFO 005 intermediate certs folder not found at [/opt/gopath/src/github.com/hyperledger/fabric/examples/e2e_cli_kafka/crypto-config/peerOrganizations/org1.example.com/msp/intermediatecerts]. Skipping.: [stat /opt/gopath/src/github.com/hyperledger/fabric/examples/e2e_cli_kafka/crypto-config/peerOrganizations/org1.example.com/msp/intermediatecerts: no such file or directory]
2017-11-02 23:23:22.075 EDT [msp] getMspConfig -> INFO 006 crls folder not found at [/opt/gopath/src/github.com/hyperledger/fabric/examples/e2e_cli_kafka/crypto-config/peerOrganizations/org1.example.com/msp/intermediatecerts]. Skipping.: [stat /opt/gopath/src/github.com/hyperledger/fabric/examples/e2e_cli_kafka/crypto-config/peerOrganizations/org1.example.com/msp/crls: no such file or directory]
2017-11-02 23:23:22.075 EDT [msp] getMspConfig -> INFO 007 MSP configuration file not found at [/opt/gopath/src/github.com/hyperledger/fabric/examples/e2e_cli_kafka/crypto-config/peerOrganizations/org1.example.com/msp/config.yaml]: [stat /opt/gopath/src/github.com/hyperledger/fabric/examples/e2e_cli_kafka/crypto-config/peerOrganizations/org1.example.com/msp/config.yaml: no such file or directory]
2017-11-02 23:23:22.076 EDT [msp] getMspConfig -> INFO 008 intermediate certs folder not found at [/opt/gopath/src/github.com/hyperledger/fabric/examples/e2e_cli_kafka/crypto-config/peerOrganizations/org2.example.com/msp/intermediatecerts]. Skipping.: [stat /opt/gopath/src/github.com/hyperledger/fabric/examples/e2e_cli_kafka/crypto-config/peerOrganizations/org2.example.com/msp/intermediatecerts: no such file or directory]
2017-11-02 23:23:22.077 EDT [msp] getMspConfig -> INFO 009 crls folder not found at [/opt/gopath/src/github.com/hyperledger/fabric/examples/e2e_cli_kafka/crypto-config/peerOrganizations/org2.example.com/msp/intermediatecerts]. Skipping.: [stat /opt/gopath/src/github.com/hyperledger/fabric/examples/e2e_cli_kafka/crypto-config/peerOrganizations/org2.example.com/msp/crls: no such file or directory]
2017-11-02 23:23:22.077 EDT [msp] getMspConfig -> INFO 00a MSP configuration file not found at [/opt/gopath/src/github.com/hyperledger/fabric/examples/e2e_cli_kafka/crypto-config/peerOrganizations/org2.example.com/msp/config.yaml]: [stat /opt/gopath/src/github.com/hyperledger/fabric/examples/e2e_cli_kafka/crypto-config/peerOrganizations/org2.example.com/msp/config.yaml: no such file or directory]
2017-11-02 23:23:22.078 EDT [common/configtx/tool] doOutputBlock -> INFO 00b Generating genesis block
2017-11-02 23:23:22.079 EDT [common/configtx/tool] doOutputBlock -> INFO 00c Writing genesis block
```
@srongzhe To make it easier for others to follow this channel, please use a service like hastebin to save your logs, then simply paste the link here
@jyellick Thank you, I try it.
@jyellick https://pastebin.com/B3eVCLnN
@jyellick Is it right?
What is the default value of grpc.max_send_message_length and grpc.max_receive_message_length for orderer and peers ?
@MadhavaReddy: I am far from a Docker expert but as best as I can tell, your `orderer` belongs to a different network than the Kafka brokers. If your problem persists, please try to come up with the bare-minimum Docker Compose configuration that reveals this problem. (See: https://stackoverflow.com/help/mcve) As things stand, I'm looking at a network configuration that's needlessly complex for debugging this problem.
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=4rPRuAfxNzuC8JbAD) @kostas Thank you, will try with simple one first
I like this diagram that @sanchezl has drawn to show the links between orderers, brokers, and ZK nodes. This might be helpful to you when revising your Docker Compose configuration: https://github.com/hyperledger/fabric/blob/master/orderer/common/server/docker-compose.yml#L16
Has joined the channel.
Has joined the channel.
hi guys, I am getting the below error when I instantiate my chaincode with TLS enabled ```root@991dbf7c759d:/opt/gopath/src/github.com/hyperledger/fabric/peer/channels# peer chaincode instantiate orderer.art.ifar.org:7050 -C mainchannel -n artmanager -v 0 -c '{"Args":[""]}' --tls $CORE_PEER_TLS_ENABLE-cafile $ORDERER_CA
2017-11-03 17:56:26.414 UTC [msp] GetLocalMSP -> DEBU 001 Returning existing local MSP
2017-11-03 17:56:26.414 UTC [msp] GetDefaultSigningIdentity -> DEBU 002 Obtaining default signing identity
2017-11-03 17:56:26.418 UTC [chaincodeCmd] checkChaincodeCmdParams -> INFO 003 Using default escc
2017-11-03 17:56:26.418 UTC [chaincodeCmd] checkChaincodeCmdParams -> INFO 004 Using default vscc
2017-11-03 17:56:26.419 UTC [msp/identity] Sign -> DEBU 005 Sign: plaintext: 0AD1070A6908031A0C08CADBF2CF0...0A000A000A04657363630A0476736363
2017-11-03 17:56:26.419 UTC [msp/identity] Sign -> DEBU 006 Sign: digest: D0A96A11EEF74713723803BCFDDE78D85D8484641AFBF8D2613B06823D13D
Error: Error endorsing chaincode: rpc error: code = Unknown desc = Timeout expired while starting chaincode artmanager:0(networkid:dev,peerid:peer0.louvre.fr,tx:d7125ed0e897b93f29f956b4243f43577a7e1471fe9a992ebe8af3dc582b077f)
Usage:
peer chaincode instantiate [flags]
Flags:
-C, --channelID string The channel on which this command should be executed (default "testchainid")
-c, --ctor string Constructor message for the chaincode in JSON format (default "{}")
-E, --escc string The name of the endorsement system chaincode to be used for this chaincode
-l, --lang string Language the chaincode is written in (default "golang")
-n, --name string Name of the chaincode
-P, --policy string The endorsement policy associated to this chaincode
-v, --version string Version of the chaincode specified in install/instantiate/upgrade commands
-V, --vscc string The name of the verification system chaincode to be used for this chaincode
Global Flags:
--cafile string Path to file containing PEM-encoded trusted certificate(s) for the orde
--logging-level string Default logging level and overrides, see core.yaml for full syntax
-o, --orderer string Ordering service endpoint
--test.coverprofile string Done (default "coverage.cov")
--tls Use TLS when communicating with the orderer endpoint
```
```docker logs 5037920ec8db
2017-11-03 17:56:27.076 UTC [shim] userChaincodeStreamGetter -> ERRO 001 Error trying to connect to local peer: x509: cannot validate certificate for 172.18.0.3 because it doesn't contain any IP SANs
2017-11-03 17:56:27.076 UTC [artmanager] Errorf -> ERRO 002 error starting chaincode: Error trying to connect to local peer: x509: cannot validate certificate for 172.18.0.3 because it doesn't contain any IP SANs
```
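For what it's worth, the `doesn't contain any IP SANs` error usually means the peer's TLS certificate was generated without the address the chaincode container dials. If the certs come from `cryptogen`, the `SANS` list in `crypto-config.yaml` can add extra subject alternative names; the values below are illustrative, and a resolvable DNS name is generally more robust than a hard-coded container IP:

```yaml
PeerOrgs:
  - Name: Louvre
    Domain: louvre.fr
    Specs:
      - Hostname: peer0
        SANS:
          - "peer0.louvre.fr"
          - "172.18.0.3"   # illustrative container IP
```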
``` -> DEBU 56b container lock deleted(dev-peer0.louvre.fr-artmanager-0)
peer0.louvre.fr | 2017-11-03 18:01:27.084 UTC [chaincode] Launch -> ERRO 56c launchAndWaitForRegister failed Timeout expired while starting chaincode artmanager:0(networkid:dev,peerid:peer0.louvre.fr,tx:d7125ed0e897b93f29f956b4243f43577a7e1471fe9a992ebe8af3dc582b077f)
peer0.louvre.fr | 2017-11-03 18:01:27.084 UTC [endorser] callChaincode -> DEBU 56d Exit
peer0.louvre.fr | 2017-11-03 18:01:27.084 UTC [endorser] simulateProposal -> ERRO 56e failed to invoke chaincode name:"lscc" on transaction d7125ed0e897b93f29f956b4243f43577a7e1471fe9a992ebe8af3dc582b077f, error: Timeout expired while starting chaincode artmanager:0(networkid:dev,peerid:peer0.louvre.fr,tx:d7125ed0e897b93f29f956b4243f43577a7e1471fe9a992ebe8af3dc582b077f)
peer0.louvre.fr | 2017-11-03 18:01:27.084 UTC [endorser] simulateProposal -> DEBU 56f Exit
peer0.louvre.fr | 2017-11-03 18:01:27.084 UTC [lockbasedtxmgr] Done -> DEBU 570 Done with transaction simulation / query execution [832147a9-0061-49bf-8793-5e8779f4f494]
peer0.louvre.fr | 2017-11-03 18:01:27.084 UTC [endorser] ProcessProposal -> DEBU 571 Exit
orderer.art.ifar.org | 2017-11-03 18:01:27.085 UTC [orderer/common/broadcast] Handle -> WARN 1373 Error reading from stream: rpc error: code = Canceled desc = context canceled
orderer.art.ifar.org | 2017-11-03 18:01:27.086 UTC [orderer/main] func1 -> DEBU 1374 Closing Broadcast stream
```
Has joined the channel.
@agiledeveloper Chaincode related questions should be posted in #fabric-peer-endorser-committer , based on what you've posted, this isn't related to ordering
PLEASE READ: Questions here should be related to either the ordering service code and its APIs (Broadcast/Deliver), configuration transactions, or the ordering service consensus plugins (Solo/Kafka/SBFT). Before posting your question, please take time to ensure that your question is precise and concise, and use a service like Pastebin or GitHub Gist for all log outputs that you wish to reference. For example: Bad question: Why do I get the error `BAD_REQUEST`? Good question: Using `fabric-examples/first-network/byfn.sh`, when submitting the channel creation as `Admin@org1.example.com` it succeeds, but when using `User1@org1.example.com` it fails with `BAD_REQUEST`. (Full log can be found here: https://pastebin.com/LFGNB88a) Why does this second request fail?
Has joined the channel.
Given the same number of orderers, will a larger number of Kafka and ZooKeeper nodes help to improve the performance of the orderer cluster?
I'd wager not, since that means you need to replicate the same data across more nodes.
If you shard the data using channels, that will simply offload computations so that will improve things
> If you shard the data using channels, that will simply offload computations so that will improve things
@yacovm: Can you define "offload computations"?
Yeah. Compare having all the Kafka servers you have used by all orderers for all channels
vs. only a subset of them for each channel.
Ah alright, we agree then. My response would have been: if you keep the replication factor fixed, increasing the number of Kafka brokers means that a broker may have fewer channels to replicate, thus its load is reduced, thus the performance you may get out of this might improve, etc. etc.
(And I'd also add to that that I don't expect this experiment to result in noticeable improvements. There's probably a good amount of overhead that we're adding before and after the Kafka part that we're not letting this improvement surface.)
@qizhang: At any rate, never do more than 5 or 7 ZK nodes.
Say @kostas / @jyellick, if the amount of transactions that reach the orderer is too high for the block, do they simply get dropped on the floor or do they wait in the gRPC handling logic?
@yacovm: It's the latter. (More precisely: this comes down to how HTTP/2 handles this https://http2.github.io/http2-spec/#rfc.section.5.2.1 - the sender is prevented from flooding the receiver)
Has joined the channel.
Isn't it a problem?
It means that if the throughput of the input is larger than the throughput of the output,
the queue of requests will grow indefinitely, no?
(sorry, writing from phone)
I haven't studied how gRPC handles this internally, but logic says that there is a finite buffer on the sender and any "send" operation invoked when that buffer is full would either block or result in an error. Most likely the former.
Hey All, I'm having some issues with the ordering service and creating a channel:
```
2017-11-06 17:21:47.872 UTC [cauthdsl] func2 -> ERRO 54e Principal deserialization failure (MSP DEFAULT is unknown) for identity 0a0744454641554c54129a072d2d2d2d2d424547494e202d2d2d2d2d0a4d494943697a4343416a4b6741774942416749554245567773537830546d7164627a4e776c654e42427a6f4954307777436759494b6f5a497a6a3045417749770a667a454c4d416b474131554542684d4356564d78457a415242674e5642416754436b4e6862476c6d62334a7561574578466a415542674e564241635444564e680a62694247636d467559326c7a59323878487a416442674e5642416f54466b6c7564475679626d5630494664705a47646c64484d7349456c75597934784444414b0a42674e564241735441316458567a45554d4249474131554541784d4c5a586868625842735a53356a623230774868634e4d5459784d5445784d5463774e7a41770a5768634e4d5463784d5445784d5463774e7a4177576a426a4d517377435159445651514745774a56557a45584d4255474131554543424d4f546d3979644767670a5132467962327870626d45784544414f42674e564241635442314a68624756705a326778477a415a42674e5642416f54456b6835634756796247566b5a3256790a49455a68596e4a70597a454d4d416f474131554543784d44513039514d466b77457759484b6f5a497a6a3043415159494b6f5a497a6a304441516344516741450a4842754b73414f34336873344a4770466669474d6b422f7873494c54734f766d4e32576d77707350485a4e4c36773848576533784350517464472f584a4a765a0a2b433735364b457355424d337977355054666b7538714f42707a43427044414f42674e56485138424166384542414d4342614177485159445652306c424259770a464159494b7759424251554841774547434373474151554642774d434d41774741315564457745422f7751434d414177485159445652304f42425945464f46430a6463555a346573336c746943674156446f794c66567050494d42384741315564497751594d4261414642646e516a32716e6f492f784d55646e3176446d6447310a6e4567514d43554741315564455151654d427943436d31356147397a6443356a62323243446e6433647935746557687663335175593239744d416f47434371470a534d343942414d43413063414d4551434944663948626c34786e337a3445774e4b6d696c4d396c58324671346a5770416152564239374f6d564565794169416b0a61587a422f6a6e6c5533394237577773394249723963386d534f455046365659317547502b644b5630673d3d0a2d2d2d2d2d454e44202d2d2d2d2d0a
2017-11-06 17:21:47.872 UTC [cauthdsl] func2 -> DEBU 54f 0xc420029458 principal evaluation fails
2017-11-06 17:21:47.872 UTC [cauthdsl] func1 -> DEBU 550 0xc420029458 gate 1509988907872140872 evaluation fails
2017-11-06 17:21:47.872 UTC [orderer/common/broadcast] Handle -> WARN 551 Rejecting CONFIG_UPDATE because: Error authorizing update: Error validating DeltaSet: Policy for [Groups] /Channel/Application not satisfied: Failed to reach implicit threshold of 1 sub-policies, required 1 remaining
2017-11-06 17:21:47.872 UTC [orderer/main] func1 -> DEBU 552 Closing Broadcast stream
2017-11-06 17:21:47.875 UTC [orderer/common/deliver] Handle -> WARN 553 Error reading from stream: rpc error: code = Canceled desc = context canceled
2017-11-06 17:21:47.875 UTC [orderer/main] func1 -> DEBU 554 Closing Deliver stream
```
network.zip
You can recreate this by spinning up the docker compose file here
`peer channel create -o orderer.example.com:7050 -c lab5channel -f /etc/hyperledger/artifacts/channel.tx`
is how I am building the project
@kostas right, per-client.
But, what about many clients?
Has anyone seen this issue or know how to fix it?
(thanks!)
In all kinds of web-sites or internet services, they return a status that indicates that they are overloaded
@qizhang More Kafka brokers might theoretically improve performance of the Kafka cluster, but for the purposes of fabric, I do not believe Kafka is likely to ever become a bottleneck. Adding additional OSNs (under the v1.1-preview or newer code) is a better answer for scale.
@tom.appleyard You are attempting to 'create' a channel which already exists.
@jyellick upon rebooting:
```
2017-11-07 15:10:39.374 UTC [cauthdsl] func2 -> DEBU 2a4 0xc42007a140 identity 0 does not satisfy principal: The identity is a member of a different MSP (expected OwnerMSP, got ManufacturerMSP)
2017-11-07 15:10:39.374 UTC [cauthdsl] func2 -> DEBU 2a5 0xc42007a140 principal evaluation fails
2017-11-07 15:10:39.374 UTC [cauthdsl] func1 -> DEBU 2a6 0xc42007a140 gate 1510067439374600909 evaluation fails
2017-11-07 15:10:39.374 UTC [orderer/common/broadcast] Handle -> WARN 2a7 Rejecting CONFIG_UPDATE because: Error authorizing update: Error validating DeltaSet: Policy for [Groups] /Channel/Application not satisfied: Failed to reach implicit threshold of 1 sub-policies, required 1 remaining
2017-11-07 15:10:39.375 UTC [orderer/main] func1 -> DEBU 2a8 Closing Broadcast stream
2017-11-07 15:10:39.378 UTC [orderer/common/deliver] Handle -> WARN 2a9 Error reading from stream: rpc error: code = Canceled desc = context canceled
2017-11-07 15:10:39.378 UTC [orderer/main] func1 -> DEBU 2aa Closing Deliver stream
```
(and attempting to create the channel)
Ah, okay
So, `The identity is a member of a different MSP (expected OwnerMSP, got ManufacturerMSP)`
~Sounds like you may have set the MSP ID incorrectly?~ Actually, could you post the complete orderer log to hastebin and link it here?
sure
https://pastebin.com/Qm18rfrm
Any thoughts @jyellick ?
Sorry, got a bit distracted, looking now
@tom.appleyard Line 1182 is your culprit:
```2017-11-07 15:10:39.369 UTC [cauthdsl] func2 -> DEBU 288 0xc4201bb480 identity 0 does not satisfy principal: This identity is not an admin```
I see, I assume that is because the organization doesn't have the rights to update the config?
i.e. it needs `AdminPrincipal: Role.ADMIN` in it?
@tom.appleyard The admin is whatever cert is in the admincerts directory during the creation of the genesis.block
In terms of 1.0.X at least
Which admincerts directory?
What is the fix
Not entirely sure what your setup is currently. http://hyperledger-fabric.readthedocs.io/en/latest/msp.html?highlight=admincerts#msp-setup-on-the-peer-orderer-side is a good place to start
But wherever you created the channel (using configtxgen), there would be a directory structure that includes a set of admincerts
And any of the certificates in that directory structure will be included as an admin for that channel
@tom.appleyard If you used `cryptogen`, you will find a directory like: ```fabric-samples/first-network/crypto-config/peerOrganizations/org1.example.com/users/Admin@org1.example.com/msp/``` which represents your admin user. If you have a more novel crypto setup, you will need to do as @Asara recommends and find which certs were embedded into your orderer's genesis block by `configtxgen`, then find the corresponding user dir to use for submitting the creation tx.
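Concretely (paths and IDs are illustrative, following the cryptogen layout above), pointing the CLI at that admin MSP before submitting the channel creation tx would look something like:

```shell
export CORE_PEER_LOCALMSPID="Org1MSP"
export CORE_PEER_MSPCONFIGPATH=$PWD/crypto-config/peerOrganizations/org1.example.com/users/Admin@org1.example.com/msp
peer channel create -o orderer.example.com:7050 -c mychannel -f ./channel.tx
```

The key point is that the cert under `CORE_PEER_MSPCONFIGPATH` must be one of the admincerts embedded into the genesis block.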
Thanks for the help guys
works now
However there is still an issue, so the above works when I `docker exec` into a peer container and run the command from there. If I have a local `peer` binary and I try to run this I get the above errors.
Given (from what I understand) the `peer` binary is assuming the identity of the Org Admin and sending the request to the ordering service, surely this should work fine so long as it is pointed at the ordering service? (what would be the issue with running it on my local machine?)
@tom.appleyard My suspicion would be that you will run into TLS problems locally. Since your local machine cannot resolve the hostname of the docker container, you will get a mis-match on the authentication. But certainly, there is nothing magical about docker that makes the peer command work. I run it locally on my machine frequently.
@jyellick I don't think I'm using TLS at all - I don't specify it anywhere in the `docker-compose`
If you are not using TLS, and you set the environment variables appropriately, then you should be fine to run the peer command from your local system. I would run a `printenv|grep PEER` both inside the docker container, and on your host, and make sure the same vars are set. Also make sure that the paths in those vars are fixed for the outside-of-docker context
Getting this:
```
[thomasappleyard:...r Business Networks/network]$ printenv | grep peer
[thomasappleyard:...r Business Networks/network]$ echo $CORE_PEER_LOCALMSPID
ManufacturerMSP
```
Sorry, my typo, should be `PEER`
aha
still doesn't show anything
(printenv that is)
but if you try the vars individually they do
Odd.. does `printenv | grep PATH` produce output?
yes
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=rATj7DY6GLDRFhKL4) `5 to 7 ZK nodes` means 5 zookeeper nodes + 5 kafka nodes to 7 zookeeper nodes + 7 kafka nodes?
ah right, so if I put `export` in front when I set them
Hmmm, perhaps just look through the output of `printenv` to see what peer related variables are set.
Aha, my mistake...
then they appear
I learn something new every day: try: ` ( set -o posix ; set ) | grep PEER`
it worked!
(what does that last command do?)
Shamelessly stolen from here: https://askubuntu.com/questions/275965/how-to-list-all-variables-names-and-their-current-values
Lists all of the environment variables and shell variables
So, do you have all of the same vars set in your docker container as on your host? Are the paths all translated?
I've translated them to be local yes
the command seemed to work
now going to try joining a peer...
https://chat.hyperledger.org/channel/fabric-orderer?msg=vuiWFNZGedC2eDdAy
@jyellick: Out of curiosity, how did you infer this from the given snippet?
https://chat.hyperledger.org/channel/fabric-orderer?msg=mKjEEByS9duA8qEGB
@qizhang: No, I refer just to the number of ZK nodes.
@kostas Sorry, what are ZK nodes?
Ah, my bad. ZK = ZooKeeper.
I see, thanks :-)
I increased the number of Kafka nodes from 3 to 5 and the number of ZooKeeper nodes also from 3 to 5, then I observed a 10% decrease in TPS. Any thoughts on this?
Not really.
I am wondering what is the reason for `never do more than 5 to 7 ZK nodes`?
@qizhang ZK performs leader election consensus. The more nodes, the more failures are tolerated, but the harder leader election is. Essentially, adding nodes to ZK increases fault tolerance, but decreases performance.
@jyellick I see. Is there a minimal number of ZK nodes?
3
@qizhang: Just to spell that out a bit clearer: the more nodes you have, the more nodes you need to write to before you have a quorum. ZK is not optimized for writes, so you're paying a high penalty if you go to double digits. A maximum of 7 or 9 is a common practitioner's rule -- I wish I had a reference handy.
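The arithmetic behind those numbers is just majority-quorum math (the standard crash-fault model, nothing Fabric-specific): every ZK write must be acknowledged by a majority, so growing the ensemble raises both the write fan-out and the number of acknowledgements needed, while fault tolerance grows only every second node.

```go
package main

import "fmt"

// quorum returns the majority size for an n-node ensemble;
// faults returns how many crashed nodes that ensemble tolerates.
func quorum(n int) int { return n/2 + 1 }
func faults(n int) int { return (n - 1) / 2 }

func main() {
	for _, n := range []int{3, 5, 7, 9} {
		fmt.Printf("ZK ensemble of %d: quorum=%d, tolerates %d crashed nodes\n",
			n, quorum(n), faults(n))
	}
}
```

Note that an even-sized ensemble buys nothing: 4 nodes tolerate the same single failure as 3, but every write now needs 3 acks instead of 2.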
> In all kinds of web-sites or internet services, they return a status that indicates that they are overloaded
@yacovm: Understood. We do not do that. Not sure it's as necessary when you have a gRPC stream given that you have a feedback mechanism built-in to your client locally.
why do you think that it does?
it just blocks, no?
In case, for instance, 1000 clients were to connect to the orderer at once, what do you think will happen? (I never tried, have you?)
I have not, which is why I wrote earlier that I haven't studied gRPC's implementation.
> it just blocks, no?
Correct, that's exactly your feedback mechanism.
> In case for instance, a 1000 clients would connect to the orderer at once, what do you think will happen?
My useless guess: on the server side, there is a buffer assigned to every stream, and all of these buffers collectively have a size that's less than or equal to some internal grpc default value. This is how you get the blocking effect on the server.
For the client, I wrote a bit above.
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=ebicxJjHyXYcJngT7) What will the leader do here?
@qizhang ZK is used to assign each partition a Kafka broker as leader. The leader is the one who receives new messages and disseminates them to the replicas (and clients). Please google "Kafka Architecture" or similar to find answers to this and more such questions.
https://kafka.apache.org/documentation/ is also a good place to start
Has joined the channel.
> ZK is used to assign each partition a Kafka broker as leader.
A nit, but technically it's actually a Kafka broker (the cluster controller) that does the partition assignment.
It is the controller that gets assigned via ZK.
ZK is used for controller election, for identifying brokers that have disconnected, and for persisting the ISR sets for each replica.
I second the recommendation for checking out the Kafka documentation. It's an excellent piece, easy to follow, and it'll do a far better job of answering your questions than we do here.
Thanks @kostas @jyellick
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=pdsKKgSBGfNuqoW3k) @jyellick I use the Java SDK to get the history data and it exceeds the 4M limit. I want to know how to set the maximum in the chaincode gRPC server.
@kostas @jyellick can you tell me where the source code uses BlockDataStructureWidth?
@jyellick I think the policy contained in the genesis block should be customizable for the client user, but it's very hard to define each policy's duty.
Has left the channel.
Has joined the channel.
Has joined the channel.
@jyellick \ @kostas here is a question that I'm not certain of https://stackoverflow.com/questions/47174684/block-dissemination-in-fabric/47192169?noredirect=1#comment81334456_47192169
My guess is that it's fine to copy files between orderers as the peer doesn't care who signed the block and also the orderers never verify their own ledger but only either traverse or append to it.
Can you please confirm? (You can also write a comment if you want, of course)
@jyellick https://github.com/hyperledger/fabric/blob/a454d61767b27f57019711b8ab62a0ad344590cd/orderer/common/broadcast/broadcast.go#L37 I think the comment is not very good. The following is mine: ```BroadcastChannelSupport parses the envelope, whose header_type should be `config_update`, for reconfiguring the channel or creating a new channel```
@jyellick @kostas https://github.com/hyperledger/fabric/blob/a454d61767b27f57019711b8ab62a0ad344590cd/orderer/common/server/main.go#L153 the argument order is wrong; the following is right:
```logger.Fatalf("Failed to load ServerRootCAs file '%s' (%s)",
	serverRoot, err)```
Has joined the channel.
@yacovm: Correct, the orderer does not verify their own ledger so you should be able to do that w/o issues. Let's wait for @sanchezl to confirm however; he's been running these experiments this week.
Sure though I am willing to wager lots of fab-coin for your word ;)
I am willing to bet 100 trillion Zimbabwe dollars that I'm right. (I actually bought one such bill from eBay once.)
@asaningmaxchain: RE: ServerRootCAs error message -- good eye! Would you mind pushing a changeset fixing this? You know the process by now :wink: Feel free to tag the usual suspects as reviewers.
@asaningmaxchain: RE BroadcastChannelSupport message -- I'm not sure I follow. Can you recheck the link?
Basic question _I think_. If I were to have a counter chaincode, read a value, increment it, and store it back, assuming starting with value 0, and created two increment invoke proposals back to back then submitted both to the orderer back to back, what would be the expected outcome? I'm pretty sure not two :) I suspect the first would pass but the second transaction would be marked invalid (what RC?) and the value would be one.
@rickr: You are correct, the transaction that was ordered first between the two would be marked as valid by the committing peers and modify the counter value, while the second one would be marked as invalid (and leave the counter value intact).
So if I wanted to determine how _quickly_ *from a client* I could increment the count and have it match the number of invokes I would need to wait from the eventhub to see the block/transaction is stored successfully and then start the next proposal to increment.
@rickr: Correct. If you have an application where this kind of delay is unacceptable, you may want to consider skewing the data model so that it can work with concurrent updates. See @ajp's high-throughput sample for instance: https://github.com/hyperledger/fabric-samples/tree/release/high-throughput
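The core trick in that sample, sketched here with a plain map standing in for the state database (key names are made up for illustration): each transaction writes its own delta under a unique key instead of doing read-modify-write on a single counter key, so concurrent transactions no longer touch the same key and neither invalidates the other at MVCC validation. Readers sum the deltas.

```go
package main

import "fmt"

// sumDeltas folds the per-transaction delta rows into the counter's value.
func sumDeltas(state map[string]int) int {
	total := 0
	for _, delta := range state {
		total += delta
	}
	return total
}

func main() {
	state := map[string]int{}

	// Two "concurrent" increments: each writes a distinct key, so neither
	// transaction's write set conflicts with the other's read set.
	state["counter/tx-d7125e"] = 1
	state["counter/tx-832147"] = 1

	fmt.Println("counter value:", sumDeltas(state)) // prints "counter value: 2"
}
```

The trade-off is that reads become aggregations, and the sample periodically prunes the delta rows back into a single row.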
Has joined the channel.
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=ePsNoxdLTohamBZ54) @kostas https://github.com/hyperledger/fabric/blob/938a3e6124e24e1665698cc54ae37c1473062e80/orderer/common/broadcast/broadcast.go#L37 can you take a look
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=LnJiB8xd9Wv2TzaGG) @rickr @kostas so how do we calculate the TPS?
@asaningmaxchain: RE: BroadcastChannelSupport -- The comment that you're suggesting does not capture what the method does. The existing comment seems accurate and clear to me.
``` whether the message is a config update
// and the channel resources for a message ```
@kostas I don't understand, can you provide more detail about this? In my mind, the message is `config_update`; if the channel id == system channel id, it means reconfiguring, otherwise it creates a new channel.
@asaningmaxchain That's not quite accurate: a user channel could be reconfigured as well.
> the message is `config_update` if the channel id == system channel id
@guoger yes
Also, the [link](https://github.com/hyperledger/fabric/blob/938a3e6124e24e1665698cc54ae37c1473062e80/orderer/common/broadcast/broadcast.go#L37) looks good to me. Could you pls try to re-compose your question?
Has joined the channel.
Has joined the channel.
Has joined the channel.
Hello, I'm new to Hyperledger. I ran into trouble creating a channel from the fabric samples; I tried the balance-transfer sample.
I got this message:
[2017-11-12 18:31:58.951] [DEBUG] Helper - [crypto_ecdsa_aes]: ecdsa signature:
Signature {
r:
please help me
@novira: I suggest posting this message to #fabric instead.
Has joined the channel.
Hi guys, I am facing an issue with orderer nodes going down with the following error -
Cannot set up channel consumer = kafka server: The requested offset is outside the range of offsets maintained by the server for the given topic/partition.
Can someone help please? Thanks
@DeepaR: This message suggests that you're either misconfiguring your Kafka brokers, or you're bumping into some rare bug we haven't seen before.
Can you describe the problem in detail?
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=d3XJYM9wnNT9EACLH) @jyellick would you give me a reply
```// set max send and recv msg sizes
serverOpts = append(serverOpts, grpc.MaxSendMsgSize(MaxSendMsgSize()))
serverOpts = append(serverOpts, grpc.MaxRecvMsgSize(MaxRecvMsgSize()))```
I see in `core/comm/server` that both the peer server and the chaincode server set the default to 100M
@asaningmaxchain The maximum message size in the gRPC server is hardcoded to be 100M
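(As an aside on the 104M/100M numbers: both describe the same limit. 100 MiB is 100*1024*1024 = 104,857,600 bytes, which reads as roughly "104M" in decimal units, while gRPC's default cap is 4 MiB, i.e. the 4194304 that shows up in the "Frame size exceeds maximum" errors. A quick sanity check in plain Go:)

```go
package main

import "fmt"

func main() {
	const mib = 1024 * 1024

	grpcDefault := 4 * mib   // gRPC's default max message size
	fabricLimit := 100 * mib // the hardcoded server-side limit discussed here

	fmt.Println(grpcDefault) // 4194304   (the "maximum: 4194304" in the errors)
	fmt.Println(fabricLimit) // 104857600 (~104.86 decimal MB, hence "104M")
}
```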
However, when I use getHistoryData it tells me that I exceed the 4M limit
It is not configurable for the server. You must, however, configure your client to accept messages from the server which are over 4MB
I use the Java SDK to get the data, and I set:
```peerProperties.put("grpc.NettyChannelBuilderOption.keepAliveTime", new Object[]{5L, TimeUnit.MINUTES});
peerProperties.put("grpc.NettyChannelBuilderOption.keepAliveTimeout", new Object[]{8L, TimeUnit.SECONDS});
peerProperties.put("grpc.NettyChannelBuilderOption.maxInboundMessageSize", 100*1024*1024);```
@rickr
That is the max inbound size the client will allow. I can't imagine the orderer ever sending anything back over 4MB, let alone 100MB. There MUST be a way for the server to also _adjust_ this for the messages it's receiving. @jyellick @mastersingh24 ^^^
@rickr There is a `maxOutboundMessageSize` as well which you must set
https://github.com/grpc/grpc-java/blob/9be41ba0e84d85b9c9a96992d8158c2e95cb3050/core/src/main/java/io/grpc/CallOptions.java#L347-L356
The server gRPC code sets both the maximum incoming and maximum outgoing messages sizes as well. These are both set to 100M
peerProperties.put("grpc.NettyChannelBuilderOption.maxOutboundMessageSize", 9000000);?
https://github.com/hyperledger/fabric-sdk-java/blob/d4ff8c2e0041b72fbe0b03eb959a701b17ff1841/src/test/java/org/hyperledger/fabric/sdkintegration/End2endIT.java#L684
@rickr
@rickr @jyellick
Hi, I faced a similar issue while trying to receive an endorsement from a peer which was greater than 100MB.
Yes, the peer/orderer share the same gRPC server core, which hardcodes the maximum inbound and outbound message sizes to be 100MB
So you will experience an error if you try to send or receive a message larger than this size
If you are seeing an error about 4MB, then your client is not configured correctly
@asaningmaxchain Looks like you have `peerProperties1`. What about the orderer? Are you doing the same for it?
I am receiving an error when starting the orderer: ```2017-11-13 16:30:24.115 UTC [orderer/multichain] newLedgerResources -> CRIT 078 Error creating configtx manager and handlers: Setting up the MSP manager failed, err admin 0 is invalid, validation error Could not obtain certification chain, err A CA certificate cannot be used directly by this MSP
panic: Error creating configtx manager and handlers: Setting up the MSP manager failed, err admin 0 is invalid, validation error Could not obtain certification chain, err A CA certificate cannot be used directly by this MSP
goroutine 1 [running]:
panic(0xb31bc0, 0xc420383a10)
/opt/go/src/runtime/panic.go:500 +0x1a1
github.com/hyperledger/fabric/vendor/github.com/op/go-logging.(*Logger).Panicf(0xc4201ec810, 0xc71091, 0x30, 0xc420383960, 0x1, 0x1)
/opt/gopath/src/github.com/hyperledger/fabric/vendor/github.com/op/go-logging/logger.go:194 +0x127
github.com/hyperledger/fabric/orderer/multichain.(*multiLedger).newLedgerResources(0xc42039b540, 0xc42038b830, 0xc42038b830)
/opt/gopath/src/github.com/hyperledger/fabric/orderer/multichain/manager.go:164 +0x393
github.com/hyperledger/fabric/orderer/multichain.NewManagerImpl(0x122a3a0, 0xc4203aa6e0, 0xc42038b6e0, 0x1226ea0, 0x126ee88, 0x0, 0x0)
/opt/gopath/src/github.com/hyperledger/fabric/orderer/multichain/manager.go:114 +0x23b
main.initializeMultiChainManager(0xc4202286c0, 0x1226ea0, 0x126ee88, 0xc4201eef00, 0x1)
/opt/gopath/src/github.com/hyperledger/fabric/orderer/main.go:219 +0x27a
main.main()
/opt/gopath/src/github.com/hyperledger/fabric/orderer/main.go:75 +0x392``` It's not clear to me what certificate it is complaining about. I am using fabric-ca to generate the certificates. Can someone tell me what I'm missing?
@latitiah This error is from the MSP, so you might be better off asking in #fabric-crypto, but I believe this indicates that a certificate which has been included in the MSP admins directory has not been appropriately signed by the CA for that org.
ok, I'll give it a try. thanks.
All of the certs in the structure are from the fabric-ca
It's possible there is a bug in fabric-ca; you might try asking in #fabric-ca as well
Has joined the channel.
Has joined the channel.
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=gBLqQhRE2vtB5gY9H) @rickr i set it for orderer
@rickr ``` for (String orderName : org.getOrdererNames()) {
Properties ordererProperties = new Properties();
ordererProperties.put("grpc.NettyChannelBuilderOption.keepAliveTime", new Object[]{5L, TimeUnit.MINUTES});
ordererProperties.put("grpc.NettyChannelBuilderOption.keepAliveTimeout", new Object[]{8L, TimeUnit.SECONDS});
ordererProperties.put("grpc.NettyChannelBuilderOption.maxInboundMessageSize",100*1024*1024);
newChannel.addOrderer(client.newOrderer(orderName, org.getOrdererLocation(orderName), ordererProperties));
}```
I have gone through this video : https://www.youtube.com/watch?time_continue=380&v=8kRc2895uMY
Can anyone tell me how Kafka/ZooKeeper can be used as a consensus algorithm?
Has joined the channel.
We need urgent help. We are frequently getting the following exception. We have set the property "grpc.NettyChannelBuilderOption.maxInboundMessageSize" to "100MB". We are adding it as a peer property, but we are still getting a message-size-exceeded exception.
io.grpc.netty.NettyClientTransport$3: Frame size 8196919 exceeds maximum: 4194304.
at org.hyperledger.fabric.sdk.OrdererClient.sendDeliver(OrdererClient.java:295)
at org.hyperledger.fabric.sdk.Orderer.sendDeliver(Orderer.java:172)
at org.hyperledger.fabric.sdk.Channel.seekBlock(Channel.java:1198)
at org.hyperledger.fabric.sdk.Channel.getLatestBlock(Channel.java:1274)
at org.hyperledger.fabric.sdk.Channel.getLastConfigIndex(Channel.java:1097)
at org.hyperledger.fabric.sdk.Channel.getConfigurationBlock(Channel.java:1028)
at org.hyperledger.fabric.sdk.Channel.parseConfigBlock(Channel.java:949)
at org.hyperledger.fabric.sdk.Channel.initialize(Channel.java:676)
Do we need to set the property "grpc.NettyChannelBuilderOption.maxInboundMessageSize" as a peer property or an orderer property?
@deepakvparmar where did you set that property? 4 MB is a limitation on the SDK side
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=KpqX26h7MpNE4BWuB) @Vadim : Yes, there is a 4 MB limit, but we need to raise the maximum because our request also includes a file and sometimes exceeds the 4MB limit. We are currently trying to raise it by providing the property "grpc.NettyChannelBuilderOption.maxInboundMessageSize" as a peer property.
can you show your code?
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=nX3LTmowGHC3oa4Ed) @Vadim :
```peerProperties.put("grpc.NettyChannelBuilderOption.maxInboundMessageSize", getMaxInboundMessageSize()); // getMaxInboundMessageSize() will return 100MB as bytes.
sampleOrg.addPeerProperties(nl[0], peerProperties);
Peer peer = _hfClient.newPeer(peerName, sampleOrg.getPeerLocation(peerName), sampleOrg.getPeerProperties(peerName));```
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=d4dfbXxByxXBEScJv) @kostas when I try to start a peer node, I see this error in the peer log - Failed connecting to orderer, error: context deadline exceeded. [mychannel] Got error &{SERVICE_UNAVAILABLE}
The orderer log has this error - [orderer/common/deliver] Handle -> WARN 03d [channel: mychannel] Rejecting deliver request because of consenter error, and after a while the orderer node goes down with the error mentioned earlier
hi, just wondering - does someone have a sample config for a read-only org on a channel or a whole HL network?
with trace enabled on the client you should see by endpoint what properties are being configured
```
2017-11-15 00:09:35,955 main TRACE Endpoint:70 - Creating endpoint for url grpc://localhost:7050
2017-11-15 00:09:47,129 main TRACE Endpoint:296 - Endpoint with url: grpc://localhost:7050 set managed channel builder method public io.grpc.netty.NettyChannelBuilder io.grpc.netty.NettyChannelBuilder.keepAliveWithoutCalls(boolean) (true)
2017-11-15 00:10:15,386 main TRACE Endpoint:296 - Endpoint with url: grpc://localhost:7050 set managed channel builder method public io.grpc.internal.AbstractManagedChannelImplBuilder io.grpc.internal.AbstractManagedChannelImplBuilder.maxInboundMessageSize(int) (9000000)
2017-11-15 00:10:56,244 main TRACE Endpoint:296 - Endpoint with url: grpc://localhost:7050 set managed channel builder method public io.grpc.netty.NettyChannelBuilder io.grpc.netty.NettyChannelBuilder.keepAliveTimeout(long,java.util.concurrent.TimeUnit) (8, SECONDS)
```
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=inRnh43aAKhjhEEW4) @deepakvparmar Doesn't the stacktrace show Orderer ?
@DeepaR As @kostas says in the quote you included, the orderer returning `SERVICE_UNAVAILABLE` is indicative of the Kafka cluster being configured incorrectly (or not giving it time to start up before connecting the orderers)
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=mQq3FDkAJaiYZJs9B)
@david_dornseifer
> hi, just wondering - does someone have a sample config for a read-only org on a channel or a whole HL network?
If you wish for an org to be read only, then you will want to look at modifying the `/Channel/Application/Writers` policy to include only the orgs you wish (by default, this includes all orgs)
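(For Fabric versions where configtx.yaml exposes channel policies explicitly, a restricted Writers policy might be sketched like this. The org/MSP names are placeholders and the exact YAML shape depends on your Fabric version, so treat this purely as an illustration and check the docs for your release:)

```
Application:
    Policies:
        Writers:
            Type: Signature
            # Only OrgA may submit transactions; the read-only org is omitted.
            Rule: "OR('OrgAMSP.member')"
```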
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=hpajjDYeNpgcyLopA) @jyellick how do I resolve this?
Has joined the channel.
I'm running an Orderer in the SoloOrderer mode and have the following error message in the logs:
```
[orderer/common/deliver] Handle -> ERRO 6bc18[0m [channel: channelName] Error reading from channel, cause was: SERVICE_UNAVAILABLE
```
There are times when the error message doesn't show up in the logs for days but then also sometimes it's showing up hundreds of times in a short period of time.
The used Fabric version was 1.0.0 for a long time and just upgraded to 1.0.4. The upgrade didn't make much of a difference in regards to the error message.
Any idea what could be causing this?
Has joined the channel.
Hi. I have deployed a Hyperledger Fabric Kafka-based ordering service using Ansible on AWS. Everything was working fine for me until yesterday. However, today when I launch a network, the Kafka container is unable to communicate with ZooKeeper. Here are the Docker containers that are running
kafkaerror.png
Here are some parts of the Kafka container logs. Any tips for solving this?
```[2017-11-16 08:23:36,075] FATAL Fatal error during KafkaServer startup. Prepare to shutdown (kafka.server.KafkaServer)
org.I0Itec.zkclient.exception.ZkTimeoutException: Unable to connect to zookeeper server within timeout: 6000
at org.I0Itec.zkclient.ZkClient.connect(ZkClient.java:1223)
        at org.I0Itec.zkclient.ZkClient.
```
check the zookeeper logs?
@yacovm Here are the ZooKeeper logs ```2017-11-16 09:10:26,574 [myid:1] - INFO [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:2181:QuorumPeer$QuorumServer@149] - Resolved hostname: zookeeper2nd to address: zookeeper2nd/172.16.29.4
2017-11-16 09:10:27,614 [myid:1] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxnFactory@192] - Accepted socket connection from /172.16.31.5:48608
2017-11-16 09:10:27,617 [myid:1] - WARN [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@362] - Exception causing close of session 0x0 due to java.io.IOException: ZooKeeperServer not running
2017-11-16 09:10:27,617 [myid:1] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@1008] - Closed socket connection for client /172.16.31.5:48608 (no session established for client)
2017-11-16 09:10:31,578 [myid:1] - WARN [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:2181:QuorumCnxManager@400] - Cannot open channel to 3 at election address zookeeper3rd/172.16.22.5:3888
java.net.SocketTimeoutException: connect timed out
at java.net.PlainSocketImpl.socketConnect(Native Method)
at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:350)
at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206)
at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
at java.net.Socket.connect(Socket.java:589)
at org.apache.zookeeper.server.quorum.QuorumCnxManager.connectOne(QuorumCnxManager.java:381)
at org.apache.zookeeper.server.quorum.QuorumCnxManager.connectAll(QuorumCnxManager.java:426)
at org.apache.zookeeper.server.quorum.FastLeaderElection.lookForLeader(FastLeaderElection.java:843)
at org.apache.zookeeper.server.quorum.QuorumPeer.run(QuorumPeer.java:822)
2017-11-16 09:10:31,579 [myid:1] - INFO [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:2181:QuorumPeer$QuorumServer@149] - Resolved hostname: zookeeper3rd to address: zookeeper3rd/172.16.22.5
2017-11-16 09:10:31,579 [myid:1] - INFO [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:2181:FastLeaderElection@852] - Notification time out: 60000
2017-11-16 09:10:34,343 [myid:1] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxnFactory@192] - Accepted socket connection from /172.16.31.5:48662
2017-11-16 09:10:34,344 [myid:1] - WARN [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@362] - Exception causing close of session 0x0 due to java.io.IOException: ZooKeeperServer not running
2017-11-16 09:10:34,344 [myid:1] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@1008] - Closed socket connection for client /172.16.31.5:48662 (no session established for client)
2017-11-16 09:11:36,584 [myid:1] - WARN [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:2181:QuorumCnxManager@400] - Cannot open channel to 2 at election address zookeeper2nd/172.16.29.4:3888
java.net.SocketTimeoutException: connect timed out
at java.net.PlainSocketImpl.socketConnect(Native Method)
at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:350)
at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206)
at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
at java.net.Socket.connect(Socket.java:589)
at org.apache.zookeeper.server.quorum.QuorumCnxManager.connectOne(QuorumCnxManager.java:381)
at org.apache.zookeeper.server.quorum.QuorumCnxManager.connectAll(QuorumCnxManager.java:426)
at org.apache.zookeeper.server.quorum.FastLeaderElection.lookForLeader(FastLeaderElection.java:843)
at org.apache.zookeeper.server.quorum.QuorumPeer.run(QuorumPeer.java:822)
2017-11-16 09:11:36,585 [myid:1] - INFO [QuorumPeer[myid=1]/0:0:0:0:0:0:0:0:2181:QuorumPeer$QuorumServer@149] - Resolved hostname: zookeeper2nd to address: zookeeper2nd/172.16.29.4
```
so it seems they can't communicate with each other?
are they all up?
there are 3 of them I presume and this is node 1 and the others are 2,3
Yeah, basically there are three Ubuntu servers. I am seeing this error on all three. Here is my fabric setup file ```---
# The url to the fabric source repository
GIT_URL: "http://gerrit.hyperledger.org/r/fabric"
# The gerrit patch set reference, should be automatically set by gerrit
# GERRIT_REFSPEC: "refs/changes/23/11523/3" # 1.0.0
# GERRIT_REFSPEC: "refs/changes/47/12047/3" # 1.0.1
GERRIT_REFSPEC: "refs/changes/13/13113/1"
# This variable defines fabric network attributes
fabric: {
# The user to connect to the server
ssh_user: "ubuntu",
# options are "goleveldb", "CouchDB", default is goleveldb
peer_db: "CouchDB",
tls: false,
# The following section defines how the fabric network is going to be made up
# cas indicates certificate authority containers
# peers indicates peer containers
# orderers indicates orderer containers
# kafka indicates kafka containers
# all names must be in lower case. Numeric characters cannot be used to start
# or end a name. Character dot (.) can only be used in names of peers and orderers.
network: {
fabric001: {
cas: ["ca1st.orga"],
peers: ["anchor@peer1st.orga", "anchor@peer1st.orgb"],
orderers: ["orderer1st.orgc", "orderer1st.orgd"],
zookeepers: ["zookeeper1st"],
kafkas: ["kafka1st"]
},
fabric002: {
cas: ["ca1st.orgb"],
peers: ["worker@peer2nd.orga", "worker@peer2nd.orgb"],
orderers: ["orderer2nd.orgc", "orderer2nd.orgd"],
zookeepers: ["zookeeper2nd"],
kafkas: ["kafka2nd"]
},
fabric003: {
cas: ["ca1st.orgc", "ca1st.orgd"],
peers: ["worker@peer3rd.orga", "worker@peer3rd.orgb"],
orderers: [],
zookeepers: ["zookeeper3rd"],
kafkas: ["kafka3rd"]
}
},
baseimage_tag: "1.0.2",
ca: { tag: "1.0.2", admin: "admin", adminpw: "adminpw" }
}```
Yeah all are up
huh?
how does that prove anything?
look at the logs of the other nodes
not only node 1
Has joined the channel.
Question on sending proposals to multiple orderers. Is there a valid use case where a single proposal needs to be sent to more than one orderer? Ignore the network redundancy case where there may be more than one orderer but as long as one gets the proposal successfully all is well. @mastersingh24 @jyellick ?
Not sure what you mean @rickr, the orderer is sent transactions (not proposals). In general, there is no reason to ever `Broadcast` to more than one orderer (unless a failure occurs)
That's what I assumed -- the only reason to have more than one orderer is for redundancy. The Java SDK, if given more than one orderer to send a transaction to, stops trying after the first one that returns a success response. I don't see a use case for trying to send to all.
Hey all, curious what this error means: Will not enqueue, consenter for this channel hasn't started yet
Hm... seems like an issue in ZK
I've got a funny question.. I'm trying to control my orderer as a daemon process without using the pidfile nonsense, but rather using a uniquely identifiable commandline. For fabric-ca-server, this is easy because you can specify `--ca.name uniqueID`, and for peer, there's a hack where you can specify `--logging-level uniqueID --logging-level info`, but there's no opportunity that I can see for orderer. Can anyone think of a way to do this?
Hi - I'm bootstrapping my orderer in a yaml file. Generated the genesis block as follows
`configtxgen -profile NetworkDuo2 -outputCreateChannelTx ./channel-1.block -channelID mychannel`
Config TX part of NetworkDuo2
```NetworkDuo2:
Orderer:
<<: *OrdererDefaults
Organizations:
- *OrdererOrg
Consortiums:
SampleConsortium:
Organizations:
- *AdxOrg
- *ScaOrg
- *BrokerOrg
- *OnlineOrg
- *AuditorOrg
Application:
Organizations:
- *AdxOrg
- *ScaOrg
- *BrokerOrg
- *OnlineOrg
- *AuditorOrg
Consortium: SampleConsortium```
but when I start the orderer with docker-compose
I get the following error:
```2017-11-17 19:41:36.286 UTC [msp] Validate -> DEBU 01d MSP OrdererOrg validating identity
2017-11-17 19:41:36.287 UTC [orderer/main] createLedgerFactory -> DEBU 01e Ledger dir: /var/hyperledger/production/orderer
2017-11-17 19:41:36.287 UTC [kvledger.util] CreateDirIfMissing -> DEBU 01f CreateDirIfMissing [/var/hyperledger/production/orderer/index/]
2017-11-17 19:41:36.287 UTC [kvledger.util] logDirStatus -> DEBU 020 Before creating dir - [/var/hyperledger/production/orderer/index/] does not exist
2017-11-17 19:41:36.287 UTC [kvledger.util] logDirStatus -> DEBU 021 After creating dir - [/var/hyperledger/production/orderer/index/] exists
panic: Unable to bootstrap orderer. Error unmarshalling genesis block: proto: bad wiretype for field common.BlockHeader.Number: got wiretype 2, want 0```
On inspecting the channel-1.block - I could see the following part
```"header": {
"channel_header": {
"channel_id": "mychannel",
"epoch": "0",
"timestamp": "2017-11-17T20:03:51.000Z",
"tx_id": "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
"type": 2,
"version": 0
}
}
```
and the type there is mentioned as "2" and not "0" - I guess that is what the orderer bootstrap is complaining about.
can somebody help here to get it resolved?
> Hey all, curious what this error means: Will not enqueue, consenter for this channel hasn't started yet
@asara: This means that you're hitting the OSN before it has established a connection with the Kafka/ZK cluster. 8 times out of 10 this is because users don't give enough time for the Kafka/ZK cluster to complete its setup. (The other 2 are because they are misconfiguring Kafka/ZK, and thus the OSN won't ever be able to establish a connection, even if given enough time.)
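(To make the OSN more tolerant of a slow-starting Kafka/ZK cluster, the v1.x orderer.yaml exposes retry knobs under its Kafka section; the values below are illustrative, not recommendations:)

```
Kafka:
    Retry:
        ShortInterval: 5s
        ShortTotal: 10m
        LongInterval: 5m
        LongTotal: 12h
```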
@kostas Yeap it seems like the Kafka cluster hadn't come up yet
@vdods: Are you saying that you can't do this because you can't control the orderer's configuration settings via command line arguments?
(I am guessing you know that you can use ENV vars? But maybe this is an issue because you want to spawn multiple OSNs at the same time? I'd like some more detail on this one when you get some time.)
@Amjadnz: https://chat.hyperledger.org/channel/fabric-orderer?msg=jGju2fTSWsgmC6Yrb
I do not think so.
The error you're getting talks about `common.BlockHeader.Number`, i.e. https://github.com/hyperledger/fabric/blob/release/protos/common/common.proto#L154
The issue is that you're attempting to bootstrap the orderer with a channel creation transaction, instead of a genesis block.
See: http://hyperledger-fabric.readthedocs.io/en/release/configtxgen.html?highlight=configtxgen#bootstrapping-the-orderer
Kind request for next time: please use hastebin.com to post large snippets of logs/code, instead of posting them here.
@kostas No, not exactly. I do control things via env vars, but env vars don't show up in the process commandline (like when you do `ps aux`), and the process commandline is how I'm controlling my daemon processes (i.e. how do you identify which process is which). Meaning that I have to put a unique identifier (in any way possible, including that hack I mentioned with --logging-level) into the commandline.
It could be as simple as -- if the orderer binary had a `--comment XYZ` option that it ignored, then I could put the unique identifier in there. Ideally though, it would be something analogous to `--ca.name XYZ`
or just ignoring all commandline params after a solitary `--`
Thanks @muralisr for the tip on my issue.
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=KT62jqazPHCXgkKek)[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=8QhtDyo8ggCimH336) @kostas - you are correct - The issue was with my `--outputCreateChannelTx` it should have been `--outputBlock`
Reg PasteBin - I would keep that in mind - Thanks.
Another question - if anyone has come across
```Rejecting CONFIG_UPDATE because: Error authorizing update: Error validating ReadSet: Existing config does not contain element for [Values] /Channel/Consortium but was in the read set
```
I've a consortium of 4 orgs and 1 orderer. Specified it properly. Generated the genesis block and the channel tx file as well with the following commands
```configtxgen -profile ADXNetworkDuo2 -outputBlock ./adx-agm-channel-2.block -channelID adx-agm-channel
configtxgen -profile ADXNetworkDuo2 -outputCreateChannelTx ./adx-agm-channel-2.tx -channelID adx-agm-channel```
Configured them on the orderer block in docker yaml file to read the `adx-agm-channel-2.block`
Changed the nodejs files properly to point to the correct channel file `adx-agm-channel-2.tx`
Now when I issue the nodejs create-channel command - I get the above error
`node ./test/integration/e2e/create-channel.js`
My configtx.yaml for the ADXNetworkDuo2 are as follows
https://pastebin.com/UBB066ZL
In the orderer I get the error as
```orderer.ubn.ae | 2017-11-18 09:05:16.850 UTC [common/configtx] addToMap -> DEBU 192 Adding to config map: [Groups] /Channel
orderer.ubn.ae | 2017-11-18 09:05:16.850 UTC [common/configtx] addToMap -> DEBU 193 Adding to config map: [Groups] /Channel/Application
orderer.ubn.ae | 2017-11-18 09:05:16.850 UTC [common/configtx] addToMap -> DEBU 194 Adding to config map: [Values] /Channel/Consortium
orderer.ubn.ae | 2017-11-18 09:05:16.850 UTC [orderer/common/broadcast] Handle -> WARN 195 Rejecting CONFIG_UPDATE because: Error authorizing update: Error validating ReadSet: Existing config does not contain element for [Values] /Channel/Consortium but was in the read set
```
OK - so to move my work forward and for troubleshooting, I changed the way my genesis block is created using the configtx.yaml file
https://pastebin.com/Y7rG3ZaW
Error is there still but a different one - when I try to create a channel through the nodejs command
`node ./test/integration/e2e/create-channel.js`
Output:
https://pastebin.com/DMMN4AQC
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=xr8oscMt7ZJbeeYqP) - Item is resolved. Was passing the channel ID to the `-outputBlock` statement which is not required. But still seems like it might be a bug in the generation of genesis block.
what are pros and cons of using kafka zookeeper consensus in hyperledger fabric?
For v1.0 fabric, you should always use the Kafka based consensus in production. Solo is only intended for test. Once new consensus types are introduced (no firm timetable, but expected in 2018), pros and cons will make more sense.
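For context, the consensus type is selected via `OrdererType` in the `Orderer` section of `configtx.yaml` when the genesis block is generated (the values below are illustrative):
```
Orderer: &OrdererDefaults
    # "solo" is a single-node orderer for dev/test only;
    # "kafka" is the crash-fault-tolerant option intended for v1.0 production
    OrdererType: kafka
    Kafka:
        Brokers:
            - kafka0:9092
            - kafka1:9092
            - kafka2:9092
```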
Alright
Has joined the channel.
`peer chaincode instantiate` using the kafka orderer gives the following error:
Error: Error endorsing chaincode: rpc error: code = Unknown desc = timeout expired while starting chaincode mycc
Any idea why this error occurs?
@jojialex2 pls don't post your question on many channels
Has joined the channel.
Hi guys! I'd like your help with the following problem.
I'm building a Kafka-based orderer.
Host physical memory: 0.7G
4 peers
2 orderers
4 Kafka cluster nodes
3 ZooKeeper ensemble
The Kafka docker container does not start. So I checked the docker-compose messages, and found the error below.
```kafka0 | OpenJDK 64-Bit Server VM warning: INFO: os::commit_memory(0x00000000c0000000, 1073741824, 0) failed; error='Cannot allocate memory' (errno=12)
kafka0 | #
kafka0 | # There is insufficient memory for the Java Runtime Environment to continue.
kafka0 | # Native memory allocation (mmap) failed to map 1073741824 bytes for committing reserved memory.```
According to the Apache Kafka web site, I guess Kafka needs 1G of memory by default.
Are there any methods to reduce memory usage with the hyperledger/fabric-kafka docker image?
Below is the docker-compose service definition used to start Kafka.
```  kafka0:
    container_name: kafka0
    image: hyperledger/fabric-kafka
    # restart: always
    environment:
      - KAFKA_BROKER_ID=0
      - KAFKA_MIN_INSYNC_REPLICAS=2
      - KAFKA_DEFAULT_REPLICATION_FACTOR=3
      - KAFKA_ZOOKEEPER_CONNECT=zookeeper0:2181,zookeeper1:2181,zookeeper2:2181
      - KAFKA_MESSAGE_MAX_BYTES=103809024
      - KAFKA_REPLICA_FETCH_MAX_BYTES=103809024
      - KAFKA_UNCLEAN_LEADER_ELECTION_ENABLE=false
    ports:
      - 9092:9092
      - 9093:9093
    depends_on:
      - zookeeper0
      - zookeeper1
      - zookeeper2
    networks:
      - basic```
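One knob worth trying on a memory-starved host: the broker's JVM heap can usually be capped with `KAFKA_HEAP_OPTS`, a standard Kafka startup variable (whether your image's entrypoint passes it through is worth verifying). A sketch of the compose addition:
```
    environment:
      # cap the Kafka broker JVM heap (Kafka's default is -Xmx1G -Xms1G)
      - KAFKA_HEAP_OPTS=-Xmx256M -Xms128M
```
Even with a smaller heap, 0.7G of host memory for thirteen containers will likely remain far too tight; giving the host more memory is the real fix.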
Hi, please help with the following error:
`peer chaincode instantiate` using the kafka orderer gives the following error:
```peer0.org1.example.com | 2017-11-22 10:47:31.505 UTC [gossip/comm] sendToEndpoint -> WARN 54d Failed obtaining connection for 172.18.0.15:7051, PKIid:[71 88 115 27 28 222 244 157 196 20 206 87 238 41 173 71 185 195 175 121 236 34 206 183 63 153 178 63 241 226 254 204] reason: x509: cannot validate certificate for 172.18.0.15 because it doesn't contain any IP SANs```
Has joined the channel.
hi, i've got a question: for example, I created a working hlf network with mounted volumes (initially empty) for the orderer's and peers' /var/hyperledger/production. After that I destroyed all components, and then started up with the same mounted dirs (which now contain the data created when I created the channel). Why can't the orderer serve the existing files? In the logs: Attempting to read seek info message
[channel: mychannel] Rejecting deliver request because of consenter error
The orderer uses a kafka cluster as a broker.
@jojialex2: This error is not related to the orderer. I suggest using #fabric for this question.
I'm reproducing the case where all containers suddenly stop and are then started again, and I want the state from before the stop to be preserved.
thx
@AfromR: The scenario you describe _should_ work. Set the orderer's level to `DEBUG` and post here. Use a service like hastebin.com for the log.
@takeo: You will most certainly need to provide much more computing resources to your host machine if you want to spin up all these containers.
Has joined the channel.
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=gQpEaNwD93dboJvTa) @kostas
https://hastebin.com/uheqahopeh.coffeescript
@AfromR: Something's up with your Kafka cluster installation. You'll notice that broker2 is inaccessible to begin with (grep for `server misbehaving` messages in the log early on), but that fault alone should be manageable -- Kafka is a CFT system after all. However I suspect that more faults are occurring in your Kafka cluster (running out of resources?) because the orderer is unable to set up the channel consumer for `mychannel` and `testchainid` (which are owned by broker1 and broker3, respectively) -- it's stuck in the retry loop. Since the orderer is unable to complete the handshake with the Kafka cluster, any Deliver requests your clients issue are naturally met with the "Rejecting deliver request because of consenter error" message.
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=ccJPMDBwzQeEgnTvC) @kostas
https://hastebin.com/subategejo.coffeescript
there are logs from orderer where it's fetching data from brokers(additional to previous logs)
also, everything works normally when I create new infrastructure
and should I mount any data directories for kafka and zookeeper, since they started as new containers?
Ah yes, you should. Please consult the Zookeeper and Kafka documentation for the data directories you need to persist.
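As a sketch (host paths here are illustrative, and the in-container paths must match your image's `log.dirs` / `dataDir` settings), the compose entries would gain volumes along these lines:
```
  kafka0:
    volumes:
      - ./persist/kafka0:/tmp/kafka-logs     # Kafka log.dirs
  zookeeper0:
    volumes:
      - ./persist/zk0/data:/data             # ZooKeeper dataDir
      - ./persist/zk0/datalog:/datalog       # ZooKeeper dataLogDir
```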
Has joined the channel.
How can I integrate my own consensus protocol? Is there documentation for this in #hyperledger fabric?
Has joined the channel.
For consensus, hyperledger fabric implemented PBFT, but for future developments it is moving to SBFT. What factors does SBFT change? Can anyone elaborate?
@aabdulwahed: A consensus plugin needs to implement the `Consenter` and `Chain`
interfaces defined here: https://github.com/hyperledger/fabric/blob/master/orderer/consensus/consensus.go
There are two plugins built against these interfaces already:
https://github.com/hyperledger/fabric/tree/master/orderer/consensus/solo
https://github.com/hyperledger/fabric/tree/master/orderer/consensus/kafka
You can study them to take cues for your own implementation.
The entire orderer code can be found here: https://github.com/hyperledger/fabric/tree/master/orderer
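To make the shape of the plugin contract concrete, here is a generic, self-contained Go sketch modeled loosely on the solo plugin. The interface and method names are simplified stand-ins, not Fabric's exact signatures -- consult `consensus.go` for the real ones:

```go
package main

import "fmt"

// Envelope stands in for a Fabric transaction envelope.
type Envelope struct{ Payload string }

// Chain is a simplified stand-in for the per-channel consensus loop.
type Chain interface {
	Order(env Envelope) error // enqueue a normal transaction
	Start()                   // begin the main ordering loop
	Halt()                    // stop the loop
}

// Consenter is a simplified stand-in: a factory returning a Chain
// for each channel the orderer services.
type Consenter interface {
	HandleChain(channelID string) (Chain, error)
}

// soloLikeChain orders messages in arrival order and cuts a "block"
// every batchSize messages, mimicking the solo plugin's structure.
type soloLikeChain struct {
	in        chan Envelope
	exit      chan struct{}
	done      chan struct{}
	batchSize int
	Blocks    [][]Envelope
}

func (c *soloLikeChain) Order(env Envelope) error {
	select {
	case c.in <- env:
		return nil
	case <-c.exit:
		return fmt.Errorf("chain halted")
	}
}

func (c *soloLikeChain) Start() { go c.main() }
func (c *soloLikeChain) Halt()  { close(c.exit) }

func (c *soloLikeChain) main() {
	var pending []Envelope
	for {
		select {
		case env := <-c.in:
			pending = append(pending, env)
			if len(pending) >= c.batchSize {
				c.Blocks = append(c.Blocks, pending) // cut a block
				pending = nil
			}
		case <-c.exit:
			close(c.done)
			return
		}
	}
}

type soloLikeConsenter struct{}

func (soloLikeConsenter) HandleChain(channelID string) (Chain, error) {
	return &soloLikeChain{
		in:        make(chan Envelope),
		exit:      make(chan struct{}),
		done:      make(chan struct{}),
		batchSize: 2,
	}, nil
}

func main() {
	chain, _ := soloLikeConsenter{}.HandleChain("mychannel")
	chain.Start()
	chain.Order(Envelope{"tx1"})
	chain.Order(Envelope{"tx2"}) // second message fills the batch: block cut
	chain.Halt()
	c := chain.(*soloLikeChain)
	<-c.done
	fmt.Println(len(c.Blocks), len(c.Blocks[0])) // 1 2
}
```

The real plugins follow the same shape: the consenter is registered by type name, and each channel gets its own chain goroutine that batches ordered envelopes into blocks.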
@Subramanyam: Not sure I get what you mean by impact factors. SBFT is PBFT with some reasonable, simplifying assumptions.
PBFT was used in an earlier consensus framework, and later SBFT made some changes for the new framework; I want to know those assumptions.
A comparison between the two consensus mechanisms.
@kostas I need a comparison between the two mechanisms, PBFT and SBFT.
https://jira.hyperledger.org/browse/FAB-378
@kostas @jyellick I found that SBFT has a problem: if the primary is down, the state won't move on, as the request won't be batched, nor will a timeout be triggered to change the primary. This issue seems to be related to https://jira.hyperledger.org/browse/FAB-474. Do you have a solution? Thanks!
Has joined the channel.
Hi! Could you tell me what's wrong with the configtx.yaml below?
```---
################################################################################
#
#   Profile
#
################################################################################
Profiles:
    ExampleTwoOrgsOrdererGenesis:
        Orderer:
            <<: *OrdererDefaults
            Organizations:
                - *OrdererOrg
        Consortiums:
            SampleConsortium:
                Organizations:
                    - *Org1
                    - *Org2
    ExampleOrgChannel:
        Consortium: SampleConsortium
        Application:
            <<: *ApplicationDefaults
            Organizations:
                - *Org1
                - *Org2
################################################################################
#
#   Section: Organizations
#
################################################################################
Organizations:
    - &OrdererOrg
        Name: OrdererOrg
        ID: OrdererMSP
        MSPDir: crypto-config/ordererOrganizations/example.jp/msp
    - &Org1
        Name: Org1MSP
        ID: Org1MSP
        MSPDir: crypto-config/peerOrganizations/peer.example.jp/msp
        AnchorPeers:
            - Host: peer0.example.jp
              Port: 7051
    - &Org2
        Name: Org2MSP
        ID: Org2MSP
        MSPDir: crypto-config/peerOrganizations/clientx.jp/msp
        AnchorPeers:
            - Host: peer0.clientx.jp
              Port: 7051
################################################################################
#
#   SECTION: Orderer
#
################################################################################
Orderer: &OrdererDefaults
    OrdererType: kafka
    Addresses:
        - orderer0.example.jp:7050
        - orderer1.example.jp:7050
    BatchTimeout: 2s
    BatchSize:
        MaxMessageCount: 10
        AbsoluteMaxBytes: 99 MB
        PreferredMaxBytes: 512 KB
    Kafka:
        Brokers:
            - kafka0:9092
            - kafka1:9092
            - kafka2:9092
            - kafka3:9092
    Organizations:
################################################################################
#
#   SECTION: Application
#
################################################################################
Application: &ApplicationDefaults
    Organizations:```
I want to run 2 kafka-based orderers. When I instantiate a chaincode, the first orderer doesn't work, but the second orderer works fine.
The log of the first orderer shows the following error.
```orderer0.example.jp | 2017-11-24 04:16:20.681 UTC [cauthdsl] func2 -> ERRO 246 Principal deserialization failure (The supplied identity is not valid, Verify() returned x509: certificate signed by unknown authority (possibly because of "x509: ECDSA verification failure" while trying to verify candidate authority certificate ... ```
How did you generate your certificates, using cryptogen?
@takeo Please use a service like hastebin.com for posting long segments of files or logs. If your configuration works on one orderer, but not the other, most likely it is a problem with how the orderers were bootstrapped. You should generate the genesis block once with `configtxgen`, then remove any ledger resources on the filesystem of each orderer (usually `/var/hyperledger/production/orderer/` ), and use the `file` genesis method, specifying the output file from `configtxgen`.
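Concretely (the profile name and paths here are illustrative), that bootstrap flow looks something like:
```
# generate the genesis block exactly once
configtxgen -profile ExampleTwoOrgsOrdererGenesis -outputBlock ./genesis.block

# on EVERY orderer: remove stale ledger state, then boot from the same block
rm -rf /var/hyperledger/production/orderer/
ORDERER_GENERAL_GENESISMETHOD=file
ORDERER_GENERAL_GENESISFILE=/etc/hyperledger/genesis.block
```
The point is that all orderers must start from the identical genesis block, rather than each generating its own.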
@Glen This is indeed likely related to the censorship detection, the JIRA outlines some possible solutions. The base PBFT paper has clients submit requests to all nodes in the network, and if the primary does not respond, the non-faulty nodes will trigger a view change. [ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=P8zmuKCF3uW73bkCy)
Yes, SBFT has done the broadcast job, but if the primary is down, incoming requests will be left pending and no view change will be triggered. So maybe censorship detection is a solution -- how is that going? As far as I know, it's not in effect yet @jyellick
@Glen @jyellick Thank you for the reply. I made the following mistake..
I wrote the crypto-config.yaml below and used cryptogen.
```#
# "OrdererOrgs" - Definition of organizations managing orderer nodes
OrdererOrgs:
  - Name: Orderer
    Domain: example.jp
    Specs:
      - Hostname: orderer0
  - Name: Orderer
    Domain: example.jp
    Specs:
      - Hostname: orderer1
# "PeerOrgs" - Definition of organizations managing peer nodes
PeerOrgs:
  - Name: Org1
    Domain: peer.example.jp
    Template:
      Count: 2
      # Hostname: {{.Prefix}}{{.Index}} # default
    Users:
      Count: 1
  - Name: Org2
    Domain: clientx.jp
    Template:
      Count: 2
      # Hostname: {{.Prefix}}{{.Index}} # default
    Users:
      Count: 1```
That was wrong. Below was correct. All nodes work well.
```
# "OrdererOrgs" - Definition of organizations managing orderer nodes
OrdererOrgs:
  - Name: Orderer
    Domain: example.jp
    Specs:
      - Hostname: orderer0
      - Hostname: orderer1
# "PeerOrgs" - Definition of organizations managing peer nodes
PeerOrgs:
  - Name: Org1
    Domain: peer.example.jp
    Template:
      Count: 2
      # Hostname: {{.Prefix}}{{.Index}} # default
    Users:
      Count: 1
  - Name: Org2
    Domain: clientx.jp
    Template:
      Count: 2
      # Hostname: {{.Prefix}}{{.Index}} # default
    Users:
      Count: 1```
@Glen No one is actively developing SBFT at the moment. We anticipate re-introducing it in the coming year, but for now, there is no outstanding work against it that I am aware of.
@takeo Thanks for updating that your problem has been fixed, but I must reiterate: please use a service like hastebin.com or similar for posting any large segments of files or logs. It makes this channel very hard to read.
yeah, confirmed
```fabric-ca-client register -c org2CaAdmin/config.yaml --id.name "org2Orderer" --id.type "orderer" --id.affiliation "org2Orderer" --tls.certfiles ca-cert.pem
2017/11/24 06:26:25 [INFO] Configuration file location: /home/suchit/hyperledger/fabric-ca/bin/client/org2CaAdmin/config.yaml
2017/11/24 06:26:25 [INFO] TLS Enabled
2017/11/24 06:26:25 [INFO] TLS Enabled
Error: Response from server: Error Code: 0 - Identity 'org2Ca' may not register type 'orderer'```
Can someone help me understand why this error happens for the orderer? It works for a peer.
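A likely cause (an assumption, since we can't see how the registrar was created): a fabric-ca registrar may only register identity types listed in its own `hf.Registrar.Roles` attribute, and `org2Ca` was presumably granted `peer` but not `orderer`. When the registrar identity was itself registered by its parent admin, it would need something roughly like:
```
fabric-ca-client register --id.name org2Ca \
    --id.attrs 'hf.Registrar.Roles=client,user,peer,orderer' ...
```
After re-registering (or updating) the registrar with the `orderer` role included, registering `org2Orderer` should succeed.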
hi guys
I deployed a kafka-based ordering service on k8s
got an error when creating a channel
log.txt
one orderer whole logs here
```2017-11-24 11:17:47.956 UTC [orderer/kafka] Enqueue -> DEBU 4c4 [channel: testchainid] Enqueueing envelope...
2017-11-24 11:17:47.956 UTC [orderer/kafka] Enqueue -> WARN 4c5 [channel: testchainid] Will not enqueue, consenter for this channel hasn't started yet
2017-11-24 11:17:47.956 UTC [orderer/main] func1 -> DEBU 4c6 Closing Broadcast stream
2017-11-24 11:17:47.958 UTC [orderer/common/deliver] Handle -> WARN 4c7 Error reading from stream: rpc error: code = Canceled desc = context canceled
2017-11-24 11:17:47.958 UTC [orderer/main] func1 -> DEBU 4c8 Closing Deliver stream
```
this may be the main error
can anyone help?
Hi, I am using the CLI for chaincode instantiate, and the console gets stuck after this message:
```peer0.org1.example.com | 2017-11-24 11:40:29.631 UTC [dockercontroller] createContainer -> DEBU 3d6 Created container: feature-peer0.org1.example.com-mycc-1.0-69b833ea5b04412aed03b0935d4ea03b1b0d2b1788b7b308262c5fe39c29eee3```
In the CLI I get a timeout after some time:
```[006 11-24 11:40:29.43 UTC] [github.com/hyperledger/fabric/msp] main.Execute.ExecuteC.execute.func1.chaincodeDeploy.instantiate.GetSignedProposal.Sign -> DEBU Sign: digest: 9F86E80C3F14B0F14959DC3781573F95542DE8C2F9E135871804A31E2FD9BFD8
Error: Error endorsing chaincode: rpc error: code = Unknown desc = Timeout expired while starting chaincode mycc:1.0(networkid:feature,peerid:peer0.org1.example.com,tx:66a909cd6c86b389b73e2760bc86417b9b3047cc85b48f69e86803326a092b1d)```
Has anyone seen this kind of error?
Please help
@jojialex2 - what environment are you using to run your peers? This error is likely caused by the chaincode not being able to communicate with the peer
hyperledger/fabric-peer:x86_64-1.0.4
Thank you, hyperledger/fabric-peer:x86_64-1.0.4
I was getting: x509: cannot validate certificate for 172.18.0.15 because it doesn't contain any IP SANs.
This is for the kafka setup; the solo setup is working.
hi guys
it seems like this issue: https://jira.hyperledger.org/browse/FAB-6002
but I use k8s
if I don't set KAFKA_ADVERTISED_HOST_NAME
the kafka replica fetch will fail
@mastersingh24 he opened https://jira.hyperledger.org/browse/FAB-7094 if that helps. I tried to run it, but his volumes are not configured correctly and I was reluctant to take the time in his stead to fix them.
But it seems the `core.peer.address` is configured correctly, and the TLS certs have DNS SANs in them
Hi all, I am running a network with kafka and would like to know whether the orderer is actually using kafka. Although I set the kafka parameters in the yaml file, from the kafka log I can't tell whether the orderer is using it. Can you please help me understand how to check whether the orderer is using kafka or not?
Has joined the channel.
@MadhavaReddy: When the orderer boots up it prints a log message with its type: https://github.com/hyperledger/fabric/blob/d9c320297bd2a4eff2eb253ce84dc431ef860972/orderer/multichain/manager.go#L114
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=qQcoikmaw8cx9NdXG) @kostas Thank you
Hi
I am getting the following error:
```orderer0.example.com | 2017-11-28 08:55:09.759 UTC [cauthdsl] func2 -> DEBU 300 0xc420024c58 principal evaluation fails
orderer0.example.com | 2017-11-28 08:55:09.759 UTC [cauthdsl] func1 -> DEBU 301 0xc420024c58 gate 1511859309759396635 evaluation fails
orderer0.example.com | 2017-11-28 08:55:09.759 UTC [orderer/common/broadcast] Handle -> WARN 302 Rejecting CONFIG_UPDATE because: Error authorizing update: Error validating DeltaSet: Policy for [Groups] /Channel/Application not satisfied: Failed to reach implicit threshold of 1 sub-policies, required 1 remaining
orderer0.example.com | 2017-11-28 08:55:09.760 UTC [orderer/main] func1 -> DEBU 303 Closing Broadcast stream
orderer0.example.com | 2017-11-28 08:55:09.771 UTC [orderer/common/deliver] Handle -> WARN 304 Error reading from stream: rpc error: code = Canceled desc = context canceled
orderer0.example.com | 2017-11-28 08:55:09.771 UTC [orderer/main] func1 -> DEBU 305 Closing Deliver stream
orderer0.example.com | 2017-11-28 08:55:09.771 UTC [grpc] Printf -> DEBU 306 transport: http2Server.HandleStreams failed to read frame: read tcp 172.21.0.11:7050->172.21.0.16:45194: read: connection reset by peer```
Any idea why this is coming up?
Has joined the channel.
Has anyone tried SSL between Orderer and Kafka Cluster. Does it work ? Is it supported in Fabric ? Noticed there is property for enabling TLS (ORDERER_KAFKA_TLS_ENABLED=true). I am just interested in TLS between the orderer and Kafka with ssl.client.auth=none.
@gauthampamu Yes, TLS between the orderer and Kafka is both tested and supported
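For reference, the relevant knobs live in the `Kafka.TLS` section of `orderer.yaml`, overridable via environment variables along these lines (values are placeholders):
```
ORDERER_KAFKA_TLS_ENABLED=true
ORDERER_KAFKA_TLS_PRIVATEKEY=<PEM-encoded client key>
ORDERER_KAFKA_TLS_CERTIFICATE=<PEM-encoded client cert>
ORDERER_KAFKA_TLS_ROOTCAS=<PEM-encoded CA certs used to validate the brokers>
```
With `ssl.client.auth=none` on the brokers, only the enablement and root-CA settings should matter, since the brokers won't ask the orderer for a client certificate.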
@jojialex2 What is it that you are trying to do? Based on the limited information available, I would guess that you are attempting to add an organization to a channel with one member, but you are not signing with the existing org's admin cert.
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=o8Bm4YuDRDoFnKmj3) @jyellick Thanks Jason. I am looking for documentation or sample docker compose files to enable TLS. Can you point me to the test sample that tests this configuration.
Has joined the channel.
@sanchezl Do you know where this is? ^^
I sent them a sample earlier today.
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=2CR9XAGP22g5YdbaM) @gauthampamu There is a sample config attached to https://jira.hyperledger.org/browse/FAB-5226
@sanchezl @jyellick @kostas I wonder if fabric 1.0.1 version fully support add new peer or add new org
@grapebaba Adding peers is definitely possible, and indeed, fairly trivial. Adding a new org is possible though less straightforward. Adding a new org comes with a few caveats (in particular, gossip leader election must be enabled for peers from the new org until the peer catches up to the config block where the org was added)
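For reference, the leader-election toggle on the new org's peers corresponds to these gossip settings in `core.yaml`, shown here as environment overrides:
```
CORE_PEER_GOSSIP_USELEADERELECTION=true
CORE_PEER_GOSSIP_ORGLEADER=false
```
The first lets the org's peers elect a leader dynamically; the second disables statically designating the peer as org leader, which would not work until the peer has caught up to the config block that added the org.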
thanks jason
any instructions?
For adding a peer, it is really as simple as starting a new binary (with appropriately issued certificate and MSP definition), then executing the standard join channel. For adding an org, there is an example https://github.com/hyperledger/fabric/tree/master/examples/configtxupdate which may be a bit hard to follow. There is a CR from the doc team which is much more verbose and might be a better starting point https://gerrit.hyperledger.org/r/c/15323/
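The "standard join channel" step for a new peer is sketched below (hostnames and channel name are illustrative; double-check the `fetch` syntax against your CLI version):
```
# fetch the channel's genesis block (block 0) from the ordering service
peer channel fetch 0 mychannel.block -o orderer.example.com:7050 -c mychannel
# join the new peer to the channel using that block
peer channel join -b mychannel.block
```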
thanks a lot
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=s2BhQt5BdxyjQeb8d) @jyellick link to gerrit is broken
Has joined the channel.
@iamdm: You haven't switched to the new Gerrit UI (look at the footer in any Gerrit page) and redirects from new UI to old UI are broken. Try this: https://gerrit.hyperledger.org/r/#/c/15323/
@kostas ohh great.. thx!
Has joined the channel.
Has joined the channel.
Has joined the channel.
If the kafka cluster only stores transactions (in some order), then how do blocks get synchronized between all orderers?
The Kafka cluster assigns the order with which transactions are stored. This is what the orderers use in order to cut blocks.
Read more here: https://docs.google.com/document/d/1vNMaM7XhOlu9tB_10dKnlrhy5d7b1u8lSY8a-kVjCO4/edit
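A toy model (not Fabric code) of why this works: every orderer consumes the identical Kafka-assigned sequence and applies the same deterministic cutting rule, so they independently produce the same blocks without ever talking to each other:

```go
package main

import "fmt"

// cutBlocks applies a deterministic batching rule to an ordered
// stream of transactions. Because Kafka hands every orderer the same
// sequence, each orderer running this rule independently produces
// identical blocks.
func cutBlocks(stream []string, maxMessageCount int) [][]string {
	var blocks [][]string
	var pending []string
	for _, tx := range stream {
		pending = append(pending, tx)
		if len(pending) == maxMessageCount {
			blocks = append(blocks, pending) // batch is full: cut a block
			pending = nil
		}
	}
	// In Fabric, a BatchTimeout flushes a partial batch via a
	// time-to-cut message posted to the same partition -- so even
	// timeout-based cuts are part of the shared ordered stream.
	return blocks
}

func main() {
	stream := []string{"tx1", "tx2", "tx3", "tx4", "tx5"}
	// Two independent "orderers" consuming the same stream:
	a := cutBlocks(stream, 2)
	b := cutBlocks(stream, 2)
	fmt.Println(len(a), len(b)) // both cut the same two full blocks
}
```

Block synchronization therefore falls out of determinism: there is no block-exchange protocol between orderers; agreement on the transaction sequence (from Kafka) plus an identical cutting rule is sufficient.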
Has joined the channel.
Hi, I've been using kafka orderer, but after sometime of working properly, I get a log saying `Rejecting deliver because channel
what would be the scenario when we are using kafka based orderer for one channel only ?
@Luxii if you want to run something in production, you should use kafka anyway
and kafka enables you to use several orderers, so if one orderer goes down, other orderers can be used to submit transactions
okay, thanks for your response
@chrisg I would need to know more about your problem to be more specific, but I'd encourage you to look at the orderer log and see if there are any errors
Has joined the channel.
Hi all. Can anyone tell me how the orderer orders transactions? If block B comes before block A and the orderer has received only B, how does the orderer wait for A before sending B to the committing peers?
@ArnabChatterjee https://docs.google.com/document/d/1vNMaM7XhOlu9tB_10dKnlrhy5d7b1u8lSY8a-kVjCO4/edit
peers couldn't connect to orderers
However I'm able to register a user, create a channel and install chaincode
my network times out at instantiation
Any reason?
Or in what direction should I debug it?
@Luxii: Hmm, this doesn't quite make sense. See, chaincode instantiation has nothing to do with your inability to connect to the ordering service nodes. Let's see how we can debug this. First, let's come up with the simplest possible network. 1 org, 1 peer, and a solo ordering service. Can you create channel, install chaincode, and instantiate without issues? Let us know how that goes.
@kostas Okay, I'll try that ... thanks for your response
@Luxii - can you log into the container
and just ping the endpoints?
or `nslookup`
or try to connect via telnet
also worth turning on the gRPC logging
if it's a TLS issue, it might show there
Has joined the channel.
@kostas @jyellick When I use Java to produce the genesis block and then use the orderer command to start the orderer node,
I have some problems
and some suggestions:
1. In Java, Integer's max value is 2^31-1, whereas in golang the max value is 2^32-1. BTW, I don't see where the source code uses the BlockDataStructure properties.
2. https://github.com/hyperledger/fabric/blob/dd9902604314c3a45816174ce3f33a971644ba2e/common/tools/configtxgen/encoder/encoder.go#L247 -- can I skip setting the policy for the consortiums group?
3. I suggest that https://github.com/hyperledger/fabric/blob/dd9902604314c3a45816174ce3f33a971644ba2e/common/tools/configtxgen/main.go#L266 be moved to https://github.com/hyperledger/fabric/blob/dd9902604314c3a45816174ce3f33a971644ba2e/common/tools/configtxgen/main.go#L299 because the other operations don't need it
@asaningmaxchain
> when i use the java to produce the genesis block,and then i start the use the orderer command to start the orderer node
If I understand this correctly, then you are trying to implement your own genesis block producing code in java?
> 1. in the java the Integer max_value if the 2<<31-1 however in the golang the max_value is 2<<32-1,BTW,i don't where the source code use the BlockDataStructure properties
I would expect java to supply some way to specify values greater than int64 for a uint64 proto field. But this is a java protobuf question, and unrelated to fabric; you can search for this online.
> can i don't set the policy for consortiums group
The comment explains the purpose for this line. It should be specified as indicated.
> 3 because the other operation doesn't need to do it
Is this causing a problem?
@jyellick the third one is just a suggestion
Can you please tell me where the fabric source code uses the BlockDataStructure properties?
For the first one, yes -- I use the Java SDK to produce the genesis block.
@yacovm It says unknown host when I ping orderer.example.com
@kostas I've tried with the network configs you suggested but the problem still remains the same
Has joined the channel.
@Luxii: It seems that this problem is also being discussed in #fabric, so let's keep the conversation there.
@kostas yea sure , can you respond to my query there
1.1.0-preview. I have struggled for days to accurately understand network and channel configuration and creation, and am clearly failing. Specifically, I am getting a "BAD_REQUEST -- Attempted to include a member which is not in the consortium" on channel creation from a peer. I have 2 organizations, each with 2 peers and their own CA server. There are 3 orderers using 4 Kafka brokers with 3 ZooKeepers. No errors on anything until channel creation. I registered and enrolled 3 orderer users and 4 peers. Each orderer has access to the crypto for all 3 orderers and all 4 peers. Genesis block and channel tx creation on orderer1 complete without issue. I copy the genesis block to all 3 orderers and they start without errors. I copy the channel tx to peer1; peer1 starts fine. Channel creation against orderer1 fails with "Attempted to include a member which is not in the consortium".
Using profiles to define the network and channel. It has seemed to me that each orderer or peer is an 'Organization', as I assume they reference their own crypto. So I have 3 orderers listed in the Orderer definition and 4 organizations (1 for each peer) in the Consortium definition. Am I supposed to be generating 1 set of crypto for an org and using that for all orderers and peers in that org (from its CA server)? Does the org name in the peer certs from the CA server need to match the organization in the genesis/channel definition? I remove the orderer and peer data files and restart on every attempt. Do more peers need to be started? I'm not getting any policy violations.
I've got to be close. What stupid thing do I not understand?
info.txt
@jworthington in the configtx when you declare orgs, you have to provide the MSP of the orgs, not of the peers. E.g. `/etc/hyperledger/msp/byrlin` (assuming there are org certs in the byrlin folder) instead of `/etc/hyperledger/msp/byrlin/peer1/msp`. Same for the orderers.
so the org section of the application side of the network should look like:
```
Organizations:
    - *BPNBelltane
    - *BPNByrlin
```
and of the orderer:
```
Organizations:
    - *BPNOrdererOrg
```
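Those aliases would then be defined in the Organizations section of configtx.yaml. A hedged sketch (names, IDs, and paths here are examples, not from the original messages), where each `MSPDir` points at the org-level MSP folder rather than a peer's `msp` folder:
```
Organizations:
    - &BPNOrdererOrg
        Name: BPNOrdererOrg
        ID: BPNOrdererOrgMSP
        MSPDir: /etc/hyperledger/msp/ordererOrg

    - &BPNBelltane
        Name: BPNBelltane
        ID: BPNBelltaneMSP
        MSPDir: /etc/hyperledger/msp/belltane

    - &BPNByrlin
        Name: BPNByrlin
        ID: BPNByrlinMSP
        MSPDir: /etc/hyperledger/msp/byrlin
```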
Thanks. I always thought that made more sense. So then I use the CA server to register and enroll an 'org' with the same name as listed in the config?
at that point you must already have the org certs
I know it's recommended to have all orderers in the same org. Can I have 2?
I guess you can, but with kafka it does not make much sense since it's not BFT-consensus (IMHO)
I don't know, but I may get the question or have a member insist.
I mean, you can do it if you want, but you don't gain anything in terms of byzantine fault tolerance, since kafka is not BFT
When I register an org do I need to include --id.attrs 'admin=true:ecert' so it will match the channel writer policy?
Thanks. Let me rebuild this thing another couple hundred times until I understand it. ;)
I don't know how to register an org with CA. I suggest you ask on #fabric-ca. Also, the CA itself needs already to run with some certificates, which should be the root org certs (i.e. the same certs you need to provide in the configtx.yaml)
I considered that, but seems odd to use the CA root for the org. To me.
I think if you start the CA without any certs, it will generate some new certs and you can use those as org certs. But maybe there is some other way.
@jworthington the CA can run with intermediate certs, but when it issues peer/user certs, those certs need to be signed by a root or by some intermediate cert that chains to a root which is in the channel config block.
Thx
Hi @jyellick and @kostas, as Fabric moves forward, will conventional orderer reboots be supported? I've tested that Kafka and solo can support at most two reboots (more will fail), and Kafka can only reboot from a clean container (without ledger data). Thanks
@Glen Arbitrary numbers of restarts should be supported with both consensus types, if you can provide reproducible steps for a failure with either consensus mechanism, I would be very curious to try to follow them.
https://pastebin.com/7tNRN28b here is the log
@jyellick if we want to add a new orderer to an existing ordering service, how can we do that?
Has joined the channel.
Has joined the channel.
Hi guys, can you please help me?
I have brought up a network with 3 orderers, 4 kafka brokers, 3 zookeepers, and 2 orgs with one peer each.
I am able to create channel, join channel; install,instantiate,invoke,query SmartContract.(using CLI)
But when I bring down an orderer and use that orderer to instantiate a smart contract, I am getting the following error:
2017-12-05 12:21:21.046 UTC [grpc] Printf -> DEBU 005 grpc: addrConn.resetTransport failed to create client transport: connection error: desc = "transport: Error while dialing dial tcp: lookup orderer0.example.com on 127.0.0.11:53: no such host"; Reconnecting to {orderer0.example.com:7050
Clipboard - December 5, 2017 6:08 PM
@ancythomas why do you bring the orderer down?
in the command I see that you use the orderer which you turned off, so use another orderer which is online
Hi Vadim, I have brought down the orderer to test the real-world scenario where an orderer can be down (error cases).
in the real world scenario you would run cli command against the running orderer. If you receive an error, you should retry with another orderer
When 3 orderers are present in the Kafka network and one is down, what is the expected behaviour of the instantiate command in the CLI when using that same orderer (which is down)? Is it expected to pick up the available orderers by itself?
well you need to specify the orderer explicitly using the `-o` parameter
it won't pick up other orderers
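In other words, failover is up to the client. A hedged example of pointing the CLI at a healthy orderer explicitly (hostnames, channel, and chaincode args are placeholders, not from the original messages):
```
# orderer0 is down, so target another orderer explicitly with -o:
peer chaincode instantiate -o orderer1.example.com:7050 \
    -C mychannel -n mycc -v 1.0 -c '{"Args":["init"]}'
```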
@Glen Unfortunately, that log is not very useful for resolving your problem. Could you please provide steps to reproduce that log?
Adding a new orderer is as simple as starting a new binary with the same genesis block as the other orderers. Once the orderer is up and functioning correctly, submit a config update to each channel, adding that orderer's address to the list of orderer addresses. [ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=xgh9Gns9DLZNbY8ic)
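The config-update half of that procedure can be sketched with configtxlator. A hedged outline under the assumption of a recent (v1.1-era) configtxlator with direct subcommands; older releases expose the same operations only via its REST server, and channel name, orderer address, and file names here are examples:
```
# 1. Fetch the current config block for the channel
peer channel fetch config config_block.pb -o orderer0.example.com:7050 -c mychannel

# 2. Decode it to JSON and extract the config section
configtxlator proto_decode --input config_block.pb --type common.Block \
  | jq .data.data[0].payload.data.config > config.json

# 3. Copy config.json to modified_config.json and append the new orderer's
#    endpoint to the OrdererAddresses value

# 4. Encode both versions and compute the delta
configtxlator proto_encode --input config.json --type common.Config --output config.pb
configtxlator proto_encode --input modified_config.json --type common.Config --output modified_config.pb
configtxlator compute_update --channel_id mychannel \
  --original config.pb --updated modified_config.pb --output update.pb

# 5. Wrap update.pb in a signed envelope and submit it with `peer channel update`
#    (see the channel_update doc linked above for the envelope step)
```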
Hello - on one of our setups, the orderer fails to connect with `Connecting to the Kafka cluster`, and looking at its log, it is for `setupChannelConsumerForChannel`. Any hint where we should troubleshoot this?
We see only 1 channel able to successfully complete `setupChannelConsumerForChannel`; the rest of the channels (including the system channel) are having this issue.
The other connect points, `setupParentConsumerForChannel` and `setupProducerForChannel`, have received the connect message response - assuming they are good.
This is a new setup, and initially for ~3 days everything was good. There was 1 orderer (out of 3) which got restarted, and the above problem was seen. The other 2 orderers were running w/o any problem. We restarted the other 2 orderers (docker service), and this problem started for the remaining ones too.
@rahulhegde: Please create a bug in JIRA for this. Attach logs (make sure you've done this: https://hyperledger-fabric.readthedocs.io/en/release/kafka.html#debugging) and copy @sanchezl.
Has joined the channel.
What does this mean? [orderer/consensus/kafka] processRegular -> WARN 28c0 [channel: bpndev3] This orderer is running in compatibility mode
@jworthington: That you haven't enabled the 1.1 capabilities for the ordering service yet. You are basically running in legacy, 1.0 mode.
Ha. k. It is 1.1.0-preview. How do I enable the new capabilities?
Although maybe I should read about what they are before I enable them. ;)
You beat me to it. I can show you how to do it (and perhaps @jyellick is aware of a document that is WIP or already in `master` that talks about it in detail), but I'd advise against it. (If you are curious and assuming you are not running anything for production, pull the latest master and use a configtxgen profile that sets this: https://github.com/hyperledger/fabric/blob/master/sampleconfig/configtx.yaml#L125..L126)
For the ordering service, enabling the 1.1 capabilities means you should expect a noticeable increase in message throughput (and some other fixes regarding missing policies, etc.).
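The flags themselves live in configtx.yaml. A hedged sketch based on the sampleconfig linked above (this is pre-release master, so key names may still shift):
```
Capabilities:
    Global: &ChannelCapabilities
        V1_1: true
    Orderer: &OrdererCapabilities
        V1_1: true
    Application: &ApplicationCapabilities
        V1_1: true
```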
Sweet. thx. I can wait. I just don't like warnings.
Roger, I hear you.
Good. About to do some throughput testing, so I'll do some now and then compare later.
Hello All, I am trying to set up a Kafka cluster with 2 orderers in my application, with 4 Kafka instances and 3 ZooKeepers. When using the SDK to create the channel, I am getting a SERVICE_UNAVAILABLE error. Upon checking the logs of the orderer, it was throwing these logs:
```2017-12-06 11:10:05.502 UTC [orderer/kafka] Enqueue -> DEBU 4dc [channel: testchainid] Enqueueing envelope...
2017-12-06 11:10:05.502 UTC [orderer/kafka] Enqueue -> WARN 4dd [channel: testchainid] Will not enqueue, consenter for this channel hasn't started yet
2017-12-06 11:10:05.502 UTC [orderer/main] func1 -> DEBU 4de Closing Broadcast stream
2017-12-06 11:10:09.924 UTC [orderer/kafka] try -> DEBU 4df [channel: testchainid] Connecting to the Kafka cluster
.
.
.
2017-12-06 11:12:14.924 UTC [orderer/kafka] try -> DEBU 4f8 [channel: testchainid] Connecting to the Kafka cluster
2017-12-06 11:12:19.834 UTC [grpc] Printf -> DEBU 4f9 grpc: Server.Serve failed to complete security handshake from "172.19.0.1:37570": EOF
2017-12-06 11:12:19.924 UTC [orderer/kafka] try -> DEBU 4fa [channel: testchainid] Connecting to the Kafka cluster
.
.
.
2017-12-06 11:15:00.678 UTC [orderer/kafka] retry -> DEBU 51b [channel: testchainid] Switching to the long retry interval
2017-12-06 11:15:00.678 UTC [orderer/kafka] try -> DEBU 51c [channel: testchainid] Retrying every 5m0s for a total of 12h0m0s
.
.
.
2017-12-06 23:15:00.678 UTC [orderer/kafka] try -> DEBU 5ac [channel: testchainid] Connecting to the Kafka cluster
2017-12-06 23:15:01.431 UTC [orderer/kafka] startThread -> CRIT 5ad [channel: testchainid] Cannot set up producer = kafka: client has run out of available brokers to talk to (Is your cluster reachable?)
panic: [channel: testchainid] Cannot set up producer = kafka: client has run out of available brokers to talk to (Is your cluster reachable?)
goroutine 19 [running]:
panic(0xb31bc0, 0xc420366c30)
/opt/go/src/runtime/panic.go:500 +0x1a1
github.com/hyperledger/fabric/vendor/github.com/op/go-logging.(*Logger).Panicf(0xc4201c0600, 0xc6ca2d, 0x29, 0xc420904ae0, 0x2, 0x2)
/opt/gopath/src/github.com/hyperledger/fabric/vendor/github.com/op/go-logging/logger.go:194 +0x127
github.com/hyperledger/fabric/orderer/kafka.startThread(0xc420790cf0)
/opt/gopath/src/github.com/hyperledger/fabric/orderer/kafka/chain.go:153 +0xfca
created by github.com/hyperledger/fabric/orderer/kafka.(*chainImpl).Start
/opt/gopath/src/github.com/hyperledger/fabric/orderer/kafka/chain.go:94 +0x3f
```
Mysteriously it was also trying to check for a channel `testchainid` which I don't remember having configured anywhere.
`2017-12-06 23:15:00.678 UTC [orderer/kafka] try -> DEBU 5ac [channel: testchainid] Connecting to the Kafka cluster`
Any ideas? :) Thanks
@ArnabChatterjee: This error is indicative of a malfunctioning Kafka cluster.
Have you setup a Kafka cluster before?
no @kostas
any help please?
Sure. So here's what I would suggest.
Forget about the Kafka-based ordering service for a second.
Just try to complete the first 6 steps of the Kafka Quickstart Guide: https://kafka.apache.org/quickstart
Once you get this running, use our Kafka guide: https://hyperledger-fabric.readthedocs.io/en/release/kafka.html
Then you should be good to go.
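For reference, the broker-side knobs that the Fabric Kafka guide calls out map to entries like these in each broker's server.properties. The values shown are illustrative assumptions, not from the messages above; pick the replication numbers per the guide's advice for your cluster size:
```
unclean.leader.election.enable = false   # don't let an out-of-sync replica become leader
min.insync.replicas = 2                  # a value M with 1 < M < N
default.replication.factor = 3           # close to N, so channels survive broker loss
message.max.bytes = 103809024            # at least Orderer.AbsoluteMaxBytes (99 MB default)
replica.fetch.max.bytes = 103809024      # keep in sync with message.max.bytes
log.retention.ms = -1                    # disable time-based pruning of the log
```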
Okay thanks. will try that. :) Thanks
> You beat me to it. I can show you how to do it (and perhaps @jyellick is aware of a document that is WIP or already in `master` that talks about it in detail), but I'd advise against it.
I'm unaware of any documentation currently in progress. For now, a bit focused on getting code merged, will move to focus on doc for those features, likely starting next week
@kostas - Thanks. It is working now. I am able to create and join channel. Any plans to provide a link to https://kafka.apache.org/quickstart on https://hyperledger-fabric.readthedocs.io/en/release/kafka.html ? I think it will help new users rather than assuming user knows about kafka-zookeeper ensemble.
hey all. Is it possible to update the orderer genesis block?
for example to onboard a new org
@gbolo Certainly, this is possible, and may be done at runtime without service interruption
https://github.com/hyperledger/fabric/blob/master/docs/source/channel_update.rst
@jyellick thanks
@ArnabChatterjee: This one's tricky. I am not comfortable suggesting to someone to setup a Kafka-based ordering service if they're not well versed in Kafka matters. There are _a lot_ of things that can go wrong. This is why I have this warning at the very top of the Kafka document. As things stand, our audience is folks who know how to set up Kafka/ZK without any pointers. (I understand this limits our audience, but it is what it is.) That said, we are compiling an ordering service FAQ and one of the QAs there touches on your very question and points to https://kafka.apache.org/quickstart
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=xEfcPGuuPxhRyNFrP) @kostas I found Kafka MUCH easier than MSP. ;)
Has joined the channel.
Has joined the channel.
Has left the channel.
@jyellick @kostas @jeffgarratt can you explain how to reconfigure the channel? I know it sends the config_update message to the specified channel; I want to know how to set the read_set and write_set when I need to modify (add or delete) a ConfigGroup/ConfigPolicy/ConfigValue. And I know you provide the configtxlator tool to do it, but I want to write it in Java, so can you help me?
@asaningmaxchain: I'm assuming you are familiar with this doc? https://hyperledger-fabric.readthedocs.io/en/release/configtx.html#configuration-updates
You might also look at https://github.com/hyperledger/fabric/tree/master/common/tools/configtxlator/update
@jyellick so when the specified channel receives the config update message, does it use https://github.com/hyperledger/fabric/tree/master/common/tools/configtxlator/update to deal with the message?
No, that is code which can take a source config and a modified config, and produce the requisite update to transition between them
The actual config update processing code is `common/configtx` and `common/channelconfig`
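For the Java reimplementation question above, here is a toy Python sketch of the idea behind computing an update. This is not Fabric's actual protobuf code: it handles only a flat map of keys (no nested groups, policies, or deletions), but it illustrates how unchanged elements get pinned in the read_set at their current version while changed or new elements land in the write_set with their version bumped:

```python
def compute_update(original, modified):
    """Toy compute_update: both args map key -> {"version": int, "value": ...}.

    Returns (read_set, write_set) in the spirit of a ConfigUpdate:
    - read_set pins unchanged elements at their current version,
    - write_set carries changed/new elements with the version incremented
      (new elements start at version 0).
    """
    read_set, write_set = {}, {}
    for key, mod in modified.items():
        orig = original.get(key)
        if orig is not None and orig["value"] == mod["value"]:
            # Unchanged: reference it in the read_set at its current version.
            read_set[key] = {"version": orig["version"]}
        else:
            # Changed or newly added: write it with the version bumped.
            prev = orig["version"] if orig is not None else -1
            write_set[key] = {"version": prev + 1, "value": mod["value"]}
    return read_set, write_set

original = {
    "BatchSize":    {"version": 0, "value": 10},
    "BatchTimeout": {"version": 0, "value": "2s"},
}
modified = {
    "BatchSize":    {"version": 0, "value": 100},  # bump the batch size
    "BatchTimeout": {"version": 0, "value": "2s"},
}
rs, ws = compute_update(original, modified)
print(rs)  # {'BatchTimeout': {'version': 0}}
print(ws)  # {'BatchSize': {'version': 1, 'value': 100}}
```

The real rules are richer (recursive groups, mod_policy evaluation against the read_set versions, etc.), so treat this only as a mental model for porting to Java.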
@jyellick I got it; the link tells me how to produce the config update message which should be included in the ConfigUpdateEnvelope
@jyellick @kostas thx very much
Has joined the channel.
Has joined the channel.
@jyellick I pulled the master branch; it doesn't generate the hyperledger/fabric-ccenv images
@jyellick @kostas can I modify the MSP value for the sake of reconfiguring the channel configuration?
@jyellick I think the orderer node should provide a status service, to let developers know the status of the node
@jyellick can I modify the MSP value for the sake of reconfiguring the channel config?
@jyellick I see the peer provides the AtomicBroadcastServer service; does that mean the client can get the committed blocks?
@asaningmaxchain
You may update an MSP definition when reconfiguring the channel config. However, you may not change the MSP ID.
The idea with the `Deliver` service on the peer is that yes, clients may now query the peer for a stream of blocks, rather than querying the orderer directly.
the tx in the block is committed?
For my edification:
> However, you may not change the MSP ID.
Do we know how exactly would that fail?
> the tx in the block is committed?
The block metadata will contain the information on which txes committed successfully and which did not
> Do we know how exactly would that fail?
Indeed, the orderer will reject it because of: https://github.com/hyperledger/fabric/blob/master/common/channelconfig/bundle.go#L100 or https://github.com/hyperledger/fabric/blob/master/common/channelconfig/bundle.go#L118
@jyellick do you provide examples of all the possible modifications of the channel config? Because I want to verify my Java code has the same function as your code
@asaningmaxchain I do not understand
When I want to reconfigure the channel config, I can modify anything to test; however, the test space is very large, so can you provide some examples for me? I know you provide reconfig_batchsize and reconfig_membership, but that's not enough. Do you have all the tests?
The config framework operates on very simple rules. There are groups, values, and policies. I would recommend testing a reconfiguration of each of these three. Batch size is a value, membership is a group, I suggest also modifying a policy, such as `/Channel/Application/Admins`
OK, I got it
Has joined the channel.
@jyellick @kostas in SBFT, if an orderer is left behind, does that mean it needs to sync blocks from other orderers? The Hello message may apply only to the one-batch case
@Glen State transfer is something that is not present in SBFT -- strictly speaking, it's not necessary unless there were a reconfiguration which occurred.
Hi! I'm trying to create a new channel (default-channel) with:
```
peer channel create -o my-orderer:7050 --tls --cafile ORDERER_TLS_CAFILE -c defaultchannel -f /etc/hyperledger/configtx/default-channel.tx
```
and I get the error below:
```
2017-12-12 14:24:00.405 UTC [orderer/kafka] Enqueue -> DEBU 11c1 [channel: testchainid] Enqueueing envelope...
2017-12-12 14:24:00.405 UTC [orderer/kafka] Enqueue -> WARN 11c2 [channel: testchainid] Will not enqueue, consenter for this channel hasn't started yet
2017-12-12 14:24:00.409 UTC [orderer/common/deliver] Handle -> WARN 11c4 Error reading from stream: rpc error: code = Canceled desc = context canceled
2017-12-12 14:24:00.405 UTC [orderer/main] func1 -> DEBU 11c3 Closing Broadcast stream
2017-12-12 14:24:00.409 UTC [orderer/main] func1 -> DEBU 11c5 Closing Deliver stream
```
Do you have any idea what's going on?
@vsadriano This usually indicates that you are attempting to create the channel in a scripted environment, but not giving the orderer and especially Kafka time to finish starting up.
(Or, potentially your Kafka cluster or your connection to it is not configured correctly -- but most often the cluster is simply not started yet)
@jyellick thanks! I will try to set a sleep parameter.
@vsadriano You can see here: https://github.com/hyperledger/fabric/blob/master/examples/e2e_cli/scripts/script.sh#L57-L84 how the e2e_cli example waits for things to startup
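The waiting pattern in that script boils down to retrying with a delay. A hedged, generic POSIX-sh sketch (the commented `peer channel create` line is an example invocation, not taken from the script; real scripts would use a longer sleep):

```shell
#!/bin/sh
# retry MAX CMD...: run CMD until it succeeds, or give up after MAX attempts.
retry() {
  max=$1; shift
  n=1
  until "$@"; do
    if [ "$n" -ge "$max" ]; then
      echo "giving up after $n attempts" >&2
      return 1
    fi
    n=$((n + 1))
    sleep 1   # bump this (e.g. to 3s) when waiting on Kafka/orderer startup
  done
}

# Example (hypothetical invocation) of wrapping channel creation:
# retry 5 peer channel create -o orderer0:7050 -c mychannel -f channel.tx
```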
@jyellick I have a question, can you help me?
I am using Java to reconfigure the channel configuration. I use the configupdate/reconfig_membership example to test it, and I have a question: how do you set the MSP for the ExampleOrg?
However, when I use the Java code to do it, it tells me
`setting up the MSP manager failed: getCertFromPem error: could not decode pem bytes`
https://github.com/hyperledger/fabric/blob/fc4298bdbe0e2f79f8a3a80c31ba0ac46dc91096/msp/mspimpl.go#L118
can you give me a clue?
https://github.com/hyperledger/fabric/blob/release/msp/configbuilder.go#L161 this is the code which builds that protobuf structure
I copied your ExampleOrg config; it works for you, however when I do it, it's wrong
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=YbbcuRXhagDTy5FRw) @jyellick Thank you so much!
@jyellick I just tried creating the assets one by one, but the error continues. Any idea?
How can I test the Orderer in isolation?
How are you starting your orderer?
Kubectl.
1st: ZooKeeper
2nd: Kafka
3rd: orderer
All manually.
I use Headless Service for Zookeeper.
If you are starting your orderer manually, then you may use the sample clients in `fabric/orderer/sample_clients/`
In particular, there is `orderer/sample_clients/broadcast_msg/` and `orderer/sample_clients/deliver_stdout/`
You may build either of these by typing `go build` in the directory
`broadcast_msg` will send one or more messages into the orderer, and `deliver_stdout` will write blocks to the screen as they are created.
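A hedged sketch of how that looks on the command line (flags and defaults vary by version, so check `-help` on each binary; the background `&` is just one way to keep the watcher running):
```
cd $GOPATH/src/github.com/hyperledger/fabric/orderer/sample_clients/deliver_stdout
go build
./deliver_stdout &    # streams blocks to the screen as they are cut

cd ../broadcast_msg
go build
./broadcast_msg       # sends test envelope(s) to the orderer
```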
However, as a first step, I would confirm that your Kafka cluster has been created
You should *at a minimum* complete the first 6 steps of the Kafka Quickstart guide: https://kafka.apache.org/quickstart before experimenting with the Kafka-based ordering service.
Ok. I'll try. Thanks!:thumbsup:
Hi @jyellick! The cluster starts up successfully with docker-compose (local workstation). All tests are OK! In k8s I get the error below:
```
[2017-12-13 13:06:30,375] WARN Error while fetching metadata with correlation id 0 : {TesteT=LEADER_NOT_AVAILABLE} (org.apache.kafka.clients.NetworkClient)
```
I'll search for an explanation and solution. Thanks!
That means that your Kafka cluster is not set up properly.
@kostas obviously! I just added the parameters to my Deployment:
```
- name: KAFKA_ADVERTISED_HOST_NAME
value: ${headless_service_name}
- name: KAFKA_ADVERTISED_PORT
value: "9092"
- name: KAFKA_LISTENERS
value: "PLAINTEXT://0.0.0.0:9092"
```
The cluster is on.
I'll run orderer service now.
I downloaded the Fabric 1.0 code from GitHub, but there is no PBFT or SBFT algorithm, and I only find gossip there. Is there no consensus in Fabric 1.0?
@alix The concept of consensus on order is encapsulated by the ordering service. Consensus on transaction output is achieved via the endorsement procedure
@jyellick I see the PBFT algorithm in the Fabric 0.6 source code. So we can make our own consensus policy instead of using Raft or PBFT in Fabric 1.0?
@jyellick I mean the developer can make their own endorsement policy.
@jyellick [ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=zZnreuFczrSEHRR2w) as you said, if a reconfiguration happened, how can we deal with the reconfiguration block? Could you please provide some advice?
Hi! I'm trying to create a new channel (defaultchannel) after applying the new settings on the Kafka cluster. Apparently the cluster is OK, but I get this new error from the orderer:
```
[sarama] 2017/12/14 11:57:55.742247 async_producer.go:744: producer/broker/0 state change to [retrying] on testchainid/0 because kafka server: Tried to send a message to a replica that is not the leader for some partition. Your metadata is out of date.
```
I think the orderer service is sending its messages **ONLY** to the first broker in the list:
```
- kafka0:9092
- kafka1:9092
- kafka2:9092
- kafka3:9092
```
Any idea? Thanks!
> I think the orderer service is sending its messages *ONLY* to the first broker in the list:
@vsadriano: Your Kafka cluster is not configured properly.
@kostas @jyellick in the fabric network, can we set up just one orderer?
@asaningmaxchain: Not sure what the question is?
Can one fabric network have only one orderer node?
If you're asking whether a network can have only one orderer node, the answer is yes, though that means you have an ordering service with no fault tolerance. This wouldn't be a sane way of deploying a network.
@kostas thx for your answer. If a network has only one orderer node, a hacker can attack it all the time
Correct.
@kostas Is it possible to configure the block size or interval for the orderer? I looked in the yaml file but did not see any environment variable.
https://github.com/hyperledger/fabric/blob/release/sampleconfig/configtx.yaml#L144
https://github.com/hyperledger/fabric/blob/release/sampleconfig/configtx.yaml#L147
You set and modify these the same way you set/modify any value that is set in the channel's configuration.
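For reference, these values live under the Orderer section of configtx.yaml; the defaults shown here are the ones from the linked sampleconfig:
```
Orderer: &OrdererDefaults
    BatchTimeout: 2s
    BatchSize:
        MaxMessageCount: 10
        AbsoluteMaxBytes: 99 MB
        PreferredMaxBytes: 512 KB
```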
Has joined the channel.
Hey, can I make a channel config such that, no matter how many organizations join that specific channel, I only need the signature of the admin of one specific organization to add more organizations, rather than signatures from all organizations?
@rock_martin: Yes.
Has joined the channel.
@rock_martin - you define this within the channel config block. Go through the reconfig tutorial. In your scenario, rather than adding a new organization you would modify the channel Admins mod_policy field. See this snippet as an example where we explicitly define Org1 & Org3 as being required to sign off on any modification to the channel definition ...
```{
  "Admins": {
    "mod_policy": "Admins",
    "policy": {
      "type": 1,
      "value": {
        "identities": [
          {
            "principal": {
              "msp_identifier": "Org1MSP",
              "role": "ADMIN"
            }
          },
          {
            "principal": {
              "msp_identifier": "Org3MSP",
              "role": "ADMIN"
            }
          }
        ],
        "rule": {
          "n_out_of": {
            "n": 2,
            "rules": [
              {
                "signed_by": 0
              },
              {
                "signed_by": 1
              }
            ]
          }
        }
      }
    },
    "version": "0"
  },
```
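To make the snippet above concrete, here is an illustrative Go sketch (not fabric's actual cauthdsl evaluator) of how the `n_out_of` rule is satisfied: with `n: 2` over `signed_by: 0` (the Org1MSP admin) and `signed_by: 1` (the Org3MSP admin), a config update passes only when both principals have signed.

```go
package main

import "fmt"

// Illustrative-only evaluation of an n_out_of rule. rules holds
// indices into the policy's identities list; signedBy records which
// principals produced a valid signature on the config update.
func nOutOf(n int, rules []int, signedBy map[int]bool) bool {
	satisfied := 0
	for _, idx := range rules {
		if signedBy[idx] {
			satisfied++
		}
	}
	return satisfied >= n
}

func main() {
	// The example policy: n=2 over signed_by 0 (Org1MSP admin) and
	// signed_by 1 (Org3MSP admin).
	rules := []int{0, 1}
	fmt.Println(nOutOf(2, rules, map[int]bool{0: true}))          // Org1 alone: false
	fmt.Println(nOutOf(2, rules, map[int]bool{0: true, 1: true})) // both signed: true
}
```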
I am trying to find how the Hyperledger Kafka release (say, v1.0.4) implements Step 5 from http://hyperledger-fabric.readthedocs.io/en/release/kafka.html?highlight=kafka#steps to disable pruning.
Hello, @kostas
Scenario 1: If my file-system for all orderers has gone bad, how would I recover it using only Kafka?
Scenario 2: If my Kafka-ZooKeeper ensemble file-system has gone bad, can I build up its data using the orderer ledger data?
Could these scenarios be resolved via an operational procedure using the existing capabilities of Fabric 1.0.x?
Thanks.
@rahulhegde
> Scenario 1: If my file-system for all orderers has gone bad, how would I recover it using only Kafka?
Assuming that the logs have not rolled, simply starting a new orderer binary with the original genesis block _should_ be sufficient to re-bootstrap your system. Effectively, since orderers do not talk to each other directly (only through Kafka), this recovery is no different than adding a new orderer.
> Scenario 2: If my Kafka-ZooKeeper ensemble file-system has gone bad, can I build up its data using the orderer ledger data?
This is not really a scenario that I see a good recovery mechanism for. The blockchain metadata stores the Kafka offsets to know where to resume processing, so if there were some way to force Kafka to store specific messages at specific offsets, it might be possible, but in general, I would say this is simply an unsupported scenario. Kafka has crash fault parameters (RF and ISR). If the crash threshold is exceeded, then all bets are off.
If this were to somehow happen, I think the recovery would have to be somewhat manual. First, I would collect the longest chain for each channel into a central location. Then, I would append a custom (but properly signed) block to each chain which manually encoded new offsets (to zero). I would then propagate this full set of chains to all orderers. And then from a Kafka perspective, it would be a fresh start, but the blockchain would be unbroken. Obviously this would be a challenging procedure, and one best avoided by simply configuring sufficient fault tolerance in the Kafka cluster.
@nickgaski Thanks man !!!
Has left the channel.
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=47ryG6QEFL2AteaMd) @jyellick
Thanks Jason for your response.
@kostas @jyellick
Q2: Looking for confirmation - how do I disable time-based log retention for a fabric-1.0.x setup? Is it by following the sample shown in the BDD test at https://github.com/hyperledger/fabric/blob/master/bddtests/dc-orderer-kafka-base.yml#L57?
@rahulhegde Yes, I believe this is correct, @sanchezl or @kostas, can you confirm?
Has joined the channel.
Hi all,
We recently moved to kafka for our orderer and last night we saw some intermittent issues with writing transactions to fabric
Specifically we saw the below on the orderer log
```
2017-12-19 16:04:35.698 UTC [orderer/main] func1 -> DEBU 41d0 Closing Deliver stream
2017-12-19 17:04:35.208 UTC [orderer/main] Deliver -> DEBU 41d1 Starting new Deliver handler
2017-12-19 17:04:35.208 UTC [orderer/common/deliver] Handle -> DEBU 41d2 Starting new deliver loop
2017-12-19 17:04:35.208 UTC [orderer/common/deliver] Handle -> DEBU 41d3 Attempting to read seek info message
2017-12-19 17:04:35.208 UTC [orderer/common/deliver] Handle -> DEBU 41d4 Rejecting deliver because channel loyyalchannel not found
2017-12-19 17:04:35.209 UTC [orderer/main] func1 -> DEBU 41d5 Closing Deliver stream
2017-12-19 17:04:35.311 UTC [orderer/main] Deliver -> DEBU 41d6 Starting new Deliver handler
2017-12-19 17:04:35.311 UTC [orderer/common/deliver] Handle -> DEBU 41d7 Starting new deliver loop
2017-12-19 17:04:35.311 UTC [orderer/common/deliver] Handle -> DEBU 41d8 Attempting to read seek info message
2017-12-19 17:04:35.311 UTC [orderer/common/deliver] Handle -> DEBU 41d9 Rejecting deliver because channel loyyalchannel not found
```
This seems to be an intermittent issue, in that it self-corrected and transactions began flowing again
*Has anyone seen anything similar?*
@JohnWhitton Had you just created this channel? It may take a short period of time between creating a channel and having all orderers become aware of it
@jyellick No it was running and processing transactions and then it stopped working.
We resolved this by restarting the peer and the composer rest server
and then it started processing transactions again
@JohnWhitton Once an orderer is aware of a channel, I would never expect to see this message
> We resolved this by restarting the peer and the composer rest server
So you did nothing to the orderers?
Nope
> Rejecting deliver because channel *loyyalchannel* not found
Also, assuming this is the correct channel name (in particular, it is not a typo for *loyalchannel*)
yep, it is the correct channel name (as I said, transactions were flowing and then stopped)
If you can reproduce this, I would be very curious to examine the complete orderer log at DEBUG. There is no obvious way I could see this occurring.
You said you were previously using solo and recently moved to Kafka
We had some issues with our original configuration of the Kafka/zookeeper containers - when we deleted the pods they wouldn't restart - so we don't currently restart any of the orderer pods
Can you look for a message like:
```2017-12-19 15:22:36.481 EST [orderer/commmon/multichannel] NewRegistrar -> INFO 002 Starting system channel 'testchainid' with genesis block hash ec0c35e19df240d4aaa2f7e36231c6652d662c7c112896ef1cc0d727423d0dbf and orderer type solo
```
At the beginning of each of your orderer logs and paste it here?
Yep so the history was
a) we were running using solo
b) we moved to kafka (but hadn't configured it correctly for high availability)
c) We updated our kafka config so that it is now set up for high availability, and if a pod gets deleted it will spin back up and we can continue to process transactions
Before checking the log, what version of fabric are you running?
We are running Fabric 1.0.2
If you are running 1.0.x, you will see a message like:
```2017-12-19 15:25:18.240 EST [orderer/multichain] NewManagerImpl -> INFO 003 Starting with system channel testchainid and orderer type solo```
Could you find it in each of your orderer logs?
```
2017-11-27 18:51:54.400 UTC [orderer/main] main -> INFO 001 Starting orderer:
Version: 1.0.2
Go version: go1.7.5
OS/Arch: linux/amd64
2017-11-27 18:51:54.407 UTC [bccsp_sw] openKeyStore -> DEBU 002 KeyStore opened at [/etc/crypto-config/crypto-config-dev/loyyal-network/crypto-config/ordererOrganizations/cibc.loyyal-devnet.com/orderers/orderer.cibc.loyyal-devnet.com/msp/keystore]...done
2017-11-27 18:51:54.407 UTC [bccsp] initBCCSP -> DEBU 003 Initialize BCCSP [SW]
```
@JohnWhitton There should be a message containing the string `Starting with system channel` in each of your orderer logs -- it is not in what you pasted above
```
2017-11-27 18:56:01.516 UTC [orderer/common/deliver] Handle -> DEBU 0ce Attempting to read seek info message
2017-11-27 18:56:01.516 UTC [orderer/common/deliver] Handle -> DEBU 0cf Rejecting deliver because channel loyyalchannel not found
2017-11-27 18:56:01.517 UTC [orderer/main] func1 -> DEBU 0d0 Closing Deliver stream
2017-11-27 18:56:01.618 UTC [orderer/main] Deliver -> DEBU 0d1 Starting new Deliver handler
2017-11-27 18:56:01.618 UTC [orderer/common/deliver] Handle -> DEBU 0d2 Starting new deliver loop
2017-11-27 18:56:01.618 UTC [orderer/common/deliver] Handle -> DEBU 0d3 Attempting to read seek info message
2017-11-27 18:56:01.619 UTC [orderer/common/deliver] Handle -> DEBU 0d4 Rejecting deliver because channel loyyalchannel not found
2017-11-27 18:56:01.619 UTC [orderer/main] func1 -> DEBU 0d5 Closing Deliver stream
```
```
2017-11-27 18:51:54.489 UTC [fsblkstorage] nextBlockBytesAndPlacementInfo -> DEBU 0c2 Remaining bytes=[6734], Going to peek [8] bytes
2017-11-27 18:51:54.489 UTC [fsblkstorage] nextBlockBytesAndPlacementInfo -> DEBU 0c3 Returning blockbytes - length=[6732], placementInfo={fileNum=[0], startOffset=[0], bytesOffset=[2]}
2017-11-27 18:51:54.489 UTC [orderer/multichain] newChainSupport -> DEBU 0c4 [channel: testchainid] Retrieved metadata for tip of chain (blockNumber=0, lastConfig=0, lastConfigSeq=0):
2017-11-27 18:51:54.489 UTC [orderer/multichain] NewManagerImpl -> INFO 0c5 Starting with system channel testchainid and orderer type solo
2017-11-27 18:51:54.489 UTC [orderer/main] main -> INFO 0c6 Beginning to serve requests
2017-11-27 18:55:50.252 UTC [orderer/main] Deliver -> DEBU 0c7 Starting new Deliver handler
2017-11-27 18:55:50.252 UTC [orderer/common/deliver] Handle -> DEBU 0c8 Starting new deliver loop
2017-11-27 18:55:50.252 UTC [orderer/common/deliver] Handle -> DEBU 0c9 Attempting to read seek info message
2017-11-27 18:55:50.297 UTC [orderer/common/deliver] Handle -> DEBU 0ca Rejecting deliver because channel loyyalchannel not found
2017-11-27 18:55:50.297 UTC [orderer/main] func1 -> DEBU 0cb Closing Deliver stream
2017-11-27 18:56:01.516 UTC [orderer/main] Deliver -> DEBU 0cc Starting new Deliver handler
2017-11-27 18:56:01.516 UTC [orderer/common/deliver] Handle -> DEBU 0cd Starting new deliver loop
2017-11-27 18:56:01.516 UTC [orderer/common/deliver] Handle -> DEBU 0ce Attempting to read seek info message
2017-11-27 18:56:01.516 UTC [orderer/common/deliver] Handle -> DEBU 0cf Rejecting deliver because channel loyyalchannel not found
2017-11-27 18:56:01.517 UTC [orderer/main] func1 -> DEBU 0d0 Closing Deliver stream
```
`Starting with system channel testchainid and orderer type solo`
This is your problem
You are not actually running with the Kafka consensus mechanism
So each of your orderers is functioning independently
So, you have effectively started two different parallel blockchains. The transient error you were seeing was likely because you had only created the channel on one of the orderers, and you later created it on the other
Yeah, it looks like we have two orderers; the other orderer log says
```
2017-12-19 16:05:26.386 UTC [orderer/common/broadcast] Handle -> DEBU 604e [channel: loyyalchannel] Broadcast has successfully enqueued message of type ENDORSER_TRANSACTION
2017-12-19 16:05:26.386 UTC [fsblkstorage] nextBlockBytesAndPlacementInfo -> DEBU 604f Returning blockbytes - length=[17578], placementInfo={fileNum=[0], startOffset=[2104867], bytesOffset=[2104870]}
2017-12-19 16:05:26.388 UTC [orderer/common/deliver] Handle -> DEBU 6050 [channel: loyyalchannel] Delivering block for (0xc4207f7660)
2017-12-19 16:05:26.389 UTC [orderer/common/broadcast] Handle -> DEBU 6051 Received EOF, hangup
2017-12-19 16:05:26.389 UTC [orderer/main] func1 -> DEBU 6052 Closing Broadcast stream
2017-12-19 16:05:26.390 UTC [msp] SatisfiesPrincipal -> DEBU 6053 Checking if identity satisfies MEMBER role for CIBCDevnetOrgMSP
2017-12-19 16:05:26.390 UTC [msp] Validate -> DEBU 6054 MSP CIBCDevnetOrgMSP validating identity
2017-12-19 16:05:26.391 UTC [cauthdsl] func2 -> DEBU 6055 0xc420028618 principal matched by identity 0
2017-12-19 16:05:26.393 UTC [msp/identity] Verify -> DEBU 6056 Verify: digest = 00000000 22 ce 7b 48 48 ef f9 d8 4e 82 37 ff 15 56 d7 e5 |".{HH...N.7..V..|
```
If you wish to use multiple orderers, they must all be bootstrapped with the same genesis block, and this genesis block must be configured to use the Kafka consensus type
In general, the flow should be:
1. Define your orderer configuration, including the Kafka broker information in `configtx.yaml`
2. Execute `configtxgen` to create the genesis block.
3. Start as many orderers as you like, using the `file` genesis method and the block from (2)
Makes sense; looks like we've made a mistake with our config
@collins is on our DevOps team - He'll look into this further
thanks @jyellick for pointing us in the right direction
Hello @jyellick, I see the system channel block number is incremented whenever an operation like creating an application channel is performed. What is added as part of the system channel version? I don't see any application-channel-specific object in the system channel configuration block. Are more blocks added for any other non-system-channel-related operations on the system channel ledger?
*test*
Has joined the channel.
> What is added as part of the system channel version as I don't see any object which is application channel specific in system channel configuration block?
It's not clear to me why we're conflating "version" with the "application" part of a channel?
At any rate, blocks are added to the system chain when channels are created, consortiums are created/modified (i.e. modifications in '/Channel/Consortiums'), or any of the properties in the `/Channel/Orderer` configuration group of the system chain need to be modified.
@kostas @jyellick how do I debug Fabric? I want to use a dev tool to debug it and then run the e2e_cli operation
please don't just tell me to follow the debug log, I don't think that's a good choice
Has joined the channel.
Hi everyone, I already set up a network with orderer using kafka,
Is there any way to add more orderer to the existing network? What are the steps?
Thank you!
what's the purpose of adding more orderers to the network?
@handasontam
> Hi everyone, I already set up a network with orderer using kafka,
> Is there any way to add more orderer to the existing network? What are the steps?
When you bootstrapped your orderer, you should have supplied it with a genesis block created by `configtxgen`. If you did not, then you must have used the provisional genesis method which is not recommended for production. If you did this, then you may use the `peer channel fetch` command to retrieve the genesis block for the ordering system channel. Once you have the ordering system channel genesis block, simply start a new orderer process using this block to bootstrap (as well as properly generated crypto material)
> what's the propose of the adding more orderer to the network?
Crash fault tolerance, and horizontal scale
@jyellick so if one orderer shuts down, another orderer can take over. But when I submit a tx, should I invoke it twice on the different orderers?
@asaningmaxchain
> when i take the tx,i should invoke twice in different network?
If you have two orderers, you only need to invoke `Broadcast` on one of them. If you get a failure, you may wish to retry on the other.
so if the network contains more than one orderer, do the orderers share data with each other, provided they start from the same genesis block?
So long as the Kafka consensus method is used, yes. If they did not, it would not be useful.
Another question: each orderer keeps its data in `/var/production/orderer`. When I restart the network and one orderer starts first, it rebuilds the channels from the data in that folder - can this conflict?
Transactions which have not committed into blocks are stored at the Kafka broker. The orderer deterministically commits them into blocks. The contents of /var/... will always be conflict free across orderers (though one orderer may have newer blocks than another, they will never disagree about the contents of a block which both know about)
@jyellick I tried it, but it gives me an error
```2017-12-21 15:50:12.009 UTC [channelCmd] getSpecifiedBlock -> ERRO 00b Received error: rpc error: code = Unavailable desc = transport: write tcp 172.20.0.15:52606->172.20.0.14:7050: use of closed network connection
Error: proto: Marshal called with nil
Usage:
peer channel fetch
```
https://pastebin.com/apTGmGRR
@asaningmaxchain have you supplied `peer channel fetch` with `-c channel-name`?
@guoger I just added an orderer to the docker-compose-cli
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=n7RmkeZLy2932a6gZ) @asaningmaxchain also, I always find tests + logs is a very good combination
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=Bf8xKMXtotze88Nw9) @asaningmaxchain based on the log you posted, I'm assuming you are getting error trying to fetch config block?
Has joined the channel.
Hi, where do I find some examples of a configtx.yaml file with a Kafka setup with 4 brokers and 4 ordering service nodes?
@novusopt I would suggest starting from the e2e `configtx.yaml` https://github.com/hyperledger/fabric/blob/master/examples/e2e_cli/configtx.yaml#L114-L115 . You may add additional orderer addresses to the section highlighted.
Hi. I'm following the docs for reconfiguring the first network but I get the following error, `flag provided but not defined: -printOrg` when I try to print org3 config materials to a json file, i.e when I run this command `export FABRIC_CFG_PATH=$PWD && ../../bin/configtxgen -printOrg Org3MSP > ../channel-artifacts/org3.json`. Any idea why I'm getting this error?
@collins This is a flag which was introduced in the (yet unreleased) v1.1. To use this flag, you must build `configtxgen` using either the v1.1-preview branch, or the current master.
Thanks @jyellick . Is there a replacement for this flag in the current version?
@collins No, this is a new feature in v1.1. You may however safely use the v1.1 version of `configtxgen` with a v1.0 network.
Got it. Thanks for the info @jyellick
@jyellick thx
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=dF7JaiSsgrHiYX9Hu) @jyellick
Thank you.
I have an additional question:
Let's say in the beginning I set up a network with 2 orderers and 4 Kafka brokers: I specify the Orderer1 and Orderer2 addresses in configtx.yaml and use configtxgen to create the genesis block. Then I use the genesis block to start the orderer containers.
Now I want to add one more orderer: Orderer3. Here is the problem: if I use the `peer channel fetch` command to retrieve the genesis block, that genesis block doesn't include the Orderer3 address. Is there any problem with that?
@jyellick can you provide an example of a policy that is an implicit meta policy with rule = Majority or All?
@asaningmaxchain `/Channel/Application/Admins` is an example of 'Majority'
I know; can you provide an example of modifying a value/group/policy that requires `Majority`?
Adding or removing an org requires satisfying this policy
I want to know how to build such a request to the Fabric network. If Org1.Member and Org2.Member should both sign the data, is the process that Org1.Member signs the data and then Org2.Member signs the already-signed data? Or do Org1.Member and Org2.Member each sign the data independently?
When I use e2e_cli to test multiple orderers, I found some operations don't work. The precondition is that I run e2e_cli with data persistence and add another orderer in the *.yaml file. So my questions are:
1) should the channel be created on all orderer nodes? (I believe the answer is yes)
2) I can query and invoke, but the invoke doesn't take effect
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=uK5qWrf4ha6myhi4w) I have also met this problem when I tried to fetch the configuration block from the cli on fabric-1.0.0 (not 1.1 yet), but using the Java SDK to fetch and update the configuration works, so I doubt whether the cli `peer channel fetch` works.
Has joined the channel.
Hi. I am using a Kafka cluster (4 Kafka, 3 ZooKeeper, 2 orderers, 4 peers) in a multi-AZ AWS env. When I shut down 2 Kafka brokers I get the following error: ```kafka server: In the middle of a leadership election, there is currently no leader for this partition and hence it is unavailable for writes.``` Any ideas? Thanks. @kostas
@ArnabChatterjee check your zookeeper logs
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=7ymh7DppxH7di995E) @ArnabChatterjee Any orchestrator or "pure" Docker?
I needed to change my "hostname" spec on deployment (Kubernetes).
@yacovm - Can you confirm: if I have ISR = 2 with replication factor = 3, how many ZK and Kafka failures can I tolerate at a time? I am a bit confused by the formulae. Thanks
Has joined the channel.
Has joined the channel.
Hi @jyellick @kostas, I'm trying to submit a config update to the orderer but get the following error: "Rejecting CONFIG_UPDATE because: Error authorizing update: Error validating DeltaSet: Policy for [Values] /Channel/Orderer/BatchTimeout not satisfied: Failed to reach implicit threshold of 1 sub-policies, required 1 remaining". I used one orderer admin to sign it before sending it.
@Glen can you show the current config?
https://pastebin.com/WhzTndpg
https://pastebin.com/tRE6ncdJ
@asaningmaxchain I'm trying to modify the BatchTimeout
I'm not too familiar with policies; can anybody explain them? E.g., which policy in the hierarchy controls which level of operation? Thanks
are you sure you are using the orderer MSP?
orderer Admin is : &{name:ordererAdmin mspID:Org1MSP roles:[] privateKey:0xc4202642b0 enrollmentCertificate:[
I've printed the orderer admin in use; I set the user context to this user
I seem to know, let me try
not yet
you should set Org1MSP to operate it; you must use the Java SDK to operate it
no, I'm trying go sdk
why? that's the peer org?
now is Org1MSP
```
"mod_policy": "Admins",
"policies": {
  "Admins": {
    "mod_policy": "Admins",
    "policy": {
      "type": 3,
      "value": {
        "rule": "MAJORITY",
        "sub_policy": "Admins"
      }
    }
  },
  "BlockValidation": {
    "mod_policy": "Admins",
    "policy": {
      "type": 3,
      "value": {
        "sub_policy": "Writers"
      }
    }
  },
  "Readers": {
    "mod_policy": "Admins",
    "policy": {
      "type": 3,
      "value": {
        "sub_policy": "Readers"
      }
    }
  },
  "Writers": {
    "mod_policy": "Admins",
    "policy": {
      "type": 3,
      "value": {
        "sub_policy": "Writers"
      }
    }
  }
},
"values": {
  "BatchSize": {
    "mod_policy": "Admins",
    "value": {
      "absolute_max_bytes": 102760448,
      "max_message_count": 10,
      "preferred_max_bytes": 524288
    }
  },
  "BatchTimeout": {
    "mod_policy": "Admins",
    "value": {
      "timeout": "2s"
    }
  },
  "ChannelRestrictions": {
    "mod_policy": "Admins"
  },
  "ConsensusType": {
    "mod_policy": "Admins",
    "value": {
      "type": "solo"
    }
  }
}
```
how should I understand the mod_policy of BatchTimeout?
hi everyone, is it possible to set up a TLS connection between kafka and zookeeper using the docker images hyperledger/fabric-kafka and hyperledger/fabric-zookeeper? thank you.
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=RTdZdw2mJ3dDhnY3F) @Glen http://hyperledger-fabric.readthedocs.io/en/latest/policies.html?highlight=policy
`/Channel/Orderer/BatchTimeout`
Hey, I have a basic understanding of Kafka. I read that for a production ordering service with multiple orderers we should have at least 4 Kafka brokers and 3 ZooKeeper nodes. Can anyone describe the significance of the numbers 4 (Kafka brokers) and 3 (ZooKeeper)?
Hi, is it possible to add more Kafka brokers to the existing network dynamically? What are the steps? Thanks a lot.
@CodeReaper This number is the minimum number of nodes necessary in order to exhibit crash fault tolerance, i.e. with 4 brokers, you can have 1 broker go down, all channels will continue to be writeable and readable, and new channels can be created.
ZK will either be 3, 5, or 7. It has to be an odd number to avoid split-brain scenarios, and larger than 1 in order to avoid a single point of failure. Anything beyond 7 ZooKeeper servers is considered overkill.
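The arithmetic behind those numbers can be sketched as follows (a back-of-the-envelope illustration, assuming Kafka's usual `min.insync.replicas` semantics; treating the 4th broker as headroom, e.g. so channels can still be created with the default replication factor while one broker is down):

```go
package main

import "fmt"

// ZooKeeper needs a strict majority, so an ensemble of n servers
// tolerates floor((n-1)/2) crashes. Kafka keeps accepting writes for
// a partition while at most rf-minISR of its replicas are down.
func zkTolerates(n int) int { return (n - 1) / 2 }

func kafkaTolerates(rf, minISR int) int { return rf - minISR }

func main() {
	fmt.Println(zkTolerates(3))       // 1
	fmt.Println(zkTolerates(5))       // 2
	fmt.Println(kafkaTolerates(3, 2)) // 1: the commonly suggested RF=3, minISR=2
}
```

This also answers the ISR = 2 / RF = 3 question above: writes to a partition survive one broker failure at a time.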
Hi, I'm following the docs on reconfiguring the first network but I'm stuck on this error, `Error: unknown shorthand flag: 'f' in -f` when I issue a peer channel signconfigtx, i.e. `peer channel signconfigtx -f org3_update_in_envelope.pb`. I'm using Version 1.1.0-preview of Hyperledger Fabric. Any idea why I'm getting this? The version I'm using should support the peer channel signconfigtx command.
Hi, does anybody know how to solve the go test dependency-not-found problem?
@jyellick ```/docker-entrypoint.sh: line 21: echo: write error: No space left on device ``` the zk exited
@mastersingh24 , @kostas , @jyellick the ZK transaction logs and snapshots are not auto-purged... and it is problematic for users which are not experienced with zookeeper.
I suggest we add to our zookeeper config:
`autopurge.snapRetainCount` and `autopurge.purgeInterval`
The current config in the `zoo.cfg` doesn't have it:
```
clientPort=2181
dataDir=/data
dataLogDir=/datalog
tickTime=2000
initLimit=5
syncLimit=2
server.1=zookeeper0:2888:3888
server.2=zookeeper1:2888:3888
server.3=zookeeper2:2888:3888
```
What do you think?
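A sketch of the proposed additions to `zoo.cfg`; the values below are illustrative (number of recent snapshots to keep, purge interval in hours), not a tuning recommendation:

```
autopurge.snapRetainCount=3
autopurge.purgeInterval=24
```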
@yacovm how do I add that? Previously I used the `make docker` command to build the images,
I would rather have it changed in the docker hub repo
so everyone can benefit from this
I see the `Makefile` pulls the images from Docker Hub, so can you push it to Docker Hub?
I'm not pushing anything into dockerhub :)
@yacovm I use the plain zookeeper image, not the hyperledger/fabric-zookeeper image, so I can set the configuration file myself
ok
Has joined the channel.
hi, i keep getting this error on the orderer ```panic: Unable to bootstrap orderer. Error unmarshalling genesis block: proto: can't skip unknown wire type 7 for common.Block
goroutine 1 [running]:
github.com/hyperledger/fabric/orderer/common/bootstrap/file.(*fileBootstrapper).GenesisBlock(0xc420010dc0, 0xc420010dc0)
/opt/gopath/src/github.com/hyperledger/fabric/orderer/common/bootstrap/file/bootstrap.go:49 +0x1d4
github.com/hyperledger/fabric/orderer/common/server.initializeBootstrapChannel(0xc420374f00, 0x135a260, 0xc4200ce2c0)
/opt/gopath/src/github.com/hyperledger/fabric/orderer/common/server/main.go:186 +0x5bd
github.com/hyperledger/fabric/orderer/common/server.initializeMultichannelRegistrar(0xc420374f00, 0x1356a60, 0x13ba518, 0xc42000eb58, 0x1, 0x1, 0x0)
/opt/gopath/src/github.com/hyperledger/fabric/orderer/common/server/main.go:234 +0xa0
github.com/hyperledger/fabric/orderer/common/server.Start(0xcd744a, 0x5, 0xc420374f00)
/opt/gopath/src/github.com/hyperledger/fabric/orderer/common/server/main.go:94 +0x2c3
github.com/hyperledger/fabric/orderer/common/server.Main()
/opt/gopath/src/github.com/hyperledger/fabric/orderer/common/server/main.go:74 +0x120
main.main()
/opt/gopath/src/github.com/hyperledger/fabric/orderer/main.go:15 +0x20```
@sasiedu How did you generate your genesis block? In particular, what command did you run to produce it?
Hi, when a nodeJS SDK makes a sendTransaction call where the TransactionRequest object contains an array of ProposalResponse objects which all contain a R/W set along with the endorsement signature, does it decide and if so, how does it decide which R/W set to use for the committing Peers? Or does it just bundle everything up and send all the R/W sets to the committing peers? Or is the client just sending a single R/W set to the Orderers? Thanks!
it does not decide
when sending
the MVCC checks the R/W set during the commit phase
Has joined the channel.
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=BnApN2CEsuTfqZYTn) @jyellick ```configtxgen -outputBlock genesis.block -profile DefaultOrdererGenesis```
can you show me the configtx.yaml?
@sasiedu The error you're reporting generally implies that the file is corrupt or, more often, of the wrong file-type. Are you certain that the `genesis.block` file is the one you are passing to bootstrap the orderer?
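A quick way to check for this failure mode is to look at the first bytes of the file; a valid block is binary protobuf, so readable text usually means a tool wrote an error message instead. A sketch (the path and error text here are fabricated for illustration):

```shell
# Simulate the failure: a conversion step wrote an error message, not protobuf.
printf 'Error: invalid JSON input\n' > /tmp/suspect.block
# If head shows readable text like this, the file is not a real block.
head -c 200 /tmp/suspect.block
```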
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=7EyQr583Wk73NyqtR) @DannyWong Thank you.
Has joined the channel.
Hello! I have a conceptual question.
I am setting up a network with one orderer (Kafka), 4 organizations (0,1,2,3) with two peers each, and 2 channels (0-1-2, 0-2-3). So in my use case Organization0 would be the "regulator" that has access to both channels. The network is running on localhost, with appropriate port mapping for every container
I do not want to do the set up through a client, because in my use case Organization0 - the regulator - should act as a client but with the other organizations having some privacy.
I'm currently trying to create the channels and I have the problem that if I create the channel from the Organization0 container, then it doesn't load the other signatures and does not deliver the block to the other organizations, only locally in the container. So to make the other peers join the channel, I would have to act as a client, hold certificates for the other organizations inside Organization0, and make them join the channel from this local block.
Does this make sense or am I making a fundamental mistake? Is there any way of creating a channel and delivering the block to the other organizations through the orderer without having their certificates?
Thanks guys!!!
[The errors are these, although the block is created in Org0MSP:
identity 0 does not satisfy principal: The identity is a member of a different MSP (expected OrdererOrg0MSP, got Org0MSP)
identity 0 does not satisfy principal: The identity is a member of a different MSP (expected Org1MSP, got Org0MSP)]
Has joined the channel.
Hi! I've got ZK + Kafka + Orderer instances (among other things) in GKE. I upgraded the Node Pools as per this Meltdown + Spectre issue. Now after the new pods came up I'm getting issues performing a transaction. Now Kafka is giving NullPointerExceptions, the ZK is giving "Got user-level KeeperException when processing sessionid", and the Orderer is giving "[channel: channelname] Rejecting deliver request because of consenter error". I'm pretty new to all this so I'd appreciate it if anybody could provide resources, insight, or advice :)
Has joined the channel.
Clipboard - January 5, 2018 9:06 AM
Hi All, I have been trying to implement the kafka orderer for a truly distributed hyperledger network. I am able to get it running when everything is on localhost (the famous balance transfer example, I can get it to work with the Node SDK using kafka orderer system). When trying to implement in a distributed system, the zookeeper keeps giving me a binding error as shown. I have ensured that the port 2888/3888/2181 are not being used by anyone else in the system. Can someone help?
@tamycova
> Does this make sense or am I making a fundamental mistake? Is there any way of creating a channel and delivering the block to the other organizations through the orderer without having their certificates?
To join a channel, you need the genesis block for that channel. An org may retrieve that block directly from the orderer using the `peer channel fetch` command, or, it may receive the block out of band from another party (like the regulator in this example). Someone with administrative rights for the peer must invoke the join channel command.
@voutasaurus
> Now Kafka is giving NullPointerExceptions, the ZK is giving "Got user-level KeeperException when processing sessionid", and the Orderer is giving "[channel: channelname] Rejecting deliver request because of consenter error".
It sounds like your Kafka cluster is misconfigured. Please use the Kafka sample clients to confirm that your Kafka cluster is functional in your environment, and only then try to utilize fabric on top of it.
@SanketPanchamia
> the zookeeper keeps giving me a binding error as shown. I have ensured that the port 2888/3888/2181 are not being used by anyone else in the system. Can someone help?
As you have identified, this is a problem with your zookeeper configuration. I'd recommend reading the zookeeeper docs, stack overflow, etc. and as always, ensure your Kafka/ZK setup is functional without fabric before attempting to deploy fabric on it.
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=WQ2wMKCSiGDdkNdmg) @jyellick The same set of ports work on a single system.
A little background: I have implemented my distributed network using docker swarm, which controls what services can run on what systems. I am able to run the ZK/Kafka on a single system with the same ports.
@jyellick thanks!! I saw the light with your answer :)
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=p5MAJE7Q7kchXq9mN) @jyellick I have a default genesis.block. I converted it using configtxlator to change some data, then converted it back to a .block using configtxlator
@jyellick where would I find the "Kafka sample clients"?
I looked in here and can't see it https://github.com/hyperledger-archives/fabric/tree/master/bddtests
(http://hyperledger-fabric.readthedocs.io/en/release/kafka.html says fabric/bddtests has dc-orderer-kafka-base.yml and dc-orderer-kafka.yml but I can't see them)
@sasiedu
> i convert using configtxlator to change some data with then convert it back to .block using configtxlator
My suspicion is that there was an error converting it back to protobuf. If you open the file with a text editor, I suspect you will see an error message.
@SanketPanchamia
> The same set of ports work on a single system.
Yes, this would suggest to me that this is some problem with your docker swarm, or other infrastructure. Which would not be a fabric related problem.
@voutasaurus
> Where would I find the "Kafka sample clients"?
You should complete *at a minimum* the first 6 steps of the Kafka Quickstart guide (https://kafka.apache.org/quickstart) before experimenting with the Kafka-based ordering service. See in steps 3/4/5 the Kafka-provided sample clients.
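For reference, the sample clients from steps 3-5 of that quickstart look like the following (topic name and ports are the quickstart defaults; adjust for your cluster):

```
# Step 3: create a test topic
bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic test
# Step 4: produce a few messages (type some lines, then Ctrl-C)
bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test
# Step 5: consume them back; if nothing prints, the cluster is not healthy
bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic test --from-beginning
```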
Thanks, I've jumped into the middle of an existing environment so I appreciate your patience and links to resources
I downloaded this https://www.apache.org/dyn/closer.cgi?path=/kafka/1.0.0/kafka_2.11-1.0.0.tgz and followed the steps and bin/kafka-console-consumer.sh isn't blocking or printing any messages. I can find a better forum for this question but if you know of a better tutorial (maybe a recent one using docker or kubernetes) that would be neat
I'm going to run through this first: https://kubernetes.io/docs/tutorials/stateful-application/zookeeper/
@voutasaurus That sounds like a great plan
@jyellick when I run `peer channel fetch newest -o docker.for.mac.localhost:7050`, the orderer log shows "Rejecting deliver because channel not found". The channel is created, peer0 (the creator of the channel) has the block and can join the channel, but it seems that the orderer does not have the block to send to the other peers, which is weird!
Has joined the channel.
@jyellick thanks, but the problem was different. I was using the signcerts instead of rootCerts when generating the genesis.block
@tamycova The command which creates the channel gets the genesis block effectively by the same means that peer channel fetch does. Are you certain that you are specifying the right channel ID? Also, you want block 0, not the newest block.
Hi, is it possible to add more kafka brokers to the existing network dynamically, what are the steps? Thanks a lot.
@handasontam - assuming you are just trying to expand the Kafka cluster, then this is not really Fabric-specific for the most part.
To actually add Kafka brokers to an existing cluster, you'll want to refer to the Kafka documentation.
From an orderer perspective, the only thing you may need to do is update the orderer configuration on the system channel to add the additional brokers to the list. This is not strictly required in order for the new brokers to be used in general, but if you want to add them to the bootstrap set for additional resiliency then you'll want to do this.
@kostas - did I get this right? anything to add?
@mastersingh24 do we need to restart the fabric network?
For adding Kafka brokers?
yes
should not be necessary
If I have the right policy, can I modify the channel/OrdererAddresses?
Modify the system channel or an existing channel?
Should be the system channel but @kostas or @jyellick can confirm
And you would modify `Kafka/Brokers`, not the orderer addresses
How does an existing channel get the extra broker?
https://chat.hyperledger.org/channel/fabric-orderer?msg=sbCnaz6EQZqoZco2E
@handasontam: The above is 100% right. If you want to expand your Kafka cluster, all you need to do is spin up an additional broker with the right settings -- `broker.id`, `zookeeper.connect`, etc. see: https://github.com/hyperledger/fabric/blob/release/bddtests/dc-orderer-kafka.yml#L137..L141
ZK will let the Kafka controller know that there's a new addition to the family. This will make the new node eligible for replicating any channels that are created _after_ its addition.
If you wish to have the new broker replicate existing channels, you'll need to follow the Kafka guide by running the `kafka-reassign-partitions.sh` script: https://cwiki.apache.org/confluence/display/KAFKA/Replication+tools#Replicationtools-ClusterExpansion
Generally, and assuming you have configured your cluster according to the instructions provided in https://hyperledger-fabric.readthedocs.io/en/latest/kafka.html, you do not have to mess with existing channels, or call for a partition reassignment. Just spin up a broker, and you're good to go. As @mastersingh24 noted, you don't even need to modify the `Kafka/Brokers` entry in your system channel.
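To make the "spin up a broker with the right settings" step concrete, here is a sketch of a compose-service fragment for an additional broker. The service name, image, env-var convention, and values are modeled loosely on the bddtests compose file linked above and are assumptions to verify against your own setup, not a verified configuration:

```yaml
# Illustrative only: an additional broker joining an existing cluster.
kafka3:
  image: hyperledger/fabric-kafka
  environment:
    - KAFKA_BROKER_ID=3                 # must be unique across the cluster
    - KAFKA_ZOOKEEPER_CONNECT=zookeeper0:2181,zookeeper1:2181,zookeeper2:2181
    - KAFKA_DEFAULT_REPLICATION_FACTOR=3
    - KAFKA_MIN_INSYNC_REPLICAS=2
    - KAFKA_UNCLEAN_LEADER_ELECTION_ENABLE=false
  depends_on:
    - zookeeper0
    - zookeeper1
    - zookeeper2
```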
https://chat.hyperledger.org/channel/fabric-orderer?msg=SfxwXpii44fRDtYH5
(@asaningmaxchain: I see you typing. Working on a response for you, just a sec.)
ok
@asaningmaxchain: See the responses by @mastersingh24 and myself above.
As you know, when a channel is created, it copies over the `Orderer.Kafka.Brokers` value from the system channel.
Unless you plan to retire all of the brokers listed in `Orderer.Kafka.Brokers`, you generally don't have any reason to "announce" the new broker in the system channel (and much less so in your existing channel). By "announce", I mean edit `Orderer.Kafka.Brokers` so as to include the new broker's address.
If you _were_ planning to retire all of the brokers listed in `Orderer.Kafka.Brokers`, then you would have to:
(a) Spin up new brokers
(b) Temporarily prevent the creation of new channels
(c) Reassign all existing partitions (channels) to the new brokers
(d) Modify `Orderer.Kafka.Brokers` in the system channel and all existing channels so as to point to the new brokers you spun up in (a)
(e) Undo (b)
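For step (c), the `kafka-reassign-partitions.sh` tool linked above consumes a topics-to-move JSON file; a sketch, where the topic names (one per channel, plus the system channel) are illustrative:

```
{
  "version": 1,
  "topics": [
    {"topic": "testchainid"},
    {"topic": "mychannel"}
  ]
}
```

This file is passed with `--topics-to-move-json-file`, together with `--broker-list` naming the new broker ids, to `--generate` a reassignment plan, which is then applied with `--execute` (see the replication-tools page for the exact invocation).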
I have an FAQ sitting there as a draft changeset for a couple of months now, these are good questions - I will add them.
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=JnPiW86tyrjgdwiok) @kostas yes
Has joined the channel.
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=7yh442Br65LyMB2P8) @kostas So I can modify the `Orderer.Kafka.Brokers` of the system channel; the existing channels stay as they are, while new channels will use the updated `Orderer.Kafka.Brokers`, which differs from what the existing channels have?
(Not sure if this is a question, but so far everything sounds right.)
So I can update `Orderer.Kafka.Brokers` dynamically, as long as I set the configuration correctly when I want to create a new channel?
Not sure I get that. Can you rephrase?
1) Can I only update `Orderer.Kafka.Brokers` dynamically for an existing channel?
2) Can I update `Orderer.Kafka.Brokers` dynamically for the system channel?
@asaningmaxchain If you update the ordering system channel, then new channels will have the updated brokers. Once a channel has been created, you must reconfigure it to expose the new brokers.
@jyellick so the existing channel should be updated?
@jyellick i got it
Hi, I have a Kafka orderer setup; when I start the orderer I get the following error message: *panic: Error reading configuration: Unsupported Config Type ""*
Someone with the same experience?
@novusopt Usually this indicates that it cannot find your config file. Have you set `FABRIC_CFG_PATH` to point to the directory containing your `orderer.yaml`?
@jyellick No, I don't have any orderer.yaml
@novusopt Then that is your problem, the orderer requires this file to start.
But why do I need it? I thought the orderer gets all its information from the genesis.block
and by setting ENV variables...?
@jyellick any idea?
@novusopt The configuration system the orderer (and peer) uses is Viper https://github.com/spf13/viper
The way viper works, it requires that a variable be defined in the yaml before ENV overrides work.
So, even if you were to specify the entire config as ENV, you still need the config file so that Viper will work correctly.
You are correct that all of the shared parameters are encoded into the genesis block, but there are many local parameters, like the location on the filesystem the ledger is stored, or the logging level, which do not make sense to store on the chain (as they are not necessarily the same across orderers).
@jyellick yes, and I am setting those parameters properly
@novusopt You must have an `orderer.yaml` file available, even if all of the parameters are overridden via ENV, this is simply a limitation of the Viper configuration system.
@jyellick ok, good to know. I didn't know that. I will try. Thx:thumbsup:
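To make the Viper constraint above concrete: a minimal `orderer.yaml` that declares the keys is enough for ENV overrides of the form `ORDERER_<SECTION>_<KEY>` to take effect. The values below are illustrative placeholders, not recommendations:

```
# orderer.yaml -- must exist even if every value is overridden via ENV.
General:
    ListenAddress: 127.0.0.1
    ListenPort: 7050
FileLedger:
    Location: /var/hyperledger/production/orderer
```

At runtime, e.g. `ORDERER_GENERAL_LISTENPORT=8050 orderer` would then override the port.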
Hey guys, quick question, what is the latest version of kafka supported by 1.0/planned for 1.1? http://hyperledger-fabric.readthedocs.io/en/release/kafka.html suggests that it is 0.10.2, but the sarama library supports up to kafka 1.0
@Asara I'd like to get confirmation from @kostas or @sanchezl as they're typically the ones more involved in the Kafka versioning, but if Sarama supports it, then I'd say the orderer does as well and it's simply a doc oversight.
@Asara At the moment, the fabric orderer does not use any Kafka message format options introduced after v0.9.0.1, so all Kafka versions after 0.9.0.1 work. It doesn't matter much at the moment what the value of `Kafka.Version` is.
As for the documentation, there does seem to be a discrepancy caused when the version of Kafka we bundle in our sample kafka images was reverted (https://jira.hyperledger.org/browse/FAB-7288).
Awesome! Thanks for the info @sanchezl
Has joined the channel.
Has joined the channel.
Hey, in balance transfer I'm sending the responses of sendTransactionProposal inside the request object of sendTransaction. This happens when all proposal responses were good. I suppose this shouldn't happen in a production application; if the application gets enough valid responses it should submit the transaction to the orderer. Is this the correct approach? Also, if some peer doesn't respond to sendTransactionProposal, will it wait the whole 45 seconds, even when all the others have responded?
Also, I don't suppose the application should send the proposals to all peers in network-config.json, especially if it is scaled up to a large number of peers, e.g. 50-60. I'm guessing sending proposals to 2-3 peers of every organization is enough? Can anyone advise? Thanks
@CodeReaper: Please post this to #fabric.
ok
Has left the channel.
We ran into a scenario where the same ledger transaction ended up twice in the same block. The second transaction was marked as invalid by the committer, thus not causing any issue on the ledger; however, the Fabric Java SDK HFC (client) went into an inconsistent state, thus throwing a fatal.
This is a Kafka consensus setup with 3 orderers running on fabric-1.0.4.
Looking at the orderer logs - here is the scenario
1. HFC sent out ledger transaction to orderer2 which was not able to enqueue due to kafka network issue.
2. HFC has a timeout with orderer of 10s and hence once this was elapsed it went to orderer3.
3. Orderer3 was able to successfully enqueue transaction.
4. Orderer2 recovered connection with kafka and enqueued successfully the ledger transaction.
In steps 3 & 4, the same transaction is enqueued twice and thus ends up twice in the same block.
What is the expected remedy? Should the HFC not have gone to orderer3, should orderer2 not have retried, or should the HFC timeout be greater than the orderer server timeout?
@rahulhegde In general, after issuing a `Broadcast` you should wait until either a success or a failure, and only after receiving a failure, should you try another orderer. I would tend to leave timeouts to gRPC. However, in any distributed asynchronous system, if resubmission is timeout based, then there is always the risk of duplicates, and the application/fabric should handle them. What is the inconsistent state?
> What is the inconsistent state?
Quite interested in this one as well. The whole point of validation is so that duplicates don't affect your state.
> Fabric Java SDK HFC (client) went to an inconsistent state thus throwing a fatal.
Seems to me like a problem on its own...
@jyellick
HFC client upon receiving a ledger transaction marked with Invalid - retries the same proposal for endorsement. Chaincode does not approve it and hence HFC fatals for manual operation/intervention.
When does a single orderer timeout to service a broadcast request?
@rahulhegde But presumably the txid was already marked valid before you encountered the invalid one? Perhaps the application needs to maintain a list of outstanding txes or similar?
> When does a single orderer timeout to service a broadcast request?
The orderer will generally not timeout any requests, the timeout would occur because of networking problems (and would depend on gRPC, HTTP2, TCP, etc.). The orderer attempts to send the message to Kafka via Sarama, and if the message cannot be successfully sent within some amount of time (as implemented by Sarama), then a failure is returned. @kostas would better be able to definitively answer, but I believe if there is no interruption of the network stream, Sarama will always reply with a success or failure, indicating that the tx was either received, or not.
^^ That is correct.
Also, see https://github.com/hyperledger/fabric/blob/release/sampleconfig/orderer.yaml#L176 and https://github.com/hyperledger/fabric/blob/release/sampleconfig/orderer.yaml#L186 and the respective configuration sections in the `sarama` library.
Generally, these wouldn't need fine-tuning.
Can we say (or are there more factors?) that the HFC connection timeout should be set above ` (Producer.RetryMax) * (DialTimeout + Producer.RetryBackoff) ` in order to resolve this issue?
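The bound with illustrative numbers plugged in (these are placeholders, not defaults I'm asserting; substitute the actual values from your orderer's Retry configuration):

```shell
# Hypothetical values -- replace with your actual settings.
RETRY_MAX=3          # Producer.RetryMax
DIAL_TIMEOUT_S=10    # DialTimeout, in seconds
RETRY_BACKOFF_S=2    # Producer.RetryBackoff, in seconds
MIN_TIMEOUT_S=$(( RETRY_MAX * (DIAL_TIMEOUT_S + RETRY_BACKOFF_S) ))
echo "HFC connection timeout should exceed ${MIN_TIMEOUT_S}s"   # 36s here
```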
@jyellick
> But presumably the txid was already marked valid before you encountered the invalid one? Perhaps the application needs to maintain a list of outstanding txes or similar?
The same block had both. The first was marked valid and the other was marked invalid by the committer. Maintaining a list of transaction ids (with an expiry marker) could be a last resort, but I feel adjusting the timeout as above could very well fix it.
Why does deliver produce this error? ```2018-01-11 01:39:09.065 UTC [orderer/common/deliver] Handle -> WARN 02f Error reading from stream: rpc error: code = Canceled desc = context canceled```
@rahulhegde It's always still possible that you may encounter this problem (as is the nature of any asynchronous distributed system)
Consider that:
1. Client invokes `Broadcast` with tx foo
2. Orderer sends to Kafka, message is successfully received
3. Orderer attempts to reply (with success status) to client, but someone trips over the power cord, and the server goes down before replying
4. Client will eventually timeout, even though the message was successfully received
5. Client resubmits to another orderer
6. tx foo will appear twice in the ledger, the first valid, the second invalid.
Although tuning the timeout parameters might make this less likely, because this situation may still theoretically occur, I strongly suggest that the application handle it.
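One way an application can handle such duplicates, sketched in portable shell: remember the txids already seen committed as valid, and treat a later invalid copy as expected rather than re-endorsing. The txids are illustrative:

```shell
# Deliveries observed by the client, in order; tx_foo arrives twice.
seen=""
for txid in tx_foo tx_bar tx_foo; do
  case " $seen " in
    *" $txid "*) echo "duplicate $txid: already committed, ignore" ;;
    *) seen="$seen $txid"; echo "first delivery of $txid: process" ;;
  esac
done
```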
Hi @jyellick, if I want to modify a parameter for the orderer, then I need the orderer MSP admin signature, right?
@Glen I would need a bit more detail, but it sounds right
if I want to modify the BatchTimeout, it is governed by a policy ``` "BatchTimeout": {
"mod_policy": "Admins",
"value": {
"timeout": "2s"
}
},```
The timeout's mod_policy is Admins; should it be derived from the policy at /Channel/Orderer/Admins?
is that right?
then /Channel/Orderer/Admins has such a definition ``` "Admins": {
"mod_policy": "Admins",
"policy": {
"type": 3,
"value": {
"rule": "MAJORITY",
"sub_policy": "Admins"
}
}
},```
so I need the signature from Orderer MSP's admin role to sign the config tx?
Did I understand it correctly? Thanks!
@Glen You are absolutely correct, and the flow you described is accurate
thanks, the modification is successful
Hi, I am facing an issue while joining a channel. I am using fabric 1.0.5 with the following *crypto-config.yaml*: https://hastebin.com/tezugusibu.css & *configtx.yaml*: https://hastebin.com/umirufonod.makefile.
I am using the following command to join:
`peer channel join -b ./channel-artifacts/genesis.block`
I receive the following error message:
```2018-01-11 08:00:51.487 CET [msp] GetLocalMSP -> DEBU 001 Returning existing local MSP
2018-01-11 08:00:51.487 CET [msp] GetDefaultSigningIdentity -> DEBU 002 Obtaining default signing identity
2018-01-11 08:00:51.505 CET [channelCmd] InitCmdFactory -> INFO 003 Endorser and orderer connections initialized
2018-01-11 08:00:51.506 CET [msp/identity] Sign -> DEBU 004 Sign: plaintext: 0A92070A5C08011A0C08A397DCD20510...41646D696E731A080A000A000A000A00
2018-01-11 08:00:51.506 CET [msp/identity] Sign -> DEBU 005 Sign: digest: 25739F6DE810E5838E1F0AE25C407631B2056B9B726062995AFA0A9034359AD6
Error: proposal failed (err: rpc error: code = Unknown desc = chaincode error (status: 500, message: "JoinChain" for chainID = testchainid failed because of validation of configuration block, because of Invalid configuration block, missing Application configuration group))
```
I created the crypto materials with:
`cryptogen generate --config=./crypto-config.yaml`
the genesis block:
`configtxgen -profile TwoOrgsChannel -outputBlock ./channel-artifacts/genesis.block`
and the channel.tx:
`configtxgen -profile TwoOrgsChannel -outputCreateChannelTx ./channel-artifacts/channel.tx -channelID mychannel`
@novusopt what block did you give to the orderer?
in ORDERER_GENERAL_GENESISFILE?
@novusopt I see your error: [ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=5pWzXqb4ccKGDh62n) should be `-profile TwoOrgsOrdererGenesis`
@Vadim sorry, my mistake. I am using `configtxgen -profile TwoOrgsOrdererGenesis -outputBlock ./channel-artifacts/genesis.block`
no idea?
is it something with the channel ID?
Here is the log on the peer when I want to join: https://hastebin.com/dazodonota.hs
@novusopt You are attempting to join your peer to the ordering system channel
In general, you should not use the ordering system channel for application logic like chaincodes
If you are very sure that you wish to join peers to the ordering system channel, you must create an `Application` section in the genesis block, you can do this by adding one in `configtx.yaml` before bootstrapping
@jyellick yes you are right. I finally found the issue
:worried:
when I ran `peer channel join -b ./channel-artifacts/genesis.block` I used the wrong *.block file
I used the one which I created with `configtxgen -profile TwoOrgsOrdererGenesis -outputBlock ./channel-artifacts/genesis.block`
which is wrong
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=uDkQYRpLiw2JidEW3) @jyellick so what's the difference between the system channel (including the `Application` section) and a normal channel?
The ordering system channel is the channel that the orderers use to orchestrate channel creation. By default, we do not include an `Application` section to discourage peers from using this channel for application/chaincode logic, because this is usually a mistake. Ultimately, it is a channel like any other.
so I think the ordering system channel shouldn't include the `Application` field; the orderer should check that when it starts
@asaningmaxchain There are some potential reasons why someone might want to have peers transact on the ordering system channel.
We provide defaults which omit it, but if you are knowledgeable enough to change those defaults, then you may.
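Pulling the thread's resolution together, here is a runnable sketch of the intended artifact flow. Profile names, the orderer address, and paths are taken from the messages in this thread; the `run` guard is an addition so that the sketch is harmless where the fabric binaries are not installed.

```shell
# Guarded runner: executes the command if its binary exists, else echoes it.
run() { if command -v "$1" >/dev/null 2>&1; then "$@"; else echo "skipped: $*"; fi; }

# 1) System-channel genesis block: used ONLY to bootstrap the orderer
#    (ORDERER_GENERAL_GENESISFILE), never for `peer channel join`.
run configtxgen -profile TwoOrgsOrdererGenesis -outputBlock ./channel-artifacts/genesis.block

# 2) Channel creation transaction for the application channel:
run configtxgen -profile TwoOrgsChannel -outputCreateChannelTx ./channel-artifacts/channel.tx -channelID mychannel

# 3) Creating the channel yields mychannel.block in the working directory...
run peer channel create -o orderer.example.com:7050 -c mychannel -f ./channel-artifacts/channel.tx

# 4) ...and THAT is the block peers join, not the orderer genesis block:
run peer channel join -b ./mychannel.block
```

Joining a peer with the orderer genesis block is exactly what produces the "missing Application configuration group" error discussed above.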
Hi. Can someone please tell me the expected behaviour of the orderer when Kafka and ZK brokers are restarted in a Kafka ordering service? This has been causing some issues on my end.
@collins How are you doing the restart? In general, the orderer will attempt to reconnect to Kafka, rapidly at first, and then back off, trying less and less frequently. You can see the `Retry` section of the Kafka config in `orderer.yaml`
If the Kafka cluster is down for more than 12 hours (or whatever the long retry is configured to) you may need to restart your orderer process for it to reconnect.
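For reference, the knobs being referred to live under `Kafka.Retry` in the sample `orderer.yaml`; the values below are the sample defaults at the time (the 5m/12h pair is the "long retry" mentioned above):

```yaml
Kafka:
  Retry:
    ShortInterval: 5s   # retry every 5s at first...
    ShortTotal: 10m     # ...for a total of 10 minutes
    LongInterval: 5m    # then back off to every 5 minutes...
    LongTotal: 12h      # ...and give up after 12 hours
```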
@jyellick the restart happens when upgrading the kubernetes cluster. The upgrade of the node pools restarts all the containers including the orderer. I have 4 Kafka brokers, 5 ZooKeeper nodes and 2 orderers. After all the containers have restarted and finished provisioning, the peer logs `Got error &{SERVICE_UNAVAILABLE}` and the orderer logs `Rejecting deliver request because of consenter error`. This state remains forever, and the temporary solution has been doing a new deploy. I was able to replicate this by manually restarting the containers using kubectl, i.e. deleting the containers, which results in auto-creation of new ones. That's how Kubernetes works.
@collins Are you able to use the Kafka provided CLI sample producer/consumer with the Kafka cluster after it has been restarted?
@jyellick which CLI sample please?
https://kafka.apache.org/quickstart#quickstart_send
https://kafka.apache.org/quickstart#quickstart_consume
I am willing to bet a million (Zimbabwean) dollars that the Kafka cluster is not restarted properly.
This is a bet I can honor given that I once bought a 1 trillion Zimbabwean dollar bill off of eBay (for 6 USD IIRC).
@kostas I'm restarting by deleting the kafka brokers i.e `kubectl delete pod kafka-broker-container-name`
I have zero Kubernetes experience unfortunately and can't help out there.
I can talk about restarting Kafka brokers in general.
And I have captured how this process is supposed to work here: https://jira.hyperledger.org/browse/FAB-7330
(Though, really, there's a wealth of documentation out there and probably states things better than I do.)
I tried
https://kafka.apache.org/quickstart#quickstart_send
https://kafka.apache.org/quickstart#quickstart_consume
and yes I was able to do a successful restart.
You can read and write w/o issues?
From an _existing_ topic?
Yeah, using the sample. But with kubernetes I can't do a write, though reads happen without an issue.
> I can't do a write
This means that you are _not_ restarting the Kafka cluster properly.
Your cluster has become read-only because it can't get a quorum up and running properly.
This is what I get when I try to write,
```{
"error": {
"statusCode": 500,
"name": "Error",
"message": "Error trying invoke business network. Error: Error: Invalid results returned ::SERVICE_UNAVAILABLE",
"stack": "Error: Error trying invoke business network. Error: Error: Invalid results returned ::SERVICE_UNAVAILABLE\n at _initializeChannel.then.then.then.then.catch (/usr/local/lib/node_modules/composer-rest-server/node_modules/composer-connector-hlfv1/lib/hlfconnection.js:836:34)\n at process._tickDomainCallback (internal/process/next_tick.js:135:7)"
}
}```
Right, this is in line with what you discussed with Jason earlier. (And in line with what I'm suggesting is most definitely going on here as well.)
There must be a guide out there that explains how to do Kubernetes + Kafka?
As a general piece of advice --
And as FAB-7330 states --
Identify the number of brokers that can be shutdown abruptly.
Make sure that you don't exceed that.
I see you writing: https://chat.hyperledger.org/channel/fabric-orderer?msg=fntWpKFyAGjPm2vFW
And I'm clueless about k8s but this sounds to me like a non-graceful shutdown.
So if you got more than _f_ of those going on, and you're also considering the amount of time needed for a broker to come back up and sync up with the rest of the crew (remember: until that happens, a restarted broker is still part of the _failed_ brokers ensemble), then you'll end up with a cluster that's not writeable.
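To make that arithmetic concrete: with standard Kafka semantics, a partition stays writable only while at least `min.insync.replicas` replicas are in sync, so the number of brokers that may be down (or still re-syncing) at once is the difference between the replication factor and that minimum. The values below are illustrative, not taken from any specific deployment:

```shell
# Illustrative broker settings (assumptions for this sketch):
REPLICATION_FACTOR=3   # default.replication.factor
MIN_INSYNC=2           # min.insync.replicas
# Max brokers that can be down before writes start failing:
TOLERABLE=$((REPLICATION_FACTOR - MIN_INSYNC))
echo "$TOLERABLE"
```

With these numbers, abruptly killing two of three replica-hosting brokers (or restarting them faster than they can re-sync) leaves the topic effectively read-only, which matches the behavior described above.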
Makes sense @kostas. With a bit of research, I think I'll get a solution to this. You've shed the light. Thanks! :smile:
@collins @kostas I am running into this same problem as well on a kubernetes cluster
@jmcnevin I'd strongly suggest you follow @kostas's suggestion. Find a Kafka + Kubernetes guide online, and follow it. Especially make sure after a restart that you can create new topics and read/write from/to them, and that you can read/write to existing topics as well. You should be able to accomplish this with only the sample binaries provided by Kafka. Once you are confident in your k8s+kafka environment, only then attempt to add fabric on top of it.
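A quick smoke test along those lines uses the console clients that ship with Kafka (the broker address and topic name below are placeholders; the commands are printed rather than executed here, since they need a live broker):

```shell
# Placeholders: adjust the broker address and topic to your environment.
BROKER=localhost:9092
TOPIC=smoketest
# Write to the topic (type lines, Ctrl-C to stop):
PRODUCE="bin/kafka-console-producer.sh --broker-list $BROKER --topic $TOPIC"
# Read the topic back from the beginning:
CONSUME="bin/kafka-console-consumer.sh --bootstrap-server $BROKER --topic $TOPIC --from-beginning"
printf '%s\n' "$PRODUCE" "$CONSUME"
```

If the produce step hangs or errors after a restart while the consume step still works, the cluster has gone read-only, which is the failure mode described in this thread.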
at this point i'm just hoping I didn't hose my entire cluster somehow :(
so, if my kafka cluster gets mangled somehow, is there any way to get my orderers/peers back on track, or do I need to start a new channel?
@jmcnevin In general, if your kafka cluster gets mangled, you are in for a long, difficult, and manual recovery.
It may be theoretically possible to repair your channels using a re-initialized Kafka cluster, but it would require some tooling which does not exist, and great care from the ordering admins.
quick question... is it possible to add another orderer node to a running network, and what happens in that scenario? Does the orderer node replay everything from the kafka log?
@jmcnevin Yes, you may bootstrap an orderer with only the genesis block, and it will replay the entire kafka log until it is up to date. Or, more practically, you may make a copy of an existing orderer's ledger, and use it to bootstrap your new one, so that the orderer must only replay txes since the copy.
We are actively working on supporting a more formal pruning/snapshotting strategy to allow expiration of the Kafka logs
@jyellick can you tell me the location in the source code where the peer pulls blocks from the orderer?
Who can help me with how to use sbft in the e2e_cli network (alpha2 version)? I do not know how to configure the yaml file. I tried to change the solo orderer type to sbft in the configtx.yaml file, but it failed, so I do not know how to do it.
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=Ed9EXeeJsdY6vYttR) @tingfa1 currently, it doesn't support sbft
getting the following error
end_to_end.go:68: create channel failed: failed broadcast to orderer: NewAtomicBroadcastClient failed: rpc error: code = Unavailable desc = grpc: the connection is unavailable
when I followed https://github.com/hyperledger/fabric-sdk-go to run the tests manually:
cd $GOPATH/src/github.com/hyperledger/fabric-sdk-go/
make depend
make checks
make dockerenv-stable-up
cd $GOPATH/src/github.com/hyperledger/fabric-sdk-go/test/integration/e2e/end_to_end.go
go test
here is my docker process info using lsof -i -P
vpnkit 1098 pt 15u IPv4 0x2117ec140f90d119 0t0 ICMP *:*
vpnkit 1098 pt 18u IPv4 0x2117ec141010be69 0t0 UDP *:57279
vpnkit 1098 pt 19u IPv4 0x2117ec14150fc5c1 0t0 TCP *:8054 (LISTEN)
vpnkit 1098 pt 20u IPv6 0x2117ec1410d5c9f9 0t0 TCP localhost:8054 (LISTEN)
vpnkit 1098 pt 21u IPv4 0x2117ec141599df21 0t0 TCP *:7050 (LISTEN)
vpnkit 1098 pt 22u IPv6 0x2117ec1410d5cfb9 0t0 TCP localhost:7050 (LISTEN)
vpnkit 1098 pt 23u IPv4 0x2117ec1415357c61 0t0 TCP *:7054 (LISTEN)
vpnkit 1098 pt 24u IPv6 0x2117ec1410d5e6b9 0t0 TCP localhost:7054 (LISTEN)
vpnkit 1098 pt 25u IPv4 0x2117ec141010c119 0t0 UDP *:53874
vpnkit 1098 pt 26u IPv4 0x2117ec14124efc61 0t0 TCP *:8053 (LISTEN)
vpnkit 1098 pt 27u IPv6 0x2117ec1410d5c439 0t0 TCP localhost:8053 (LISTEN)
vpnkit 1098 pt 28u IPv4 0x2117ec1410ff61e1 0t0 TCP *:8051 (LISTEN)
vpnkit 1098 pt 29u IPv6 0x2117ec1410d5be79 0t0 TCP localhost:8051 (LISTEN)
vpnkit 1098 pt 30u IPv4 0x2117ec1415fd91e1 0t0 TCP *:7053 (LISTEN)
vpnkit 1098 pt 31u IPv6 0x2117ec1410d5ec79 0t0 TCP localhost:7053 (LISTEN)
vpnkit 1098 pt 32u IPv4 0x2117ec14162c6881 0t0 TCP *:7051 (LISTEN)
vpnkit 1098 pt 33u IPv6 0x2117ec1410d5f239 0t0 TCP localhost:7051 (LISTEN)
vpnkit 1098 pt 34u IPv4 0x2117ec141010e409 0t0 UDP *:49547
and here is docker ps output
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
009f365a3e56 registry.hub.docker.com/hyperledger/fabric-peer:x86_64-1.0.5 "peer node start" 21 minutes ago Up 21 minutes 0.0.0.0:7051->7051/tcp, 0.0.0.0:7053->7053/tcp, 7052/tcp fabsdkgo_org1peer1_1
6cfdeaa9e7d6 registry.hub.docker.com/hyperledger/fabric-peer:x86_64-1.0.5 "peer node start" 21 minutes ago Up 21 minutes 0.0.0.0:8051->8051/tcp, 7052/tcp, 0.0.0.0:8053->8053/tcp fabsdkgo_org2peer1_1
5bea4b3deba2 registry.hub.docker.com/hyperledger/fabric-ca:x86_64-1.0.5 "sh -c 'fabric-ca-se…" 21 minutes ago Up 21 minutes 0.0.0.0:7054->7054/tcp fabsdkgo_org1ca1_1
2e01f7112f33 registry.hub.docker.com/hyperledger/fabric-orderer:x86_64-1.0.5 "orderer" 21 minutes ago Up 21 minutes 0.0.0.0:7050->7050/tcp fabsdkgo_orderer1_1
52bd06a57e02 registry.hub.docker.com/hyperledger/fabric-ca:x86_64-1.0.5 "sh -c 'fabric-ca-se…" 21 minutes ago Up 21 minutes 7054/tcp, 0.0.0.0:8054->8054/tcp fabsdkgo_org2ca1_1
73ed41a4eaa3 registry.hub.docker.com/hyperledger/fabric-ccenv:x86_64-1.0.5 "tail -F anything" 21 minutes ago Up 21 minutes fabsdkgo_builder_1
fd9e874dcf64 registry.hub.docker.com/hyperledger/fabric-baseos:x86_64-0.4.2 "tail -F anything" 21 minutes ago Up 21 minutes fabsdkgo_golangruntime_1
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=HygiaJ3SKCZy9e2fY) @praveentalari I advise you to post your info in the channel #fabric-sdk-go
@asaningmaxchain you may start with the function StartDeliverForChannel
Here is a sample config file for Orderer. How do we apply it to the Orderer? Is there an env var we need to set to point to this file? https://github.com/hyperledger/fabric/blob/release/sampleconfig/orderer.yaml
@YashGanthe `FABRIC_CFG_PATH`
see https://github.com/hyperledger/fabric/blob/release/core/config/config.go#L130-L175
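In other words (the directory below is an assumption; point it at wherever your `orderer.yaml` lives):

```shell
# The orderer looks for orderer.yaml under FABRIC_CFG_PATH (hypothetical path):
export FABRIC_CFG_PATH=/etc/hyperledger/fabric
# orderer   # then start the orderer; it reads $FABRIC_CFG_PATH/orderer.yaml
echo "$FABRIC_CFG_PATH"
```

Individual settings can also be overridden with `ORDERER_`-prefixed environment variables, as the config loading code linked above shows.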
@sanchezl https://jira.hyperledger.org/browse/FAB-7330 I saw the latest comment mentioned that Kafka 0.11 fixed this issue; however, when I switch to Kafka 0.11, it reports some errors
seems this https://issues.apache.org/jira/browse/KAFKA-3959
I wonder if the fabric orderer supports kafka 0.11 right now?
To my knowledge, anything past Kafka 0.9.0.1 should work. @sanchezl do you have any thoughts?
Anything past 0.9.0.1 should work.
i use fabric orderer 1.0.1 version
is it the same?
Yes.
@jyellick @kostas I start fabric with multiple orderers and use the Java SDK to operate it. When I use orderer0 to create/join a channel, install/instantiate chaincode, and invoke, it is ok, but when I use orderer1 it can't invoke. I don't know if it's a Java SDK issue or a fabric orderer issue.
@asaningmaxchain Are you certain your orderer is configured correctly? It sounds like possibly you have started two solo orderers, instead of two kafka orderers
@jyellick i paste my config file,wait a moment
https://pastebin.com/F7sDGGVX
this is a crypto-config
https://pastebin.com/cQNBSR4U
this is configtx
https://pastebin.com/Z5uRqCpH
this is docker-compose file
Line 87 of `configtx.yaml` specifies the orderer type as `solo`
i am so ....
the other config is right?
@jyellick if the fabric network has two orderers, how do they keep their data consistent?
If there are multiple orderers, and they are specified to use `kafka` for consensus, then they communicate through the kafka brokers to achieve data consistency
Let me take an example: orderer0 sends a tx and orderer1 sends a tx; however, the two orderers each have their own TTC (time-to-cut) messages.
orderer0 and orderer1 each send their message and TTC message to kafka. Kafka gives them a total order, and then orderer0 and orderer1 receive the order back from Kafka, and cut into blocks in the same way.
the TTC message goes to kafka too? so each orderer sees the other's TTCs
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=E4jEpwMboZQ9GSXYZ) @jyellick you are right,i have checked it
Yes, the TTC message goes to Kafka too
I still don't understand how the two orderers keep the same data, can you give me a picture to explain it?
```
tx0 -> Orderer0 -> ######### |--> Orderer0
# Kafka # tx1, tx0--|
tx1 -> Orderer1 -> ######### |--> Orderer1
```
Transactions (and TTCs) are all ordered by Kafka, then each orderer sees the same stream of txes and TTCs, and makes the same decisions about blocks.
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=pmsHC7MqWBJcjzCsM) @jyellick I know, but how do the orderers make the same decisions?
They run the same code, which only makes decision based on the messages from Kafka (and they receive the same messages in the same order from Kafka). Since no decisions are about the local time, or other non-deterministic things, the decisions are the same.
so the orderers contain the same ledger?
Yes
Hi, I have already run the orderer images (sbft consensus, alpha2 version), but when I try to run the e2e_cli, I fail. It shows:
Traceback (most recent call last):
File "/usr/local/bin/docker-compose", line 11, in
I am facing issues when trying to customize fabric-samples balance transfer example with kafka configuration
I am doing setup on two machines. Org1 with orderer0 and kafka configuration on one machine and Org2 with orderer1 on second machine.
However it's working fine when not using kafka
even with kafka configuration, I can successfully create channel, join Org1 peer and install chaincode on org1 peer on machine 1. However on second machine I am not able to join org2 peers to the existing channel "mychannel" (created on machine 1)
When I checked the orderer1.example.com logs, I could see the warning messages below.
```
2018-01-16 11:32:27.847 UTC [orderer/consensus/kafka/sarama] func1 -> DEBU 7f0 producer/broker/1 state change to [retrying] on mychannel/0 because kafka server: Request was for a topic or partition that does not exist on this broker.
2018-01-16 11:32:27.847 UTC [orderer/consensus/kafka] try -> DEBU 7f1 [channel: mychannel] Retrying every 5m0s for a total of 12h0m0s
2018-01-16 11:33:12.736 UTC [orderer/common/server] Deliver -> DEBU 7f2 Starting new Deliver handler
2018-01-16 11:33:12.737 UTC [orderer/common/deliver] Handle -> DEBU 7f3 Starting new deliver loop for 172.18.0.1:42098
2018-01-16 11:33:12.737 UTC [orderer/common/deliver] Handle -> DEBU 7f4 Attempting to read seek info message from 172.18.0.1:42098
2018-01-16 11:33:12.737 UTC [orderer/common/deliver] deliverBlocks -> WARN 7f5 [channel: mychannel] Rejecting deliver request for 172.18.0.1:42098 because of consenter error
2018-01-16 11:33:12.738 UTC [orderer/common/deliver] Handle -> DEBU 7f6 Waiting for new SeekInfo from 172.18.0.1:42098
2018-01-16 11:33:12.738 UTC [orderer/common/deliver] Handle -> DEBU 7f7 Attempting to read seek info message from 172.18.0.1:42098
2018-01-16 11:33:12.738 UTC [orderer/common/deliver] Handle -> DEBU 7f8 Received EOF from 172.18.0.1:42098, hangup
2018-01-16 11:33:12.738 UTC [orderer/common/server] func1 -> DEBU 7f9 Closing Deliver stream
2018-01-16 11:34:26.128 UTC [orderer/common/server] Deliver -> DEBU 7fa Starting new Deliver handler
2018-01-16 11:34:26.128 UTC [orderer/common/deliver] Handle -> DEBU 7fb Starting new deliver loop for 172.18.0.1:42198
2018-01-16 11:34:26.128 UTC [orderer/common/deliver] Handle -> DEBU 7fc Attempting to read seek info message from 172.18.0.1:42198
2018-01-16 11:34:26.128 UTC [orderer/common/deliver] deliverBlocks -> WARN 7fd [channel: mychannel] Rejecting deliver request for 172.18.0.1:42198 because of consenter error
2018-01-16 11:34:26.128 UTC [orderer/common/deliver] Handle -> DEBU 7fe Waiting for new SeekInfo from 172.18.0.1:42198
2018-01-16 11:34:26.128 UTC [orderer/common/deliver] Handle -> DEBU 7ff Attempting to read seek info message from 172.18.0.1:42198
2018-01-16 11:34:26.128 UTC [orderer/common/deliver] Handle -> DEBU 800 Received EOF from 172.18.0.1:42198, hangup
2018-01-16 11:34:26.128 UTC [orderer/common/server] func1 -> DEBU 801 Closing Deliver stream
2018-01-16 11:36:31.467 UTC [orderer/common/server] Deliver -> DEBU 802 Starting new Deliver handler
2018-01-16 11:36:31.467 UTC [orderer/common/deliver] Handle -> DEBU 803 Starting new deliver loop for 172.18.0.1:42406
2018-01-16 11:36:31.467 UTC [orderer/common/deliver] Handle -> DEBU 804 Attempting to read seek info message from 172.18.0.1:42406
2018-01-16 11:36:31.467 UTC [orderer/common/deliver] deliverBlocks -> WARN 805 [channel: mychannel] Rejecting deliver request for 172.18.0.1:42406 because of consenter error
2018-01-16 11:36:31.467 UTC [orderer/common/deliver] Handle -> DEBU 806 Waiting for new SeekInfo from 172.18.0.1:42406
2018-01-16 11:36:31.467 UTC [orderer/common/deliver] Handle -> DEBU 807 Attempting to read seek info message from 172.18.0.1:42406
2018-01-16 11:36:31.468 UTC [orderer/common/deliver] Handle -> DEBU 808 Received EOF from 172.18.0.1:42406, hangup
2018-01-16 11:36:31.468 UTC [orderer/common/server] func1 -> DEBU 809 Closing Deliver stream
```
can someone pls help me in resolving this issue
@javrevasandeep I suspect there's misconfiguration of Kafka and/or orderer so that you get `Request was for a topic or partition that does not exist on this broker`. To diagnose, you probably could try http://kafka.apache.org/quickstart to make sure your kafka is working properly. Also, could you paste your orderer configuration to hastebin.com and post link here?
Thanks for your reply. I resolved the issue by reconfiguring kafka
I have one quick question. Is there any way to speed up the transactions by doing some configuration changes
It really depends on your business traffic and network setup. You could play around with block size, message count, and timeout. I would say spotting the bottleneck of a distributed system is not an easy job :(
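The knobs being referred to live in the `Orderer` section of `configtx.yaml`; the values below are illustrative, not recommendations. Shorter timeouts and smaller blocks reduce latency; larger blocks improve throughput under load.

```yaml
Orderer:
  BatchTimeout: 2s            # cut a block after this long, even if not full
  BatchSize:
    MaxMessageCount: 10       # ...or after this many messages
    AbsoluteMaxBytes: 10 MB   # hard cap on serialized block size
    PreferredMaxBytes: 512 KB # target block size
```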
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=eE3kpsGKzooLC7K3E) @jyellick how does the peer select an orderer, given multiple orderers?
@asaningmaxchain Orderer addresses are included in the channel genesis block file you used to join a peer, so that peer would know all the orderers and *round robin* them
@guoger I know the genesis block file contains the info about the orderer addresses; I want to know how the peer round-robins them. Please tell me the location of the source code.
@jyellick
@jyellick please take a look
@asaningmaxchain start with `deliverservice/deliveryclient.go`
can you show the full path
Ultimately it is this code which selects the endpoint to connect to: https://github.com/hyperledger/fabric/blob/master/core/comm/producer.go#L58-L85
https://github.com/hyperledger/fabric/blob/master/core/deliverservice/deliveryclient.go#L219 ultimately invokes https://github.com/hyperledger/fabric/blob/master/core/deliverservice/client.go#L131, which invokes the above
it's random, not round robin
I wouldn't call it random
https://github.com/hyperledger/fabric/blob/f36dd939b592ba43d6abda9b65d40c3ea9923b87/core/comm/producer.go#L71
There is a list of endpoints to connect to. It removes the endpoints which are known to be having problems. Then, of the list which have no known problems, it selects randomly.
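A sketch of that selection logic in shell form (the endpoint names are made up): drop the endpoints on the blacklist, then pick at random among the rest.

```shell
# Hypothetical endpoints; orderer1 is currently blacklisted as unhealthy.
ENDPOINTS="orderer0:7050 orderer1:7050 orderer2:7050"
BLACKLIST="orderer1:7050"
CANDIDATES=""
for e in $ENDPOINTS; do
  case " $BLACKLIST " in
    *" $e "*) ;;                       # known-bad: skip
    *) CANDIDATES="$CANDIDATES $e" ;;  # healthy: keep
  esac
done
# Pick one of the remaining candidates at random:
set -- $CANDIDATES
shift $(( $(date +%s) % $# ))          # crude random index
echo "chosen: $1"
```

So "random" here means random among the endpoints not known to be having problems, which in practice spreads load without repeatedly hitting a dead orderer.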
yes,i got it
@jyellick I don't know where to ask this question, it doesn't fit here, I will remove it
There are four peers. I have a chaincode that moves money from a to b, and the chaincode policy is any two of the peers. When peer1 sends a tx which moves 10$ from a to b, and peer2 and peer3 sign the tx, the client can always do it; it doesn't fit the logic.
@guoger
@asaningmaxchain I don't quite get this question.. please rephrase..
wait a moment
Has joined the channel.
@jyellick @sanchezl I got an odd issue
when I changed the kafka version up to 0.11
Can anyone pls help me with how to increase the performance of the network
Clipboard - 17 Jan 2018, 4:15 PM
Hi. I am having some problems. When I start the e2e_cli network and use the command (peer channel create -o orderer.example.com:7050 -c $CHANNEL_NAME -f ./channel-artifacts/channel.tx --tls $CORE_PEER_TLS_ENABLED --cafile $ORDERER_CA) to create a channel, it shows:
Error connecting: rpc error: code = 14 desc = grpc: RPC failed fast due to transport failure
Error: failed connecting: rpc error: code = 14 desc = grpc: RPC failed fast due to transport failure
how to solve it?
I entered the kafka container, and found default.replication.factor wasn't written into server.properties
Clipboard - 17 Jan 2018, 4:18 PM
however I did add it
and odder still, when I change the kafka version to 0.10.2.1,
default.replication.factor is indeed in server.properties
Clipboard - 17 Jan 2018, 4:35 PM
any clue?
@jyellick how can I build the image `hyperledger/fabric-tools` from the Makefile?
@grapebaba How are you building your Kafka images? In general, for these images the config file is built by a script which executes prior to starting the process. You might try investigating the script for your image.
@tingfa1 There is not enough information to debug your problem, all that I can tell is that there is a transport level failure (ie, network related, not fabric related) which is preventing the command from succeeding.
@asaningmaxchain `make docker` will build all of the fabric images, `make tools-docker` will build only `fabric-tools`
i got it,thx
@jyellick I have a simple question: in the e2e_cli example, the chaincode moves money from a to b; however, in the real world, the client can do it repeatedly, which doesn't fit the logic. Can you explain it?
@asaningmaxchain These questions are not related to the orderer component, I suggest you ask them in another channel like #fabric or #fabric-questions
@jyellick ok
@jyellick ok,i remove it,you can do it too
@jyellick
do you mean this file
?
Clipboard - 17 Jan 2018, 11:11 PM
@grapebaba Yes, that looks like the file I am referring to
yeah
I don't change this file
so I am surprised
it should add the same content whatever kafka version I use
but it is missing default.replication.factor
when i use 0.11 and 1.0.0 version
@grapebaba Yes, I agree this is odd, my best advice would be to add some debugging statements to that file which print the variable being read and the config parameter and file being written, and observe the output in the docker log
ok
@grapebaba , the server.properties might need an extra carriage return at the end, or alternatively, change the entry-point script to add a carriage return before it adds anything else to the file.
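A runnable illustration of that failure mode and fix (the file contents are made up): if `server.properties` lacks a trailing newline, a naive `echo >>` glues the new key onto the last existing line; appending a newline first, only when needed, avoids it.

```shell
CONF=$(mktemp)
printf 'broker.id=1' > "$CONF"        # note: no trailing newline
# $(tail -c1) is empty only when the file already ends with a newline,
# because command substitution strips a trailing newline:
if [ "$(tail -c1 "$CONF")" != "" ]; then echo >> "$CONF"; fi
echo 'default.replication.factor=3' >> "$CONF"
cat "$CONF"
```

Without the guard, the file would contain the single broken line `broker.id=1default.replication.factor=3`, which Kafka would silently ignore as an unknown key.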
@jyellick https://jira.hyperledger.org/browse/FAB-7829 please take a look
When the orderer makes a new block,
the orderer issues a deliver event to leading peers (through port 7053).
After getting the deliver event from the orderer, leading peers pull the new block from the orderer.
After getting the new block, leading peers send the new block to other peers through the gossip protocol.
Is my understanding correct?
Hi all, I've found a bug in configtxgen which fails silently, giving no indication of the source of failure, and have a simple repro instruction, as well as have identified what was causing the problem. On Jira, what would the appropriate component be for a bug in configtxgen? fabric-orderer, or something else?
@vdods fabric-tools
thanks for filing it!
No prob
@guoger Who should I assign it to?
raft
@vdods You may assign it to me
@jyellick It looks like it's been assigned to Min Luo, and is in-progress.
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=Zum8hFtBkKpZn38wM)
@Brucepark To be more precise, the Deliver service is an RPC which remains open and receives new blocks as a stream, as they are created.
So there is no event and then poll, the 'event' is the block itself.
The rest is correct
@jyellick :thumbsup: thank you. I didn't know RPC was used like that.
I have more questions about the RPC connection.
I think the leading peer tries to make a new connection when it is just elected as leading peer. In that case the leading peer's port 7053 is not used; the orderer's port 7050 is used for listening and making the new connection. Is my guess right?
I don't know when port 7053 is used and what that port is open to.
@Brucepark 7053 on peers is for events streaming. And yes, peer initiates connections with orderers at 7050
@guoger thank you
Can you tell me where port 7053 is opened? Who tries to connect to that port? Maybe the SDK?
Ah, sorry for my confusing answer earlier, I was referring to the ordering Deliver service which operates on 7050, thanks for clarifying my response @guoger.
Hi guys, can anyone please help me with a create-channel issue? I am using 2 orgs with 2 peers and 1 CA each and a Kafka-based ordering service. I have 2 orderers, 3 zookeepers and 4 Kafka brokers on the 1st VM, org1 peers and its CA on the 2nd VM, and org2 peers and its CA on the 3rd VM. I am invoking all the calls from the node-sdk on the 4th VM. I am getting an error while creating the channel:
error: [Orderer.js]: sendBroadcast - on error: "Error: 14 UNAVAILABLE: Connect Failed\n at createStatusError (/home/bteam/go/src/github.com/hyperledger/fabric-samples-latest/fabric-samples/smartPropertyKafka/node_modules/grpc/src/client.js:65:15)\n at ClientDuplexStream._emitStatusIfDone (/home/bteam/go/src/github.com/hyperledger/fabric-samples-latest/fabric-samples/smartPropertyKafka/node_modules/grpc/src/client.js:271:19)\n at ClientDuplexStream._readsDone (/home/bteam/go/src/github.com/hyperledger/fabric-samples-latest/fabric-samples/smartPropertyKafka/node_modules/grpc/src/client.js:237:8)\n at readCallback (/home/bteam/go/src/github.com/hyperledger/fabric-samples-latest/fabric-samples/smartPropertyKafka/node_modules/grpc/src/client.js:297:12)"
[2018-01-23 13:53:26.962] [ERROR] Create-Channel - Error: SERVICE_UNAVAILABLE
when i checked orderer0.example.com logs. I can see the below error
transport: http2Server.HandleStreams failed to receive the preface from client: read tcp 172.18.0.9:7050->10.0.1.6:47326: read: connection reset by peer
i am sharing my docker-compose files
docker-compose-orderer-kafka-vm1.txt
docker-compose-org1-vm2.txt
docker-compose-org2-vm3.txt
network-config-vm4.txt
@javrevasandeep `SERVICE_UNAVAILABLE` means that your Kafka cluster is not servicing requests. Usually this is for one of two reasons:
1. Your Kafka cluster is simply not configured correctly. Before attempting to run fabric ordering on Kafka, at a minimum complete the first 6 steps of the Kafka Quickstart guide at https://kafka.apache.org/quickstart to ensure that your Kafka is at least basically functional.
2. Your Kafka cluster is setup correctly, but, it has not been given sufficient time to start. Simply add retry logic, or a sleep to ensure that the system is fully started before attempting to transact.
@jyellick I resolved the issue. But I got another issue with orderer1.example.com: it seems the org2 peer is not able to detect orderer1 with the correct configuration. Transactions are successfully routed through orderer0.example.com now, but orderer1 is not getting connected. When I checked the peer0.org2.example.com logs, I found this:
```
UTC [endorser] ProcessProposal -> DEBU 37d Exit: request from%!(EXTRA string=10.0.1.6:44118)
2018-01-23 16:58:10.532 UTC [deliveryClient] StartDeliverForChannel -> DEBU 37e This peer will pass blocks from orderer service to other peers for channel mychannel
2018-01-23 16:58:10.536 UTC [ConnProducer] NewConnection -> ERRO 37f Failed connecting to orderer1.example.com:7050 , error: x509: certificate is valid for orderer0.example.com, orderer0, not orderer1.example.com
2018-01-23 16:58:10.541 UTC [deliveryClient] connect -> DEBU 380 Connected to orderer0.example.com:7050
2018-01-23 16:58:10.541 UTC [deliveryClient] connect -> DEBU 381 Establishing gRPC stream with orderer0.example.com:7050 ...
```
I also checked peer0.org1.example.com logs and found the below logs
```
UTC [endorser] ProcessProposal -> DEBU 37d Exit: request from%!(EXTRA string=10.0.1.6:32934)
2018-01-23 16:58:01.966 UTC [deliveryClient] StartDeliverForChannel -> DEBU 37e This peer will pass blocks from orderer service to other peers for channel mychannel
2018-01-23 16:58:01.975 UTC [deliveryClient] connect -> DEBU 37f Connected to orderer0.example.com:7050
2018-01-23 16:58:01.975 UTC [deliveryClient] connect -> DEBU 380 Establishing gRPC stream with orderer0.example.com:7050 ...
```
It sounds like you have set up orderer1 with a certificate claiming its name is orderer0; TLS negotiation will fail because of this
I double checked my docker compose files for orderer and org1 and org2 and couldn't find any mismatch with TLS root certs and TLS client certs
```yaml
  orderer0.example.com:
    container_name: orderer0.example.com
    image: hyperledger/fabric-orderer
    dns_search: .
    environment:
      - ORDERER_GENERAL_LOGLEVEL=debug
      - ORDERER_GENERAL_LISTENADDRESS=orderer0.example.com
      - ORDERER_GENERAL_GENESISMETHOD=file
      - ORDERER_GENERAL_GENESISFILE=/etc/hyperledger/configtx/genesis.block
      - ORDERER_GENERAL_LOCALMSPID=OrdererMSP
      - ORDERER_GENERAL_LOCALMSPDIR=/etc/hyperledger/crypto/orderer/msp
      - ORDERER_GENERAL_TLS_ENABLED=true
      - ORDERER_GENERAL_TLS_PRIVATEKEY=/etc/hyperledger/crypto/orderer/tls/server.key
      - ORDERER_GENERAL_TLS_CERTIFICATE=/etc/hyperledger/crypto/orderer/tls/server.crt
      - ORDERER_GENERAL_TLS_ROOTCAS=[/etc/hyperledger/crypto/orderer/tls/ca.crt, /etc/hyperledger/crypto/peerOrg1/tls/ca.crt]
      - ORDERER_KAFKA_RETRY_SHORTINTERVAL=1s
      - ORDERER_KAFKA_RETRY_SHORTTOTAL=30s
      - ORDERER_KAFKA_VERBOSE=true
    working_dir: /opt/gopath/src/github.com/hyperledger/fabric/orderers
    command: orderer
    ports:
      - 7050:7050
    volumes:
      - ./channel:/etc/hyperledger/configtx
      - ./channel/crypto-config/ordererOrganizations/example.com/orderers/orderer0.example.com/:/etc/hyperledger/crypto/orderer
      - ./channel/crypto-config/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/:/etc/hyperledger/crypto/peerOrg1
    extra_hosts:
      - peer0.org1.example.com:10.0.1.7
      - peer1.org1.example.com:10.0.1.7
      - peer0.org2.example.com:10.0.1.5
      - peer1.org2.example.com:10.0.1.5
    depends_on:
      - kafka0
      - kafka1
      - kafka2
      - kafka3
```
https://hastebin.com/uvedalenan.js
```yaml
  orderer1.example.com:
    container_name: orderer1.example.com
    image: hyperledger/fabric-orderer
    dns_search: .
    environment:
      - ORDERER_GENERAL_LOGLEVEL=debug
      - ORDERER_GENERAL_LISTENADDRESS=orderer1.example.com
      - ORDERER_GENERAL_GENESISMETHOD=file
      - ORDERER_GENERAL_GENESISFILE=/etc/hyperledger/configtx/genesis.block
      - ORDERER_GENERAL_LOCALMSPID=OrdererMSP
      - ORDERER_GENERAL_LOCALMSPDIR=/etc/hyperledger/crypto/orderer1/msp
      - ORDERER_GENERAL_TLS_ENABLED=true
      - ORDERER_GENERAL_TLS_PRIVATEKEY=/etc/hyperledger/crypto/orderer1/tls/server.key
      - ORDERER_GENERAL_TLS_CERTIFICATE=/etc/hyperledger/crypto/orderer1/tls/serve.crt
      - ORDERER_GENERAL_TLS_ROOTCAS=[/etc/hyperledger/crypto/orderer1/tls/ca.crt, /etc/hyperledger/crypto/peerOrg2/tls/ca.crt]
      - ORDERER_KAFKA_RETRY_SHORTINTERVAL=1s
      - ORDERER_KAFKA_RETRY_SHORTTOTAL=30s
      - ORDERER_KAFKA_VERBOSE=true
    working_dir: /opt/gopath/src/github.com/hyperledger/fabric/orderers
    command: orderer
    ports:
      - 8050:7050
    volumes:
      - ./channel:/etc/hyperledger/configtx
      - ./channel/crypto-config/ordererOrganizations/example.com/orderers/orderer1.example.com/:/etc/hyperledger/crypto/orderer1
      - ./channel/crypto-config/peerOrganizations/org2.example.com/peers/peer0.org2.example.com/:/etc/hyperledger/crypto/peerOrg2
    extra_hosts:
      - peer0.org1.example.com:10.0.1.7
      - peer1.org1.example.com:10.0.1.7
      - peer0.org2.example.com:10.0.1.5
      - peer1.org2.example.com:10.0.1.5
    depends_on:
      - kafka0
      - kafka1
      - kafka2
      - kafka3
```
@javrevasandeep Please do not post long snippets of files or logs to this channel, use a service like hastebin.com and post the link here
Sorry, will do that going forward.
so as you can see. I have everything in place related to certs for both of the orderers
```
- ORDERER_GENERAL_TLS_CERTIFICATE=/etc/hyperledger/crypto/orderer1/tls/serve.crt
```
Please inspect this certificate using openssl and examine its common name
Such as
```
openssl x509 -in serve.crt -text -noout
```
it's showing:
```
Certificate Information:
  Common Name: orderer1.example.com
  Subject Alternative Names: orderer1.example.com, orderer1
  Locality: San Francisco
  State: California
  Country: US
  Valid From: January 15, 2018
  Valid To: January 13, 2028
  Issuer: tlsca.example.com, example.com
  Serial Number: b1c5768409c16cf982babf8e5ad507e4
```
so this is correct, right?
It does look correct
> 2018-01-23 16:58:10.536 UTC [ConnProducer] NewConnection -> ERRO 37f Failed connecting to orderer1.example.com:7050 , error: x509: certificate is valid for orderer0.example.com, orderer0, not orderer1.example.com
Then the only explanation I can think of for this error is that perhaps your hostname resolution or network routing is incorrect.
That is, when the peer attempts to connect to orderer1, it is actually getting routed to orderer0
can you help me if i provide my docker files and network-config.yml in hastebin
https://hastebin.com/idecopecaf.coffeescript
docker compose for orderer and kafka
https://hastebin.com/xawalimefi.cs
docker compose for org1
@Vadim I am able to see this under network-config.yaml file for multiorderers
https://hastebin.com/ugonidamum.cs
docker compose for org2
https://hastebin.com/wicemepore.php
network-config.yaml
Did you get a chance to look at these files
I mean docker configurations
I have not looked through all of them yet
However, looking simply at your first link, I see the TLS CAs are inconsistently defined
@here Hi quick question when submitting a config update to add a new org who must sign such request current admin or both the current admin and new org admin?
@pmcosta1 admins of the current organizations
@pmcosta1 Look at the `mod_policy` for the group, this indicates which policy must be satisfied. By default, to add a new application org, a majority of organizations must have an admin sign the request.
@javrevasandeep In your docker compose files you are defining host addresses:
```
extra_hosts:
- orderer0.example.com:10.0.1.4
- orderer1.example.com:10.0.1.4
```
These are pointing to the same host, which explains your error
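Assuming orderer1 actually runs on a different machine (the 10.0.1.8 address below is hypothetical), the fix would look like:

```yaml
extra_hosts:
  - orderer0.example.com:10.0.1.4
  - orderer1.example.com:10.0.1.8   # must point at orderer1's real host
```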
@jyellick @rohitadivi thanks
I'm getting the following error:
`rejecting CONFIG_UPDATE because: Error processing updated config: Setting up the MSP manager failed, err The supplied identity is not valid, Verify() returned x509: certificate signed by unknown authority`
I'm using configtxlator to generate the delta; the new configuration has an entry with a different MSPID
If in the certificate section I pass the current certificates (I have only one) it works fine
When I replace them with the (base64 encoded) certificates of the new organisation I get the error above.
The new certificates have been generated by an intermediate CA
Am I missing some step here? @here
@pmcosta1 I strongly encourage you to create the MSP directory structure, add an entry for the org into your `configtx.yaml` file, and then run `configtxgen -printOrg`
It will appropriately encode all of your certificates into base64
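A sketch of what that looks like (the org name and MSP directory below are placeholders):

```yaml
# configtx.yaml fragment
Organizations:
  - &Org3
    Name: Org3MSP
    ID: Org3MSP
    MSPDir: crypto-config/peerOrganizations/org3.example.com/msp
```

Then `configtxgen -printOrg Org3MSP` emits the org definition as JSON, with the certificates base64-encoded, ready to splice into the config update.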
thanks again @jyellick
Happy to help. If you still find that problem, most likely one of the certificates you are including, either the intermediate CAs or admins was not appropriately signed by the CA
guys, when we use solo orderer service, can we have multiple orderers?
no
:wink:
multiple orderers only work against kafka?
yes
well technically, you can have many solo orderers serving different channels, but I guess your question was about several orderers in one channel
Has joined the channel.
ok
```yaml
# Required. List of orderers designated by the application to use for transactions on this
# channel. This list can be a result of access control ("org1" can only access "ordererA"), or
# operational decisions to share loads from applications among the orderers. The values must
# be "names" of orgs defined under "organizations/peers"
orderers:
  - orderer0.example.com
  - orderer1.example.com
```
@Vadim I found this under balance transfer example network-config.yaml
I think they talk about firewalls
Oh, got it. So they mean that every client who wants separate access to orderers needs its own network-config file on the client side. Is it something like that?
you can also just allow access to orderer0 from the network of org1 only, then org2 peers/clients won't be able to connect to it
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=3FM6YFepXyD7hvPkn) that too
Has joined the channel.
My orderer starts and then stops immediately. It's complaining that my x509 certificate has either expired or is not valid. It was working fine yesterday. My setup has 4 separate VMs, one each for the orderer, org1, org2 and org3, all running on separate physical boxes. Please see the attached screenshot for the exact error.
CertificateNotValid.png
@udaykhambadkone Is it possible your certificate expired?
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=rAmwPSktipwm6XHEe) @jyellick I just figured it out. The clock on the other machine was different from the machine running the orderer. It's working after I synced all the dates.
Hi guys, can you please help here..
I have some issues with persisting chaincode with a Kafka-based orderer. I have data persisted on all the containers. I reboot the system, the containers come back up, and invoking any transaction works if I use a solo orderer. With all the same configurations except a Kafka-based orderer, it doesn't work... Is there any additional configuration in the orderer/Kafka/peer to handle that? I'm sure it should be a trivial issue, but I'm struggling with it.
When chaincodes are not installed, the peers connect to Kafka easily even after a reboot, but when I have chaincode installed, after a reboot it refuses to connect.
Here are the logs:
```
[1d1 01-24 14:14:09.84 UTC] [github.com/hyperledger/fabric/orderer/common/deliver] handleStream.processStreamingRPC._AtomicBroadcast_Deliver_Handler.Deliver.Handle -> DEBU Starting new deliver loop
[1d2 01-24 14:14:09.84 UTC] [github.com/hyperledger/fabric/orderer/common/deliver] handleStream.processStreamingRPC._AtomicBroadcast_Deliver_Handler.Deliver.Handle -> DEBU Attempting to read seek info message
[1d3 01-24 14:14:09.85 UTC] [github.com/hyperledger/fabric/orderer/common/deliver] handleStream.processStreamingRPC._AtomicBroadcast_Deliver_Handler.Deliver.Handle -> WARN [channel: composerchannel] Rejecting deliver request because of consenter error
[1d4 01-24 14:14:09.85 UTC] [main] handleStream.processStreamingRPC._AtomicBroadcast_Deliver_Handler.Deliver.func1 -> DEBU Closing Deliver stream
[1d5 01-24 14:14:11.39 UTC] [github.com/hyperledger/fabric/orderer/kafka] setupChannelConsumerForChannel.retry.try -> DEBU [channel: composerchannel] Connecting to the Kafka cluster
```
Has joined the channel.
SERVICE_UNAVAILABLE to clients or consenter errors in the log generally indicates one of two things:
1. Your Kafka cluster is simply not configured correctly. Before attempting to run fabric ordering on Kafka, at a minimum complete the first 6 steps of the Kafka Quickstart guide at https://kafka.apache.org/quickstart to ensure that your Kafka is at least basically functional.
2. Your Kafka cluster is setup correctly, but, it has not been given sufficient time to start. Simply add retry logic, or a sleep to ensure that the system is fully started before attempting to transact.
@AshishMishra 1 Please make sure that you Kafka cluster is still functional per the instructions in that quick start guide. Most likely, there is a problem with your Kafka management scripts.
Has joined the channel.
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=W3Fxx3SdchBoAp62A) @jyellick
@jyellick thanks for the response.
1. My Kafka cluster is fully functional; it performs all the operations I expect from it w.r.t. Fabric until I reboot the system.
2. Cool, I will put some wait time in my orderer service and the peers as well. The docker-compose file already has a dependency on Kafka for these containers, but I'm getting the issue anyway; I will try a longer wait and also review the configuration once more.
@jyellick, I waited very long this time for Kafka to come up and then started my orderer. Faced the same issue. One observation: if at least one Kafka server from my cluster is running, then the orderer connects. If all the Kafka brokers are brought down, then I'm having this issue.
Hi, just wondering - is there a way to have a normal consortium with a std. channelcreationpolicy(sub_policy: "Admins"), create a channel with a admin user of the consortium channel and then change the administrator rights of that channel to one identity?
more detailed - create a channel from a normal consortium e.g. ```ChannelCreationPolicy -> sub_policy: "Admins"``` and then change the admin ownership of that channel ```mod_policy``` on the ```Application/groups``` section to one MSP_ID e.g. by using the ```n_out_of``` rule ?
Has joined the channel.
> If all the kafka clusters are brought down, then I 'm having this issue.
https://chat.hyperledger.org/channel/fabric-orderer?msg=uZtD5P4TdxDSTNjYj
@AshishMishra 1:
(Wow, it is insane that Rocket.Chat allows spaces on usernames given that the `@` won't work. Anyway, I digress.)
You may want to start reading from that link above.
If you're bringing all Kafka brokers down you're not operating the cluster the way you're supposed to.
https://chat.hyperledger.org/channel/fabric-orderer?msg=s5uJfj9uxSAvoiTJc
Has joined the channel.
I am trying to connect my org2 on a separate server to the existing org's (org1's) orderer so I will get a synced ledger, but I keep getting errors such as "Error connecting to $ORG_1_IP:7050 due to open {{the orderer .pem ca}}"
or "can't read the block: &{FORBIDDEN}" if I am not providing --tls and --cafile.
Does anyone know how to connect to the existing orderer to get the ledger synced?
And what is the meaning of --cafile? Is it a path on org2 or on org1 when connecting to org1's IP?
And which .pem is it? Is it the one in /crypto-config/OrdererOrganization/example/ca/?
This is my command in ./startFabric.sh on org2:
```
docker exec -e "CORE_PEER_MSPCONFIGPATH=/etc/hyperledger/msp/users/Admin@org2.example.com/msp" peer0.org2.example.com peer channel fetch config -o $ORG_1_IP:7050 -c composerchannel --tls --cafile $PATH_TO_PEM
docker exec -e "CORE_PEER_MSPCONFIGPATH=/etc/hyperledger/msp/users/Admin@org2.example.com/msp" peer0.org2.example.com peer channel join -b composerchannel.block
```
@pichayuthk I do not understand what you are trying to accomplish. You say "a second server", in fabric, there are multiple server types. These types are orderer, peer, and optionally fabric-ca
@jyellick sorry for the confusing explanation.
I have two orgs on separate machines and I already have my org1 up and running.
What I want to accomplish is to start org2 and connect it to org1.
I did some googling and saw one tutorial that uses "peer fetch" to connect to the existing network to get the same genesis block.
btw, I have changed the wording in the previous post.
Has joined the channel.
@pichayuthk I'd recommend you look at some of our examples, like https://github.com/hyperledger/fabric-samples/tree/release/first-network which show you how to have multiple organizations transacting with each other
Hi everybody! fabric-orderer implements a pluggable consensus algorithm, and that is great. So I tried to swap Kafka for solo in my test setup (2 orgs, 4 peers, 1 orderer, 1 channel). I generated a new transaction block and pushed it to the anchor peer. But the transaction block was refused with the error "Attempted to change the consensus type from kafka to solo after init." It turns out this is not possible. Is there a way to change consensus for a working fabric-orderer node? Or maybe I can somehow migrate the channel to a new orderer without losing peer data?
@kerokhin: Switching the consensus module on an established network is not supported.
@kostas Looks like I need to start over with new solo-based network and figure out how to transfer/backup current channel world state.
@jyellick I see the orderer added the TimeWindow property, can you explain it?
@asaningmaxchain123 Transactions have a timestamp specified by the client. It was not previously checked. Now, the orderer makes sure that the timestamp in the header is within `TimeWindow` of the current time at the orderer.
```
// function to extract the TLS cert hash from a channel header
extract := func(msg proto.Message) []byte {
	chdr, isChannelHeader := msg.(*cb.ChannelHeader)
	if !isChannelHeader || chdr == nil {
		return nil
	}
	return chdr.TlsCertHash
}
bindingInspector := comm.NewBindingInspector(mutualTLS, extract)
```
for now, the bindingInspector is noopBinding?
```
envTime := time.Unix(chdr.GetTimestamp().Seconds, int64(chdr.GetTimestamp().Nanos)).UTC()
serverTime := time.Now()
if math.Abs(float64(serverTime.UnixNano()-envTime.UnixNano())) > float64(ds.timeWindow.Nanoseconds()) {
	err := errors.Errorf("timestamp %s is more than the %s time window difference above/below server time %s. either the server and client clocks are out of sync or a relay attack has been attempted", envTime, ds.timeWindow, serverTime)
	return err
}
```
should envTime and serverTime always be equal?
They should always be 'close'
Within `TimeWindow`
but are envTime and serverTime always equal?
No, they will almost never be exactly equal
The client creates the tx payload, embedding the current time. The client then signs the payload, and sends it to the orderer.
Some time will have elapsed between the client setting the time in the header, and the orderer receiving the message
so the timeWindow is the gap between when the client sends the tx timestamp and when the server receives the tx?
if the gap is greater than the timeWindow, return an error
`TimeWindow` is the maximum amount of difference between the server time, and the timestamp
The client's clock could also be ahead of the server's clock.
But yes, if the time difference is greater than `TimeWindow`, the server rejects and returns an error.
can you tell me why this has to be done?
for what?
Once a transaction is committed to the blockchain, everyone can see it. So, anyone could simply send that transaction a second time. The transaction would fail to validate because of MVCC, but it would end up wasting space in the blockchain. So in essence, it is another layer to prevent replay attacks.
Replay is already prevented in v1.0, but waiting for the MVCC check is unnecessary, if it can be rejected based on timestamp.
Let me repeat it: it prevents replay attacks. I can accept that, but the part above I don't understand
the client can send the tx to the OSN for ordering, so when the committing peer pulls the block for validation, it doesn't matter to the orderer?
@asaningmaxchain123 I'll pm you
@guoger @jyellick i got it,thx
@guoger @jyellick it must check for the same tx, however the orderer doesn't seem to check it?
@jyellick thank you, but in the tutorial it uses the same orderer for org1 and org2.
In my case, I have org1 up and running (with its own orderer) and I want org2 to join the network from a separate machine.
@pichayuthk It is much easier if you bootstrap the system with two orgs. You may modify the example I linked you to add a second ordering org prior to bootstrap.
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=4KjajsB8bLBnbrR9G) @jyellick however, this check is performed in `deliver`, therefore not necessarily preventing MVCC attack?
@guoger Yes, this check is also performed in `Deliver`. If (somehow) an attacker were to intercept another client's `Deliver` request, then in v1.0 the attacker could simply replay this request indefinitely to gain access to the `Deliver` service.
To completely eliminate the replay attack against the `Deliver` service requires the introduction of mutual TLS and the TLS cert hash in the request, but the timestamp goes a long way to preventing many attacks.
I'm aware that we can add a new org. Can we delete an org too? For example, a company has resigned from the consortium.
yes
Has joined the channel.
Has joined the channel.
Can someone help me introduce Kafka in my docker configuration files?
Is it mandatory for a network with >4 peers?
In short, it's required for a Kafka-based ordering service. Please refer to the doc here: http://hyperledger-fabric.readthedocs.io/en/latest/kafka.html
A simple question: why do I find 2 orderers and 4 Kafka instances in every example?
you need at least 4 Kafka instances to be CFT; the doc I posted above explains the reason
ok but I don't understand the need of 2 orderers
Has joined the channel.
hi guys, it seems that Org1, Org2 and Org3 use the same orderer. Is there a many-to-one mapping between organizations and orderers?
> ok but I don't understand the need of 2 orderers
There is no _need_. It's just an example to show how a network with 2 orderers would work.
> hi guys, it seems that Org1,Org2,Org3 use the same orderer, is there a multi-to-one mapping between organizations and Orderer?
In whatever it is that you reference, the answer is obviously yes. You can of course also make it a one-to-one mapping if you wish.
Hi , I want to update my channel config's mod_policy such that only one particular organization's admin has ability to add or remove other organization's . Any example or format that I can follow would be appreciated . Thanks in advance
@rock_martin
> Hi , I want to update my channel config's mod_policy such that only one particular organization's admin has ability to add or remove other organization's . Any example or format that I can follow would be appreciated . Thanks in advance
The channel config has a mod_policy per element. If I catch your meaning correctly, you would like for only one organization to have control over operations like adding and removing members from the network. I assume however that you would still want individual members to be able to manage their own CRLs, Anchor Peer addresses etc.
If this is the case, you will want to copy the definition for the `Admins` policy of the org you wish to have control, and replace the definition for the `Admins` policy at the /Channel/Application level.
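A sketch of such a replacement in configtx.yaml terms (the MSP ID is a placeholder; the default being replaced is the ImplicitMeta policy `MAJORITY Admins`):

```yaml
Application:
  Policies:
    Admins:
      Type: Signature
      Rule: "OR('Org1MSP.admin')"
```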
Has joined the channel.
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=EfK7xpCn8SNjyYwQg) Hi, what I mean is each Org uses independent Orderer, is it possible?
Hi, can someone explain to me how the trio of 3 Zookeepers, 4 Kafkas and 1 orderer works...
I also want to understand why there are 3 Zookeepers, 4 Kafkas and only 1 orderer?
Thx !
Has joined the channel.
Hi, I am trying to deploy Kafka-based ordering. Can someone please point me toward a good example of a sample application using Kafka-based ordering? I have successfully built the network, and now I need some example application to learn how to use Kafka-based ordering in Hyperledger.
@NeerajKumar `/opt/gopath/src/github.com/hyperledger/fabric/examples/e2e_cli/docker-compose-e2e.yaml`
Has joined the channel.
Hi,
I plan to work with Hyperledger for my thesis project, and I would like to know if there is a collection of architectures somewhere; in particular, I am looking for examples where distributed ordering service nodes are used. Any links would help me, since a modelling endeavour beforehand may make sense.
Thanks in advance!
Has joined the channel.
Has joined the channel.
Has joined the channel.
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=Ht8DeAawQwGfCqmKJ) @jyellick In the highlighted text in the screenshot below: this is the policies attribute you asked me to replace. I just want to understand the default policy. What is meant by type: 3 and rule: MAJORITY? (I am guessing it requires signatures from a majority of organizations' admins to change the channel config.)
image.png
@zhasni
> I want also understand why there's 3 zookeeper 4 kafka and only 1 orderer ?
In production you should deploy more than one orderer. In the interest of keeping the number of containers smaller in this example there is only one, but for CFT among the orderer processes you should have at least 2.
@MartinKrmer
> I plan to work with hyperledger with for my thesis project and I would like to know if there is somewhere a collection of architectures, in particular, I am looking for examples where a "distributed ordering service nodes" are used. Any links may help me since modelling endeavour before may make sense.
A good place to start is http://hyperledger-fabric.readthedocs.io/en/latest/arch-deep-dive.html?highlight=architecture
@rock_martin
> I just want to understand the default policy, What is meant by type: 3 and rule:majority (I am guessing it requires signatures of majority of organization's admin's to change the channel config).
Take a look at https://github.com/hyperledger/fabric/blob/master/protos/common/policies.proto
There is an enum with policy types. Today, only types 1 (SIGNATURE) and 3 (IMPLICIT_META) are supported. These correspond to the `SignaturePolicyEnvelope` and `ImplicitMetaPolicy` messages defined in this file. So, 3 implies it is an ImplicitMetaPolicy; you may read the descriptions in the proto file for some additional understanding. You may also wish to look at http://hyperledger-fabric.readthedocs.io/en/latest/policies.html
thx @jyellick I was thinking the same way (1 orderer is not enough); I want at least 4 orderers
can you explain me the role of the kafka and zookeeper in this implementation ? (4 kafka, 3 zookeeper and 1 orderer)
and how everything articulates around the orderer...
@zhasni You should start from your failure requirements and work backwards. How many nodes should be able to fail and keep your service available?
Let's assume this number is f=1
This means, you should have 1+f = 2 orderers
For Kafka, there are two important factors: the replication factor (RF), i.e. how many copies of the data are eventually persisted to the cluster, and the number of in-sync replicas (ISR), i.e. the number of replicas which must have the data before it is considered committed.
So, ISR = 1 + f = 2, so one node may fail, and there is still another with the current committed data. And, RF = ISR + f = 3, because there needs to be another node ready to catch up.
If there are not enough brokers to satisfy the RF, then creating new topics will fail. Creating new topics is required to create new channels, so we need the number of Kafka brokers to be RF+f=4
So, in total we require 4 Kafka brokers to allow 1 broker to crash without affecting the system.
Finally, Zookeeper deployments are always done in odd numbers, and to support one failure, while keeping a majority of nodes alive to vote requires minimally 3 ZK nodes.
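The sizing rules above can be written out as a small calculation. This is just a sketch of the rule-of-thumb from this discussion, not code from the Fabric repository:

```python
def minimum_cluster_sizes(f):
    """Minimal node counts for a Kafka-based ordering service that
    tolerates f crashed nodes, following the reasoning above."""
    orderers = 1 + f        # one extra orderer per tolerated failure
    isr = 1 + f             # in-sync replicas: a survivor still holds committed data
    rf = isr + f            # replication factor: spare replicas ready to catch up
    brokers = rf + f        # enough brokers left to create topics (new channels)
    zookeepers = 2 * f + 1  # odd-sized ensemble keeping a majority after f failures
    return orderers, isr, rf, brokers, zookeepers

# For f = 1 this reproduces the numbers above: 2 orderers, ISR=2, RF=3,
# 4 Kafka brokers and 3 Zookeeper nodes.
```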
OK, thank you very much for the detailed explanation
is there a documentation about this somewhere ?
There is a multitude of Kafka documentation available on the Kafka project site
In general, we try to avoid documenting Kafka procedures in Fabric; there is such high-quality and exhaustive documentation provided by the Kafka project that it would be counterproductive to attempt to reproduce it.
ok I understand, I'll check Kafka project site then.
ty
@zhasni thanks for the response, but that's not what I was looking for; actually I need a sample application that uses Kafka-based ordering. Please suggest something in that context.
@NeerajKumar The application should generally be unaware of the type of ordering consensus mechanism. From an application perspective, the client should simply select an orderer from the available orderers randomly, and send transactions to it. There is no Kafka specific logic required.
what about the producer and consumer app logic then
how will an orderer be selected out of the available orderers?
@NeerajKumar The producer and consumer are kafka concepts which the fabric orderer hides from the application.
great....:sunglasses:
The application simply needs a list of orderer nodes it may send transactions to, and it should select randomly among them. The SDKs have examples of this.
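A minimal sketch of that client-side selection, assuming a hypothetical list of orderer endpoints (the names below are illustrative, not from any real network):

```python
import random

orderers = [
    "orderer0.example.com:7050",
    "orderer1.example.com:7050",
    "orderer2.example.com:7050",
]

def pick_orderer(endpoints):
    """Select an orderer uniformly at random; the SDK examples follow
    the same idea when choosing where to broadcast a transaction."""
    if not endpoints:
        raise ValueError("no orderer endpoints configured")
    return random.choice(endpoints)

target = pick_orderer(orderers)
```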
but I have 3 orderers and the network-config sets only one. Can't really find any example of that; need to hit and try, I guess
@jyellick please look at this config, is it okay even if i use kafka based ordering
{
"network-config": {
"orderer": {
"url": "grpcs://localhost:7050",
"server-hostname": "orderer.example.com",
"tls_cacerts": "/home/neeraj/Documents/Belrium-KYC-Layer-App/Bel-KYC-Layer-App-KafkaBasedNetwork/artifacts/crypto-config/ordererOrganizations/example.com/orderers/orderer.example.com/tls/ca.crt"
},
"org1": {
"name": "peerOrg1",
"mspid": "Org1MSP",
"ca": "https://localhost:7054",
"peers": {
"peer1": {
"requests": "grpcs://localhost:7051",
"events": "grpcs://localhost:7053",
"server-hostname": "peer0.org1.example.com",
"tls_cacerts": "/home/neeraj/Documents/Belrium-KYC-Layer-App/Bel-KYC-Layer-App-KafkaBasedNetwork/artifacts/crypto-config/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/tls/ca.crt"
},
"peer2": {
"requests": "grpcs://localhost:7056",
"events": "grpcs://localhost:7058",
"server-hostname": "peer1.org1.example.com",
"tls_cacerts": "/home/neeraj/Documents/Belrium-KYC-Layer-App/Bel-KYC-Layer-App-KafkaBasedNetwork/artifacts/crypto-config/peerOrganizations/org1.example.com/peers/peer1.org1.example.com/tls/ca.crt"
}
},
"admin": {
"key": "/home/neeraj/Documents/Belrium-KYC-Layer-App/Bel-KYC-Layer-App-KafkaBasedNetwork/artifacts/crypto-config/peerOrganizations/org1.example.com/users/Admin@org1.example.com/msp/keystore",
"cert": "/home/neeraj/Documents/Belrium-KYC-Layer-App/Bel-KYC-Layer-App-KafkaBasedNetwork/artifacts/crypto-config/peerOrganizations/org1.example.com/users/Admin@org1.example.com/msp/signcerts"
}
},
"org2": {
"name": "peerOrg2",
"mspid": "Org2MSP",
"ca": "https://localhost:8054",
"peers": {
"peer1": {
"requests": "grpcs://localhost:8051",
"events": "grpcs://localhost:8053",
"server-hostname": "peer0.org2.example.com",
"tls_cacerts": "/home/neeraj/Documents/Belrium-KYC-Layer-App/Bel-KYC-Layer-App-KafkaBasedNetwork/artifacts/crypto-config/peerOrganizations/org2.example.com/peers/peer0.org2.example.com/tls/ca.crt"
},
"peer2": {
"requests": "grpcs://localhost:8056",
"events": "grpcs://localhost:8058",
"server-hostname": "peer1.org2.example.com",
"tls_cacerts": "/home/neeraj/Documents/Belrium-KYC-Layer-App/Bel-KYC-Layer-App-KafkaBasedNetwork/artifacts/crypto-config/peerOrganizations/org2.example.com/peers/peer1.org2.example.com/tls/ca.crt"
}
},
"admin": {
"key": "/home/neeraj/Documents/Belrium-KYC-Layer-App/Bel-KYC-Layer-App-KafkaBasedNetwork/artifacts/crypto-config/peerOrganizations/org2.example.com/users/Admin@org2.example.com/msp/keystore",
"cert": "/home/neeraj/Documents/Belrium-KYC-Layer-App/Bel-KYC-Layer-App-KafkaBasedNetwork/artifacts/crypto-config/peerOrganizations/org2.example.com/users/Admin@org2.example.com/msp/signcerts"
}
}
}
}
the java sdk provides it
@asaningmaxchain123 where in the java-sdk, please point...
can you join the fabric-sdk-java room
you can search it
#fabric-sdk-java
Hello Fabric Experts, I am facing a bottleneck on the peer side when I am doing load testing with Fabric. When I initiate load and send transactions to the orderer, I find that the rate at which blocks are created on the orderer is not on par with the peer nodes. I find that block creation on the peer always falls far behind that of the orderer. During the test I simultaneously grepped the logs of the orderer and the peer and tracked `Wrote block ... ` and `Created block [XXX] with YY transaction(s)` in the orderer and peer logs. I tried increasing the max transactions per block to reduce the number of blocks broadcast to the peers, but it did not seem to help much. Since I also need to receive events from the committing peers, it affects my overall throughput.
I am using Fabric v1.0, Any thoughts on this on how I can achieve higher throughput? Thanks. :) @yacovm @kostas
@ArnabChatterjee, how about the hits per second you applied? and the failure rate and delay? also which consensus did you use?
@Glen - I am using 50 concurrent users via Jmeter, with 2 simultaneous node servers sending transactions to the orderer. I am able to achieve a 100% success transaction throughput of 130 tps in Jmeter. The delay in block creation between the orderer and the peer seems to diverge over time, and at times as high as 10-15 minutes.
I could not receive any transactions' event from the committing peer. Almost all of them resulted in a timeout (of around 180 second). I am using kafka orderering with 4 kafka & 3 ZK nodes.
in fact the peer and orderer use a bidirectional gRPC stream to communicate blocks, so blocks on the orderer should be delivered to the peer in time; that's my understanding
@ArnabChatterjee I would not bother trying to performance tune v1.0 too much. There are some very significant changes in v1.1 which boost throughput both at the orderer and peer.
I'd strongly suggest that you take the v1.1 alpha and work from there
http://hyperledger-fabric.readthedocs.io/en/v1.1.0-alpha/upgrade_to_one_point_one.html @jyellick is this the updated document for v1.1?
Yes, that is the alpha version of that document
@Glen - I understand. But somehow I feel that there might be a delay in VSCC execution too.
@ArnabChatterjee In v1.0, validation of signatures in the VSCC is done serially. This is probably the most significant bottleneck in the system for a multi-cored system. Beginning in v1.1 going forward, VSCC validation is done in parallel which increases the peer's block commit rate considerably.
@jyellick - Thank you very much for the insight. I will try upgrading to v1.1
Has joined the channel.
Question from @NagatoPeinI1 :
I'm trying to add a new organization (org4) with 2 peers to a channel of 3 organizations (org1, org2 and org3 respectively), each having 2 peers. While doing that using the configtxlator tool in the docker cli container, I'm getting this error while getting the signature from the 3rd organization (org3): Error: got unexpected status: BAD_REQUEST -- Error authorizing update: Error validating ReadSet: Readset expected key [Groups] /Channel/Application at version 1, but got version 2. And after that, when I'm trying to add one more organization, it gives this error: Error: got unexpected status: BAD_REQUEST -- Error authorizing update: Error validating DeltaSet: Policy for [Groups] /Channel/Application not satisfied: Failed to reach implicit threshold of 3 sub-policies, required 1 remaining
Has joined the channel.
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=Ht8DeAawQwGfCqmKJ) @jyellick I replaced the 'Admins' policy under the 'policies' field of /Channel/Application with the 'Admins' policy under 'policies' of org1. I wasn't able to stop other organizations from adding organizations. But when I completely replaced the 'policies' field of /Channel/Application with the 'policies' field of org1, I was able to achieve that only the admin of org1 can add or remove other organizations. The problem that arose, however, was that those new organizations weren't able to add their peers using the admin context of their own organization, except for org1. When replacing the whole 'policies' field, org2 and org3 cannot add peers using their own admin context; it shows an error saying "FORBIDDEN".
Hi,
sorry, I am a newbie, but can someone just answer my question:
In case I wanna use multichannel consensus.... Do I need one Ordering service per Channel?
@MartinKrmer what do you mean by "multichannel consensus"?
Well, the client sends information to the peer groups via a few channels -> some data may be isolated to certain channels -> the peer groups do not receive exactly the same information.
channels are just independent blockchains
I'm not sure what you want to achieve
The goal is data isolation for confidentiality => so that more sensitive data can be saved on the blockchain. But my original question is whether having more than one ordering service would contribute to that. Sorry, I am bad at explaining.
what about sidedb?
Has joined the channel.
Hello everyone, I am working on a Hyperledger Fabric consensus network. I wanted to know if it is possible to implement a rollback mechanism in case of transaction failure and, if yes, which type of consensus has to be employed, or whether some other properties also have to be taken into consideration?
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=aeqgnwRWx8NE6iQk4)
@NagatoPeinI1 This occurs when you compute the config update from an older version of the config. Most likely, you retrieved the latest config, sent an update, and then tried to send a second update.
Each update creates a new version of the config, so you must re-pull it and base your config update computations off of it.
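The version check behind that BAD_REQUEST can be illustrated with a toy validator. `validate_readset` is a hypothetical helper written for this explanation, not a Fabric function; it only mimics the shape of the error:

```python
def validate_readset(readset_versions, committed_versions):
    """Reject a config update computed against a stale config: every key
    in the update's read set must match the currently committed version."""
    for key, expected in readset_versions.items():
        actual = committed_versions.get(key)
        if actual != expected:
            raise ValueError(
                f"Readset expected key {key} at version {expected}, "
                f"but got version {actual}")

# An update computed from version 1 fails once a later update has bumped
# /Channel/Application to version 2, which is why the config must be
# re-pulled before computing the next update.
```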
> @jyellick I replaced the 'admins' policy under 'policies' field of /Channel/Application with 'admins' policy under 'policies' of org1. I wasn't able to stop other organizations from adding other organizations. But when I totally replaced 'policies' field of /Channel/Application with 'policies' field of org1, I was able to achieve that only admin of org1 can add or remove other organization. The problem however came forward that those new organizations weren't able to add their peers using the admin's context of their own organization except for org1. In the case when we are replacing the whole 'policies' org2 and org3 cannot add peers putting their own admin's context. It shows error saying as "FORBIDDEN".
@CodeReaper Some step of the procedure must not be correct. This is a scenario we have definitely tested. You should leave the Readers and Writers policies alone, only modify the Admins policy for the /Channel/Application level.
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=zvXxwNNEpuxGBYfjN) @jyellick we replaced the Admins and the Writers policies, and it seems to function properly. Only one organization can add or remove other orgs, and the other orgs' admins are also able to add peers on their own. But we had to replace both the Admins and the Writers policies. We checked it twice changing only the Admins policy; it didn't seem to work.
What could be the shortcomings of changing /Channel/Application's Writers policy to one org's Writers policy?
@CodeReaper The Writers policy is controlling who can send transactions via the orderer's `Broadcast` service. By replacing this policy, you are preventing all other orgs from submitting any transactions, including config updates.
> We checked it twice changing only the admins policy, it didn't seem to work.
All I can say is that there is something wrong. Are you certain that the admin for the org in the policy is not signing?
If you post via hastebin, or DM me your orderer logs at debug level, I can most likely tell you what is going wrong.
Ok, I'll try again.
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=ybCEx6S5HQiHdg7se) @jyellick Any other shortcomings, my current architecture includes nodejs apps to submit transactions to orderers. So this would still not affect me just now, any other shortcomings??
@CodeReaper The Writers policy controls who may submit transactions to ordering (and invoke chaincodes). By making only one org able to satisfy this policy, only members of this org may do those two things. So, if the other orgs need to submit transactions (via node or any other mechanism), you will encounter problems.
oh, so by other orgs you mean the peers and the users of those orgs as well. I see, that's a problem; I'll try replacing only the Admins policy again. Just to confirm I'm doing everything right: I should copy one org's whole Admins policy and replace the /Channel/Application Admins policy completely?
I'll show you the screenshots if goes wrong, Thanks very much.
Yes, that procedure sounds correct to me.
Has left the channel.
Has joined the channel.
Still, I am facing many issues with Kafka-based ordering; someone please suggest a solution. I have attached the network config file and docker-compose file. If you can sense the issue (which is SERVICE_UNAVAILABLE at the time of channel creation), please point me in the right direction.
{
"network-config": {
"orderer": {
"url": "grpcs://localhost:7050",
"server-hostname": "orderer0.example.com",
"tls_cacerts": "/home/neeraj/Documents/Belrium-KYC-Layer-App/Bel-KYC-Layer-App-KafkaBasedNetwork/artifacts/crypto-config/ordererOrganizations/example.com/orderers/orderer0.example.com/tls/ca.crt"
},
"org1": {
"name": "peerOrg1",
"mspid": "Org1MSP",
"ca": "https://localhost:7054",
"peers": {
"peer1": {
"requests": "grpcs://localhost:7051",
"events": "grpcs://localhost:7053",
"server-hostname": "peer0.org1.example.com",
"tls_cacerts": "/home/neeraj/Documents/Belrium-KYC-Layer-App/Bel-KYC-Layer-App-KafkaBasedNetwork/artifacts/crypto-config/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/tls/ca.crt"
},
"peer2": {
"requests": "grpcs://localhost:7056",
"events": "grpcs://localhost:7058",
"server-hostname": "peer1.org1.example.com",
"tls_cacerts": "/home/neeraj/Documents/Belrium-KYC-Layer-App/Bel-KYC-Layer-App-KafkaBasedNetwork/artifacts/crypto-config/peerOrganizations/org1.example.com/peers/peer1.org1.example.com/tls/ca.crt"
}
},
"admin": {
"key": "/home/neeraj/Documents/Belrium-KYC-Layer-App/Bel-KYC-Layer-App-KafkaBasedNetwork/artifacts/crypto-config/peerOrganizations/org1.example.com/users/Admin@org1.example.com/msp/keystore",
"cert": "/home/neeraj/Documents/Belrium-KYC-Layer-App/Bel-KYC-Layer-App-KafkaBasedNetwork/artifacts/crypto-config/peerOrganizations/org1.example.com/users/Admin@org1.example.com/msp/signcerts"
}
},
"org2": {
"name": "peerOrg2",
"mspid": "Org2MSP",
"ca": "https://localhost:8054",
"peers": {
"peer1": {
"requests": "grpcs://localhost:8051",
"events": "grpcs://localhost:8053",
"server-hostname": "peer0.org2.example.com",
"tls_cacerts": "/home/neeraj/Documents/Belrium-KYC-Layer-App/Bel-KYC-Layer-App-KafkaBasedNetwork/artifacts/crypto-config/peerOrganizations/org2.example.com/peers/peer0.org2.example.com/tls/ca.crt"
},
"peer2": {
"requests": "grpcs://localhost:8056",
"events": "grpcs://localhost:8058",
"server-hostname": "peer1.org2.example.com",
"tls_cacerts": "/home/neeraj/Documents/Belrium-KYC-Layer-App/Bel-KYC-Layer-App-KafkaBasedNetwork/artifacts/crypto-config/peerOrganizations/org2.example.com/peers/peer1.org2.example.com/tls/ca.crt"
}
},
"admin": {
"key": "/home/neeraj/Documents/Belrium-KYC-Layer-App/Bel-KYC-Layer-App-KafkaBasedNetwork/artifacts/crypto-config/peerOrganizations/org2.example.com/users/Admin@org2.example.com/msp/keystore",
"cert": "/home/neeraj/Documents/Belrium-KYC-Layer-App/Bel-KYC-Layer-App-KafkaBasedNetwork/artifacts/crypto-config/peerOrganizations/org2.example.com/users/Admin@org2.example.com/msp/signcerts"
}
}
}
}
is this network-config file all okay?
@NeerajKumar it would be nice if you could use hastebin.com and paste link here. Or at least format your code with markdown syntax
Also, `SERVICE_UNAVAILABLE` often indicates that your kafka cluster is not configured properly. Pls make sure it works as expected with http://kafka.apache.org/quickstart
@guoger here is the hastebin link
https://hastebin.com/bohemenano.cs
and for docker-compose
https://hastebin.com/azukiguqur.cs
please review
it's a cluster of three Kafka brokers with three Zookeeper instances and 3 orderers as well
Has joined the channel.
these two links seem to contain identical content... have you tried the link I posted above? also, what's the orderer error log?
there is no error on any of the containers logs
only debug and info logs
sorry, this is the network-config.json link
https://hastebin.com/huteyofose.json
should i declare all three orderers in the network-config.json as well...
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=kgJq7EuxL4GHjS249) @NeerajKumar sorry I wasn't clear, I meant any kind of log
there will at least be warning AFAIK
latest logs of orderer0
https://hastebin.com/enewiwawor.tex
logs of orderer1 and orderer2 are same as:
https://hastebin.com/ifaguciqiq.cs
logs of kafka 0, 1, 2 are same as:
[2018-02-01 06:53:03,174] INFO [Group Metadata Manager on Broker 0]: Removed 0 expired offsets in 0 milliseconds. (kafka.coordinator.GroupMetadataManager)
and no logs on the zookeeper for this request
now I came to the point where I think it's an issue with the broadcast right before the channel creation, with no Kafka service available to it
but how and why is not clear
@NeerajKumar from the orderer log you posted, it's very likely that your client is TLS enabled but the orderer is not
what does your docker-compose-base.yaml look like
docker-compose-base.yaml
https://hastebin.com/keriqakase.cs
docker-compose.yaml
https://hastebin.com/lexalugoza.cs
and the .env file
https://hastebin.com/qimuwadotu.ini
and how do you create channel?
through a REST API call from the node-sdk
yeah i have seen that TLS was not enabled at the orderer
currently checking the fix
or disable TLS on sdk side
still, not working
@guoger dude thanks, the problem is almost solved, or at least the channel is created. The issue was that I was setting the wrong volumes for the genesis block and the tls and msp directories in docker-base.yaml, and that is why it was failing even after saying TLS enabled is true. But there is one more thing; hope you can help me out with that too
i have set these
- CORE_PEER_TLS_CLIENTAUTHREQUIRED=true
- CORE_PEER_TLS_CLIENTROOTCAS_FILES=[/var/hyperledger/tls/ca.crt]
but this location
CORE_PEER_TLS_CLIENTROOTCAS_FILES=[/var/hyperledger/tls/ca.crt]
became this /etc/hyperledger/fabric/[/var/hyperledger/tls/ca.crt] which the system can not locate and the peer container is not able to come up
any idea what should be the right location for this "CORE_PEER_TLS_CLIENTROOTCAS_FILES"
and what is this actually?
you need to mount a separate volume there where peers will keep their certificates
you mount the binaries to /etc/hyperledger/fabric and have another volume mount, some folder "certs" or whatever mounted to /var/hyperledger
ideally you'd also separate them between different peers "certs/peerX" for peer X
but what is client root cert?
my guess is that it will be the TLS root chain
http://hyperledger-fabric-ca.readthedocs.io/en/latest/users-guide.html#getting-a-ca-certificate-chain-from-another-fabric-ca-server
@dave.enyeart Thank you for your answer
Has joined the channel.
I'm trying to add new organization (org4) with 2 peers in a channel of 3 organizations (org1, org2 and org3 respectively) having 2 peers each. While doing that using configtxlator tool in docker cli container I'm getting this error while getting signature from 3rd organization (org3): Error: got unexpected status: BAD_REQUEST -- Error authorizing update: Error validating ReadSet: Readset expected key [Groups] /Channel/Application at version 1, but got version 2
@BOGATIM This almost definitely means that you have sent one config update, but have not pulled the config afterwards. Every time the config is updated, be sure to pull a new copy, and compute the config update from that.
@jyellick Adding other orgs with admin of one org only worked perfectly thanks. I have additional questions-
I noticed that when I added a new org to the running blockchain, it automatically added one of its peers. I didn't have to join that peer separately, like I did with the ones following it. How did it know which peer to add, if I didn't give it any information??
Hi @kostas @guoger, is the Orderer.PreferredMaxBytes parameter Kafka-broker specific? How does it affect the throughput?
https://github.com/hyperledger/fabric-samples/blob/release/first-network/configtx.yaml#L125
Hi, I was going through this doc http://hyperledger-fabric.readthedocs.io/en/latest/policies.html and couldn't understand exactly what ImplicitMetaPolicy is. Can someone please help me with that?
@CodeReaper
> i didn't have to join that peer separately, like I did with others following it. How did it know which peer to add, if I didn't it give it any information??

This would indeed be magical, but I can fairly guarantee that it did not happen. Perhaps you forgot about some component of your scripts? The only way a peer joins a channel is via the JoinChannel API invoked by its admin.
@niteshsolanki
> Is Orderer.PreferredMaxBytes parameter kafka-broker specific ?how does it affects the throughput ?
It is not Kafka specific. This number indicates the threshold for how large a block can get before it is cut due to size. For instance, perhaps you wish for blocks to be on average no more than 4MB; this would be your preferred max block size, and once a block reached 4MB, it would be cut and the next block started. However, consider a 10MB transaction: any block which contains this transaction will significantly exceed 4MB. So, it would be inaccurate to call `PreferredMaxBytes` something like `BlockSize`, because the block may actually be much larger. This is where the absolute max parameters come into play.
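The interaction between the preferred size and the message-count limit can be sketched as a toy batch cutter. This is a simplification written for this explanation (timeout-based cutting is omitted), not Fabric's actual blockcutter:

```python
def cut_batches(tx_sizes, preferred_max_bytes, max_message_count):
    """Group transactions (given as byte sizes) into batches. A batch is cut
    when appending the next transaction would exceed preferred_max_bytes, or
    when the batch reaches max_message_count. An oversized transaction still
    lands in a batch by itself, so a block can exceed the "preferred" size."""
    batches, current, current_bytes = [], [], 0
    for size in tx_sizes:
        if current and current_bytes + size > preferred_max_bytes:
            batches.append(current)          # cut before appending
            current, current_bytes = [], 0
        current.append(size)
        current_bytes += size
        if len(current) >= max_message_count:
            batches.append(current)          # cut on message count
            current, current_bytes = [], 0
    if current:
        batches.append(current)
    return batches

# A 10-byte transaction with a 4-byte preferred max ends up in its own,
# oversized batch: cut_batches([1, 1, 1, 10, 1], 4, 10) -> [[1, 1, 1], [10], [1]]
```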
> Hi, i was going through this docs http://hyperledger-fabric.readthedocs.io/en/latest/policies.html and couldnt understand exactly what ImplicitMetaPolicy is, can someone please help me with that
@kapilAtrey An implicit meta policy is one which evaluates sub-policies. For instance, the default /Channel/Application/Readers policy will evaluate successfully if any of the /Channel/Application/Org1/Readers, /Channel/Application/Org2/Readers, etc. policies are true. It is implicit because you do not have to explicitly declare which sub-groups to collect the sub-policies from, and it is meta because it is a policy whose evaluation depends upon the evaluation of other policies.
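The ANY/ALL/MAJORITY rules of an implicit meta policy can be sketched like this (a simplified illustration of the semantics described above, not Fabric's implementation):

```python
def evaluate_implicit_meta(rule, sub_policy_results):
    """Evaluate an implicit meta policy over the boolean results of its
    sub-policies (e.g. each org's Readers policy)."""
    satisfied = sum(sub_policy_results)
    if rule == "ANY":
        return satisfied >= 1
    if rule == "ALL":
        return satisfied == len(sub_policy_results)
    if rule == "MAJORITY":
        return satisfied > len(sub_policy_results) // 2
    raise ValueError(f"unknown rule: {rule}")

# The default /Channel/Application/Readers behaves like ANY over the orgs'
# Readers policies: one satisfied org is enough.
```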
@jyellick thanks for the clarification. So my understanding is, with Kafka a block is created either when the batch size has been reached or when the first TTC-X is encountered. Is the TTC-X also dependent on preferred max bytes, or only on the timeout?
TTC-x is dependent on time only.
ok. @jyellick which parameter is given more priority when cutting blocks, the batch size or the preferred max bytes? Or whichever occurs first, assuming that the timeout is very large?
Dear experts,
the "orderer" is seen as an issue with respect to confidential data. Can someone briefly tell me why?
In case: creating multiple orderers and having the client randomly select one of them
Would that mitigate the problem?
@niteshsolanki it is whichever comes first
@jyellick as per the link http://hyperledger-fabric.readthedocs.io/en/release/kafka.html?highlight=Kafka#additional-considerations Kafka offers higher throughput if preferred max bytes is low. How does the preferred max bytes parameter affect the throughput? In Kafka-based ordering, the message captured in the topic is the txn, not the block
@niteshsolanki I agree, this link is confusing. What it is trying to convey is that you should aim to make your transaction size small to increase Kafka throughput. As you point out, the batch size parameters have no effect on Kafka throughput.
@jyellick ok. Just want to understand which parameters affect the transaction size, and which of them can be controlled by the application. One of them could be the R-W set; anything else?
RW is the only component of the fabric transaction which the application may affect the size of. The rest of the transaction size is mostly identities (certificates)
Ok. Thanks @jyellick
Hello. If a user initiates or submits a transaction which will just query the ledger and fetch some data (but no update), will this submitted transaction go to the ordering service after the endorsing peer generates the R/W sets? How does this transaction get added to a committed block? Please explain this.
@AkshayJindal after collecting responses from the peers, it is up to the client whether those responses are packaged into a transaction and sent to ordering. Only transactions sent to ordering end up on the blockchain
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=9CHssdPCuj499SNLZ) @jyellick Thanks. How does client decide upon this? Is there any link? If client does not send this transaction to orderer then this transaction should not reflect in committed block..right?
@AkshayJindal The client makes the decision based on app design, generally clients do not send queries to ordering but send anything which modifies world state. And yes, if the client does not send the transaction to ordering, then it will not be reflected in any block
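The decision described above can be sketched as a toy model. All names here are hypothetical for illustration, not a real SDK API: the client collects endorsement responses, and only transactions that modify world state get forwarded to ordering.

```go
package main

import "fmt"

// Proposal is a toy stand-in for a chaincode proposal; whether a call
// is a query or an invoke is decided by the application's design.
type Proposal struct {
	Fn       string
	ReadOnly bool
}

// submitToOrdering reports whether the client should forward the
// endorsed transaction to the orderer. Pure queries are answered
// directly from the endorsers' responses and never reach a block.
func submitToOrdering(p Proposal) bool {
	return !p.ReadOnly
}

func main() {
	fmt.Println(submitToOrdering(Proposal{Fn: "queryBalance", ReadOnly: true}))
	fmt.Println(submitToOrdering(Proposal{Fn: "transfer", ReadOnly: false}))
}
```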
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=3MCMKhHcY9DfHGMQf) @jyellick thanks for responding it is almost clear except one thing "it is meta, because it is a policy who's evaluation depends upon the evaluation of other policies" i couldn't understand it's dependency on other policies
@kapilAtrey AFAIU, you could think of a tree of policies: internal (parent) nodes always depend on their children, and ultimately depend on leaf nodes.
so leaf nodes are implicit meta policy @guoger on whom they are depending
no, internal nodes are meta
```
enum PolicyType {
UNKNOWN = 0;
SIGNATURE = 1; // leaf
MSP = 2;
IMPLICIT_META = 3; // internal
}
```
@guoger thanks man but still i have a little doubt on whom they are depending
if there is some sort of hierarchy, then meta policies depend on signature policies, if I am taking this the right way
for example, given following policies:
p1: tx must be signed by _A_
p2: tx must be signed by _B_
p3: p1 and p2 must be both true
p1 and p2 do *not* depend on other policies; they could be evaluated by themselves. p3 depends on the evaluation results of p1 and p2.
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=CQfHq7EBvtLtZZQgQ) @kapilAtrey correct
@guoger great explanation thanks man
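The p1/p2/p3 example above can be written out as a toy policy tree. The types below (`SignaturePolicy`, `MetaPolicy`) are illustrative stand-ins, not Fabric's actual policy implementation: leaves evaluate on their own, while a meta policy counts how many of its sub-policies pass.

```go
package main

import "fmt"

// Policy is anything that can be evaluated against a set of signers.
type Policy interface{ Evaluate(signedBy map[string]bool) bool }

// SignaturePolicy is a leaf: satisfied if a given identity signed.
type SignaturePolicy struct{ Identity string }

func (p SignaturePolicy) Evaluate(signedBy map[string]bool) bool {
	return signedBy[p.Identity]
}

// MetaPolicy is an internal node: ANY, ALL or MAJORITY of sub-policies.
type MetaPolicy struct {
	Rule string // "ANY", "ALL", "MAJORITY"
	Sub  []Policy
}

func (p MetaPolicy) Evaluate(signedBy map[string]bool) bool {
	passed := 0
	for _, s := range p.Sub {
		if s.Evaluate(signedBy) {
			passed++
		}
	}
	switch p.Rule {
	case "ANY":
		return passed >= 1
	case "ALL":
		return passed == len(p.Sub)
	case "MAJORITY":
		return passed > len(p.Sub)/2
	}
	return false
}

func main() {
	p1 := SignaturePolicy{"A"} // tx must be signed by A
	p2 := SignaturePolicy{"B"} // tx must be signed by B
	p3 := MetaPolicy{"ALL", []Policy{p1, p2}}
	fmt.Println(p3.Evaluate(map[string]bool{"A": true, "B": true})) // both signed
}
```

This is why p3 is "meta": its result is a function of its children's results, never of signatures directly.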
Hi, i have a question. I have 2 networks `N1 (with 1 orderer, Org1MSP, ca.org1msp and peer.org1msp)` and `N2 (with 1 orderer, Org2MSP, ca.org2msp, and peer.org2msp)`. N1 has a channel `org1channel` and Org2MSP wants to join it. How do i get this done?
I have added the `Org2MSP` to `N1` and `peer.org2msp` can fetch the channel genesis block from orderer of `N1`. However, when `peer.org2msp` tries to join the channel using the fetched block, it complains that the `Org1MSP is unknown`. Am i missing a step? Like an Anchor peer update? Exact error is ```Error: proposal failed (err: rpc error: code = Unknown desc = chaincode error (status: 500, message: "JoinChain" request failed authorization check for channel [firstgtbchannel]: [Failed verifying that proposal's creator satisfies local MSP principal during channelless check policy with policy [Admins]: [This identity is not an admin]]))```
Hi All. I have a question regarding Orderer resiliency. I have a distributed system with the Orderer and Peers running on different machines. Everything works fine as long as the orderer is up. At some point the Orderer machine restarted, and with it the orderer container. Once that happened, the entire system stopped working. The only fix was to re-deploy the entire network, which means I lose all the data in the peer containers. Any idea if we can have some form of Orderer resiliency?
I am using the kafka ordering service. Normally it is working fine, but sometimes I am getting this error:
```
java.net.ConnectException: Connection refused (Connection refused)
    at java.net.PlainSocketImpl.socketConnect(Native Method)
    at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:350)
    at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206)
    at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188)
    at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
    at java.net.Socket.connect(Socket.java:589)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager.connectOne(QuorumCnxManager.java:381)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager.connectAll(QuorumCnxManager.java:426)
    at org.apache.zookeeper.server.quorum.FastLeaderElection.lookForLeader(FastLeaderElection.java:843)
    at org.apache.zookeeper.server.quorum.QuorumPeer.run(QuorumPeer.java:822)
2018-02-05 05:49:25,180 [myid:1] - INFO [QuorumPeer[myid=1]/0.0.0.0:2181:QuorumPeer$QuorumServer@149] - Resolved hostname: zookeeper2 to address: zookeeper2/10.0.0.7
2018-02-05 05:49:25,180 [myid:1] - INFO [QuorumPeer[myid=1]/0.0.0.0:2181:FastLeaderElection@852] - Notification time out: 60000
2018-02-05 05:50:30,207 [myid:1] - WARN [QuorumPeer[myid=1]/0.0.0.0:2181:QuorumCnxManager@400] - Cannot open channel to 2 at election address zookeeper1/10.0.0.5:3888
java.net.SocketTimeoutException: connect timed out
    at java.net.PlainSocketImpl.socketConnect(Native Method)
    at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:350)
    at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206)
    at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188)
    at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
    at java.net.Socket.connect(Socket.java:589)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager.connectOne(QuorumCnxManager.java:381)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager.connectAll(QuorumCnxManager.java:426)
    at org.apache.zookeeper.server.quorum.FastLeaderElection.lookForLeader(FastLeaderElection.java:843)
    at org.apache.zookeeper.server.quorum.QuorumPeer.run(QuorumPeer.java:822)
2018-02-05 05:50:30,207 [myid:1] - INFO [QuorumPeer[myid=1]/0.0.0.0:2181:QuorumPeer$QuorumServer@149] - Resolved hostname: zookeeper1 to address: zookeeper1/10.0.0.5
2018-02-05 05:50:30,208 [myid:1] - WARN [QuorumPeer[myid=1]/0.0.0.0:2181:QuorumCnxManager@400] - Cannot open channel to 3 at election address zookeeper2/10.0.0.7:3888
java.net.ConnectException: Connection refused (Connection refused)
    at java.net.PlainSocketImpl.socketConnect(Native Method)
    at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:350)
    at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206)
    at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188)
    at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
    at java.net.Socket.connect(Socket.java:589)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager.connectOne(QuorumCnxManager.java:381)
    at org.apache.zookeeper.server.quorum.QuorumCnxManager.connectAll(QuorumCnxManager.java:426)
    at org.apache.zookeeper.server.quorum.FastLeaderElection.lookForLeader(FastLeaderElection.java:843)
    at org.apache.zookeeper.server.quorum.QuorumPeer.run(QuorumPeer.java:822)
2018-02-05 05:50:30,208 [myid:1] - INFO [QuorumPeer[myid=1]/0.0.0.0:2181:QuorumPeer$QuorumServer@149] - Resolved hostname: zookeeper2 to address: zookeeper2/10.0.0.7
2018-02-05 05:50:30,208 [myid:1] - INFO [QuorumPeer[myid=1]/0.0.0.0:2181:FastLeaderElection@852] - Notification time out: 60000
```
Hi guys, I would like to know where ORDERER_GENERAL_LEDGERTYPE is defined. In solo it's `file`, but in kafka mode is it `file` again? Where is the ledger stored in file mode by default, and where can I find the configuration in the golang file?
Is there a difference between ORDERER_GENERAL_LEDGERTYPE and ORDERER_GENERAL_LEDGER_TYPE?
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=dQArEeQfYvEHuvumi) @awattez The ledger type is independent of the orderer consensus type. Look at the `FileLedger` section of the `orderer.yaml` to find the location.
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=EZRurc9rizPRDaG7j) @SanketPanchamia I'm guessing you are running a solo orderer; I would recommend looking at the Kafka-based ordering service for resiliency features.
http://hyperledger-fabric.readthedocs.io/en/release/kafka.html
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=3L25ELvXQxYZprcRf) @harsha We have been facing issues with Kafka. We have stopped that for a while. Is it ok if i personally IM you once we get back started on that?
@rahulhegde
Hi all, I am trying to add an organisation to a channel and we don't have any trace of it (we have its certs and keys in crypto-config, but not in ORGS of helper.js and not in network-config.js). Now when I try to add it, it gives this error:
Rejecting CONFIG_UPDATE because: Error authorizing update: Error validating DeltaSet: Policy for [Policy] /Channel/Admins not satisfied: Failed to reach implicit threshold of 2 sub-policies, required 1 remaining
How do I satisfy this? I don't have any other organisation; whose signature should I provide?
@kapilAtrey It looks like your update is trying to change something more than just adding an org, can you post either your config update or orderer logs to hastebin and link here?
Hi, I tried to query a block from the node SDK. That particular block had a very bulky transaction in it, so I can't seem to query the block. It shows:
Clipboard - February 9, 2018 11:07 AM
It seems to be because of the message size limit on the gRPC connection. Is there any config variable that can help me raise the limit?
@jyellick i've uploaded the orderer logs on this link https://hastebin.com/zefonemiro.php
@jyellick yeah, I am modifying the admin policy and setting it to org1's admin policy; this might be the problem you are looking for
@kapilAtrey please also upload the json representation of your config update, it looks like you are modifying more than what you describe, according to those logs
@CodeReaper This is a better question for #fabric-sdk-node
@jyellick the updated config https://hastebin.com/ziwiwulifa.json
@kapilAtrey The computed update please
@jyellick I beg your pardon sir
You should have an original config, a modified one, and then an update derived from the differences
ok wait i am on it
@jyellick this one is original config https://hastebin.com/royirihazi.json and this is updated one https://hastebin.com/ziwiwulifa.json
and another one you want is the response of this http://127.0.0.1:7059/configtxlator/compute/update-from-configs ??
Correct, decoded to JSON
@jyellick can you help me with how to decode protobuf to JSON? I can research it, but it is going to take a lot of time. I tried JSON.stringify() but no luck.
You can use `configtxlator` to simply decode the output of `update-from-configs` as a `common.ConfigUpdate`
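The decode step can also be driven programmatically against the same REST server that serves `/configtxlator/compute/update-from-configs`. This is a sketch against the v1.x configtxlator REST API; the endpoint path is the one that era of the tool exposed and may differ in later releases.

```go
package main

import (
	"bytes"
	"fmt"
	"io"
	"net/http"
)

// decodeConfigUpdate posts the binary output of the
// compute/update-from-configs call to configtxlator's decode endpoint
// and returns the JSON representation of the common.ConfigUpdate.
func decodeConfigUpdate(baseURL string, updatePb []byte) (string, error) {
	resp, err := http.Post(baseURL+"/protolator/decode/common.ConfigUpdate",
		"application/octet-stream", bytes.NewReader(updatePb))
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	if err != nil {
		return "", err
	}
	if resp.StatusCode != http.StatusOK {
		return "", fmt.Errorf("decode failed: %s: %s", resp.Status, body)
	}
	return string(body), nil
}

func main() {
	// Usage, assuming `configtxlator start` is listening on 7059:
	//   json, err := decodeConfigUpdate("http://127.0.0.1:7059", updateBytes)
	fmt.Println("decodeConfigUpdate ready")
}
```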
@kapilAtrey Based on the original and modified config JSONs you uploaded, I can see you are attempting to add an org, and modify the `/Channel/Admins` policy. I expect that perhaps you wanted to modify the `/Channel/Application/Admins` policy
When you modify the `/Channel/Admins` policy, it requires signatures not only from the application admins, but additionally from the orderer admins.
Usually, this is not what you want. Usually, you only wish to affect the `/Channel/Application/Admins` policy, which controls who may add or remove organizations.
@jyellick ok man let me check but thanks
@jyellick If i am right then i should modify channel_group.groups.Application.mod_policy.policies.Admins (as per the JSON file (the updated one) i've provided) and not channel_group.policies.Admins
and yes that's what i was willing to do thanks man respect++
@jyellick Now the previous issue is solved, but the response of client.updateChannel(request) is coming back undefined. What could be the possible reason? If I look at the orderer logs, it looks like org2 was successfully added to the channel. My guess is that ORGS in network-config.json is not updated yet, and so neither is the ORGS variable of helper.js (as per the balance-transfer application). I've provided the orderer logs at https://hastebin.com/icevucusad.hs; they are truncated as the file was too big for hastebin. Thanks
Community, anyone have thoughts on designing DR for Fabric? HA is achievable with multi-node K8S...
I have been reading these stories and started thinking based on them:
https://jira.hyperledger.org/browse/FAB-2949
https://jira.hyperledger.org/browse/FAB-3364
In extremely high-level view,
> For peers, we need to backup the MSP and crypto materials... fetch the channel config block, join again, then it will get the blocks from other peers.
How about Orderer???
> We can setup Kafka cluster in multi-host K8S. However, how to do the DR... (let's say active-passive). I checked https://kafka.apache.org/documentation/#operations, the 2 clusters in 2 DCs will be having different offset, which is obviously not desirable... Or for Kafka DR, do we need confluent enterprise..?
All the channel config and the most important stuff of the entire network (network and channel settings) is stored in the Orderer... but not much insight on how to set up DR for it...
Any insights / discuss together?
Or we can still use Kafka Mirror Maker, and set up peers in another DC to consume the messages such that the offsets are the same?
But seems to be very fragile...
@DannyWong As you indicate, with any blockchain it is imperative that messages never 'uncommit'. So ensure your configuration either never loses messages, or, if it does, that your recovery plan includes an audit of the blockchain to ensure that any messages which were committed are re-inserted into the Kafka queue.
Do we have an example where we add an organization at the consortium level? Could you please give me pointers to the same.
It is certainly possible and works, but I don't know of an example. It should be very similar to adding an org to a channel, just inserting the org at a different spot
@jyellick thank you ... I tried and failed . let me try again
@jyellick I think I am making a mistake related to the system channel ID
Also, could you please help me rename the default system channel ID from "testchainid" to something else, using configtx.yaml?
Do we need to modify systemchannel to add an org to a consortium ?
@vu3mmg When you generate your genesis block to bootstrap the orderer, you may specify a name with configtxgen's option `-channelID`, if you do not, it defaults to 'testchainid' . There is no way to change this after a network has been bootstrapped.
You do need to modify the orderer system channel if you wish for that org to be able to create channels and be included in initial channel creation. You may always simply add that org to existing channels without modifying the consortium definition
@vu3mmg I just tried adding an org to a consortium: the config manipulation step I did with this command:
```
jq ". * {\"channel_group\":{\"groups\":{\"Consortiums\":{\"groups\":{\"SampleConsortium\":{\"groups\":{\"ExampleOrg\":$(configtxgen -printOrg ExampleOrg)}}}}}}}" config.json > modified_config.json
```
Hello, I'm trying to develop an application which involves multiple parties/organizations, and not all should be able to see the all the data/transactions (multiple channels). Should there be a single ordering server for all the channels or should there be an ordering service per channel?
@Bchainer It of course depends upon the specifics of your application. If it is permissible for the entity running ordering to have knowledge of the different channels, then a single ordering service would be the recommended deployment. If you need multiple ordering services, keep in mind that you should only join a peer to channels on the same ordering service.
What are the components in Fabric that are involved in consensus: the orderers only, or both orderers and endorsers?
@qizhang Orderers provide consensus on order, endorsers provide consensus on execution output.
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=pnfAY8sFPQPSe5Pu6) @jyellick Thank you . Let me try the same
@jyellick One point I am struggling to get is the point about signatures. If we need to reflect these changes and commit the config tx, do we need only the orderer to sign?
At the consortium level, don't we need signatures from the new organizations?
The consortium members pick the subset of members which form any new channels.
The orderer system channel is generally maintained by the orderer admins, though as a compromise, it was decided that orgs should still have to sign off for their MSP to be modified
OK. This means we need to collect signatures from the newly added orgs if we add them at the consortium level,
and when we create a new channel, we need to get the signatures of all the members of this new channel. Both adding a new org at the consortium level and then creating a channel are two distinct processes...
@vu3mmg Please note, all of these policies are configurable, I have simply been describing the defaults. You may of course redefine these policies to support your specific use case, though we feel the defaults will be valid for most deployments.
@jyellick Thank you
Hello everyone,
I restarted my orderer service; when I run it again I face the following error:
```
2018-02-15 11:51:25.106 UTC [orderer/main] Deliver -> DEBU 0e7 Starting new Deliver handler
2018-02-15 11:51:25.106 UTC [orderer/common/deliver] Handle -> DEBU 0e8 Starting new deliver loop
2018-02-15 11:51:25.106 UTC [orderer/common/deliver] Handle -> DEBU 0e9 Attempting to read seek info message
2018-02-15 11:51:25.106 UTC [orderer/common/deliver] Handle -> DEBU 0ea Rejecting deliver because channel mychannel not found
```
Could you please help me?
@PyiTheinKyaw: Can you use hastebin.com and paste your `orderer.yaml` file?
do you mean, starting code ?
I mean the contents of your `orderer.yaml` configuration file.
Sorry for my misunderstanding; I run the orderer with a shell file.
Would you please refer to the following link:
Now I am looking for "orderer.yaml"
Thank you for your help
https://hastebin.com/ugewigikar.makefile
In my peer log, https://hastebin.com/amelexeruh.vbs
How do you run your Kafka cluster? In Docker containers?
Yes, I run the Kafka-zookeeper in another machine.
I restarted the orderer, but I did not restart Kafka.
~How many ordering service nodes do you run?~ (Actually, never mind that.)
Use Hastebin and paste the ordering service log from the same node you used here: https://chat.hyperledger.org/channel/fabric-orderer?msg=HzeexaFTZR42jvJTX
Hi,
I am trying to start my kafka ordering service, but I am having a problem. I first run 3 zookeepers, which seem to run fine, but when I start my kafka containers I get the following errors in zookeeper3: https://pastebin.com/cPiswTGs , though no errors on zookeeper1 and zookeeper2. Does anyone have an idea what might be wrong here?
zookeeper3: http://prntscr.com/ifidww
kafka1: http://prntscr.com/ifie8x
Hi all,
I was executing this command
`docker exec peer0.org1.example.com peer channel create -o orderer.example.com:7050 -c channel001 -f /etc/hyperledger/configtx/composer-channel.tx`
It's throwing an error. So I checked the orderer logs and found this:
```
WARN 0d6 Rejecting CONFIG_UPDATE because: Proposed configuration has no application group members, but consortium contains members
2018-02-16 10:21:22.969 UTC [orderer/main] func1 -> DEBU 0d7 Closing Broadcast stream
```
Does anybody have any idea about this error?
You are proposing the creation of a channel without properly specifying its member orgs, it seems.
@jyellick I was able to pull the testchainid details using node sdk
got a big json
What I found was that it only had consortium details, no further channel details.
So to get details of application channels, do we need to pull per application channel?
^ Yes.
Thank you
Hello Dear,
When fetching an existing channel config from the orderer using the following command
`peer channel fetch config config_block.pb -o orderer:7050 -c smschannel1`
I face the following error.
```
2018-02-19 10:59:18.503 UTC [channelCmd] InitCmdFactory -> INFO 001 Endorser and orderer connections initialized
Error: can't read the block: &{FORBIDDEN}
```
Would you please help me ?
@PyiTheinKyaw The crypto material you are using to attempt to retrieve the block is not authorized to read it.
You should look at the orderer logs for more detailed messages
Yes, the crypto config is definitely wrong.
The orderer logs display the following:
```
-----END CERTIFICATE-----
2018-02-20 03:59:37.699 UTC [cauthdsl] func2 -> ERRO 1101b Principal deserialization failure (The supplied identity is not valid, Verify() returned x509: certificate signed by unknown authority) for identity 0a074f7267304d5350129a072d2d2d2d2d424547494e202d2d2d2d2d0a4d494943697a4343416a4b6741774942416749554245567773537830546d7164627a4e776c654e42427a6f4954307777436759494b6f5a497a6a3045417749770a667a454c4d416b474131554542684d4356564d78457a415242674e5642416754436b4e6862476c6d62334a7561574578466a415542674e564241635444564e680a62694247636d467559326c7a59323878487a416442674e5642416f54466b6c7564475679626d5630494664705a47646c64484d7349456c75597934784444414b0a42674e564241735441316458567a45554d4249474131554541784d4c5a586868625842735a53356a623230774868634e4d5459784d5445784d5463774e7a41770a5768634e4d5463784d5445784d5463774e7a4177576a426a4d517377435159445651514745774a56557a45584d4255474131554543424d4f546d3979644767670a5132467962327870626d45784544414f42674e564241635442314a68624756705a326778477a415a42674e5642416f54456b6835634756796247566b5a3256790a49455a68596e4a70597a454d4d416f474131554543784d44513039514d466b77457759484b6f5a497a6a3043415159494b6f5a497a6a304441516344516741450a4842754b73414f34336873344a4770466669474d6b422f7873494c54734f766d4e32576d77707350485a4e4c36773848576533784350517464472f584a4a765a0a2b433735364b457355424d337977355054666b7538714f42707a43427044414f42674e56485138424166384542414d4342614177485159445652306c424259770a464159494b7759424251554841774547434373474151554642774d434d41774741315564457745422f7751434d414177485159445652304f42425945464f46430a6463555a346573336c746943674156446f794c66567050494d42384741315564497751594d4261414642646e516a32716e6f492f784d55646e3176446d6447310a6e4567514d43554741315564455151654d427943436d31356147397a6443356a62323243446e6433647935746557687663335175593239744d416f47434371470a534d343942414d43413063414d4551434944663948626c34786e337a3445774e4b6d696c4d396c58324671346a5770416152564239374f6d564565794169416b0a61587a422f6a6e6c5533394237577773394249723963
386d534f455046365659317547502b644b5630673d3d0a2d2d2d2d2d454e44202d2d2d2d2d0a
2018-02-20 03:59:37.699 UTC [cauthdsl] func2 -> DEBU 1101c 0xc420caa090 principal evaluation fails
2018-02-20 03:59:37.699 UTC [cauthdsl] func1 -> DEBU 1101d 0xc420caa090 gate 1519099177699649655 evaluation fails
2018-02-20 03:59:37.699 UTC [orderer/common/deliver] Handle -> WARN 1101e [channel: smschannel1] Received unauthorized deliver request
2018-02-20 03:59:37.699 UTC [orderer/main] func1 -> DEBU 1101f Closing Deliver stream
```
But I have no idea how to change to a valid crypto-config.
I have no idea which crypto-config is valid in my blockchain network.
Any ideas?
Thank you for your support.
@PyiTheinKyaw This error indicates that the certificate was not issued by the CA of the organization your client is claiming to be from. How did you setup your crypto material?
I generated the crypto config with the CA server
Thanks, now I got it. I missed setting values for "CORE_PEER_TLS_ROOTCERT_FILE" and "CORE_PEER_MSPCONFIGPATH"
Thanks for your support.
I strongly recommend that you take a look at the first network tutorial http://hyperledger-fabric.readthedocs.io/en/release/build_network.html
hi, adding an additional org to the existing fabric network gives an error when the existing channel policy needs to be updated and signed by the old participants in the network. It gives the error: `Error: proto: can't skip unknown wire type 6 for common.Envelope`
Does somebody have a clue what this issue indicates?
Does anyone know if the orderer picks up environment variables much like peers do?
Using viper?
Hi all, I'm new to Hyperledger. After reading the Hyperledger whitepaper I have some questions I hope to discuss here with you guys. The default ordering service in Fabric is Kafka; as a blockchain, is Kafka a central service? If A, B and C do business on Fabric, who maintains Kafka?
Since Kafka is not a decentralized component, the maintainer can delete data on Kafka, right?
I have found the discussion in chat history between kostas and bh4rtp :-)
@NeerajKumar: You're doing something wrong when creating a protobuf message.
@paul.sitoh: Yes, use the `ORDERER_` prefix.
yes, but now i have solved the issue, anyways thanks @kostas
@NeerajKumar: No problem. Give us a heads up next time so that folks know not to spend time on it.
But @kostas, can you tell me what this message means: `failed to deserialize creator's identity, err MSP Org1MSP is Unknown`?
I am facing this when I am trying to instantiate the chaincode.
@JiyunYang: Correct. Kafka makes sense in trusted, dictator-like networks. It is an interim step until we take the ordering service where we want to, i.e. BFT.
@NeerajKumar: It means that you are referencing an MSP in your instantiation call that is not present in the channel.
But I have only one MSP defined in my channel
how come it is not recognizing it?
Let me show you my channel policy; it's pretty straightforward, as it's a single-org network and channel as well
```
Profiles:
    # profile for orderer genesis block
    BelriumOrdererGenesis:
        Orderer:
            <<: *OrdererDefaults
            Organizations:
                - *OrdererOrg
        Application:
            <<: *ApplicationDefaults
            Organizations:
                - *OrdererOrg
        Consortiums:
            SampleConsortium:
                Organizations:
                    # - *OrdererOrg
                    - *Belrium
    BelriumSingleOrgChannel:
        Consortium: SampleConsortium
        Application:
            <<: *ApplicationDefaults
            Organizations:
                # - *OrdererOrg
                - *Belrium
```
sorry for the improper orientation
here is the hastebin
https://hastebin.com/ihahupibok.makefile
And can you tell me one more thing: what is the difference between an install proposal and an instantiate proposal? My install proposals were good and accepted by the orderer, whereas one of my instantiate proposals was rejected by the orderer.
install doesn't go to the orderer
it is a peer local operation which is independent of the chain
> how come it is not recognizing it
How do you instantiate the chaincode? i.e. what is the command you use?
Also, just as an FYI you can/should omit the `Application` part in the genesis profile.
> and can you tell me one more thing, what is the difference between an install proposal and an instantiate proposal, as my install proposals were good and accepted by the orderer, whereas one of my instantiate proposals is rejected by orderer?
As Yacov noted, the install doesn't go to the orderer. As for your invocation, I suspect that you are attempting to do so using an MSP that is not recognized by the orderer as having write-access to the channel.
We'll need the orderer's logs at the time of invocation to figure out what's going on.
i have simplified the genesis profile, and let me give you orderer logs
hastebin for docker logs
https://hastebin.com/jixagujipu.coffeescript
hastebin for complete configtx.yaml file
https://hastebin.com/ijadobofuv.coffeescript
can you notice any issue
I see no attempts to broadcast (i.e. push an invocation) on these logs?
any idea what the best way is to debug "ENDORSEMENT_POLICY_FAILURE"? I have the following endorsement policy "AND('VOOrg.member', OR('ANBOrg.member', 'GRBOrg.member'))" and my transaction is endorsed by a peer of the VOOrg and a peer of the ANBOrg. Both endorsements are the same and everything ... and still my transaction is marked with code "ENDORSEMENT_POLICY_FAILURE" ... can't seem to find anything useful in the logs of the orderer ... any idea how to solve this?
is it the orderer that enforces the ENDORSEMENT_POLICY or the PEER? Because the orderer adds it to the block and it seems that the peer is throwing the error
```
2018-02-21 11:01:01.527 UTC [couchdb] handleRequest -> DEBU 1b9d HTTP Request: GET /publicchannel/statedb_savepoint?attachments=true HTTP/1.1 | Host: 172.17.0.1:5984 | User-Agent: Go-http-client/1.1 | Accept: multipart/related | Authorization: Basic YWRtaW46cGFzc3dvcmQ= | Accept-Encoding: gzip | |
```
```
2018-02-21 11:01:01.508 UTC [vscc] Invoke -> DEBU 1b63 VSCC invoked
2018-02-21 11:01:01.508 UTC [vscc] deduplicateIdentity -> DEBU 1b64 Signature set is of size 2 out of 2 endorsement(s)
2018-02-21 11:01:01.509 UTC [vscc] Invoke -> WARN 1b65 Endorsement policy failure for transaction txid=9c91113fe6de2db2fd28cbdbcae341ab9aa5fc0e53f07245f3d154793fc0ae5e, err: Failed to authenticate policy
2018-02-21 11:01:01.509 UTC [shim] func1 -> DEBU 1b66 [d17903f2]Transaction completed. Sending COMPLETED
2018-02-21 11:01:01.509 UTC [shim] func1 -> DEBU 1b67 [d17903f2]Move state message COMPLETED
2018-02-21 11:01:01.509 UTC [shim] handleMessage -> DEBU 1b68 [d17903f2]Handling ChaincodeMessage of type: COMPLETED(state:ready)
2018-02-21 11:01:01.509 UTC [shim] func1 -> DEBU 1b69 [d17903f2]send state message COMPLETED
2018-02-21 11:01:01.509 UTC [chaincode] processStream -> DEBU 1b6a [d17903f2]Received message COMPLETED from shim
2018-02-21 11:01:01.509 UTC [chaincode] handleMessage -> DEBU 1b6b [d17903f2]Fabric side Handling ChaincodeMessage of type: COMPLETED in state ready
2018-02-21 11:01:01.509 UTC [chaincode] handleMessage -> DEBU 1b6c [d17903f2-ef73-43c4-811c-91318c93ff7b]HandleMessage- COMPLETED. Notify
2018-02-21 11:01:01.509 UTC [chaincode] notify -> DEBU 1b6d notifying Txid:d17903f2-ef73-43c4-811c-91318c93ff7b
2018-02-21 11:01:01.509 UTC [chaincode] Execute -> DEBU 1b6e Exit
2018-02-21 11:01:01.509 UTC [txvalidator] VSCCValidateTxForCC -> ERRO 1b6f VSCC check failed for transaction txid=9c91113fe6de2db2fd28cbdbcae341ab9aa5fc0e53f07245f3d154793fc0ae5e, error VSCC error: policy evaluation failed, err Failed to authenticate policy
2018-02-21 11:01:01.509 UTC [lockbasedtxmgr] Done -> DEBU 1b70 Done with transaction simulation / query execution [9c91113fe6de2db2fd28cbdbcae341ab9aa5fc0e53f07245f3d154793fc0ae5e]
2018-02-21 11:01:01.509 UTC [txvalidator] VSCCValidateTxForCC -> DEBU 1b71 VSCCValidateTxForCC completes for envbytes 0xc422490000
2018-02-21 11:01:01.509 UTC [txvalidator] VSCCValidateTx -> DEBU 1b72 VSCCValidateTx completes for env 0xc422955bf0 envbytes 0xc422490000
2018-02-21 11:01:01.509 UTC [txvalidator] validateTx -> ERRO 1b73 VSCCValidateTx for transaction txId = 9c91113fe6de2db2fd28cbdbcae341ab9aa5fc0e53f07245f3d154793fc0ae5e returned error VSCC error: policy evaluation failed, err Failed to authenticate policy
2018-02-21 11:01:01.509 UTC [txvalidator] validateTx -> DEBU 1b74 validateTx completes for block 0xc420303900 env 0xc422955bf0 txn 0
2018-02-21 11:01:01.509 UTC [txvalidator] Validate -> DEBU 1b75 got result for idx 0, code 10
2018-02-21 11:01:01.509 UTC [txvalidator] Validate -> DEBU 1b76 END Block Validation
2018-02-21 11:01:01.510 UTC [kvledger] CommitWithPvtData -> DEBU 1b77 Channel [publicchannel]: Validating state for block [5]
2018-02-21 11:01:01.510 UTC [lockbasedtxmgr] ValidateAndPrepare -> DEBU 1b78 Validating new block with num trans = [1]
2018-02-21 11:01:01.510 UTC [valimpl] ValidateAndPrepareBatch -> DEBU 1b79 ValidateAndPrepareBatch() for block number = [5]
2018-02-21 11:01:01.510 UTC [valimpl] ValidateAndPrepareBatch -> DEBU 1b7a preprocessing ProtoBlock...
2018-02-21 11:01:01.510 UTC [valimpl] preprocessProtoBlock -> WARN 1b7b Block [5] Transaction index [0] marked as invalid by committer. Reason code [10]
2018-02-21 11:01:01.510 UTC [valimpl] ValidateAndPrepareBatch -> DEBU 1b7c validating rwset...
2018-02-21 11:01:01.510 UTC [valimpl] ValidateAndPrepareBatch -> DEBU 1b7d postprocessing ProtoBlock...
2018-02-21 11:01:01.510 UTC [valimpl] ValidateAndPrepareBatch -> DEBU 1b7e ValidateAndPrepareBatch() complete
```
Hi Guys. I am planning to have 2 orderers, 4 kafka brokers and 3 zookeepers on Org1 and 2 orderers, 4 kafka brokers and 3 zookeepers on Org2. Is it possible?
apparently you can't have '-' in your MSPId :)
does anyone have an example of an endorsement policy that has only one organisation in it ... that organisation is the only one that needs to endorse
I've been running this locally w/o issues: `peer chaincode instantiate --orderer joe.example.com:7050 --tls --cafile $ORDERER_CA --channelID clarkschannel --name clarkschaincode --version 1.0 -c '{"Args":["init","a", "100", "b","200"]}' --policy "OR ('ClarkMSP.member')"`
when i did it like that i'm always getting endorsement issues :/
Has joined the channel.
Hi, I checked the Fabric-FAQ.html of the document. Regarding Security & Access Control, it says: 'If you do not want the data to go through the orderers at all, and you are only concerned about the input data, then you can use visibility settings. The visibility setting will determine whether input and output chaincode data is included in the submitted transaction, versus just output data.' Could anyone tell me where I can find the visibility settings?
Hello Dear,
While the blockchain network is running, I would like to stop and then start the orderer again.
Is there any impact on the running blockchain network?
Would you please kindly help me out of this confusion?
Thank you.
Hi all, I'm running the network with the kafka orderer in the latest code in https://github.com/hyperledger/fabric/tree/master/examples/e2e_cli where there are 4 kafka and 3 zookeeper containers in the docker-compose-cli.yaml file. When I tried to manually create the channel inside the cli, entering the command
```
peer channel create -o orderer.example.com:7050 -c mychannel -f ./channel-artifacts/channel.tx --tls --cafile /opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/ordererOrganizations/example.com/orderers/orderer.example.com/msp/tlscacerts/tlsca.example.com-cert.pem```
i got the following error
```Error: got unexpected status: SERVICE_UNAVAILABLE -- will not enqueue, consenter for this channel hasn't started yet```
Anyone has the same error?
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=oBRbyGp5efEih69Aw)
@Wangrj This exists in the protos, but is not implemented by any of the SDKs to my knowledge
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=wNJyXg7KsrZW3zEYH)
@PyiTheinKyaw This will really depend on your exact configuration. If you have properly configured your Kafka based ordering network, any effect on your network should be minimal and transient.
@JayJong Please see the last question in the Kafka section of the orderer faq http://hyperledger-fabric.readthedocs.io/en/latest/ordering-service-faq.html#id2
@jyellick Thank you. I checked the protos. What I found are the 'message ChaincodeActionPayload in protos/peer/transaction.proto' and the 'message ChaincodeHeaderExtension in protos/peer/proposal.proto'. Shall I use them to set visibility?
@Wangrj You may attempt to. However, because the SDKs do not implement this function yet, I would warn you that this has not been well tested; you might want to consider another approach until the SDKs add support.
@jyellick Thank you. I can understand. But I do not know whether there is another approach to make the input data invisible or hide the input data.
@Wangrj It would really depend on your application. If you do choose to use the visibility setting in the protos directly, do let us know how it turns out
@jyellick Thank you for your confirmation point.
According to your answer, if I would like to stop & start the orderer and it is Kafka based, do we also need to stop & start the Kafka service?
Just confirmation again.
Thank you for your support.
@PyiTheinKyaw You may stop and start orderer processes without modifying the Kafka service. In general, I would never stop or restart Kafka nodes except in cases of maintenance for a particular Kafka broker
Thank you.
When I stopped and started the orderer again,
my peers did not know the channel block (the orderer rejected the peers' requests to access the channel).
In that case, I created a new channel and the peers joined that new channel again. It was solved.
I think that is not the real right solution.
That's why I would like to confirm it.
Thank you for your support and the reference link.
Best Regards,
PyiThein Kyaw
Thank you @jyellick .
At that time, I removed "/var/hyperledger/production/chains/testchainid". Just for information.
@PyiTheinKyaw We frequently test restarting the orderer process, and peers are able to reconnect successfully. If you are having a problem, please write up in detail, the exact procedure and steps you are following so that we may point out your error or reproduce your problem.
@jyellick
Thanks, please give me a sec.
I will describe my procedures. Thank you again
Sorry for my late reply
My procedure to stop & start the orderer service is:
1. Stop all running services except the CA server.
1.1 Stop all peer services
1.2 Stop the orderer service
2. Start the orderer & peer services
2.1 Start the peer service.
2.2 Remove the folder under "/var/hyperledger/production/orderer/chain/testchainid" on the orderer
2.3 Regenerate the configtx.yaml >> mychannel.tx, Org0MspAnchor.tx
2.4 Start the orderer service
2.5 Then the rejection error (refer to figure)
Then I created new channel.
error2.png
@PyiTheinKyaw In step 2.2, you remove part of the orderer's ledger. This is not recommended, or supported. Why did you take this step?
You also start the orderer at step 2.1 and 2.4, why would you do this?
I do not understand the scenario you are simulating.
I think I created only "mychannel" block.
I had no idea what "testchainid" means. So, I assumed it was an "unnecessary" folder. That's why I removed it.
Sorry for the confusion !!
When you bootstrap your orderer, you create an 'orderer system channel'. If you do not specify a channel ID for the orderer system channel, it is named `testchainid`
The orderer system channel is used to coordinate channel creation tasks among the orderers.
By deleting it, you have damaged your network.
As a rule, I would never encourage anyone to delete files or folders simply because they do not know the origin. Please investigate the origin and determine that they are safe to delete before doing so.
@jyellick Thanks for your support!
Now I understand the orderer much better.
Thank you.
yes
just wanted to check and make sure my understanding of a kafka based ordering service is correct.
if i have a network with multiple ordering service nodes, an sdk client should be able to send a properly endorsed transaction to any OSN, right?
correct ^^^^
hello @jyellick - we are in plan to change the hostname of an already running fabric blockchain environment. Channel contains information related to kafka and orderer. I am able do update changes on all application channel however it fails on system channel.
Fabric Release - v1.0.4
System Channel (Before changes)
```
"OrdererAddresses": {
"mod_policy": "/Channel/Orderer/Admins",
"value": {
"addresses": ["ord01clsorder.cit.clsnet:7050", "ord02clsorder.cit.clsnet:7050", "ord03clsorder.cit.clsnet:7050"]
}
}
```
System Channel - in Envelop (after Changes)
```
"OrdererAddresses": {
"mod_policy": "/Channel/Orderer/Admins",
"value": {
"addresses": ["ord01clsorder.cit.clsnet:7050", "ord02clsorder.cit.clsnet:7050", "ord03clsorder.cit.clsnet:7050"]
},
"version": "1"
}
}
```
Orderer logs indicating a Bad Request
```
2018-02-23 21:10:55.022 UTC [orderer/common/broadcast] Handle -> WARN 38a1 Rejecting CONFIG_UPDATE because: Error authorizing update: Error validating DeltaSet: Attempt to set key [Values] /Channel/Orderer/KafkaBrokers to version 1, but key is at version 1
2018-02-23 21:10:55.022 UTC [orderer/main] func1 -> DEBU 38a2 Closing Broadcast stream
2018-02-23 21:10:55.024 UTC [orderer/common/deliver] Handle -> WARN 38a3 Error reading from stream: rpc error: code = Canceled desc = context canceled
```
It reasons out as version conflict in orderer log - is this expected?
The strange part after that - this update is successfully committed on the System Channel.
Did you perhaps accidentally submit it twice?
Is it possible for you to dm me a tar of your update and intermediate artifacts? (RC may make you name it `.txt`, or perhaps you could put it somewhere like Google drive)
I will check the first part in the script.
Sure - I will DM - the things you would require are the system channel JSONs from all steps (original - modified - envelope).
Correct, thanks
Right my bad - system channel was getting updated twice in the script. Thanks @jyellick
Has joined the channel.
Hi,
I got this error when connecting a peer to the already existing fabric network, but the peer resides in a different host
Failed to dial 10.0.0.6:7050: connection error: desc = "transport: authentication handshake failed: x509: cannot validate certificate for 10.0.0.6 because it doesn't contain any IP SANs"; please retry.
https://github.com/prometheus/prometheus/issues/1654#issuecomment-221390562
Has joined the channel.
where can i configure the fabric orderer grpc max send length?
Has joined the channel.
Hi, I am trying to update the system channel using the API fabric_client.updateChannel(request), and I get the following error from the orderer.
2018-02-22 14:58:54.652 UTC [orderer/common/broadcast] Handle -> DEBU faa Preprocessing CONFIG_UPDATE
2018-02-22 14:58:54.652 UTC [orderer/configupdate] Process -> DEBU fab Processing channel reconfiguration request for channel testchainid
2018-02-22 14:58:54.653 UTC [orderer/common/broadcast] Handle -> WARN fac Rejecting CONFIG_UPDATE because: Error authorizing update: proto: common.ConfigUpdate: wiretype end group for non-group
2018-02-22 14:58:54.653 UTC [orderer/main] func1 -> DEBU fad Closing Broadcast stream
any pointers? Thanks
@shibu it sounds like you may not have packaged the update into a transaction correctly
The ConfigUpdate is put into a ConfigUpdateEnvelope, which is then put into an Envelope of type CONFIG_UPDATE
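As a rough sketch of that nesting — plain Python dicts shaped like configtxlator's JSON rendering, not the real protobufs; field spellings and the header-type constant follow the common.proto definitions:

```python
# Sketch of the message nesting described above, using plain dicts
# shaped like configtxlator's JSON rendering. The real messages are
# protobufs (common.ConfigUpdate, common.ConfigUpdateEnvelope,
# common.Envelope); this shows structure only.

HEADER_TYPE_CONFIG_UPDATE = 2   # common.HeaderType.CONFIG_UPDATE

def wrap_config_update(config_update, channel_id, signatures=()):
    config_update_envelope = {            # common.ConfigUpdateEnvelope
        "config_update": config_update,
        "signatures": list(signatures),   # must satisfy the mod_policy
    }
    return {                              # common.Envelope
        "payload": {                      # common.Payload
            "header": {
                "channel_header": {
                    "type": HEADER_TYPE_CONFIG_UPDATE,
                    "channel_id": channel_id,
                },
            },
            "data": config_update_envelope,
        },
        "signature": None,                # filled in by the submitting client
    }

env = wrap_config_update(
    {"channel_id": "testchainid", "read_set": {}, "write_set": {}},
    channel_id="testchainid")
print(env["payload"]["header"]["channel_header"]["type"])   # 2
```

If the orderer can't unmarshal what you send (the "wiretype end group for non-group" error above), one of these layers is usually missing or mis-encoded.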
Is there a configuration setting for this to make it more tolerant ? And if so how can you add that to a docker-compose.yaml ?
orderer.example.com | 2018-02-25 13:15:10.357 UTC [common/deliver] deliverBlocks -> WARN 358 Rejecting deliver for 10.0.2.2:44836 due to envelope validation error: timestamp 2018-02-25 13:34:23.79 +0000 UTC is more than the 15m0s time window difference above/below server time 2018-02-25 13:15:10.357632532 +0000 UTC m=+92.488755904. either the server and client clocks are out of sync or a relay attack has
@mastersingh24 @jyellick @muralisr ? ^^^^
@rickr There is a setting for this
`ORDERER_GENERAL_AUTHENTICATION_TIMEWINDOW`
By default, it is `15s` but you may make it whatever duration you would like
- ORDERER_GENERAL_AUTHENTICATION_TIMEWINDOW=3600s #Not for production -- remove.
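For anyone wondering where that line lives, a hypothetical docker-compose fragment (service and image names are placeholders for your own compose file):

```yaml
# Hypothetical compose fragment -- service/image names are placeholders.
services:
  orderer.example.com:
    image: hyperledger/fabric-orderer
    environment:
      - ORDERER_GENERAL_AUTHENTICATION_TIMEWINDOW=3600s  # not for production
```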
Wrong RC channel but off the bat what about the peer eventing service ?
it's both i believe
the viper configs are different
but the peer has an equivalent one
There's a similar setting.... `CORE_PEER_EVENTS_TIMEWINDOW`
thx !!!
https://github.com/hyperledger/fabric/blob/master/sampleconfig/core.yaml#L295
no no Jason
we have 2 different ones
this is for the peer deliver: `timeWindow := viper.GetDuration("peer.authentication.timewindow")`
(from the code)
> bat what about the peer eventing service ?
I thought that is what @rickr wanted
But yes, `CORE_PEER_AUTHENTICATION_TIMEWINDOW` is also a variable
I guess do the shotgun approach :)
oh but I guess since he asked it here - he means the common deliver
I know for BOTH peer eventing and I'm pretty sure the event hubs we added a timestamp that I'm guessing both are now getting checked for this
Dear,
I am facing an issue when creating a channel on the orderer.
[orderer/common/broadcast] Handle -> WARN 198 Rejecting CONFIG_UPDATE because: Error authorizing update: Error validating DeltaSet: Policy for [Groups] /Channel/Application not satisfied: Failed to reach implicit threshold of 1 sub-policies, required 1 remaining
2018-02-25 16:24:29.128 UTC [orderer/main] func1 -> DEBU 199 Closing Broadcast stream
2018-02-25 16:24:29.128 UTC [grpc] Printf -> DEBU 19a transport: http2Server.HandleStreams failed to read frame: read tcp 174.32.1.147:7050->174.32.1.147:44916: read: connection reset by peer
2018-02-25 16:24:29.129 UTC [orderer/common/deliver] Handle -> WARN 19b Error reading from stream: rpc error: code = Canceled desc = context canceled
2018-02-25 16:24:29.129 UTC [orderer/main] func1 -> DEBU 19c Closing Deliver stream
===
bin/peer channel create -o orderer:7050 -c mychannel -f mychannel.tx
2018-02-25 16:24:29.105 UTC [channelCmd] InitCmdFactory -> INFO 001 Endorser and orderer connections initialized
Error: Got unexpected status: BAD_REQUEST
do you guys know how to fix this issue? please help out
@RyanLe 2 You are most likely not submitting the channel creation request with an admin certificate
@jyellick can you point out where I can get the admin certificate
I'd encourage you to work through http://hyperledger-fabric.readthedocs.io/en/v1.0.6/build_network.html
You can see how the user context is switched to an admin in the "Create and Join" section
thanks @jyellick , I already pointed CORE_PEER_MSPCONFIGPATH to ~/crypto/peerOrganizations/org1.example.com/users/Admin@org1.example.com/msp
Did you also set the MSPID?
no there is no MSPID,
btw, the error is " func2 -> ERRO 241 Principal deserialization failure (MSP DEFAULT is unknown)"
I wish we had replaced default with something like "Undefined"
@RyanLe 2 Yes, you must also override `CORE_PEER_LOCALMSPID` with the ID for your MSP
Thank you @jyellick!
Here is what I am setting to create the channel
export CHANNEL_NAME=mychannel
CORE_PEER_LOCALMSPID="Org0MSP"
CORE_PEER_ADDRESS=peer0:7051
CORE_PEER_MSPCONFIGPATH=~/conf/crypto/peerOrganizations/org0/users/Admin@org0/msp
CORE_PEER_TLS_ROOTCERT_FILE=~/conf/crypto/peerOrganizations/org0/peers/peer0.org0/tls/ca.crt
peer channel create -o orderer:7050 -c mychannel -f mychannel.tx
MSPCONFIGPATH and TLS_ROOTCERT_FILE point to the Admin user's msp
I am not clear on how to fix this
> I have not clear how to fix
What is the error you're getting? Do you have TLS enabled on the orderer? If so, you're missing a `--tls --cafile /pathToOrdererCAgoesHere` bit in your `peer channel...` command invocation above.
maybe I missed this option, I will try this cafile and see how it goes. Thank you @kostas
Has joined the channel.
hi everyone,
i added an org using https://hyperledger-fabric.readthedocs.io/en/latest/channel_update.html and https://www.ibm.com/developerworks/cloud/library/cl-add-an-organization-to-your-hyperledger-fabric-blockchain/index.html tutorials.
The peer was added successfully and requests work perfectly. But when we try to add an orderer for the new org it fails.
We get an error like: "Error: got unexpected status: BAD_REQUEST -- Error authorizing update: Error validating DeltaSet: Policy for [Groups] /Channel/Orderer not satisfied: Failed to reach implicit threshold of 1 sub-policies, required 1 remaining"
thanks in advance.
What is the policy that you attempt to satisfy when adding a new orderer org?
2018-02-27 13_30_25-Start.png
@Ryan2 This error implies that the certificate you are signing with has not been issued by the CA
Hi @jyellick, I am still quite confused. I created the crypto material with the fabric-ca server and am using this certificate to create the channel.
How can it fail like that?
@Ryan2 I can only tell you what is wrong based on the log you provided. The certificate is not valid because it has not been signed by a trusted CA. If you are having problems with fabric-ca usage, I suggest you ask in #fabric-ca
got it thank you @jyellick
@jyellick I am trying to add a new orderer at the consortium level.
I am getting the following error
2018-02-27 11:09:29.447 UTC [orderer/common/broadcast] Handle -> WARN 811 [channel: testchainid] Rejecting broadcast of config message from 172.18.0.1:41940 because of error: Error authorizing update: unexpected EOF
2018-02-27 11:09:29.448 UTC [orderer/common/server] func1 -> DEBU 812 Closing Broadcast stream
how to decipher the same
When I try to change max_message_count , it is working
I am using the identity of orderer Admin
I am using the fabcar example and adding a second org to fabcar at consortium level
please find the paste2bin log
https://pastebin.com/aSAdWCu4
@yacovm , could you please help me to get some clue
Lets wait for @kostas or @jyellick
OK...Thank you
Is there any valid situation where I could end up with the same txId duplicated in the same or different blocks? What happens if a client app submits the same transaction for commit to two different OSNs? Does the same transaction being queued twice in Kafka and eventually included twice on the same (or adjacent) blocks?
Hi @kostas, we are using kafka type orderer and added a peer org to an existing network. So just want to add a new orderer for a newly added peer organization.
@amolpednekar: Could it be because validation in general is more expensive than the other stages?
@SashiKanth: https://chat.hyperledger.org/channel/fabric-orderer?msg=83PyLobWRk9Qkp9AR
@vu3mmg: How many ordering orgs in your configuration, and which channel are you targetting?
the channel is testchainid
1 ordering org
https://github.com/vu3mmg/fabric-samples
i have forked the fabric samples
this is what i am trying to do
@minollo: Yes, that can certainly happen. Only one of them (the first one), will be considered valid, come validation time on the committing peers.
add an org to fabcar example
@regy14: (With apologies for trying to solve this in a "teach a man how to fish" manner...) Whenever you wish to modify a configuration item, you need to satisfy a modification policy. In your case, what is the modification policy that your configuration update aims to satisfy?
@kostas to add a new org, i am creating a json with the new Org2MSP, updating the JSON, creating a new config and sending the same to the orderer.
if I update simple items such as max_message count it works
but it fails when i try to add a new Org , channel_group.groups.Consortiums.groups.SampleConsortium.groups["Org2MSP"]=org2
the file is configUpdate.js
in fabcar directory
the file which does all configupdates
Alright, I will try to reproduce locally and let you know later today.
Do me a favor in the meantime --
https://github.com/vu3mmg/fabric-samples
sure, please let me know
Your orderer log is at point A right before you send this failed config update tx.
After you send the failed config update tx your orderer log is at point B.
Give me the A to B part. (Use Hastebin.)
ok, how do I infer this
point A and Point B
By mere observation. Assuming your network is idle before you send the failed config update tx, your orderer log should remain the same. The last visible line is point A.
Likewise for point B.
i will take the log again and update in hastebin
i am restarting fabcar
so that state is fresh
Don't update the existing hastebin log, post a new one.
Thanks.
yes created a new one
trying to see how to share the same
https://hastebin.com/jitebutasa.go
this is the new one
could you please check whether you could access the same
I can see that, yes. This is the full log though right?
yes
docker logs order.example.com
not other transaction
only a config update from start
I am using configtxlator:
Version: 1.1.0-preview
Ah, got it. Thanks.
i think if we check out the code from https://github.com/vu3mmg/fabric-samples and go to fabcar directory and then do a " node configUpdate.js " we can reproduce the issue ,
before that npm install and ./startFabric.sh
we can reproduce in a single step
configtxlator should be running , I have updated the readme @ https://github.com/vu3mmg/fabric-samples
@vu3mmg: Another quick favor. Can you give me the proto (or the JSON via configtxlator) for the configuration update transaction you send to the network that adds the org?
ok
i will get the json
As the "unexpected EOF" message indicates, I suspect we might be looking at something other than an access control failure.
https://hastebin.com/uyiwudigad.json
ok
this is the JSON
i was not able to understand that . I tried validating the json
using https://jsonformatter.org/
Hold on. Is this what you _send_ to the ordering service?
yes
You should be sending an envelope, right?
I am converting the same to envelope
That's why I asked: https://chat.hyperledger.org/channel/fabric-orderer?msg=F9DJdRPThxr7u9qjX
I want the JSON representation of what you send to the OS.
https://github.com/vu3mmg/fabric-samples/blob/a7f8fbca47850db2c738f0b1b7d4efa7f1c044a4/fabcar/configUpdate.js#L134
So give me the JSON of the envelope.
Please let me know whether I am doing some steps wrong
I am sending the json https://hastebin.com/uyiwudigad.json to configtxlator and getting the envelope back
and then sending the envelope to orderer
Right, then I think I see what's wrong here.
ok
https://github.com/vu3mmg/fabric-samples/blob/a7f8fbca47850db2c738f0b1b7d4efa7f1c044a4/fabcar/configUpdate.js#L172 is the place
where i am sending the envelope
https://chat.hyperledger.org/channel/fabric-orderer?msg=b7QhktWayDFsy4XqW
When you are sending this to configtxlator, you are not getting back an envelope.
ok
This is what you're doing here? https://github.com/vu3mmg/fabric-samples/blob/a7f8fbca47850db2c738f0b1b7d4efa7f1c044a4/fabcar/configUpdate.js#L134
I think i am missing some important point here
earlier i tried updating "max_message_count": 10, like this instead of Org2MSP , it worked
it was updating the max_message_count
I think there is some conceptual misunderstanding .... which I am not able to pin down
So the sequence is:
to express myself clearly --->
/ UPDATE MAX MESSGE COUNT
//updated_config_json.channel_group.groups.Orderer.values.BatchSize.value.max_message_count += 100;
when i tried that it worked, unlike updated_config_json.channel_group.groups.Consortiums.groups.SampleConsortium.groups["Org2MSP"]=org2
1. Encode the original config json into Common.Config
2. Encode the final config json into Common.Config
3. Compute the update (type: Common.ConfigUpdate)
4. Wrap this into common.Envelope and send it to the OS
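The compute step (3) can be illustrated with a toy version. This flat-dict sketch is not configtxlator's actual algorithm (which walks versioned ConfigGroups recursively), but it shows the read_set/write_set split and the version bump on changed keys:

```python
# Toy version of "compute the update". configtxlator's real
# compute_update walks versioned ConfigGroups recursively; this
# flat-dict sketch only shows the read_set / write_set idea.

def compute_update(original, updated):
    read_set, write_set = {}, {}
    for key, new in updated.items():
        old = original.get(key)
        if old is not None and old["value"] == new["value"]:
            # Unchanged: just assert the version we read.
            read_set[key] = {"version": old["version"]}
        else:
            # Changed or new: write it back with the version bumped,
            # which is what the orderer's DeltaSet check validates.
            prev = old["version"] if old else -1
            write_set[key] = {"version": prev + 1, "value": new["value"]}
    return {"read_set": read_set, "write_set": write_set}

original = {"BatchSize": {"version": 0, "value": {"max_message_count": 10}}}
updated  = {"BatchSize": {"version": 0, "value": {"max_message_count": 110}}}
update = compute_update(original, updated)
print(update["write_set"]["BatchSize"]["version"])   # 1
```

Submitting the same computed update twice is exactly what produces the "Attempt to set key ... to version 1, but key is at version 1" rejection seen earlier.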
Let me look something in the Node JS SDK.
ok
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=g8SoH5dAkeN4SQpns) @kostas Interesting; so, the index which maps a txn to its block and position in the block, what will it store in that case? Multiple entries for the same txnId?
Which index?
@kostas I think i am doing all the steps https://github.com/vu3mmg/fabric-samples/blob/a7f8fbca47850db2c738f0b1b7d4efa7f1c044a4/fabcar/configUpdate.js#L158
original_config_proto, updated_config_proto and sending to configtxlator
getting the response and sending to OS
think so
What I'm trying to confirm with Bret is whether `updateChannel(request)` (line 194) will send an Envelope even though you haven't set the envelope field in your `request` object (line 185).
ok
fine , I will wait
Do you have the script for updating max_message_count handy?
It should be the exact same sequence of steps.
yes
yes
Please point me to it.
I will add that in the repo
give me a minute
> What I'm trying to confirm with Bret is whether `updateChannel(request)` (line 194) will send an Envelope even though you haven't set the envelope field in your `request` object (line 185).
Bret says you can skip the envelope field and the Node JS SDK will construct it.
So that's not it.
(Still, send me the script for updating max_message_count.)
I will investigate this further later today.
There's a chance it might spill over to tomorrow. Either way, I'll look into it.
Can you open up a JIRA and assign it to me?
the script is configUpdateMaxMsgCount.js
the script pullconfig.js
will help you display the existing config
Thank you very much @kostas
Please search for " "max_message_count":
in the output json from pullconfig.js
Will do. Please remember to open up the JIRA item and assign it to me.
ok
sure
@kostas created the jira https://jira.hyperledger.org/browse/FAB-8556
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=Sphvt6EsiaXkELuDP) @kostas The block index.
@minollo: The transactions will have a different index.
@kostas ...but what would getTransactionById(txnId) return? The duplicate transactions would have the same txnId...
Ah, I see what you're saying.
Good question. Let's ask @manish-sethi
@manish-sethi: For context:
Has joined the channel.
https://chat.hyperledger.org/channel/fabric-orderer?msg=EWG25XdzMKfscuN7L
And:
https://chat.hyperledger.org/channel/fabric-orderer?msg=g8SoH5dAkeN4SQpns
@kostas @minollo - yes, during commit time, a check is performed that the txid is unique. If a repeat is found, that transaction is marked invalid.
This is true even across blocks
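A toy model of that commit-time uniqueness check (illustrative only, not Fabric's committer code):

```python
# Toy model of the commit-time txid uniqueness check: the first
# occurrence of a txid is valid, any repeat -- even in a later
# block -- is marked invalid. Not Fabric's actual committer code.

VALID, DUPLICATE_TXID = 0, 1

def validate_block(block_txids, seen_txids):
    """Return one validation code per transaction in the block."""
    codes = []
    for txid in block_txids:
        if txid in seen_txids:
            codes.append(DUPLICATE_TXID)
        else:
            seen_txids.add(txid)
            codes.append(VALID)
    return codes

seen = set()
print(validate_block(["tx-a", "tx-b"], seen))   # both valid
print(validate_block(["tx-a", "tx-c"], seen))   # tx-a is a cross-block duplicate
```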
Has joined the channel.
@manish-sethi So, the block index will just keep pointing to the first processed instance of that txnId?
Do you mean to say that when you execute get transaction by txid, what would you get?
that's another way to ask the question, yes
Good question. I think I would have to look into the code... I believe that you would get the later one
oh my
But ideally, you should be given back both
if anything, surely not the last one
Not sure. Will have to look
Will look and let you know. Would you mind posting this question on the #fabric-ledger channel? I'll look in the code and let you know there
So it benefits others and if someone already knows it beforehand, they can answer
will do
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=dg4WudzrBMcxhtG5a)
@amolpednekar Assuming that your test is bound by the I/O rate of the network, remember that the message must go from the orderer to Kafka, back to the orderer, and finally to the peer, so the last step involves more hops.
Has joined the channel.
Has joined the channel.
What is the main role of the orderer in Hyperledger Fabric?
@here
@kostas ^
@debutinfotech I recommend that you read http://hyperledger-fabric.readthedocs.io/en/latest/arch-deep-dive.html
Although the whole document is useful for context, your question is answered by http://hyperledger-fabric.readthedocs.io/en/latest/arch-deep-dive.html#ordering-service-nodes-orderers
Has joined the channel.
@here What are Endorsers, Committers & Consenters? What are their roles?
Consensus service.png
What is the difference between Solo & Kafka?
@here
@debutinfotech stop using `@ here`
Ok
@debutinfotech Please stop using `@ here` . It's getting annoying
Can someone suggest where I can find a description of solo and kafka?
Has left the channel.
Has joined the channel.
Has left the channel.
@debutinfotech https://github.com/hyperledger/fabric/blob/master/orderer/README.md
Has joined the channel.
Thank you very much @dinesh.rivankar
Really appreciated
Has joined the channel.
Has joined the channel.
Hi, currently I'm testing the speed of a single transaction using a solo ordering service. I'm getting 1s for invoke and 0.8s for query using the Node SDK. Is there any way I can further reduce the time? Will using Kafka orderers help reduce the time? Does using better AWS virtual machines help too?
Has joined the channel.
Has joined the channel.
Has joined the channel.
hi, how do I configure multiple orderers? Has anyone tried this architecture yet?
@JayJong Most likely you are simply waiting for the batch timeout to expire. The orderer collects a set of transactions into a batch before putting them into blocks. If the transaction throughput is not high enough, then eventually a timer expires, creating the block.
You may configure your batch timeout to be very small if that is required for your application, or you can decrease the batch size so that fewer messages will trigger a block (or you may increase your transaction rate)
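For reference, the knobs being discussed live in the `Orderer` section of `configtx.yaml`. A sketch following the sampleconfig layout, with purely illustrative values:

```yaml
# Sketch of the relevant configtx.yaml keys (values are illustrative,
# not recommendations). A block is cut when BatchTimeout expires OR a
# BatchSize limit is reached, whichever comes first.
Orderer:
  BatchTimeout: 250ms        # small timeout => lower latency, more blocks
  BatchSize:
    MaxMessageCount: 10      # set to 1 for minimal per-transaction latency
    AbsoluteMaxBytes: 10 MB
    PreferredMaxBytes: 512 KB
```

These are channel configuration values, so after bootstrap they are changed with a channel config update rather than by editing the file.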
> hi, how to config to use multiple orderer, have anyone tried this architecture yet?
@AndrewRy 1 Yes, this is well supported
Hi @jyellick , could we configure, say, 2 orderers to support only 1 channel for the sake of fault tolerance?
thank you very much, could you please point me to documentation on how to do it?
@AndrewRy 1 Of course
You may use as many orderers for as many or few channels as you like
If you are bootstrapping a network, then the process is very easy
Simply edit your `configtx.yaml` here: https://github.com/hyperledger/fabric/blob/release/sampleconfig/configtx.yaml#L140-L141
To set the addresses of your desired orderer nodes
Then, generate your genesis block, and use it to bootstrap each of the orderers
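The edit Jason describes might look like this (the hostnames and port are hypothetical):

```yaml
# configtx.yaml excerpt: list every orderer endpoint before bootstrapping.
Orderer:
  OrdererType: kafka
  Addresses:
    - orderer0.example.com:7050
    - orderer1.example.com:7050
    - orderer2.example.com:7050
```

Then something like `configtxgen -profile <your orderer profile> -channelID <system channel ID> -outputBlock genesis.block` produces the genesis block that each of the listed orderers is bootstrapped with.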
great,
What if I already have a network running, and now I want to add one more orderer to the system? Is this possible?
Yes, it is certainly possible
You must follow the same steps as adding an org to your channel
Except instead of adding the org entry, you should add the new orderer to the list of orderer addresses
thank you, got it
Has joined the channel.
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=p4RPEDGja3Th8dYsw) @jyellick hi thanks for the reply, yes my application focuses more on the speed of a single transaction and the invoke and query has to be close to instantaneous
another qn, currently im setting the e2e_cli setup found in the master branch of fabric/examples/e2e_cli. I ran docker-compose-cli.yaml but something weird happens.
```
ex02 Init
Aval = 100, Bval = 200
ex02 Invoke
Query Response:{"Name":"a","Amount":"100"}
ex02 Invoke
Aval = 90, Bval = 210
ex02 Invoke
Query Response:{"Name":"a","Amount":"100"}
```
It seems that the invoke did not write the latest state to the block. Anyone had this issue too?
@JayJong Please run `./network_setup.sh up` instead
can anyone pls help me with orderer configuration with kafka. I have 2 orderers, 4 kafka brokers and 3 zookeepers running in my ubuntu machines
i am facing issues while connecting to kafka broker. below given are my orderer logs.
orderer1-org0 logs
https://hastebin.com/nudarahumu.hs
orderer2-org0 logs
https://hastebin.com/ciboxokike.hs
my docker-compose file
> my application focuses more on the speed of a single transaction and the invoke and query has to be close to instantaneous
@JayJong Then please simply configure your max message count to `1`, you will always have minimal latency, though your throughput may be hurt
my docker-compose file for orderer services
https://hastebin.com/datomefupe.http
@javrevasandeep I see in your logs:
```Failed to connect to broker kafka1:9092: dial tcp: i/o timeout```
This almost definitely means that your networking layer is not setup correctly. Please see the first question here: http://hyperledger-fabric.readthedocs.io/en/latest/ordering-service-faq.html?#id2
but i think there is issue with fabric-ca-orderer which comes with fabric 1.1 version and fabric-orderer. The thing is it works fine with
fabric-orderer instead of fabric-ca-orderer
Please do not use the fabric-ca-orderer images, as you say, they likely have problems. Please use the real fabric-orderer images.
Actually I am generating certificates dynamically for the orderers through fabric-ca-orderer. I don't want to use the cryptogen tool to create member certificates. Therefore in my docker-compose file, I first enroll the orderer to generate the certificates, and then run the orderer in the same container.
@kostas able to get any clue for our org update issue - https://jira.hyperledger.org/browse/FAB-8556
I will post on that JIRA when I have updates. I'm tied up on something else at the moment.
@javrevasandeep You are certain that this exact compose file works with the standard fabric-orderer image?
Hi guys. Any updates on the issue I am facing related to Kafka? I have tried all possible workarounds but couldn't get it to work. I have a specific requirement to use the fabric-ca-orderer image instead of the fabric-orderer image
Yes this works with fabric-orderer image
@javrevasandeep There should be absolutely no differences between the two images other than the inclusion of the CA client
Please start up your network, and confirm using the kafka sample clients that you can connect to the Kafka cluster from within the orderer container.
When I look at your compose file, I see ``` kafka1:
ports:
- 9192:9092
```
And your log says:
```Failed to connect to broker kafka1:9092: dial tcp: i/o timeout```
I suspect the two are related.
Below given is the exact docker-compose file that works with fabric-orderer image
https://hastebin.com/ribaroqape.cs
@javrevasandeep Can you confirm using the kafka sample clients that you can connect to the Kafka cluster from within the orderer container?
I see ```- - KAFKA_ADVERTISED_LISTENERS=EXTERNAL://10.0.1.4:9192,REPLICATION://kafka1:9093
+ - KAFKA_ADVERTISED_LISTENERS=EXTERNAL://10.0.1.5:9192,REPLICATION://kafka1:9093
```
As a difference between the two compose files -- is this desired?
yes actually in the first configuration we are using 10.0.1.4 machine for our ordering service while in 2nd one we are using 10.0.1.5 machine
Has joined the channel.
I thought that was likely the case. Could you please confirm with the Kafka sample clients then?
@javrevasandeep please confirm what version of kafka you are using.
@javrevasandeep Looking at your compose file, you define a `fabric-ca` network, but then only connect your orderers to this network. This is most likely your problem.
Further notes:
- no need to export ports for kafka or zookeeper (unless you expect orderer to reach kafka via the host network )
- you should not have to define any specific ip addresses. change your version to `3` or above and read https://docs.docker.com/compose/networking/.
- I don't see a need for you to have defined EXTERNAL and REPLICATION listeners, since there is no need for external (e.g. mapping to host ports) access to kafka. Consider just having a single listener.
Has joined the channel.
Hi guys, I was wondering if anyone knows if there is a simple way to verify that the orderer service is up. Is there a HEAD request that we can make similar to the fabric-ca?(the url is http://localhost:7054/api/v1/cainfo)
@sthavisomboon You may invoke the `Deliver` API on any known channel for a block like `newest`. For instance from the CLI `peer channel fetch newest`. If you get back success instead of `SERVICE_UNAVAILABLE` the orderer is up
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=3Eof3svmMJJ7Z9mgj) @jyellick ya i was using network_setup.sh previously. I just reinstalled the fabric repo and it's working fine now. Thanks for the help!
hi! i just tried to change my batch timeout variable (using kafka orderer) to see if it affects the amount of time it takes to invoke a transaction but i realized that the time taken remains the same, i.e. my batch timeout isn't updated at all! has anyone faced similar issues before?
Hi Fabric Experts, Thanks for your support.
I wish to allow only read access to a given channel in the network. I think this can be done by setting the Channel's writer's policy. But I cannot find any specific procedure/examples to achieve this. Any ideas? Thanks. 🙂
@Taffies How are you changing your batch timeout? Once your network is bootstrapped, batch timeout is managed on a per-channel basis.
@ArnabChatterjee Yes, you may modify the `/Channel/Application/Writers` policy to include only the identities you wish to be able to submit transactions on the network.
I'm unaware of any documented examples for this, this is an area we hope to expand on in our documentation in the coming months, but multiple users have successfully configured read-only organizations through this mechanism.
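For a rough idea of what such a change involves, here is a hand-written sketch of the `Writers` policy as it appears in the configtxlator-decoded channel config JSON, replacing the default implicit-meta rule with an explicit signature policy naming only the orgs allowed to write. This is an illustration, not a verified recipe, and `Org1MSP` is a hypothetical MSP ID:

```json
"Writers": {
  "mod_policy": "Admins",
  "policy": {
    "type": 1,
    "value": {
      "identities": [
        { "principal": { "msp_identifier": "Org1MSP", "role": "MEMBER" } }
      ],
      "rule": { "n_out_of": { "n": 1, "rules": [ { "signed_by": 0 } ] } }
    }
  }
}
```

An org omitted from such a policy can still satisfy the `Readers` policy (and thus receive blocks) while its broadcast transactions are rejected by the orderer.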
Has joined the channel.
Hello. I was wondering if there was support anywhere for using Amazon Kinesis in place of Kafka for use by the ordering service
No support for Kinesis. Given the similarities between the two systems, I would imagine that adapting Fabric to Kinesis would be a straightforward-ish task.
That was about my thinking. Thanks for clarifying though @kostas
guys, we have a naive design for introducing tendermint as a fabric ordering service; we hope those of you who know both fabric and tendermint can review it and give suggestions. The jira ticket is https://jira.hyperledger.org/browse/FAB-8643
Hello Everyone, I was using kafka based ordering for the fabric and for the time being I am able to achieve 100 transactions per second; beyond that my node sdk based client crashes. So I adjusted the PreferredMaxBytes to 256 KB and AbsoluteMaxBytes to 175 MB to achieve a MaxMessageCount of 700, implying that 700 messages per orderer block would mean 700 transactions per second. Is this implication right? If not, how do I achieve better transactions per second?
I mean, the number of transactions getting validated per second is what determines the transactions per second...
@NeerajKumar if your application is crashing, this is the problem you need to address. Tuning block sizes is the wrong solution.
In general, larger batch sizes and longer timeouts will increase throughput. However, until you are seeing an order of magnitude more throughput, I doubt that they are your constraint.
As a side note, a 256 KB preferred max size is quite small. You will never see 700 messages in a block in that configuration. Set the max message count to 700, and the preferred block size to be large. Also note, many blocks may be cut per second. Blocks are cut when the max size is reached or the batch timer expires, whichever comes first.
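Following that advice, the batch settings for a 700-message block might look like this in `configtx.yaml` (illustrative values only):

```yaml
Orderer:
  BatchSize:
    MaxMessageCount: 700     # up to 700 messages per block
    AbsoluteMaxBytes: 99 MB  # hard upper bound on block size
    PreferredMaxBytes: 8 MB  # large enough that size rarely cuts a block early
```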
hi folks, I just wanted to know, on an existing fabric network, how can I check which peer is an endorser? Can I re-instantiate an existing peer to become an endorser peer?
Hi Team, I want to set up mirroring for my kafka cluster. In order to do so, I need the consumer group id. I'm not able to see any group id in my installation. Is it an installation issue or some custom implementation where group id is not being used? Afaik group id is mandatory for kafka.
`Error: Error endorsing query: rpc error: code = Unknown desc = make sure the chaincode mycc has been successfully instantiated and try again: could not find chaincode with name 'mycc'`
@Ryan2 To make a peer an endorser, simply install the chaincode. Note, this is not the appropriate channel for questions about endorsers
Has joined the channel.
What is the meaning of endorsement in Hyperledger fabric?
@akshay.sood This is not the right channel for your question, however, I recommend you read http://hyperledger-fabric.readthedocs.io/en/latest/arch-deep-dive.html
> Is it a installation issue or some custom implementation where group id is not being used? Afaik group id is mandatory for kafka.
@AshishMishra 1 Fabric's Kafka-based ordering does not use consumer groups. Each orderer starts a consumer, and does not attempt to join an existing group. This is because all orderers need all messages.
@kostas kostas, good afternoon.
got a question, like to get some confirmation from you. We are running a system test with 32 orgs and 2 peers in each org.
we currently setup a different organization with 3 orderers in this same org. I'd like to know if this is a reasonable config when the load starts to ramp up.
> we currently setup a different organization with 3 orderers in this same org
I'm not sure what it means to be "in this same org"
@jyellick jason, thanks for responding to my question. one org has 3 orderers.
this org is not any of the 32 peer orgs.
Yes, for v1.0/v1.1 the standard configuration is that the ordering org is an independent org from any of the application orgs. Even if the org is actually part of the same business, for management reasons, it is best to have different MSPs.
I am trying to figure out whether, when load increases, more orderers will hurt the performance or help ease the load.
Adding orderers should help the system scale horizontally, up to a point.
(Assuming you are using the v1.1 codebase, the v1.0 codebase does not scale well horizontally)
I guess the question is: will multiple orderers work together and share the load, or will more orderers simply do the same thing, so that more communication among orderers is needed, in the end hurting performance?
we are using the latest code. 1.1. RC1, I think.
Yes, adding more orderers should help
does it matter that more orderers come from same org or different orgs?
In particular, if you are CPU bound, adding more orderers will help because you may spread transaction validation across the orderers
The organization of an orderer does not matter -- though for real life scenarios, there is typically only one ordering org.
if one orderer validates, other orderers simply take what that orderer has done and won't do duplicate work, right?
Correct
ok. thanks a lot Jason, you mentioned earlier that adding more orderers will help to a certain point, any additional info on that? when will it not help?
Eventually, I expect you will become IO bound. Since all orderers write all blocks to disk, there is currently no way to scale past this point.
haha, you beat me on that. great. thanks so much for your answers.
if I start a peer container multiple times, will it use a different local volume each time, or will it try to use the same volume for the ledger?
> if I start a peer container multiple times
I don't really follow. How are you starting it? How are you stopping it?
You may configure your peers to use a shared volume for their ledger, or not. You may destroy the shared volume on stop, or not.
@jyellick only specify where the keyfiles are. and /host/var/run.
nothing about where the ledger should be. and to be honest I do not know how.
Then if you remove the container, your ledger will be deleted with it. If you simply stop and start it, then the ledger should stay intact.
great. in that case, where will the ledger be saved?
Inside the container at `/var/hyperledger/production` (unless you changed the default)
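If you do want the ledger to survive container removal, a bind mount over that path works. A sketch for a compose file, where the service name and host path are made up:

```yaml
# docker-compose snippet: persist the peer ledger outside the container
# lifecycle by mounting a host directory over the default production path.
services:
  peer0.org1.example.com:
    image: hyperledger/fabric-peer
    volumes:
      - /home/me/fabric-data/peer0:/var/hyperledger/production
```

Conversely, for a clean start each test run, omit the mount (or delete the host directory) and remove the container.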
we are running the system tests, we run the test many times and like to have a clean start each time.
that is great. We did not change that.
should I worry about running out of space?
one test run over 12 hours with many transactions.
It really depends on the load you're injecting into the system. It should be a fairly easy thing to calculate. Look at the size of the typical transaction you will be pushing, multiply it by the number you plan to push, and your ledger will be roughly that size, plus whatever state db and indexing etc.
I'm not sure what the goal of your test is. Is it performance? Stability? Something else?
@jyellick I and Scott have been working on this with Surya.
both performance and stability. we actually have a lot of trouble to run the tests.
[Please Read Before Posting](https://wiki.hyperledger.org/chat_channels/fabric-orderer)
What is the content inside this orderer directory, /var/hyperledger/production/orderer/index? Could someone tell me?
Academic papers say the orderer is fault tolerant but does not protect against malicious actors. How do you mitigate this?
Directed Acyclic Graphs (DAGs) are used by 2 platforms, IOTA and Byteball. What are the pros/cons of DAGs versus Kafka/ZooKeeper? I realize DAG is very much in its infancy
@DennisM330 The fabric architecture supports pluggable consensus mechanisms. We are actively working towards providing a production ready BFT consensus implementation for fabric. I don't really see how DAGs or crypto-coins are relevant for consensus on order. Although you may implement crypto-coins on fabric, there is nothing inherent to the fabric which requires them.
jyellick
dave.enyeart
kostas
Directed Acyclic Graphs appears to not use blocks and instead link transactions so yes it does n
Has joined the channel.
Could someone please show me an example of how to configure multiple orderers?
Couldn't find a single example from anywhere
at2plus
Agreed that we should add an example for this. But it is really fairly simple. Simply follow a Kafka example, add as many orderer addresses as you would like here: https://github.com/hyperledger/fabric/blob/master/sampleconfig/configtx.yaml#L286 before bootstrapping. Then simply start the orderers at the addresses specified (using the genesis block you created with `configtxgen`).
Hi, anybody tried kafka deployment on kubernetes?
thanks @jyellick
@Glen I believe this CR has an example k8s fabric deployment with Kafka https://gerrit.hyperledger.org/r/c/12159/
Has this CR entered the master branch?
It is still pending, it could use some testing and support
ok, thanks
Hi @jyellick after I ran make kubernetes, and then kubectl create -f build/kubernetes.yaml, I ran into this error```
[root@master cluster]# kubectl create -f build/kubernetes.yaml --validate
error: error converting YAML to JSON: yaml: line 8: could not find expected ':'```
could you run this example on k8s?
Can the Composer rest server query enumeration types directly? A query on an enumeration redu
Hi @jyellick
"In particular, if you are CPU bound, adding more orderers will help because you may spread transaction validation across the orderers"
I don't really know what it is that the orderer validates in the transaction; is it checking the identity or the signature of the TX?
the orderer checks that the signature of the client that sent the proposal can be verified under the client's certificate, and also that the client's certificate is valid under one of the root CA certificate chains of the corresponding organization that the client belongs to.
@Ryan2
so if multiple orderers are implemented, does this framework load-balance the requests from clients automatically, or does a load balancer (like ELB) need to be implemented?
If we plug in many orderers, like when using Kafka, will the load be spread out equally? How can I tell this from the code, or is there any evidence for that?
you can send to a random orderer node
from the client
Has joined the channel.
thank you @yacovm
not sure if this is the right channel to ask, but i'm trying to figure out how to get visibility into the message size that is sent for ordering. should i be grabbing that from orderer logs? or can i just measure the size of the request in the sdk before it gets broadcast?
@jrosmith There are really many different ways to do this. Likely the simplest is simply to fetch the block which commits and extract the transaction from it, inspecting its size. You may also turn on debug transaction tracing in the orderer which will write the transactions into file system as they arrive, or, I expect the SDKs have a similar option.
@jyellick thank you!
@Glen this is a template of fabric-kafka deployment on k8s https://hastebin.com/tutaquxuri.cs
yes, that's right, kafka needs to wait for zookeeper to get up
Has joined the channel.
Hi @sh777 , can you also provide your zookeeper yaml definitions, as I sometimes get connection refused errors in setting up the zookeeper cluster. Thanks
Can somebody help me find documentation which lists all possible configurations for the "hyperledger/fabric-orderer" docker image? I really want to tweak some parts of the orderer's functionality through configuration.
@Glen https://github.com/inklabsfoundation/fabric-deployment may help
@grapebaba thanks for the response, it can be good for beginners, but I really need detailed configuration for every possible functionality.
@NeerajKumar There are channel configuration parameters, which are defined in https://github.com/hyperledger/fabric/blob/master/sampleconfig/configtx.yaml and then the orderer process parameters which are defined in https://github.com/hyperledger/fabric/blob/master/sampleconfig/orderer.yaml . The docker images generally override these values through environment variables; there is nothing specific to the configuration of the docker images vs that of the standard processes.
There are some additional channel configuration options which you may find described http://hyperledger-fabric.readthedocs.io/en/latest/config_update.html
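As a concrete illustration of the environment-variable override: each nested key in `orderer.yaml` maps to an `ORDERER_`-prefixed variable, with nesting joined by underscores. A compose-file sketch using commonly seen variables (your paths and MSP ID will differ):

```yaml
# docker-compose snippet: e.g. General.ListenAddress in orderer.yaml
# becomes ORDERER_GENERAL_LISTENADDRESS in the environment.
orderer.example.com:
  image: hyperledger/fabric-orderer
  environment:
    - ORDERER_GENERAL_LISTENADDRESS=0.0.0.0
    - ORDERER_GENERAL_GENESISMETHOD=file
    - ORDERER_GENERAL_GENESISFILE=/var/hyperledger/orderer/genesis.block
    - ORDERER_GENERAL_LOCALMSPID=OrdererMSP
```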
Is it true that two orderers (using Kafka) generate the same block and deliver both to peers, and the peer accepts one block and discards the other?
Thanks @jyellick this really helped. I also really want to increase the TPS (transactions per second) of my fabric network; increasing the orderer messages-per-batch limit can help, but that's not the whole solution. can you point me toward the right doc which says what needed to be done to increase TPS? I am also separating out the endorsing peer to something more powerful in computing; is there any extra config I need to touch to improve the TPS?
Hello! I have a question. If I remove the *BatchTimeout* from the configtx.yaml file, the timeout becomes infinite or there is an upper bound? I tested and it seems to be limited but I'm not sure. Thanks in advance.
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=vKPcrhDe2QSwNRDWM) @mikykey The configtxgen tool defaults to a `BatchTimeout` value of 2 seconds if not specified.
Has joined the channel.
@NeerajKumar
> can you point me toward the right doc which says what needed to be done to increase TPS
There is no one size fits all answer. This is why the system is configurable. As a rule of thumb, the larger the batch size (in max message count, and preferred size) the higher the orderer throughput will be, but the higher the transaction latency will be. If you are not already using fabric v1.1.0-rc1, I strongly suggest you move to this release. There are significant performance improvements over v1.0.x
Anyone here know how to find `CORE_PEER_LOCALMSPID`?
@pankajcheema I'm not sure what the question is? That is an environment variable that can be set. You could `echo $CORE_PEER_LOCALMSPID` or `env | grep CORE_PEER_LOCALMSPID`
@pankajcheema Also, you are cross-posting to channels, please don't do this
@jyellick The file `docker-compose-base.yaml` is asking for this parameter
Each organization in the network has an MSP ID. You configured this in your `configtx.yaml` file
@jyellick it is asking for Local MSP
Each peer has a local MSP ID
I'm unable to find that
I'll reply to you in #fabric-questions
Ok
Hi, I got this issue when starting up the orderer with Kafka,
"[orderer/common/deliver] Handle -> WARN Rejecting deliver request because of consenter error",
despite Topic is created successfully in the Kafka cluster.
Do you know how to resolve this one?
anyone facing this issue? `Error creating channelconfig bundle: initializing configtx manager failed: error converting config to map: Illegal characters in key: [Group]`
orderer is throwing this error
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=nQ8nJMAr8EvQLTL2p) @sanchezl Thank you very much :)
Has joined the channel.
Anyone know this error? ```orderer.example.com | 2018-03-12 10:43:28.563 UTC [orderer/commmon/multichannel] newLedgerResources -> CRIT 004 Error creating channelconfig bundle: initializing configtx manager failed: error converting config to map: Illegal characters in key: [Group]
```
currently i'm using hyperledger fabric v-1.1.0-preview. When i'm trying to add org4 via configtxlator tool. When i'm viewing the logs of docker cli container i can see this error:
error: 2018-03-12 11:52:07.759 UTC [gossip/service] configUpdated -> ERRO 25b Tried joining channel mychannel but our org( Org4MSP ), isn't among the orgs of the channel: [Org1MSP Org2MSP] , aborting.
Hi all, I have successfully deployed a network with Kafka on the same machine using the docker-compose examples. But when I try to split the configuration to have it on 2 different servers, I have this error message :
*kafka0 | java.io.IOException: Connection to 25ff3a528146:9092 (id: 2 rack: null) failed*
does anyone understand why the host name is not the IPv4 address I set in the config?
is it an IPv6 one that is displayed in the error? And what does it mean?
@pankajcheema You have been warned not to cross-post your questions, you will be muted in this channel
pankajcheema
@BOGATIM
> error: 2018-03-12 11:52:07.759 UTC [gossip/service] configUpdated -> ERRO 25b Tried joining channel mychannel but our org( Org4MSP ), isn't among the orgs of the channel: [Org1MSP Org2MSP] , aborting.
The peer is processing blocks from genesis going forward. It sees that its org is not in the genesis block so gets a little confused. This is a known issue and we are working on making this nicer. For the time being, if you enable gossip leader election for your peer, it should bypass this issue.
@bfuentes@fr.ibm.com You should look into http://kafka.apache.org/090/documentation.html and the advertised hostname. Kafka looks up its hostname and reports it to clients as the address to use to connect to the container. The hostnames docker assigns to containers can be rather cryptic
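Concretely, pinning the advertised hostname to the broker's compose-network alias avoids those cryptic generated names. A sketch, where the `kafka0` alias is whatever name your orderer configuration points at:

```yaml
# docker-compose snippet: make Kafka advertise a stable name that the
# orderer container can resolve on the compose network.
kafka0:
  image: hyperledger/fabric-kafka
  environment:
    - KAFKA_ADVERTISED_HOST_NAME=kafka0
    - KAFKA_ADVERTISED_PORT=9092
```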
thank you
byfn always shows ```ERROR: for peer1.debutinfotech.com Cannot start service peer1.debutinfotech.com: driver failed programming external connectivity on endpoint DebutInfotechPeer1 (f808c3c48dfebbe508c66b81aeaceb137656520998d5089d895b698822910149): Bind for 0.0.0.0:1051 failed: port is already allocated``` even if the ports are changed
@akshay.sood This is not the appropriate channel for questions about the peer or byfn, I suggest #fabric-questions
@jyellick no one is answering my question
seems like people either do not know the solution
I don't see your question in that channel?
or do not want to help
@akshay.sood This is unfortunate, but the point remains, and Jason's right. This is an off-topic question for this channel.
@jyellick If one org admin creates a channel for two orgs using only his own Admin signature, while setting a mod-policy in the create-channel transaction that requires signatures of both orgs, would it be rejected?
I could break it down into preliminary questions: (1) Is it possible to set that mod policy in the channel creation tx? (2) Would the orderer read and use it, or would it get ignored? (3) If the orderer uses it, would it be effective on the channel creation transaction itself, or only on subsequent modification requests?
@scottz @jyellick
I ran a quick test using the *e2e_cli* sample: I changed the mod policy to *ALL* in all mod-policy sections of the channel transaction (protobuf converted to JSON),
and then converted the JSON back to protobuf using the command
```
configtxlator proto_encode --input channel-artifacts/channel.json --type common.Envelope --output channel-artifacts/channel.tx
```
I see channel created successfully and now the Channel Genesis block has different mod-policy information than what I added.
Has joined the channel.
@scottz @Ratnakar Expected behavior is:
1) Certainly, the channel creator may set any mod_policy, or policy in the /Channel/Application group
2) Yes, the orderer would read and use it
3) Everything is effective after commit. For mod_policy you would only notice it on an update; for other policies, like the Readers policy, it would take effect from the commit time on
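To illustrate what jyellick describes, in the configtxlator JSON every config group, value, and policy carries its own `mod_policy` field. A hedged sketch of the shape (the names below are the conventional defaults, not taken from the channel tx in question):

```json
"Application": {
    "mod_policy": "Admins",
    "policies": {
        "Admins": {
            "mod_policy": "Admins",
            "policy": {
                "type": 3,
                "value": { "rule": "MAJORITY", "sub_policy": "Admins" }
            }
        }
    }
}
```

Changing `rule` here only takes effect once the config containing it is committed, which matches the behavior observed above.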
Has joined the channel.
Has left the channel.
pankajcheema
Anyone faced this issue? ```ca.debutinfotech.com | 2018-03-13 05:25:13.528 UTC [cauthdsl] deduplicate -> ERRO 00b Principal deserialization failure (the supplied identity is not valid: x509: certificate signed by unknown authority (possibly because of "x509: ECDSA verification failure" while trying to verify candidate authority certificate "ca.debutinfotech.com")) ```
Seems like there's an issue with the certificate of the orderer
`ca.debutinfotech.com` is an orderer
I tried with generating new certificates
but failed again
Does anyone have suggestions?
TLS is enabled
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=rpDXHvWABahFbkQCf) @pankajcheema Clear the crypto config folder and try again
@varun-raj done several times but no luck
It happens mostly when you create crypto materials without deleting the old ones
@varun-raj I have deleted the crypto config directory and I do this every time I make changes in file
but still not working
@fanjianhang can you look into this issue?
Has joined the channel.
This is the error ```ca.debutinfotech.com | 2018-03-13 06:31:28.871 UTC [cauthdsl] deduplicate -> ERRO 16a Principal deserialization failure (the supplied identity is not valid: x509: certificate signed by unknown authority (possibly because of "x509: ECDSA verification failure" while trying to verify candidate authority certificate "ca.debutinfotech.com")) for identity 0a0844656275744d535012ae062d2d2d2d2d424547494e2043455254494649434154452d2d2d2d2d0a4d4949434c54434341644f67417749424167495146764a3965534756666c586e7245546e506b6e4e4f7a414b42676771686b6a4f50515144416a42314d5173770a435159445651514745774a56557a45544d4245474131554543424d4b5132467361575a76636d3570595445574d4251474131554542784d4e5532467549455a790a5957356a61584e6a627a45614d4267474131554543684d525a47566964585270626d5a766447566a6143356a623230784854416242674e5642414d5446474e680a4c6d526c596e56306157356d6233526c59326775593239744d423458445445344d444d784d7a41324d6a55774e316f58445449344d444d784d4441324d6a55770a4e316f776254454c4d416b474131554542684d4356564d78457a415242674e5642416754436b4e6862476c6d62334a7561574578466a415542674e56424163540a44564e6862694247636d467559326c7a59323878447a414e42674e5642417354426d4e7361575675644445674d423447413155454177775851575274615735410a5a47566964585270626d5a766447566a6143356a623230775754415442676371686b6a4f5051494242676771686b6a4f50514d4242774e434141524f437939350a7552716c75354234756332334a4942765964476256684a626d7666674b425434555a64375543636751515761685474524f344b515870774c6347756d577851790a7a397a584536485168743434664c45796f303077537a414f42674e56485138424166384542414d434234417744415944565230544151482f42414977414441720a42674e5648534d454a444169674342766530444449614177616d4d59735256483962616357626d336c754254666e4d452f4b5577413553654b7a414b426767710a686b6a4f5051514441674e494144424641694541795166666147516c47732f434377716d6b7158693345613438774a717377773648307854435a6a45685441430a494736624e2f416f7a514b6f34314e4c62504a4634582f5345687362324370567048317666734c71593675560a2d2d2d2d2d454e44204345525
4494649434154452d2d2d2d2d0a```
@TobiasN
@jyellick
@pankajcheema I think the crypto-config certs, like the pem files, don't match the CA's.
@fanjianhang But I'm generating the certificates properly
Try checking the ca-container logs.
```- &DebutInfotechOrderer
# DefaultOrg defines the organization which is used in the sampleconfig
# of the fabric.git development environment
Name: DebutInfotechPvtLtdOrderer
# ID to load the MSP definition as
ID: DebutOrdererMSP
# MSPDir is the filesystem path which contains the MSP configuration
MSPDir: crypto-config/ordererOrganizations/debutinfotech.com/msp```
This is the orderer config in cryptotx.yaml
I fixed this by renaming the orderer. Previously it was ca.debutinfotech.com; now its name is hr.debutinfotech.com and it works fine. Does anyone know about this issue?
seems like everyone is sleeping
Hi, I got problem with Kafka-based Orderer
I deleted the "/tmp/kafka-logs" folder on the kafka server, then restarted the kafka and zookeeper services, and the cluster started up successfully.
But when I restarted the Ordering service, it said "[orderer/common/deliver] Handle -> WARN 5b0 [channel: mychannel] Rejecting deliver request because of consenter error" and the app cannot work on this channel
Do you guys know how to solve this issue, what to do in this case
Thanks
Has joined the channel.
Has left the channel.
@pankajcheema I'm not sure why you could have named your orderer 'ca', my best guess is that this name was already taken by your ca server and the conflicting common names caused a problem
@Ryan2 Most likely, the orderer has simply not reconnected to Kafka yet. If the cluster was down long enough, you might need to restart the orderer to force it to attempt to reconnect.
Hi @jyellick , Thank you for the response.
since my network has two channels, the Orderer connected to the other channel successfully, but cannot connect to 'mychannel' (as the log showed)
What do you think is wrong in my case?
@Ryan2 That's unusual that it would connect to one channel and not the other. Have you tried restarting your orderer process?
yes, it's weird, I restarted the Orderer several times, but the Orderer only connected to 1 channel; the other channel (mychannel) cannot be reached
Situation like the log below:
```
2018-03-13 15:17:17.187 UTC [orderer/kafka] processMessagesToBlocks -> DEBU 280 [channel: testchainid] Successfully unmarshalled consumed message, offset is 1. Inspecting type...
2018-03-13 15:17:22.155 UTC [orderer/kafka] try -> DEBU 282 [channel: mychannel] Connecting to the Kafka cluster
2018-03-13 15:17:22.160 UTC [orderer/kafka] try -> DEBU 283 [channel: mychannel] Connecting to the Kafka cluster
2018-03-13 15:17:27.155 UTC [orderer/kafka] try -> DEBU 284 [channel: mychannel] Connecting to the Kafka cluster
2018-03-13 15:17:27.159 UTC [orderer/kafka] try -> DEBU 285 [channel: mychannel] Connecting to the Kafka cluster
2018-03-13 15:17:32.155 UTC [orderer/kafka] try -> DEBU 286 [channel: mychannel] Connecting to the Kafka cluster
```
Have you have any idea @jyellick
@Ryan2 I am not certain, I've never seen anything like this. Have you checked your Kafka/ZK logs for anything out of the ordinary?
Hi @jyellick, I'm not sure whether it's abnormal or not, but for the Kafka cluster I manually created one topic plus a producer and a consumer, and I can manually send and see messages on the producer and consumer terminals, so I assume the Kafka cluster is set up correctly.
Then I started the Orderer; the orderer ledger has 2 channels, but it can connect to only one, and cannot connect to the one (mychannel) which I want to use
But thank you for the consideration :)
@Ryan2 My concern would be that perhaps you have lost the data in your Kafka cluster, and that the orderer is trying to connect to the partition at an offset which no longer exists, so it is failing.
oh maybe. Before restarting the Kafka cluster, I cleaned all data in /tmp/kafka-logs; is this the data you mean I lost?
I can't claim a great deal of experience in Kafka administration, but it sounds possible
So this means that the fabric network for that specific channel cannot be recovered if the Kafka data is lost?
How weird it is
Fabric relies on Kafka to provide a crash fault tolerant data store for ordering
If it crashes and is not recoverable, this will be a serious problem for your fabric network
thank you for the insight.
one concern left: since I cleaned all data in the Kafka cluster, if it is as you said, none of the channels should be able to start up, but 1 channel is still up (as I showed earlier)
Each channel is represented by a Kafka topic/partition. If the partition is available/correct, then the channel will function correctly. So this is the natural way it would fail
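For reference, the Fabric guidance for Kafka-based ordering recommends configuring the brokers so channel partitions can never be pruned or silently truncated; a sketch of the relevant server.properties lines:

```
# Never delete log segments -- the orderer may need to replay a channel
# partition from offset 0 after a restart
log.retention.ms=-1
# Never let an out-of-sync replica become partition leader
# (prevents silent data loss on failover)
unclean.leader.election.enable=false
```

With retention disabled, wiping /tmp/kafka-logs (as above) is the only way the channel data disappears.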
Has joined the channel.
Hello to all! I'm trying to make a small app with the node SDK, but I encountered an issue I have no idea how to fix (I spent many hours on Google and Jira with no success). My setup is based on Fabcar from Fabric-samples. The startfabric script executes without error and everything seems fine.
When I launch my server.js I got this error
```
error: [Orderer.js]: sendDeliver - rejecting - status:FORBIDDEN
error: [Channel.js]: getChannelConfig - Failed Proposal. Error: Error: Invalid results returned ::FORBIDDEN
```
I dug into the SDK and Fabric to find that it came from some certificates. The logs from the orderer can be seen here (https://hastebin.com/kimowijili.hs). I tried to regenerate the crypto material but I get the same result every time.
@AnthonyRoux Are you certain that the MSP ID you are specifying for your cert matches the MSP ID of the issuing CA?
@AnthonyRoux That error is pretty much as it reads. The identity you are using is being rejected by the orderer because the orderer checked to see if it was issued by any CA in the channel, and it is not finding one.
(Upon further reflection, the MSP ID mis-match would produce a different error)
@jyellick Normally everything should be ok since I just launched the scripts from Fabcar. Right ?
Did you make sure you removed any of your previous network artifacts? The crypto is stored in the blockchain, so if the ledgers are around and you regenerated your crypto, you would see an error like this
I do it every time. I just did it one more time and the result is the same
You do not get the same error with the un-modified example?
It is an unmodified example (the network part). The node application is from me. I use a config.json file to get the information needed but I don't think the mistakes come from there
If I bypass the line that get me the errors (`channel.initialize()`) and I go through the Invoke function, I got this ```Error: 2 UNKNOWN: Failed to deserialize creator identity, err The supplied identity is not valid, Verify() returned x509: certificate signed by unknown authority```
Can you confirm that you still see this error with the unmodified application? As you say, it should not have an effect, but it is worth verifying
unfortunately I got the same kind of error (https://hastebin.com/eqojagutew.vbs)
If you have debug logging enabled, you should be able to see the certificate in the orderer logs. You can then use openssl to verify that it is signed appropriately by one of your CAs
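As a worked example of that openssl check, the snippet below generates a throwaway ECDSA CA and leaf cert and verifies the chain; with real Fabric artifacts you would substitute your CA cert (e.g. from crypto-config) and the certificate pulled from the orderer log. All filenames here are placeholders:

```shell
# create a throwaway CA (self-signed ECDSA cert)
openssl ecparam -name prime256v1 -genkey -noout -out ca.key
openssl req -new -x509 -key ca.key -subj "/CN=ca.example.com" -days 1 -out ca.pem

# create a leaf key + CSR and sign it with the CA
openssl ecparam -name prime256v1 -genkey -noout -out peer.key
openssl req -new -key peer.key -subj "/CN=peer.example.com" -out peer.csr
openssl x509 -req -in peer.csr -CA ca.pem -CAkey ca.key -CAcreateserial -days 1 -out peer.pem

# verify the leaf against the CA -- should print "peer.pem: OK"
openssl verify -CAfile ca.pem peer.pem
```

If the cert from the log was issued by a different CA (as turned out to be the case below), this verify step fails with "unable to get local issuer certificate".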
@jyellick So it seems there is a problem. If I copy the certificate present in the logs of the orderer, the issuer common name is fabric-ca-server. When I check another certificate present in the crypto-config folder, the issuer name is ca.org1.example.com
Has joined the channel.
Hi guys! I'm trying to bring up a network with kafka and everything goes well until the channel creation; kafka creates the topic but there aren't any messages being delivered to it by the orderer after the channel creation.
the kafka/zk images are in `x86_64-1.1.0-preview` tag
and its docker-file is here https://hastebin.com/sufofetove.bash
the orderer file https://hastebin.com/naxagizida.cs
and the configtx.yaml https://hastebin.com/aqanihemow.coffeescript
following the [quickstart](https://kafka.apache.org/quickstart) from apache I can create a topic and consume it on my hyperledger-kafka/zk instances, but I can't produce to it
so every time I try to
```bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic test
bin/kafka-console-producer.sh --broker-list localhost:9070 --topic test```
And write some message, I got:
```>teasd
>[2018-03-14 18:58:31,375] ERROR Error when sending message to topic test with key: null, value: 5 bytes with error: (org.apache.kafka.clients.producer.internals.ErrorLoggingCallback)
org.apache.kafka.common.errors.TimeoutException: Expiring 1 record(s) for test-0: 1525 ms has passed since batch creation plus linger time
```
and you can check at kafka's docker-file that the port to kafka is correct
On the other hand, the consumer works well for both the test topic created following the quickstart and the mychannel topic created by the fabric network, but in both cases there aren't any messages on it
Has joined the channel.
Has joined the channel.
Just a guess, but perhaps you don't have enough connected replicas to satisfy the ISR requirements for production. Do your Kafka logs look happy?
Hi @jyellick, I set up a kafka cluster and started the orderer; it seems the orderer process is blocked here ```2018-03-15 02:11:29.787 UTC [orderer/kafka] sendConnectMessage -> INFO 007 [channel: testchainid] About to post the CONNECT message...```
Hi All. Does anyone have an idea about connecting multiple orderers? I am getting this error when I tried 2 orderers of the same organization using the command `docker-compose -f docker-compose-cli.yaml up`: ```orderer.debut.com | panic: Error while trying to open DB: resource temporarily unavailable```
@TobiasN
@jyellick
@fanjianhang
@mikykey
Can you guys help me?
Hi guys, can I hide data from my orderer and kafka service? I can see a lot of data in the orderer and kafka logs. I want only my client or sdk and peers to know the data.
orderer log: Failed to connect to broker network01-kafka3-1749917152-0fqhp:9092: dial tcp: i/o timeout
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=fC577Fxwc8DuYAGTt) @pankajcheema I created an organization with many orderers only with this tool https://github.com/ibm-silvergate/netcomposer
did you use Kafka orderers or Solo orderer? (because I think you can't create many solo orderers)
Hi Experts, I have a question that I am dealing with right now. Say you registered a user on behalf of an organisation in fabric and now you want to invoke a transaction on behalf of that user. Why does fabric sign this transaction with the private key of the user? I am removing all private keys of the user just after registering and storing them in AES-256 encrypted format in the db, but as soon as I am about to invoke the transaction I need to acquire the registered user's context. Since I have already deleted that private key, an error is thrown which says: "Private key missing from key store. Can not establish the signing identity for user". Why does it require signing on behalf of the end user?
*"getUserContext(name, checkPersistence)"* this method is able to load the user's certificate successfully, but soon after that it throws this private key missing error
`-P "('DebutInfotechMSP.peer')"` instantiating chaincode on the orderer and giving a policy for only a single organization. Getting error `Error: Invalid policy ('DebutInfotechMSP.peer')`. Please help
@Glen I think that is not possible, but as the participating organizations can operate their own ordering service, the data doesn't need to be encrypted.
@Glen I think it is impossible, because a transaction's Read/Write set describes how the ledger gets affected by the transaction. Here are some thoughts:
1. encrypting the arguments and function name of a transaction should be possible; the secret (private key or passphrase) could be shared in chaincode.
2. if the read/write set contains encrypted data, every reader (chaincode and third-party apps) would need to have the secret (which will be difficult to keep a secret).
3. as in a blockchain the data is by default shared among different organizations, you never know if one of them is providing public access to the information (whether that is legal is a nontechnical question)
4. when data is transported between the services, TLS is usually used, so on the wire the data is encrypted.
@NeerajKumar that is right, you cannot delete the private keys. I think to make that scenario complete, the fabric-ca-client would need a rewrite. Today, the user needs to trust their organization to only invoke the transactions with the user's key that the user wants. With today's node-SDK you are only able to establish trust between organizations, but not necessarily from the organization to the user. But normally employees can trust their company.
@NeerajKumar can I ask which crypto-store and keystore implementation you use to store the keys in the DB?
Anyone here, how do you set an endorsement policy while instantiating a chaincode when you have a single organization? I am using the following command: ```root@739ce58d4963:/opt/gopath/src/github.com/hyperledger/fabric/peer# peer chaincode instantiate -o hr.debut.com:7050 --tls --cafile /opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/ordererOrganizations/debut.com/orderers/hr.debut.com/msp/tlscacerts/tlsca.debut.com-cert.pem -C $CHANNEL_NAME -n mycc -v 1.0 -c '{"Args":["init","a", "100", "b","200"]}' -P DebutMSP.peer```
it throws the following error: `Error: Invalid policy DebutMSP.peer`
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=3ScwQLbbbfLcixgNu) FIXED
Anyone knows how to use this command when you have 2 orderers of same organization? ```peer channel create -o orderer.debut.com:7050 -c $CHANNEL_NAME -f ./channel-artifacts/channel.tx --tls --cafile /opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/ordererOrganizations/debut.com/orderers/orderer.debut.com/msp/tlscacerts/tlsca.debut.com-cert.pem```
@DarshanBc @TobiasN @mikykey @jyellick @rocket.cat
Has joined the channel.
Has joined the channel.
Hi, I faced one issue regarding the kafka-based orderer.
I have a cluster of 3 kafka and zookeeper nodes, and have configured
```
offsets.topic.replication.factor=3
transaction.state.log.replication.factor=3
transaction.state.log.min.isr=2
```
in the kafka server.properties.
First, I started up the kafka cluster successfully and zookeeper found quorum.
Then I started the Orderer, created the channel, and so on to start the fabric network.
Checking the details of the topic created for my channel, there is only 1 ReplicationFactor, NOT 3 as I configured:
```
Topic:mychannel PartitionCount:1 ReplicationFactor:1 Configs:
Topic: mychannel Partition: 0 Leader: 2 Replicas: 2 Isr: 2
```
Could you provide any help to fix this case?
Do I need to create a topic with the same name as the channel in advance, like below, to actually have 3 replicas on 3 servers?
```
bin/kafka-topics.sh --create --zookeeper zookeeper0:2181,zookeeper1:2181,zookeeper2:2181 --replication-factor 3 --partitions 1 --topic mychannel
```
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=oJmKYk4YzDt2dYcAw) @pankajcheema I think that the command you wrote is correct; if I am not wrong, you must always specify only one of the orderers of your organization, as it is not possible to "broadcast" the request to all the orderers simultaneously
@mikykey Thanks for the answer
@mikykey do you know how I can run that command if I have multiple orderers (we can have more than one orderer in our project)?
https://chat.hyperledger.org/channel/fabric-orderer?msg=C6uSXgkWtSYppSZWr
@pankajcheema This sounds like you are using the same ledger volume for both images
https://chat.hyperledger.org/channel/fabric-orderer?msg=8wmkg7ockWN26AQMu
@jyellick this error is fixed
@AshishMishra 1 You may be interested in the experimental v1.1.0 feature called 'private data' or 'sideDB'
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=tkQa3C9KkbQwpd39K) can you give some guidance about the issue @jyellick
@NeerajKumar This sounds like you are using one of the SDKs? You should ask in the appropriate SDK channel.
https://chat.hyperledger.org/channel/fabric-orderer?msg=Eavn99sWuYYj2Nqmi
@jyellick ok
@Ryan2 Have you tried creating a new topic via the CLI without specifying a replication factor? Do you see an RF of 1 or 3 then? Most likely the configuration you specified in the server.properties is somehow being overridden or ignored. When configured properly, the new topics created by Fabric will have the default RF
@pankajcheema You need only submit your channel creation request to a single orderer (just like your other transactions). It will be created for all orderers.
If you are not observing this behavior, please confirm that your orderers are appropriately set up using the Kafka consensus protocol. It sounds like you may have set up two orderers with 'solo', which will not work correctly.
@jyellick yes we were applying two orderer with solo.
i think we have to write a separate configuration for kafka @jyellick
@pankajcheema You cannot use more than one solo orderer.
You must base your network off of the Kafka consensus protocol to support multiple orderers.
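For reference, the consensus type is chosen in the `Orderer` section of configtx.yaml when the genesis block is generated; a sketch of the relevant fragment (the broker addresses below are placeholders):

```yaml
Orderer: &OrdererDefaults
    # 'solo' supports exactly one orderer process; use 'kafka' for multiple orderers
    OrdererType: kafka
    Kafka:
        Brokers:
            - kafka0:9092
            - kafka1:9092
            - kafka2:9092
```

All orderer nodes then point at the same Kafka cluster, which is what lets a channel created through one orderer appear on all of them.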
@jyellick Thanks for your valuable feedback.
Hi @jyellick thank you for the feebback
"Have you tried creating a new topic via the CLI without specifying a replication factor? Do you see an RF of 1 or 3 then?"
I have not tried creating a new topic via the CLI.
I specified "offsets.topic.replication.factor=3" in the server.properties file; when I started the Orderer, I saw an RF of 1.
Since I have 3 Kafka servers, I want to have an RF of 3.
How do I let Fabric create an RF of 3? What should I do to configure it properly? Could you help.
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=z8RuG2RmBYKazqpTj) @jyellick yes all logs are ok
@Ryan2 Please make sure that the CLI will properly create the default RF. If the CLI does not, then you have misconfigured Kafka and modifying fabric will not help you
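One likely cause worth noting: `offsets.topic.replication.factor` only governs Kafka's internal `__consumer_offsets` topic. Topics auto-created for new channels take their RF from the broker default, so a server.properties sketch for this 3-broker setup would be:

```
# replication for auto-created topics (Fabric creates one per channel)
default.replication.factor=3
# writes require this many in-sync replicas to succeed
min.insync.replicas=2
```

Creating a topic via the CLI without `--replication-factor`, as jyellick suggests, is a quick way to confirm whether the broker default is actually 3.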
@vieiramanoel Can you try to consume that partition with the sample client?
I'm able to consume, but there isn't anything in the channel topic
Interesting. There should definitely be records in the channel topic
You do see messages on the system channel topic?
I can see the orderer transacting the blocks, but Kafka doesn't say anything about it
Once per 10 minutes the orderer gets the last Kafka state, but there isn't anything there so it does nothing
Are you certain you configured your orderer to use Kafka?
Look for a message like:
```2018-03-15 15:17:13.853 UTC [orderer/commmon/multichannel] NewRegistrar -> INFO 004 Starting system channel 'testchainid' with genesis block hash 50dbaeb75a96f51123842ceb1ed321adb249b450c2f71ce0393062ae94ae4dc0 and orderer type solo
```
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=QLF3CjefPZZAEd8Bq) @jyellick yes, you can check on configtxgen.yaml, plus if I put kafka down the orderer stops
@vieiramanoel Please find this line in your orderer logs and confirm that it says Kafka and not Solo
ok
@jyellick ```ubuntu@ip-192-168-0-57:~$ docker logs orderer0.goledger.com 2>&1 | grep "Starting system channel 'testchainid' with genesis block hash "
2018-03-15 17:04:16.203 UTC [orderer/commmon/multichannel] NewRegistrar -> INFO 0a6 Starting system channel 'testchainid' with genesis block hash 2e97cee032f72fdcadc10539515dce5429ed52ffc13d232376aa01b25bfb6149 and orderer type kafka
```
> I can see ordered transacting the blocks, but Kafka doesn't say anything about it
What do you see to tell you this?
@jyellick I've already tested writing a lot of blocks, like almost 70 and kafka is still at the same point
Maybe this is the expected behavior, but idk if it is.
It's kinda strange that kafka doesn't log what it is doing
Yes, it looks like all is functioning as expected. I'm not sure what it is you are expecting? When you said 'Kafka logs' I assumed you meant messages in the partitions, but do you mean the literal debug logs?
@jyellick Yes, the debug flags are set ``` - KAFKA_TOOLS_LOG4J_LOGLEVEL=ERROR
- KAFKA_LOG4J_ROOT_LOGLEVEL=WARN```
but yet I can't see what kafka is doing with the new blocks that arrive to it
The orderers seem to communicate among themselves, but when I try to consume the partition there's nothing inside it
Is that expected?
The orderers order transactions through Kafka, and generate the blocks locally
I would expect you to find data in the partitions, but this data would be transactions
If the orderers are creating blocks, then there must be data in the partitions, there is no other way if Kafka is the consensus protocol being used
Would I be able to produce data to a new topic created by me to this kafka server? Just to make sure that everything is fine? Cuz when I create a topic and try to produce in it I get this https://chat.hyperledger.org/channel/fabric-orderer?msg=ta68ygpFyQT8Skfwb
You should absolutely be able to do that. This is what does not make sense to me. If your orderers are truly using the Kafka consensus protocol, then they are/must be creating topics and writing data to them. Otherwise you would never see blocks committed. But, if the CLI cannot create a topic, it seems very unlikely that the fabric orderer process can. And, the fact that you see nothing would indicate that the fabric orderer is being just as unsuccessful.
they can create a topic
but i cannot use this topic
How would you use this topic?
To debug
Fabric needs to control which messages go into the topic
The command you showed is attempting to create a topic
I'm not trying to publish to the fabric channel topic `mychannel`
but to the topic I've created myself
What you're saying is that fabric refuses these messages as they are not fabric messages, am I right?
In that case, the kafka server must log that it is refusing the message created by the external producer, but when I try to write to the `test` topic nothing happens on the kafka server
Did you create a channel named 'test'?
yes
Then you cannot create a topic named `test`, it already exists
Try with a different name
Yes, it exists, my trouble isn't at creation, but using the kafka-console-producer
I'm still very confused with what you are trying to accomplish
at this created topic
The command you pasted is an attempt to create a topic
Oh, I see, there is a producer command after it
https://github.com/wurstmeister/kafka-docker/issues/100
This looks like it might be useful
@jyellick well, I tried to add the flags they say there, no progress
I think I cannot produce to the created topic due to kafka not listening to my host pc
but it is just a hunch
`- KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://kafka3:9092, EXTERNAL://ec2-35-172-85-248.compute-1.amazonaws.com:9072`
I've set the external listeners
but still at the same error
and the request doesn't even get to kafka (it has no logs about the producer request, just stands there as it is)
running from inside the broker container I can produce/consume to kafka servers
running from inside orderer0.goledger.com container everything goes well, I won't try to produce from outside the container
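For what it's worth, producing from outside the Docker network usually requires two Kafka listeners: one advertised at the in-network hostname and one at the host's public address. A hypothetical wurstmeister-style docker-compose fragment (hostnames and ports taken from the messages above, everything else assumed):

```yaml
kafka3:
  image: wurstmeister/kafka
  ports:
    - "9072:9072"   # only the external listener needs publishing
  environment:
    # INSIDE is used by orderers/brokers on the Docker network,
    # OUTSIDE is what external clients (e.g. kafka-console-producer) dial.
    - KAFKA_LISTENERS=INSIDE://0.0.0.0:9092,OUTSIDE://0.0.0.0:9072
    - KAFKA_ADVERTISED_LISTENERS=INSIDE://kafka3:9092,OUTSIDE://ec2-35-172-85-248.compute-1.amazonaws.com:9072
    - KAFKA_LISTENER_SECURITY_PROTOCOL_MAP=INSIDE:PLAINTEXT,OUTSIDE:PLAINTEXT
    - KAFKA_INTER_BROKER_LISTENER_NAME=INSIDE
```

Whichever address is advertised must be resolvable and reachable from the client, or the producer hangs exactly as described.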
Closing the topic: this is the normal behavior, and kafka doesn't log the messages/blocks that arrive on it
Hi @jyellick I found orderer log output as ```2018-03-15 20:20:53.234 UTC [orderer/kafka] try -> DEBU 5a1 [channel: testchainid] Attempting to post the CONNECT message...
[sarama] 2018/03/15 20:20:53.234752 client.go:599: client/metadata fetching metadata for [testchainid] from broker network01-kafka3:9092
[sarama] 2018/03/15 20:20:53.236441 client.go:610: client/metadata found some partitions to be leaderless
[sarama] 2018/03/15 20:20:53.236492 client.go:590: client/metadata retrying after 250ms... (3 attempts remaining)
[sarama] 2018/03/15 20:20:53.486654 client.go:599: client/metadata fetching metadata for [testchainid] from broker network01-kafka3:9092
[sarama] 2018/03/15 20:20:53.488022 client.go:610: client/metadata found some partitions to be leaderless
[sarama] 2018/03/15 20:20:53.488086 client.go:590: client/metadata retrying after 250ms... (2 attempts remaining)
[sarama] 2018/03/15 20:20:53.738258 client.go:599: client/metadata fetching metadata for [testchainid] from broker network01-kafka3:9092
[sarama] 2018/03/15 20:20:53.739622 client.go:610: client/metadata found some partitions to be leaderless
[sarama] 2018/03/15 20:20:53.739652 client.go:590: client/metadata retrying after 250ms... (1 attempts remaining)
[sarama] 2018/03/15 20:20:53.989822 client.go:599: client/metadata fetching metadata for [testchainid] from broker network01-kafka3:9092
[sarama] 2018/03/15 20:20:53.991380 client.go:599: client/metadata fetching metadata for [testchainid] from broker network01-kafka3:9092
[sarama] 2018/03/15 20:20:53.991380 client.go:610: client/metadata found some partitions to be leaderless```
Is there any advice?
@Glen Your Kafka cluster is not configured correctly. Please complete the kafka quickstart guide before trying to deploy Fabric on Kafka.
I deployed kafka-based fabric on K8S; the configuration is the same one that works with the docker-compose setup, except that I added some environment variables for k8s
Which part of it could lead to this error? Thanks
Hi Experts, I am using a Kafka-based network which has three Kafka brokers and three ordering peers. I have found some material online for adding an organization to the current channel of the fabric network, but I am not able to find any doc *for adding an ordering peer to an already live network*. Please share some thoughts on how to do this using docker compose.
Also, I am trying to add an organization to the live network, to use this org in a new channel with the existing orgs. This should be pretty straightforward, but I am not able to figure out how to modify the *Consortium which possesses the local names of the existing orgs*. And even once you have successfully created the crypto material and are able to bring up the network, how do you create a channel policy through configtx and then use that channel tx file to create the channel?
@NeerajKumar You may simply bootstrap your new orderer with the same genesis block. Then do a config update which updates the `OrdererAddresses` instead of the org
Thanx @jyellick , let me try this
Has joined the channel.
@jyellick once I discussed an issue with you about TPS, about increasing the orderer's block size to contain more transaction messages per block, and you said that 512KB of absolute max bytes for the message size is quite small. I was already using 256KB absolute max bytes per message and setting the message count to 400, to achieve 400 TPS from Hyperledger Fabric with 150MB of MaxBytes for the block size. Now, as I want to further increase the TPS, I have to increase the block size, which will be huge (in MBs). These preferred max bytes per message are pretty large for the network traffic, and my application will be so costly that it will be of no economic use. What should I do to reduce the preferred max bytes per message from KBs down to bytes, so that I can decrease the block size drastically? Please help me out.
@NeerajKumar These parameters are about block sizes: how many transactions go into each block, and how many bytes go into each block. Your transactions are a fixed size. You cannot simply make the block size smaller and fit the same number of transactions inside. We provide sensible defaults for these values,
``` MaxMessageCount: 10
AbsoluteMaxBytes: 10 MB
PreferredMaxBytes: 512 KB
```
If you want higher throughput, you could try increasing these numbers slightly. See what the new results are, and then tune them further
``` MaxMessageCount: 20
AbsoluteMaxBytes: 10 MB
PreferredMaxBytes: 1024 KB
```
thanx for the response @jyellick , but i am now stuck with low throughput ;-(
Why are you convinced that your throughput is constrained by these parameters?
so if not, that means I can achieve more throughput by just using the same
MaxMessageCount: 20
AbsoluteMaxBytes: 10 MB
PreferredMaxBytes: 1024 KB
handling these blocks of 10MB each will be traffic heavy, and won't it also incur a higher cost, handling at least 10MB per second of bandwidth to route each block among orderers/peers?
@NeerajKumar , I just want to point out that you will only ever get a block larger than 1024 KB with that config if a single transaction is larger than 1024 KB.
AbsoluteMaxBytes is an upper limit, meant to correspond with any limits in the underlying consensus implementation. For example, your Kafka provider might have a limit on the size of messages you can post to a topic. Tweak MaxMessageCount, PreferredMaxBytes and BatchTimeout to control your block sizes.
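For reference, these batch parameters live in the `Orderer` section of `configtx.yaml`; a sketch using the bumped values suggested above:

```yaml
Orderer:
  OrdererType: kafka
  BatchTimeout: 2s               # cut a block after this long, even if not full
  BatchSize:
    MaxMessageCount: 20          # ...or after this many messages
    AbsoluteMaxBytes: 10 MB      # hard cap; keep below Kafka's message size limit
    PreferredMaxBytes: 1024 KB   # ...or once the batch reaches roughly this size
```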
Has joined the channel.
Has joined the channel.
I did @sanchezl , that's when I hit these issues of huge block size
Hi Experts, how do I get the actual size of a particular transaction message in the blocks cut by the orderer? I know that we can set the preferred max bytes, but that's the upper limit; I really want to look at the actual size, lower than this upper limit...
Hi guys, I am trying to implement the Broadcast gRPC interface, and I wonder what level of guarantee this method should provide. Must I write the transaction to persistent storage before I can respond to the client, or can I just write the transaction to memory and then respond?
@jyellick @kostas @sanchezl
Is PBFT implemented in 1.1 release?
Hi! i'm trying to create a channel with all the certs of the orgs generated using the CA instead of cryptogen and when I perform the peer channel create i'm getting:
Orderer
```
2018-03-19 10:42:00.750 UTC [msp/identity] newIdentity -> DEBU 153 Creating identity instance for cert -----BEGIN CERTIFICATE-----
MIICNjCCAd2gAwIBAgIRAMnf9/dmV9RvCCVw9pZQUfUwCgYIKoZIzj0EAwIwgYEx
CzAJBgNVBAYTAlVTMRMwEQYDVQQIEwpDYWxpZm9ybmlhMRYwFAYDVQQHEw1TYW4g
RnJhbmNpc2NvMRkwFwYDVQQKExBvcmcxLmV4YW1wbGUuY29tMQwwCgYDVQQLEwND
T1AxHDAaBgNVBAMTE2NhLm9yZzEuZXhhbXBsZS5jb20wHhcNMTcxMTEyMTM0MTEx
WhcNMjcxMTEwMTM0MTExWjBpMQswCQYDVQQGEwJVUzETMBEGA1UECBMKQ2FsaWZv
cm5pYTEWMBQGA1UEBxMNU2FuIEZyYW5jaXNjbzEMMAoGA1UECxMDQ09QMR8wHQYD
VQQDExZwZWVyMC5vcmcxLmV4YW1wbGUuY29tMFkwEwYHKoZIzj0CAQYIKoZIzj0D
AQcDQgAEZ8S4V71OBJpyMIVZdwYdFXAckItrpvSrCf0HQg40WW9XSoOOO76I+Umf
EkmTlIJXP7/AyRRSRU38oI8Ivtu4M6NNMEswDgYDVR0PAQH/BAQDAgeAMAwGA1Ud
EwEB/wQCMAAwKwYDVR0jBCQwIoAginORIhnPEFZUhXm6eWBkm7K7Zc8R4/z7LW4H
ossDlCswCgYIKoZIzj0EAwIDRwAwRAIgVikIUZzgfuFsGLQHWJUVJCU7pDaETkaz
PzFgsCiLxUACICgzJYlW7nvZxP7b6tbeu3t8mrhMXQs956mD4+BoKuNI
-----END CERTIFICATE-----
2018-03-19 10:42:00.751 UTC [cauthdsl] deduplicate -> ERRO 154 Principal deserialization failure (the supplied identity is not valid: x509: certificate signed by unknown authority) for identity 0a076f7267314d535012ba062d2d2d2d2d424547494e2043455254494649434154452d2d2d2d2d0a4d4949434e6a4343416432674177494241674952414d6e66392f646d563952764343567739705a5155665577436759494b6f5a497a6a304541774977675945780a437a414a42674e5642415954416c56544d524d77455159445651514945777044595778705a6d3979626d6c684d525977464159445651514845773154595734670a526e4a68626d4e7063324e764d526b77467759445651514b45784276636d63784c6d56345957317762475575593239744d517777436759445651514c45774e440a543141784844416142674e5642414d5445324e684c6d39795a7a45755a586868625842735a53356a623230774868634e4d5463784d5445794d544d304d5445780a5768634e4d6a63784d5445774d544d304d544578576a42704d517377435159445651514745774a56557a45544d4245474131554543424d4b5132467361575a760a636d3570595445574d4251474131554542784d4e5532467549455a795957356a61584e6a627a454d4d416f474131554543784d44513039514d523877485159440a5651514445785a775a5756794d433576636d63784c6d56345957317762475575593239744d466b77457759484b6f5a497a6a3043415159494b6f5a497a6a30440a41516344516741455a3853345637314f424a70794d49565a64775964465841636b49747270765372436630485167343057573958536f4f4f4f3736492b556d660a456b6d546c494a5850372f4179525253525533386f493849767475344d364e4e4d45737744675944565230504151482f42415144416765414d417747413155640a457745422f7751434d4141774b7759445652306a42435177496f4167696e4f5249686e5045465a5568586d366557426b6d374b375a633852342f7a374c5734480a6f7373446c437377436759494b6f5a497a6a304541774944527741775241496756696b49555a7a6766754673474c5148574a55564a43553770446145546b617a0a507a46677343694c785541434943677a4a596c57376e765a7850376236746265753374386d72684d5851733935366d44342b426f4b754e490a2d2d2d2d2d454e442043455254494649434154452d2d2d2d2d0a
```
Peer
```
| Error: got unexpected status: BAD_REQUEST -- error authorizing update: error validating DeltaSet: policy for [Group] /Channel/Application not satisfied: Failed to reach implicit threshold of 1 sub-policies, required 1 remaining
```
Deserializing the cert I can see that it corresponds to `peer0.org1.example.com`, not to the certs I'm creating. I've tried with both fabric-ca-orderer and fabric-orderer images, with the same result
the configtx.yaml file is the default, changing the msp dirs to the correct orgs
any idea? is it a hardcoded value for now?
Has joined the channel.
Has joined the channel.
Has joined the channel.
Has joined the channel.
@dsanchezseco You may certainly use crypto material generated by fabric-ca, or cryptogen, or openssl, or any other tool which generates x509 compliant ecdsa certs. Did you bootstrap the network with this new material? Keep in mind, the crypto material is written into the ledger at bootstrap time, and though it may be modified, if you wish to simply use the alternate crypto source without attempting reconfiguration, you must go back and re-bootstrap.
Has joined the channel.
Has joined the channel.
Hello everyone,
I have some confusion related to the orderer. :unamused:
As you already know, there are 2 mostly-used types of orderer:
solo and the kafka-based orderer.
In the Kafka architecture, there are producers and consumers.
So, would you please kindly tell me:
in the kafka-based orderer, who is the producer and who is the consumer?
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=HWjz8opZmTpKa4FTo) @jyellick Thanks, I'd already solved it; it was a problem with `CORE_PEER_LOCALMSPID` and `CORE_PEER_MSPCONFIGPATH` being unset. I thought that leaving them unset would use the default value to store the certs in, but that wasn't the case; instead it used the example certs
Has joined the channel.
@PyiTheinKyaw , The orderer is both a Kafka producer and consumer. See http://hyperledger-fabric.readthedocs.io/en/release-1.1/kafka.html#big-picture
@jyellick quick question: to create the channel artifacts (genesis.block and channel.tx) we need the admincerts of the orgs that are going to be in that channel, but we don't need the private keys of those admins, do we?
If that's right, right now there are only two ways of getting the admincerts (as far as I know):
* Manually pass the certs
* Enroll as that admin (needing the enroll secret, and getting the priv key as a byproduct, with all the security issues that means)
Wouldn't it be better to have a function like getcacerts in fabric-ca-client that only returns the admincert?
> we need the admincerts of the orgs that are going to be in that channel, but we don't need the private keys of that admins, do we?
Yes of course, if fabric required that the orgs share their private keys to bootstrap a channel, the system would not be very useful.
This is probably a better question for #fabric-ca but, prior to bootstrap each org needs to contribute an 'msp' directory, containing the ca certs, intermediate ca certs, tls ca certs, and admin certs. This is all public crypto information and contains no secrets. When running `configtxgen`, to produce a genesis block for the orderer, it will read in these directories, and encode this information into the genesis block, with one 'msp' definition per org. Then, when you create new channels, you actually do not even need this crypto material, only the org names, the orderer will fill the crypto material in for you (as it is stored in its genesis block)
The key being here that you could generate your certs with fabric-ca, or with openssl, or with cryptogen, or, from some external CA like verisign etc.
But, ultimately, we need this 'msp' directory with the appropriate public pieces of crypto so that we can encode it into the network to bootstrap
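As a sketch, the per-org 'msp' directory being described contains only public material, along these lines:

```
msp/
├── admincerts/         # the org's admin certs
├── cacerts/            # root CA certs
├── intermediatecerts/  # intermediate CA certs (if any)
└── tlscacerts/         # TLS CA certs
```

No private keys (keystore) are needed here; configtxgen only encodes the public halves into the genesis block.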
right, that was my thought. thanks @jyellick ! i'll check with #fabric-ca and file an issue to start working on it
Hey, guys! I'm trying to deploy kafka to aws instances, where each zk and kafka will be in a different instance
first, a zk problem: when I start them I get some errors that I can't fix
```zookeeper0 | 2018-03-21 19:41:51,749 [myid:1] - INFO [QuorumPeer[myid=1]/0.0.0.0:2181:QuorumCnxManager@199] - Have smaller server identifier, so dropping the connection: (2, 1)
zookeeper1 | 2018-03-21 19:41:52,934 [myid:0] - INFO [QuorumPeer[myid=0]/0.0.0.0:2181:QuorumCnxManager@199] - Have smaller server identifier, so dropping the connection: (2, 0)
zookeeper2 | 2018-03-21 19:42:54,688 [myid:2] - WARN [QuorumPeer[myid=2]/0.0.0.0:2181:QuorumCnxManager@400] - Cannot open channel to 0 at election address /192.168.0.212:3888
```
and then it tries again
the ZOO_SERVERS vars are set
```- ZOO_SERVERS=server.0=192.168.0.212:2888:3888 server.1=0.0.0.0:2888:3888 server.2=192.168.0.5:2888:3888```
where `server.N` is `0.0.0.0` for `zookeeperN`
I googled that but no progress :/
If someone knows how to fix this, I'll be really grateful
@vieiramanoel You are likely to have better answers from a zookeeper forum than a fabric one like this, my search revealed https://stackoverflow.com/questions/22155494/why-cant-my-zookeeper-server-rejoin-the-quorum as the first result. The resolution seems to be doing a rolling restart of the cluster
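For anyone hitting this later: the usual convention with the official zookeeper image is that every node carries the same ensemble list, substituting 0.0.0.0 for its own entry, with `ZOO_MY_ID` matching its `server.N` index; ports 2888 (quorum) and 3888 (leader election) must be reachable between the machines. A hypothetical fragment for the node on 192.168.0.212 (IPs from the messages above; the zk1 address is left as a placeholder):

```yaml
environment:
  - ZOO_MY_ID=0
  - ZOO_SERVERS=server.0=0.0.0.0:2888:3888 server.1=<zk1-ip>:2888:3888 server.2=192.168.0.5:2888:3888
ports:
  - "2181:2181"
  - "2888:2888"
  - "3888:3888"
```

Note also that upstream ZooKeeper expects server ids in the range 1-255, so 0-based ids may themselves cause trouble with some images.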
Has joined the channel.
Hi everyone. Is there a straightforward example to follow for setting up multiple ordering services, like 4 kafka and 3 zookeeper? I tried with the official document, and keep getting the "Initial attempt failed = kafka server: Messages are rejected since there are fewer in-sync replicas than required." error. Also, how can I decide how many ordering services to set up; does the amount depend on the amount of kafka and zookeeper? And if I set up multiple ordering services, how do I decide which ordering service to call, e.g. for chaincode instantiation, or does it not matter which one?
@azur3s0ng You can find an example of a 3 ZK, 4 Kafka-broker environment here: https://github.com/hyperledger/fabric/tree/release-1.1/examples/e2e_cli
The particular number of OSNs (Ordering Service Nodes) will vary by deployment. This example only uses one, but you may deploy as many as is appropriate for the load you require. I would recommend 3 to start for a production env.
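On the "fewer in-sync replicas than required" error: it appears when fewer brokers are alive and in sync than `min.insync.replicas` demands. The Fabric Kafka documentation recommends settings along these lines for a 4-broker cluster (shown here in the env-var form used by the e2e_cli compose files; treat the exact variable names as image-dependent):

```yaml
environment:
  - KAFKA_MIN_INSYNC_REPLICAS=2                 # M: a write needs M in-sync replicas
  - KAFKA_DEFAULT_REPLICATION_FACTOR=3          # N: choose M < N <= broker count
  - KAFKA_UNCLEAN_LEADER_ELECTION_ENABLE=false  # never lose acknowledged writes
  - KAFKA_MESSAGE_MAX_BYTES=103809024           # keep >= AbsoluteMaxBytes plus overhead
  - KAFKA_REPLICA_FETCH_MAX_BYTES=103809024
```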
@jyellick Thanks! Also, is solo a consensus mechanism or there is NO consensus with solo at all?
Solo is a consensus of one (ie, there is no consensus, since there is only one node)
@jyellick I see. So the number of OSN doesn't matter, it is the kafka and zk nodes doing the consensus, is that correct?
It is Kafka and ZK which do the consensus. It is the OSNs which do the transaction validation and block delivery.
@jyellick That's very helpful. Thanks!
@jyellick Kafka ordering is working on my network architecture, really appreciate your help!
can anyone pls help me with the below error i am getting while doing invokation
E0322 05:40:39.499744200 21999 ssl_transport_security.cc:989] Handshake failed with fatal error SSL_ERROR_SSL: error:14090086:SSL routines:ssl3_get_server_certificate:certificate verify failed
below given is my connection profile i am using
https://pastebin.com/iaF2xAmU
@jyellick Hi, I set the kafka ordering up last night and let it run for multiple transactions as a pressure test. It's fine when the transaction amount is small, but docker crashed around 4,000 entries after I query/post to the ledger, and I got a timeout error. Is there anything I can do to tweak this? Also, I've noticed that the example you showed me has "restart: always" set in docker-base.yaml for both the kafka and zookeeper containers; was that the issue? Thanks!
@azur3s0ng When you say docker crashed? Do you mean the dockerd process itself, or the containers? If the containers, which?
@javrevasandeep This is a TLS error, make sure that your TLS cert is signed by a TLS CA the server is aware of
@jyellick I am not sure; I would assume it's the process. When I was trying to delete the containers and restart, it wouldn't let me do so; I couldn't even end the docker process. I had to reboot my OS, and after I started the docker process again and did a "docker ps", only the kafka and zookeeper containers were there; peers, orderer and CAs were gone.
What platform are you using?
@jyellick I am testing it again; let me reproduce this problem when the entry count hits a certain amount, and I will post all the stuff on pastebin.
Mac OS
How did you install docker? I run on Linux, so I am not sure of the details, but I have heard that docker on Mac can have problems depending on how it is configured. Did you follow the recommendations in the Fabric documentation?
@jyellick To be honest, I don't remember, I installed it a while ago, but I believed I installed it using the official dmg, not the home-brew via command line.
You might have better luck asking for help from other mac users in #fabric-questions , but if I recall, there is "Docker for Mac", and "Docker Toolbox" and one of these options is superior
@jyellick Thanks! I will take a look. I have a Gentoo Linux workstation in office, I can setup the test there to see how it runs on Linux.
Has joined the channel.
@jyellick It crashed again, this time at around the 6,000th entry. I have the log for the orderer pasted here: https://pastebin.com/b8HAeFzx Thanks!
@jyellick Docker containers are still up. It looks like there are some problems with the kafka/zookeeper cluster.
@azur3s0ng When you say it 'crashes', what do you mean? I do not see a panic in the log. Is the container still running?
@jyellick I mean the crash to my pressure test, the containers are still up.
@jyellick The heatsink fan got crazy loud like helicopter. :sob:
Yes, it looks like the orderer is having trouble connecting to your Kafka cluster, I would look at those logs to try to diagnose your problem
logs for kafka and zookeeper?
Yes
For what it's worth, we have seen tests successfully pushing 3500 transactions per second without issue.
https://pastebin.com/H8EHW0pA log for kafka0
https://pastebin.com/72t57BDr log for zookeeper0
The errors look fairly generic to me. You might find more success with someone who knows Kafka/ZK better, or, you may simply be trying to do too much on a single machine. I am not sure exactly what sort of hardware you are running, but running a full Kafka cluster, Fabric Network, and Client on one machine will consume many resources.
I am running on a MacBook Pro with quad core CPU, 16G ram and 512G SSD. I have 4 Orgs, 2 Peers/Org, 4CAs, 1 Orderer, 4 Kafka and 3 Zookeeper defined in my network template.
Also I have the NodeJS based Fabric client running.
You said you had a Linux machine you might try this on? I vaguely recall some issues with docker networking on Mac while under load
I've done 10,000 entries tests successfully without Kafka ordering setup.
Yes, I have a Gentoo workstation with dual Xeon E5 processors and 32G ram. I already have it up and will know what's going on with that trial later tonight.
I was wondering if I messed up on the configurations...
Kafka and ZK are very well tested and widely deployed
If they are having timeout issues, it seems most likely that it is something with your environment
@jyellick https://blog.datasyndrome.com/docker-on-os-x-hyperkit-not-ready-21c3ca74562a I think this is the Docker Toolbox vs Docker for Mac issue you referred to earlier. And sadly I am running the native one with HyperKit, which might be the reason.
@jyellick The pressure tests all passed on my Linux workstation. It's the Mac Docker problem. Thanks for your help!
Glad things are working for you!
Hello everyone!
I'm trying to bring up a Kafka orderer network across different machines:
```
Machine 0: 1 kafka, 1 zookeeper
Machine 1: 1 kafka, 1 zookeeper
Machine 2: 1 kafka, 1 zookeeper, 1 peer
Machine 3: 1 kafka, 1 orderer
```
They all connect fine to each other (apparently) but for some reason ZK won't allow kafka to use it properly. The ZK leader keeps showing KeeperExceptions and the orderer can't find kafka partitions metadata.
Here are the logs I get:
[ZooKeeper0 log](https://hastebin.com/fahotevowa.rb)
Seems to be working fine.
[ZooKeeper1 log](https://hastebin.com/inavimitih.rb)
There is a warning there but it seems to be working too.
[ZooKeeper2 log](https://hastebin.com/bucuvoxufu.sql)
ZK2 was elected the leader. It shows a lot of KeeperExceptions that seem to be the problem here. Not sure how to treat them.
[Kafka0 log](https://hastebin.com/seyalusume.vbs)
It seems like it is trying to fetch something from the ZKs and failing.
[Other 3 Kafkas](https://hastebin.com/ocucarotah.coffeescript)
All other 3 Kafkas have very similar logs and keep on removing 0 expired offsets later on.
[Orderer log](https://hastebin.com/ahayehokaw.vbs)
Orderer cannot get Kafka partitions metadata.
Can anyone help me with this? I'm not sure what is wrong with the ZK ensemble and why it's not supporting the kafka cluster properly.
When I restart Kafka0 it seems to work fine, but the orderer still finds leaderless partitions
It seems this ZK behavior (logging those KeeperExceptions at startup) is normal when using it with Kafka, but I still can't fix the leaderless partitions issue
@bandreghetti Without more info, my initial guess would be to review the *listeners* and *advertised.listeners* settings on your Kafka brokers.
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=ekKgdifeKoBT9yoC5) @sanchezl it is actually not configured at all. what does this config do exactly?
*listeners* defines the host:port that Kafka listens on. *advertised.listeners* defines the public host:port that clients can use to actually reach those endpoints.
Has joined the channel.
it seems like both would be the same for my case...
so i should put the orderer host:port on that list in all the kafkas docker compose files?
Here is what one of your kafka brokers added to zookeeper:
```
[2018-03-22 21:44:46,845] INFO Registered broker 2 at path /brokers/ids/2 with addresses: EndPoint(03b11200f167,9092,ListenerName(PLAINTEXT),PLAINTEXT) (kafka.utils.ZkUtils)
```
Can orderer reach the broker at `03b11200f167:9092` ?
I don't even know what this hostname is. Maybe docker sets KAFKA LISTENERS automatically thinking they are all in the same machine?
I mean, using container IDs instead of public IPs?
So, these are docker containers on separate machines?
yes, they are on separate machines
should all kafka brokers use different ports?
all 4 brokers are using port 9092
i got this error message when i set the kafka listeners variable to the 4 ip:port pairs in all 4 brokers:
```[2018-03-23 14:38:37,634] FATAL (kafka.Kafka$)
java.lang.IllegalArgumentException: requirement failed: Each listener must have a different port, listeners: PLAINTEXT://52.4.221.127:9092,PLAINTEXT://52.201.148.51:9092,PLAINTEXT://35.172.85.248:9092,PLAINTEXT://52.86.181.69:9092```
i think i'm still unsure what a listener is
Here is an example that might help you understand, but it is not your exact situation:
https://github.com/hyperledger/fabric/blob/release-1.1/orderer/common/server/docker-compose.yml
Your solution will depend on how you are configuring the networking of the containers. But basically, make sure that each broker advertises the actual public host:port that can be used to reach it. Consider starting your containers using `host` networking to help simplify things for yourself. See https://docs.docker.com/network/.
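Concretely, for the multi-machine layout discussed above, each broker's compose entry might look roughly like this (a sketch, not a tested configuration; the IP is one of the public addresses quoted earlier, and `host` networking follows the suggestion above — adjust to your environment):

```yaml
# Broker on Machine 0 (sketch)
kafka0:
  image: hyperledger/fabric-kafka
  environment:
    - KAFKA_BROKER_ID=0
    # Bind on all local interfaces...
    - KAFKA_LISTENERS=PLAINTEXT://0.0.0.0:9092
    # ...but advertise the externally reachable address, so the orderer
    # (on another machine) receives a dialable endpoint from ZooKeeper
    # instead of a container-ID hostname.
    - KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://52.4.221.127:9092
  network_mode: host
```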
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=8apQygahxNrgNCjrL) @sanchezl thank you! this helped me a lot
Has joined the channel.
Hi All, QQ Do we have any keepalive setting between peer and orderer ? We are using fabric 1.0.3
Yes, there are gRPC keepalive settings, though I do not think they are configurable
Actually, it looks like it may be
https://github.com/hyperledger/fabric/blob/release-1.1/sampleconfig/core.yaml#L102-L126
@jyellick Thanks for the updates. Looks like keep alive is not available in 1.0.3
Can you please help here how to configure keep alive
between peer and orderer
Keep-alive is enabled by default, but in your version it may not be configurable
@jyellick okey cool. Is that configurable in 1.0.3 ? It would be great help if you can share that settings
@jyellick Actually our problem is that if we keep the environment idle for 15 mins, we see a timeout between peer and orderer
@patelan No, it is not configurable in v1.0.x, it is only configurable in v1.1.0
@jyellick seeing below error from node js SDK (1.0.0) while instantiating chain code.
First time it is working fine. if we keep everything idle for 15 mins and then try to add new smart contract we are seeing this error.
```
0|app | error: [Channel.js]: getChannelConfig - Failed Proposal. Error: Error: REQUEST_TIMEOUT
0|app |     at Timeout._onTimeout (/src/node_modules/fabric-client/lib/Orderer.js:186:20)
0|app |     at ontimeout (timers.js:386:11)
0|app |     at tryOnTimeout (timers.js:250:5)
0|app |     at Timer.listOnTimeout (timers.js:214:5)
0|app | error: [Channel.js]: Error: REQUEST_TIMEOUT
0|app |     at Timeout._onTimeout (/src/node_modules/fabric-client/lib/Orderer.js:186:20)
0|app |     at ontimeout (timers.js:386:11)
0|app |     at tryOnTimeout (timers.js:250:5)
0|app |     at Timer.listOnTimeout (timers.js:214:5)
```
https://github.com/hyperledger/fabric/blob/release-1.0/core/comm/config.go#L26-L31
These are the default keepalive options in v1.0.x
@patelan I'm not sure why you think that this is related to the connection between the peer and the orderer?
This looks like an error with the SDK connecting to the orderer
@jyellick Our initial investigation seems issue between peer and orderer. But we will double check. Thanks a lot for your help.
Has joined the channel.
@jyellick Do you have any idea by any chance what will be the probable fix ?
@jyellick to make connection alive between fabric-sdk to orderer
I expect that the answer will be in some configuration of the SDK, someone in #fabric-sdk-node might have a better answer
@jyellick okey sure Thanks
Has joined the channel.
Hi experts; Can you please advise on this error: Orderer unable to connect to kafka broker in V1.0
```[sarama] 2018/03/24 19:55:32.672843 client.go:601: client/metadata fetching metadata for all topics from broker kafka0:9092
[sarama] 2018/03/24 19:55:42.673491 broker.go:96: Failed to connect to broker kafka0:9092: dial tcp: i/o timeout
[sarama] 2018/03/24 19:55:42.673778 client.go:620: client/metadata got error from broker while fetching metadata: dial tcp: i/o timeout
[sarama] 2018/03/24 19:55:42.674249 config.go:329: ClientID is the default of 'sarama', you should consider setting it to something application-specific.
```
```
kafka0:
  container_name: kafka0
  image: hyperledger/fabric-kafka
  restart: always
  environment:
    - KAFKA_BROKER_ID=0
    - KAFKA_HOST_NAME=kafka0
    - KAFKA_ZOOKEEPER_CONNECT=zookeeper0:2181,zookeeper1:2181,zookeeper2:2181
    # - KAFKA_LISTENERS=PLAINTEXT://kafka0:9092,REPLICATION://kafka0:9093
    # - KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://0.0.0.0:9092,REPLICATION://kafka0:9093
    - KAFKA_LISTENERS=PLAINTEXT://kafka0:9092
    - KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://kafka0:9092
    - KAFKA_LISTENER_SECURITY_PROTOCOL_MAP=EXTERNAL:PLAINTEXT
    # - KAFKA_LISTENER_SECURITY_PROTOCOL_MAP=EXTERNAL:PLAINTEXT,REPLICATION:PLAINTEXT
    # - KAFKA_INTER_BROKER_LISTENER_NAME=REPLICATION
    - KAFKA_MESSAGE_MAX_BYTES=103809024 # 99 * 1024 * 1024 B
    - KAFKA_REPLICA_FETCH_MAX_BYTES=103809024 # 99 * 1024 * 1024 B
    - KAFKA_UNCLEAN_LEADER_ELECTION_ENABLE=false
    - KAFKA_DEFAULT_REPLICATION_FACTOR=3
    - KAFKA_MIN_INSYNC_REPLICAS=2
  ports:
    - "9092:9092"
  # extends:
  #   file: base/docker-compose-base.yaml
  #   service: kafka
  depends_on:
    - zookeeper0
    - zookeeper1
    - zookeeper2
  networks:
    - provenance

orderer0.art.ifar.org:
  # Container name
  container_name: orderer0.art.ifar.org
  # Public Hyperledger Fabric ARCH x86_64 version V1.0.0
  image: hyperledger/fabric-orderer:x86_64-1.0.0
  # Environment variables for service container
  #   - VAR:VAL
  environment:
    - ORDERER_GENERAL_LOGLEVEL=debug
    - ORDERER_GENERAL_LISTENADDRESS=0.0.0.0
    - ORDERER_GENERAL_GENESISMETHOD=file
    - ORDERER_GENERAL_GENESISFILE=/var/hyperledger/orderer/genesis.block
    - ORDERER_GENERAL_LOCALMSPID=OrdererMSP
    - ORDERER_GENERAL_LOCALMSPDIR=/var/hyperledger/crypto/orderer/msp
    # enabling TLS
    - ORDERER_GENERAL_TLS_ENABLED=true
    - ORDERER_GENERAL_TLS_PRIVATEKEY=/var/hyperledger/crypto/orderer/tls/server.key
    - ORDERER_GENERAL_TLS_CERTIFICATE=/var/hyperledger/crypto/orderer/tls/server.crt
    - ORDERER_GENERAL_TLS_ROOTCAS=[/var/hyperledger/crypto/orderer/tls/ca.crt, /var/hyperledger/crypto/ca.egyptianmuseum/tls/ca.crt, /var/hyperledger/crypto/ca.louvre/tls/ca.crt]
    ## - ORDERER_GENERAL_TLS_ROOTCAS=[/var/hyperledger/crypto/ca.bauhaus/tls/ca.crt]
    # Kafka
    - ORDERER_KAFKA_RETRY_SHORTINTERVAL=1s
    - ORDERER_KAFKA_RETRY_SHORTTOTAL=30s
    - ORDERER_KAFKA_VERBOSE=true
  working_dir: /opt/gopath/src/github.com/hyperledger/fabric/orderers
  command: orderer
  volumes:
    # mapping for genesis block
    - ./artifacts/orderer/:/var/hyperledger/orderer
    # mapping for orderer crypto material
    - ./config/crypto-config/ordererOrganizations/art.ifar.org/orderers/orderer0.art.ifar.org/:/var/hyperledger/crypto/orderer
    - ./config/crypto-config/peerOrganizations/egyptianmuseum.org/ca/root/:/var/hyperledger/crypto/ca.egyptianmuseum
    - ./config/crypto-config/peerOrganizations/louvre.fr/ca/root/:/var/hyperledger/crypto/ca.louvre
    # - ./config/crypto-config/peerOrganizations/bauhaus.de/ca/root/:/var/hyperledger/crypto/ca.bauhaus
  # HOST_PORT:CONTAINER_PORT
  ports:
    - '7050'
  depends_on:
    - ca.egyptianmuseum
    - ca.louvre
    - zookeeper0
    - zookeeper1
    - zookeeper2
    - kafka0
    - kafka1
    - kafka2
    - kafka3
  # Networks accessible to this container (orderer0.art.ifar.org)
  networks:
    - provenance
```
`docker network inspect art_provenance`
``` "b054ccc692094d4b2bd13699f4a1ff8d95afce431f6e26c97b6d9b58882611f1": {
"Name": "kafka0",
"EndpointID": "1aeb8df4c80234cd73deb37a020829727e3bdfede0d404e85a9a4c05c8290184",
"MacAddress": "02:42:ac:13:00:0b",
"IPv4Address": "172.19.0.11/16",
"IPv6Address": ""
},
"993307ec4f696611f0a72952c70584a7e73a14896b830bf659b10ff60c98de51": {
"Name": "orderer0.art.ifar.org",
"EndpointID": "3957e5b86fab61af49b57786a03d031b3cdcdc029549b5d8918d5dc4e26ef6b2",
"MacAddress": "02:42:ac:13:00:0d",
"IPv4Address": "172.19.0.13/16",
"IPv6Address": ""
}```
`docker logs kafka0` `https://pastebin.com/P0LNmFA4`
`docker logs orderer0 | grep kafka 0` `https://pastebin.com/XsWcEhsp`
@agiledeveloper Please do not post long snippets like that, it makes this channel very difficult to read. Please use services like pastebin (though note that not putting the links in `code sections` will keep them linkable)
Nothing sticks out as obviously wrong from your configuration, however, from the logs, it seems `kafka0` is not a resolvable name. I would exec into the orderer container and confirm this. If this is indeed the case, you need to fix your docker networking such that it may be resolved.
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=tjEgqvNThq8efWbeW) yes, I will follow the guidelines
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=D5KFGed42DN4uJ3B8) @jyellick I had the same guess, can you please explain how to test from orderer? or how to fix docker networking ?
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=nMqhAWebbASrG9vyE) I ran `apt-get update && apt-get install -y iputils-ping` in the orderer's shell, then pinged the kafka0 container successfully: ```64 bytes from kafka0.art_provenance (172.19.0.11): icmp_seq=39 ttl=64 time=0.135 ms
64 bytes from kafka0.art_provenance (172.19.0.11): icmp_seq=40 ttl=64 time=0.064 ms
64 bytes from kafka0.art_provenance (172.19.0.11): icmp_seq=41 ttl=64 time=0.063 ms
64 bytes from kafka0.art_provenance (172.19.0.11): icmp_seq=42 ttl=64 time=0.086 ms
64 bytes from kafka0.art_provenance (172.19.0.11): icmp_seq=43 ttl=64 time=0.064 ms
64 bytes from kafka0.art_provenance (172.19.0.11): icmp_seq=44 ttl=64 time=0.054 ms
64 bytes from kafka0.art_provenance (172.19.0.11): icmp_seq=45 ttl=64 time=0.060 ms
64 bytes from kafka0.art_provenance (172.19.0.11): icmp_seq=46 ttl=64 time=0.085 ms```
@agiledeveloper Resolving docker networking is really outside the scope of Fabric. However, I would suggest as a next step, you try the Kafka sample consumer from the orderer container. If you are unfamiliar with the sample clients, you can learn about them in this tutorial https://kafka.apache.org/quickstart
Hi, I have an issue: my fabric network is using multiple orderers
```
"OrdererAddresses": {
    "mod_policy": "/Channel/Orderer/Admins",
    "value": {
        "addresses": [
            "orderer:7050",
            "orderer1:7050"
        ]
    },
    "version": "1"
}
```
When I kill 1 orderer and invoke chaincode against the other orderer, it throws the exception:
`Error: Error getting broadcast client: Error connecting to orderer:7050 due to context deadline exceeded`
I thought that, with 2 orderers, the fabric network should keep working even if 1 orderer is down?
I don't know if this is normal or abnormal behavior. Can you tell me?
(I killed the original Orderer and let additional Added Orderer running then invoke chaincode on the additional Added Orderer)
Hi, I configured an orderer for 1.1 using Kafka brokers. I came across a problem:
Clipboard - March 26, 2018 3:27 PM
Has joined the channel.
When I try the Kafka orderer rather than Solo, which orderer should I pass as an argument when running peer channel create? e.g. `docker exec -e "CORE_PEER_LOCALMSPID=Org1MSP" -e "CORE_PEER_MSPCONFIGPATH=/etc/hyperledger/msp/users/Admin@org1.example.com/msp" peer0.org1.example.com peer channel create -o orderer.example.com:7050 -c mychannel -f /etc/hyperledger/configtx/channel.tx`
Say, I have orderer0.example.com orderer1.example.com and orderer2.example.com
what is the syntax for instantiate chaincode on multi orderers
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=nQfmQi7AxKLh5QQZp) @navdevl any of them as long as it's not crashed
kafka ordering should sync them all up
hi, i have two orderers `technical.ramaconsultancy.com:5050` and `hr.debutinfotech.com:7050`. Containers are running fine, but after `docker exec -it cli bash`, when i try to create a channel with the command `peer channel create -o technical.ramaconsultancy.com:5050 -c $CHANNEL_NAME -f ./config/channel.tx --tls --cafile /opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/ordererOrganizations/ramaconsultancy.com/orderers/technical.ramaconsultancy.com/msp/tlscacerts/tlsca.ramaconsultancy.com-cert.pem` i get the error `Error: failed to create deliver client: orderer client failed to connect to technical.ramaconsultancy.com:5050: failed to create new connection: c`
if i try to create the channel with my first orderer `hr.debutinfotech.com:7050` with the command `peer channel create -o hr.debutinfotech.com:7050 -c $CHANNEL_NAME -f ./config/channel.tx --tls --cafile /opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/ordererOrganizations/debutinfotech.com/orderers/hr.debutinfotech.com/msp/tlscacerts/tlsca.debutinfotech.com-cert.pem` it works fine
Has left the channel.
I am unable to find the difference; both orderers are running fine with docker
Has joined the channel.
Has joined the channel.
hey guys!
with multiple orderers with kafka consensus, how do I call peer commands so that, given a list of possible orderers, it decides itself which one it will contact?
@bandreghetti The peer CLI does not support automatic fail over
Passing in a single valid orderer address is sufficient, even if you have multiple orderers. If the command fails because the orderer is unavailable, simply choose another and try again
Hi @jyellick, can you tell me how multiple orderers are supposed to help? Because transactions always point to only 1 orderer ([chaincodeCmd] InitCmdFactory -> INFO 001 Get chain(mychannel) orderer endpoint: orderer:7050) even though 2 orderers are available. If the pointed-to orderer goes down, the fabric network will not work.
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=xmho53LcAMNtDSPGK) @jyellick So if we're trying to submit transactions with the NodeJS SDK where we map all the possible orderers in the connection profile, will it automatically retry with the live orderers in case first is down?
Screenshot from 2018-03-27 10-57-23.png
It's because your Kafka hasn't started yet. The long retry wait time is 5 minutes, so it will retry only after 5 mins. You can override it with the environment settings you pass when starting the orderer. For now, wait 5 minutes, check your Kafka logs, and try joining the channel again. It will work. @pankajcheema
all kafka brokers and zookeepers are up @navdevl. i think the problem is the orderer failing to connect to the brokers' sockets (`Error no such host`); the kafka and zookeeper logs show that the servers have started
Has joined the channel.
Has joined the channel.
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=PwAYXA5ZAgBs7QeHX) @jyellick oh well... i hoped peer CLI would be smart enough to try again with another orderer since it knows them all "/
thanks!
Has joined the channel.
Has joined the channel.
@varun-raj I do not believe the node-sdk does automatic failover for orderers, though you might want to check #fabric-sdk-node
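Since neither the peer CLI nor (as noted above) the node SDK fails over automatically today, the retry has to live in the client. Here is a minimal sketch of that loop, written in Python for brevity; `submit` and the orderer address list are hypothetical stand-ins for whatever broadcast function your SDK exposes:

```python
# Hedged sketch: Fabric clients must fail over between orderers by hand.
# `submit` is a hypothetical callable (your SDK's broadcast function);
# the orderer addresses are illustrative.

def broadcast_with_failover(submit, orderers, envelope):
    """Try each orderer in turn until one accepts the envelope."""
    last_err = None
    for addr in orderers:
        try:
            return submit(addr, envelope)
        except ConnectionError as err:
            last_err = err  # this orderer is down; try the next one
    raise RuntimeError("all orderers unavailable") from last_err
```

The same idea applies whether the client is the peer CLI wrapped in a shell loop or an SDK application.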
I implemented multiple instances (3 instances) under the same hostname "orderer", which helps for HA purposes, but I also want to load balance the endorsed transactions from the client. Do you have any idea how to load balance the TX? I tried ELB but it's not working.
@Ryan2 If you have TLS enabled, then injecting a load balancer is likely to cause problems. You could either do a naive TCP layer load balancer, or, you would have to handle the TLS re-packaging yourself.
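For the "naive TCP layer load balancer" option, something like this HAProxy fragment keeps the TLS session end-to-end between client and orderer by never terminating it at the proxy (hostnames and ports are illustrative):

```
frontend orderer_in
    bind *:7050
    mode tcp                      # layer-4 pass-through: TLS is not terminated here
    default_backend orderer_nodes

backend orderer_nodes
    mode tcp
    balance roundrobin
    server orderer0 orderer0.example.com:7050 check
    server orderer1 orderer1.example.com:7050 check
```

Note that clients still need to reach the proxy under a name the orderer certificates are valid for, or TLS hostname verification will fail.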
Has joined the channel.
thank you for your feedback!
so you mean that if TLS is not enabled, then aws ELB will work fine?
@Ryan2 I have never tested it, but I would expect that with TLS disabled, standard HTTP2 proxies should be compatible with Fabric
thank you
hello everyone, I just wanted to know how to add an organization to the consortium policy; so far I have only added a new organization to the cluster to create a brand new channel that includes the new organization... please help
@NeerajKumar The process is the same, except you modify the org's definition in the `Consortiums` section instead of the `Application` section. You will need to modify the `jq` commands slightly to accomplish this.
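The shape of that edit, sketched in Python instead of `jq` (the config is the JSON you get from `configtxlator proto_decode`; "SampleConsortium" and "Org3MSP" are illustrative names, not from the thread):

```python
# Hedged sketch: splice a new org definition into the Consortiums section
# of a decoded channel config, rather than into the Application section.
# Consortium and MSP names are illustrative.

def add_org_to_consortium(config, consortium, msp_id, org_definition):
    groups = (config["channel_group"]["groups"]["Consortiums"]
                    ["groups"][consortium]["groups"])
    groups[msp_id] = org_definition  # the same splice the jq one-liner performs
    return config
```

The only difference from the add-org-to-channel flow is the path: `Consortiums.groups.<consortium>.groups` instead of `Application.groups`.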
Has joined the channel.
Hi! just updated kafka+zookeeper images in test environment as part of upgrading to 1.1-release, and got orderer error
2018-03-30 13:20:49.075 UTC [orderer/consensus/kafka] try -> DEBU 1b0 [channel: testchainid] Connecting to the Kafka cluster
2018-03-30 13:20:49.080 UTC [orderer/consensus/kafka] try -> DEBU 1b1 [channel: testchainid] Need to retry because process failed = kafka server: The requested offset is outside the range of offsets maintained by the server for the given topic/partition.
Anyone facing same issue?
As I see it, the Kafka containers got recreated and no data was saved, so the offset is out of range. But how do I fix it?
@kerokhin If you have lost the data inside your Kafka containers, this will be very difficult to recover from
I suggest you restore the backup of the Kafka cluster you took before upgrade.
@jyellick fixed by a dirty hack: flood Kafka with fake messages until the current id for the channel topic reaches the one needed by the orderer. I did not map any persistent volumes to the kafka/zookeeper containers, and that was a huge mistake.
@kerokhin Glad to hear you got your problem solved!
@jyellick I don't think it is a solution: if I start a new orderer, it is not possible to properly unmarshal all the Kafka messages. So I think the ledger database will not be synced on the new orderer.
@kerokhin You may copy the ledger directory from your existing orderer to your new orderer before starting
Although it is not a perfect solution, unless you iterate through the blockchain and re-inject the correct messages into Kafka (which is probably not possible, as there are control messages, like "time-to-cut", which would need to be produced), I think this is your best option
Hi People
I am checking the chains of the orderer, and I have seen that the application channel data is being saved there.
I had understood that the transaction data should not even be introspected, but instead the whole chain is saved there.
Is that correct? Is there some config option that should be changed?
Clipboard - April 2, 2018 8:09 PM
@albert.lacambra The orderer assembles transactions into blocks, and then stores the blocks on the filesystem so that peers may retrieve them (this is the `blockfile_000000` you are seeing). The orderer does not execute the transactions and does not build or maintain a state database, the peer does this. Does this answer your question?
I had understood that the orderer was keeping a different chain,
with valid and invalid blocks,
whose genesis block was the one given on orderer startup.
But it looks to me like it is just the same chain that any peer will have for a channel.
Is that correct?
I am also seeing that the genesis block begins with something about a "testchainid". What is that? And where is the chain beginning with this block?
I am confused...:hushed:
@albert.lacambra The orderer maintains one chain per channel (and the blocks will be the same, though the peers store additional block metadata)
The `testchainid` you are seeing is actually the 'orderer system channel'. This is just a default name, which many people choose not to, or forget to, change.
When you bootstrap your orderer, you use the `configtxgen` tool. You may specify any channel ID for the orderer system channel you like, by passing `-channelID`, but if you do not, it defaults to `testchainid`
ok. I see.
Then we should assume that the organization owning the orderer will be able to see every transaction.
is that correct?
What exactly is the system channel tracking? Only policies and information needed to create new channels? Where are failed blocks stored? Are they also in the channels on the peer?
@albert.lacambra
> Then we should count that the organization owning the orderer will be able to see every transaction
Yes, the ordering organization is able to see every transaction. If this is a concern, it may be mitigated in a number of ways.
1. The application may encrypt the transaction contents, so that the transaction data is present, but not meaningful to the orderer.
2. The application may use 'sidedb' which hashes the transaction contents and sends the hash only through ordering, then distributes the transaction contents peer to peer. (note, this is an experimental feature in v1.1, but is being finalized for v1.2)
(1) is likely a simpler configuration, but in some spaces, encrypting data is not enough. (2) is more difficult to configure, but, gives stronger privacy guarantees.
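The idea behind (2) can be sketched in a few lines. This is only an illustration of hash-through-ordering, not the actual Fabric private-data ('sidedb') wire format:

```python
# Hedged sketch of the 'sidedb' idea: the orderer only ever sees a digest
# of the private payload; peers exchange the payload directly and verify
# it against the ordered hash. Not the real Fabric wire format.
import hashlib

def commitment(private_payload: bytes) -> str:
    # Only this digest is submitted through the ordering service.
    return hashlib.sha256(private_payload).hexdigest()

def verify(private_payload: bytes, ordered_hash: str) -> bool:
    # A peer receiving the payload off-channel checks it against the
    # hash committed on the channel.
    return commitment(private_payload) == ordered_hash
```

Because the orderer only ever handles the digest, even a fully compromised ordering org learns nothing about the payload beyond its existence.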
> What exactly is the system channel tracking? Only policies and information needed to create new channels? Where are failed blocks stored? Are they also in the channels on the peer?
The orderer system channel is used to orchestrate channel creation. It records the channel creation requests, and serves as the initial source of configuration for new channels.
Hi, I got this issue while adding a new org into the network:
"UTC [cauthdsl] deduplicate -> WARN 04c De-duplicating identity 0a074f7267304d535012c2062d2d2d2d2d424547494e204345525449464943415445......649434154452d2d2d2d2d0a at index 1 in signature set
2018-04-03 02:49:00.576 UTC [kvledger] Commit -> INFO 04d Channel [mychannel]: Created block [4] with 1 transaction(s)
"
Do you guys know how to solve the issue? Thank you in advance
@Ryan2 Most likely, you ran `peer channel signconfigtx` with a user, and also ran `peer channel update` with that same user.
The `peer channel update` effectively executes `peer channel signconfigtx` with the current user context before submitting the update.
So, if you have already done `peer channel signconfigtx` then you are adding a second signature from the same user.
(So, to fix this warning, simply remove the redundant `peer channel signconfigtx` from your update)
Thank you @jyellick for pointing that out.
I did run "The `peer channel update` effectively executes `peer channel signconfigtx` with the current user context before submitting the update." as you said.
As for
"So, if you have already done `peer channel signconfigtx` then you are adding a second signature from the same user."
Could you tell me how to do that?
(My network currently has Org0 (peer0, peer1); I want to add Org1)
If your channel only has one org, then you most likely only need a single signature. You may skip the `signconfigtx` step. Instead, simply do a `peer channel update`
Thank you, I see,
What if I already ran it?
How do I "remove the redundant `peer channel signconfigtx` from your update"?
There is no function to remove signatures. The easiest way is likely to simply regenerate the original input and skip the `signconfigtx` step
I got it, thank you very much
It's clear. Thank you @jyellick
Has joined the channel.
Hi @jyellick , by skipping the step of signconfigtx, the new peer (on the added org) has joined the channel, but I got the error below:
[gossip/discovery] func1 -> WARN 0cd Could not connect to {peer0:7051 [] [] peer0:7051
hi! how do you gauge how many ordering service nodes you should have on your blockchain in a production-grade setup? I know it should be more than one for redundancy purposes, but is there a guideline for determining the number of OSNs you should have?
hi, I have modified the docker-compose file in first-network by adding a few more orgs and peers. While running this docker file I encountered the error:
```
[channel: medicichannel] Rejecting broadcast of config message from 172.19.0.6:46798 because of error: Attempted to include a member which is not in the consortium
```
Has joined the channel.
Hi @jyellick , I got an issue with creating the channel. I created a network with 3 orgs, started the network with a docker-compose file, and when I run the command ```peer channel create -o orderer.example.com:7050 -c $CHANNEL_NAME -f ./channel-artifacts/channel.tx --tls --cafile /opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/ordererOrganizations/example.com/orderers/orderer.example.com/msp/tlscacerts/tlsca.example.com-cert.pem
```
I got the error below: ```
2018-04-03 11:18:54.567 UTC [channelCmd] InitCmdFactory -> INFO 001 Endorser and orderer connections initialized
2018-04-03 11:18:54.717 UTC [common/tools/configtxgen/localconfig] Load -> INFO 002 Loaded configuration: /etc/hyperledger/fabric/configtx.yaml
Error: got unexpected status: BAD_REQUEST -- Attempted to include a member which is not in the consortium
```
my config tx file is ```Profiles:
HealthcareOrdererGenesis:
Capabilities:
<<: *ChannelCapabilities
Orderer:
<<: *OrdererDefaults
Organizations:
- *OrdererOrg
Capabilities:
<<: *OrdererCapabilities
Consortiums:
HealthcareConsortium:
Organizations:
- *InPatient
- *OutPatient
- *Lab
HealthcareChannel:
Consortium: HealthcareConsortium
Application:
<<: *ApplicationDefaults
Organizations:
- *InPatient
- *OutPatient
- *Lab
Capabilities:
<<: *ApplicationCapabilities
```
I have not included the whole configtx, but I declared all the orgs and the orderer
@jworthington , can you please look at the above issue
@jaswanth Please use a service like hastebin.com when pasting your config files. Based on the `configtx.yaml` provided, I would expect no problems. Are you certain you bootstrapped your orderer with this file and not some earlier iteration?
How do I modify the channel writers policy to give members the ability to write?
I added the member role to the MSP policy but it doesn't work
@iamdm The channel writers policy is defined at `/Channel/Writers`. By default, this policy is satisfied if either `/Channel/Orderer/Writers` or `/Channel/Application/Writers` is satisfied, and `/Channel/Application/Writers` is satisfied if any organization's `/Channel/Application/<org>/Writers` policy is satisfied.
(This policy is emitted by default from `configtxgen -printOrg`)
I have this policy in channel writers: https://pastebin.com/UX95JKRr, but it doesn't work
Yes, this policy translates to "1 of yourbankmsp.Admin"
Note, you have two principals defined in `identities`:
``` "principal": {
"msp_identifier": "yourbankmsp",
"role": "ADMIN"
},
"principal_classification": "ROLE"
},
{
"principal": {
"msp_identifier": "yourbankmsp",
"role": "MEMBER"
},
```
Then your rule says:
```
"rule": {
"n_out_of": {
"n": 1,
"rules": [
{
"signed_by": 0
}
]
}
},
```
So, "1 out of [Signed by principal at index 0]" must be true, so it is "1 out of [Signed by yourbankmsp.Admin]"
If you changed the `"signed_by": 0` to be `"signed_by": 1` things should work. Or, since it is unused, you may simply delete the first admin principal, and the index will be correct.
@jyellick what i need to do if i want to give ability to endorse for ADMIN and MEMBER both?
@iamdm The ability to endorse is not set in the channel config, it is set in the endorsement policy. But, in any case, `Org.Member` is a super-set of `Org.Admin` so there is no need to explicitly include `Org.Admin` in any policy which allows `Org.member`
@iamdm The ability to endorse is not set in the channel config, it is set in the endorsement policy. But, in any case, `Org.Member` is a super-set of `Org.Admin` so there is no need to explicitly include `Org.Admin` in any policy which allows `Org.Member`
If for whatever reason you wanted to explicitly include the admin and member in the above policy, you would write the rule as:
``` "rule": {
"n_out_of": {
"n": 1,
"rules": [
{
"signed_by": 0
},
{
"signed_by": 1
}
]
}
},
```
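The evaluation semantics above can be sketched as a tiny recursive evaluator. This is loosely modeled on Fabric's `cauthdsl` but greatly simplified; `satisfied` stands for the set of principal indices that the submitter's signatures match:

```python
# Hedged sketch of n_out_of / signed_by policy evaluation; an
# illustration of the semantics, not the real cauthdsl implementation.

def evaluate(rule, satisfied):
    if "signed_by" in rule:
        # True if some submitted signature matched this principal index.
        return rule["signed_by"] in satisfied
    if "n_out_of" in rule:
        spec = rule["n_out_of"]
        hits = sum(evaluate(r, satisfied) for r in spec["rules"])
        return hits >= spec["n"]
    return False
```

With the original one-rule policy, a signature matching only principal 1 (yourbankmsp.MEMBER) fails, which is exactly the behavior being debugged here.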
@jyellick what config should I use if I want to allow all members of all MSPs to write? Should I use this policy in each MSP section?
and one more question: if I want chaincode endorsed by a member whose cert is issued by the CA, should I add their cert to the channel config?
Has joined the channel.
@iamdm
> what config should I use if I want to allow all members of all MSPs to write? Should I use this policy in each MSP section?
The default channel config allows what you ask for: all members of all MSPs may write. But yes, if you reproduce the changes above for all orgs, this should accomplish what you want.
> if I want chaincode endorsed by a member whose cert is issued by the CA, should I add their cert to the channel config?
There is no need to update the MSP when issuing certificates. The only time the MSP needs to be updated is when issuing new admin certificates, or modifying or adding CA certificates.
@jyellick I've done everything you wrote; all my peers now accept the tx, but the orderer says: `Failed to reach implicit threshold of 1 sub-policies, required 1 remaining: permission denied`
Could you paste your orderer logs at debug to a service like hastebin.com?
logs of orderer?
https://hastebin.com/azokuxorir.hs
`2018-04-03 16:10:02.011 UTC [policies] func1 -> DEBU 1626 Evaluation Failed: Only 0 policies were satisfied, but needed 1 of [ yourbankmsp.Writers LabMSP.Writers merchantmsp.Writers ]`
I would need to see the lines above this one to give you a more accurate diagnosis, but in short, the submitter did not satisfy any of those policies.
https://hastebin.com/uyacedenoc.coffeescript
I think the error is in the signature, but I don't understand why the peers accepted the signature but the orderer didn't
Yes, as you say:
```2018-04-03 16:48:18.656 UTC [cauthdsl] func2 -> DEBU 1999 0xc42000e608 signature for identity 0 is invalid: The signature is invalid
```
Appears to be the underlying problem.
How are you generating/adding the signature for your transaction?
I used the Go SDK to make the transaction; I used an identity generated by the CA with role MEMBER to sign the proposal
But the SDK was created with the peer identity, and my proposal transaction is signed by the peer certificate
That should all be okay
If your certificate were not recognized as being signed by the CA, then I would have expected an error like "Certificate issued by unknown authority"
The certificate is recognized because the peer is a channel admin :)
I've solved the problem, it really was a bad signer :woo:
Wonderful!
I have set 'grpc.keepalive_time_ms': 6*60*1000 in the node SDK. Will that keepalive ping appear in the orderer logs?
- CORE_LOGGING_LEVEL=debug
- ORDERER_GENERAL_LOGLEVEL=debug
@patelan I do not believe so, you might be able to see it if you turned on the gRPC debug tracing, but I am not certain.
@jyellick If I turn on grpc debug in the node SDK application, will it show keepalive logs on the orderer, or do I need to set anything else on the orderer? I set:
export GRPC_TRACE=http,connectivity_state,timer,timer_check,api
export GRPC_VERBOSITY=DEBUG
If you have set it for your application, I would expect to see the keepalive messages in your application's logs. No need to adjust logging or settings on the orderer.
@jyellick Thanks. Let me check
Can I ask one question: when adding a new org, joining the new peer succeeded, but the newly joined peer (on the newly added org) got this issue:
[gossip/discovery] func1 -> WARN 0cd Could not connect to {peer0:7051 [] [] peer0:7051
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=nJMbbTmLYCQvLaRfQ) @jyellick Thank you for the reply. I am really not understanding where it's going wrong. Here are my config files:
configtx file https://hastebin.com/raw/dokunaxane
docker file - https://hastebin.com/zikitasibo.cs
and all the commands that I used - https://hastebin.com/tigowiteci.bash
Can you please look into it? I've been stuck with this for a long time and really don't see where the issue is.
Has anyone faced an error like *Tried joining channel mychannel but our org (Org1MSP) isn't among the orgs of the channel: [Org0MSP], aborting.*? How do I solve it?
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=hYN8Qi8jj3XhMXX8m) @Ryan2 Check the channel configuration: is the org to be joined in the channel's application group?
2018-04-04 19_08_33-Cortana.png
Has joined the channel.
@Ryan2 This is a limitation in the current version of Fabric. If you join a peer to a channel whose org was not in the original membership list, it will display some errors in the logs until it catches up with the network. Ensure that the configuration has leader election enabled for gossip, and the problem should disappear after the peer catches up.
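For reference, leader election for gossip is controlled by these peer environment variables; a docker-compose style sketch (the service name is illustrative, and the two flags must not both be true):

```yaml
peer0.org3.example.com:
  environment:
    - CORE_PEER_GOSSIP_USELEADERELECTION=true   # elect the org leader dynamically
    - CORE_PEER_GOSSIP_ORGLEADER=false          # do not statically designate this peer
```

The elected leader is the peer that pulls blocks from the ordering service for its org, which is what lets a newly joined peer catch up.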
@jyellick I am not seeing any Keep-alive logs in Orderer from node SDK.
@jyellick seeing intermittent error. 0|app | error: [Orderer.js]: sendBroadcast - on error: "Error: 14 UNAVAILABLE: TCP Read failed\n at createStatusError (/src/node_modules/grpc/src/client.js:64:15)\n at ClientDuplexStream._emitStatusIfDone (/src/node_modules/grpc/src/client.js:270:19)\n at ClientDuplexStream._receiveStatus (/src/node_modules/grpc/src/client.js:248:8)\n at /src/node_modules/grpc/src/client.js:804:12"
0|app | [2018-04-03 17:02:31.568] [ERROR] instantiate-chaincode - Failed to send instantiate transaction and get notifications within the timeout period. Error: SERVICE_UNAVAILABLE
0|app | [2018-04-03 17:02:31.568] [ERROR] instantiate-chaincode - Failed to order the transaction. Error code: undefined
@jyellick I'm not sure how to verify that the grpc keepalive is working from the node SDK to the orderer?
@patelan I'm not sure exactly how to do these things with the node SDK, but you should be able to open a connection to the `Deliver` interface, send nothing, and hold it open indefinitely
@jyellick okay, great. How can I open a connection from the node SDK to the orderer for an indefinite time?
I am not an expert with the node SDK, you might find someone who can tell you in #fabric-sdk-node. What you would like to do is connect to the gRPC interface, open a stream to the `Deliver` (or `Broadcast`) service, and then not send any messages
@jyellick Thanks. Let me try. I added my question in #fabric-sdk-node.
Hi. I have a question about reconfiguration of ordering service.
Is it possible to change the type of ordering service, for example from solo to Kafka, by submitting a CONFIG_UPDATE? If so, are there any points to consider in how to set up a new Kafka/ZooKeeper cluster?
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=ErP5d2SyaDyYCZ4dP) @jyellick can you please look at it
@yoheiueda No, changing the consensus algorithm is not supported
@jaswanth Can you please post your orderer logs at debug?
Thank you very much.
How about adding a new ordering service node to an existing Kafka cluster? Is that not supported?
@jyellick when I am running ```
peer channel create -o orderer.example.com:7050 -c $CHANNEL_NAME -f ./channel-artifacts/channel.tx --tls --cafile /opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/ordererOrganizations/example.com/orderers/orderer.example.com/msp/tlscacerts/tlsca.example.com-cert.pem
``` for the first time, these are the orderer logs: https://hastebin.com/mumuwewuso.hs
The channel is not created. Then I ran it again and got the error `readset expected key [Group] /Channel/Application at version 0, but got version 1`. Here are my orderer logs: https://hastebin.com/topeqapiha.hs
Is it possible to have two or more orderer orgs ordering one channel? I'm trying to do it, but each orderer has its own ledger that is not synced
I supposed that adding another org in the configtx would be enough, but it doesn't seem so:
```
OrgsOrdererGenesis:
Orderer:
OrdererType: solo
Addresses:
- orderer-org0:7050
- orderer-org3:7050
BatchTimeout: 2s
BatchSize:
MaxMessageCount: 10
AbsoluteMaxBytes: 99 MB
PreferredMaxBytes: 512 KB
Kafka:
Brokers:
- 127.0.0.1:9092
Organizations:
- *orderer
- *orderer2
```
Is it possible to do it with two solos, or must it be with two Kafkas? Because the comments in configtx.yaml make me think it's possible to use more than one org:
```
# Organizations is the list of orgs which are defined as participants on
# the orderer side of the network
```
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=nmsMzQg7jBEbANT2y) @dsanchezseco You can't have more than one solo orderer for a given channel.
@sanchezl and what about two Kafka clusters of orderers, with each one belonging to a different org?
I mean, if I have a channel with just one orderer organization, who is in charge of it? A third party? One of the peer organizations? How do you get shared tenancy of the orderer, or multiple orderers?
• You can have two Kafka orderers, each with its own owning org. The owning org is specified when you configure the LocalMSPDir and LocalMSPID on each orderer.
• There are orderer system channels (well, just one per orderer 'network') and then application ("regular") channels.
• The orderer system channel would specify both/all orderer orgs.
• Application channels don't need to give ANY permissions to orderer orgs, only to the peer orgs.
(As I understand it at the moment)
@sanchezl ok thanks! i'll give it a try, but first i need more knowledge of kafka in general. Thanks!
> How about adding a new orderer service node into an existing kafka cluster? It is not supported?
@yohelueda Yes, this is certainly supported. Simply bootstrap the new orderer as you did the originals. Then, you will need to issue a config update to your channels (including your orderer system channel) which adds the new orderer to the list of orderers so that peers know to query it.
> channel is not created; then I ran it again and got the error `readset expected key [Group] /Channel/Application at version 0, but got version 1` — here are my orderer logs
@jaswanth The second error indicates that the channel was created. The logs you also show me indicate that channel `healthcarechanneltemp` already exists.
I've tried this case, but the client only calls the original endpoint (orderer:7050). When I stopped the original orderer, the fabric network stopped working; it did not fail over.
Has anyone tried orderer failover successfully? What steps need to be taken?
I'm confused: the chaincode invoke failed, but a block was still created.
My case is:
On peer0, the invoke CC log fails like below, but block [9] is still created:
```
2018-04-06 09:36:19.682 UTC [committer/txvalidator] validateTx -> ERRO 052 VSCCValidateTx for transaction txId = d067542153777a200adc3399f4910f1ef3545fe7818541b3cbe2ccfc0c3d83ae returned error: VSCC error: endorsement policy failure, err: signature set did not satisfy policy
2018-04-06 09:36:19.683 UTC [valimpl] preprocessProtoBlock -> WARN 053 Channel [mychannel]: Block [9] Transaction index [0] TxId [d067542153777a200adc3399f4910f1ef3545fe7818541b3cbe2ccfc0c3d83ae] marked as invalid by committer. Reason code [ENDORSEMENT_POLICY_FAILURE]
2018-04-06 09:36:19.692 UTC [kvledger] CommitWithPvtData -> INFO 054 Channel [mychannel]: Committed block [9] with 1 transaction(s)
```
On peer1, the tx is marked invalid but block [9] is still committed:
```
2018-04-06 09:36:19.708 UTC [vscc] Invoke -> WARN 03d Endorsement policy failure for transaction txid=d067542153777a200adc3399f4910f1ef3545fe7818541b3cbe2ccfc0c3d83ae, err: Failed to authenticate policy
2018-04-06 09:36:19.708 UTC [txvalidator] VSCCValidateTxForCC -> ERRO 03e VSCC check failed for transaction txid=d067542153777a200adc3399f4910f1ef3545fe7818541b3cbe2ccfc0c3d83ae, error VSCC error: policy evaluation failed, err Failed to authenticate policy
2018-04-06 09:36:19.708 UTC [txvalidator] Validate -> ERRO 03f VSCCValidateTx for transaction txId = d067542153777a200adc3399f4910f1ef3545fe7818541b3cbe2ccfc0c3d83ae returned error VSCC error: policy evaluation failed, err Failed to authenticate policy
2018-04-06 09:36:19.709 UTC [statevalidator] ValidateAndPrepareBatch -> WARN 040 Block [9] Transaction index [0] marked as invalid by committer. Reason code [10]
2018-04-06 09:36:19.711 UTC [kvledger] Commit -> INFO 041 Channel [smschannel9]: Created block [9] with 1 transaction(s)
2018-04-06 09:36:19.722 UTC [eventhub_producer] SendProducerBlockEvent -> INFO 042 Channel [smschannel9]: Sending event for block number [9]
```
Does anyone know why?? I thought the block should not be committed.
Hi!
I've started an ordering service using kafka following the e2e example, but with two zookeepers and 2 kafka brokers instead of 4.
If my understanding of kafka is correct, I have the producer (aka the peer), the brokers (aka kafka), and the subscribers (aka the orderers), right? Following this thought, when I want to create a new transaction I should send it to the kafka broker, which would broadcast it to the orderer(s).
But in the examples the invoke is sent to the orderer directly, and no update of the topic is done on the broker.
Did I miss something?
Is there any transaction flow diagram for the kafka ordering model? Also, how does communication happen between multiple OSNs and the kafka cluster for transaction queuing and signing, and who then broadcasts the block to all the peers?
@Ryan2 what is the `channel_group`? is it an example from fabric?
You do not ever directly communicate with Kafka from the client perspective. Only the OSNs themselves communicate with Kafka. The basic flow:
1) Client sends transaction to one of the OSN nodes
2) OSN node makes sure the creator is allowed to write to the channel / makes sure it's a valid orderer message
3) If 2) is successful, then the OSN node publishes to Kafka
4) All OSNs are subscribed to Kafka and receive the orderer transaction
5) Decision is made to cut a block (broadcast to all OSNs)
6) Each OSN generates a block and delivers it to any peers which are connected for the given channel(s)
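The flow above can be sketched as a toy simulation (hypothetical Python; an in-memory list stands in for the Kafka topic, and the names here are illustrative, not Fabric's):

```python
class ToyKafkaTopic:
    """Stands in for a Kafka partition: an append-only, totally ordered log."""
    def __init__(self):
        self.log = []

    def publish(self, msg):
        self.log.append(msg)

class ToyOSN:
    """Toy ordering-service node; checks and names are illustrative only."""
    def __init__(self, topic, writers):
        self.topic = topic
        self.writers = writers       # identities allowed to write to the channel
        self.consumed = 0            # offset of the next unread log entry
        self.blocks = []

    def broadcast(self, creator, tx):
        # step 2: check the creator may write; step 3: publish to Kafka
        if creator not in self.writers:
            return False
        self.topic.publish(tx)
        return True

    def consume_and_cut(self, batch_size):
        # steps 4-6: every OSN reads the same ordered log, so every OSN
        # cuts identical blocks at identical offsets
        while len(self.topic.log) - self.consumed >= batch_size:
            block = self.topic.log[self.consumed:self.consumed + batch_size]
            self.consumed += batch_size
            self.blocks.append(block)
        return self.blocks
```

The point of routing everything through the shared log is the last comment: because all OSNs consume the same totally ordered topic, they deterministically produce identical blocks without talking to each other.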
@Ryan2 - what are you using to invoke the chaincode?
> Do anyone know why?? I though that Block should not be committed
@Ryan2 Blocks are always committed, regardless of the validity of the transactions inside them. The valid transactions inside the block will be applied to the world state. The invalid transactions will have no effect.
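To illustrate that point (a hypothetical Python sketch, not Fabric code): every block is appended to the ledger regardless, but each transaction carries a validity flag, and only valid transactions touch the world state:

```python
def commit_block(world_state, ledger, block):
    """Append the whole block to the ledger, but apply only valid txs.

    Each tx is (txid, valid, writes) where writes is a dict of key -> value;
    this mirrors the validation-flag behaviour, not Fabric's actual types.
    """
    ledger.append(block)                 # invalid txs are still stored on-chain
    for txid, valid, writes in block:
        if valid:
            world_state.update(writes)   # only valid txs mutate state
    return world_state
```

So an `ENDORSEMENT_POLICY_FAILURE` transaction still appears in the committed block, which is why the logs show "Committed block [9]" even though the transaction had no effect.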
@jyellick I don't know why the system channel receives the config_update tx (the one used to create the channel); the tx seems to be processed twice by the system channel's `ProcessConfigUpdateMsg`. Can you tell me why?
@mastersingh24 in step 6, the OSN delivers the block to the peer, not to the orderer
@jyellick so if I want to add a new org to a non-system channel, should the config_update tx be sent to the system channel or to the application channel? And why does the logic process the `config_update` tx twice — why does this message need so much handling?
Has joined the channel.
Could you please help me understand the atomicity of hyperledger transactions.
```
Invoke(input credit_transaction_data) {
    account = GetState(credit_transaction_data.account_number)
    // adjust account
    PutState(credit_transaction_data.account_number, account)
    PutState(credit_transaction_data.account_number, credit_transaction_data)
}
```
Is atomicity guaranteed for the above for a given credit transaction?
@vu3mmg - Yes .... all PutState calls made in a single chaincode invocation are atomic
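One way to picture that guarantee (an illustrative Python sketch; the class and method names are made up): the chaincode's PutState calls accumulate in a write set that is applied to the state database all-or-nothing at commit time:

```python
class ToyTxContext:
    """Buffers PutState calls; nothing touches the state DB until commit."""
    def __init__(self, state_db):
        self.state_db = state_db
        self.write_set = {}

    def get_state(self, key):
        # reads see this tx's own buffered writes first
        return self.write_set.get(key, self.state_db.get(key))

    def put_state(self, key, value):
        self.write_set[key] = value      # buffered, not yet visible

    def commit(self, valid):
        # all writes land together, or none do
        if valid:
            self.state_db.update(self.write_set)
        self.write_set = {}
```

So a transaction that fails validation leaves no partial writes behind — either both PutState calls from the invocation are applied, or neither is.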
thank you .
Thank you @jyellick for your information
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=JgafriNfLLPYH67qQ)
Has joined the channel.
@mastersingh24 Thanks!! I got it now (and it's working too :smiley: ). Not seeing anything in the kafka logs confused me, but it was because I only had one orderer, so there was no need to broadcast. Thanks again!!!
Hi, I've got a problem: my OSN stops frequently, and I have no clue how its process is getting killed.
I don't know if anyone is trying to deploy the orderers with kafka across multiple machines, but if you do, be sure to set the `KAFKA_ADVERTISED_HOST_NAME=` env var in your kafka containers; otherwise they register with the container name, which is not resolvable from the other machines.
Has joined the channel.
Is it true that having more endorsers in the network (belonging to one channel) will reduce the performance of the network, because of gossip messages exchanged among endorsers?
I got this issue when a new peer joined the channel:
```
2018-04-10 09:48:32.679 UTC [gossip/gossip] Gossip -> WARN 07d Failed obtaining gossipChannel of [115 109 115 99 104 97 110 110 101 108 50 52] aborting
2018-04-10 09:48:32.679 UTC [gossip/election] waitForInterrupt -> DEBU 07e [173 185 61 253 106 44 137 103 30 103 15 236 149 16 143 50 24 106 119 133 88 184 178 228 82 39 157 246 79 98 98 188] : Entering
2018-04-10 09:48:35.429 UTC [ConnProducer] DisableEndpoint -> WARN 07f Only 1 endpoint remained, will not black-list it
2018-04-10 09:48:35.435 UTC [blocksProvider] DeliverBlocks -> ERRO 080 [mychannel] Got error &{FORBIDDEN}
```
What is this error and how do I fix it?
@Ryan2 no
@Ryan2 it looks like the peer is trying to read a block from the ordering service while it's not eligible to do so, hence getting `FORBIDDEN` in response
> is that true that the more endorser in network (belong to one channel) will reduce the performance of the network because of gossip exchange messages among endorsers?
No, the number of endorsers in your channel is unrelated to performance. (Except that it allows chaincode invocations to scale, in which case more endorsers means better performance.)
thank you @jyellick @C0rWin
Hi, how do I remove a created channel? I'm not using a specific channel and want to remove it.
@Ryan2 You can make channels read-only by changing their policies, but you cannot remove them.
Hi @jyellick, can fabric-kafka be substituted by some other image? Are there any requirements on this?
Thank you for the information @jyellick
So any created channel cannot be removed, only made read-only?
Do you have a specific guide on how to do that (changing a channel's policies to make it read-only)?
I would suggest changing `/Channel/Application/Writers` to require "1 out of []" (i.e., 1 out of 0, a policy which can never be satisfied).
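The trick works because an n-of-m policy with an empty sub-policy list can never collect enough satisfied sub-policies. A toy evaluator makes this concrete (hypothetical Python; not Fabric's policy engine, and sub-policies are simplified to org names):

```python
def evaluate_n_out_of(n, sub_policies, signers):
    """True if at least n of the sub-policies are satisfied by the signers.

    Sub-policies here are just org names for simplicity; with an empty
    list, "1 out of []" can never reach n=1 satisfied sub-policies, so
    the Writers policy is permanently unsatisfiable (read-only channel).
    """
    satisfied = sum(1 for org in sub_policies if org in signers)
    return satisfied >= n
```

No matter which identities sign, the count of satisfied sub-policies over an empty list is 0, which is always less than 1.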
Thanks, I give it a try
Hi, if we use kubernetes, can we set up the orderer as a ReplicaSet running 3 orderers? In that case, the peer would know one IP address and port (through a load balancer). Also, can we set up Kafka/Zookeeper in a similar way?
@ibmamnt I believe you can for orderers, though you will have to be careful with how you issue and manage your TLS certificates.
Kafka and ZK I do not think will function well in this setup.
@jyellick Thanks. While writing the deployment yaml, I noticed the TLS certificate issue as well as the local orderer ledger location. For the time being, I may keep them separate, just like order0,1,3.
@jyellick, referring to /Channel/Application/Writers, which is defined like this:
```
"Writers": {
    "mod_policy": "Admins",
    "policy": {
        "type": 3,
        "value": {
            "rule": "ANY",
            "sub_policy": "Writers"
        }
    },
    "version": "0"
}
```
A simple question: what is the meaning of "sub_policy": "Writers"? Does it refer to `/Channel/Application/Groups/ApplicationOrganization`?
(I referred to http://hyperledger-fabric.readthedocs.io/en/release-1.0/policies.html#configuration-and-policies)
I did update "1 out of []" for `/Channel/Application/Groups/ApplicationOrganization`, but computing the update errored with "proto: can't skip unknown wire type 6 for common.ConfigUpdate".
Do you have any idea how to make the channel read-only as you said?
@jyellick can I modify the system channel config? Like adding an `org` or a `peer`? I have done https://github.com/hyperledger/fabric/tree/release-1.1/examples/configtxupdate and it works, and I see from the orderer source code that updates to the system channel are supported. If so, will the new config update affect previously created application channels?
Has joined the channel.
You may modify the orderer system channel config. It will not affect previously created channels, but it will affect new channels.
@jyellick can I also add a new peer to an existing org? If so, will the previous channel data be synced via the gossip protocol?
Adding a new peer to an existing org does not require a channel config update.
@jyellick yes, but will the previous channel data be synced by the gossip protocol? If so, please tell me how the other peers find the new peer.
In a channel, peers discover each other either via the list in `CORE_PEER_GOSSIP_BOOTSTRAP` or via the anchor peers defined in the channel's channel config.
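For example, in a docker-compose service definition (the service names and ports below are placeholders, not from any particular sample), the bootstrap list is a comma-separated set of peers in the same org:

```yaml
# excerpt from a hypothetical docker-compose service for a peer
peer1-org1:
  image: hyperledger/fabric-peer
  environment:
    # other peers of the same org to bootstrap gossip from
    - CORE_PEER_GOSSIP_BOOTSTRAP=peer0-org1:7051
    # address this peer publishes to peers of OTHER orgs
    # (cross-org discovery goes via the anchor peers in the channel config)
    - CORE_PEER_GOSSIP_EXTERNALENDPOINT=peer1-org1:7051
```

Bootstrap covers intra-org discovery only; for peers of other orgs to find yours, the org needs anchor peers set in the channel configuration.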
@jyellick thanks, I've seen http://hyperledger-fabric.readthedocs.io/en/v1.1.0-alpha/gossip.html?highlight=gossip but I have some questions about this passage:
```
Manages peer discovery and channel membership, by continually identifying available member peers, and eventually detecting peers that have gone offline.
Disseminates ledger data across all peers on a channel. Any peer with data that is out of sync with the rest of the channel identifies the missing blocks and syncs itself by copying the correct data.
Bring newly connected peers up to speed by allowing peer-to-peer state transfer update of ledger data.
```
please wait a moment
Clipboard - April 11, 2018 11:07 PM
The above picture shows me the function of the gossip protocol, so the first question is: is the `Manager peer` set by `CORE_PEER_GOSSIP_BOOTSTRAP` or by the anchor peer defined in the channel's channel config? It also tells me the `Manager peer` pulls blocks from the kafka cluster if I use kafka as the consenter. And if I delete the peer's ledger files, will it still sync the data? @jyellick @mastersingh24
The 'Manager peer' I think is also called the 'leader peer'. It can be configured statically, or, by default, it uses 'leader election' and picks one peer from each org to be the leader.
So can each org contain many `leader peers`?
```
# Defines whenever peer will initialize dynamic algorithm for
# "leader" selection, where leader is the peer to establish
# connection with ordering service and use delivery protocol
# to pull ledger blocks from ordering service. It is recommended to
# use leader election for large networks of peers.
useLeaderElection: true
# Statically defines peer to be an organization "leader",
# where this means that current peer will maintain connection
# with ordering service and disseminate block across peers in
# its own organization
orgLeader: false
```
@jyellick I got it
` `` # Defines whenever peer will initialize dynamic algorithm for
# "leader" selection, where leader is the peer to establish
# connection with ordering service and use delivery protocol
# to pull ledger blocks from ordering service. It is recommended to
# use leader election for large networks of peers.
useLeaderElection: true
# Statically defines peer to be an organization "leader",
# where this means that current peer will maintain connection
# with ordering service and disseminate block across peers in
# its own organization
orgLeader: false``` @jyellick i got it
` `` # Defines whenever peer will initialize dynamic algorithm for
# "leader" selection, where leader is the peer to establish
# connection with ordering service and use delivery protocol
# to pull ledger blocks from ordering service. It is recommended to
# use leader election for large networks of peers.
useLeaderElection: true
# Statically defines peer to be an organization "leader",
# where this means that current peer will maintain connection
# with ordering service and disseminate block across peers in
# its own organization
orgLeader: false```
Has joined the channel.
Hello there,
I have a problem with the Kafka consensus. I have deployed the Kafka cluster to MachineA with 4 brokers as described in "Bringing up a Kafka-based Ordering Service". I also tested my Kafka cluster with the steps in the Kafka quickstart.
Compose file I used for the Kafka cluster: https://paste.ee/p/t24Mi#s=0
I want to run ordering services on MachineB and MachineC. I edited configtx.yaml as: https://paste.ee/p/t24Mi#s=2
I have set up compose files for the ordering services as follows: https://paste.ee/p/t24Mi#s=1
When I run the ordering service, I get a timeout like this: "Failed to connect to broker 5b9ae1a5079d:9092: dial tcp: i/o timeout" Full logs: https://paste.ee/p/t24Mi#s=3
Addresses such as 5b9ae1a5079d, 1be019639f7d, 035674ce3128, 27bb00ce4550 mentioned in the logs are the IDs of my Kafka containers running on MachineA, but I never configured them in the ordering service. How did it know them? Also, is the message "ClientID is the default of 'sarama', you should consider setting it to something application-specific." normal?
@mozkarakoc You almost definitely need to set `KAFKA_ADVERTISED_HOSTNAME` and friends. Please make sure you can connect to the Kafka cluster with the quickstart clients from machine B to machine A
@jyellick I cannot connect to MachineA from MachineB with the quickstart clients:
[2018-04-11 21:32:31,390] ERROR Error when sending message to topic my-replicated-topic with key: null, value: 2 bytes with error: (org.apache.kafka.clients.producer.internals.ErrorLoggingCallback)
org.apache.kafka.common.errors.TimeoutException: Expiring 3 record(s) for my-replicated-topic-0: 25074 ms has passed since batch creation plus linger time
I'm new to Kafka and ZooKeeper, so I apologize that I have difficulty understanding and interpreting the errors.
What is KAFKA_ADVERTISED_HOSTNAME? What should I set it to?
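For reference, the `KAFKA_ADVERTISED_*` settings discussed above might look like this in a broker's compose service; `kafka0.example.com` and the port are placeholders, and the exact variable names depend on the Kafka image in use:

```yaml
kafka0:
  image: hyperledger/fabric-kafka
  environment:
    # The address the broker advertises to clients; it must be a hostname
    # that is resolvable and reachable from the orderer machines
    # (MachineB/MachineC), not the Docker container ID.
    - KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://kafka0.example.com:9092
  ports:
    - "9092:9092"
```

Without an advertised listener, Kafka hands out its container hostname (e.g. `5b9ae1a5079d`) in broker metadata, which is likely how the orderer learned those IDs.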
Hi @jyellick, may I ask a question: if I define a list of `CORE_PEER_GOSSIP_BOOTSTRAP` peers when starting one peer, will this consume more CPU/RAM? I think in this case the peer will connect to more peers to exchange messages, leading to more server resource consumption. Is that correct?
Hi, I have a question: how can a joined peer be removed from a channel? If it is possible, what is the procedure?
@Ryan2 The bootstrap parameter is only for discovering other peers at 'bootstrap', or when the peer is first starting. Assuming gossip is working (via say, anchor peers) you should see no additional overhead.
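The bootstrap list described above is a plain peer environment setting; a minimal sketch in a compose file (all hostnames are placeholders):

```yaml
peer1.org1.example.com:
  environment:
    # Contacted only at startup to discover the rest of the org's peers;
    # after that, gossip membership maintains itself.
    - CORE_PEER_GOSSIP_BOOTSTRAP=peer0.org1.example.com:7051
```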
There is no support for un-joining a peer to a channel. The easiest recourse is to simply deploy a new peer
so once a peer has joined a channel, it permanently belongs to that channel?
Yes, but peers are easy to provision, you may throw one out and recreate it, and you will have lost nothing
thank you
Hello everyone, I'm trying to set up a network using Kafka + ZooKeeper, but the Kafka containers keep restarting every minute, so I'm not able to start the network. All the containers (CA, peers, orderers, Kafka, ZooKeeper) are on a single physical machine. I'd be grateful if someone could tell me the reason. @jyellick @mastersingh24
When I looked into the docker logs of one of the kafka containers (there are 4 in total), I see the following message:
"[2018-04-12 07:17:48,628] FATAL Fatal error during KafkaServerStartable startup. Prepare to shutdown (kafka.server.KafkaServerStartable)
org.I0Itec.zkclient.exception.ZkTimeoutException: Unable to connect to zookeeper server within timeout: 6000"
these are the docker-compose files that I'm using for kafka and zookeeper.
Kafka docker-compose file: https://pastebin.com/XNXDVvxq
Zookeeper docker-compose file: https://pastebin.com/zmKHYBfD
Base file: https://pastebin.com/Qw9LFBNA
Hi @jyellick do you configure the fabric-kafka images to auto create the topic testchainid when the cluster is up?
Hi @jyellick
I set `KAFKA_ADVERTISED_HOSTNAME` and `KAFKA_ADVERTISED_LISTENERS` in my brokers. Now, orderer0 logs: https://paste.ee/p/pIrNZ
When I post a createChannel request:
`orderer0.example.com | 2018-04-12 09:50:26.857 UTC [orderer/common/broadcast] Handle -> WARN 16e [channel: mychannel] Rejecting broadcast of message from ## ORG-1 IP ##:59368 with SERVICE_UNAVAILABLE: rejected by Consenter: will not enqueue, consenter for this channel hasn't started yet`
@mozkarakoc Are you able to connect to Kafka with the sample cli clients? If you are automating this with a script, did you wait long enough for the Kafka cluster to finish starting up before sending the channel create to the orderer?
Has joined the channel.
@jyellick yes, I can connect to the Kafka cluster with the cli clients; I played around with some producing and consuming. What I don't understand: should I send the create request to the Kafka brokers? I'm using the node.js SDK and it posts the createChannel request to orderer0. Is this flow wrong?
> org.I0Itec.zkclient.exception.ZkTimeoutException: Unable to connect to zookeeper server within timeout: 6000"
@anishman This indicates a misconfiguration of Kafka. Please follow the Kafka quickstart guide and get Kafka working before trying to include Fabric
@mozkarakoc You are able to produce/consume to the Kafka cluster from the orderer hosts?
@jyellick yes
Can you paste your orderer log, at debug, to a service like hastebin.com ?
Order logs of my previous attempt: https://paste.ee/p/pIrNZ
It looks like Kafka may still be starting up during this log
The orderer will return `SERVICE_UNAVAILABLE` until Kafka is stable, has elected the leaders for the partitions etc.
If, after 5 minutes or so, you are still seeing the connection errors in the log, then it points towards a Kafka misconfiguration
But if the sample clients are working, I suspect you simply did not wait long enough
I didn't wait 5 minutes. I will try that, thanks
@jyellick I see. thanks a lot for the reply.
Hi @jyellick , my network comes up with 2 orgs (I set up `leader election` and `CORE_PEER_GOSSIP_BOOTSTRAP`) without specifying an `anchor peer` in the channel configuration.
As I've built and tested it, the ledgers on all peers still stay in sync via the leader peer and gossip, without needing an anchor peer for communication between orgs.
Can you share your thoughts on the reliability and the pros and cons of this setup compared to using an `anchor peer` without `CORE_PEER_GOSSIP_BOOTSTRAP`?
Or do these 2 solutions give the same result?
Thanks in advance.
Anchor peers and bootstrap peers are effectively the same for intra-org gossip
Anchor peers additionally enable inter-org gossip
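The anchor-peer setting that enables inter-org gossip lives in the channel configuration; a sketch of the relevant `configtx.yaml` fragment (names and paths are placeholders):

```yaml
Organizations:
  - &Org1
    Name: Org1MSP
    ID: Org1MSP
    MSPDir: crypto-config/peerOrganizations/org1.example.com/msp
    AnchorPeers:
      # Advertised to *other* orgs so their peers can gossip with Org1.
      - Host: peer0.org1.example.com
        Port: 7051
```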
Hi @jyellick, orderer prints this log ```2018-04-13 03:40:40.136 UTC [orderer/common/deliver] Handle -> DEBU 82d Starting new deliver loop
2018-04-13 03:40:40.136 UTC [orderer/common/deliver] Handle -> DEBU 82e Attempting to read seek info message
2018-04-13 03:40:40.136 UTC [orderer/common/deliver] Handle -> WARN 82f [channel: mychannel] Rejecting deliver request because of consenter error```
which part may be the cause?
@Glen Either your Kafka cluster is not configured correctly, or you have not waited long enough for startup to complete
ok, i try
by the way, does the following log indicate the Kafka cluster is ready? ```2018-04-13 03:41:03.065 UTC [orderer/kafka] try -> DEBU 842 [channel: mychannel] Error is nil, breaking the retry loop
2018-04-13 03:41:03.065 UTC [orderer/kafka] startThread -> INFO 843 [channel: mychannel] Channel consumer set up successfully
2018-04-13 03:41:03.065 UTC [orderer/kafka] startThread -> INFO 844 [channel: mychannel] Start phase completed successfully
2018-04-13 03:41:03.069 UTC [orderer/kafka] processMessagesToBlocks -> DEBU 845 [channel: mychannel] Successfully unmarshalled consumed message, offset is 0. Inspecting type...
2018-04-13 03:41:03.069 UTC [orderer/kafka] processConnect -> DEBU 846 [channel: mychannel] It's a connect message - ignoring```
`Start phase completed successfully` is a good indicator, but in general, you may poll waiting for the `SERVICE_UNAVAILABLE` to go away
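The "poll until `SERVICE_UNAVAILABLE` goes away" advice can be sketched as a simple retry loop; `flaky_cmd` below is a stand-in that fails twice before succeeding, and would be replaced by the real call (e.g. `peer channel create ...`):

```shell
# Re-run a command with a delay until it succeeds or the retry budget runs out.
retry_until_ready() {
  local attempts=0 max="$1"; shift
  until "$@"; do
    attempts=$((attempts + 1))
    if [ "$attempts" -ge "$max" ]; then
      echo "still not ready after $max attempts" >&2
      return 1
    fi
    sleep 1   # in practice, several seconds between polls
  done
  echo "succeeded after $attempts retries"
}

# Stand-in for the real channel operation: fails twice, then succeeds.
tries=0
flaky_cmd() {
  tries=$((tries + 1))
  [ "$tries" -ge 3 ]
}

retry_until_ready 10 flaky_cmd   # prints: succeeded after 2 retries
```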
Hi @jyellick , I used `peer channel fetch 0 -o fabric-orderer:7050 -c testchainid` to get the block and it succeeded, but when I send my tx with the SDK, it prints the consenter error
Has joined the channel.
hi @jyellick, I waited a little bit and the channel was set up successfully. thanks :)
Has joined the channel.
@Glen What version of Fabric are you using?
Has joined the channel.
1.0.0
it's working now. Maybe Kafka enters a leaderless state each time a new channel is created; after I wait some time it becomes ready.
@Glen I'd recommend you move to v1.1.0
In v1.0.x the `Deliver` service may return success before the `Broadcast` service is ready to accept requests
In v1.1.0, this has been changed such that `Deliver` will also return `SERVICE_UNAVAILABLE` until the whole system has initialized.
yes, we are working on 1.0, we're considering moving to 1.1.0
Hi @jyellick,
I finally brought up a Kafka-based OSN. When I send transactions to orderer0, orderer1 receives the messages via Kafka. My infrastructure:
- Kafka cluster running on MachineA (I will scale it later)
- orderer0 running on MachineB
- orderer1 running on MachineC
When I stop orderer0 and send the transactions to orderer1, orderer1 receives the message and starts the broadcast with an error.
Error logs:
`orderer1.example.com | 2018-04-14 15:06:47.097 UTC [orderer/common/server] func1 -> CRIT 46d Broadcast client triggered panic: runtime error: invalid memory address or nil pointer dereference`
@mozkarakoc Can you please post full orderer logs at debug? (to a service like hastebin.com)
@jyellick orderer1.example.com full logs: https://hastebin.com/pisopehaqe.rb
@mozkarakoc What version of the orderer are you running? I see how a panic could occur, though it would generally indicate that your message is malformed in some way
@jyellick I'm using the latest docker images. I test this case:
- docker stop orderer0
- create a transaction proposal with the node.js SDK
- after receiving a successful proposal response, send the transaction to orderer0 and get a SERVICE_UNAVAILABLE response
- send the same transaction to orderer1
Could sending the same transaction be the reason for the malformation? But orderer0 never receives the transaction.
@mozkarakoc No, I don't think it's related to sending the same transaction, unless it is some quirk of the node sdk
What if you do not stop any orderers?
@jyellick If I don't stop any orderers, there is no problem, but in that case the SDK always sends the transaction to orderer0.
I can choose which orderer to send the transaction to via the SDK. When both orderers are up and running, I sent different transactions to each of them and they processed the transactions.
But if I don't choose the orderer, the SDK sends transactions to the first orderer in the configuration (orderer0). If orderer0 is up and running there is no problem: it processes transactions and orderer1 receives them via the Kafka cluster.
I want to test system behavior when orderer0 is down, so I stop orderer0.
Has joined the channel.
Has joined the channel.
> But If I don't choose the orderer, the SDK sends transactions to first orderer in the configuration. (orderer0). If orderer0 is up and running there is no problem. It processes transactions and orderer1 receive via kafka cluster.
Does it automatically fail over to the second orderer? I did not think it did. So I suspect you are manually failing over to the next orderer? In this case, are you certain that it is the same transaction and that you have not missed any steps? As I mentioned before, it looks like the transaction is not complete
@jyellick the node SDK doesn't do automatic failover for ordering services. I asked in #fabric-sdk-node but no one has answered yet. :(
https://chat.hyperledger.org/channel/fabric-sdk-node?msg=46T9bsqDPJucrpT8W
I didn't apply the same steps. For example, I was reusing the same endorsement responses, same txId, etc. Should I change that?
You should be able to use the same steps for the second orderer
ok, i will try it
@jyellick I applied the same steps as with the first transaction: new transactionId, proposal requests, etc. Now it is working like a charm! :)
Now, if the node-sdk does automatic failover for me, everything will be perfect.
Can we create a structure where the orderer is part of the same organization as the peers, instead of creating a separate orderer organization?
Has joined the channel.
Hi All!!!
I'm trying to use Kafka as the ordering service, but I have a doubt: does it make sense to have only 1 orderer and 4 Kafka instances in my network?
single machine
Has joined the channel.
Has joined the channel.
Are there any documents regarding the implementation of Kafka?
> makes sense to have 1 orderer only and 4 kafka instances in my network ?
@ascatox In general, no. Using Kafka gives "crash fault tolerance", so putting all services on one machine eliminates this benefit. It is of course a fine scenario for testing, but it should never be deployed to production this way.
> Can we create a structure where orderer is a part of the same organization as peers instead of creating separate orderer organization ?
@ranjan008 It is possible, but strongly discouraged. Remember, members of the ordering organization by default have the authority to sign blocks, and they also have the authority to read any channel. Even if the same organization participates in both ordering and application, it is better to logically separate them into two different Fabric organizations.
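Concretely, that logical separation amounts to two distinct MSP definitions in `configtx.yaml`, even when one company operates both; a sketch with placeholder names and paths:

```yaml
Organizations:
  - &OrdererOrg
    Name: OrdererOrg
    ID: OrdererMSP        # signs blocks, can read every channel
    MSPDir: crypto-config/ordererOrganizations/example.com/msp
  - &Org1
    Name: Org1MSP
    ID: Org1MSP           # application org: peers, endorsement, etc.
    MSPDir: crypto-config/peerOrganizations/org1.example.com/msp
```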
> Any documents regarding the implementation of kafka.
@Unni_1994 http://hyperledger-fabric.readthedocs.io/en/latest/kafka.html is a good place to start
Has joined the channel.
Hi, I am still facing issues setting up the Kafka consensus. Could you point me to documentation for implementing it?
Has joined the channel.
@Unni_1994 have you ever set up a Kafka cluster?
no
beginner
@Unni_1994 As suggested in the FAQ linked as this channel's description ( https://wiki.hyperledger.org/chat_channels/fabric-orderer ) it's best to get familiar with Kafka without Fabric before trying to integrate the two. A good place to start is https://kafka.apache.org/quickstart
I keep getting the error below from the orderer log when connecting peers to the orderer; can someone tell me what could be wrong with my configuration?
```
2018-04-18 15:16:07.653 UTC [cauthdsl] deduplicate -> ERRO 557dca Principal deserialization failure (the supplied identity is not valid: x509: certificate signed by unknown authority (possibly because of "x509: ECDSA verification failure" while trying to verify candidate authority certificate "ca.org1.at2chain.ai2baas.com")) for identity 0a074f7267314d535012c7062d2d2d2d2d424547494e2043455254494649434154452d2d2d2d2d0a4d494943505443434165536741774942416749514a2b304837514b735153433534593678554478494444414b42676771686b6a4f50515144416a43426854454c0a4d416b474131554542684d4356564d78457a415242674e5642416754436b4e6862476c6d62334a7561574578466a415542674e564241635444564e68626942470a636d467559326c7a59323878496a416742674e5642416f54475739795a7a457559585179593268686157347559576b79596d46686379356a623230784a54416a0a42674e5642414d5448474e684c6d39795a7a457559585179593268686157347559576b79596d46686379356a623230774868634e4d5467774e4445334d5451770a4f5455325768634e4d6a67774e4445304d5451774f545532576a42744d517377435159445651514745774a56557a45544d4245474131554543424d4b513246730a61575a76636d3570595445574d4251474131554542784d4e5532467549455a795957356a61584e6a627a45784d4338474131554541784d6f59585179593268680a615734746347566c636a4d7562334a6e4d53356864444a6a614746706269356861544a695957467a4c6d4e766254425a4d424d4742797147534d3439416745470a43437147534d343941774548413049414241544b42592b3758585372382f41523437457a643748707733423163336334667558425a68725a6730527a514d776f0a6348524a326168496d49385a41544d596f42787155723348444d69386e66506c76674c784251366a5454424c4d41344741315564447745422f775145417749480a6744414d42674e5648524d4241663845416a41414d437347413155644977516b4d434b41494e2b586b4d616d634757643057717771414b534b72784752755a4b0a79507646436c6a5666796f3753544f774d416f4743437147534d343942414d43413063414d4551434948456138616e4b2b696a324f62515552704d5758432b6a0a47784e71517732746f4a724d486951585346616c41694174576230334f3132787242644b4c6f31637151396e546243777a577a75734d776a446e546d4c5a76630a74413d3d0a2d2d2d2d2d
454e442043455254494649434154452d2d2d2d2d0a
2018-04-18 15:16:07.653 UTC [cauthdsl] func1 -> DEBU 557dcb 0xc4201da058 gate 1524064567653370907 evaluation starts
2018-04-18 15:16:07.653 UTC [cauthdsl] func2 -> DEBU 557dcc 0xc4201da058 signed by 0 principal evaluation starts (used [false])
2018-04-18 15:16:07.653 UTC [cauthdsl] func2 -> DEBU 557dcd 0xc4201da058 principal evaluation fails
2018-04-18 15:16:07.653 UTC [cauthdsl] func1 -> DEBU 557dce 0xc4201da058 gate 1524064567653370907 evaluation fails
2018-04-18 15:16:07.653 UTC [policies] Evaluate -> DEBU 557dcf Signature set did not satisfy policy /Channel/Application/Org1MSP/Readers
2018-04-18 15:16:07.653 UTC [policies] Evaluate -> DEBU 557dd0 == Done Evaluating *cauthdsl.policy Policy /Channel/Application/Org1MSP/Readers
2018-04-18 15:16:07.653 UTC [policies] func1 -> DEBU 557dd1 Evaluation Failed: Only 0 policies were satisfied, but needed 1 of [ Org1MSP.Readers ]
2018-04-18 15:16:07.653 UTC [policies] Evaluate -> DEBU 557dd2 Signature set did not satisfy policy /Channel/Application/Readers
2018-04-18 15:16:07.653 UTC [policies] Evaluate -> DEBU 557dd3 == Done Evaluating *policies.implicitMetaPolicy Policy /Channel/Application/Readers
2018-04-18 15:16:07.653 UTC [policies] func1 -> DEBU 557dd4 Evaluation Failed: Only 0 policies were satisfied, but needed 1 of [ Orderer.Readers Application.Readers ]
2018-04-18 15:16:07.653 UTC [policies] Evaluate -> DEBU 557dd5 Signature set did not satisfy policy /Channel/Readers
2018-04-18 15:16:07.653 UTC [policies] Evaluate -> DEBU 557dd6 == Done Evaluating *policies.implicitMetaPolicy Policy /Channel/Readers
2018-04-18 15:16:07.653 UTC [common/deliver] deliverBlocks -> WARN 557dd7 [channel: ccert] Client authorization revoked for deliver request from 172.16.4.1:60994: Failed to reach implicit threshold of 1 sub-policies, required 1 remaining: permission denied
2018-04-18 15:16:07.653 UTC [common/deliver] Handle -> DEBU 557dd8 Waiting for new SeekInfo from 172.16.4.1:60994
2018-04-18 15:16:07.653 UTC [common/deliver] Handle -> DEBU 557dd9 Attempting to read seek info message from 172.16.4.1:60994
2018-04-18 15:16:07.689 UTC [common/deliver] Handle -> WARN 557dda Error reading from 172.16.4.1:60994: rpc error: code = Canceled desc = context canceled
2018-04-18 15:16:07.689 UTC [orderer/common/server] func1 -> DEBU 557ddb Closing Deliver stream
```
Hi @jyellick, you said "v1.1 codebase" — does this mean fabric v1.1?
> (Assuming you are using the v1.1 codebase, the v1.0 codebase does not scale well horizontally)
@sh777 Please use a service like hastebin.com when pasting logs, you can paste more context and the channel will stay readable.
What organization are the peers a member of?
> Hi @jyellick, you said v1.1 codebase does this mean fabric v1.1?
@Ryan2 Yes, v1.1.0 fabric
When adding a new ordering node (into "OrdererAddresses"), what needs to be done for the new ordering node to join existing channels?
(I started the new node, but it does not fetch data from Kafka)
@jyellick so basically you are saying that all orderer nodes have to be in a separate organization, say an orderer organization, since that organization will have access to all the channel information. And if that's possible, how can we achieve it?
When upgrading to 1.1.0 from 1.0.2, the upgrade completed. However, several transactions were written without enabling the Capabilities. Is there a problem?
Today's upgrade of the production environment encountered a thornier problem; cbcagenesis is the system channel.
executed:
```peer channel fetch config config_block.pb -o orderer0.bqj.cn:7050 -c cbcagenesis --tls --cafile $ORDERER_CA```
get error:
```
peer channel fetch config config_block.pb -o orderer0.bqj.cn:7050 -c cbcagenesis --tls --cafile $ORDERER_CA
2018-04-19 12:37:10.876 UTC [msp] GetLocalMSP -> DEBU 001 Returning existing local MSP
2018-04-19 12:37:10.877 UTC [msp] GetDefaultSigningIdentity -> DEBU 002 Obtaining default signing identity
2018-04-19 12:37:10.930 UTC [channelCmd] InitCmdFactory -> INFO 003 Endorser and orderer connections initialized
2018-04-19 12:37:10.930 UTC [msp] GetLocalMSP -> DEBU 004 Returning existing local MSP
2018-04-19 12:37:10.930 UTC [msp] GetDefaultSigningIdentity -> DEBU 005 Obtaining default signing identity
2018-04-19 12:37:10.930 UTC [msp] GetLocalMSP -> DEBU 006 Returning existing local MSP
2018-04-19 12:37:10.930 UTC [msp] GetDefaultSigningIdentity -> DEBU 007 Obtaining default signing identity
2018-04-19 12:37:10.930 UTC [msp/identity] Sign -> DEBU 008 Sign: plaintext: 0A9B060A1708021A0608F69AE2D60522...A0AE0A6D829512080A020A0012020A00
2018-04-19 12:37:10.930 UTC [msp/identity] Sign -> DEBU 009 Sign: digest: 196F2086EF7535170E642A6D45CB049F8F2617EA6A096E1CF481B3AAC33975EB
2018-04-19 12:37:10.934 UTC [channelCmd] readBlock -> DEBU 00a Got status: &{SERVICE_UNAVAILABLE}
Error: can't read the block: &{SERVICE_UNAVAILABLE}
Usage:
```
then I checked the orderer's log, got information below:
```
2018-04-19 05:23:17.321 UTC [orderer/consensus/kafka] try -> DEBU 5c3 [channel: cbcagenesis] Connecting to the Kafka cluster
2018-04-19 05:23:17.322 UTC [orderer/consensus/kafka] try -> DEBU 5c4 [channel: cbcagenesis] Need to retry because process failed = kafka server: The requested offset is outside the range of offsets maintained by the server for the given topic/partition.
2018-04-19 05:23:17.363 UTC [orderer/common/server] Deliver -> DEBU 5c5 Starting new Deliver handler
2018-04-19 05:23:17.363 UTC [common/deliver] Handle -> DEBU 5c6 Starting new deliver loop for 47.95.252.160:50106
2018-04-19 05:23:17.363 UTC [common/deliver] Handle -> DEBU 5c7 Attempting to read seek info message from 47.95.252.160:50106
2018-04-19 05:23:17.364 UTC [common/deliver] deliverBlocks -> WARN 5c8 [channel: cbcagenesis] Rejecting deliver request for 47.95.252.160:50106 because of consenter error
2018-04-19 05:23:17.364 UTC [common/deliver] Handle -> DEBU 5c9 Waiting for new SeekInfo from 47.95.252.160:50106
2018-04-19 05:23:17.364 UTC [common/deliver] Handle -> DEBU 5ca Attempting to read seek info message from 47.95.252.160:50106
2018-04-19 05:23:17.369 UTC [common/deliver] Handle -> WARN 5cb Error reading from 47.95.252.160:50106: rpc error: code = Canceled desc = context canceled
2018-04-19 05:23:17.369 UTC [orderer/common/server] func1 -> DEBU 5cc Closing Deliver stream
2018-04-19 05:23:18.321 UTC [orderer/consensus/kafka] try -> DEBU 5cd [channel: cbcagenesis] Connecting to the Kafka cluster
2018-04-19 05:23:18.322 UTC [orderer/consensus/kafka] try -> DEBU 5ce [channel: cbcagenesis] Need to retry because process failed = kafka server: The requested offset is outside the range of offsets maintained by the server for the given topic/partition.
```
we have been running version 1.0.2 for several months and have written a lot to the application channel
but the `offset is outside the range of offsets ...` error should not appear on the system channel
could anyone help us solve this?
@duwenhui is my teammate; the Kafka log is pasted below
```
{"log":"[2018-04-19 08:54:52,473] INFO Truncating log cbcagenesis-0 to offset 623. (kafka.log.Log)\n","stream":"stdout","time":"2018-04-19T08:54:52.473874775Z"}
{"log":"[2018-04-19 08:54:52,497] INFO [ReplicaFetcherManager on broker 3] Added fetcher for partitions List([[cbcagenesis,0], initOffset 623 to
```
> when adding new node for ordering (into "OrdererAddresses") what action need to be done to new ordering node join existing channels
> (I start new node, but it does not fetch data from kafka)
@Ryan2 You should be able to bootstrap your new orderer with your original genesis block and things should 'just work'
@jyellick really?
Yes, so long as your Kafka logs have not been pruned/rolled
If they have, you will need to copy the ledger directory from a working orderer before starting.
@jyellick you mean every orderer node's ledger is the same?
The metadata will be slightly different, but the blocks are all the same, yes
what's the metadata? is it the index?
The metadata will include the offsets, which will be the same, but the metadata also contains a signature the orderer generated attesting to the validity of the block. Since each orderer has a different identity, this signature will vary by orderer. Otherwise the data will be the same.
would you please help answer the question from @duwenhui, @jyellick?
@duwenhui @baoyangc First, when pasting your logs, as indicated in the channel topic link, please use a service like hastebin.com. Including them in this channel makes it very difficult to read.
> However, several transactions were written without opening the Capabilities. Is there a problem?
I don't understand what this means
@jyellick are transactions synchronous in Fabric?
@asaningmaxchain123 I also don't understand your question
Are the transactions synchronous?
From what perspective? From a client perspective, transactions are asynchronous. They are submitted to ordering, receive a `SUCCESS` if they are accepted for ordering, and then at some later time, they commit at each peer. The client listens for the commit asynchronously via the event services.
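The flow described above (submit, get `SUCCESS` immediately, observe the commit later through the event service) can be sketched as a tiny Go program. All names here (`orderingService`, `Submit`, `commits`) are hypothetical stand-ins, not the Fabric SDK:

```go
package main

import (
	"fmt"
	"time"
)

type Status string

const Success Status = "SUCCESS"

// orderingService stands in for the orderer plus the peer event
// service the client subscribes to.
type orderingService struct {
	commits chan string // commit events, delivered asynchronously
}

// Submit accepts the tx for ordering and returns immediately;
// the block commit happens later, on a different goroutine.
func (o *orderingService) Submit(txID string) Status {
	go func() {
		time.Sleep(10 * time.Millisecond) // ordering + commit happen later
		o.commits <- txID
	}()
	return Success
}

func main() {
	svc := &orderingService{commits: make(chan string, 1)}

	// SUCCESS only means "accepted for ordering", not "committed".
	fmt.Println("broadcast status:", svc.Submit("tx-42"))

	// The client learns of the commit asynchronously via the event stream.
	fmt.Println("committed:", <-svc.commits)
}
```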
so if I want to know whether the tx took effect, I just get the event from the peer?
Yes
@jyellick transactions were committed into blocks. I don't think that matters. The question is:
we executed `peer channel fetch config confib_block.pb ....` and got the errors below
```
bock -> DEBU 00a Got status: &{SERVICE_UNAVAILABLE}
Error: can't read the block: &{SERVICE_UNAVAILABLE}
```
Another question about gossip: when the peer uses gossip to broadcast ledger data, is the scope org-scoped or channel-scoped? @jyellick
@jyellick ok
and the orderer's log says: https://hastebin.com/irigubewak.vbs
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=X4ZAsiefvfsCsXoRy) @baoyangc please make sure the service starts normally
yes the orderer is working
```peer channel fetch config confib_block.pb -c systemchainid```
we are trying to fetch the config block from the system channel
@asaningmaxchain123 This is not the appropriate channel for gossip questions, please try #fabric-gossip
@baoyangc The log you pasted indicates that the orderer has _not_ started correctly, it cannot connect to Kafka, it claims Kafka is missing offsets. It sounds like your Kafka logs have rolled.
is there a method to fix this?
If you have a currently working orderer, then you may copy its ledger to your failed orderer and restart the failed orderer. This should advance the offset of the failed orderer past the point the Kafka logs have rolled.
I suggest that you disable the Kafka log expiration, or ensure you periodically take backups of the orderer ledger so that this scenario is recoverable
the channel we fetch config from is just the system channel; it should not have much data
I can't understand what made this happen
@baoyangc When the orderer starts, it attempts to connect to the last offset it knows in Kafka.
If this offset is missing, then it will return the error you mentioned
Kafka logs can be configured to expire after some period of time (regardless of how large they are). I expect this is what occurred to you.
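The failure mode just described can be sketched in Go (simplified, with hypothetical names): once retention expires old segments, the partition's start offset advances, and an orderer resuming from its last persisted offset fails if that offset now falls before the new start.

```go
package main

import "fmt"

// partition models a Kafka topic partition's retained offset window.
type partition struct {
	startOffset int64 // oldest retained offset; moves forward as segments expire
	endOffset   int64 // next offset to be written
}

// fetch mimics the broker's check that a requested offset is still
// within the retained range.
func (p partition) fetch(offset int64) error {
	if offset < p.startOffset || offset > p.endOffset {
		return fmt.Errorf("offset %d outside range [%d, %d]", offset, p.startOffset, p.endOffset)
	}
	return nil
}

func main() {
	p := partition{startOffset: 0, endOffset: 600}
	lastPersisted := int64(591) // the orderer's last known offset

	fmt.Println(p.fetch(lastPersisted)) // <nil>: offset still retained

	// Retention rolls the old segments: start offset jumps past 591,
	// and the orderer's resume request is now rejected.
	p.startOffset = 623
	fmt.Println(p.fetch(lastPersisted))
}
```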
but why can we fetch blocks from an application channel?
we have tried cleaning the orderer's data and restarting it, and we found that the system channel's data is recovered, but the application channel's data is not recovered
@jyellick
Can anyone tell me the difference between the system channel and the application channel?
@jyellick
@yacovm
@baoyangc Do you have a working orderer which you may recover the database from?
@pankajcheema The orderer system channel is used internally by the orderers to coordinate channel creation and only the orderer organization has authority to read it (as it can leak channel creation details). Peers are generally not connected to the orderer system channel, but instead are connected to application channels.
@baoyangc If you have a working orderer, I suggest you do the following
Create a new channel, just so that you get a new record in the Kafka partition
Then, stop this orderer, backup its ledger, and copy the ledger to your failed orderer
Then start both orderers back up
I expect this will resolve your problem
In the future I suggest that you disable Kafka log expiration
@jyellick does it mean we should not set expiration on Kafka logs?
I can't fetch the config block of the system channel from any orderer node, but transactions submitted to these orderers can be committed into the peer's ledger
@baoyangc Each channel is managed independently. This is expected.
@pankajcheema Correct, I recommend for production setups that Kafka logs do not expire.
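For reference, disabling expiration comes down to the broker's retention settings. A minimal `server.properties` fragment (verify the keys against the Kafka version you run; the Fabric Kafka image typically takes these as `KAFKA_`-prefixed environment variables):

```properties
# Disable time-based log retention (-1 = never expire by age)
log.retention.ms=-1
# Disable size-based log retention (-1 = no size limit)
log.retention.bytes=-1
```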
@jyellick I'm back. We currently have about 15 orderers configured and about 10 Kafka brokers, but we only enabled 4 Kafka brokers and 3 orderers. The current situation is that we can write and query ledger data on the application channel, but when we fetch the system channel configuration, we can't get it from any of the 3 orderers, while we can get the application channel configuration data from any of them. We also verified the following: after I deleted the data directory of orderer0, the system channel configuration data was recovered, but the application channel configuration data was not, and from this orderer we can read the system channel configuration data.
we did no transactions on the system channel after the network started
@duwenhui I am concerned that the orderer system channel was not actually appropriately recovered
Are your Kafka clusters set to expire logs?
we used the default settings of the image
@jyellick where can I check the system channel and the application channel if I export channelname=test?
@jyellick Never set it; we keep the default configuration all the time.
I think the defaults may vary based on which images you are using, @sanchezl?
@jyellick I think Kafka is set to expire log after some time by default
Clipboard - April 20, 2018, 12:52 AM (pasted image)
@jyellick I just fetched two blocks from an application channel; both succeeded
Yes @baoyangc , as I said, channels are managed independently. It is possible that the Kafka partition for one channel has problems while the other does not.
but Kafka's expiration setting should be the same for all channels
@jyellick the Kafka image versions I use now are 1.0.0 and 1.0.2
@baoyangc If you have been adding new transactions to your application channel, these transactions will have created new blocks, pointing to non-expired offsets.
But if you have not been transacting on the orderer system channel, then the blocks may point at an offset which has expired.
@jyellick can we delete the orderer system channel and create a new one, so as to get a fresh offset?
@jyellick I'm not sure we have a working orderer, because none of the orderers can be used to fetch the config block of the system channel
so I'm not sure we can create a new channel
anyway, thanks for your suggestion; we will try to create a new channel
@duwenhui I agree, your orderer is only partially functional because of this, and I expect new channel creation will not work. Although it is technically possible to make a repair, it would require implementing custom code which bypasses the Kafka consensus out of band. There are some issues in the JIRA backlog for this, but they are as of yet, unimplemented.
Clipboard - April 20, 2018, 1:20 AM (pasted image)
is `591` the correct offset?
@baoyangc The encoded offset is unrelated to the filename
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=bKKA5siuk4FzKYDpY) @jyellick AFAIK, the default is always `testchainid` (thanks to it being the default in configtxgen).
@sanchezl Wrong default. I was talking about the Kafka default log expiration
Ahh, sorry.
(And, FYI, you may have any ID you like for your orderer system channel, but if you don't specify one, you get `testchainid`)
No problem
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=kdbxneCrzyXvCrSCy) @jyellick Kafka defaults to 1 week / 1 GB retention (whichever comes first). The only difference between versions is that starting in 0.10.1, time-based retention is based on the actual timestamps of the messages, not the modified time of the segment file. One important detail to understand is that Kafka applies its retention rules to _closed_ segments. Depending on your configuration (segment size), message flow rate, and typical message size, it could take much longer than a week to expire a segment given the default retention configuration.
What about the fabric Kafka images, do we override any of these defaults? I thought we did
When adding a new Org into the network and joining a peer (from the new Org), although the peer gets the synchronized ledger data, I got this issue:
```
2018-04-20 09:39:53.050 UTC [gossip/comm] authenticateRemotePeer -> ERRO 062 Failed verifying signature from 174.32.1.161:7051 : Failed to reach implicit threshold of 1 sub-policies, required 1 remaining
2018-04-20 09:39:53.051 UTC [gossip/comm] Handshake -> WARN 063 Authentication failed: Failed to reach implicit threshold of 1 sub-policies, required 1 remaining
2018-04-20 09:39:53.052 UTC [gossip/gossip] func1 -> WARN 064 Deep probe of peer0:7051 failed: Failed to reach implicit threshold of 1 sub-policies, required 1 remaining
github.com/hyperledger/fabric/gossip/gossip.(*gossipServiceImpl).learnAnchorPeers.func1
	/opt/gopath/src/github.com/hyperledger/fabric/gossip/gossip/gossip_impl.go:249
github.com/hyperledger/fabric/gossip/discovery.(*gossipDiscoveryImpl).Connect.func1
	/opt/gopath/src/github.com/hyperledger/fabric/gossip/discovery/discovery_impl.go:152
runtime.goexit
	/usr/local/go/src/runtime/asm_amd64.s:2337
2018-04-20 09:39:53.052 UTC [gossip/discovery] func1 -> WARN 065 Could not connect to {peer0:7051 [] [] peer0:7051
```
@Ryan2 looks like one of the anchor peers has not joined the channel
or the channel config is misconfigured
hey guys!
what are the ports I need to keep open in the zookeeper machines for kafka ordering?
@bandreghetti https://stackoverflow.com/questions/18168541/what-is-zookeeper-port-and-its-usage ought to get you going
oh thanks, I had added 2181 and 2888 but forgot to open 3888 too
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=sAXgfdv2GjF8rrLEc) @jyellick We disable the time base retention in our images, but the size based retention is still active.
hello everyone, I am trying to start the "first-network" but ran into some errors. I would be grateful if someone can tell how to fix them.
CLI container logs: https://pastebin.com/FRwSic96
orderer container logs: https://pastebin.com/43vdZEA9
Hi guys,
Could you explain how the orderer assembles a block of transactions? How many transactions can be aggregated into a block?
Hi Experts, can I endorse transactions in parallel with different endorsing peers, each separately simulating a different transaction on the state of the same asset in the same ledger? If yes, how will the other endorsing peers and the orderer validate the state of that asset against the same ledger while it is being edited by many endorsing peers in parallel?
> Could you explain how the order assembles a block of transactions? how many transactions can be aggregated into a block?
@tiennv This is a bit of a broad question, but the orderer applies rules found in the config (such as max messages per block, preferred block size, etc.) to decide at what point to start the next block
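The cutting rules can be sketched as a small Go batching loop. This is a simplified sketch with hypothetical names, not Fabric's actual blockcutter package: a batch is cut when the message count limit is reached or when the next message would push the batch past the preferred byte size (the real orderer also cuts on a batch timeout, omitted here):

```go
package main

import "fmt"

type cutterConfig struct {
	maxMessageCount   int // cut when this many messages are pending
	preferredMaxBytes int // cut before exceeding this accumulated size
}

type blockCutter struct {
	cfg     cutterConfig
	pending [][]byte
	size    int
}

// ordered appends a message and returns a completed batch when a
// cutting rule fires, else nil.
func (b *blockCutter) ordered(msg []byte) [][]byte {
	// Size rule: adding msg would overflow the preferred size, so cut
	// the current batch and start a new one with msg.
	if b.size+len(msg) > b.cfg.preferredMaxBytes && len(b.pending) > 0 {
		batch := b.pending
		b.pending, b.size = nil, 0
		b.push(msg)
		return batch
	}
	b.push(msg)
	// Count rule: the batch is full.
	if len(b.pending) >= b.cfg.maxMessageCount {
		batch := b.pending
		b.pending, b.size = nil, 0
		return batch
	}
	return nil
}

func (b *blockCutter) push(msg []byte) {
	b.pending = append(b.pending, msg)
	b.size += len(msg)
}

func main() {
	bc := &blockCutter{cfg: cutterConfig{maxMessageCount: 2, preferredMaxBytes: 1024}}
	fmt.Println(bc.ordered([]byte("tx1")) != nil) // false: batch not full yet
	fmt.Println(bc.ordered([]byte("tx2")) != nil) // true: count limit cuts a batch
}
```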
@NeerajKumar
>  can i endorse transactions in parallel by different endorsing peers all saperatly simulating different transactions on state of same asset in a same ledger
Yes, you may
> if yes how other endorsing peers and orderer will manage to validate the state of that asset against the same ledger being edited by many endorsing peers in a parallel manner?
The orderer does not validate the endorsements, only that the client is authorized to submit. The peer validates that the set of endorsements all endorse the same state modifications (using the versioned RW set and MVCC checks), and that sufficiently many endorsements have been included according to the endorsement policy.
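The MVCC part of that check can be sketched in Go (simplified, hypothetical names): each endorsed read records the key version observed at simulation time, and at commit the tx is invalidated if any read version no longer matches current state. That is how parallel edits to the same asset get serialized.

```go
package main

import "fmt"

// version identifies the committed transaction that last wrote a key.
type version struct{ blockNum, txNum uint64 }

// readSet maps each key read during simulation to the version observed.
type readSet map[string]version

// validate returns false if any key read by the tx has changed since
// simulation (an MVCC conflict), true otherwise.
func validate(rs readSet, state map[string]version) bool {
	for key, readVer := range rs {
		if state[key] != readVer {
			return false
		}
	}
	return true
}

func main() {
	state := map[string]version{"asset1": {blockNum: 5, txNum: 0}}

	// Two txs simulated in parallel against the same asset version.
	tx1 := readSet{"asset1": {5, 0}}
	tx2 := readSet{"asset1": {5, 0}}

	fmt.Println(validate(tx1, state)) // true: first tx commits
	state["asset1"] = version{6, 0}   // commit bumps the key's version
	fmt.Println(validate(tx2, state)) // false: second tx is invalidated
}
```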
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=grHK7YakeHpCCv2KD) @jyellick Thanks. Where can I look at the config?
You may view the defaults in this orderer section of your `configtx.yaml` https://github.com/hyperledger/fabric/blob/release-1.1/sampleconfig/configtx.yaml
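The relevant knobs sit in the `Orderer` section of that file; the values below are illustrative, so consult the linked sampleconfig for the actual defaults:

```yaml
Orderer:
  # Max time to wait before cutting a block even if it is not full
  BatchTimeout: 2s
  BatchSize:
    # Max number of transactions per block
    MaxMessageCount: 10
    # Hard cap on the serialized block size
    AbsoluteMaxBytes: 10 MB
    # Soft target: cut the block before exceeding this size
    PreferredMaxBytes: 512 KB
```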
I got this error, do you know how this happens?
`2018-04-23 05:49:35.259 UTC [gossip/comm] GossipStream -> ERRO 028 Authentication failed: failed classifying identity: Unable to extract msp.Identity from peer Identity: Peer Identity [0a 07 4f 72 672d 2.... 2d 2d 2d 0a] cannot be validated. No MSP found able to do that.`
@Ryan2 What peer version are you using? Are you using idemix?
Hi @jyellick I'm using v1.1.0 for the peer, and Go 1.9
@jyellick We tried to create a new channel to recover the system channel offset of the Kafka log, but I can't create it at all. The peer responds below:
```
root@438c183f79d2:/opt/gopath/src/github.com/hyperledger/fabric/peer# peer channel create -f ./channel-artifacts/interchannel.tx -c internal -o orderer0.bqj.cn:7050 --tls --cafile /opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/ordererOrganizations/bqj.cn/orderers/orderer0.bqj.cn/msp/tlscacerts/tlsca.bqj.cn-cert.pem -t 50
2018-04-23 06:52:34.355 UTC [msp] GetLocalMSP -> DEBU 001 Returning existing local MSP
2018-04-23 06:52:34.355 UTC [msp] GetDefaultSigningIdentity -> DEBU 002 Obtaining default signing identity
2018-04-23 06:52:34.407 UTC [channelCmd] InitCmdFactory -> INFO 003 Endorser and orderer connections initialized
2018-04-23 06:52:34.407 UTC [msp] GetLocalMSP -> DEBU 004 Returning existing local MSP
2018-04-23 06:52:34.407 UTC [msp] GetDefaultSigningIdentity -> DEBU 005 Obtaining default signing identity
2018-04-23 06:52:34.407 UTC [msp] GetLocalMSP -> DEBU 006 Returning existing local MSP
2018-04-23 06:52:34.407 UTC [msp] GetDefaultSigningIdentity -> DEBU 007 Obtaining default signing identity
2018-04-23 06:52:34.407 UTC [msp/identity] Sign -> DEBU 008 Sign: plaintext: 0A97060A12626C6F636B636861696E62...6F727469756D120812060A0463626361
2018-04-23 06:52:34.407 UTC [msp/identity] Sign -> DEBU 009 Sign: digest: BA41717B27E752333B5F7C29E1B3575AE93C4A07450E62DA1B50F4F1141447F1
2018-04-23 06:52:34.408 UTC [msp] GetLocalMSP -> DEBU 00a Returning existing local MSP
2018-04-23 06:52:34.408 UTC [msp] GetDefaultSigningIdentity -> DEBU 00b Obtaining default signing identity
2018-04-23 06:52:34.408 UTC [msp] GetLocalMSP -> DEBU 00c Returning existing local MSP
2018-04-23 06:52:34.408 UTC [msp] GetDefaultSigningIdentity -> DEBU 00d Obtaining default signing identity
2018-04-23 06:52:34.408 UTC [msp/identity] Sign -> DEBU 00e Sign: plaintext: 0ACD060A1408021A0608B285F6D60522...114493048044DA460B5133C84003F756
2018-04-23 06:52:34.408 UTC [msp/identity] Sign -> DEBU 00f Sign: digest: C1A7C0CABF178688E13AA1B38D3454CE081144873510E19D3E271624618BECB0
Error: Got unexpected status: SERVICE_UNAVAILABLE
Usage:
peer channel create [flags]
```
Orderer logs below:
```
[sarama] 2018/04/23 06:49:25.263024 client.go:601: client/metadata fetching metadata for all topics from broker kafka2.bqj.cn:9092
[sarama] 2018/04/23 06:49:25.423179 client.go:601: client/metadata fetching metadata for all topics from broker kafka1.bqj.cn:9092
2018-04-23 06:52:34.609 UTC [orderer/kafka] Enqueue -> WARN 008 [channel: cbcagenesis] Will not enqueue, consenter for this channel hasn't started yet
2018-04-23 06:52:34.614 UTC [orderer/common/deliver] Handle -> WARN 009 Error reading from stream: rpc error: code = Canceled desc = context canceled
```
so now I don't have a working orderer at all. I can't recover the log this way.
Do you have any other good suggestions for this situation?
If we can't recover this ledger data, is there a good way to restore the ledger data to a new Hyperledger chain?
```
so now I don't have a working orderer at all. I can't recover the log this way.
How do we simply reproduce this Kafka log deletion bug in a test environment?
Do you have any other good suggestions for this situation?
If we can't recover this ledger data, is there a good way to restore the ledger data to a new Hyperledger chain?
@duwenhui As @jyellick [specifically asked](https://chat.hyperledger.org/channel/fabric-orderer?msg=ZbRqAiprwGKdSQzjh), can you use a service like Hastebin for your logs instead of posting them here directly? This is also noted in the README in this channel.
> How do we simplely reproduce this bug of kafka log deletion on test environment?
@duwenhui I've looked at all the messages that your colleague @baoyangc and you have posted. It is simply impossible to tell what exactly happened in your system based on the data that we have, and we cannot reproduce this locally as we would have to replay every transaction that you pushed into your network. What we can state with certainty, as @jyellick noted, is that your Kafka logs rolled at one point in time, which is why your orderers are unable to retrieve a message with a certain offset from Kafka.
> Do you have any other good suggestiong for this situation?
> if we can't recover this ledger data. can we have some good way to restore ledger data to a new chain of Hyperledger?
As for this -- can you please do a post summarizing exactly where you are and _everything_ that you've tried, so that we're all on the same page? I see messages that seemingly contradict each other, such as:
> we have tried to clean orderer's data and then restart it, and we found that the system channel's data is recovered. but the application's channel's data is not recovered
and then:
> @jyellick I just fetch two block in an application channel, they both success
And I'm at a loss as to what you have managed to fix and what not.
https://chat.hyperledger.org/channel/fabric-orderer?msg=ZJwjPBQtZeRQExNdn
@Ryan2: This is not related to #fabric-orderer
@kostas this is just the situation as @jyellick noted: our Kafka logs rolled at one point in time, which is why our orderers are unable to retrieve a message with a certain offset from Kafka. We can't read the system channel config data with the `peer channel fetch` command, and the orderer logs below:
Cannot set up channel consumer = kafka server: The requested offset is outside the range of offsets maintained by the server for the given topic/partition.
@jyellick suggested a way that we can create a new channel to recreate the Kafka logs of the system channel which rolled, but we can't create a new channel at all
I'm not sure I follow.
@jyellick Do you have any suggestion for my question? would you please give some more help to me?
@duwenhui Your Kafka logs have expired, so you have limited options. You may delete and recreate the topic for the orderer system channel, and re-populate it with empty records until the offset is correct. However, this would not be a 'supported' path. You have lost data at your Kafka brokers due to log expiration, there is no way to recover it.
@jyellick so the best way to solve this production environment question is to recreate a new chain and write the data; re-populating it with empty records is not a decent way.
@duwenhui As I said, you have lost data, it is not possible to exactly re-create the Kafka log as it was. Because your orderers have already processed that data, it is safe to replace it with empty records. You will not be able to bootstrap a new orderer from genesis anymore, you will have to copy the ledger of an orderer which has already processed beyond the empty records. It is not a good solution, but the records are gone, and unless you took a backup of your Kafka logs before they expired, there is no way to recover them.
I totally get it. If I go the route of re-populating it with empty records, how do I get the exact offset? Can you show me this too?
@jyellick
@duwenhui That is about all the guidance I can give you. If you are not comfortable manipulating your Kafka cluster like this, you are probably best off to simply rebuild your network with better log retention settings.
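For later readers, the "better log retention settings" point is concrete in the Fabric Kafka guidance: time-based log retention should be disabled on the brokers backing the ordering service so offsets never expire under the orderer. A sketch of a docker-compose fragment (service name is a placeholder; env-var names assume the usual Docker Kafka image convention of `KAFKA_<UPPERCASED_BROKER_SETTING>`):
```yaml
kafka0:
  image: hyperledger/fabric-kafka
  environment:
    # disable time-based log retention so the orderer can always replay
    - KAFKA_LOG_RETENTION_MS=-1
    # other broker settings the Fabric Kafka guide calls out
    - KAFKA_UNCLEAN_LEADER_ELECTION_ENABLE=false
    - KAFKA_MIN_INSYNC_REPLICAS=2
    - KAFKA_DEFAULT_REPLICATION_FACTOR=3
```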
Hi @jyellick, could you give some examples of how to configure configtx.yaml for a production-grade deployment? In fact I'm not too clear about the organizations and consortiums in this file and how they work.
and will the generated genesis block include the msp certs for the organizations?
or in other words, as the genesis block contains the root cert for each org, how will it differentiate the admin role and member role of each organization?
@jyellick As I understand it, the consortium is the entity hosting the initial organizations, and each organization will carry its own root certificate and admin certificate; so if we issue a transaction in the name of the admin, it will be recognized, otherwise it's a member, right? Then I have one question: as we can dynamically add new orgs, the orgs seem different from the genesis organizations?
is it common to use just one separate Orderer Organization?
is there any case where one needs to use more than one Orderer Organization?
Can you guys give me an explanation
how will orderers communicate over Kafka? I mean, I've tested Kafka services on different machines with producing and consuming messages and it works fine with a created topic, but how will orderers create topics to communicate with each other (assuming both orderers are hosted on different machines)?
@JayPandya: All OSNs relay incoming client transactions to the appropriate topics/partitions in the Kafka cluster. All OSNs consume the appropriate topics/partitions from the Kafka cluster, store the ordered sequence of TXs in their local ledger, then serve deliver requests (from clients, validating peers).
http://hyperledger-fabric.readthedocs.io/en/release-1.1/kafka.html?highlight=kafka#big-picture
@Ryan2: For any CFT ordering service (such as Kafka, which is the only one we support now), a single orderer org makes sense. For the BFT case (which isn't supported yet), you'll need to use multiple ordering orgs.
> Hi @jyellick , could you give some examples on how to configure configtx.yaml for production grade deployment? In fact I'm not too clear about the organizations and consortiums in this file, how they work.
@Glen: This question is _way_ too broad. What are your questions specifically?
Hi @kostas, I've got the answer for my above questions here, so I can modify configtx.yaml accordingly for a production-grade deployment.
Has joined the channel.
thank you @kostas
Hi, I have a question regarding endorsement policy in the fabric-peer-endorser-committer room, but haven't gotten feedback yet, which is:
my network has 2 orgs, A and B; OrgA has 2 peers and OrgB has 2 peers.
what is the endorsement policy syntax for the case of
"2 signatures from org A" and "1 signature from org B"?
I used `-P "AND ('OrgAMSP.member','OrgAMSP.member','OrgBMSP.member')"` and saw block committed.
But I don't know whether it's valid syntax or not; please correct me
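A hedged note for later readers: Fabric's policy evaluator deduplicates identical identities (the `deduplicate` step that shows up in cauthdsl log lines), so the repeated `OrgAMSP.member` principal should indeed require two distinct OrgA signers; verify this against your Fabric version. Supplied at instantiation it would look like this (channel and chaincode names are placeholders):
```shell
peer chaincode instantiate -C mychannel -n mycc -v 1.0 -c '{"Args":["init"]}' \
  -P "AND ('OrgAMSP.member','OrgAMSP.member','OrgBMSP.member')"
```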
Has joined the channel.
after generating the genesis block for a new configtx update,
when I try to create a channel it throws the following error:
```
2018-04-26 12:10:01.511 UTC [msp] GetLocalMSP -> DEBU 001 Returning existing local MSP
2018-04-26 12:10:01.512 UTC [msp] GetDefaultSigningIdentity -> DEBU 002 Obtaining default signing identity
2018-04-26 12:10:01.515 UTC [channelCmd] InitCmdFactory -> INFO 003 Endorser and orderer connections initialized
2018-04-26 12:10:01.518 UTC [msp] GetLocalMSP -> DEBU 004 Returning existing local MSP
2018-04-26 12:10:01.518 UTC [msp] GetDefaultSigningIdentity -> DEBU 005 Obtaining default signing identity
2018-04-26 12:10:01.519 UTC [msp] GetLocalMSP -> DEBU 006 Returning existing local MSP
2018-04-26 12:10:01.519 UTC [msp] GetDefaultSigningIdentity -> DEBU 007 Obtaining default signing identity
2018-04-26 12:10:01.519 UTC [msp/identity] Sign -> DEBU 008 Sign: plaintext: 0AA2060A074F7267314D53501296062D...6D706F736572436F6E736F727469756D
2018-04-26 12:10:01.519 UTC [msp/identity] Sign -> DEBU 009 Sign: digest: C168C3209A890B57D6C7B44E2DEAC04F2912B1630302AFFA0F10D57638D5E0F0
2018-04-26 12:10:01.519 UTC [msp] GetLocalMSP -> DEBU 00a Returning existing local MSP
2018-04-26 12:10:01.520 UTC [msp] GetDefaultSigningIdentity -> DEBU 00b Obtaining default signing identity
2018-04-26 12:10:01.520 UTC [msp] GetLocalMSP -> DEBU 00c Returning existing local MSP
2018-04-26 12:10:01.520 UTC [msp] GetDefaultSigningIdentity -> DEBU 00d Obtaining default signing identity
2018-04-26 12:10:01.520 UTC [msp/identity] Sign -> DEBU 00e Sign: plaintext: 0ADF060A1B08021A0608998387D70522...5BEC41259A053D2DAC7414ABF4FBCF17
2018-04-26 12:10:01.520 UTC [msp/identity] Sign -> DEBU 00f Sign: digest: B9208324A6020DDC9EAAB87BA5EBECD6D21B7E9BD8FB044D0DE6DCFFF9EAED64
Error: got unexpected status: SERVICE_UNAVAILABLE --
```
@JayPandya Please see the FAQ linked in the channel topic: http://hyperledger-fabric.readthedocs.io/en/latest/ordering-service-faq.html
yeah, I tried that, but the error in the orderer logs is different here
```
[2018-04-26 14:17:05,280] WARN [ReplicaFetcher replicaId=1, leaderId=3, fetcherId=0] Error when sending leader epoch request for Map(testchainid-0 -> -1) (kafka.server.ReplicaFetcherThread)
java.net.SocketTimeoutException: Failed to connect within 30000 ms
at kafka.server.ReplicaFetcherBlockingSend.sendRequest(ReplicaFetcherBlockingSend.scala:92)
at kafka.server.ReplicaFetcherThread.fetchEpochsFromLeader(ReplicaFetcherThread.scala:312)
at kafka.server.AbstractFetcherThread.maybeTruncate(AbstractFetcherThread.scala:130)
at kafka.server.AbstractFetcherThread.doWork(AbstractFetcherThread.scala:102)
at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:64)
[2018-04-26 14:17:05,281] INFO [ReplicaFetcher replicaId=1, leaderId=3, fetcherId=0] Retrying leaderEpoch request for partition testchainid-0 as the leader reported an error: UNKNOWN_SERVER_ERROR (kafka.server.ReplicaFetcherThread)
```
@JayPandya This indicates a misconfiguration of your Kafka cluster. Please follow the https://kafka.apache.org/quickstart as mentioned in the FAQ and ensure that your Kafka cluster is working properly before trying to deploy Fabric with it
yeah I got that part; is there any example of Kafka integration with the orderer?
though I've tested a separate docker file for Kafka and it's working
Can you run the Kafka sample clients successfully from inside your orderer container?
Often/usually these errors occur because of host name resolution failures which can occur when the components are deployed in different docker networks for instance
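As a sketch, that check could look like the following (broker addresses and topic name are placeholders; assumes the Kafka console scripts are available in a container on the same Docker network as the orderer):
```shell
# produce one message to a scratch topic via one broker...
echo ping | kafka-console-producer.sh --broker-list kafka0:9092 --topic connectivity-test
# ...then read it back, ideally via a different broker of the same cluster
kafka-console-consumer.sh --bootstrap-server kafka1:9092 --topic connectivity-test \
  --from-beginning --max-messages 1
```
If this fails with name-resolution or timeout errors from inside the orderer's network, fix the Docker networking before touching Fabric.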
Is there a reason that there is not a 1.1.0 version here? https://hub.docker.com/r/hyperledger/fabric-couchdb/tags/
yeah, after setting up a longer timeout for channel creation it worked, though I don't know if this is normal or not
but thanks for the help @jyellick
Hi, I am adding a new org and I am getting this error
```Error: failed to create deliver client: orderer client failed to connect to orderer.example.com:7050: failed to create new connection: context deadline exceeded```
while doing this `peer channel fetch 0 mychannel.block -o orderer.example.com:7050 -c $CHANNEL_NAME --tls --cafile $ORDERER_CA`
I am able to run *same command in cli container but not in Org3cli*
@DarshanBc Sounds to me like there is a networking problem between your Org3cli and the orderer
Are you following some tutorial, or how have you verified the correctness of your network configuration?
I am actually applying ./eyfn.sh to the balance transfer example
I mean the example that's there in fabric-samples/first-network
how do I ensure there is connectivity between the org3 containers and the orderer?
Have you successfully run the original eyfn.sh example?
peer channel update -f org3_update_in_envelope.pb -c $CHANNEL_NAME -o orderer.example.com:7050 --tls --cafile $ORDERER_CA
Error: got unexpected status: BAD_REQUEST -- initializing channelconfig failed: could not create channel Application sub-group config: setting up the MSP manager failed: the supplied identity is not valid: x509: certificate signed by unknown authority (possibly because of "x509: ECDSA verification failure" while trying to verify candidate authority certificate "ca.Org3.example.com")
Usage:
Seeing this error while running the peer channel update command for adding org3 to a running fabric instance. Can anybody who has faced a similar error please help?
@jyellick By any chance do we have any step-by-step document for creating a new channel in a running fabric network? Can you please share if we have something similar
@patelan It sounds to me like there is a problem with your new CA cert, that it is somehow malformed and not a valid ECDSA certificate. Can you use a command like:
```openssl verify -CAfile ca.pem ca.pem
```
If this does not return successfully, then there is an error in your crypto generation.
Has joined the channel.
@jyellick Thanks, it is working after regenerating certs. Seeing this error with the above peer channel update: Error: got unexpected status: BAD_REQUEST -- error authorizing update: error validating DeltaSet: policy for [Group] /Channel/Application not satisfied: Failed to reach implicit threshold of 4 sub-policies, required 3 remaining
@patelan To add an organization to your network requires that a majority of the organization admins approve, but it looks like only one of the six/seven admins has signed.
@jyellick yes, I have 6 organizations. Can you quickly help with how to get signatures from all the organizations?
@jyellick If I want to allow updates signed by any one organization, is there any configuration for that?
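For the signature-collection part, a sketch (file name from the messages above; switching the admin identity per org via the `CORE_PEER_*` environment variables is the assumption here): each org's admin signs the same envelope with `peer channel signconfigtx`, and the submitting admin's `peer channel update` adds the final signature.
```shell
# repeat as each org's admin (point CORE_PEER_MSPCONFIGPATH / CORE_PEER_LOCALMSPID
# at that org's admin MSP before each invocation):
peer channel signconfigtx -f org3_update_in_envelope.pb

# the last required admin submits; `peer channel update` signs as well
peer channel update -f org3_update_in_envelope.pb -c $CHANNEL_NAME \
  -o orderer.example.com:7050 --tls --cafile $ORDERER_CA
```
As for allowing any single org: that would mean changing the /Channel/Application Admins policy to an ImplicitMeta "ANY Admins" rule via a config update, which itself needs majority approval first.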
@jyellick Thanks for your help. I am able to fix my issue
@jyellick On the new peer I am seeing the error "can't read the block: &{FORBIDDEN}" while running peer channel fetch. Can you please help with this?
@patelan Look at the orderer's log (preferably at debug) for more details
@jyellick seeing this error in orderer Evaluation Failed: Only 0 policies were satisfied, but needed 1 of [ Org3MSP.Readers Ta1MSP.Readers TabulationMSP.Readers Bank1MSP.Readers BroadridgeGlobalMSP.Readers Broker1MSP.Readers Depo1MSP.Readers ]
Yes, please take the previous 100 lines of your log and paste it to a service like hastebin.com and copy the link here and I will point out the underlying failure.
@jyellick please let me know if you need any other details https://hastebin.com/qanarekuma.hs
@patelan See line 47 (and it is repeated in other places):
``` [cauthdsl] deduplicate -> ERRO 1e0a Principal deserialization failure (the supplied identity is not valid: x509: certificate signed by unknown authority)
```
This indicates that the certificate for the identity you are using was not signed by any of the CAs known to the system.
Usually this happens because the crypto artifacts were not appropriately regenerated on network recreation
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=fcZpe3jJTbuQDrbw7) @jyellick hey, the error got solved; the problem was with the `networks:` section of the org3.yaml file
Hi, I added org3 to the existing network for the balance transfer example. Once added, I am trying to install the new chaincode on the peers of org1 and org3, and while instantiating I am getting this error ```
[2018-04-28 15:53:45.990] [DEBUG] instantiate-chaincode - Successfully got the fabric client for the organization "Org3"
[2018-04-28 15:53:45.990] [DEBUG] Helper - [NetworkConfig101.js]: getChannel - name mychannel
[2018-04-28 15:53:45.990] [DEBUG] Helper - [NetworkConfig101.js]: getPeer - name peer0.org1.example.com
[2018-04-28 15:53:45.990] [DEBUG] Helper - [NetworkConfig101.js]: getPeer - name peer1.org1.example.com
[2018-04-28 15:53:45.991] [DEBUG] Helper - [NetworkConfig101.js]: getPeer - name peer0.org2.example.com
[2018-04-28 15:53:45.991] [DEBUG] Helper - [NetworkConfig101.js]: getPeer - name peer1.org2.example.com
[2018-04-28 15:53:45.991] [DEBUG] Helper - [NetworkConfig101.js]: getPeer - name peer0.org3.example.com
[2018-04-28 15:53:45.992] [DEBUG] Helper - [NetworkConfig101.js]: getPeer - name peer1.org3.example.com
[2018-04-28 15:53:45.992] [DEBUG] Helper - [NetworkConfig101.js]: getOrderer - name orderer.example.com
[2018-04-28 15:53:46.003] [DEBUG] Helper - [crypto_ecdsa_aes]: ecdsa signature: Signature {
r:
```
I checked on the peers whether the code exists or not; it's there at this path
```root@9a8377b5940d:/var/hyperledger/production/chaincodes# ls
fabcar.v0 marbles02.v0
```
this is the command I ran to instantiate ```echo "POST instantiate chaincode on peer1 of Org3"
echo
curl -s -X POST \
http://localhost:4000/channels/mychannel/chaincodes \
-H "authorization: Bearer $ORG3_TOKEN" \
-H "content-type: application/json" \
-d "{
\"chaincodeName\":\"marbles02\",
\"chaincodeVersion\":\"v0\",
\"chaincodeType\": \"$LANGUAGE\",
\"args\":[]
}"```
I can see this error in docker logs of endorsement peers ```2018-04-28 15:18:01.873 UTC [endorser] simulateProposal -> DEBU 59b [][cfbf36fa] Exit
2018-04-28 15:18:01.873 UTC [endorser] ProcessProposal -> ERRO 59c [][cfbf36fa] simulateProposal() resulted in chaincode name:"lscc" response status 500 for txid: cfbf36fa36e2d826e2f3fa54a644af628b4ebecf89b8635dd5efde783a294de5
2018-04-28 15:18:01.873 UTC [endorser] ProcessProposal -> DEBU 59d Exit: request from%!(EXTRA string=172.19.0.1:54570)
2018-04-28 15:18:46.815 UTC [endorser] ProcessProposal -> DEBU 59e Entering: Got request from 172.19.0.1:54598```
@jyellick - After starting docker with the Kafka configuration, it started Kafka successfully (I checked the logs), but when I look at the orderer logs in the docker container it says:
```
2018-04-29 14:25:58.447 UTC [orderer/kafka] Enqueue -> DEBU 445 [channel: testchainid] Enqueueing envelope...
2018-04-29 14:25:58.447 UTC [orderer/kafka] Enqueue -> WARN 446 [channel: testchainid] Will not enqueue, consenter for this channel hasn't started yet
2018-04-29 14:25:58.451 UTC [orderer/main] func1 -> DEBU 448 Closing Broadcast stream
2018-04-29 14:25:58.451 UTC [orderer/common/deliver] Handle -> WARN 447 Error reading from stream: rpc error: code = Canceled desc = context canceled
```
and it's throwing `SERVICE_UNAVAILABLE` error
Has joined the channel.
Has joined the channel.
@jyellick Thanks. I am able to add the new org to the existing channel. The issue was fixed by creating new certs
@JayPandya: `consenter for this channel hasn't started yet` means that the connection between the ordering service node(s) and the Kafka cluster hasn't been established yet. If the Kafka cluster is unreachable (due to, say, configuration issues) you'll keep getting this issue. Can you connect to the Kafka cluster using Kafka's sample clients?
yeah, I tried consuming/producing messages on the Kafka cluster externally, and I've integrated the same configuration into docker for the orderer
Production/consumption worked w/o issues?
yeah it worked there without issue on both machines
don't know why it's not connecting to the orderer
it seems like a network issue with the host name
If it's a network issue (which means it's configuration issue at the Docker composition level), I'm not sure how I can help.
is it necessary to start the 2nd orderer on the other machine? I don't think Kafka is making a pre-check of the network
I mean, for producing/consuming messages I've just mentioned the IPs of the machines and exposed the ports, and it worked there
I am not sure what you mean by other machine. Can you summarize your setup?
yeah sure, on Machine 1 I'm starting the fabric network with 1 ca, 1 orderer and 4 kafka clusters, among which 2 will listen to the same IP of the machine and the other 2 kafka clusters will listen to the port of Machine-2
on machine 2 I will set up 1 orderer and 2 peers and will join the channel from Machine 1
but I'm getting the issue while creating the channel on Machine-1
> 4 kafka clusters among which 2 will listen to same IP of machine and other 2 kafka clusters will listen to port of Machine-2
I'm not sure what "listen to ..." means. Can you rephrase?
ohh, that means I've set 2 kafka nodes with `KAFKA_ADVERTISED_HOST_NAME` set to the IP of machine 1, and these kafka nodes will listen on specified ports like e.g. 9092, and the other 2 will have the IP of machine-2
so then if an orderer is down it can elect a leader among the other clusters
Do you suspect a bug in the orderer code?
If you do, I'll gladly look into it.
I do suspect however that this is a Kafka/Docker configuration issue.
(Whereby "suspect" I mean I'm certain.)
yeah I guess you're right, but can I see a working orderer configuration with Kafka integration?
Sure, see: https://github.com/hyperledger/fabric/tree/release-1.1/bddtests
And look for the `dc-orderer-*.yml` files.
cool, but all this will be running on the same machine, right?
@jyellick By any chance do we have any document to create new channel in running fabric layer ? Can you please help on this.
@JayPandya: Correct. This is a Docker composition.
Thanks man for the help, and this is working on the same machine with Kafka clusters
The thing I want to do is implement Kafka across different machines (connection)
There are no hard-coded references here for a single machine.
For instance, you see that the first ZK node is referenced as: `zookeeper0:2888:3888`
can you explain to me, if 1 orderer goes down, how will it elect the 2nd orderer to send txs?
I mean it will use Kafka internally, but what will be the common way for that
There is no election process going on among the orderers. I noticed that bit in your original message a few lines above as well.
If the orderer is unavailable, then the deliver component on the peer will reach out to another OSN in its list.
ohh then I misunderstood that; so how does it accomplish fault tolerance?
If a Kafka broker goes down, the rest of the Kafka cluster can work w/o issues. Likewise, if an OSN goes down, you can reach out to any other OSN and get your job (broadcast/deliver requests) done w/o issues.
ohhh, so how do I connect to an OSN on a different machine in a running fabric network?
What is the subject in the sentence above?
I mean let's say I've got 2 orderers total, the 1st on machine-1 and the 2nd on machine-2
and then the orderer goes down on machine-1; how can I send a tx to the second orderer?
What are you using to send this transaction?
The SDK?
composer
Then this is a matter of whether Composer keeps a list of all OSNs on the network, and automatically tries the next one in the list, if the first OSN is unavailable.
I would check with the composer folks in the respective RC channel.
ohhh sorry for this silly question, but then where does Kafka fit in this picture?
This is not a silly question. Clients/peers broadcast messages to the OSNs. The OSNs relay these transactions to the Kafka cluster. The Kafka cluster orders these transactions. The OSNs read (consume) the _ordered_ stream of transactions. They also persist this locally in their ledger.
Clients/peers issue deliver request to the OSNs. These are serviced by the local ledger of the OSNs.
TL;DR - Think of this as two concentric circles. The inner one is the Kafka cluster, the outer one the ordering service.
Any incoming request has to get routed to the Kafka cluster so that it gets ordered.
Once it is ordered, it is consumed by the ordering service and returned to deliver clients.
Ahhh this makes things clear to me now
and that's what I want to do: when clients/peers send messages, they will also be consumed by all OSNs (having 1 OSN on a different machine)
Why would you want to send a transaction to multiple OSNs?
hmmm, initially I wanted to do this to make sure that when an orderer goes down, read/write operations continue on the other OSNs
want to make it distributed in true sense
This will work, but come validation time (when the peers read the ordered stream of TXs from the orderers) half of your transactions (if you have 2 OSNs) will be invalid, because they're duplicates. The problem is obviously exacerbated if you add more OSNs and follow this pattern that you suggest.
> want to make it distributed in true sense
Not sure what true sense means, but this is literally not distribution. It is duplication.
Ensure that whatever client you use to broadcast messages, can failover when it comes to OSNs.
so how can I make it distributed?
and you're right, this is more of duplication, keeping the same thing all over the place :disappointed:
Assuming your OSNs and Kafka brokers are spread over more than one machine, this is already distributed?
yeah, that's what I was trying to set up: OSNs and Kafka brokers on more than 1 machine
Understood. You are misconfiguring your Kafka cluster. Proceed in an iterative fashion. Forget about Fabric. Spread Kafka over X machines, set up advertised hosts so that Kafka sample clients can reach _all_ Kafka brokers. The addresses that the Kafka sample clients use during this process are the set of addresses you need to encode in `Kafka.Brokers` in your Fabric configuration file. Now you're good to repeat this process with Fabric.
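A sketch of the two pieces that must agree (hostnames are placeholders): the orderer.yaml broker list should contain exactly the addresses that worked for the Kafka sample clients, and each broker must advertise an address resolvable from the orderer containers.
```yaml
# orderer.yaml fragment
Kafka:
  Brokers:
    - kafka0.example.com:9092   # machine 1
    - kafka1.example.com:9092   # machine 1
    - kafka2.example.com:9092   # machine 2
    - kafka3.example.com:9092   # machine 2
```
On the broker side this corresponds to `KAFKA_ADVERTISED_HOST_NAME=<that machine's reachable host/IP>` (or `KAFKA_ADVERTISED_LISTENERS` on newer Kafka images).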
got it
let's say I've kafka0, kafka1 cluster having `KAFKA_ADVERTISED_HOST_NAME` which is Ip of machine 1 and
kafka2 and kafka3 will have the same thing for another machine right?
We're delving into Apache Kafka configuration territory now. This is not Fabric related.
agree
it's just that I'm trying to connect these 2 things
btw thanks for all explanation
Hi. I am trying to create a new channel with a new organization on a running fabric network. Seeing this error: "got unexpected status: BAD_REQUEST -- Unknown consortium name: Org3Consortium". Can anyone please help if you have seen a similar error?
@patelan: As the error suggests, your channel creation request is referencing a consortium that does not exist.
@kostas Thanks for the details. Do we have any document on creating a new channel on a running fabric network? I added a new organization. Do we need to update the orderer genesis block?
@patelan: You do not need to update the genesis block. A new channel is created when a config update message is broadcasted by a client of an org to the ordering service that (a) references a channel name that the ordering service hasn't processed before, and (b) contains enough signatures to satisfy the ChannelCreationPolicy that the broadcasting client's org consortium has put forth.
When you bootstrap the ordering service, you encode consortiums and their ChannelCreationPolicies in the genesis block.
(You can modify these as needed with config updates.)
What happened in your case is that the system channel does not contain a consortium named Org3Consortium.
You can verify this by fetching the most recent configuration block of the system channel and scanning through all the consortiums available.
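For reference, a sketch of how one might do that scan (the fetch/decode commands in the comments assume a running network, the fabric binaries on PATH, and a hypothetical system-channel name `testchainid`; the JSON below is a hand-written stub of the decoded block, trimmed to the relevant path):

```shell
# The real fetch/decode steps (require a running network and fabric binaries):
#
#   peer channel fetch config config_block.pb -o orderer.example.com:7050 -c testchainid
#   configtxlator proto_decode --input config_block.pb --type common.Block > config_block.json
#
# Hand-written stub of the decoded block, trimmed to the path where
# consortiums live, so the grep below has something to run against:
cat > config_block.json <<'EOF'
{"data":{"data":[{"payload":{"data":{"config":{"channel_group":{"groups":{
  "Consortiums":{"groups":{"SampleConsortium":{},"FooBarConsortium":{}}}
}}}}}}]}}
EOF
grep -o '"[A-Za-z]*Consortium"' config_block.json
```

If the consortium named in your channel creation tx does not show up in that list, you get exactly the BAD_REQUEST error above.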
Thanks for your help. I am trying to create new channel with new org.
I have created new channel artifacts org3orgschannel.tx
Clipboard - April 30, 2018 2:03 PM
Then I am trying to create channel using peer channel create
and it is failing.
Looks like I am missing some step in between
Is the new org part of a consortium that the system channel is aware of?
no.
Then that is the step you're missing.
How to do that? Can you please help
Assuming you're using the defaults for modification policies --
yes
And you have a consortium defined with, say, 2 orgs: Foo and Bar
Let's call this consortium "FooBar"
You'll need to submit a configuration update to the system channel so that the FooBar consortium includes "Foo, Bar, _and Baz_" where "Baz" is the new org that you wish to add.
This configuration update transaction will require signatures by an administrator on Foo and an administrator on Bar.
(More generally: signed by the majority of organization admins on the consortium)
I got it. I did this while adding new org in existing channel
how to read system channel ?
system channel means any channel where the 'FooBar' consortium is there ?
Your system channel has a name. Issue a deliver request for the channel. You'll need to issue this request using the credentials from one of the ordering service orgs, since these are the orgs that are allowed to read it.
> system channel means any channel where the 'FooBar' consortium is there ?
Correct. There is only one channel where consortiums are defined. This is the system channel.
Little confused about how to find system channel ? Each channel we have one consortium.
@kostas can you please help for sample code or document to update consortium in system channel ?
> Little confused about how to find system channel?
When you generate the genesis block using configtxgen you may also specify a channel name. This is the name of the system channel.
> Each channel we have one consortium.
I don't understand what this means.
> can you please help for sample code or document to update consortium in system channel ?
The best way to go at it is: understand how the configuration update mechanism works in this tutorial https://hyperledger-fabric.readthedocs.io/en/release-1.1/channel_update_tutorial.html#fetch-the-configuration
Then adjust according to the directions here: https://chat.hyperledger.org/channel/fabric-orderer?msg=k25ukfGZTFqyh3f9r
The mechanism is exactly the same. What changes is what you edit (the orgs in the consortium), where you edit it (the system channel), and who signs it (majority of ordering service orgs).
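As a rough sketch, the pipeline for that consortium edit might look like the following (all org names, the channel ID `syschannel`, and file names are hypothetical; since the real commands need a live orderer and the fabric binaries, each step is only printed here via `run`):

```shell
# run() only prints each step; on a real network, replace it with:
#   run() { "$@"; }
run() { echo "+ $*"; }

{
run peer channel fetch config config_block.pb -o orderer.example.com:7050 -c syschannel
run configtxlator proto_decode --input config_block.pb --type common.Block
# edit the decoded JSON: add Baz's MSP definition under
#   .channel_group.groups.Consortiums.groups.FooBar.groups.Baz
run configtxlator proto_encode --input modified_config.json --type common.Config --output modified_config.pb
run configtxlator compute_update --channel_id syschannel --original config.pb --updated modified_config.pb --output update.pb
# wrap update.pb in an envelope, gather signatures from a majority of the
# ordering-service org admins, then submit:
run peer channel update -f update_in_envelope.pb -c syschannel -o orderer.example.com:7050
} | tee pipeline.log
```

Structurally this is the same fetch/decode/edit/compute_update/sign/submit loop as the channel update tutorial linked above; only the target (the system channel), the edited section (the consortium's org list), and the required signers differ.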
Thanks for your help. Let me go through both the links in details and try
@kostas With below step, I am able to create new channel and add new org
Clipboard - May 1, 2018 12:30 PM
What does step 1 mean?
@kostas we have one channel which is having all the orgs. I believe it is the system channel. (You'll need to submit a configuration update to the system channel so that the FooBar consortium includes "Foo, Bar, _and Baz_" where "Baz" is the new org that you wish to add.)
Ah, you mean to say that you add an org to a consortium.
yup
I am seeing below error in new peer. peer0.org3.example.com | 2018-05-01 17:57:54.591 UTC [cauthdsl] deduplicate -> ERRO 401 Principal deserialization failure (the supplied identity is not valid: x509: certificate signed by unknown authority (possibly because of "x509: ECDSA verification failure" while trying to verify candidate authority certificate "ca.org3.example.com")) for identity
peer1.org3.example.com | 2018-05-01 17:39:33.452 UTC [ConnProducer] NewConnection -> ERRO 415 Failed connecting to orderer.example.com:7050 , error: x509: certificate signed by unknown authority (possibly because of "x509: ECDSA verification failure" while trying to verify candidate authority certificate "tlsca.example.com")
I don't have ca.org3.example.com. Can you please help on this.
@patelan It sounds like your crypto material has not been generated correctly. I'd recommend using a tool like openssl to inspect your certificates and make sure that the appropriate names, CAs etc. are referenced in them
@jyellick Thanks. Let me check
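For what it's worth, a runnable sketch of the kind of openssl inspection being suggested: it creates a throwaway self-signed cert named like the CA from the error so the commands have something to chew on. On a real network, point them at the certs under `crypto-config/peerOrganizations/org3.example.com/...` instead:

```shell
# Create a throwaway self-signed "CA" cert (CN mirrors the CA name in the
# error above) so the inspection commands below are runnable here:
openssl req -x509 -newkey rsa:2048 -keyout ca.key -out ca.crt -days 1 \
    -nodes -subj "/CN=ca.org3.example.com" 2>/dev/null

# Who issued this cert? For the error above, the peer cert's issuer must
# match the CA cert the peer/orderer actually trusts:
openssl x509 -in ca.crt -noout -subject -issuer

# Verify a cert against a CA cert (here the self-signed CA verifies itself;
# on a real network, something like:
#   openssl verify -CAfile cacerts/ca.org3.example.com-cert.pem peer-cert.pem)
openssl verify -CAfile ca.crt ca.crt
```

An "unknown authority" error usually means the issuer printed by the second command is not the CA cert that the verifying side has in its MSP.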
Orderer: &OrdererDefaults
# Orderer Type: The orderer implementation to start
# Available types are "solo" and "kafka"
OrdererType: solo
Addresses:
- orderer.example.com:7050
# Batch Timeout: The amount of time to wait before creating a batch
BatchTimeout: 2s
# Batch Size: Controls the number of messages batched into a block
BatchSize:
# Max Message Count: The maximum number of messages to permit in a batch
MaxMessageCount: 10
# Absolute Max Bytes: The absolute maximum number of bytes allowed for
# the serialized messages in a batch.
AbsoluteMaxBytes: 98 MB
# Preferred Max Bytes: The preferred maximum number of bytes allowed for
# the serialized messages in a batch. A message larger than the preferred
# max bytes will result in a batch larger than preferred max bytes.
PreferredMaxBytes: 512 KB
Is it possible to increase the timeout value
I have changed it and created the crypto-materials, but while invoking the chaincode I am getting some eventhub error
@Unni_1994 Please do not paste segments of files or logs directly. Use a service like hastebin.com
The values in `configtx.yaml` are used by `configtxgen`, not by `cryptogen`. You would need to regenerate your genesis block and rebootstrap your network if you wished to change the parameters via this file. It is also possible to change these values after a network has been bootstrapped, but it becomes more complicated.
Has joined the channel.
When you create the genesis block with configtxgen do we need public certificate and private key of members of the channel ? or just public certificate ?
Just the certificates.
Has joined the channel.
when running a peer channel update command i'm getting an error in the orderer log: Config update for channel creation does not set application group version to 1, was 2...what does that mean?
@xiven: You are not constructing the config update properly. A proper config update reads the most recent config of the channel and bumps up the version of the config group that it modifies by 1. In your case, it should bump the application group version to 3, since 2 seems to be the current version.
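To illustrate the versioning rule: a decoded `common.ConfigUpdate` pins the current version of each group it touches in its read set, and bumps the modified group by exactly one in its write set. A hand-written stub (channel name hypothetical; version numbers chosen to match the error above):

```shell
cat > config_update.json <<'EOF'
{
  "channel_id": "mychannel",
  "read_set":  { "groups": { "Application": { "version": "2" } } },
  "write_set": { "groups": { "Application": { "version": "3" } } }
}
EOF
grep '"version"' config_update.json
```

`configtxlator compute_update` derives these sets for you from the original and modified configs, which is why fetching the *most recent* config block first matters.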
ok i realized that i hadn't created and joined the channel that i was trying to update. i corrected that issue but now i get a slightly different error: error authorizing update: error validating Readset: readset expected key [Group] /Channel/Application at version 2, but got version 1
i looked further up in the logs and it throws an error when executing the command peer channel fetch: Error: can't read the block: &{FORBIDDEN}
could that cause those errors
Has joined the channel.
I would like to modify `BatchTimeout` to some higher value like 4 seconds and then I would like to see the block committing time. Would it increase by 4 sec? Please suggest or advise
@AkshayJindal: Yes.
@xiven: Again, same answer because of this: "readset expected key [Group] /Channel/Application at version 2, but got version 1"
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=GWPwyQY35EccTpqzB) @kostas Hi. Thanks. I need to modify `BatchTimeout` and need to reflect the changes in fabric. How can I do this?
@AkshayJindal If you are bootstrapping a new network, you may simply modify these settings in your `configtx.yaml` prior to bootstrapping. If you wish to change it after, you must do a config update https://hyperledger-fabric.readthedocs.io/en/release-1.1/config_update.html
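A sketch of the bootstrap-time route (the values, profile placeholder, and trailing `configtxgen` invocation are illustrative; the keys are the same ones shown in the configtx.yaml excerpt above):

```shell
# Edited configtx.yaml fragment (written to a scratch file here):
cat > batch-settings.yaml <<'EOF'
Orderer: &OrdererDefaults
  BatchTimeout: 4s          # was 2s: a block is cut at most this long after
                            # the first pending tx, unless BatchSize trips first
  BatchSize:
    MaxMessageCount: 10
EOF
grep 'BatchTimeout' batch-settings.yaml
# then regenerate the genesis block and rebootstrap, e.g.:
#   configtxgen -profile <YourProfile> -channelID syschannel -outputBlock genesis.block
```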
2018-05-04 14:57:31.875 UTC [cauthdsl] deduplicate -> ERRO 3e8 Principal deserialization failure (the supplied identity is not valid: x509: certificate signed by unknown authority (possibly because of "x509: ECDSA verification failure" while trying to verify candidate authority certificate "ca.org3.example.com")) for identity 0a074f7267334d535012..... Seeing this error in the peer while launching org3 peers
Orderer Logs : 2018-05-04 14:59:41.925 UTC [grpc] Printf -> DEBU 1a59 grpc: Server.Serve failed to complete security handshake from "172.25.0.34:47356": remote error: tls: bad certificate
I generated certs 2-3 times. Same error every time. Can you please help
I don't have ca.org3.example.com, not sure where it is coming from
@patelan: Step into our shoes for a moment, and walk me through how you expect anyone to be able to help you. There is absolutely no context provided, no artifacts provided, no listing of the commands you ran. I understand the frustration in not being able to get Fabric to run, and I am genuinely sorry that it's not there yet so that these issues can be avoided, but how this is supposed to be the beginning of a session where you allow us to help you in a quick and efficient way is beyond me. You are not new to this room. There is a README here, linked to at the very top: https://wiki.hyperledger.org/chat_channels/fabric-orderer Have a look at the README, take the time to phrase your question appropriately. As a rule of thumb, I'd say that if it takes less than 10 minutes to write a question here, you're probably doing it wrong.
@kostas Very Sorry for less details. Actually I pasted my question with all details on Tuesday ( May 1st) and working on same issue but forgot to mention. jyellick asked me to regenerate certs. I was on vacation. Today I regenerated certs but still seeing same issue. 90% time I got all my answers with the details I provided. All Details :: I have running fabric layer with 6 organizations, 6 CA and 6 channels. I want to add new organization (org3.example.com) in existing channel and create new channel in running fabric layer. I am able to update existing channel configuration to add new org(Peer channel update). While launching new org peers I am seeing this cert issue. Other details are here https://pastebin.com/EEdBGTeg Please let me know if you need any other details. Thanks for your help.
Some more logs : https://pastebin.com/GZuyPT6Y
Let's go
@patelan: I see that you provide a cryptogen.yaml in the logs but just to double-check: do you generate the crypto material for org3 using cryptogen?
@kostas Yes
Clipboard - May 4, 2018 3:59 PM
You say that you don't have ca.org3.example.com, but you do. `cryptogen` generates this for you.
This CA cert corresponding to ca.org3.example.com can be found under `msp/cacerts`
I've had a look at your logs and nothing sticks out as wrong there. The only reasonable explanation, as stated before, is that something goes wrong when you generate the crypto material for org3. Have you been generating the crypto material for the other orgs (the ones whose peers launch w/o issues) in the exact same way?
Also, Jason suggested that you use `openssl` to inspect the certificate chain for org3. Did you do this?
yes but for all other orgs I have separate CA service.
Fyi, I am not running any CA service for this new org
Again, when you generate the crypto material with cryptogen, a CA entity for this org is established automatically. You don't have to _run_ a CA service.
> yes but for all other orgs I have separate CA service.
This suggests to me that you're generating the crypto material for the other orgs in a different way. Can you describe it?
I kept crypto-config and script portion here for other orgs. https://pastebin.com/byfteBLF
I don't follow. This looks like a crypto-config file, so does this mean that you do generate the crypto material for all other orgs using cryptogen as well?
Also: https://chat.hyperledger.org/channel/fabric-orderer?msg=3wSruGzwnYogxK8yT
^^
Has joined the channel.
Has joined the channel.
We are trying to set up a three-channel, one-orderer, seven-peer Hyperledger business network. We are failing to create the PeerAdmin card and network admin card. The following script is used to create the PeerAdmin card.
Channels: Records, Lending, Books
Peers: Appraiser, Title, Registry, Audit, Bank, Insurance and FiCO
Orderer: one orderer
Would you please go through this script and advise where we are making the mistake. Can you suggest any template which deals with multiple channels and multiple peers with one orderer.
createPeerAdmin card script:
#!/bin/bash
# Exit on first error
set -e
# Grab the current directory
DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )"
echo
# check that the composer command exists at a version >v0.14
if hash composer 2>/dev/null; then
composer --version | awk -F. '{if ($2<15) exit 1}'
if [ $? -eq 1 ]; then
echo 'Sorry, Use createConnectionProfile for versions before v0.15.0'
exit 1
else
echo Using composer-cli at $(composer --version)
fi
else
echo 'Need to have composer-cli installed at v0.15 or greater'
exit 1
fi
# need to get the certificate
continued ...
```
cat << EOF > /tmp/.connection.json
{
"name": "hlfv1",
"type": "hlfv1",
"orderers": [
{ "url" : "grpc://localhost:7050" }
],
"ca": { "url": "http://localhost:7054", "name": "ca.orderer.mrtgexchg.com"},
"channels": {
"records": {
"orderers": [
"orderer.mtrgexchg.com"
],
"peers": {
"peer0.Appraiser.com": {
"endorsingPeer": true,
"chaincodeQuery": true,
"eventSource": true
},
"peer0.Audit.com": {
"endorsingPeer": true,
"chaincodeQuery": true,
"eventSource": true
},
"peer0.Bank.com": {
"endorsingPeer": true,
"chaincodeQuery": true,
"eventSource": true
},
"peer0.Fico.com": {
"endorsingPeer": true,
"chaincodeQuery": true,
"eventSource": true
},
"peer0.Insurance.com": {
"endorsingPeer": true,
"chaincodeQuery": true,
"eventSource": true
},
"peer0.Registry.com": {
"endorsingPeer": true,
"chaincodeQuery": true,
"eventSource": true
},
"peer0.Title.com": {
"endorsingPeer": true,
"chaincodeQuery": true,
"eventSource": true
}
}
},
"books": {
"orderers": [
"orderer.mtrgexchg.com"
],
"peers": {
"peer0.Appraiser.com": {
"endorsingPeer": true,
"chaincodeQuery": true,
"eventSource": true
},
"peer0.Audit.com": {
"endorsingPeer": true,
"chaincodeQuery": true,
"eventSource": true
},
"peer0.Bank.com": {
"endorsingPeer": true,
"chaincodeQuery": true,
"eventSource": true
},
"peer0.Registry.com": {
"endorsingPeer": true,
"chaincodeQuery": true,
"eventSource": true
},
"peer0.Title.com": {
"endorsingPeer": true,
"chaincodeQuery": true,
"eventSource": true
}
}
},
"lending": {
"orderers": [
"orderer.mtrgexchg.com"
],
"peers": {
"peer0.Audit.com": {
"endorsingPeer": true,
"chaincodeQuery": true,
"eventSource": true
},
"peer0.Bank.com": {
"endorsingPeer": true,
"chaincodeQuery": true,
"eventSource": true
},
"peer0.Fico.com": {
"endorsingPeer": true,
"chaincodeQuery": true,
"eventSource": true
},
"peer0.Insurance.com": {
"endorsingPeer": true,
"chaincodeQuery": true,
"eventSource": true
},
"peer0.Title.com": {
"endorsingPeer": true,
"chaincodeQuery": true,
"eventSource": true
}
}
}
},
```
continued ...
```
"orderers": {
"orderer.mrtgexchg.com": {
"url": "grpcs://localhost:7050",
"grpcOptions": {
"ssl-target-name-override": "orderer.mrtgexchg.com"
},
"tlsCACerts": {
"pem": "tlsca.mrtgexchg.com-cert"
}
}
},
"peers": {
"peer0.Appraiser.com": {
"url": "grpcs://localhost:11051",
"eventUrl": "grpcs://localhost:11053",
"grpcOptions": {
"ssl-target-name-override": "peer0.Appraiser.com"
},
"tlsCACerts": {
"pem": "tlsca.Appraiser.com-cert"
}
},
"peer0.Audit.com": {
"url": "grpcs://localhost:12051",
"eventUrl": "grpcs://localhost:12053",
"grpcOptions": {
"ssl-target-name-override": "peer0.Audit.com"
},
"tlsCACerts": {
"pem": "tlsca.Audit.com-cert"
}
},
"peer0.Bank.com": {
"url": "grpcs://localhost:7051",
"eventUrl": "grpcs://localhost:7053",
"grpcOptions": {
"ssl-target-name-override": "peer0.Bank.com"
},
"tlsCACerts": {
"pem": "tlsca.Bank.com-cert"
}
},
"peer0.Fico.com": {
"url": "grpcs://localhost:13051",
"eventUrl": "grpcs://localhost:13053",
"grpcOptions": {
"ssl-target-name-override": "peer0.Fico.com""
},
"tlsCACerts": {
"pem": "tlsca.Fico.com-cert"
}
},
"peer0.Insurance.com": {
"url": "grpcs://localhost:10051",
"eventUrl": "grpcs://localhost:10053",
"grpcOptions": {
"ssl-target-name-override": "peer0.Insurance.com"
},
"tlsCACerts": {
"pem": "tlsca.Insurance.com-cert"
}
},
"peer0.Registry.com": {
"url": "grpcs://localhost:9051",
"eventUrl": "grpcs://localhost:9053",
"grpcOptions": {
"ssl-target-name-override": "peer0.Registry.com"
},
"tlsCACerts": {
"pem": "tlsca.Registry.com-cert"
}
},
"peer0.Title.com": {
"url": "grpcs://localhost:8051",
"eventUrl": "grpcs://localhost:8053",
"grpcOptions": {
"ssl-target-name-override": "peer0.Title.com"
},
"tlsCACerts": {
"pem": "tlsca.Title.com-cert"
}
}
}
}
EOF
```
Continued..```
PRIVATE_KEY="${DIR}"/crypto-config/peerOrganizations/Appraiser.com/users/User1@Appraiser.com/msp/keystore/96553efcf0deac6fe5110b83c91b7801d0b16ee1c745fce2fc5865254785ed97_sk
CERT="${DIR}"/crypto-config/peerOrganizations/Appraiser.com/users/User1@Appraiser.com/msp/signcerts/User1@Appraiser.com-cert.pem
if composer card list -n PeerAdmin-Appraiser@hlfv1 > /dev/null; then
composer card delete -n PeerAdmin-Appraiser@hlfv1
fi
composer card create -p /tmp/.connection.json -u PeerAdmin-Appraiser -c "${CERT}" -k "${PRIVATE_KEY}" -r PeerAdmin -r ChannelAdmin --file /tmp/PeerAdmin-Appraiser@hlfv1.card
composer card import --file /tmp/PeerAdmin-Appraiser@hlfv1.card
# composer card import -f Appraiser@mrtgexchg-network.card
PRIVATE_KEY="${DIR}"/crypto-config/peerOrganizations/Audit.com/users/User1@Audit.com/msp/keystore/f0850e27496091b96cc27714af8db653355597f8349d90f3f071aa7e70ad163e_sk
CERT="${DIR}"/crypto-config/peerOrganizations/Audit.com/users/User1@Audit.com/msp/signcerts/User1@Audit.com-cert.pem
if composer card list -n PeerAdmin-Audit@hlfv1 > /dev/null; then
composer card delete -n PeerAdmin-Audit@hlfv1
fi
composer card create -p /tmp/.connection.json -u PeerAdmin-Audit -c "${CERT}" -k "${PRIVATE_KEY}" -r PeerAdmin -r ChannelAdmin --file /tmp/PeerAdmin-Audit@hlfv1.card
composer card import --file /tmp/PeerAdmin-Audit@hlfv1.card
PRIVATE_KEY="${DIR}"/crypto-config/peerOrganizations/Bank.com/users/User1@Bank.com/msp/keystore/5c164df3a28e3ed072a176c30c36a85264caff2b24e21b545e6b64a65bee6986_sk
CERT="${DIR}"/crypto-config/peerOrganizations/Bank.com/users/User1@Bank.com/msp/signcerts/User1@Bank.com-cert.pem
if composer card list -n PeerAdmin-Bank@hlfv1> /dev/null; then
composer card delete -n PeerAdmin-Bank@hlfv1
fi
composer card create -p /tmp/.connection.json -u PeerAdmin-Bank -c "${CERT}" -k "${PRIVATE_KEY}" -r PeerAdmin -r ChannelAdmin --file /tmp/PeerAdmin-Bank@hlfv1.card
composer card import --file /tmp/PeerAdmin-Bank@hlfv1.card
PRIVATE_KEY="${DIR}"/crypto-config/peerOrganizations/Fico.com/users/User1@Fico.com/msp/keystore/8cfa9c438eb339ca24f3eec8190a983af95d14f974b4034ebda3e16ffbf91895_sk
CERT="${DIR}"/crypto-config/peerOrganizations/Fico.com/users/User1@Fico.com/msp/signcerts/User1@Fico.com-cert.pem
if composer card list -n PeerAdmin-Fico@hlfv1 > /dev/null; then
composer card delete -n PeerAdmin-Fico@hlfv1
fi
composer card create -p /tmp/.connection.json -u PeerAdmin-Fico -c "${CERT}" -k "${PRIVATE_KEY}" -r PeerAdmin -r ChannelAdmin --file /tmp/PeerAdmin-Fico@hlfv1.card
composer card import --file /tmp/PeerAdmin-Fico@hlfv1.card
PRIVATE_KEY="${DIR}"/crypto-config/peerOrganizations/Insurance.com/users/User1@Insurance.com/msp/keystore/73837b21ac9b7e2dd2ebe4fea83ecef3f15f05c9dff2581b999717690f17e588_sk
CERT="${DIR}"/crypto-config/peerOrganizations/Insurance.com/users/User1@Insurance.com/msp/signcerts/User1@Insurance.com-cert.pem
if composer card list -n PeerAdmin-Insurance@hlfv1 > /dev/null; then
composer card delete -n PeerAdmin-Insurance@hlfv1
fi
composer card create -p /tmp/.connection.json -u PeerAdmin-Insurance -c "${CERT}" -k "${PRIVATE_KEY}" -r PeerAdmin -r ChannelAdmin --file /tmp/PeerAdmin-Insurance@hlfv1.card
composer card import --file /tmp/PeerAdmin-Insurance@hlfv1.card
PRIVATE_KEY="${DIR}"/crypto-config/peerOrganizations/Registry.com/users/User1@Registry.com/msp/keystore/2ea7796eeffc4b4add55168e86a5d52879e0b22140ed53adc358a966c7f3918f_sk
CERT="${DIR}"/crypto-config/peerOrganizations/Registry.com/users/User1@Registry.com/msp/signcerts/User1@Registry.com-cert.pem
if composer card list -n PeerAdmin-Registry@hlfv1 > /dev/null; then
composer card delete -n PeerAdmin-Registry@hlfv1
fi
composer card create -p /tmp/.connection.json -u PeerAdmin-Registry -c "${CERT}" -k "${PRIVATE_KEY}" -r PeerAdmin -r ChannelAdmin --file /tmp/PeerAdmin-Registry@hlfv1.card
composer card import --file /tmp/PeerAdmin-Registry@hlfv1.card
PRIVATE_KEY="${DIR}"/crypto-config/peerOrganizations/Title.com/users/User1@Title.com/msp/keystore/a1f1b4f733ad0f0a35e9385592f6de7d5fea6bc1e999acb122be0ee45b42ffff_sk
CERT="${DIR}"/crypto-config/peerOrganizations/Title.com/users/User1@Title.com/msp/signcerts/User1@Title.com-cert.pem
if composer card list -n PeerAdmin-Title@hlfv1 > /dev/null; then
composer card delete -n PeerAdmin-Title@hlfv1
fi
composer card create -p /tmp/.connection.json -u PeerAdmin-Title -c "${CERT}" -k "${PRIVATE_KEY}" -r PeerAdmin -r ChannelAdmin --file /tmp/PeerAdmin-Title@hlfv1.card
composer card import --file /tmp/PeerAdmin-Title@hlfv1.card
rm -rf /tmp/.connection.json
echo "Hyperledger Composer PeerAdmin card has been imported"
composer card list
```
@bc2017 I have deleted all of your subsequent messages. Please read the rules for this channel: https://wiki.hyperledger.org/chat_channels/fabric-orderer
Never post long snippets of logs, scripts, or config files directly. Use a service like hastebin.com
Also, please take time to articulate your question carefully. No one here has time to read through hundreds of lines of shell script looking for an error. Please reduce your failing scenario to as small a setup as possible, ideally using one of the base fabric samples, like 'first-network'.
I am getting an error in the orderer log at the time of invocation:
2018-05-07 13:13:07.014 UTC [grpc] Printf -> DEBU 1b75 grpc: Server.Serve failed to complete security handshake from "<
Please provide any solution for that
@souvik: https://chat.hyperledger.org/channel/fabric-orderer?msg=eiYmZhRK9RHjCtQJv
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=PLw32YyvoBe3xJ94k) @kostas Sorry for the message.
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=qqgxR6YFEzfF9KW8B) @kostas I am generating certs using cryptogen only. I am not aware about openssl and am looking into it. Do I need to verify any specific thing in the certificate using openssl?
https://chat.hyperledger.org/channel/fabric-orderer?msg=EW695eneoNvNCJ2mn
Hi Experts any help https://stackoverflow.com/questions/50226153/orderer-and-committer-taking-time-to-put-data-into-ledger-hyperledger-fabric-blo
@jyellick
@pankajcheema i faced this problem too. like `balance_transfer` example, every transaction must wait 2 seconds for new block creation triggered by batch timeout, which can be set by `batchTimeout` in `configtx.yaml`.
@bh4rtp can i set it to 0 ?
@bh4rtp we also set this to 0, and are still getting a delay of 600ms
@pankajcheema it will result in another issue, every block contains only one transaction.
@bh4rtp agreed, but then what is the issue? with 2 seconds also, each block has one transaction
@bh4rtp Or with 2 seconds, if i push 2 transactions within that time, will it store 2 in one block?
@pankajcheema yes, i think the consensus mechanism does not work perfectly at this point. every 2 seconds a new block is created when the batch timeout expires, but fulfilling the batch size condition does not seem to cut a block, at least for the `balance_transfer` example.
@bh4rtp I think only one of the conditions needs to be satisfied, either size or timeout. What do you think?
i have tested using another client to commit transactions to satisfy the batch size, but without any effect: still only one transaction in every block, and no transaction returns successfully in under 2 seconds.
@bh4rtp How did you get rid of this delay?
@pankajcheema remove the event hub from the client side, do not wait for the event.
but then the data is not yet saved to the ledger; if i query for the data it will give me empty @bh4rtp
because i have a form on the client side, and after submitting this form i have to redirect the user to the listing @bh4rtp
immediately
if you want to put the data into the ledger and then retrieve it, i think you must wait for the batch timeout. :grinning:
ok thanks @bh4rtp, nice technical discussion with you. Thanks again.
that is exactly the implementation of `balance_transfer`.
@bh4rtp
can you please check this issue https://stackoverflow.com/questions/50227365/hyperledger-fabric-return-json-in-shim-error
@pankajcheema do you mean return an error code in addition to the error message? i think you cannot; all my chaincodes return a string message if they fail.
@bh4rtp I am also unable to find anything regarding this
But if i show this message to the front-end user it looks weird.
yes, but you know from the client whether it succeeds or fails.
Yes i need to show custom message on the basis of success or failure.
@bh4rtp
there is no perfect way to do that, even for non-blockchain applications. you can return an error code formatted into the error message, or construct a formatted error message that can be parsed by the client.
Has joined the channel.
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=iSZLG6mKMwN6TkAkn) @kostas Thanks. Let me try to use openssl and verify all certs
https://chat.hyperledger.org/channel/fabric-orderer?msg=EGbodKyTjcs9axifD
@bh4rtp: I'm not sure I follow. Do you claim that the consensus mechanism does not respect the batch size condition?
https://chat.hyperledger.org/channel/fabric-orderer?msg=8eHMHRxGk4AP24m9N
@pankajcheema: From your description (I read the S/O post) it is unclear to me which ledger you are referring to. I will assume you are referring to the peer ledger. The lifecycle of the transaction is: client -> orderer (-> Kafka -> orderer) -> peer
Where the parentheses apply if you're running the Kafka-based orderer.
On the peer side, the transactions get validated before they're committed to the ledger.
All of these stages incur a time delay.
You can't expect to broadcast a transaction and see it in your peer's ledger in 0 milliseconds.
I've also responded on S/O, so let's continue that discussion there.
Has joined the channel.
Hi, Quick question : I am removing peer containers and launching it again and running peer channel list inside peer. It is showing the old channels. How Can I clear the complete cache ? Thanks in advance :)
@jyellick if i want to add a new consortium to the network, what should i do? what's the policy, or how is this controlled? please point me to the location in the source code
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=7Nq5zJXuxymuL3LxM) Thanks, got the answer: we need to pass --volumes while bringing down the network.
@asaningmaxchain123: The policy is `/Channel/Orderer/Admins`, i.e. the majority of the orderer admins.
See: https://github.com/hyperledger/fabric/blob/release-1.1/common/tools/configtxgen/encoder/encoder.go#L244..L245
so it uses the orderer org (MSP) to define consortium creation? @kostas
Not sure I follow.
but from the source code it really does use it. @jyellick can you give me an answer?
"The majority of the orderer admins" means the admin identities of the MSPs of the ordering orgs.
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=pCyXd4qc3rR7NcaJ6) @kostas I know; however, `peer channel create` carries just one signature from the specified MSP, so I think it should be modified
@kostas i am sorry for that
@kostas i got it
What is your question then?
@asaningmaxchain123: you're editing your previous messages to answer the questions I ask afterwards.
it's my fault,please let me write the question fully
Nobody can read this transcript and get continuity.
Just follow up with a comment instead of revising past messages.
`peer channel create`, as you specify in your updated message, has to do with creating a channel, not a consortium.
@kostas i got it
but from the source code, the `ChannelCreationPolicy` is defined with `/Channel/Orderer/Admins`
https://github.com/hyperledger/fabric/blob/c257bb31867b14029c3a6afe1db35b131757d2bf/common/tools/configtxgen/encoder/encoder.go#L279
Ah, that is the _modification policy_ for the `ChannelCreationPolicy`.
It doesn't refer to the _content_ of the `ChannelCreationPolicy`.
Clipboard - May 9, 2018 9:53 PM
my fault,i got it
Exactly.
so where does it define the policy for `channel creation`?
It is the second argument in the function that you linked to: `channelconfig.ChannelCreationPolicyValue(policies.ImplicitMetaAnyPolicy(channelconfig.AdminsPolicyKey).Value())`
The value of the policy is: `policies.ImplicitMetaAnyPolicy(channelconfig.AdminsPolicyKey).Value()`
Which gives you the Implicit Meta policy of ANY Admins that you see above.
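For reference, a minimal sketch of what that "ANY Admins" implicit meta policy looks like once the channel config is decoded to JSON (field names and the rendering assumed from configtxlator's output, not copied from a real decode):

```python
import json

# Hypothetical rendering of the ChannelCreationPolicy value discussed above.
# In the Policy proto, type 3 is IMPLICIT_META; the rule enum is ANY/ALL/MAJORITY.
channel_creation_policy = {
    "type": 3,  # ImplicitMetaPolicy
    "value": {
        "rule": "ANY",
        "sub_policy": "Admins",
    },
}

# "ANY Admins" is satisfied as soon as the "Admins" sub-policy of any single
# consortium member org is satisfied, hence any org admin can create a channel.
print(json.dumps(channel_creation_policy, indent=2))
```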
https://github.com/hyperledger/fabric/blob/c257bb31867b14029c3a6afe1db35b131757d2bf/common/tools/configtxgen/encoder/encoder.go#L279:42, so that means any org admin located in the consortium can create a channel?
Correct.
thx again
@kostas @jyellick when I set up multiple orderers and want to create a channel, should the tx be sent to all of them?
Is it possible to add orderer with existing network dynamically?
no; when the blockchain starts up I set up multiple orderers, and I have tested it: you just need to pick one to create it. By the way, if I want to add an orderer dynamically, what should I do? @kostas @jyellick
I think I should send a config update tx to the system channel, adding the orderer address to the current config; channel creation will then use it, but it doesn't take effect on previously created channels? @kostas @jyellick
Hi, need some help. I am getting the error below while reading the system channel config block with these steps:
Steps :-
docker exec -it cli bash
export ORDERER_CA=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/ordererOrganizations/example.com/orderers/orderer.example.com/msp/tlscacerts/tlsca.example.com-cert.pem
CORE_PEER_LOCALMSPID="OrdererMSP"
CORE_PEER_TLS_ROOTCERT_FILE=$ORDERER_CA
CORE_PEER_MSPCONFIGPATH=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/ordererOrganizations/example.com/users/Admin@example.com/msp
peer channel fetch config sys_config_block.pb -o orderer.example.com:7050 -c testchainid --tls --cafile $ORDERER_CA
error :-
2018-05-09 19:03:13.854 UTC [msp] GetLocalMSP -> DEBU 006 Returning existing local MSP
2018-05-09 19:03:13.854 UTC [msp] GetDefaultSigningIdentity -> DEBU 007 Obtaining default signing identity
2018-05-09 19:03:13.854 UTC [msp/identity] Sign -> DEBU 008 Sign: plaintext: 0AC5060A1708021A0608F18BCDD70522...E5DA7AD6BC4912080A020A0012020A00
2018-05-09 19:03:13.854 UTC [msp/identity] Sign -> DEBU 009 Sign: digest: 5DA2B0A28EC782390C64B8A040EE9B89127240B7DD5A930306FCBAF276150085
2018-05-09 19:03:13.856 UTC [channelCmd] readBlock -> DEBU 00a Got status: &{FORBIDDEN}
Error: can't read the block: &{FORBIDDEN}
Orderer Logs : https://pastebin.com/xd6RFGs4
Need some help. I am trying to update the system channel with a new consortium (Org3Consortium).
1) In the command below, do I need to replace "SampleConsortium" with the new consortium name?
jq -s '.[0] * {"channel_group":{"groups":{"Consortiums":{"groups": {"SampleConsortium": {"groups": {"Org3MSP":.[1]}}}}}}}' sys_config.json ./channel-artifacts/org3.json >& sys_updated_config.json
2) When running peer channel update with the ordererMSP, I see this error. Do I need to set a mod_policy, or will it use the default one?
Error: got unexpected status: BAD_REQUEST -- error authorizing update: error validating DeltaSet: invalid mod_policy for element [Group] /Channel/Consortiums/SampleConsortium: mod_policy not set
Has joined the channel.
How does the orderer know the address of the kafka brokers? Is it just from the configtx file used to create the genesis block? I ask because I am currently having an issue with my network when my kafka brokers are sitting on a different machine to my orderer.
If I have my Kafka brokers up and running on one machine and then spin up an orderer on another machine, it initially hits the cluster and creates the topic for the testchainid channel, but then the orderer can't maintain a connection with it and tries to connect to 127.0.0.11:53, which I have not specified anywhere. If I spin up the exact same orderer on the same machine as the Kafka brokers, everything works fine.
I have a full explanation and some log dumps on a stack overflow question here: https://stackoverflow.com/questions/50244517/hyperledger-fabric-orderers-will-not-connect-to-kafka-brokers-from-a-different-m
I'm not sure where else to look for assistance.
@kostas I just read over your conversation with JayPandya and I think it might be relevant to my problem. Do I need to setup advertised host on my kafka brokers for my orderer to connect to them from a different machine?
Has joined the channel.
> Is it possible to add orderer with existing network dynamically?
@souvik Yes, it is possible. Simply bootstrap the orderer with the genesis block the same way you would if you were adding the orderer before starting the network. You will need to do a channel config update to add it to the list of orderers if you wish peers to pull from it. Otherwise, you may simply have clients broadcast to it.
> i think it should send a config update tx to the system channel .and an orderer address to the current config, the channel creation will use it. however the previous channel doesn't take effect
@asaningmaxchain123 Yes, you should send a config update to the orderer system channel, as well as all other channels, updating the orderer addresses to include this new one.
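To make the address change concrete, here is an illustrative sketch of the edit such a config update would make to the decoded config JSON (the `channel_group.values.OrdererAddresses.value.addresses` path and hostnames are assumptions based on configtxlator's rendering):

```python
import copy

# Hypothetical decoded channel config (trimmed to the relevant value).
config = {
    "channel_group": {
        "values": {
            "OrdererAddresses": {
                "mod_policy": "/Channel/Orderer/Admins",
                "value": {"addresses": ["orderer0.example.com:7050"]},
            }
        }
    }
}

updated = copy.deepcopy(config)
# Append the new OSN's address; the original config stays untouched so the two
# versions can be diffed (configtxlator compute_update) into a config update tx.
updated["channel_group"]["values"]["OrdererAddresses"]["value"]["addresses"].append(
    "orderer1.example.com:7050"
)
```

The same edit would be repeated for the system channel and every application channel that should know about the new orderer.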
@patelan Just as the error indicates, your new consortium must have a mod_policy set. Simply add a "mod_policy" field set to "/Channel/Orderer/Admins" and the complaint should go away
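A sketch of the fix, mirroring the jq merge from the question but with the required field set (group layout and the Org3Consortium/Org3MSP names are taken from the question; the exact sibling keys are assumptions):

```python
# New consortium group to be merged under channel_group.groups.Consortiums.groups.
# Without "mod_policy" the orderer rejects the update, as the error message says.
new_consortium = {
    "mod_policy": "/Channel/Orderer/Admins",  # the missing field from the error
    "groups": {"Org3MSP": {}},  # the decoded Org3MSP definition would go here
    "policies": {},
    "values": {},
}

sys_config = {"channel_group": {"groups": {"Consortiums": {"groups": {}}}}}
sys_config["channel_group"]["groups"]["Consortiums"]["groups"]["Org3Consortium"] = new_consortium
```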
> How does the orderer know the address of the kafka brokers? Is it just from the configtx file used to create the genesis block?
@antonioGlavocevic I'll respond in your SO question
Has joined the channel.
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=rdDvhaRcd8fopGbz5) @jyellick but it is not very useful if clients can only broadcast txs to it
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=WtMzABNyJLhYaaFah) @jyellick that means the previous channel can use the new orderer if I send a config update tx to it?
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=fqDQPNdh2WwYy8xDQ) @jyellick Thanks
Has joined the channel.
Can I launch an orderer service on its own?
I launched an orderer service on its own, and I didn't define the organization in configtx.yaml yet. So can I add the org to the orderer service dynamically?
Hi, when I execute a transaction, the transaction fails because it doesn't get the same response from all nodes, yet the database gets updated without the transaction being recorded inside a block. Why is this happening?
Has joined the channel.
> I lanuch a orderer service solely, and i didn't define the organization in the configtx.yaml yet. So can I add the org into orderer service dynamically?

@wtlife I don't know what it means to define an orderer service 'solely'. You may create an orderer system channel with no consortiums or consortium members defined. If you do, you may still add new consortiums and consortium member definitions in the future through channel reconfiguration transactions.
> Hi, when I execute a transaction, the transaction fails because it doesn't get the same response from all nodes, yet the database gets updated without the transaction being recorded inside a block. Why is this happening?
@DarshanBc This channel is for questions about the ordering service, please see the wiki entry linked in the channel topic
@jyellick @kostas can you give me a hand?
I deployed Kafka on a different machine using Docker, and then I used the shell scripts located in the Kafka container
Clipboard - May 14, 2018 10:39 PM
i can't send the msg
Clipboard - May 14, 2018 10:43 PM
Currently I have to sign with more than 50% of orgs (the default policy) when adding a new org or updating the config block. What changes do I have to make so that one organization acts as a leader org? Please let me know if you need any other details
Has joined the channel.
The following questions have been on my mind for a while regarding the placement and ownership (administration / maintenance) of some of the fabric ordering components. Hopefully someone here can help.
We have been using fabric for a while, but have always taken responsibility for the whole fabric, even for other organisations. We are hoping to change this, which leads into my questions.
Within a multi-organisation fabric, with a variety of consortiums and multiple channels, where organisations will be responsible for their own peers, CA, MSP, and potentially OSNs, it seems to me there will be a requirement for a body to take responsibility for the ordering Kafka and ZooKeeper clusters. Is this the case, and best practice?
Alternatively I imagine we could have each organisation maintain their own Kafka and ZooKeeper clusters. This would potentially mean the channels across the fabric would be using different OSNs and underlying Kafka and ZooKeeper clusters. I don't see this as an issue, but it is more to maintain, and maybe overkill. Would this be a better approach?
@asaningmaxchain123 This is a Kafka question, so you are likely to get better answers by asking in a Kafka venue rather than a Fabric one. Did you set appropriate KAFKA_ADVERTISED_PORT KAFKA_ADVERTISED_HOST_NAME etc.?
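For the cross-machine case, a hypothetical docker-compose fragment for one broker (the image name, hostnames, and ports are illustrative assumptions, not a tested configuration):

```yaml
kafka0:
  image: hyperledger/fabric-kafka
  environment:
    - KAFKA_BROKER_ID=0
    - KAFKA_ZOOKEEPER_CONNECT=zookeeper0:2181
    # Advertise an address reachable from the *other* machine; otherwise remote
    # orderers are handed an unroutable broker address on metadata refresh.
    - KAFKA_ADVERTISED_HOST_NAME=kafka0.example.com
    - KAFKA_ADVERTISED_PORT=9092
```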
@jyellick i have resolve it
> Within a multi-organisation fabric, with a variety of consortiums and multiple channels, where organisations will be responsible for their own peers, CA, MSP, and potentially OSNs, it seems to me there will be a requirement for a body to take responsibility for the ordering Kafka and ZooKeeper clusters. Is this the case, and best practice?
@julian The ordering organization or organizations should be logically distinct from the consortium organizations, even if it is the same business entity in control of both. I would never recommend that a single organization (from a MSP/CA perspective) be a member of both the ordering orgs and the consortium orgs. As a best practice, I would recommend that a single entity be assigned the responsibility for ordering. This entity should define the crypto for the ordering org and should run the OSNs, Kafka, and ZK nodes. As Kafka is a CFT system, adding additional organizations to the mix does not necessarily add any benefits. We're actively working on a BFT solution where multi-org ordering makes more sense. If there are concerns about a single organization running ordering, you might want to look at "SideDB" which prevents the orderers from actually seeing transaction contents.
@jyellick thank you.
Has joined the channel.
Hi, can anyone please help with the question below. I am running a fabric network with 6 orgs, 6 CAs, and 1 orderer. When I add a new org to a channel, I have to sign the updated block with a majority of the orgs (peer channel signconfigtx -f org3_enevelope.pb). I want to make my company's org a leader org, so it alone can approve a new org instead of depending on the other orgs. Which policy do I have to change to make my company's org the leader org? Please let me know if you need any other details to understand this scenario.
@patelan You should change the /Channel/Application/Admins policy to be for your org. The easiest way to accomplish this is to simply copy the definition of the /Channel/Application/
Thereafter, only your org will need to sign updates.
@jyellick about the policy check in the orderer: does /Application/Admins need the signatures of all of the sub-group admins, i.e. must it contain signatures from all the groups predefined in the orderer?
Has joined the channel.
Hi
is it reasonable to run a separate orderer for every organization?
@asaningmaxchain123 The default /Channel/Application/Admins policy requires a majority of the /Channel/Application/
@gravity You may do this, however, see my comment above:
https://chat.hyperledger.org/channel/fabric-orderer?msg=FoXLwNGrAfFsGoCEP
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=4yBBemukvjaFbvq7q) @jyellick do the majority signatures need to be in a particular order?
hi, I am trying to understand the lifecycle of a transaction inside the business logic. When the async function that embodies the transaction code completes, has consensus been reached, or is the function executing locally on the peer? How does the timing of the transaction function, and that of the REST API endpoint, relate to the fabric transaction flow http://hyperledger-fabric.readthedocs.io/en/release-1.1/txflow.html ?
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=diC7HM5zTnKhrsYTr) @jyellick Thanks for the details. Is this policy channel specific ? Not sure how to copy the definition of the /Channel/Application/
@jyellick thanks a lot! great explanation
hello, when I try to create the channel through the node SDK I get this error on the orderer `UTC [orderer/common/broadcast] Handle -> WARN 16c [channel: mychannel] Rejecting broadcast of config message from *:* because of error: Error authorizing update: Error validating DeltaSet: Policy for [Groups] /Channel/Application not satisfied: Failed to reach implicit threshold of 1 sub-policies, required 1 remaining ` Could you help me?
My certificate is generated by cryptogen
> hi, I am trying to understand the lifecycle of a transaction inside the business logic. When the async function that embodies the transaction code completes, has consensus been reached, or is the function executing locally on the peer? How does the timing of the transaction function, and that of the REST API endpoint, relate to the fabric transaction flow http://hyperledger-fabric.readthedocs.io/en/release-1.1/txflow.html ?
@acbellini From an ordering perspective, once a transaction has been acknowledged by `Broadcast`, it will eventually commit (unless the user authorization is revoked or similar). The transaction will be written into a block, and then from an ordering perspective it is committed. However, then the peers receive the block, and they evaluate whether the transaction in its committed order is valid or invalid. If it is valid, then its changes are applied to the state database. I cannot speak to the REST API endpoint, you might try #fabric-sdk-node
@patelan The policy is not channel specific. It will simply reference your MSP ID and admin role. You should change the policy value, but you may leave all `mod_policy` values alone.
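To illustrate what "reference your MSP ID and admin role" can look like, a sketch of a Signature policy in the JSON shape configtxlator tends to produce (field names assumed; "MyOrgMSP" is a placeholder for your actual MSP ID):

```python
# Hypothetical /Channel/Application/Admins replacement: satisfied by one
# signature from an ADMIN of MyOrgMSP, nothing else.
admins_policy = {
    "mod_policy": "Admins",  # per the advice above, mod_policy values can stay as-is
    "policy": {
        "type": 1,  # SIGNATURE policy
        "value": {
            "identities": [
                {
                    "principal": {"msp_identifier": "MyOrgMSP", "role": "ADMIN"},
                    "principal_classification": "ROLE",
                }
            ],
            # n_out_of 1 over a single signed_by rule: one admin signature suffices.
            "rule": {"n_out_of": {"n": 1, "rules": [{"signed_by": 0}]}},
        },
    },
}
```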
@bourbonkidQ It sounds like you are submitting the channel creation request as a normal user, and not as an Admin. Please make sure that your msp id is set correctly and that you are submitting using an admin certificate for one of your application orgs.
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=r6eYWCGFvkskRqY7x) @jyellick thanks a lot
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=owJ4RBEsEPK3WPXsB) @jyellick Thanks. I understood I have to change policy from policy:/Channel/Application/Admins to policy:/Channel/Application/
@jyellick thanks. But I was wondering about composer-rest-server, that's why I was asking here
@acbellini You might want to try #composer
oh sorryyyyy
I was in the wrong channel :P
Hello, is it possible to use a load balancer in front of the orderer?
@bourbonkidQ It is possible, though if you wish to enable mutual TLS, the configuration will be non-trivial
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=9xasWM3LAtu3tvh68) @jyellick Will we have to create the channel on both of the orderer ?
@bourbonkidQ No, you need only send the transaction to any orderer, Fabric synchronizes the transactions between them to form the blockchain (in fact, this is the primary function of the orderer)
Hi there
is it possible to restrict chaincode invocation for particular users?
I mean, if there is an organization, with 4 peers, peer0 and peer1 are in channel A, peer2 and peer3 are in channel B.
there are 4 users registered with fabric-ca. Is it possible to allow user0 and user1 to have access only to channel A, and user2 and user3 to have access only to channel B?
Thanks in advance
Answered in #fabric-sdk-java
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=7cuATg78hrhaJoBzC) Able to make one org the leader org, but I have to get one config update signed by the majority of orgs to do it. Is there any way to avoid that, like modifying the policy before launching the network?
@patelan Of course, you may configure this policy when creating your channel. Whoever creates the channel may set whatever application level policies they like
In v1.2 this can be accomplished via `configtxgen`. In v1.1 and prior, you must manually edit the channel creation tx with `configtxlator`
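As a rough sketch of the v1.1 manual edit: after `configtxlator proto_decode` on the channel creation tx, the Application admins policy sits roughly at the path below (the path and field names are assumptions about the decoded JSON, and the `ANY` rule is just an example of a weaker policy):

```python
# Hypothetical decoded channel creation tx, trimmed to the policy being edited.
tx = {
    "payload": {"data": {"config_update": {"write_set": {"groups": {
        "Application": {"policies": {"Admins": {
            "mod_policy": "Admins",
            "policy": {"type": 3, "value": {"rule": "MAJORITY", "sub_policy": "Admins"}},
        }}}
    }}}}}
}

admins = tx["payload"]["data"]["config_update"]["write_set"]["groups"]["Application"]["policies"]["Admins"]
# Swap the implicit MAJORITY meta policy for whatever your org requires, then
# re-encode with `configtxlator proto_encode` before running `peer channel create`.
admins["policy"]["value"]["rule"] = "ANY"
```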
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=WY7zeAF4KFSwsfrbo) @jyellick Thanks for help. Let me check.
Hi everyone, *I wanted to figure out the size of each transaction message in the blocks being cut*, so I was looking at *blocks* each containing only one transaction message. In the block there is a data array inside the data object; I copied all the contents of this data array and calculated their size. I was baffled that the size was not uniform, i.e. not *cx* where *c* is some constant number of transactions and *x* is the size of one transaction. Now I am confused and have no clue how to get the size of each transaction so that I can set the block size limit and the number of messages in a block correctly. Please help me out
Has joined the channel.
Transactions can have different sizes.
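To make that concrete, an illustrative sketch: assuming the block was rendered to a JSON form where each entry of `block["data"]["data"]` is a base64-encoded transaction envelope (an assumption about the export format, not the raw protobuf), per-transaction sizes can simply be measured per entry:

```python
import base64

# Two fake envelopes of different lengths standing in for real transactions.
block = {"data": {"data": [
    base64.b64encode(b"x" * 512).decode(),   # e.g. a small tx
    base64.b64encode(b"x" * 2048).decode(),  # e.g. a tx with a larger RW-set
]}}

# Endorsements, creator certs, and read/write sets differ per tx, so these vary;
# batch-size limits (PreferredMaxBytes etc.) bound the batch, not a fixed per-tx size.
sizes = [len(base64.b64decode(env)) for env in block["data"]["data"]]
```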
@kostas how can I add a new consortium to the system channel? That is, what policy needs to be satisfied by default?
Can you tell me the location of the source code?
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=EBXykpp56FGK2HLbz) @jyellick I need to modify orderer genesis.block only right ?
@patelan Technically you are modifying the channel creation transaction, which the orderer converts into a genesis block
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=fgh9vpzuLKt595wiL) @jyellick Sorry, I didn't get you. Actually I have pre-generated channel1.tx, channel2.tx and genesis.block. Do I just need to modify the policy in channel1.tx and channel2.tx, or in genesis.block, using configtxlator?
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=ahW4SaZzqafYZcL4z) Thanks, it is working fine after modifying the channel1.tx policy. I used this as a reference for modifying the channel: https://fabric-sdk-node.github.io/tutorial-channel-create.html
The orderer cannot connect to the Kafka cluster, failing with the error below. Can I ask how this happens?
`2018-05-18 00:37:48.326 UTC [orderer/kafka] processMessagesToBlocks -> ERRO fb1 [channel: mychannel] Error during consumption: kafka: error while consuming mychannel/0: kafka: error decoding packet: invalid byteslice length`
full log: https://hastebin.com/abevuvuwes.vbs
Hi.
I having problems with channel creation. When I run the following command from my peer1.org1:
`export CORE_PEER_MSPCONFIGPATH=/etc/hyperledger/peer/admin/msp; peer channel create -o orderer1.ordererorg1:7050 -c businesschannel -f /etc/hyperledger/keyfiles/business channel.tx --tls true --timeout 120 --cafile [/etc/hyperledger/keyfiles/ordererorg1-ca-chain.pem](https://pastebin.com/Uf45fsYF)`
I get the following output: ``` 2018-05-18 08:54:33.577 UTC [msp] GetLocalMSP -> DEBU 001 Returning existing local MSP
2018-05-18 08:54:33.577 UTC [msp] GetDefaultSigningIdentity -> DEBU 002 Obtaining default signing identity
Error: failed to create deliver client: orderer client failed to connect to orderer1.ordererorg1:7050: failed to create new connection: context deadline exceeded
2018-05-18 08:54:36.578 UTC [grpc] Printf -> DEBU 003 grpc: addrConn.resetTransport failed to create client transport: connection error: desc = "transport: Error while dialing dial tcp: operation was canceled"; Reconnecting to {orderer1.ordererorg1:7050
Hi.
I having problems with channel creation. When I run the following command from my peer1.org1:
```export CORE_PEER_MSPCONFIGPATH=/etc/hyperledger/peer/admin/msp; peer channel create -o orderer1.ordererorg1:7050 -c businesschannel -f /etc/hyperledger/keyfiles/business channel.tx --tls true --timeout 120 --cafile [/etc/hyperledger/keyfiles/ordererorg1-ca-chain.pem](https://pastebin.com/Uf45fsYF)```
I get the following output: ``` 2018-05-18 08:54:33.577 UTC [msp] GetLocalMSP -> DEBU 001 Returning existing local MSP
2018-05-18 08:54:33.577 UTC [msp] GetDefaultSigningIdentity -> DEBU 002 Obtaining default signing identity
Error: failed to create deliver client: orderer client failed to connect to orderer1.ordererorg1:7050: failed to create new connection: context deadline exceeded
2018-05-18 08:54:36.578 UTC [grpc] Printf -> DEBU 003 grpc: addrConn.resetTransport failed to create client transport: connection error: desc = "transport: Error while dialing dial tcp: operation was canceled"; Reconnecting to {orderer1.ordererorg1:7050
Hi.
I having problems with channel creation. When I run the following command from my peer1.org1:
```export CORE_PEER_MSPCONFIGPATH=/etc/hyperledger/peer/admin/msp; peer channel create -o orderer1.ordererorg1:7050 -c businesschannel -f /etc/hyperledger/keyfiles/business channel.tx --tls true --timeout 120 --cafile /etc/hyperledger/keyfiles/ordererorg1-ca-chain.pem```
I get the following output: ``` 2018-05-18 08:54:33.577 UTC [msp] GetLocalMSP -> DEBU 001 Returning existing local MSP
2018-05-18 08:54:33.577 UTC [msp] GetDefaultSigningIdentity -> DEBU 002 Obtaining default signing identity
Error: failed to create deliver client: orderer client failed to connect to orderer1.ordererorg1:7050: failed to create new connection: context deadline exceeded
2018-05-18 08:54:36.578 UTC [grpc] Printf -> DEBU 003 grpc: addrConn.resetTransport failed to create client transport: connection error: desc = "transport: Error while dialing dial tcp: operation was canceled"; Reconnecting to {orderer1.ordererorg1:7050
Hi.
I having problems with channel creation. When I run the following command from my peer1.org1:
`export CORE_PEER_MSPCONFIGPATH=/etc/hyperledger/peer/admin/msp; peer channel create -o orderer1.ordererorg1:7050 -c businesschannel -f /etc/hyperledger/keyfiles/business channel.tx --tls true --timeout 120 --cafile /etc/hyperledger/keyfiles/ordererorg1-ca-chain.pem`
I get the following output: ``` 2018-05-18 08:54:33.577 UTC [msp] GetLocalMSP -> DEBU 001 Returning existing local MSP
2018-05-18 08:54:33.577 UTC [msp] GetDefaultSigningIdentity -> DEBU 002 Obtaining default signing identity
Error: failed to create deliver client: orderer client failed to connect to orderer1.ordererorg1:7050: failed to create new connection: context deadline exceeded
2018-05-18 08:54:36.578 UTC [grpc] Printf -> DEBU 003 grpc: addrConn.resetTransport failed to create client transport: connection error: desc = "transport: Error while dialing dial tcp: operation was canceled"; Reconnecting to {orderer1.ordererorg1:7050
Hi.
I having problems with channel creation. When I run the following command from my peer1.org1:
`export CORE_PEER_MSPCONFIGPATH=/etc/hyperledger/peer/admin/msp; peer channel create -o orderer1.ordererorg1:7050 -c businesschannel -f /etc/hyperledger/keyfiles/business channel.tx --tls true --timeout 120 --cafile /etc/hyperledger/keyfiles/ordererorg1-ca-chain.pem`
I get the following output: ``` 2018-05-18 08:54:33.577 UTC [msp] GetLocalMSP -> DEBU 001 Returning existing local MSP
2018-05-18 08:54:33.577 UTC [msp] GetDefaultSigningIdentity -> DEBU 002 Obtaining default signing identity
Error: failed to create deliver client: orderer client failed to connect to orderer1.ordererorg1:7050: failed to create new connection: context deadline exceeded
2018-05-18 08:54:36.578 UTC [grpc] Printf -> DEBU 003 grpc: addrConn.resetTransport failed to create client transport: connection error: desc = "transport: Error while dialing dial tcp: operation was canceled"; Reconnecting to {orderer1.ordererorg1:7050
Hi.
I having problems with channel creation. When I run the following command from my peer1.org1:
`export CORE_PEER_MSPCONFIGPATH=/etc/hyperledger/peer/admin/msp;
peer channel create -o orderer1.ordererorg1:7050 -c businesschannel -f /etc/hyperledger/keyfiles/business channel.tx --tls true --timeout 120 --cafile /etc/hyperledger/keyfiles/ordererorg1-ca-chain.pem`
I get the following output: ``` 2018-05-18 08:54:33.577 UTC [msp] GetLocalMSP -> DEBU 001 Returning existing local MSP
2018-05-18 08:54:33.577 UTC [msp] GetDefaultSigningIdentity -> DEBU 002 Obtaining default signing identity
Error: failed to create deliver client: orderer client failed to connect to orderer1.ordererorg1:7050: failed to create new connection: context deadline exceeded
2018-05-18 08:54:36.578 UTC [grpc] Printf -> DEBU 003 grpc: addrConn.resetTransport failed to create client transport: connection error: desc = "transport: Error while dialing dial tcp: operation was canceled"; Reconnecting to {orderer1.ordererorg1:7050
Hi.
I having problems with channel creation. When I run the following command from my peer1.org1:
```export CORE_PEER_MSPCONFIGPATH=/etc/hyperledger/peer/admin/msp;
peer channel create -o orderer1.ordererorg1:7050 -c businesschannel -f /etc/hyperledger/keyfiles/business channel.tx --tls true --timeout 120 --cafile /etc/hyperledger/keyfiles/ordererorg1-ca-chain.pem```
I get the following output: ``` 2018-05-18 08:54:33.577 UTC [msp] GetLocalMSP -> DEBU 001 Returning existing local MSP
2018-05-18 08:54:33.577 UTC [msp] GetDefaultSigningIdentity -> DEBU 002 Obtaining default signing identity
Error: failed to create deliver client: orderer client failed to connect to orderer1.ordererorg1:7050: failed to create new connection: context deadline exceeded
2018-05-18 08:54:36.578 UTC [grpc] Printf -> DEBU 003 grpc: addrConn.resetTransport failed to create client transport: connection error: desc = "transport: Error while dialing dial tcp: operation was canceled"; Reconnecting to {orderer1.ordererorg1:7050
Hi.
I having problems with channel creation. When I run the following command from my peer1.org1:
```export CORE_PEER_MSPCONFIGPATH=/etc/hyperledger/peer/admin/msp;
peer channel create -o orderer1.ordererorg1:7050 -c businesschannel -f /etc/hyperledger/keyfiles/business channel.tx --tls true --timeout 120 --cafile [/etc/hyperledger/keyfiles/ordererorg1-ca-chain.pem]((https://pastebin.com/Uf45fsYF))```
I get the following output: ``` 2018-05-18 08:54:33.577 UTC [msp] GetLocalMSP -> DEBU 001 Returning existing local MSP
2018-05-18 08:54:33.577 UTC [msp] GetDefaultSigningIdentity -> DEBU 002 Obtaining default signing identity
Error: failed to create deliver client: orderer client failed to connect to orderer1.ordererorg1:7050: failed to create new connection: context deadline exceeded
2018-05-18 08:54:36.578 UTC [grpc] Printf -> DEBU 003 grpc: addrConn.resetTransport failed to create client transport: connection error: desc = "transport: Error while dialing dial tcp: operation was canceled"; Reconnecting to {orderer1.ordererorg1:7050
Hi.
I having problems with channel creation. When I run the following command from my peer1.org1:
```export CORE_PEER_MSPCONFIGPATH=/etc/hyperledger/peer/admin/msp;
peer channel create -o orderer1.ordererorg1:7050 -c businesschannel -f /etc/hyperledger/keyfiles/business channel.tx --tls true --timeout 120 --cafile [/etc/hyperledger/keyfiles/ordererorg1-ca-chain.pem](https://pastebin.com/Uf45fsYF)```
I get the following output: ``` 2018-05-18 08:54:33.577 UTC [msp] GetLocalMSP -> DEBU 001 Returning existing local MSP
2018-05-18 08:54:33.577 UTC [msp] GetDefaultSigningIdentity -> DEBU 002 Obtaining default signing identity
Error: failed to create deliver client: orderer client failed to connect to orderer1.ordererorg1:7050: failed to create new connection: context deadline exceeded
2018-05-18 08:54:36.578 UTC [grpc] Printf -> DEBU 003 grpc: addrConn.resetTransport failed to create client transport: connection error: desc = "transport: Error while dialing dial tcp: operation was canceled"; Reconnecting to {orderer1.ordererorg1:7050
Hi.
I am having problems with channel creation. When I run the following command from my peer1.org1:
```export CORE_PEER_MSPCONFIGPATH=/etc/hyperledger/peer/admin/msp;
peer channel create -o orderer1.ordererorg1:7050 -c businesschannel -f /etc/hyperledger/keyfiles/business channel.tx --tls true --timeout 120 --cafile /etc/hyperledger/keyfiles/ordererorg1-ca-chain.pem```
cafile: https://pastebin.com/Uf45fsYF
I get the following output: ``` 2018-05-18 08:54:33.577 UTC [msp] GetLocalMSP -> DEBU 001 Returning existing local MSP
2018-05-18 08:54:33.577 UTC [msp] GetDefaultSigningIdentity -> DEBU 002 Obtaining default signing identity
Error: failed to create deliver client: orderer client failed to connect to orderer1.ordererorg1:7050: failed to create new connection: context deadline exceeded
2018-05-18 08:54:36.578 UTC [grpc] Printf -> DEBU 003 grpc: addrConn.resetTransport failed to create client transport: connection error: desc = "transport: Error while dialing dial tcp: operation was canceled"; Reconnecting to {orderer1.ordererorg1:7050
```
The command finishes in about 3 seconds and the error is context deadline exceeded. Shouldn't my timeout flag increase the deadline, or is it for something else?
Now, after running the command about 20 times (and getting slightly different errors, always about the connection being cancelled or timing out), it somehow worked. It looks like the connection between my hosts is not the best.
But still, is there a way to increase the timeout or let the connection retry for a longer time?
So, I get the connection error about 4 out of 5 times I run the command, but sometimes it seems to create the channel regardless (I then get the /Channel/Application at version 0, but got version 1 error).
I have also found out that my DNS seems to be slow - up to 3 seconds to resolve a domain name - which looks like it is too long, so the connection timeout error is thrown. Does anyone know if there is a way to increase the wait time, or if this is a bug?
> The command finishes in about 3 seconds and the error is context deadline exceeded. Shouldn't my timeout flag increase the deadline or is it for something else?
@SimonOberzan The timeout is how long to wait, after successfully connecting, for the block to appear. In your case, it looks like you simply cannot contact the orderer.
Based on your log output, it looks to me like you are having some sort of networking connection issue between the machines. It is like the socket opens, but no data can flow over it.
https://chat.hyperledger.org/channel/fabric-orderer?msg=nQeoPNRSDC4gswFN3
@Ryan2 Is it possible you used a sample client or something other than an orderer process to write data to this partition?
@jyellick Yeah, that is the line that causes problems: `Error: failed to create deliver client: orderer client failed to connect to orderer1.ordererorg1:7050: failed to create new connection: context deadline exceeded`. I am almost sure that the problem is in the DNS. When I ping my orderer1.ordererorg1 it usually takes about 2 seconds before the first ping gets through, but then all the pings have low latency and go through normally. When pinging by container IP (I'm using flannel) the pings are instantaneous.
Don't you think there should at least be a flag to control the deadline, rather than simply cancelling the connection so quickly? After all, if I try the command several times it works sooner or later, but that messes with my Ansible routine.
@SimonOberzan It could be a nice improvement, you're certainly welcome to open a JIRA item and advocate for it. Though in your particular case, I'd work on fixing the name resolution issues, as you will likely find yourself with overall throughput problems eventually
Actually, it looks like it may already be there
https://github.com/hyperledger/fabric/blob/release-1.1/peer/common/peerclient.go#L31
Appears to be hardcoded to 3 seconds. Would be fairly easy to make tuneable though
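Until that constant is tunable, a small retry wrapper is a practical workaround for the fixed dial timeout - a sketch only, with the `peer channel create` invocation (paths and addresses from the question above) purely illustrative:

```shell
# retry MAX CMD [ARGS...] : re-run CMD until it succeeds or MAX attempts pass.
retry() {
  max="$1"; shift
  n=1
  until "$@"; do
    if [ "$n" -ge "$max" ]; then
      echo "giving up after $n attempts" >&2
      return 1
    fi
    n=$((n + 1))
    sleep 1
  done
}

# Example (substitute your own command and paths):
# retry 10 peer channel create -o orderer1.ordererorg1:7050 -c businesschannel \
#   -f /etc/hyperledger/keyfiles/businesschannel.tx --tls true \
#   --cafile /etc/hyperledger/keyfiles/ordererorg1-ca-chain.pem
```

Each failed dial burns ~3 seconds, so 10 attempts bounds the wait at roughly half a minute.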
@jyellick Oh nice, should I just go ahead and open a ticket?
@SimonOberzan Please do. Open a JIRA "Improvement", set the component to be 'fabric-peer', add a label of 'help-wanted' and set the fix-version to v1.3
If you want to paste the JIRA link here once you've created it, I can check to make sure the assorted bureaucratic bits are set properly
Ok, will do.
https://jira.hyperledger.org/browse/FAB-10211
Hi @jyellick
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=SJ4z5TJFPBBsDfnkZ)
I performed a load test on the fabric network through the Node SDK, and this error occurred.
Anyway, for this kind of error, is there any way to continue working on the current ledger, or do I have to build a new one?
Has joined the channel.
Hi,
I am new to Hyperledger Fabric. I am able to start the network and use Caliper, but I want to benchmark the individual components. Can anyone guide me on this for the orderer? How do I benchmark the orderer? I tried using the fabric-test OTE, but I only see around 1650 TPS for a solo orderer, which seems way too low.
Also, as described on the page https://github.com/hyperledger/fabric/tree/release-1.1/orderer, in the "Experimenting with the Orderer service" section: when I try to run the orderer binary, I get the following error:
[orderer/common/server] Main -> ERRO 001 failed to parse config: Error reading configuration: Unsupported Config Type ""
I tried the StackOverflow suggestion of setting FABRIC_CFG_PATH=$PWD, but this was of no help!
Thanks in advance!
@DivyaAgrawal Please make sure you have set `FABRIC_CFG_PATH` to point to the directory containing the `orderer.yaml`, that is responsible for the error you are seeing
@jyellick Does the directory containing `orderer.yaml` need to be the same as the one containing the binary?
No
If you downloaded the v1.1.0 binaries, there should be a `bin` dir and a `config` dir, you should set this variable to point at the latter
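Concretely, assuming the standard v1.1.0 download layout described above (a `bin` dir and a `config` dir side by side), that looks like:

```shell
# FABRIC_CFG_PATH must point at the directory holding orderer.yaml;
# the orderer binary itself can live anywhere.
export FABRIC_CFG_PATH="$PWD/config"
# ./bin/orderer    # then launch (commented out in this sketch)
```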
@jyellick Thanks have resolved this, but having some MSP errors now. Will try to resolve them . If I am not successful , will post my findings and questions again.
Can you also guide me on how to benchmark individual components? That would be really helpful.
@DivyaAgrawal Have you looked at this paper https://arxiv.org/pdf/1801.10228.pdf ?
yes
If you are looking to benchmark the orderer directly, I would recommend looking at the sample clients provided in the source tree. There are only two APIs exposed, `Broadcast` and `Deliver`, you may benchmark them separately, or together.
@jyellick Actually I wanted to benchmark each individual component and the complete system to get to the results given in the paper. I thought of starting with the Orderer as I felt that would be the simplest of all the components !
Running purely on my local laptop, if I execute:
```ORDERER_GENERAL_GENESISPROFILE=SampleSingleMSPSolo orderer
```
and then:
```./broadcast_msg -server 127.0.0.1:7050 -size 1024 -messages 10000 -goroutines 8
```
```10000 / 10000 [========================================================================================] 100.00% 1074/s 9s
----------------------broadcast message finish-------------------------------
```
So, using the same CPU for both generating signatures and verifying them, I see local throughput on my laptop at about 1000 txps. As this is CPU bound, splitting the task across hosts should at least double performance. Using real server hardware or hardware accelerated crypto instead of a laptop would likely also help.
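To see where the CPU-bound knee is, one can sweep the `-goroutines` flag of the sample client shown above - a sketch; `BROADCAST_BIN` and `ORDERER_ADDR` are placeholders for your built `broadcast_msg` binary and orderer address:

```shell
BROADCAST_BIN="${BROADCAST_BIN:-./broadcast_msg}"
ORDERER_ADDR="${ORDERER_ADDR:-127.0.0.1:7050}"

# Run the same 10k-message load at increasing concurrency and compare the
# reported msgs/s; throughput should flatten once all cores are saturated.
sweep() {
  for g in 1 2 4 8 16; do
    echo "=== goroutines=$g ==="
    "$BROADCAST_BIN" -server "$ORDERER_ADDR" -size 1024 -messages 10000 -goroutines "$g" \
      || echo "(broadcast_msg not runnable here, skipping)"
  done
}
sweep
```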
Thanks a lot!
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=F5o3RtvbauRkuMjq8) These are the steps I followed: I downloaded the binaries for v1.1.0 and set `FABRIC_CFG_PATH` to the `config` dir. When I run ./bin/orderer I get a "Failed to initialize local MSP" error: no file or directory in ../config/msp/signcerts. Am I missing a step?
@DivyaAgrawal You must generate crypto material first and modify your config file to point to it, you may use the `cryptogen` tool, or `fabric-ca`
If you are not already familiar with these concepts, I suggest you begin by looking at the fabric samples, like http://hyperledger-fabric.readthedocs.io/en/release-1.1/build_network.html
kafka
Hi @jyellick, if we want to add an organization, we may add it directly to the channel configuration. We may also be able to add the organization to the consortium in the system channel, so that it is included in new channels, right?
@Glen Adding the organization to the orderer system channel allows that organization to create channels and be included in new channels at genesis. An organization may always be added a channel which already exists.
yes, if we use that consortium as the profile for new channel
I'm reading the fabric doc http://hyperledger-fabric.readthedocs.io/en/latest/membership/membership.html#msp-levels - it mentions a network MSP; can I interpret that as the MSP of the system channel?
Hi all! I've set up a fabric installation with 2 Kafka brokers,
but I'm encountering memory problems in the Kafka Docker container.
Has anyone experienced the same problems?
Hi, my network has a mychannel channel with RF=3 and testchainid with RF=1; the leader node for both topics is on broker#1.
When I stopped broker#1, mychannel changed leader to another node and the fabric network still worked fine, but I saw in the orderer log that the orderer kept trying to connect to the testchainid topic.
Can I ask what the role of testchainid is in the fabric network?
As I understand it, testchainid is a system chaincode.
In my scenario, is it normal or abnormal that the testchainid channel does not work while my working channel (mychannel) still works?
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=WSRyjv4JHKkdQ5TD3) @jyellick Thanks. Had generated the crypto material but had not modified the config file.
> it mentions network msp, can I interpret it as the msp of the system channel?
@Glen yes, I think that's an accurate characterization
@Ryan2 `testchainid` is the default channel name used for the orderer system channel. It is used by the orderers to orchestrate channel creation, it has nothing to do with chaincode (system or otherwise)
Thank you @jyellick. What will happen if `testchainid` is lost - I mean, if the testchainid topic becomes corrupted? Will I be unable to create new channels? Is there any impact on ordering service performance?
You will not be able to create new channels if the orderer system channel is corrupted
thank you
To bring `SSL` to the communication between Kafka and the orderer: if the orderer is running in Docker, we update Kafka.TLS in the orderer.yaml file (https://github.com/yacovm/fabricDeployment/blob/master/orderer.yaml#L200).
However, if my orderer is running as a binary, where can I configure OSN-to-Kafka communication over SSL?
Thanks
@Ryan2 I'm not sure I understand your question. I believe all versions of the orderer binary support communicating with Kafka over TLS @sanchezl
The document https://hyperledger-fabric.readthedocs.io/en/latest/kafka.html states:
`and set the keys under Kafka.TLS in orderer.yaml on every OSN accordingly.`
But how do I set up the orderer binary to communicate with Kafka over TLS? I think the orderer binary is not configured via orderer.yaml.
Specifically, according to https://github.com/yacovm/fabricDeployment/blob/master/orderer.yaml#L199, I see some variables that need to be specified.
If the orderer binary communicates with Kafka over TLS, which variables are equivalent to these options in orderer.yaml?
When using the docker images, you can override the values in the default `orderer.yaml` via environment variables.
Greetings to all. I am trying to perform a channel update in my local fabric setup. I know I need to use configtxgen to generate a config update transaction, but I can't find the proper command to use. outputBlock generates a genesis block, outputCreateChannelTx generates a channel creation tx, and outputAnchorPeersUpdate is meant for anchor peers, but I don't see anything to create the channel update tx. How should I use configtxgen to create such a tx?
Can you show me which environment variables I need to use?
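For what it's worth, the orderer binary also reads `orderer.yaml` (from `FABRIC_CFG_PATH`), and keys in it can be overridden through `ORDERER_`-prefixed environment variables where dots become underscores. A sketch - the variable names below follow that convention but should be double-checked against your `orderer.yaml`, and the paths are placeholders:

```shell
# Override Kafka.TLS.* from orderer.yaml via the ORDERER_ env mapping
# (Kafka.TLS.Enabled -> ORDERER_KAFKA_TLS_ENABLED, and so on).
export ORDERER_KAFKA_TLS_ENABLED=true
export ORDERER_KAFKA_TLS_PRIVATEKEY_FILE=/path/to/client-key.pem
export ORDERER_KAFKA_TLS_CERTIFICATE_FILE=/path/to/client-cert.pem
export ORDERER_KAFKA_TLS_ROOTCAS_FILE=/path/to/kafka-ca.pem
```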
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=Wd3Cfx4ttt3y1EcYpQ)
Anyone know the solution of this?
```2018-05-24 20:28:24.464 IST [orderer/consensus/kafka] try -> DEBU 32b [channel: testchainid] Need to retry because process failed = kafka server: In the middle of a leadership election, there is currently no leader for this partition and hence it is unavailable for writes.
```
Orderer showing this
@jyellick
> Greetings to all. I am trying to perform a channel update command in my local fabric setup. I know I need to use configtxgen to generate a config update transaction, but I can't find the proper command to use. outputBlock is to generate a genesisblock, is to generate a channel creation tx, and outputAnchorPeersUpdate is meant for an anchor peer, but I don't see anything to create the channel update tx. How should I usr configtxgen to create such tx?
@snakejerusalem `configtxgen` does not support generating reconfiguration transactions yet, if you want to reconfigure, you must use `configtxlator` which is a more powerful, but slightly less friendly tool, see http://hyperledger-fabric.readthedocs.io/en/latest/config_update.html
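The flow from that doc, condensed into a sketch (channel name, orderer address, and the edit step are placeholders; the function is only defined here, not run):

```shell
# Sketch of the configtxlator reconfiguration round-trip described in the
# config_update doc; requires peer, configtxlator, and jq on the PATH.
update_channel_config() {
  CH="mychannel"; ORD="orderer.example.com:7050"    # placeholders
  peer channel fetch config config_block.pb -o "$ORD" -c "$CH"
  configtxlator proto_decode --input config_block.pb --type common.Block \
    | jq .data.data[0].payload.data.config > config.json
  # ... edit config.json into modified_config.json here (e.g. add an org) ...
  configtxlator proto_encode --input config.json --type common.Config --output config.pb
  configtxlator proto_encode --input modified_config.json --type common.Config --output modified_config.pb
  configtxlator compute_update --channel_id "$CH" \
    --original config.pb --updated modified_config.pb --output update.pb
  # decode update.pb, wrap it in an Envelope, re-encode, collect signatures, then:
  peer channel update -f update_in_envelope.pb -c "$CH" -o "$ORD"
}
```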
@pankajcheema This indicates that your Kafka cluster is either still starting up or is misconfigured
@jyellick thanks!
@jyellick any idea?
3 zookeepers are working
@pankajcheema I already replied to your question
4 kafkas are up
@jyellick any specific idea about it?
No. This is a Kafka problem, not a Fabric one. Please use the Kafka sample clients to diagnose it
ok thanks @jyellick
Has joined the channel.
Has joined the channel.
Why, when instantiating chaincode on channel `mychannel`, does the peer create databases with names like `...database mychannel_$s$m$s_$c$c_001...` and `mychannel_`?
On fabric 1.0.2 the database name is the same as the channel name, with no additional characters.
@Ryan2 You must be speaking of the couchdb state database. Please move any further conversation to #fabric-ledger . But to quickly answer your question: as of v1.1 there is a channel_ database for channel metadata and a channel_chaincode database per chaincode, to isolate each chaincode's data into its own database. Upon upgrade to v1.1 you'll see the conversion is done automatically. Chaincode names allow uppercase characters, while couchdb database names do not. The dollar signs indicate a character has been lowercased. To avoid that, use lowercase chaincode names. That being said, the database name is an implementation detail that the user should not be concerned with.
thank you @dave.enyeart for the explanation , sorry for not put question on the right place.
Hi just wondering, are the default policies of genesis blocks (consortium writers ...) generated with 1.0.x binaries similar to genesis blocks generated with 1.1 binaries?
@david_dornseifer Yes, the default policies have not changed between v1.0.x and v1.1 `configtxgen`
You'll see an upcoming warning in `configtxgen` v1.2 which announces the deprecation of default policies, and an updated `configtx.yaml` which makes them explicitly defined
Has joined the channel.
Has joined the channel.
Has joined the channel.
Has joined the channel.
Has joined the channel.
@jyellick Thanks for the answer. I'm facing the problem right now that config updates are rejected by our HLF 1.1 orderers that belong to an orderer org that was added to the network afterwards. If I do the same process (add an orderer org to an existing network, inject an update) running the orderers on 1.0.x binaries, we are good. To create the update I'm running configtxlator in version 1.1.
Has joined the channel.
Hi. I am new to Hyperledger Fabric and I have a question about the orderer. My understanding is that in the present setup there is a single central ordering service (with multiple OSNs for CFT using Kafka) serving multiple channels in the fabric network. Is it possible to have a channel-specific ordering service, to ensure that a channel's data is not visible outside the channel?
Has joined the channel.
Hi, regarding security measures: does this mean we need to configure the orderer to authenticate to the Kafka cluster via Kerberos, plus authentication between broker and zookeeper, or do we just need to encrypt communication between the OSNs and Kafka?
From https://hyperledger-fabric.readthedocs.io/en/latest/kafka.html - `Set up the OSNs and Kafka cluster so that they communicate over SSL` - I'm not sure what the author means in this regard.
why would you want to authenticate with kerberos?
Hi Experts
Can you please look into the issue I have found?
Hyperledger fabric nodejs sdk returns success message even if orderers are down
https://stackoverflow.com/questions/50571107/hyperledger-fabric-nodejs-sdk-returns-success-message-even-if-orderers-are-down
Thanks
Has joined the channel.
hi @yacovm , based on this material https://www.confluent.io/blog/apache-kafka-security-authorization-authentication-encryption/
I want to bring the full set of Kafka security features into the fabric network.
But I wonder whether encrypting network traffic via TLS between the brokers and the orderer is enough, or whether I need to do more than that?
do you have a kerberos server in your organization?
Yes, I have
well... do you trust the people that manage it?
anyway
I'm not sure what to tell you, it's a complicated answer
you need TLS anyway to secure the communication
kerberos is only for authentication, right?
so, if you have also kerberos - it adds another part to the system.
what if the kerberos server goes down? etc.
I guess all of these should be taken into account... also I'm not sure if the sarama library even supports Kerberos, but @kostas might know
> I'm facing the problem right now that configupdates are rejected by our HLF 1.1 Orderers that belong to an Orderer org that has been added the the network afterwards. If I do the same process (add orderer org to existing network, injecting an update) running the Orderer on 1.0.x binaries we are good. To create the update I'm running the configtxlator in version 1.1
@david_dornseifer Are you perhaps submitting updates to the orderer system channel? In v1.0.x, non-orderer users may submit config updates to the orderer system channel. This was remedied in v1.1, so now the application user must sign the config update, while the orderer user submits it.
> My understanding is that in the present setup there is a single central ordering service(with Multiple OSN’s for CFT using Kafka) serving multiple channels in the fabric network. Is it possible to have a channel's specific ordering service to ensure that a channel’s data is not visible outside the channel?
@sarapara Your understanding is correct. It is not currently possible to have different sets of orderers serve different channels. You could likely achieve the same result with multiple orderer networks however.
@Ryan2 It's hard to say without knowing more about your deployment strategy, but often it is more effective to use simple network partitioning to prevent unauthorized access. If your Kafka cluster is only accessible to the orderer processes, then authentication and possibly even encryption may be unnecessary. What additional value do you see kerberos adding over, say, mutual TLS authentication?
@pankajcheema Please do not cross post. I see you already posted your question to #fabric-sdk-node which is the correct venue for this question. Also, FYI, it is a national holiday today in the United States, so very few people are working.
Sorry @jyellick
Has left the channel.
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=r6P8ALRLjuPzY5eHY) @jyellick thx, the peer org signature was missing
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=j72zd34jEao4vSEQn) @jyellick Thank you for the clarification. Can you please explain what you mean by Multiple orderer networks?
@sarapara For instance, if you bootstrap two solo orderers, you have bootstrapped two separate networks. Each of these orderers may host different sets of channels and will be unaware of each other.
Is it advisable to run one orderer network per channel? If an org is part of two channels and there are two separate orderer networks, one per channel, can I still do cross-channel communication? What are the disadvantages of running orderer networks per channel?
I can say that although this sort of configuration is envisioned to work in the long term, I do not think it has seen much testing, and I suspect there are bugs lurking. It depends of course on your requirements, but in my evaluation most scenarios can be implemented with just a single ordering network. If you are concerned about the orderer having visibility into the transaction contents, you might want to look at SideDB, which hides the transaction contents from the orderer.
Has joined the channel.
Has joined the channel.
Hello, I have a little issue I need help with. I have 3 orderers in my local network and two of them went down. In the time it took to bring them back up, the Kafka logs were no longer available from the point at which the orderers' blocks were the same. So right now I don't know how to go about syncing the blocks.
@eetti Specifically which channels have the Kafka logs expired for?
@jyellick testchainid and my custom channel
Okay. So, it is possible that if you restart your remaining orderer, that it may not be able to reconnect. So first, I recommend that you:
1. Create a new channel. This will create a new block on testchainid, and ensure that the correct offset is reachable.
2. Commit a new transaction on your custom channel. This will likewise ensure that the correct offset is reachable.
Once you have done both of these things, you may safely shut down your up to date orderer, copy its ledger directory (usually `/var/hyperledger/production/orderer`) and start it back up, and the system should continue functioning correctly. Then, you may take the copy of this ledger, and replace the ledger of the two orderers which were down with this copy. When you start these two remaining orderers, they should start successfully and you should be back up and working.
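The copy step above could look roughly like this. This is a stand-in sketch only: a temp directory substitutes for the default `/var/hyperledger/production/orderer` path, and container setups would use `docker cp` instead of plain `tar`. Stop the up-to-date orderer before archiving.

```shell
# Stand-in for the real ledger path /var/hyperledger/production/orderer
LEDGER=/tmp/demo-ledger/orderer
mkdir -p "$LEDGER/chains/mychannel"
echo block-data > "$LEDGER/chains/mychannel/blockfile_000000"

# 1. With the up-to-date orderer stopped, archive its ledger directory:
tar czf /tmp/orderer-ledger.tgz -C "$(dirname "$LEDGER")" "$(basename "$LEDGER")"

# 2. On each lagging orderer, restore the archive over its ledger path
#    (here a demo destination), then start that orderer back up:
DEST=/tmp/demo-restore
mkdir -p "$DEST"
tar xzf /tmp/orderer-ledger.tgz -C "$DEST"
diff -r "$LEDGER" "$DEST/orderer" && echo ledger-copy-ok
```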
Then, I'd recommend that you disable Kafka log expiration.
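As a sketch of the broker-side settings involved, the Fabric Kafka guide recommends something along these lines in each broker's `server.properties`; `log.retention.ms=-1` is the key one for disabling expiration, and the remaining values are the commonly recommended ones (adjust replica counts to your cluster size):

```
log.retention.ms=-1
unclean.leader.election.enable=false
min.insync.replicas=2
default.replication.factor=3
message.max.bytes=103809024
replica.fetch.max.bytes=103809024
```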
@jyellick Thank you. I will try that out.
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=5TRJyAQtJ4aeybKfn) @jyellick The scenario I am thinking of is having country-specific channels for GDPR compliance, but the problem is with the orderer, since all channels' transactions go through the orderer. Do you think this problem is solvable by running multiple orderer networks?
@sarapara I actually think using SideDB is a much better solution for GDPR compliance. Because the data itself is not written onto the chain, only irreversible hash representations, the orderer will not have any visibility into this data. Similarly, if you must delete data in compliance with a GDPR request, you may delete the private data without breaking the hash chain.
I understand that SideDB feature is only available as an experimental feature in V1.1. Do you know of the timeframe when this feature will be available for production usecase?
Yes, it is part of the v1.2 release, which is scheduled for June
Thank you. This helps, and I will play around with the SideDB experimental feature on v1.1. I could not find much documentation on the SideDB feature; can you please point me to it?
@sarapara the side db docs and sample will be merged soon. In the meantime check these resources:
https://docs.google.com/document/d/1sdfSIyLvoVW_32LXipm8sgs87oCE7D7bK2eCC0sLH9s/edit
https://logs.hyperledger.org/production/vex-yul-hyp-jenkins-3/fabric-docs-build-x86_64/347/html/private_data_tutorial.html
https://gerrit.hyperledger.org/r/#/c/22255/
Has joined the channel.
Hi @jyellick , can we delete an organization from the channel, is that the same as adding an organization?
@Glen certainly, it is the same procedure, but simply deleting the definition rather than adding a new one
Ok
Has joined the channel.
Has joined the channel.
Hi! Is it a good practice to have one orderer service per channel?
Has joined the channel.
@krabradosty In general, no. It's best to have one fault tolerant ordering service to service all channels. Keep in mind if you were to have one ordering service per channel, this would mean deploying 3 orderers, 3 Kafka brokers, and 3 Zookeepers if you did not want to share the Kafka infrastructure.
has anyone tried an orderer service based on Kafka? Getting this error in the orderer logs:
```
2018-05-31 13:32:45.046 UTC [grpc] Printf -> DEBU 0e9 grpc: Server.Serve failed to complete security handshake from "172.18.0.14:50508": tls: first record does not look like a TLS handshake
2018-05-31 13:32:45.054 UTC [grpc] Printf -> DEBU 0ea grpc: Server.Serve failed to complete security handshake from "172.18.0.14:50510": tls: first record does not look like a TLS handshake
```
@jyellick In that case, how can I set up different genesis blocks for different channels? Or is the only way to update the channel configuration right after creation?
@krabradosty You configure your orderer system channel first. This contains the rules and seed data (like crypto material) for creating new channels. Then, you may submit a channel creation transaction which creates a new channel, with a genesis block created by a combination of that seed data and channel creation tx. With appropriate signatures, you may customize any portion of this genesis block.
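To sketch how the seed data and channel creation tx pair up, here is a hypothetical `configtx.yaml` `Profiles` section; profile, consortium, and org names are placeholders, and the `*...Defaults`/`*Org1` anchors are assumed to be defined elsewhere in the file:

```
Profiles:
  # Bootstraps the orderer system channel; the Consortiums section is the
  # seed data later consumed by channel creation transactions.
  OrdererGenesis:
    Orderer:
      <<: *OrdererDefaults
      Organizations:
        - *OrdererOrg
    Consortiums:
      SampleConsortium:
        Organizations:
          - *Org1
  # Consumed by `configtxgen -outputCreateChannelTx` to produce the
  # channel creation tx passed to `peer channel create -f`.
  MyChannel:
    Consortium: SampleConsortium
    Application:
      <<: *ApplicationDefaults
      Organizations:
        - *Org1
```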
@jyellick you mean `-f` flag of `peer channel create`?
> @jyellick you mean `-f` flag of `peer channel create`?
Correct, the `-f` flag takes in a channel creation tx, which is usually generated by `configtxgen`
many thanks!
> has any one tried orderer service based on kafka? Getting this error in orderer logs:
@JayPandya The Kafka-based ordering is well tested and deployed in many places. That error sounds like you are getting a socket connection which is not a true gRPC connection. Can you be more specific about exactly what steps produce that error?
@jyellick - yeah, so before this I was running a Kafka-based ordering service but in non-TLS mode
and now I've added
```
- ORDERER_GENERAL_TLS_ENABLED=true
- ORDERER_GENERAL_TLS_PRIVATEKEY=/var/hyperledger/orderer/tls/server.key
- ORDERER_GENERAL_TLS_CERTIFICATE=/var/hyperledger/orderer/tls/server.crt
- ORDERER_GENERAL_TLS_ROOTCAS=[/var/hyperledger/orderer/tls/ca.crt]
```
^ these configurations
and now it's throwing an error on channel creation
and yeah, sorry, actually I wanted to ask about a Kafka-based ordering service with TLS enabled
Are you trying to enable TLS between the client and the orderer, or between the orderer and Kafka?
not between the orderer and Kafka, I'm just trying to run the fabric network with TLS enabled
I didn't set up SSL for the Kafka brokers here
This is a well tested and supported configuration. I suggest you look at https://github.com/hyperledger/fabric/tree/release-1.1/examples/e2e_cli
okay will try that
Thanks
@jyellick - Tried this - https://github.com/hyperledger/fabric/tree/release-1.1/examples/e2e_cli
but no luck
getting same error
@JayPandya You ran this example exactly as-is with no modification?
@jyellick - Ahh no, I compared the docker files from the example and I'm no longer getting that error, but now I'm getting this:
```
SERVICE_UNAVAILABLE: rejected by Consenter: will not enqueue, consenter for this channel hasn't started yet
```
@JayPandya This either indicates that your Kafka cluster is misconfigured, or you have not waited long enough for the Kafka cluster to start up
Yeah, I don't think the Kafka cluster is misconfigured, because it was working before
@jyellick - Yeah, I didn't wait long enough. After increasing the Fabric timeout it worked.
Thanks :smile:
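For reference, the startup race above is usually tuned through the orderer's Kafka retry window. A hypothetical `orderer.yaml` fragment; the `Kafka.Retry` field names are the real ones, but the values here are just illustrative of "wait longer for a slow-starting broker cluster":

```
Kafka:
  Retry:
    # How long to wait between attempts, and for how long to keep trying,
    # while the cluster is still coming up.
    ShortInterval: 5s
    ShortTotal: 10m
    # After ShortTotal elapses, fall back to a slower retry cadence.
    LongInterval: 5m
    LongTotal: 12h
```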
Hello, I have 3 orderers in my network. Every 12 hours the docker container gets shut down. Is it possible that the orderer node gets shut down due to inactivity?
@jyellick As per the answer you gave yesterday with regard to 2 of 3 orderers being shut down and the testchainid offset error occurring: what steps should I take to fix the missing data if all 3 orderers are down? Thank you
what do the logs say @eetti ?
@yacovm https://pastebin.com/ATcuQJyL
Has joined the channel.
Hi, I updated the BatchSize parameter `max_message_count` from 30 to 20.
But when sending transactions, blocks are created with size = 30.
Any reason for that, and how do I make the BatchSize update take effect?
```
"BatchSize": {
  "mod_policy": "Admins",
  "value": {
    "absolute_max_bytes": 103809024,
    "max_message_count": 20,
    "preferred_max_bytes": 524288
  },
  "version": "2"
},
```
thanks
Has joined the channel.
Hi, this is regarding the signature of the orderer in the block metadata. In the case of multiple OSNs in a network, whose signature will be in the block? If it is that of the OSN cutting the block, will that not make the blocks in the network different (though only by the signature)? Please help.
@ashishchainworks: It would make them different. But the metadata field of each block is not included in the block's hash, so the hash chain is the same across the network. (Please don't double post.)
Thanks kostas
@eetti If all three orderers are unable to connect because Kafka offsets have expired, then you do not have many options. At some point I would like to put out a tool to help fix broken blockchains like this, but if you have lost data then by design, the system cannot start normally.
@Ryan2 What version of fabric are you running?
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=EKaAtdL6k4P3J9qzm) @jyellick That would be a huge issue if I can't start the orderer.
@eetti As a blockchain, Fabric will not proceed in a fashion where transactions that have been committed would be discarded (causing a fork). If your Kafka/ZK data has expired and none of your orderers are working, then you have experienced data loss, and you must essentially manually induce a fork in your blockchain.
@jyellick Is there any documentation I can read up on how to induce a fork in this case?
As I mentioned, this is something I've wanted to write a tool for, but unless you wish to learn the fabric data structures intimately, it is not something I would recommend you attempt
@jyellick Okay.
Hi @jyellick
I'm using fabric v1.1.0:
https://chat.hyperledger.org/channel/fabric-orderer?msg=SKrCwe2bvYXwYy4Bw
@Ryan2 So you have updated the max message count to 20, and you are still seeing blocks with 30 transactions in them?
Hi @jyellick, yes. I wanted to do some performance testing, so I updated the BatchSize `max_message_count` from 30 to 20. While sending transactions, I saw blocks created with 30 TXs. I'm running a fabric network with version 1.1.0
What consensus mechanism?
(Kafka or Solo?)
I using Kafka consensus
@Ryan2 I think this is a bug, could you open a JIRA and assign it to me?
hi @jyellick , I just reproduced the issue, and found that one thing leading to this situation is the peer failing with `Failed to update ordering service endpoints`, as shown in the peer log below:
```
2018-06-02 03:29:37.709 UTC [gossip/service] updateEndpoints -> WARN 1299 Failed to update ordering service endpoints, due to Channel with mychannel id was not found
2018-06-02 03:29:37.723 UTC [kvledger] CommitWithPvtData -> INFO 129a Channel [mychannel]: Committed block [3445] with 1 transaction(s)
```
anyway, I opened the ticket, please take a look or reject if invalid:
https://jira.hyperledger.org/browse/FAB-10521
Thanks
Has joined the channel.
Has joined the channel.
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=Ld3P6vj2WQBtjFokc) @mastersingh24 One more query related to an old question: if one PutState() fails in a transaction having multiple PutState() calls, does the developer need to do any kind of rollback manually, or will the hyperledger system take care of undoing the previous PutState()s?
@vu3mmg I think you have the wrong channel. But there should be no need to do any manual rollback. If the transaction fails, it will have no effect on the state.
@jyellick thank you and apologies for selecting wrong channel ..
Hello All,
I have 2 orderers, 3 peers, 4 kafka, 3 zookeepers setup on 13 physical machines and connected to a network using switch.
The issue occurred when I was experimenting with the network: I manually turned off the switch and then turned it on again. I tried to perform write operations on my network but it didn't work. I tried restarting the peers but it still didn't work. Then I restarted the `orderer` and all the previous transactions were written to the blockchain as well as to couchdb. Now the write operations are working. Does anyone know why this happened? It happens to me every time.
@jyellick
@vick
Has joined the channel.
@vu3mmg
I think it's a bug
https://stackoverflow.com/questions/50699633/hyperledger-fabric-need-to-restart-orderer-manually-if-network-connection-if-off
If anyone knows the answer then please post it on stackoverflow
Any `expert` in `orderer` here?
Hi everyone!
My name is Alex Males. I am part of the Hashgraph development community. We are interested in building a PoC to plug the Hashgraph consensus (aBFT) into Fabric's orderer.
Hashgraph is a high-throughput (200k+ TPS) consensus algorithm that has been proven viable for permissioned ledger use cases. There are also plans to launch a public ledger this year.
Can you help me figure out if an integration Fabric Orderer + Hashgraph would be helpful? As I see it, hashgraph needs a good dev toolset/framework (like Composer or chaincode) while Fabric might benefit from BFT provided by Hashgraph. I saw that there is a BFT consensus implementation in the works in hyperledger JIRA.
> Can you help me figure out if an integration Fabric Orderer + Hashgraph would be helpful?
It most certainly would.
We'll be glad to help you out w/ any questions you may have during this integration.
Also in about a week or so, we plan to have a design document of sorts for the upcoming Raft + Fabric orderer integration which may have a few bits in there that are helpful to you as well.
Thank you very much! Is there anyone willing to participate in the effort to create this PoC? I am a Java/Scala developer and it might take me some time to learn golang to implement the required interfaces for the orderer. I can provide the socket on the other side (hashgraph) to make the integration.
I would love to do that, mostly because I want to learn more about Hashgraph, but all of my cycles will be spent on the aforementioned Raft integration. When you're ready to roll, maybe post in the mailing list? We can make sure this gets amplified appropriately. I think that if you come up with a high-level design document explaining what needs to be done (this should be language agnostic), you should be able to rally the troops around this effort more easily.
And as I said, we should be able to clarify how certain things work from the Fabric side of things when you write that design.
@kostas Please let me know when you have time and I can introduce you to hashgraph dev community and may be you'll want to join our weekly zoom call to see what it is about. I am willing to work on an initial design document on hashgraph consensus integration. I am also interested about the Raft + Orderer. Where can I follow the progress on that?
We will post the link to the design doc in the fabric mailing list, and also in #fabric-orderer-dev. I can @-you as well. We're looking forward to helping you out with this.
Great! I'll keep in touch with you on the dev channel
Has joined the channel.
@kostas
I need some help
My orderer does not reconnect to Kafka and stops updating couchdb when network connectivity is restored.
I have to restart the orderer as well, and then it resumes the write operations.
could you please help me?
@jyellick
@pankajcheema it is not possible for anyone to assist you with the detail you have provided. Please rephrase your question in accordance with the [channel guidelines](https://wiki.hyperledger.org/chat_channels/fabric-orderer)
@jrosmith are you able to understand my situation? I am just unplugging and replugging the LAN cable on the orderer. If I unplug the LAN cable from my orderer system and then plug it back into the same system within 3 seconds, the orderer fails to reconnect to the Kafka servers and stops the write operations until I manually restart the orderer process.
@tkuhrt
@jyellick
Guys, you are the admins and moderators. I expect an answer from you guys. If it is a bug, please let me know so that I can report it on JIRA
I can reproduce it multiple times
@pankajcheema
First, as @jrosmith indicated, you are not posting your questions here in a constructive manner, and because of this, you are unlikely to get help from anyone here.
Second, Hyperledger Fabric is an open source project, and everyone here is volunteering their time. I do not believe that English is your first language, but when you say that you expect answers from us, it makes it sound like something you have been promised or entitled to. We all want Hyperledger Fabric to succeed, but we all have many other responsibilities beyond assisting on rocketchat.
In the stack overflow you posted, @kostas asked you if you could reproduce this issue with solo. You never answered him. We cannot assist you if you ignore our requests to troubleshoot. Attempting to reproduce your entire environment and scenario is a significant amount of work. You can help us by devising a way to recreate your issue in a minimal, reproducible way so that we can help you debug. Perhaps start with the fabric-samples/first-network example. If you pause a container and then resume it, do you see the same problem? We are happy to help, but you must help us do it.
yes @jyellick i can understand your responsibility
but I have set up a physical network and purchased 13 computers with high configurations. I tested so many times with docker, no issue there
Can you reproduce with just 2 computers?
One running a solo orderer, and the other running a peer?
no @jyellick, I am running my machines with a kafka and zookeeper ensemble
@pankajcheema We all realize this. I am asking you, to please configure a new network, using two of your machines. One running a solo orderer, the other running a peer. I realize that your full network does not work, I am trying to help you debug.
ok @jyellick
but my project is nearly in production.
I have set up everything.
and they are testing the fault tolerance of the network
@kostas I have really tried this so many times.
I can send you the configuration also.
> but my project is nearly in production.
This is unfortunate, and I understand you're in a bad spot. I truly do. What I'd ask you to understand is that most folks around here who could potentially help you are swamped with work, so you have to meet them in the middle. And the way you do this is by attempting to recreate the problem in the smallest/simplest setup possible. This will take several iterations on your part, and will be time-consuming. But nothing else can work, unfortunately.
It's basically a "help us help you" kind of thing.
@kostas I understand your point; let me try again, as you suggested.
@kostas @jyellick I will try with a single orderer and the simplest network and will get back to you
Thanks
After integrating TLS in the orderer I'm trying to fetch the channel config but getting this error:
```
Error: failed to create deliver client: orderer client failed to connect to
```
I'm trying to add a Peer to an existing network
@JayPandya Are you trying to connect to the orderer by its ip address? For TLS authentication to work, you must either add the IP address to the TLS cert in the IP SANs section, or, you must connect via the hostname listed in that cert.
@jyellick - how can I add it to the IP SANs section? Can you give me an example of that?
btw I've added the IP of the orderer in the `Addresses` section of the `configtx.yaml` file, and in the non-TLS version connecting directly via IP worked before
This is unrelated to `configtx.yaml`, it is your TLS certificate
How did you generate your TLS certs?
```
OrdererOrgs:
# ---------------------------------------------------------------------------
# Orderer
# ---------------------------------------------------------------------------
- Name: Orderer
Domain: example.com
# ---------------------------------------------------------------------------
# "Specs" - See PeerOrgs below for complete description
# ---------------------------------------------------------------------------
Specs:
- Hostname: orderer
# ---------------------------------------------------------------------------
# "PeerOrgs" - Definition of organizations managing peer nodes
# ---------------------------------------------------------------------------
PeerOrgs:
# ---------------------------------------------------------------------------
# Org1
# ---------------------------------------------------------------------------
- Name: Org1
Domain: org1.example.com
# ---------------------------------------------------------------------------
# "Specs"
# ---------------------------------------------------------------------------
# Uncomment this section to enable the explicit definition of hosts in your
# configuration. Most users will want to use Template, below
#
# Specs is an array of Spec entries. Each Spec entry consists of two fields:
# - Hostname: (Required) The desired hostname, sans the domain.
# - CommonName: (Optional) Specifies the template or explicit override for
# the CN. By default, this is the template:
#
# "{{.Hostname}}.{{.Domain}}"
#
# which obtains its values from the Spec.Hostname and
# Org.Domain, respectively.
# ---------------------------------------------------------------------------
# Specs:
# - Hostname: foo # implicitly "foo.org1.example.com"
# CommonName: foo27.org5.example.com # overrides Hostname-based FQDN set above
# - Hostname: bar
# - Hostname: baz
# ---------------------------------------------------------------------------
# "Template"
# ---------------------------------------------------------------------------
# Allows for the definition of 1 or more hosts that are created sequentially
# from a template. By default, this looks like "peer%d" from 0 to Count-1.
# You may override the number of nodes (Count), the starting index (Start)
# or the template used to construct the name (Hostname).
#
# Note: Template and Specs are not mutually exclusive. You may define both
# sections and the aggregate nodes will be created for you. Take care with
# name collisions
# ---------------------------------------------------------------------------
Template:
Count: 5
# Start: 5
# Hostname: {{.Prefix}}{{.Index}} # default
# ---------------------------------------------------------------------------
# "Users"
# ---------------------------------------------------------------------------
# Count: The number of user accounts _in addition_ to Admin
# ---------------------------------------------------------------------------
Users:
Count: 0
```
didn't change it except Template count for Peer section
@JayPandya Per the [channel guideline](https://wiki.hyperledger.org/chat_channels/fabric-orderer) do _not_ post long snippets of config files
sorry for that
I've deleted your post
what attribute is missing in my crypto-config.yml file?
Please post your crypto-config.yaml file via a service like hastebin.com
> how I can add it in IP SANs section? Can you give me an example of that?
Line 36 here: https://gerrit.hyperledger.org/r/c/19743/17/integration/orderer/raft/testdata/cryptogen.yml#36
Here it is: https://hastebin.com/umidodagoq.makefile
So I need to add the orderer IP to the SANs section
will try it
Thanks for the Help @kostas @jyellick
@kostas @jyellick Tried with the SANs configuration but still getting the same error
I've checked the output of `cryptogen showtemplate` but it's still the same
Can you do a:
```
openssl x509 -noout -text -in
```
the orderer's TLS root certificate, right?
@jyellick - Here is text version of TLS cert - https://hastebin.com/uwomixapaz.rb
I do not see IPs which I've configured in SANs there :thinking:
Yes, I would expect the configured SANs to be there
Could you maybe remove that file and try regenerating the crypto again?
The SANs feature of `cryptogen` should definitely work
@jyellick - This is my crypto-config file by which I'm generating certs: https://hastebin.com/kuzedifoxu.makefile
if you can take a look and tell me where I'm configuring SANs wrong
I assume that `x.x.x.x` is simply you redacting your real IPs?
That cert you pasted, is that your CA cert, or the orderer server cert?
orderer server cert
and yeah `x.x.x.x` is my real IPs
@jyellick - Ohh, one thing I'm noticing is that when I look at a Peer certificate I can find the subject alternative names, with the IP, like
```
X509v3 Subject Alternative Name:
DNS:peer2.org1.example.com, DNS:peer2, DNS:x.x.x.x
```
but this is not case with orderer
Interesting, and you're certain the orderer certs are being regenerated?
(when you re-execute things, you get new timestamps and serial numbers etc.)
@jyellick - yes I've deleted both folders under `crypto-config` and regenerated certs
Oh, I think I may see
You are specifying it in the Template section
You want to simply add the SANs tag to line 20, like:
``` Specs:
- Hostname: orderer
SANs:
- x.x.x.x
```
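Put together, a hypothetical `crypto-config.yaml` orderer section with both DNS and IP SANs might look like this; the IP is a placeholder (`cryptogen` emits entries that parse as IPs into the IP SANs field, and everything else as DNS SANs):

```
OrdererOrgs:
  - Name: Orderer
    Domain: example.com
    Specs:
      - Hostname: orderer
        SANs:
          - "192.0.2.10"               # placeholder IP, becomes an IP SAN
          - "orderer-alt.example.com"  # becomes an additional DNS SAN
```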
ohh lemme try this
@jyellick - I see one thing. After the above change, when I look at the server cert for the orderer it gives me (path is: `example.com/orderers/orderer.example.com/tls/server.crt`)
```
X509v3 Subject Alternative Name:
DNS:orderer.example.com, DNS:orderer, DNS:x.x.x.x
```
but when I look at this (path is: `ordererOrganizations/example.com/orderers/orderer.example.com/msp/tlscacerts/tlsca.example.com-cert.pem`)
it's not giving me the alternative DNS names
and this last certificate is the one that I'm using to fetch channel info
@jyellick - Which file should we use to fetch channel data?
@JayPandya The TLS CA cert should not have SAN information. The orderer server cert should. If you are using mutual TLS the client cert should not need a SAN
Has joined the channel.
@jyellick - Okay, that part is clear
so when I try to fetch the config from the channel, this is the command for which it's looking for SANs:
```
docker exec -e "CORE_PEER_MSPCONFIGPATH=/etc/hyperledger/msp/users/Admin@org1.example.com/msp" peer1.org1.example.com peer channel fetch config -o x.x.x.x:7050 -c composerchannel --tls --cafile /var/hyperledger/orderer/certs/tlsca.example.com-cert.pem
```
for this ^ it's throwing the error `x509: cannot validate certificate for x.x.x.x because it doesn't contain any IP SANs`
Has joined the channel.
You're certain that the server TLS cert for your orderer has the SANs defined now? Can you execute:
```
openssl s_client -showcerts -connect orderer.example.com:7050 < /dev/null | openssl x509 -text -noout
```
And verify that the cert displayed has SANs in it?
@jyellick - Adding `--ordererTLSHostnameOverride orderer` worked with the same changes
Yes, the hostname override allows you to skip the need for SANs in this case
is that the right way to do it?
In general, TLS is designed to work with hostnames. If you need it to work with IPs, then you should be able to use the SANs, but, using hostnames is generally preferable.
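To make the two approaches concrete, here is a hedged sketch of both invocations (the address, channel name, and CA file path are the ones from this thread and may differ in your setup):

```shell
# Option A: dial by IP, but tell the client which hostname to expect
# in the server certificate (skips the need for an IP SAN)
peer channel fetch config -o x.x.x.x:7050 -c composerchannel \
  --tls --cafile /var/hyperledger/orderer/certs/tlsca.example.com-cert.pem \
  --ordererTLSHostnameOverride orderer.example.com

# Option B: dial by hostname so the cert's DNS SAN matches directly
# (requires the hostname to resolve, e.g. via DNS or /etc/hosts)
peer channel fetch config -o orderer.example.com:7050 -c composerchannel \
  --tls --cafile /var/hyperledger/orderer/certs/tlsca.example.com-cert.pem
```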
okay thanks @jyellick
Dear guys, is adding an orderer organization dynamically currently supported?
I found some problems and am looking for help. Thanks in advance
@davidkhala Yes, adding an orderer organization dynamically is supported. However, if your current set of orderer addresses does not overlap with the original set of orderer addresses, you may have some trouble joining new peers to the channel.
Dear Jason, thanks for your prompt reply, but the problem I see occurs before that point.
It occurs when a new orderer from a new organization is started.
Let me describe what I did.
I have a set of Kafka orderers on machine A, running perfectly. Then:
1. I create a CA on machine B
2. generate crypto material for the new orderer organization with the new CA on machine B
3. do a channel update to the existing channel 'appChannel', updating both 'orderer.groups' and 'ordererAddresses'
4. start the new orderer with the *old* genesis block
@davidkhala Are you attempting to start with the genesis block of `appChannel`? You must bootstrap your new orderer with the genesis block of the orderer system channel (`testchainid` if you did not modify it)
yes, I am using the 'testchainid' genesis block
But the log seems suspicious, like it could not fetch *appChannel*.
Is it possible that you have enabled Kafka log expiration?
If your Kafka partition has expired some offsets, then the new orderer could erroneously believe that the 'oldest offset' is the start point.
I did not change that, retention should be -1 as default
the part of the log I find problematic is attached
`MSP NewConsensusMSP is unknown` (NewConsensusMSP is the MSP name of the new orderer organization)
@davidkhala It looks like you have not added your new orderer org to the ordering system channel, only to your app channel?
Yes, Jason, you get the point
I only changed the channel config of `appChannel`.
So should I change the config in `testchainid` as well?
Yes, although each channel is managed independently, it is expected that the orderer addresses and orgs eventually sync between them.
Traditionally, the workflow would be:
1. Update the orderer system channel configuration (testchainid by default)
2. For each application channel, update its configuration
It is not currently supported that some orderer organizations (or even orderers) service some channels but not others
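A hedged outline of that two-step workflow with the standard tooling (the channel names and orderer address are illustrative; the modify/compute/sign steps are elided):

```shell
# 1. Update the orderer system channel (default name: testchainid)
peer channel fetch config sys_config.pb -o orderer.example.com:7050 -c testchainid
#    ...decode with configtxlator, add the new orderer org/addresses,
#    re-encode, compute the update, sign, and submit it back to testchainid

# 2. Repeat the same fetch/modify/compute/sign/submit cycle for each
#    application channel (e.g. appChannel) so orgs and addresses stay in sync
peer channel fetch config app_config.pb -o orderer.example.com:7050 -c appChannel
```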
@jyellick Thanks, I previously assumed the orderer config was configured independently per channel; that was my conceptual error.
> Thanks, I previously assumed the orderer config was configured independently; that was my conceptual error.
@davidkhala Happy to help. FYI, some aspects of the ordering config may be managed independently by channel, such as batch size and batch timeout, but orderer addresses and orgs must be synced.
But, just a suggestion: I got confused because I saw them explicitly in the app channel config. Could we hide them? Then a developer would know to look for them somewhere else, like the system channel LOL
@davidkhala We could certainly make this more clear. Each channel maintains a copy of the channel configuration because there is no relative order between the channels. We have also investigated automatically syncing the pieces of configuration which must be synced, but how we would accomplish this is still an open question.
Yes, I think there is much work to do, and even more if we take pluggable consensus design into consideration.
@jyellick another problem is: could we skip setting any consortium ( or peer organization) in genesis block section of configtx.yaml?
it looks like there is a duplicate in application.Organizations
I guess in the current version the answer is 'no'.
hello everyone, I'm trying to setup a kafka version of first-network in multi-server environment. When I try to start the network, I get the following message inside the docker container of the orderer that says "[channel: testchainid] Connecting to the Kafka cluster". Has anyone faced the same issue before and has resolved it? - thanks beforehand
-------------------------
2018-06-11 05:07:15.455 UTC [orderer/kafka] startThread -> INFO 289 [channel: mychannel] Channel consumer set up successfully
2018-06-11 05:07:15.455 UTC [orderer/kafka] startThread -> INFO 28a [channel: mychannel] Start phase completed successfully
2018-06-11 05:07:15.456 UTC [orderer/kafka] processMessagesToBlocks -> DEBU 28b [channel: mychannel] Successfully unmarshalled consumed message, offset is 0. Inspecting type...
2018-06-11 05:07:15.456 UTC [orderer/kafka] processConnect -> DEBU 28c [channel: mychannel] It's a connect message - ignoring
2018-06-11 05:07:15.456 UTC [orderer/kafka] processMessagesToBlocks -> DEBU 28d [channel: mychannel] Successfully unmarshalled consumed message, offset is 1. Inspecting type...
2018-06-11 05:07:15.456 UTC [orderer/kafka] processConnect -> DEBU 28e [channel: mychannel] It's a connect message - ignoring
2018-06-11 05:07:15.456 UTC [orderer/kafka] processMessagesToBlocks -> DEBU 28f [channel: mychannel] Successfully unmarshalled consumed message, offset is 2. Inspecting type...
2018-06-11 05:07:15.456 UTC [orderer/kafka] processConnect -> DEBU 290 [channel: mychannel] It's a connect message - ignoring
2018-06-11 05:07:15.456 UTC [orderer/kafka] processMessagesToBlocks -> DEBU 291 [channel: mychannel] Successfully unmarshalled consumed message, offset is 3. Inspecting type...
2018-06-11 05:07:15.456 UTC [orderer/kafka] processConnect -> DEBU 292 [channel: mychannel] It's a connect message - ignoring
2018-06-11 05:07:15.467 UTC [orderer/kafka] try -> DEBU 293 [channel: testchainid] Connecting to the Kafka cluster
2018-06-11 05:07:20.467 UTC [orderer/kafka] try -> DEBU 294 [channel: testchainid] Connecting to the Kafka cluster
2018-06-11 05:07:25.467 UTC [orderer/kafka] try -> DEBU 295 [channel: testchainid] Connecting to the Kafka cluster
-------------------------
@anishman: Can you retry this with the verbosity of the orderer logs set to DEBUG?
https://github.com/hyperledger/fabric/blob/release-1.1/sampleconfig/orderer.yaml#L54
Then post the log again here using a service like Hastebin.
Has joined the channel.
@jyellick Dear Jason, I have successfully changed the system channel to add the new orderer org as you suggested, but it still failed to join that new orderer. Need your help when you are back, thanks so much
@davidkhala What is the error you are encountering now?
Hello everyone. I have some doubts regarding the ordering service. Can you please tell me how the explicit transaction dependencies and cyclic dependencies are taken care of by the OSNs? Consider a transaction A which is dependent on transaction B and should be committed only after transaction B. Which fields in the deliver() method have to specify this, or is it specified by the client in the transaction identifier? I read in the WG paper that the user specifies the explicit dependencies, but it is not mentioned how.
It is strictly up to the client to make sure that transaction A is submitted after transaction B has been added to the ledger. (As it should be, I might add.)
@jyellick (log attached)
Has joined the channel.
Has joined the channel.
Has joined the channel.
@jyellick Hi.. Is the release date for V1.2 set?
I know this channel is not related to Kafka configs but If possible can someone tell me how I can setup my Kafka clusters on different machines?
Right now I'm running 4 Kafka nodes on a single machine where the orderer is also hosted, but I'm getting a `Can't allocate memory` error
@sarapara https://chat.hyperledger.org/channel/fabric-scrum?msg=SwzKuijh2zLEKh7A6
@JayPandya There is a multitude of good Kafka tutorials and documentation you may search for online, I'd recommend you start there
@jyellick - yeah, and along with that I also want to ask: I'm also setting up a new orderer on a different machine. For the first orderer, I created the channel with
```
docker exec peer0.org1.example.com peer channel create -o orderer.example.com:7050 -t 30 -c composerchannel -f /etc/hyperledger/configtx/composer-channel.tx
```
and then in order to add peer I'm fetching configurations from this channel on multiple nodes
So can you guide me how I can add new Orderer for same channel ? :thinking:
Looks like I need to create the same channel (with the same name) for the new orderer, is this right?
Each orderer services all channels. Simply bootstrap your new orderer with the orderer system channel genesis block
When you create a channel targeting any orderer, all orderers become aware of it.
ahh okay, so when I created the channel on Peer1, then for the new orderer I just need to fetch configs from the channel blocks on each peer, is this flow right?
If you wish to dynamically add a new orderer, start it, bootstrapping it with the same genesis block as your original orderer. Then you need to update the orderer addresses in each channel via a channel config update to include this new orderer. No other action is needed.
@jyellick - Can you explain what bootstrapping the new orderer means? Does this mean fetching configs from the genesis block of the original orderer?
@JayPandya You bootstrapped your original orderer. You did this by creating a genesis block with `configtxgen`, then specifying that file as the `ORDERER_GENERAL_GENESISFILE` when starting your orderer for the first time. You must simply take this same file, and set the `ORDERER_GENERAL_GENESISMETHOD=file` and `ORDERER_GENERAL_GENESISFILE=
If you have lost this file, you may retrieve it by fetching the genesis block of the orderer system channel.
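A hedged example of that fetch (assuming the default system channel name and an orderer admin context; the output filename is illustrative):

```shell
# Block 0 of the orderer system channel is the genesis block the new
# orderer must be bootstrapped with (testchainid is the default name)
peer channel fetch 0 orderer-genesis.block \
  -o orderer.example.com:7050 -c testchainid
```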
@jyellick - ahh okay, yeah I've just added that same file `ORDERER_GENERAL_GENESISFILE=
and then I did
```
docker exec -e "CORE_PEER_MSPCONFIGPATH=/etc/hyperledger/msp/users/Admin@org1.example.com/msp" peer2.org1.example.com peer channel fetch config -o x.x.x.:7050 -c composerchannel
```
for all peers
though I did this same step earlier for the original orderer
Why are you fetching the config block on each of your peers?
This simply writes a file onto the filesystem, it has no other effect.
@jyellick - Ohh, I'm fetching and joining that config.block to sync all peers in my network
sorry for noob question but other than that what should I do?
No problem
https://chat.hyperledger.org/channel/fabric-orderer?msg=Z9ZBHdsEwKranhChQ
This is the procedure for adding a new orderer. It does not involve any modifications to your peers.
@jyellick - ohh and to add orderer address I've already added that new orderer's address in `configtx.yml` file under Address section
so I think I'm good there right?
Has joined the channel.
Can existing orderers be pointed at a new Kafka/Zookeeper cluster without hiccup? Or is there data on the Kafka/Zookeeper cluster that needs to be migrated?
@hamptonsmith The orderers maintain the offset into the Kafka log to which they have consumed for each partition (corresponding to each channel). You cannot simply point the orderers to a new cluster.
> ohh and to add orderer address I've already added that new orderer's address in `configtx.yml` file under Address section
@JayPandya Unfortunately your entries in `configtx.yaml` do not matter unless they were made before you bootstrapped your network. You will need to go through the channel reconfiguration process [as described here](https://hyperledger-fabric.readthedocs.io/en/release-1.1/config_update.html) for each of your channels.
@jyellick - In my case I think I made the `configtx.yml` changes before I bootstrapped the network
I've already updated genesis block with new orderers address
Ah, perfect, then you should be all set.
You should be able to confirm this by pulling the config block from one of your channels, and inspecting the orderer addresses
I'm using `configtx.yml` configuration to write genesis block
@jyellick - How I can do that?
and I've only 1 channel
`peer channel fetch config` as you pasted above will retrieve the latest config block for your channel.
Then, you may either use
```configtxlator proto_decode --type=common.Block --input=
@jyellick - After performing peer channel fetch config
```
2018-06-14 16:12:58.096 UTC [channelCmd] InitCmdFactory -> INFO 001 Endorser and orderer connections initialized
2018-06-14 16:12:58.129 UTC [main] main -> INFO 002 Exiting.....
```
getting ^ this
https://chat.hyperledger.org/channel/fabric-orderer?msg=dYEnNcqfoyo45c86q
Simply do this on some peer, this will fetch the config, and write a file I believe named `composer_channel.block`
Use this file as the input to one of those commands.
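For example, one plausible way to decode the fetched block and pull out the orderer addresses (the jq path follows the usual config block structure; adjust the filename to whatever was actually written):

```shell
configtxlator proto_decode --type=common.Block \
  --input=composer_channel.block > config_block.json

# OrdererAddresses lives under the channel group's values
jq '.data.data[0].payload.data.config.channel_group.values.OrdererAddresses.value.addresses' \
  config_block.json
```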
@jyellick Thank you! Is there devops advice somewhere for how to safely reconfigure/relocate my Kafka cluster if orderers are already using it? Currently I've just got a toy Kafka cluster of size 1 running on the same server as its toy Zookeeper dependency, but obviously improving that situation is high on my to-do list.
@hamptonsmith Kafka and Zookeeper support dynamically expanding the set of nodes. Could you simply grow your cluster?
@jyellick Likely. Theoretically, I could then point Fabric at some of the new nodes and then retire the old node without incident? (Sorry if this is remedial: software engineer wearing his very dusty devops hat today.)
hey all, looking for some help with updating a channel config. i have a network with N orgs, and i first updated the [channel_group.Application.policies.Admins](https://hastebin.com/kuqukanepi.pl) to require only the signature of the first organization instead of it being the "MAJORITY" rule.
afterwards i tried to do another channel update signing with only the first organization, updating the [BatchTimeout time](https://hastebin.com/uculafiled.scala), but received a [BAD REQUEST](https://hastebin.com/ogigavojoq.vbs) error. but in my first update i thought i took out the reference to a sub-policy? did I update the wrong admin policy? or am I missing another step?
[full original channel configuration](https://hastebin.com/uvovudeqeb.json) and [full config after updating the admin policy](https://hastebin.com/ifoqafakuh.json) for reference.
@jyellick - After inspecting the block I can see both orderers' addresses in the output
again thanks for your help :smile:
@JayPandya Great, happy to help, and glad that you appear to be all set.
@jrosmith The batch size is a part of the orderer configuration; changing that admins policy puts your organization in charge of the application portion of the configuration, for doing things like adding and removing orgs, or defining who can invoke chaincode.
By default, all of the ordering related parameters, such as batch size, are controlled by the /Channel/Orderer/Admins policy
(Which is by default a majority of the /Channel/Orderer/*/Admins policies. Which, in the case of a single ordering org is 1)
@jyellick ahhh that makes sense. So if I wanted my first organization to have sole control over those updates I would need to update the orderer admin policy, yes?
Correct
You should think of the configuration as a tree. Each level of the tree has an Admins policy which is used as the default control mechanism for modifications.
that helps a lot, thanks so much!
Regarding the orderer: is it possible to have one orderer organization consisting of 4 orderers?
2 orderers in data centre A
2 orderers in data centre B
Basically the orderer org is distributed across two data centres.
If one data centre goes down, the two surviving orderers can continue to serve the network.
Is there a reference architecture or link I can read to learn more? Thank you.
Hi everyone! I'm using the node.js SDK and I would like to update the endorsement policy, does anyone have an example of the syntax for that?
I've looked at this resource https://fabric-sdk-node.github.io/global.html#ChaincodeInstantiateUpgradeRequest
But that only tells me how to write the actual policy, not how to assign it to the `ChaincodeInstantiateUpgradeRequest`
Thanks!
Has joined the channel.
It is noted that the Kafka ordering service is not Byzantine Fault Tolerant. I see how a malicious orderer could add their own transaction to the next block even with invalid endorsements. But wouldn't the endorsement verification of the "bad transaction" fail when the peers receiving the new block verify its data?
@mogamboizer Yes, this is certainly possible. Though I would need to understand your goals. Are you looking for disaster fault tolerance where one of the datacenters goes offline? If so, you may run into problems because one datacenter must necessarily have a majority of ZK nodes. For true datacenter fault tolerance you would need an odd number of datacenters.
> I see how a malicious orderer could add their own transaction to the next block even with invalid endorsements. But wouldn't the endorsement verification of the "bad transaction" fail when the peers receiving the new block verify its data?
@Exci correct, the orderer cannot fabricate transactions the peer will accept. The orderer could inject a transaction, but it would be marked as invalid by the peer at validation time.
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=WTehejuf2vdY4j39o) @jyellick Thank you. If there are three data centres A, B, C and each has two orderers. What is the recommended kafka/zookeeper set that should go on each DC? Can each DC have 3 Kafka and 2 zookeepers for example?
2 Kafka brokers & 1 ZK node in every datacenter.
@mogamboizer This is really more of a Kafka/ZK question than a fabric one, but you must have an odd number of ZK nodes. I'd suggest you look at [the assorted](https://docs.confluent.io/3.0.0/kafka/deployment.html) and excellent online [Kafka references](https://kafka.apache.org/documentation/)
If you're looking for the minimum setup.
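Concretely, the minimum layout suggested above for three datacenters would look something like this (names illustrative, per the question's two-orderers-per-DC setup):
```
DC-A: 1 ZK node, 2 Kafka brokers, 2 orderers
DC-B: 1 ZK node, 2 Kafka brokers, 2 orderers
DC-C: 1 ZK node, 2 Kafka brokers, 2 orderers
```
With 3 ZK nodes in total, any single datacenter can go down and a ZK majority (2 of 3) survives.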
Thank you so much :)
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=e8BbreJvGujiNBqiS) @jyellick I assume however that the peer would then reject the block? So essentially what a malicious orderer could do is disrupt block storage? In that case I assume there would also be problems if a peer rejected "Block 10 with a bad transaction" but then received "good block 11", since it would still expect block 10?
@Exci It is transactions that are marked valid or invalid, not the block. So if individual transactions within the block are marked invalid for whatever reason (even if every transaction in a block is marked invalid), the block itself is still appended to the chain.
@Exci The byzantine attack on ordering is when the orderer presents one order of the blockchain to one set of clients, and another order of the blockchain to another. Of course this is detectable if the clients communicate as the hash chain will be different, but with a BFT ordering service, clients have greater assurances on the integrity of the order.
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=fDWyK4ECNCyuTDB9Y) @jyellick There is also the "order" aspect, isn't there: a malicious ordering service could have a bias (in a way or another) toward transactions submitted by a specific client
@minollo Certainly. A byzantine orderer can censor or arbitrarily delay a client's transactions.
@silliman @jyellick Thanks for the responses, much appreciated! :slight_smile:
Has joined the channel.
In my orderer docker instance, I am seeing this warning statement every 10 ms or so. I am using the standard fabric docker containers (v1.1.0). How should I investigate this? My network is set up using Docker Swarm: 2 orgs, 1 peer per org.
```[orderer/consensus/kafka] processRegular -> WARN 0d5 [channel: mychannel] This orderer is running in compatibility mode```
@tallharish: You are running a network of 1.1 binaries but don't have the 1.1 capability enabled.
If this is a new network that you're bringing up for testing, make sure that you're creating a genesis block with these capabilities set to true: https://github.com/hyperledger/fabric/blob/release-1.1/sampleconfig/configtx.yaml#L360
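For reference, the section linked above looks roughly like this in the release-1.1 sample config (excerpt; these must be set before generating the genesis block):
```
Capabilities:
    Global: &ChannelCapabilities
        V1_1: true
    Orderer: &OrdererCapabilities
        V1_1: true
    Application: &ApplicationCapabilities
        V1_1: true
```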
Otherwise, follow the upgrade doc here: https://hyperledger-fabric.readthedocs.io/en/release-1.1/upgrading_your_network_tutorial.html
This is what you're seeing: https://github.com/hyperledger/fabric/blob/13447bf5ead693f07285ce63a1903c5d0d25f096/orderer/consensus/kafka/chain.go#L659..L663
Thanks @kostas The problem went away by adding the Capabilities section in configtx.yaml
Has left the channel.
@jyellick Is it necessary to have "consortium" defined in the orderer genesis block profile in configtx.yaml, or is it fine to have it only in a specific channel profile?
@amolpednekar: Only the system channel (which is instantiated via the genesis block profile) can carry consortium definitions. So if you want to support consortiums, this is where it belongs.
can someone point me to the code that picks the implementation of orderer ledger storage? e.g. ram vs disk
What is the purpose of `testchainid`? is it some sort of hack for orderer bootstrapping (chicken and egg)?
@kostas ^^
or does it have a real purpose?
@gbolo: Just how we name the system channel in tests.
It doesn't short-circuit anything and any name would work.
hi, usually an org's admin sets which org's members can read/write a channel. but how are these set?
If using the default policies, each org has a policy named Readers/Writers located at /Channel/Application/
You may update these policies via the https://hyperledger-fabric.readthedocs.io/en/release-1.1/config_update.html process. Or, beginning in v1.2, you may set them in `configtx.yaml` prior to bootstrapping your system.
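As a sketch of the v1.2 `configtx.yaml` form (org name, MSP ID, and path here are placeholders):
```
Organizations:
    - &SampleOrg
        Name: SampleOrg            # placeholder
        ID: SampleOrgMSP           # placeholder MSP ID
        MSPDir: ./msp              # placeholder path
        Policies:
            Readers:
                Type: Signature
                Rule: "OR('SampleOrgMSP.member')"
            Writers:
                Type: Signature
                Rule: "OR('SampleOrgMSP.member')"
            Admins:
                Type: Signature
                Rule: "OR('SampleOrgMSP.admin')"
```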
Are multiple ordering organizations permitted for a single channel (each, presumably, with a separate Kafka service)? Where can I find information on how consensus is reached in this case?
hey guys, i was reading the [performance and benchmarking paper](https://drive.google.com/file/d/1OsIoPtlv5X2PWyOAlDn1FCnHCZPyrF57/view) and saw that as far as orderer settings were concerned they were focused on block size as a driver for maximizing throughput and decreasing latency, but we also have the option to decrease time to cut.
is it better to use block size or time to cut as the main driver of increasing throughput/minimizing latency? i've mostly been editing time to cut and have seen big improvements in transaction times, but could that possibly bite me in the ass if blocks aren't getting filled?
> Are multiple ordering organizations permitted for a single channel (each, presumably, with a separate Kafka service)? Where can I find information on how consensus is reached in this case?
@hamptonsmith Multiple ordering organizations are allowed, but with Kafka, there must always be a common Kafka cluster for all orderers. Each organization could host one or more brokers. There are other consensus mechanisms in the works where multiple ordering organizations make more sense.
> is it better to use block size or time to cut as the main driver of increasing throughput/minimizing latency? i've mostly been editing time to cut and have seen big improvements in transaction times, but could that possibly bite me in the ass if blocks aren't getting filled?
@jrosmith Obviously they are both tools at your disposal.
You can generally think of the batch timeout as being part of the upper bound on transaction latency. Ignoring the latency introduced by the round trips through the Kafka brokers (which should be relatively constant), the batch timeout is the maximum amount of latency you can expect before a transaction commits.
On the other hand, the preferred block size and max message count are about throughput. In general, the larger the block size, the higher the throughput. Constructing a block requires computing some hashes and signatures, as well as writing to the filesystem. These things have overhead, and although hashing, signing, or writing a larger block will usually take slightly more time, it should take less time than doing the same operations to two blocks half the size.
The two interplay. If the batch timeout is set too low, then your transaction latency is more tightly constrained, but you may not give the block sufficient time to fill to its maximum capacity and maximize throughput. On the other hand, if the batch timeout is too high, then when transaction volume isn't high enough to saturate the throughput per the batch size parameters, transactions will take longer to commit.
Your individual workload will ultimately dictate what these parameters should be set to.
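For concreteness, the knobs being discussed live in the `Orderer` section of `configtx.yaml`; the values below are the sample defaults, not recommendations:
```
Orderer:
    BatchTimeout: 2s             # part of the upper bound on commit latency
    BatchSize:
        MaxMessageCount: 10      # cut a block once this many messages are queued
        AbsoluteMaxBytes: 99 MB  # hard cap on serialized block size
        PreferredMaxBytes: 512 KB  # preferred (soft) block size
```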
@jyellick makes sense, thank you for the explanation
@jyellick, thank you very much for the reply on the policy setting and also on the multiple orderer organizations.
Hello, I am trying to setup a local Fabric V1.1.0 network instance running on K8S network with a tutorial example deployed - in addition to composer. Details of this work at the following URL:
https://github.com/IBM-Blockchain/ibm-container-service
This setup works for me correctly when I deploy v1.0.3 release.
This fails to deploy at the joinchannel step. Looks like endorsement policy failures? I need help with how to recover from this issue and some pointers - the details are in the yaml files or config files or both, per my initial understanding.
My debug traces are here:
https://pastebin.com/26fq4Vr0
https://pastebin.com/afuNCQBh
https://pastebin.com/MDeqCPQb
hi, adding org3 to a channel should mean adding org3 to the orderer genesis block. However, in the link about adding org3 to a channel I did not find this step.
when peer channel update runs, is the orderer genesis block updated too?
just as we don't need to send the new config block to peers, because the config block is sent to peers automatically.
the configuration items like batch timeout and batch size are used by the orderer; that is, are we updating the orderer genesis block?
@qsmen The genesis block is a part of the blockchain. It cannot be changed. Instead, a configuration update transaction is sent which creates a new _configuration block_. The genesis block is simply the first _configuration block_, but new ones may be created and become the source for the channel configuration for future blocks.
@pvrbharg From your logs:
> 2018-06-19 01:08:21.276 UTC [grpc] Printf -> DEBU 007 grpc: addrConn.resetTransport failed to create client transport: connection error: desc = "transport: Error while dialing dial tcp 10.103.127.238:30110: getsockopt: connection refused"; Reconnecting to {blockchain-org1peer1:30110
Yes - I did not use curl but used nslookup. Everything checks out - I can capture traces. I also deployed example multi-pod samples from kubernetes.io and they connect. I can redo. If there is a way I can enable TLS SSL handshake debug traces - we would see more. I used to be able to do this kind of debugging in JDK, JCE, JCA layers of JEE world. This is where I have some due diligence to perform and learn a bit. I would try curl and see if that makes any difference.
@pvrbharg I could certainly be wrong, but I believe `nslookup` is only going to check to make sure that the name is resolvable, it will not actually try to connect to the target.
It is possible to turn on gRPC debug logs, but I think this is lower in the stack; if the connection is refused, I do not think the socket is even opening.
@jyellick in the system channel, i can modify the `consortiums`, like add a new `consortium`
@asaningmaxchain123 Is this a question or a statement? But yes, you may define new consortiums in the orderer system channel.
so which policy do I need to satisfy by default?
By default, the /Channel/Orderer/Admins policy is set as the `mod_policy` of the /Consortiums group.
can i add an org in the orderer system channel?
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=o38puuhx3b5uc4TtM) @jyellick ok,i got it
Clipboard - June 21, 2018 10:53 PM
why doesn't the `/Channel/Consortiums` group define `Writers`, `Readers`, and `Admins` policies?
Because the orderer system channel is only visible to, and administered by, the orderer admins.
so if I want to add an org to one consortium in the orderer system channel, can I do it? if it's supported, please tell me how to do it
@pvrbharg You should be able to do a test like this:
```# The service is up, though gRPC uses http2, we are supplying bad input, so we get some bytes transferred, then a failure
$ curl --http2 --output /dev/null http://127.0.0.1:7050/
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 18 0 18 0 0 18000 0 --:--:-- --:--:-- --:--:-- 18000
curl: (56) Recv failure: Connection reset by peer
# The service is unreachable and the TCP link is wrong
$ curl --http2 --output /dev/null http://127.0.0.1:7050/
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0curl: (7) Failed to connect to 127.0.0.1 port 7050: Connection refused
```
@asaningmaxchain123 You may add a new org to an existing consortium, or add a new consortium with a new org. Either way, simply create the config update, and sign and submit it as the ordering admin
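An outline of that flow, per the config-update documentation (channel name, file names, and orderer address are placeholders; this assumes a running network and ordering-admin credentials, and elides the decode/edit/re-encode steps done with `configtxlator proto_decode`/`proto_encode`):
```
# Fetch the current config block of the orderer system channel
peer channel fetch config config_block.pb -c syschannel -o orderer.example.com:7050

# After editing the decoded config, compute the delta
configtxlator compute_update --channel_id syschannel \
    --original original_config.pb --updated modified_config.pb \
    --output update.pb

# Wrap update.pb in a config-update envelope (as in the channel-update tutorial),
# then sign and submit it as the ordering admin
peer channel signconfigtx -f update_in_envelope.pb
peer channel update -f update_in_envelope.pb -c syschannel -o orderer.example.com:7050
```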
doesn't it affect the existing application channel?
No
Once a channel is created, any modifications to the orderer system channel have no effect on it.
so once the channel is created, any operation is bound to that specific channel?
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=8J79MAThZt6abvyfH) @jyellick what policy do I need to satisfy by default?
Once a channel is created, its configuration is managed independently by submitting configuration update transactions to that channel.
ok,i got it
@jyellick use the `/Channel/Orderer/Admins` to do that? can you tell me the location of the source code that defines it?
https://github.com/hyperledger/fabric/blob/1dd3acf25fd27d29838fdb785755cdc6d1f3799e/common/tools/configtxgen/encoder/encoder.go#L143
@jyellick no, can you tell me: when the orderer system channel receives a config update tx, how does it deal with it?
The same way every other config update message is dealt with
https://github.com/hyperledger/fabric/blob/release-1.1/common/configtx/validator.go#L130
i got it
thx very much
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=8qTxFNLiQhazQ9ZSm) @jyellick Believe it or not, I still cannot get this simple request to work on the RHEL platform where I need it. curl does not support the --http2 option, and this platform is an official RHEL platform
So I need to get my curl upgraded first and then get this output
If you know any easier way on the RHEL platform please share your wisdom - I am on RHEL v7.5 (Maipo) 64-bit
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=jWzKaNxMjLa9EJTdD) @jyellick thank you for the reply. Sorry that I did not make my questions clear. My question is as follows:
From the adding-an-org-to-a-channel link, we know the channel (not the orderer system channel) is updated with the config update process. The orderer system channel is not updated. How do the orderers know org3's MSP? Through the channel joining step, where the config update block is sent to some orderers?
IBMCS_K8S_Marbles_V1.1.0-Issue_curlOutput_RHEL.txt
one more question: how can I add a new peer to a channel? the peer belongs to an existing org of the channel
thank you in advance
Has joined the channel.
You have the peer fetch the genesis block for the channel, and then have the peer join that channel using the block.
http://hyperledger-fabric.readthedocs.io/en/latest/channel_update_tutorial.html#join-org3-to-the-channel
> Use the peer channel fetch command to retrieve this block:
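From that tutorial, the two steps look roughly like this (channel name and orderer address are placeholders):
```
# Fetch block 0, the channel's genesis block, from the ordering service
peer channel fetch 0 mychannel.block -o orderer.example.com:7050 -c mychannel

# Join the peer to the channel using that block
peer channel join -b mychannel.block
```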
@pvrbharg It is a less complete check, but you may try simply using telnet to assist in your debugging which should be available on RHEL.
```# Connect to a running gRPC service, note, there will be an unprintable character to begin
# then you must hit ctrl-] to escape before typing quit
$ telnet 127.0.0.1 7050
Trying 127.0.0.1...
Connected to 127.0.0.1.
Escape character is '^]'.
^C^]
telnet> quit
Connection closed.
# Connect to a port which is inaccessible
$ telnet 127.0.0.1 7050
Trying 127.0.0.1...
telnet: Unable to connect to remote host: Connection refused
```
2018-06-22_10-20-40.png
IBMCS_K8S_Marbles_V1.1.0-Issue_TelnetOutput_RHEL.txt
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=QS4uQRvRmT2QQwk8y) @jyellick
@pvrbharg I was hopeful that you would see the `Connection refused`, but it seems that is not the case. However, I do not see any bytes coming over the wire in your console output, so it seems that perhaps data is not flowing from the server. This is why I would have preferred to use `curl`, as it would have completed the TLS handshake
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=nGYQtkZfbQ4D9SZ69) @jyellick I posted curl output and I got curl to work on rhel instance - after jumping some hoops and building the binary by compiling source code. Just a bit above this thread. Please let me know if that was of any value. I plan to see if I can replicate this issue on my Mac platform
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=nGYQtkZfbQ4D9SZ69) @jyellick Perhaps my bad and my apologies. My previous post went into ether of Rocket - I believe. Thanks.
The curl output looks pretty good to me as well. From your logs we have:
```2018-06-19 01:08:21.276 UTC [grpc] Printf -> DEBU 007 grpc: addrConn.resetTransport failed to create client transport: connection error: desc = "transport: Error while dialing dial tcp 10.103.127.238:30110: getsockopt: connection refused"; Reconnecting to {blockchain-org1peer1:30110```
Has joined the channel.
Has joined the channel.
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=puwPGe7nJkss3gvZE) @kostas ,Thank you very much. I got it
Has joined the channel.
Hello all,
I have a Fabric network with 3 orderers, but the problem is that when I stop the orderer container which runs on port 7050, I can't make any transactions. It tells me "service unavailable", but when I stop another container (one running on a port other than 7050) everything works fine. What could be the reason for this? I have checked the logs of the peer and the orderers, but there is nothing like an error. Can anyone please help me with this?
My network has 1 peer, 1 CA and 3 orderers
I have 3 zookeepers and 4 kafka brokers.
logs when I start the fabric:
1. peer logs : https://pastebin.com/hPhKu3QB
2. orderer0 logs: https://pastebin.com/57aYAeW2
3. orderer1 logs: https://pastebin.com/PzLKurE2
4. orderer2 logs: https://pastebin.com/5D4shcJN
Note: I am testing it using composer tool
Does anyone know how to check whether the Kafka cluster is reachable or not?
I am getting the following error:
```
2018-06-25 09:53:23.723 UTC [common/deliver] deliverBlocks -> WARN 4da [channel: composerchannel] Rejecting deliver requestfor 172.18.0.14:37702 because of consenter error
```
Please help
Has joined the channel.
Hello everyone, I've finished the tutorial about joining a new org to an already existing channel. But now I'm trying to understand what is the recommended way to deal with my use case:
It's a Fabric network with two kinds of orgs.
- One with its own channel
- One who will join multiples channels from the first type of orgs.
For example Org1 and Org2 both have their own channels, then Org3 is a member of both channels.
From what I understand the orderer-service is supposed to be the same for both channels. Does that mean I always have to include an org that's already a member of the consortium to create a new channel, and then join the new org? Even if I only want the new org inside at the beginning.
Best regards, Kevin
Hello, I have a basic question
If I add some debug statements in the orderer and build it
and copy the new binary to the bin location in fabric-samples
on running the first-network, will the new orderer be executed? or do I need to do something else too?
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=cYcxY75uaj8j57H5W) @DivyaAgrawal to use it in fabric-samples, you need to build a docker image with the updated binary
@C0rWin Okay. Thanks. Do you have any documentation for the same?
I am also facing a problem with make dist-clean all on fabric release-1.1 with following error :
Step 1/10 : FROM hyperledger/fabric-buildenv:x86_64-1.1.1-snapshot-c257bb3
pull access denied for hyperledger/fabric-buildenv, repository does not exist or may require 'docker login'
Makefile:305: recipe for target 'build/image/testenv/.dummy-x86_64-1.1.1-snapshot-ff5e861' failed
make: *** [build/image/testenv/.dummy-x86_64-1.1.1-snapshot-ff5e861] Error 1
Do you have any idea about this?
`make orderer-docker` will produce a docker image with the updated orderer binary
> Step 1/10 : FROM hyperledger/fabric-buildenv:x86_64-1.1.1-snapshot-c257bb3 pull access denied for hyperledger/fabric-buildenv, repository does not exist or may require 'docker login' Makefile:305: recipe for target 'build/image/testenv/.dummy-x86_64-1.1.1-snapshot-ff5e861' failed make: *** [build/image/testenv/.dummy-x86_64-1.1.1-snapshot-ff5e861] Error 1
I'd suggest to try asking at #fabric-ci about this
Sure. Will do thanks a lot !
Hi guys.. is anyone facing issues with the orderer service mainly because of the Kafka cluster? I'm getting this issue: if my fabric network is idle for a long time, e.g. overnight, the next day transactions don't work unless I restart the Kafka containers. The zookeeper and Kafka logs say they are unable to elect a leader.
Has joined the channel.
Hi guys, right now we are using the solo orderer service in our existing blockchain network, and we are trying to use the Kafka ordering service instead of solo in the Hyperledger Fabric network. Can anyone tell me how this can be achieved? Is there any pluggable mechanism available?
@C0rWin I followed the `make orderer-docker` cmd, but I can't see the new docker image using `docker images`
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=MyuJEaQRLBturYe3w) @DivyaAgrawal have you seen any errors? can you check with `docker images | grep orderer` and see what is the output?
Has joined the channel.
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=auLdNNadYvegEu2Rr) @puneetsharma86 I previously read on this chat channel that it's not possible to swap out a solo orderer to Kafka for pre-existing channels.
does a single orderer handle ordering for all channels? also what would happen to the peers if the connected orderer changed the history of blocks?
@puneetsharma86: @julian is correct. See: https://hyperledger-fabric.readthedocs.io/en/release-1.1/ordering-service-faq.html#general
@moodysalem: In case of solo, yes. In case of Kafka, a single ordering service handles ordering for all channels.
If the ordering service changed the history of the blocks, a newly connected peer connecting directly to the ordering service would get that modified block sequence.
This is where the BFT service, with f+1 signatures on each block, will come in handy.
Thanks @kostas & @julian for the detailed explanation.
I found this link: https://github.com/hyperledger/fabric/blob/master/examples/e2e_cli/docker-compose-e2e.yaml , where Kafka is used as the ordering service in Fabric v1.1.
Has joined the channel.
Hi, can anyone please help me with this error: `Unexpected topic-level metadata error: kafka server: Replication-factor is invalid.`
`Kafka Logs: [2018-06-28 09:17:03,950] INFO Registered broker 0 at path /brokers/ids/0 with addresses: EndPoint(kafka0,9092,ListenerName(PLAINTEXT),PLAINTEXT) (kafka.utils.ZkUtils)
[2018-06-28 09:17:03,952] WARN No meta.properties file under dir /tmp/kafka-logs/meta.properties (kafka.server.BrokerMetadataCheckpoint)
[2018-06-28 09:17:03,982] INFO Kafka version : 1.0.0 (org.apache.kafka.common.utils.AppInfoParser)
[2018-06-28 09:17:03,984] INFO Kafka commitId : aaa7af6d4a11b29d (org.apache.kafka.common.utils.AppInfoParser)
[2018-06-28 09:17:03,987] INFO [KafkaServer id=0] started (kafka.server.KafkaServer)
`
```
```
Hi Can anyone please help me with this error : ` Unexpected topic-level metadata error: kafka server: Replication-factor is invalid.` ```
`Zookeeper Logs ```
``` 2018-06-28 09:17:40,947 [myid:] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:ZooKeeperServer@928] - Client attempting to establish new session at /10.0.0.12:52838
2018-06-28 09:17:40,948 [myid:] - INFO [SyncThread:0:ZooKeeperServer@673] - Established session 0x16445adb3dc0001 with negotiated timeout 6000 for client /10.0.0.12:52838
2018-06-28 09:17:42,135 [myid:] - INFO [ProcessThread(sid:0 cport:2181)::PrepRequestProcessor@649] - Got user-level KeeperException when processing sessionid:0x16445adb3dc0001 type:create cxid:0x1d zxid:0x20 txntype:-1 reqpath:n/a Error Path:/brokers Error:KeeperErrorCode = NodeExists for /brokers
2018-06-28 09:17:42,136 [myid:] - INFO [ProcessThread(sid:0 cport:2181)::PrepRequestProcessor@649] - Got user-level KeeperException when processing sessionid:0x16445adb3dc0001 type:create cxid:0x1e zxid:0x21 txntype:-1 reqpath:n/a Error Path:/brokers/ids Error:KeeperErrorCode = NodeExists for /brokers/ids```````
```
`Kafka Logs: ```
``` [2018-06-28 09:17:03,950] INFO Registered broker 0 at path /brokers/ids/0 with addresses: EndPoint(kafka0,9092,ListenerName(PLAINTEXT),PLAINTEXT) (kafka.utils.ZkUtils)
[2018-06-28 09:17:03,952] WARN No meta.properties file under dir /tmp/kafka-logs/meta.properties (kafka.server.BrokerMetadataCheckpoint)
[2018-06-28 09:17:03,982] INFO Kafka version : 1.0.0 (org.apache.kafka.common.utils.AppInfoParser)
[2018-06-28 09:17:03,984] INFO Kafka commitId : aaa7af6d4a11b29d (org.apache.kafka.common.utils.AppInfoParser)
[2018-06-28 09:17:03,987] INFO [KafkaServer id=0] started (kafka.server.KafkaServer)
`
```
```
Hi Can anyone please help me with this error : ` Unexpected topic-level metadata error: kafka server: Replication-factor is invalid.` ```Zookeeper Logs ```
``` 2018-06-28 09:17:40,947 [myid:] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:ZooKeeperServer@928] - Client attempting to establish new session at /10.0.0.12:52838
2018-06-28 09:17:40,948 [myid:] - INFO [SyncThread:0:ZooKeeperServer@673] - Established session 0x16445adb3dc0001 with negotiated timeout 6000 for client /10.0.0.12:52838
2018-06-28 09:17:42,135 [myid:] - INFO [ProcessThread(sid:0 cport:2181)::PrepRequestProcessor@649] - Got user-level KeeperException when processing sessionid:0x16445adb3dc0001 type:create cxid:0x1d zxid:0x20 txntype:-1 reqpath:n/a Error Path:/brokers Error:KeeperErrorCode = NodeExists for /brokers
2018-06-28 09:17:42,136 [myid:] - INFO [ProcessThread(sid:0 cport:2181)::PrepRequestProcessor@649] - Got user-level KeeperException when processing sessionid:0x16445adb3dc0001 type:create cxid:0x1e zxid:0x21 txntype:-1 reqpath:n/a Error Path:/brokers/ids Error:KeeperErrorCode = NodeExists for /brokers/ids```````
``` Kafka Logs: ```
``` [2018-06-28 09:17:03,950] INFO Registered broker 0 at path /brokers/ids/0 with addresses: EndPoint(kafka0,9092,ListenerName(PLAINTEXT),PLAINTEXT) (kafka.utils.ZkUtils)
[2018-06-28 09:17:03,952] WARN No meta.properties file under dir /tmp/kafka-logs/meta.properties (kafka.server.BrokerMetadataCheckpoint)
[2018-06-28 09:17:03,982] INFO Kafka version : 1.0.0 (org.apache.kafka.common.utils.AppInfoParser)
[2018-06-28 09:17:03,984] INFO Kafka commitId : aaa7af6d4a11b29d (org.apache.kafka.common.utils.AppInfoParser)
[2018-06-28 09:17:03,987] INFO [KafkaServer id=0] started (kafka.server.KafkaServer)
```
Hi Can anyone please help me with this error : ` Unexpected topic-level metadata error: kafka server: Replication-factor is invalid.` ```Zookeeper Logs ```
```2018-06-28 09:17:40,947 [myid:] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:ZooKeeperServer@928] - Client attempting to establish new session at /10.0.0.12:52838
2018-06-28 09:17:40,948 [myid:] - INFO [SyncThread:0:ZooKeeperServer@673] - Established session 0x16445adb3dc0001 with negotiated timeout 6000 for client /10.0.0.12:52838
2018-06-28 09:17:42,135 [myid:] - INFO [ProcessThread(sid:0 cport:2181)::PrepRequestProcessor@649] - Got user-level KeeperException when processing sessionid:0x16445adb3dc0001 type:create cxid:0x1d zxid:0x20 txntype:-1 reqpath:n/a Error Path:/brokers Error:KeeperErrorCode = NodeExists for /brokers
2018-06-28 09:17:42,136 [myid:] - INFO [ProcessThread(sid:0 cport:2181)::PrepRequestProcessor@649] - Got user-level KeeperException when processing sessionid:0x16445adb3dc0001 type:create cxid:0x1e zxid:0x21 txntype:-1 reqpath:n/a Error Path:/brokers/ids Error:KeeperErrorCode = NodeExists for /brokers/ids```
```
``` Kafka Logs: ```
``` [2018-06-28 09:17:03,950] INFO Registered broker 0 at path /brokers/ids/0 with addresses: EndPoint(kafka0,9092,ListenerName(PLAINTEXT),PLAINTEXT) (kafka.utils.ZkUtils)
[2018-06-28 09:17:03,952] WARN No meta.properties file under dir /tmp/kafka-logs/meta.properties (kafka.server.BrokerMetadataCheckpoint)
[2018-06-28 09:17:03,982] INFO Kafka version : 1.0.0 (org.apache.kafka.common.utils.AppInfoParser)
[2018-06-28 09:17:03,984] INFO Kafka commitId : aaa7af6d4a11b29d (org.apache.kafka.common.utils.AppInfoParser)
[2018-06-28 09:17:03,987] INFO [KafkaServer id=0] started (kafka.server.KafkaServer)
```
Hi Can anyone please help me with this error : ` Unexpected topic-level metadata error: kafka server: Replication-factor is invalid.` Zookeeper Logs
2018-06-28 09:17:40,947 [myid:] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:ZooKeeperServer@928] - Client attempting to establish new session at /10.0.0.12:52838
2018-06-28 09:17:40,948 [myid:] - INFO [SyncThread:0:ZooKeeperServer@673] - Established session 0x16445adb3dc0001 with negotiated timeout 6000 for client /10.0.0.12:52838
2018-06-28 09:17:42,135 [myid:] - INFO [ProcessThread(sid:0 cport:2181)::PrepRequestProcessor@649] - Got user-level KeeperException when processing sessionid:0x16445adb3dc0001 type:create cxid:0x1d zxid:0x20 txntype:-1 reqpath:n/a Error Path:/brokers Error:KeeperErrorCode = NodeExists for /brokers
2018-06-28 09:17:42,136 [myid:] - INFO [ProcessThread(sid:0 cport:2181)::PrepRequestProcessor@649] - Got user-level KeeperException when processing sessionid:0x16445adb3dc0001 type:create cxid:0x1e zxid:0x21 txntype:-1 reqpath:n/a Error Path:/brokers/ids Error:KeeperErrorCode = NodeExists for /brokers/ids
``` Kafka Logs: ```
``` [2018-06-28 09:17:03,950] INFO Registered broker 0 at path /brokers/ids/0 with addresses: EndPoint(kafka0,9092,ListenerName(PLAINTEXT),PLAINTEXT) (kafka.utils.ZkUtils)
[2018-06-28 09:17:03,952] WARN No meta.properties file under dir /tmp/kafka-logs/meta.properties (kafka.server.BrokerMetadataCheckpoint)
[2018-06-28 09:17:03,982] INFO Kafka version : 1.0.0 (org.apache.kafka.common.utils.AppInfoParser)
[2018-06-28 09:17:03,984] INFO Kafka commitId : aaa7af6d4a11b29d (org.apache.kafka.common.utils.AppInfoParser)
[2018-06-28 09:17:03,987] INFO [KafkaServer id=0] started (kafka.server.KafkaServer)
```
Hi Can anyone please help me with this error : ` Unexpected topic-level metadata error: kafka server: Replication-factor is invalid.` Zookeeper Logs ```
```
2018-06-28 09:17:40,947 [myid:] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:ZooKeeperServer@928] - Client attempting to establish new session at /10.0.0.12:52838
2018-06-28 09:17:40,948 [myid:] - INFO [SyncThread:0:ZooKeeperServer@673] - Established session 0x16445adb3dc0001 with negotiated timeout 6000 for client /10.0.0.12:52838
2018-06-28 09:17:42,135 [myid:] - INFO [ProcessThread(sid:0 cport:2181)::PrepRequestProcessor@649] - Got user-level KeeperException when processing sessionid:0x16445adb3dc0001 type:create cxid:0x1d zxid:0x20 txntype:-1 reqpath:n/a Error Path:/brokers Error:KeeperErrorCode = NodeExists for /brokers
2018-06-28 09:17:42,136 [myid:] - INFO [ProcessThread(sid:0 cport:2181)::PrepRequestProcessor@649] - Got user-level KeeperException when processing sessionid:0x16445adb3dc0001 type:create cxid:0x1e zxid:0x21 txntype:-1 reqpath:n/a Error Path:/brokers/ids Error:KeeperErrorCode = NodeExists for /brokers/ids
Kafka Logs:
```
``` [2018-06-28 09:17:03,950] INFO Registered broker 0 at path /brokers/ids/0 with addresses: EndPoint(kafka0,9092,ListenerName(PLAINTEXT),PLAINTEXT) (kafka.utils.ZkUtils)
[2018-06-28 09:17:03,952] WARN No meta.properties file under dir /tmp/kafka-logs/meta.properties (kafka.server.BrokerMetadataCheckpoint)
[2018-06-28 09:17:03,982] INFO Kafka version : 1.0.0 (org.apache.kafka.common.utils.AppInfoParser)
[2018-06-28 09:17:03,984] INFO Kafka commitId : aaa7af6d4a11b29d (org.apache.kafka.common.utils.AppInfoParser)
[2018-06-28 09:17:03,987] INFO [KafkaServer id=0] started (kafka.server.KafkaServer)
```
Hi Can anyone please help me with this error : ` Unexpected topic-level metadata error: kafka server: Replication-factor is invalid.` Zookeeper Logs ```
2018-06-28 09:17:40,947 [myid:] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:ZooKeeperServer@928] - Client attempting to establish new session at /10.0.0.12:52838
2018-06-28 09:17:40,948 [myid:] - INFO [SyncThread:0:ZooKeeperServer@673] - Established session 0x16445adb3dc0001 with negotiated timeout 6000 for client /10.0.0.12:52838
2018-06-28 09:17:42,135 [myid:] - INFO [ProcessThread(sid:0 cport:2181)::PrepRequestProcessor@649] - Got user-level KeeperException when processing sessionid:0x16445adb3dc0001 type:create cxid:0x1d zxid:0x20 txntype:-1 reqpath:n/a Error Path:/brokers Error:KeeperErrorCode = NodeExists for /brokers
2018-06-28 09:17:42,136 [myid:] - INFO [ProcessThread(sid:0 cport:2181)::PrepRequestProcessor@649] - Got user-level KeeperException when processing sessionid:0x16445adb3dc0001 type:create cxid:0x1e zxid:0x21 txntype:-1 reqpath:n/a Error Path:/brokers/ids Error:KeeperErrorCode = NodeExists for /brokers/ids
```
Kafka Logs:
```
[2018-06-28 09:17:03,950] INFO Registered broker 0 at path /brokers/ids/0 with addresses: EndPoint(kafka0,9092,ListenerName(PLAINTEXT),PLAINTEXT) (kafka.utils.ZkUtils)
[2018-06-28 09:17:03,952] WARN No meta.properties file under dir /tmp/kafka-logs/meta.properties (kafka.server.BrokerMetadataCheckpoint)
[2018-06-28 09:17:03,982] INFO Kafka version : 1.0.0 (org.apache.kafka.common.utils.AppInfoParser)
[2018-06-28 09:17:03,984] INFO Kafka commitId : aaa7af6d4a11b29d (org.apache.kafka.common.utils.AppInfoParser)
[2018-06-28 09:17:03,987] INFO [KafkaServer id=0] started (kafka.server.KafkaServer)
```
```
Hi Can anyone please help me with this error : ` Unexpected topic-level metadata error: kafka server: Replication-factor is invalid.` Zookeeper Logs ```
2018-06-28 09:17:40,947 [myid:] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:ZooKeeperServer@928] - Client attempting to establish new session at /10.0.0.12:52838
2018-06-28 09:17:40,948 [myid:] - INFO [SyncThread:0:ZooKeeperServer@673] - Established session 0x16445adb3dc0001 with negotiated timeout 6000 for client /10.0.0.12:52838
2018-06-28 09:17:42,135 [myid:] - INFO [ProcessThread(sid:0 cport:2181)::PrepRequestProcessor@649] - Got user-level KeeperException when processing sessionid:0x16445adb3dc0001 type:create cxid:0x1d zxid:0x20 txntype:-1 reqpath:n/a Error Path:/brokers Error:KeeperErrorCode = NodeExists for /brokers
2018-06-28 09:17:42,136 [myid:] - INFO [ProcessThread(sid:0 cport:2181)::PrepRequestProcessor@649] - Got user-level KeeperException when processing sessionid:0x16445adb3dc0001 type:create cxid:0x1e zxid:0x21 txntype:-1 reqpath:n/a Error Path:/brokers/ids Error:KeeperErrorCode = NodeExists for /brokers/ids
```
Kafka Logs:
```
[2018-06-28 09:17:03,950] INFO Registered broker 0 at path /brokers/ids/0 with addresses: EndPoint(kafka0,9092,ListenerName(PLAINTEXT),PLAINTEXT) (kafka.utils.ZkUtils)
[2018-06-28 09:17:03,952] WARN No meta.properties file under dir /tmp/kafka-logs/meta.properties (kafka.server.BrokerMetadataCheckpoint)
[2018-06-28 09:17:03,982] INFO Kafka version : 1.0.0 (org.apache.kafka.common.utils.AppInfoParser)
[2018-06-28 09:17:03,984] INFO Kafka commitId : aaa7af6d4a11b29d (org.apache.kafka.common.utils.AppInfoParser)
[2018-06-28 09:17:03,987] INFO [KafkaServer id=0] started (kafka.server.KafkaServer)
```
Hi Can anyone please help me with this error : ` Unexpected topic-level metadata error: kafka server: Replication-factor is invalid.` Zookeeper Logs ```
2018-06-28 09:17:40,947 [myid:] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:ZooKeeperServer@928] - Client attempting to establish new session at /10.0.0.12:52838
2018-06-28 09:17:40,948 [myid:] - INFO [SyncThread:0:ZooKeeperServer@673] - Established session 0x16445adb3dc0001 with negotiated timeout 6000 for client /10.0.0.12:52838
2018-06-28 09:17:42,135 [myid:] - INFO [ProcessThread(sid:0 cport:2181)::PrepRequestProcessor@649] - Got user-level KeeperException when processing sessionid:0x16445adb3dc0001 type:create cxid:0x1d zxid:0x20 txntype:-1 reqpath:n/a Error Path:/brokers Error:KeeperErrorCode = NodeExists for /brokers
2018-06-28 09:17:42,136 [myid:] - INFO [ProcessThread(sid:0 cport:2181)::PrepRequestProcessor@649] - Got user-level KeeperException when processing sessionid:0x16445adb3dc0001 type:create cxid:0x1e zxid:0x21 txntype:-1 reqpath:n/a Error Path:/brokers/ids Error:KeeperErrorCode = NodeExists for /brokers/ids
```
Kafka Logs:
```
[2018-06-28 09:17:03,950] INFO Registered broker 0 at path /brokers/ids/0 with addresses: EndPoint(kafka0,9092,ListenerName(PLAINTEXT),PLAINTEXT) (kafka.utils.ZkUtils)
[2018-06-28 09:17:03,952] WARN No meta.properties file under dir /tmp/kafka-logs/meta.properties (kafka.server.BrokerMetadataCheckpoint)
[2018-06-28 09:17:03,982] INFO Kafka version : 1.0.0 (org.apache.kafka.common.utils.AppInfoParser)
[2018-06-28 09:17:03,984] INFO Kafka commitId : aaa7af6d4a11b29d (org.apache.kafka.common.utils.AppInfoParser)
[2018-06-28 09:17:03,987] INFO [KafkaServer id=0] started (kafka.server.KafkaServer)
```
Hi Can anyone please help me with this error : ` Unexpected topic-level metadata error: kafka server: Replication-factor is invalid.` Zookeeper Logs ```
2018-06-28 09:17:40,947 [myid:] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:ZooKeeperServer@928] - Client attempting to establish new session at /10.0.0.12:52838
2018-06-28 09:17:40,948 [myid:] - INFO [SyncThread:0:ZooKeeperServer@673] - Established session 0x16445adb3dc0001 with negotiated timeout 6000 for client /10.0.0.12:52838
2018-06-28 09:17:42,135 [myid:] - INFO [ProcessThread(sid:0 cport:2181)::PrepRequestProcessor@649] - Got user-level KeeperException when processing sessionid:0x16445adb3dc0001 type:create cxid:0x1d zxid:0x20 txntype:-1 reqpath:n/a Error Path:/brokers Error:KeeperErrorCode = NodeExists for /brokers
2018-06-28 09:17:42,136 [myid:] - INFO [ProcessThread(sid:0 cport:2181)::PrepRequestProcessor@649] - Got user-level KeeperException when processing sessionid:0x16445adb3dc0001 type:create cxid:0x1e zxid:0x21 txntype:-1 reqpath:n/a Error Path:/brokers/ids Error:KeeperErrorCode = NodeExists for /brokers/ids
```
Kafka Logs:
```
[2018-06-28 09:17:03,950] INFO Registered broker 0 at path /brokers/ids/0 with addresses: EndPoint(kafka0,9092,ListenerName(PLAINTEXT),PLAINTEXT) (kafka.utils.ZkUtils)
[2018-06-28 09:17:03,952] WARN No meta.properties file under dir /tmp/kafka-logs/meta.properties (kafka.server.BrokerMetadataCheckpoint)
[2018-06-28 09:17:03,982] INFO Kafka version : 1.0.0 (org.apache.kafka.common.utils.AppInfoParser)
[2018-06-28 09:17:03,984] INFO Kafka commitId : aaa7af6d4a11b29d (org.apache.kafka.common.utils.AppInfoParser)
[2018-06-28 09:17:03,987] INFO [KafkaServer id=0] started (kafka.server.KafkaServer)
``` Orderer Logs```
2018-06-28 09:18:39.904 UTC [orderer/consensus/kafka/sarama] NewClient -> DEBU 0a0 Successfully initialized new client
2018-06-28 09:18:39.904 UTC [orderer/consensus/kafka] try -> DEBU 0a1 [channel: testchainid] Error is nil, breaking the retry loop
2018-06-28 09:18:39.904 UTC [orderer/consensus/kafka] startThread -> INFO 0a2 [channel: testchainid] Producer set up successfully
2018-06-28 09:18:39.904 UTC [orderer/consensus/kafka] sendConnectMessage -> INFO 0a3 [channel: testchainid] About to post the CONNECT message...
2018-06-28 09:18:39.904 UTC [orderer/consensus/kafka] try -> DEBU 0a4 [channel: testchainid] Attempting to post the CONNECT message...
2018-06-28 09:18:39.905 UTC [orderer/consensus/kafka/sarama] tryRefreshMetadata -> DEBU 0a5 client/metadata fetching metadata for [testchainid] from broker kafka0:9092
2018-06-28 09:18:39.927 UTC [orderer/consensus/kafka/sarama] updateMetadata -> DEBU 0a6 Unexpected topic-level metadata error: kafka server: Replication-factor is invalid.
2018-06-28 09:18:39.928 UTC [orderer/consensus/kafka] try -> DEBU 0a7 [channel: testchainid] Initial attempt failed = kafka server: Replication-factor is invalid.
2018-06-28 09:18:39.928 UTC [orderer/consensus/kafka] try -> DEBU 0a8 [channel: testchainid] Retrying every 1s for a total of 30s
2018-06-28 09:18:40.928 UTC [orderer/consensus/kafka] try -> DEBU 0a9 [channel: testchainid] Attempting to post the CONNECT message...
2018-06-28 09:18:40.928 UTC [orderer/consensus/kafka/sarama] tryRefreshMetadata -> DEBU 0aa client/metadata fetching metadata for [testchainid] from broker kafka0:9092
2018-06-28 09:18:40.933 UTC [orderer/consensus/kafka/sarama] updateMetadata -> DEBU 0ab Unexpected topic-level metadata error: kafka server: Replication-factor is invalid.
2018-06-28 09:18:40.933 UTC [orderer/consensus/kafka] try -> DEBU 0ac [channel: testchainid] Need to retry because process failed = kafka server: Replication-factor is invalid.
```
Hi Can anyone please help me with this error : ` Unexpected topic-level metadata error: kafka server: Replication-factor is invalid.` Zookeeper Logs ```
2018-06-28 09:17:40,947 [myid:] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:ZooKeeperServer@928] - Client attempting to establish new session at /10.0.0.12:52838
2018-06-28 09:17:40,948 [myid:] - INFO [SyncThread:0:ZooKeeperServer@673] - Established session 0x16445adb3dc0001 with negotiated timeout 6000 for client /10.0.0.12:52838
2018-06-28 09:17:42,135 [myid:] - INFO [ProcessThread(sid:0 cport:2181)::PrepRequestProcessor@649] - Got user-level KeeperException when processing sessionid:0x16445adb3dc0001 type:create cxid:0x1d zxid:0x20 txntype:-1 reqpath:n/a Error Path:/brokers Error:KeeperErrorCode = NodeExists for /brokers
2018-06-28 09:17:42,136 [myid:] - INFO [ProcessThread(sid:0 cport:2181)::PrepRequestProcessor@649] - Got user-level KeeperException when processing sessionid:0x16445adb3dc0001 type:create cxid:0x1e zxid:0x21 txntype:-1 reqpath:n/a Error Path:/brokers/ids Error:KeeperErrorCode = NodeExists for /brokers/ids
```
Kafka Logs:
```
[2018-06-28 09:17:03,950] INFO Registered broker 0 at path /brokers/ids/0 with addresses: EndPoint(kafka0,9092,ListenerName(PLAINTEXT),PLAINTEXT) (kafka.utils.ZkUtils)
[2018-06-28 09:17:03,952] WARN No meta.properties file under dir /tmp/kafka-logs/meta.properties (kafka.server.BrokerMetadataCheckpoint)
[2018-06-28 09:17:03,982] INFO Kafka version : 1.0.0 (org.apache.kafka.common.utils.AppInfoParser)
[2018-06-28 09:17:03,984] INFO Kafka commitId : aaa7af6d4a11b29d (org.apache.kafka.common.utils.AppInfoParser)
[2018-06-28 09:17:03,987] INFO [KafkaServer id=0] started (kafka.server.KafkaServer)
``` Orderer Logs```
2018-06-28 09:18:39.904 UTC [orderer/consensus/kafka/sarama] NewClient -> DEBU 0a0 Successfully initialized new client
2018-06-28 09:18:39.904 UTC [orderer/consensus/kafka] try -> DEBU 0a1 [channel: testchainid] Error is nil, breaking the retry loop
2018-06-28 09:18:39.904 UTC [orderer/consensus/kafka] startThread -> INFO 0a2 [channel: testchainid] Producer set up successfully
2018-06-28 09:18:39.904 UTC [orderer/consensus/kafka] sendConnectMessage -> INFO 0a3 [channel: testchainid] About to post the CONNECT message...
2018-06-28 09:18:39.904 UTC [orderer/consensus/kafka] try -> DEBU 0a4 [channel: testchainid] Attempting to post the CONNECT message...
2018-06-28 09:18:39.905 UTC [orderer/consensus/kafka/sarama] tryRefreshMetadata -> DEBU 0a5 client/metadata fetching metadata for [testchainid] from broker kafka0:9092
2018-06-28 09:18:39.927 UTC [orderer/consensus/kafka/sarama] updateMetadata -> DEBU 0a6 Unexpected topic-level metadata error: kafka server: Replication-factor is invalid.
2018-06-28 09:18:39.928 UTC [orderer/consensus/kafka] try -> DEBU 0a7 [channel: testchainid] Initial attempt failed = kafka server: Replication-factor is invalid.
2018-06-28 09:18:39.928 UTC [orderer/consensus/kafka] try -> DEBU 0a8 [channel: testchainid] Retrying every 1s for a total of 30s
2018-06-28 09:18:40.928 UTC [orderer/consensus/kafka] try -> DEBU 0a9 [channel: testchainid] Attempting to post the CONNECT message...
2018-06-28 09:18:40.928 UTC [orderer/consensus/kafka/sarama] tryRefreshMetadata -> DEBU 0aa client/metadata fetching metadata for [testchainid] from broker kafka0:9092
2018-06-28 09:18:40.933 UTC [orderer/consensus/kafka/sarama] updateMetadata -> DEBU 0ab Unexpected topic-level metadata error: kafka server: Replication-factor is invalid.
2018-06-28 09:18:40.933 UTC [orderer/consensus/kafka] try -> DEBU 0ac [channel: testchainid] Need to retry because process failed = kafka server: Replication-factor is invalid.```
Hi Can anyone please help me with this error : ` Unexpected topic-level metadata error: kafka server: Replication-factor is invalid.` Zookeeper Logs ```
2018-06-28 09:17:40,947 [myid:] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:ZooKeeperServer@928] - Client attempting to establish new session at /10.0.0.12:52838
2018-06-28 09:17:40,948 [myid:] - INFO [SyncThread:0:ZooKeeperServer@673] - Established session 0x16445adb3dc0001 with negotiated timeout 6000 for client /10.0.0.12:52838
2018-06-28 09:17:42,135 [myid:] - INFO [ProcessThread(sid:0 cport:2181)::PrepRequestProcessor@649] - Got user-level KeeperException when processing sessionid:0x16445adb3dc0001 type:create cxid:0x1d zxid:0x20 txntype:-1 reqpath:n/a Error Path:/brokers Error:KeeperErrorCode = NodeExists for /brokers
2018-06-28 09:17:42,136 [myid:] - INFO [ProcessThread(sid:0 cport:2181)::PrepRequestProcessor@649] - Got user-level KeeperException when processing sessionid:0x16445adb3dc0001 type:create cxid:0x1e zxid:0x21 txntype:-1 reqpath:n/a Error Path:/brokers/ids Error:KeeperErrorCode = NodeExists for /brokers/ids
```
Kafka Logs:
```
[2018-06-28 09:17:03,950] INFO Registered broker 0 at path /brokers/ids/0 with addresses: EndPoint(kafka0,9092,ListenerName(PLAINTEXT),PLAINTEXT) (kafka.utils.ZkUtils)
[2018-06-28 09:17:03,952] WARN No meta.properties file under dir /tmp/kafka-logs/meta.properties (kafka.server.BrokerMetadataCheckpoint)
[2018-06-28 09:17:03,982] INFO Kafka version : 1.0.0 (org.apache.kafka.common.utils.AppInfoParser)
[2018-06-28 09:17:03,984] INFO Kafka commitId : aaa7af6d4a11b29d (org.apache.kafka.common.utils.AppInfoParser)
[2018-06-28 09:17:03,987] INFO [KafkaServer id=0] started (kafka.server.KafkaServer)
``` Orderer Logs```
2018-06-28 09:18:39.904 UTC [orderer/consensus/kafka/sarama] NewClient -> DEBU 0a0 Successfully initialized new client
2018-06-28 09:18:39.904 UTC [orderer/consensus/kafka] try -> DEBU 0a1 [channel: testchainid] Error is nil, breaking the retry loop
2018-06-28 09:18:39.904 UTC [orderer/consensus/kafka] startThread -> INFO 0a2 [channel: testchainid] Producer set up successfully
2018-06-28 09:18:39.904 UTC [orderer/consensus/kafka] sendConnectMessage -> INFO 0a3 [channel: testchainid] About to post the CONNECT message...
2018-06-28 09:18:39.904 UTC [orderer/consensus/kafka] try -> DEBU 0a4 [channel: testchainid] Attempting to post the CONNECT message...
2018-06-28 09:18:39.905 UTC [orderer/consensus/kafka/sarama] tryRefreshMetadata -> DEBU 0a5 client/metadata fetching metadata for [testchainid] from broker kafka0:9092
2018-06-28 09:18:39.927 UTC [orderer/consensus/kafka/sarama] updateMetadata -> DEBU 0a6 Unexpected topic-level metadata error: kafka server: Replication-factor is invalid.
2018-06-28 09:18:39.928 UTC [orderer/consensus/kafka] try -> DEBU 0a7 [channel: testchainid] Initial attempt failed = kafka server: Replication-factor is invalid.
2018-06-28 09:18:39.928 UTC [orderer/consensus/kafka] try -> DEBU 0a8 [channel: testchainid] Retrying every 1s for a total of 30s
2018-06-28 09:18:40.928 UTC [orderer/consensus/kafka] try -> DEBU 0a9 [channel: testchainid] Attempting to post the CONNECT message...
2018-06-28 09:18:40.928 UTC [orderer/consensus/kafka/sarama] tryRefreshMetadata -> DEBU 0aa client/metadata fetching metadata for [testchainid] from broker kafka0:9092
2018-06-28 09:18:40.933 UTC [orderer/consensus/kafka/sarama] updateMetadata -> DEBU 0ab Unexpected topic-level metadata error: kafka server: Replication-factor is invalid.
2018-06-28 09:18:40.933 UTC [orderer/consensus/kafka] try -> DEBU 0ac [channel: testchainid] Need to retry because process failed = kafka server: Replication-factor is invalid.
```
Hi, can anyone please help me with this error: `Unexpected topic-level metadata error: kafka server: Replication-factor is invalid.`
Zookeeper logs:
```
2018-06-28 09:17:40,947 [myid:] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:ZooKeeperServer@928] - Client attempting to establish new session at /10.0.0.12:52838
2018-06-28 09:17:40,948 [myid:] - INFO [SyncThread:0:ZooKeeperServer@673] - Established session 0x16445adb3dc0001 with negotiated timeout 6000 for client /10.0.0.12:52838
2018-06-28 09:17:42,135 [myid:] - INFO [ProcessThread(sid:0 cport:2181)::PrepRequestProcessor@649] - Got user-level KeeperException when processing sessionid:0x16445adb3dc0001 type:create cxid:0x1d zxid:0x20 txntype:-1 reqpath:n/a Error Path:/brokers Error:KeeperErrorCode = NodeExists for /brokers
2018-06-28 09:17:42,136 [myid:] - INFO [ProcessThread(sid:0 cport:2181)::PrepRequestProcessor@649] - Got user-level KeeperException when processing sessionid:0x16445adb3dc0001 type:create cxid:0x1e zxid:0x21 txntype:-1 reqpath:n/a Error Path:/brokers/ids Error:KeeperErrorCode = NodeExists for /brokers/ids
```
Kafka logs:
```
[2018-06-28 09:17:03,950] INFO Registered broker 0 at path /brokers/ids/0 with addresses: EndPoint(kafka0,9092,ListenerName(PLAINTEXT),PLAINTEXT) (kafka.utils.ZkUtils)
[2018-06-28 09:17:03,952] WARN No meta.properties file under dir /tmp/kafka-logs/meta.properties (kafka.server.BrokerMetadataCheckpoint)
[2018-06-28 09:17:03,982] INFO Kafka version : 1.0.0 (org.apache.kafka.common.utils.AppInfoParser)
[2018-06-28 09:17:03,984] INFO Kafka commitId : aaa7af6d4a11b29d (org.apache.kafka.common.utils.AppInfoParser)
[2018-06-28 09:17:03,987] INFO [KafkaServer id=0] started (kafka.server.KafkaServer)
```
Orderer logs:
```
2018-06-28 09:18:39.904 UTC [orderer/consensus/kafka/sarama] NewClient -> DEBU 0a0 Successfully initialized new client
2018-06-28 09:18:39.904 UTC [orderer/consensus/kafka] try -> DEBU 0a1 [channel: testchainid] Error is nil, breaking the retry loop
2018-06-28 09:18:39.904 UTC [orderer/consensus/kafka] startThread -> INFO 0a2 [channel: testchainid] Producer set up successfully
2018-06-28 09:18:39.904 UTC [orderer/consensus/kafka] sendConnectMessage -> INFO 0a3 [channel: testchainid] About to post the CONNECT message...
2018-06-28 09:18:39.904 UTC [orderer/consensus/kafka] try -> DEBU 0a4 [channel: testchainid] Attempting to post the CONNECT message...
2018-06-28 09:18:39.905 UTC [orderer/consensus/kafka/sarama] tryRefreshMetadata -> DEBU 0a5 client/metadata fetching metadata for [testchainid] from broker kafka0:9092
2018-06-28 09:18:39.927 UTC [orderer/consensus/kafka/sarama] updateMetadata -> DEBU 0a6 Unexpected topic-level metadata error: kafka server: Replication-factor is invalid.
2018-06-28 09:18:39.928 UTC [orderer/consensus/kafka] try -> DEBU 0a7 [channel: testchainid] Initial attempt failed = kafka server: Replication-factor is invalid.
2018-06-28 09:18:39.928 UTC [orderer/consensus/kafka] try -> DEBU 0a8 [channel: testchainid] Retrying every 1s for a total of 30s
2018-06-28 09:18:40.928 UTC [orderer/consensus/kafka] try -> DEBU 0a9 [channel: testchainid] Attempting to post the CONNECT message...
2018-06-28 09:18:40.928 UTC [orderer/consensus/kafka/sarama] tryRefreshMetadata -> DEBU 0aa client/metadata fetching metadata for [testchainid] from broker kafka0:9092
2018-06-28 09:18:40.933 UTC [orderer/consensus/kafka/sarama] updateMetadata -> DEBU 0ab Unexpected topic-level metadata error: kafka server: Replication-factor is invalid.
2018-06-28 09:18:40.933 UTC [orderer/consensus/kafka] try -> DEBU 0ac [channel: testchainid] Need to retry because process failed = kafka server: Replication-factor is invalid.
```
Try to check the logs of your Kafka servers @anjalinaik
and paste them here
also, did you try to search Google for `replication-factor is invalid`?
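In case it helps anyone hitting the error above: Kafka rejects topic creation when the requested replication factor exceeds the number of live brokers, and the logs above show only a single broker (kafka0) registered while the sample configs default to a factor of 3. A hedged sketch of a single-broker dev setup (environment variable names as used in typical Fabric Kafka compose files; values are illustrative, adjust to your deployment):

```yaml
# Illustrative fragment, not the poster's actual file.
# With one broker, the replication factor must be 1; alternatively,
# run at least as many brokers as the configured replication factor.
kafka0:
  image: hyperledger/fabric-kafka
  environment:
    - KAFKA_BROKER_ID=0
    - KAFKA_ZOOKEEPER_CONNECT=zookeeper:2181
    # single-broker development settings:
    - KAFKA_DEFAULT_REPLICATION_FACTOR=1
    - KAFKA_MIN_INSYNC_REPLICAS=1
```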
Has joined the channel.
@kostas thanks! i'm also wondering about the case of the Kafka orderer: if there is a single Kafka cluster, how can different orgs run orderers for different channels? does the orderer support using different Kafka clusters for different channels?
also what would happen to peers that are already synced if the orderer changed the blocks? i assume that behavior is undefined, and the peers would likely just stop syncing blocks since the hashes wouldn't match
@moodysalem: The Kafka-based service is meant to be run by one org. Different orgs can run their own brokers and take part in a joint cluster, but it really all comes down to which org runs the so-called cluster controller, the Kafka broker that handles partition assignments. That org has the ultimate power. This is all part of why we're working towards a BFT solution.
> also what would happen to peers that are already synced if the orderer changed the blocks? i assume that behavior is undefined, and the peers would likely just stop syncing blocks since the hashes wouldn't match
A peer that's synced up to block X, would request blocks X+1 to latest from the ordering service.
> also what would happen to peers that are already synced if the orderer changed the blocks? i assume that behavior is undefined, and the peers would likely just stop syncing blocks since the hashes wouldn't match
I assume they would stop syncing as well, but I do not know that for a fact. Good question. Perhaps @C0rWin or @yacovm knows?
thanks again @kostas !
can someone point me to where in the code i can look to understand how connected peers are notified when a new block is minted? i want to know if that's going through writing to the ledger and then reading from the ledger
@moodysalem: #fabric-peer-endorser-committer might be a better venue for that one.
@C0rWin No, there are no errors. I just see the previous docker image of the orderer (as per the timestamp). If I am correct, I should see the latest orderer image with an updated timestamp.
hmmm but i think that would probably happen in the deliver rpc handler and/or the WriteBlock impl.? seems like an orderer thing, maybe i'm misunderstanding though
@moodysalem
@moodysalem https://github.com/hyperledger/fabric/blob/release-1.1/core/committer/committer_impl.go : postCommit function in this file. I am also trying to understand the same functionality, came across this code today.
@moodysalem: Unless I'm misunderstanding the question, it has to do with the peer's deliver client, hence the reference to the #fabric-peer-endorser-committer. At any rate, the orderer part of this equation (i.e. its deliver handler logic) can be found here: https://github.com/hyperledger/fabric/blob/ff5e861deba7b394ed1aaaa85f9220c4677dc6ff/common/deliver/deliver.go#L153
What if you delete the kafka and zookeeper logs?
Will my network remain functional?
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=WFpMFhAXbrEs5zpuN) @kostas Will you please look into this question?
Hi.. Has anybody tried launching Kafka Orderers with zookeeper ensemble in multiple hosts?
Hi! I have a theoretical question:
When mixing writing and reading from the system a read-operation sometimes takes A LOT of time. (like over 30 seconds instead of the usual ~100 ms)
This seems to happen when an update of a value is committed to the ordering service and I try to read from one of the non-endorsing peers.
Does anyone know why this happens? (It only happens sometimes, not always under these circumstances)
Example:
b=1 in the ledger
I send transaction that b=2 to org1 and org2 and get confirmation that the transaction is successfully sent to the orderer
I try to read b from org1, within 100ms it gives me 2
I try to read b from org3, it takes 30 seconds and then it gives me 2.
(Using Node.js SDK, 20 peers (one per org), SOLO orderer)
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=LKkCN3vurdYG6nsw2) @SaraEmily Maybe since the delay is on a non-endorsing peer the Docker image and container for that chaincode on that peer had not been created yet so the delay you experience is the overhead of setting that up?
@silliman Thanks for your answer, I will look in to that!
@silliman Yep, that was the problem! Thanks!
Has left the channel.
Has left the channel.
@anjalinaik: I would search Google for "Replication-factor is invalid" and work my way from there. This is purely a Kafka issue.
@DivyaAgrawal @kostas thanks, i was referring to the orderer's deliver handler and those links are useful. i'm still learning go, but i see this is the code that reads from the channel up to the specified block https://github.com/hyperledger/fabric/blob/ff5e861deba7b394ed1aaaa85f9220c4677dc6ff/common/deliver/deliver.go#L235
does the deliver call stop when all the specified blocks have been retrieved? i thought you could subscribe to new blocks via the rpc by calling deliver, but it looks like it terminates when all the blocks have been delivered. it does look like the read goes through the ledger though, via the iterator, so a new block is written to disk and then read from disk to serve the connected peers
nevermind, i see it blocks on iterCh and only fails if SeekInfo_FAIL_IF_NOT_READY flag is set
@DivyaAgrawal i think the code you linked is only used in the peer
@moodysalem Yes, it is for event generation after a block has been committed to the ledger.
so... hopefully one last question, can anyone confirm that the orderer does not need to index the blocks? afaict the ledger indexing is only for the peers that need to serve client requests
@moodysalem: The orderer does index the blocks as it has its own ledger.
It doesn't _verify_ the transactions in the blocks though, in contrast to what the peer does, when committing the transactions to its ledger. (If you were to compare the two ledgers, you'd see that blocks on the peer side have an additional validity bitmask in the block's metadata.)
The orderer needs its own local ledger as this is where it serves Deliver requests from.
Are you a developer by the way?
to clarify, by index i mean the peer ledger interface allows you to look up transactions by hash, etc., as in the blkstorage.go file https://github.com/hyperledger/fabric/blob/release-1.1/common/ledger/blkstorage/fsblkstorage/blockindex.go
Ah, no. No index of that sort.
(Good questions.)
Has left the channel.
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=ZpTo7CeJTG5EerAQE) @kostas Hi.. thank you for your reply. Unfortunately I am unable to find what is wrong :( I am launching the kafka orderers on 4 different servers connected by a docker swarm. Will the configurations remain the same, or is there any change when launched on different servers?
hi! anyone used SSL with kafka for fabric before?
Yep
ServiceUnavailableError.txt
@channel, I am seeing the above shown error when I try to create a channel for a customized tutorial. I can provide more info if needed and hoping someone can help me see what I am doing wrong or missing. Thanks.
@pvrbharg This usually indicates either:
1. The Kafka cluster has not had time to properly start. The error will resolve itself in a few minutes.
2. There is a communication problem within the Kafka cluster, or between the OSN and the Kafka cluster.
@jyellick Dear Jason - OK let me retry and give it some time to see if the boot up of cluster settles itself - I was getting self doubts... Thank you for your update and guidance!
Has joined the channel.
Hi Experts.. If i have to bring down a hyperledger fabric network with Kafka-Zookeeper orderers, what components [docker images/containers etc ] will need to be removed, so that the system has no residual data of previous network?
@anjalinaik If you are using docker, I would recommend that you simply remove all containers and shared volumes.
Has joined the channel.
Will the BFT consensus method(RAFT) in the future be an easy swap with the current kafka solution, also is there an ETA on the RAFT solution? I see that it's mentioned potentially in 1.4
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=9ANaLfLcv7D2SPbQc) My apologies Jason - after setting up a brand new instance on Ubuntu and replicating my work [previously done on RHEL v7.5] - I end up at the same issue and the problem is persistent - one of 3 zk instances gets in trouble and my step to create a channel fails. I tried an experiment by adding one extra zk instance - 4 instead of 3 - and 3 of them come up healthy with one instance getting sick - my step still fails. I am at this point suspecting my config or setup may be the culprit - however I do not know how to get past this unnecessary issue (not related to what I am trying to do really). I also moved my kafka/zk images up to the most current versions, to no avail. I am hoping you can point me to someone who may review my config and logs that I am attaching here - thank you for any guidance you may provide...
ServiceUnavailableErrorLogs_July052018.zip
> Will the BFT consensus method(RAFT) in the future be an easy swap with the current kafka solution, also is there an ETA on the RAFT solution? I see that it's mentioned potentially in 1.4
Raft is CFT, not BFT. I think it's going to be tight for 1.3. We will be releasing a tool that allows you to import your existing blockchain from a Kafka network into a Raft one. If this is not released at the time of the Raft-based ordering service, it will be right after.
What happen if kafka or zookeeper's logs are deleted. Is there any way to resume the network without deleting everything on a production environment?
@pankajcheema The short answer is 'no'. The long answer is 'yes', a manual recovery is possible, but presently, there are no tools to facilitate this.
Hi! I am trying to create 2 channels, one that has 3 chaincodes and the 2nd has only one. I have one orderer, three zookeeper and four kafka nodes. Can anyone explain why I am getting a zookeeper timeout when I try to create the 2nd channel's chaincode?
@dharuq Instantiating chaincode and zookeeper are nearly entirely unrelated. If you are seeing zookeeper timeouts during instantiation, then my suspicion is that your docker networking is having problems under load. Are you perhaps running your environment on MacOS?
when a brand new OSN is launched and connects to Kafka for existing channels and topics that have a history of messages, what does it do? does it read from the beginning and reconstruct the orderer ledger?
@moodysalem Correct, it replays the entire history of transactions as encoded in the Kafka brokers.
You may shortcut this process by taking a backup of an existing OSN and using it to bootstrap your new OSN's ledger before starting.
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=WYoNRXTqnhZNtwYvW) @jyellick thanks for your quick support. Can you guide me how to do a manual recovery?
Yes..I am on macos @jyellick [ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=CuJe65qaLaercgha7)
How should I solve this? @jyellick [ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=CuJe65qaLaercgha7)
Hi Guys. Can anyone please help me out. i have 4 orderers with kafka zookeeper cluster and i added another organization to the existing running network. However I found that while doing so, I am not able to get same channel config file from orderer1-org0 as compared to what i am getting from other orderers i.e. orderer2-org0, orderer3-org0 and orderer4-org0
I am getting the below logs from orderer1-org0 (block no. 4 as latest) when i trigger peer channel fetch config config_block.pb -c mychannel $ORDERER_CONN_ARGS
2018-07-07 20:22:50.487 UTC [msp] setupSigningIdentity -> DEBU 034 Signing identity expires at 2019-07-07 08:30:00 +0000 UTC
2018-07-07 20:22:50.487 UTC [msp] Validate -> DEBU 035 MSP org1MSP validating identity
2018-07-07 20:22:50.488 UTC [msp] GetDefaultSigningIdentity -> DEBU 036 Obtaining default signing identity
2018-07-07 20:22:50.490 UTC [grpc] Printf -> DEBU 037 parsed scheme: ""
2018-07-07 20:22:50.494 UTC [grpc] Printf -> DEBU 038 scheme "" not registered, fallback to default scheme
2018-07-07 20:22:50.495 UTC [grpc] Printf -> DEBU 039 ccResolverWrapper: sending new addresses to cc: [{orderer1-org0:7050 0
but when i trigger the same command from other org - orderer3-org0 then I am getting 5th block as latest one. Below given are logs
2018-07-07 20:24:34.956 UTC [msp] setupSigningIdentity -> DEBU 034 Signing identity expires at 2019-07-07 08:30:00 +0000 UTC
2018-07-07 20:24:34.957 UTC [msp] Validate -> DEBU 035 MSP org1MSP validating identity
2018-07-07 20:24:34.962 UTC [msp] GetDefaultSigningIdentity -> DEBU 036 Obtaining default signing identity
2018-07-07 20:24:34.963 UTC [grpc] Printf -> DEBU 037 parsed scheme: ""
2018-07-07 20:24:34.964 UTC [grpc] Printf -> DEBU 038 scheme "" not registered, fallback to default scheme
2018-07-07 20:24:34.964 UTC [grpc] Printf -> DEBU 039 ccResolverWrapper: sending new addresses to cc: [{orderer3-org0:7050 0
this is what i am getting as latest from orderer1-org0 logs. Can anyone pls confirm whether this is a bug or i am missing something
2018-07-07 20:25:17.495 UTC [orderer/consensus/kafka/sarama] RefreshMetadata -> DEBU 3288 client/metadata fetching metadata for all topics from broker kafka0:9092
2018-07-07 20:25:18.695 UTC [orderer/consensus/kafka/sarama] RefreshMetadata -> DEBU 3289 client/metadata fetching metadata for all topics from broker kafka0:9092
Has joined the channel.
@pankajcheema @dharuq Manual recovery is not something that I'd recommend as an exercise for a user. It really needs some tool support.
Is there an assumption (documented preferably) that the ordering service is the ultimate source of truth?
> that the ordering service is the ultimate source of truth?
@toddinpal I'm not sure what you mean. The ordering service creates the blockchain, and the authenticity of the blocks is verified because the orderers have signed the blocks. The world state, and other aspects of the state database, are largely derived from the blockchain, but are computed and stored by peers. If you wished to know for instance the value of a key in the state database, you cannot ask the orderers, you must ask a peer or peers.
@jyellick Hi Jason, I understand how Fabric works, I'm just curious in the case of a total disaster, is the Fabric assumption the ordering service has the last say as to what blocks were cut and which weren't?
In the case of a 'total' disaster, I think you would have to go to a 'longest chain' recovery mode, where you got the blockchain for every channel from every node in the network, orderer and peer, and picked the longest valid chain for each channel. It's certainly possible that an orderer cuts and delivers a block, then catches fire. If enough such simultaneous faults occurred, it's possible that the OSNs could have fewer blocks than the peers, but this is truly a very corner case fault statement.
Right, but it's this corner case I'm looking at. From what I can tell, there is no mechanism in Fabric for the ordering service to recover from the peers, so it seems of questionable value to try and look at all the channels/nodes and determine what was the last block cut on each. Is such a corner case considered by Fabric or not?
Fabric works under certain fault assumptions. In ordering, our crash fault tolerance is tied to the crash fault tolerance of Kafka. If you have configured your Kafka network to tolerate one crash fault, and two simultaneous crash faults occur, then all guarantees are off, and you are looking at manual, human driven recovery. This likely means shutting down the nodes in the network, and copying blocks around by hand until the network is consistent again. This would require considerable knowledge of the internal workings of Fabric, and would also likely require tooling which has not been created yet. In short, do not violate Fabric's fault tolerance assumptions, and you can avoid all of the pain above. If you do not violate Fabric's crash fault assumptions, then your original question about "ordering being the source of truth" (for the blockchains) is valid.
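As a concrete illustration of those fault assumptions: Kafka's crash tolerance is set by its broker configuration. The setting names below are real Kafka options; the values are only an example sketch under which one broker per partition can crash without losing acknowledged writes:

```properties
# Illustrative Kafka broker settings: with a replication factor of 3
# and a minimum of 2 in-sync replicas, a write is acknowledged only
# once two replicas hold it, so a single broker crash loses nothing.
default.replication.factor=3
min.insync.replicas=2
# Never elect an out-of-sync replica as leader (avoids losing writes):
unclean.leader.election.enable=false
```

Under such a configuration, two simultaneous broker crashes would exceed the tolerated fault budget, which is exactly the manual-recovery scenario discussed here.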
It would be nice if that were stated somewhere as an underlying architectural or design assumption, i.e., that recovery in many cases requires having a working ordering service. Peers are easily recoverable from other peers or the ordering service in the case of a total loss, and that is fully supported by the protocols as far as I can tell. The ordering service on the other hand has no such mechanism.
If you can think of a good place to highlight this, you're welcome to submit a CR or open a JIRA to have someone else do so.
I think a section somewhere in the architecture documentation ought to document the failure scenarios and their recovery mechanisms.
Do you have a link to where you are thinking? There are several different pieces of architecture documentation.
not at the moment, let me look it over and see where it makes sense. For software that will be used in enterprise environments, this sort of information is critical for the success of any project long term.
probably as a new section under: https://hyperledger-fabric.readthedocs.io/en/release-1.2/architecture.html
Something that covers high availability, fault tolerance, and recovery processes
I've opened https://jira.hyperledger.org/browse/FAB-11103 for you. Feel free to edit or comment as you see fit.
That Jira looks great. Thanks
@jyellick @kostas is there a detailed description of SBFT protocol? all I can find online is this: http://sammantics.com/blog/2016/7/27/chain-1. also can you confirm that SBFT is being added to Fabric (and not BFTSmart?) as the protocol of choice for a BFT consensus? thanks!
@jimthematrix: Hi Jim. The SBFT protocol described in that link is different to what we have been calling SBFT, they just share the same name. The closest thing you'll find to a detailed description of what we're trying to do is https://jira.hyperledger.org/browse/FAB-378 + https://jira.hyperledger.org/browse/FAB-897. We have been working on a detailed spec but it is not shareable yet. I would hesitate to state authoritatively what the project will choose as its primary BFT protocol, but practically and realistically speaking the answer is yes -- the plan is to roll with this PBFT variant of ours.
thanks very much for the information @kostas :thumbsup:
Has joined the channel.
Has joined the channel.
Has joined the channel.
while trying to create a channel in testAPI, I am getting a service unavailable error from the ordering service.
the logs of the orderer container are shown below:
grpc: Server.Serve failed to complete security handshake from "172.23.0.1:35240": EOF
@Sreesha it sounds like you have a networking error between your client and your orderer
hi all, if you have multiple OSNs subscribing to a Kafka topic for a channel, and 3 decide to cut a block at the same time, and one node actually cuts the block and signs it (could use some confirmation on how this decision of which OSN cuts the block is made-i think it's just the first message to cut the block as decided by Kafka), how do the other 2 OSNs get the block signed by the 1 OSN that cut it?
@moodysalem Each OSN cuts based on the first message, and signs themselves. Each orderer constructs an independent copy of the blockchain, but deterministically.
Is there anything that we can do from orderer side related to MVCC error? I know its coming from peer side but just want to know if is there anything related to orderer
@JayPandya No, the orderer does nothing with respect to MVCC
Okay thanks for confirmation
@jyellick does that mean that multiple versions of a specific block number exist and are propagated with different signatures? that's a bit alarming, unless the signature isn't part of the hash?
i guess it's not really alarming, but different peers are likely to have different signatures on their blocks..
does it matter if my OSNs all share a certificate vs. each has a separate signing certificate?
@moodysalem Correct, the block signature is part of the block metadata, which is not part of the hash.
So, each block has a unique signature, but the same hash.
However, there is ongoing work to add new consensus models. The closest to ready is a Raft implementation. In this case, the same signature will be propagated to all nodes.
After this, we hope to add a PBFT based consensus to ordering, in which case each block will have a set of at least f+1 signatures, though the particular set may vary node to node.
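The point about signatures living outside the hash can be shown with a toy sketch. The real block is a protobuf and the real hashing covers the protobuf-encoded header; the struct and function below are simplified stand-ins, but the key property holds: the chain hash covers the header (number, previous hash, data hash) and not the metadata carrying each orderer's signature.

```go
package main

import (
	"crypto/sha256"
	"encoding/binary"
	"fmt"
)

// Simplified, illustrative stand-in for a block.
type block struct {
	number   uint64
	prevHash []byte
	dataHash []byte
	metadata []byte // per-OSN signature lives here, outside the hash
}

// headerHash hashes only the header fields, never the metadata.
func headerHash(b block) [32]byte {
	buf := make([]byte, 8)
	binary.BigEndian.PutUint64(buf, b.number)
	buf = append(buf, b.prevHash...)
	buf = append(buf, b.dataHash...)
	return sha256.Sum256(buf)
}

func main() {
	a := block{7, []byte("prev"), []byte("data"), []byte("sig-from-OSN-A")}
	b := block{7, []byte("prev"), []byte("data"), []byte("sig-from-OSN-B")}
	// Different signatures, same chain hash:
	fmt.Println(headerHash(a) == headerHash(b))
}
```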
Hello @jyellick - In fabric-1.0.x, do we have a CLI command that would list the created channels on the network?
Has joined the channel.
@rahulhegde If you have access to an orderer's filesystem, you can likely do so with 'ls' or similar. But from a fabric perspective, no.
There is an issue on the backlog to encode the list of channels into the config block of the orderer system channel, but it is as of yet, unimplemented.
After enrolling orderer using fabric ca , i have the following certs generated:
Clipboard - July 12, 2018 12:41 PM
Clipboard - July 12, 2018 12:42 PM
Clipboard - July 12, 2018 12:43 PM
Now in balance-transfer/artifacts/network-config.yaml which tls ca cert should i specify for orderer
Clipboard - July 12, 2018 12:45 PM
Has joined the channel.
i am facing an error when i am trying to create a channel = ``` "error authorizing update: error validating DeltaSet: policy for [Group] /Channel/Application not satisfied: Failed to reach implicit threshold of 1 sub-policies, required 1 remaining" ``` does anybody know what the problem is?
@Sreesha Please don't paste screenshots like this. Use a service like hastebin.com and paste links. Pasting images makes this channel very hard to read.
Usually, you can find the orderer tls cert at a location like:
```$ ls crypto-config/ordererOrganizations/example.com/orderers/orderer.example.com/tls/
ca.crt server.crt server.key
```
The folder you referenced is the TLS CA for the ordering org. You should be able to find a directory for the specific orderer which contains a signing MSP and TLS info. If you do not have one, you may need to generate one.
pvrbharg
@thakurnikk Most likely you are not submitting the request as an admin
@jyellick i changed the user context from "user" to "admin" in nodesdk still i get the same error
To debug the exact policy evaluation failure. Please set your orderer logs to debug, reproduce the error, and post the logs to a service like hastebin.com for us to analyze here.
sure @jyellick
@jyellick Sorry for the screenshots. I will take care of it from now onwards. But as you can see, I have just server.crt and server.key in my tls folder. I don't have a ca.crt generated. So which file should I use for tls?
here are the orderer logs - https://hastebin.com/ogeyakujaw.cpp for channel creation error @jyellick
@thakurnikk
```orderer0.debutinfotech.com | 2018-07-13 04:48:14.241 UTC [msp] satisfiesPrincipalInternalPreV13 -> DEBU 12e Checking if identity satisfies ADMIN role for debutMSP
orderer0.debutinfotech.com | 2018-07-13 04:48:14.241 UTC [cauthdsl] func2 -> DEBU 12f 0xc4200ba8c8 identity 0 does not satisfy principal: This identity is not an admin
orderer0.debutinfotech.com | 2018-07-13 04:48:14.241 UTC [cauthdsl] func2 -> DEBU 130 0xc4200ba8c8 principal evaluation fails
```
It looks like the admin cert is not actually an authorized admin cert. Remember, if you issue an admin cert after bootstrapping the system, you must update the MSP definition in the channel configuration to reflect the new cert.
Has joined the channel.
@Sreesha Then you should be able to point to `/etc/hyperledger/orderer/msp/tlscacerts/
[Please Read Before Posting](https://wiki.hyperledger.org/community/chat_channels/fabric-orderer)
In an attempt to keep the Wiki clean (with a small number of namespaces), I moved the Wiki page that contains the posting guidelines. Hence the topic update.
Hi!
I'm trying to use Kafka-ordering service for my node.js SDK network. I used this code as a template to my work: https://github.com/skcript/Kafka-Fabric-Network (but with some changes since I don't use the CLI version).
The network and all components start-up just fine but when trying to create a channel I get `[ERROR] Service Unavailable` with the text `Handshake failed with fatal error SSL_ERROR_SSL: routines:SSL3_GET_RECORD:wrong version number`
The logs for one of the orderers indicate the error `grpc: Server.Serve failed to create ServiceTransport: connection error: desc="transport: http2Server.HandleStream received bogus greeting from client"`
Does anyone know why this problem occurs? Or know of any resource on how to apply kafka OS to node.js SDK networks?
Thanks!
@SaraEmily seems to me like the SDK isn't using TLS to connect to the orderer...
or... you're using maybe the CLI to create the channel?
No I'm using the node.js SDK to set up the channel, I'm doing it in the same way as in the balance-transfer sample: https://github.com/hyperledger/fabric-samples/tree/release-1.2/balance-transfer
I would bet the node SDK isn't configured to use TLS then
What does that imply for me?
you're saying that in the *orderer logs* you see these "received bogus from client" right?
Yes, in one of my 3 orderers
So... you need to make it use TLS
I'll DM you
Has joined the channel.
@mastersingh24 any idea why her config file https://drive.google.com/file/d/11O0GMQ8WHpXw78LMFvkywBIY3vvw-DF_/view
doesn't make the node SDK connect with TLS? (It seems like that's the reason... no?)
Has joined the channel.
hi. It seems the design document on kafka ordering https://docs.google.com/document/d/1vNMaM7XhOlu9tB_10dKnlrhy5d7b1u8lSY8a-kVjCO4/edit?usp=sharing is missing. Where can I find it (or something similar) now?
hi, if we want to add a new org to the orderer system channel, a config update operation for all channels in the network should be done. If the new org is added to an application channel, the config update operation can be done only for that application channel. Am I right? Thank you.
If I am wrong, please kindly tell me the procedures for the two cases.
hi, if a new org is added to an application channel, should the orderer system channel be updated? The link on adding an org to a channel does not mention updating the orderer system channel, but if the system channel is not updated to include the new org's MSP, the org's client users cannot ask for ordering service because the orderer cannot verify their signatures.
Hi, I notice that FAB-7330 describes a problem if multiple nodes in the kafka cluster go down, as kafka does not flush all data to disk. FAB-7330 describes the circumstances where this was caused by an upgrade, and it is fixed by making the upgrade shut down gracefully. What would happen if the failure was not caused by an upgrade but by, for example, a hardware failure of a server? Is there a way of recovering from this scenario? Is there a possibility, for example, that one of the orderers' ledgers contains a block with transactions that Kafka no longer has, and would that matter?
I see that http://grokbase.com/t/kafka/users/162rashxyz/unable-to-start-kafka-cluster-after-crash-0-8-2-2 mentions a way of getting the Kafka cluster up but with data loss. What would the orderers make of this, and how should the system be recovered?
Has joined the channel.
Has joined the channel.
Has joined the channel.
Hello! Does anybody have experience with BFT-SMaRt? There's a github package that integrates it with fabric, but I'm having trouble making it work in a distributed environment
@kristycarp - your best bet will be to ping the folks who contributed the code
https://chat.hyperledger.org/channel/fabric-orderer?msg=H2SD8dEu2W8TBhxq9
Has joined the channel.
Hi everyone. Has anyone tried generating certificates using fabric-ca rather than cryptogen and using them to run balance-transfer?
For me the orderer is rejecting the channel creation request.
Can anyone help me?
Has joined the channel.
@Sreesha Most likely, you generated your admin certificate after bootstrapping your orderer. Admin certificates are enumerated in the channel configuration, so you should generate them before bootstrapping the orderer.
hi. where is the design document on kafka ordering https://docs.google.com/document/d/1vNMaM7XhOlu9tB_10dKnlrhy5d7b1u8lSY8a-kVjCO4/edit?usp=sharing ? Is it no more available...
@kostas I believe you owned the above document, any idea? ^
@ddurnev @jyellick: Ugh, this may have been lost accidentally when I did some spring cleaning between my Google accounts earlier this week. This new link should work: https://docs.google.com/document/d/19JihmW-8blTzN99lAubOfseLUZqdrB6sBR0HsRgCAnY/edit?usp=sharing
@ddurnev: Can you point me to where you found the original link so that I can update it?
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=xF3KhcEKLNTRELTzp) @jyellick If we want to create a new channel after the network is already booted, how do we do it? Add a new consortium to config.yaml, update the orderer channel configuration to include the new consortium for the new channel, generate the new channel tx for the new channel, and so on?
Has joined the channel.
Hi all, I have a doubt regarding the kafka ordering service. I have 4 kafkas running on 4 different machines with 3 zookeepers in each machine.
When I try to bring up the orderer I am getting an error: "Unexpected topic-level metadata error: kafka server: Replication-factor is invalid."
The replication factor that I provided is 3.
@jyellick Here: https://hyperledger-fabric.readthedocs.io/en/release-1.1/kafka.html in "Big Picture"
Hi all, When I am submitting the channel creation using the "peer channel create" command I am getting an error (multiple hosts)
error-kafka1.png
erro-kafka.png
Failed to connect to broker b06eb1f5dbfc:9092: dial tcp: lookup b06eb1f5dbfc on 127.0.0.11:53: no such host
I have added the corresponding IPs to /etc/hosts
I am able to telnet the kafka0 running at 9092
https://chat.hyperledger.org/channel/fabric-orderer?msg=ytXj8T2xNdqFBNXKe
@qsmen You do not need to create a new consortium, simply update the org definition with the new admin cert. Then channel creation will work normally.
https://chat.hyperledger.org/channel/fabric-orderer?msg=L7tpc4qgBiWLfJSmd
@ddurnev Yes, I assumed that was where you had found it. This link used to be valid. I believe @kostas was the owner of this document, I suspect he may have accidentally archived it or similar which has made it inaccessible. He is currently on vacation, but when he returns, we will see about restoring it.
@Unni_1994 Most likely you are hosting your Kafka cluster inside docker and connecting to it from outside. You need to set the advertised host and port to something which is externally routable. See `advertised.host.name` and `advertised.port` in http://kafka.apache.org/090/documentation.html (these can also usually be overridden from the environment).
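For example, when the brokers run under docker-compose, the advertised address is typically overridden through environment variables. The service name and hostname below are made-up placeholders; the `KAFKA_ADVERTISED_*` variables map onto the Kafka settings just mentioned:

```yaml
kafka0:
  image: hyperledger/fabric-kafka
  environment:
    # Advertise an address external clients can resolve, instead of
    # the container ID Kafka would otherwise report:
    - KAFKA_ADVERTISED_HOST_NAME=kafka0.example.com
    - KAFKA_ADVERTISED_PORT=9092
```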
@jyellick OK, thanks. Are there any more documents on kafka ordering (besides the design doc)? Or is kafka consensus considered a temporary (non-BFT) solution that will be dropped in the near future?
Presently, development effort is focused on a Raft based solution which is seen as a stepping stone to BFT. I can't speak towards when/if Kafka support will be dropped, I think it will depend on how Raft performs, especially with a plurality of channels. That design document is probably the best thing that was available. You're of course also welcome to look at the code. The Kafka connecting piece itself is 1431 lines, including comments and whitespace.
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=97H5DBym7RMjjoNiQ) @rsherwood Kafka requires that at least 1 ISR remain alive at all times in order to not lose a message. By shutting down all Kafka brokers simultaneously and forcefully, they were reproducing the worst case scenario for Kafka data retention. That said, because even this "simultaneous" shutdown is not truly simultaneous, some brokers did have time to flush out more messages than others. Upon restarting, this Kafka cluster should have been able to synchronize its replicas without the loss of messages, but a bug prevented that from happening. Fixes available from Kafka 0.11 on handle this situation.
That said if you manage to:
• Have every Kafka broker die simultaneously
• And a message has been replicated enough to be considered "committed"
• And that message has been delivered to a subset of your orderers
• And that message was not flushed to the disk of ANY of the Kafka brokers
Then the message is lost, and upon restarting the OSNs will fork the chain. The ledgers on the peers would become corrupted as they mix blocks from your set of OSNs. Recovery from this scenario would require you to:
• Find the 'correct' chains on the disks of the OSNs.
• Copy the 'correct' chains to your various OSNs.
• Start the OSNs.
• Delete the ledgers from all your peers and have the peers rebuild them from the OSNs.
If you do suffer an outage that you suspect might put you into this position, you can, upon restarting the OSNs and before you allow any peers to connect to them, retrieve the latest blocks from the OSNs and compare their headers.
Of course, this scenario is unlikely. Even if you try to forcefully kill every broker simultaneously, as long as one of them has committed the message, everything will be ok (which is what happened in FAB-7330, but then we ran into a Kafka bug).
Has joined the channel.
Has joined the channel.
Does anyone know about `AbsoluteMaxBytes: 99 MB` & `PreferredMaxBytes: 512 KB` in `configtx.yaml` under `Orderer`?
@pankajcheema There are comments in the config file which describe their use
Hi, jyellick, thank you. But I am still a little confused. To create a channel, I will run configtxgen with a channel profile, where the consortium is included. So would you please tell me the procedure to add a new channel? Further, the link https://hyperledger-fabric.readthedocs.io/en/release-1.2/network/network.html describes the dynamic expansion of a network, however, the doc does not describe how to achieve it. It would be great if the doc could describe how.
@jyellick I tried to understand from the comments but no luck
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=u8FkCRfTyKXxY7QSf) Can anyone make it more clear
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=fQyXq2wgsafwr6nme) Please
Has joined the channel.
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=NnYD79aQt547PBAg5) @sanchezl Regarding the Kafka cluster OSN recovery:
How do I find the 'correct' chains? Do they need to match Kafka, and why exactly are my orderers logging `Rejecting deliver request for xx.xx.xx.xx:52438 because of consenter error`?
@sanchezl e.g. when my orderers are at offset 1000 for a topic and my (recovered) Kafka cluster is at 900 because of some data loss, is there any way to recover from this?
thanks @jyellick
Hi, when I try to join the channel in a multi-host setup I get an error like this in the peer logs: "gossip/discovery] func1 -> WARN 1d4 Could not connect to {peer0.org1.example.com:7051 [] [] peer0.org1.example.com:7051} : context deadline exceeded"
hi! @Kyroy I'm also interested in what happens in the similar case where some peers retained more blocks than were recovered after the Kafka failure. It seems that Kafka is the SPOT (and SPOF) in this case and missing blocks (not in the Kafka partition) are not recoverable...
1*join-error.png
Has joined the channel.
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=KY8zrfErgjysGxTm2) @Kyroy You would have to add a block to the OSN ledger to point to the next kafka offset to continue from. If you can somehow roll back your ledgers (and accept the data loss) on all peers and OSNs to the appropriate state, this offset might be 900. If you can't accept the data loss, then you can point to offset 0 and start over with fresh new Kafka topics (this has consequences for how you initialize a new OSN in the future). We don't have a tool to manipulate the OSN ledger, but it should be possible to write one.
@sanchezl Thanks for the info :) Where can I find more information about the state that is maintained by the orderers? Also, are there any plans to make this less error-prone?
You can see what extra metadata the orderer adds to its ledger here: https://github.com/hyperledger/fabric/blob/5a6e86267d8436be1c862b9d542e0c21f60fbe7b/protos/orderer/kafka.proto#L61-L83
There are no plans at the moment to make the needed utilities or any other changes to the Kafka orderer that would be relevant to this scenario. Thankfully a *properly* setup and *administered* Kafka cluster is rather robust and we haven't had the need to worry too much about this.
https://chat.hyperledger.org/channel/fabric-orderer?msg=WMaACEL77aT2yaK8q
Has joined the channel.
@pankajcheema Please take time to ask good questions. "I don't understand this" is not a good question. Try something like: "The `AbsoluteMaxBytes` value in the configtx.yaml says that it is the maximum number of bytes to be included in a batch. Does this mean that the total size of each block will be this size or smaller?"
We are happy to help, but unless we can understand exactly what you are asking, it will not be a productive conversation for anyone.
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=maY5bbi5JekEdxoqq) @jyellick Instead of writing this much. You could simply describe. Please note: Everyone is not on same level. I already told you I could not understand these 2 variables. Today is 3rd day since I asked this question
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=fQyXq2wgsafwr6nme) I just have an idea that `AbsoluteMaxBytes` might be the max size of a block
but as for `PreferredMaxBytes`, I am completely unaware of what this variable does
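For what it's worth, here is a best-effort reading of the two settings, sketched as an annotated fragment of the sample `configtx.yaml` (the comments are my interpretation, not official documentation):

```yaml
BatchSize:
  # Hard cap: no block will ever exceed this many bytes, and a single
  # transaction larger than this is rejected outright.
  AbsoluteMaxBytes: 99 MB
  # Soft target: the orderer cuts a block once the queued transactions
  # reach this size; one oversized transaction may still produce a
  # bigger block, up to AbsoluteMaxBytes.
  PreferredMaxBytes: 512 KB
```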
Has joined the channel.
I am unable to find good documentation or examples around ACLs....
Problem:
I have 3 organizations in a channel: 2 are allowed to submit proposals, the 3rd only to query (and is restricted from adding or changing proposals on the ledger).
In addition, organization 3 is allowed to see only part of the metadata value, which I think can be achieved by private data collections.. but I would first really need information on getting the right ACLs set up...
I am using Fabric 1.2.0.... We have just started a PoC on Fabric. What do you suggest, should we use Fabric or Composer?
Is it possible to also direct me to the `channeltx.yml` directives, since I am planning to set up a network with ACLs applied to various consortiums and restrict clients from changing them? I am referring to this: https://hyperledger-fabric.readthedocs.io/en/release-1.2/access_control.html,
however, it is not clear how I achieve it.
Has joined the channel.
Has joined the channel.
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=SjiBpStFWa33NYRif) @sanchezl Hi, thanks for the replies. I assume that the block you referenced is kept within each ledger; out of interest, is that within the signed blocks sent to the peers?
Can I please explore when Fabric would detect these scenarios?
If we assume that Kafka has settled on offset 900 and one orderer in the cluster has offset 1000, when will this be detected?
At startup, is there any reconciliation between the last block an orderer recorded and the Kafka offset?
Would the orderer detect, when offset 901 is delivered, that this is a different message than the 901 it already had?
I would assume that if a peer had been delivered the block with offset 1000 and it connects to an orderer with offset 900, then it would detect the ledger fork as soon as the next block is sent. Is that correct?
If the orderers have not detected a problem, the block had not been delivered to any orderer, and there is a ledger fork between orderers, then I would assume that peers will detect a problem only if they have cause to swap orderers, which may not be very frequent. Is this correct?
Similarly, if the peers happen to connect to the same orderer that they did previously, when would that be detected?
> @jyellick Instead of writing this much. You could simply describe. Please note: Everyone is not on same level. I already told you I could not understand these 2 variables. Today is 3rd day since I asked this question
@pankajcheema You are very close to being muted in this channel. You have a history of asking unhelpful and vague questions, and refusing to provide more detail when others take time to follow up on them. See: https://stackoverflow.com/questions/50699633/hyperledger-fabric-need-to-restart-orderer-manually-if-network-connection-if-off (now removed for its unhelpfulness) and https://jira.hyperledger.org/browse/FAB-10909 for example.
I asked you to please, be more specific and ask a more constructive question than to simply state "I don't understand". Scolding the volunteers on this channel for simply not answering your vague question is equally nonconstructive. As I've stated before, everyone here wants Hyperledger Fabric to be successful, but answering questions on rocketchat is no one's primary focus.
Because the descriptions in `configtx.yaml` are apparently not clear enough, I have submitted a CR which clarifies: https://gerrit.hyperledger.org/r/c/24655/
`Because the descriptions in `configtx.yaml` are apparently not clear enough, I have submitted a CR which clarifies: https://gerrit.hyperledger.org/r/c/24655/` This is what I was looking for. Thanks for this.
Regarding the questions I asked. It may happen that I was trying to tell something and due to my english you understand something else.
https://stackoverflow.com/questions/50699633/hyperledger-fabric-need-to-restart-orderer-manually-if-network-connection-if-off : I did not remove this question. I don't know who did
Regarding this bug. https://jira.hyperledger.org/browse/FAB-10909: This happens at the time of query too. You asked me the logs of orderer. I think there is nothing to do with orderer when you are not writing anything but only query the chaincode? Am I right?
@pankajcheema
> I did not remove this question. I don't know who did
Stack Overflow has a moderation system designed to allow other users to remove unhelpful or inappropriate posts. Apparently enough Stack Overflow users down-voted your question as unhelpful that it was removed.
> This happens at the time of query too. You asked me the logs of orderer. I think there is nothing to do with orderer when you are not writing anything but only query the chaincode? Am I right?
The limited logs you provided pointed towards something in ordering. Whether or not the problem is in the orderer, the point is that you opened a bug, marked it highest severity, but did not provide anywhere near enough information for anyone to help you. When you were asked to provide more information, you simply said you did not agree with the diagnosis. When asked again, you claimed you would post the information, then never did. Even now, rather than provide the information, you are describing a different scenario instead.
> Regarding the questions I asked. It may happen that I was trying to tell something and due to my english you understand something else.
This is not a matter of language. As I requested before, please take time when writing a question or opening a bug. If you are experiencing a problem, take the time to reduce it to the smallest set of steps possible to reproduce the problem, and be sure to collect all configuration and logs at a debug level. If someone trying to help asks for more details, provide them instead of arguing that they are not needed.
Everyone here wants to help you, and we appreciate your passion for learning to use Fabric. All we ask is that you respect the time and effort of the volunteers here by taking your own time to thoughtfully ask good questions.
https://chat.hyperledger.org/channel/fabric-orderer?msg=BNWfuq4dW8fNZ8tHj
@nukulsharma It sounds to me like you want all three organizations to be able to invoke chaincode in a read-only way (query), but for only two of the organizations to be able to actually submit transactions to modify state.
Chaincode queries are a special type of 'Invoke', by default, to execute any 'Invoke' requires that your user satisfies the /Channel/Application/Writers policy (as defined by the `peer/Propose` ACL). And, by default, if a user satisfies the /Channel/Application/Writers policy, then they may submit transactions to the blockchain.
To accomplish what you desire, I would suggest that you first update the `peer/Propose` ACL to be /Channel/Application/Readers. Then, you should modify the /Channel/Application/Writers policy to allow only a member of one of your two 'writing' organizations. The combination of these two modifications should have the effect you desire.
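In `configtx.yaml` terms (v1.2+), those two changes might look roughly like this. This is only a sketch: the ACLs section follows the v1.2 sample config, and `Org1MSP`/`Org2MSP` are placeholder MSP IDs for the two 'writing' organizations.

```yaml
Application:
  ACLs:
    # Default is /Channel/Application/Writers; relaxing it to Readers lets
    # all three orgs invoke (and therefore query) chaincode.
    peer/Propose: /Channel/Application/Readers
  Policies:
    Writers:
      Type: Signature
      # Only members of the two 'writing' orgs may submit transactions.
      Rule: "OR('Org1MSP.member','Org2MSP.member')"
```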
@pankajcheema `PreferredMaxBytes` is pretty clearly described in the [documentation on channel configurations](http://hyperledger-fabric.readthedocs.io/en/release-1.1/config_update.html#editing-a-config)
@jrosmith Thanks friend! I missed that
@jyellick I will take care of the points you told me. and will provide the orderer logs as soon as possible https://jira.hyperledger.org/browse/FAB-10909 once it is replicated
I will try this at highest priority
Has joined the channel.
Has joined the channel.
Hi everyone
Is it possible to configure the hyperledger fabric orderer (solo/kafka) to deliver a single transaction to peers instead of a block (batch of transactions)?
@username343 Certainly
Simply configure the `MaxMessageCount` in `configtx.yaml` to be 1, prior to bootstrapping your network
https://github.com/hyperledger/fabric/blob/5a6e86267d8436be1c862b9d542e0c21f60fbe7b/sampleconfig/configtx.yaml#L240
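For reference, the relevant stanza would look roughly like this (a sketch of the sample `configtx.yaml` linked above; only `MaxMessageCount` changes):

```yaml
Orderer:
  BatchTimeout: 2s
  BatchSize:
    MaxMessageCount: 1        # cut a block after every single transaction
    AbsoluteMaxBytes: 99 MB
    PreferredMaxBytes: 512 KB
```

Note that `BatchTimeout` still applies, but with a count of 1 each transaction is emitted in its own block as soon as it arrives.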
but that one transaction will be included in a block, right?
Yes
but the architecture document states that it is possible to configure the orderer to deliver just a transaction without enclosing it in a block
Fabric is a blockchain, eliminating the blocks would be a pretty radical departure :slight_smile:
Some of the architectural document is a bit out of date. Can you point me to the section? At one point, we called the blocks returned from ordering 'batches', instead of blocks, because the transactions inside had not yet been validated. However, this semantic differentiation was more confusing than it was helpful, so we abandoned this and simply call them all blocks.
I've created a Highly link for that section -> https://www.highly.co/hl/98Vcz2z8GPMUY0
this section is on this page: https://hyperledger-fabric.readthedocs.io/en/release-1.2/arch-deep-dive.html
it's the second paragraph in ledger and block formation
Yes, that documentation is a bit unclear. The paragraph was attempting to convey that sometimes a batch has multiple transactions, but sometimes it has only one. The use of 'blob' is to emphasize that the transaction contents are opaque to the orderer and not validated or deeply inspected.
ok thanks for clearing that
@jyellick
Has joined the channel.
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=KtHHMtb3geumDertm) @jyellick thanks @jyellick - Will try and confirm outcome soon .
Has joined the channel.
Has joined the channel.
Has joined the channel.
Has joined the channel.
Hi everyone. I was searching for a design doc for Ordering with Kafka and it seems like the doc is gone from Google Docs. The respective link (number 14) in this wiki page is broken: https://wiki.hyperledger.org/projects/fabric/design-docs . Is there any other place to get this document from?
Basically I'm looking for a document, describing how Ordering service works with Kafka and potentially with other consensus algorithms.
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=2juMjp3XQuXQiwmBy) @nvlasov The new link is available [here](https://chat.hyperledger.org/channel/fabric-orderer?msg=4Rz2gmdNgKwmwJ5oC)
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=ZMYqf9DQtjThQvM49) @adarshsaraf123 Got it! Thanks a lot!
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=eBYHTfvEf3sBbMbYY) @boonthep FYI ^^^
@nvlasov: Sorry about that. (And @adarshsaraf123 thanks for pointing to the right link.) I've updated the design-docs entry in the Wiki accordingly.
I need to fix the references in what we've released as well. `master`, `1.2` are definitely in. I think I'll do `1.1` as well.
Has joined the channel.
Hi all, I have a doubt about the Kafka setup. I have 4 Kafka brokers running on 4 different hosts; what should the KAFKA_DEFAULT_REPLICATION_FACTOR and KAFKA_MIN_INSYNC_REPLICAS values be?
I have one more doubt: is this setup correct (4 Kafka brokers running on 4 different hosts)?
@Unni_1994: Set RF to 3 and ISR to 2. See here for more: https://github.com/hyperledger/fabric/blob/release-1.1/bddtests/dc-orderer-kafka.yml
I need to add a link to this file to the Kafka FAQ.
https://gerrit.hyperledger.org/r/c/24863/
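In docker-compose terms that translates into broker environment settings like these (a sketch mirroring the linked bddtests file, for a 4-broker cluster):

```yaml
environment:
  # Replication factor must be <= the number of brokers (4 here).
  - KAFKA_DEFAULT_REPLICATION_FACTOR=3
  # Must be < the replication factor, so one broker can be down
  # without halting writes.
  - KAFKA_MIN_INSYNC_REPLICAS=2
```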
Hi, does anyone have any idea or suggestion about what I am doing wrong? Please have a look: https://stackoverflow.com/questions/51578617/acl-implementation-of-hyperledger-fabric-1-2
@kostas Could you please provide your suggestion to the below scenario
I started with just one org and started the network. later in the future i wanted to add few more orgnaizations such that only i should be having the authority to add and remove the organizations in the network. In the hyperledger fabric documentation i found this "The modification policy (mod_policy) for our channel Application group is set to the default of “MAJORITY”, which means that we need a majority of existing org admins to sign it. Because we have only two orgs – Org1 and Org2 – and the majority of two is two, we need both of them to sign. Without both signatures, the ordering service will reject the transaction for failing to fulfill the policy."
so is there any way that I can change it to "ORG1MSP" instead of "MAJORITY".....?
Thanks @kostas
I have set the values as specified in the link. When I try to create the channel I am getting an error. Could you please check it?
kafka.png
in my case, a single Kafka broker is running on each machine,
https://gerrit.hyperledger.org/r/#/c/24655/1/sampleconfig/configtx.yaml,
about the link above, I have two questions:
1. What is "any reader" or "any writer" that is used to define a policy?
2. In the application part, exhaustive ACLs are given; would you tell me which ACL relates to the following: that editing the configuration of a particular org requires only the admin signature of that org, as mentioned in http://hyperledger-fabric.readthedocs.io/en/release-1.1/config_update.html#get-the-necessary-signatures?
thank you
Has joined the channel.
Hi guys, in the fabric 1.2 balance-transfer example I'm getting this error.. please help
https://pastebin.com/5c41rc8u
error: [Remote.js]: Error: Failed to connect before the deadline
error: [Orderer.js]: Orderer grpcs://localhost:7050 has an error Error: Failed to connect before the deadline
[2018-07-30 15:21:55.939] [ERROR] Create-Channel - Error: Failed to connect before the deadline
I think its related to https://jira.hyperledger.org/browse/FABJ-32
2018-07-30 09:52:02.670 UTC [grpc] Printf -> DEBU 0bd grpc: Server.Serve failed to complete security handshake from "172.20.0.1:37610": EOF
Has joined the channel.
@javrevasandeep you can do it with ACL
@javrevasandeep Use `ANY` in place of `MAJORITY`.
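For illustration, that suggestion corresponds to a stanza like the following (a sketch; note that `ANY Admins` lets *any single* org admin sign, so it does not restrict changes to Org1 alone, which is the case kostas addresses below):

```yaml
Application:
  Policies:
    Admins:
      Type: ImplicitMeta
      Rule: "ANY Admins"   # instead of the default "MAJORITY Admins"
```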
@kostas @jyellick Please correct me if I am wrong.
@javrevasandeep: When you say network, I read consortium.
So I think your request translates to: I wish to have Org1 be the only org that can add new members to the consortium.
Let's have a look at what's going on behind the scenes.
The modification policy for _any_ consortium group is equal to the "Admins" policy of your "Orderer" config group.
Source: https://github.com/hyperledger/fabric/blob/1c94e9e9108409ba3278d2bc40a184b1ef7127e9/common/tools/configtxgen/encoder/encoder.go#L362
And: https://github.com/hyperledger/fabric/blob/1c94e9e9108409ba3278d2bc40a184b1ef7127e9/common/tools/configtxgen/encoder/encoder.go#L30
This will also be the case then for your consortium. Let's call it "MyConsortium".
As I see it then, you have two options.
1. Assuming Org1 is listed as an organization under your "Orderer" config group (i.e. Org1 is an ordering org), then make sure to edit the "Admins" policy for the "Orderer" group to: `Type: Signature` and `Rule: OR('Org1MSP.admin')`
Consider this example:
`Joe` is listed as an ordering org: https://github.com/kchristidis/fabric-example/blob/6ade4d8d558e502b9601fc325b767d58de35f0a0/config/configtx.yaml#L119
thanks @kostas for so clear and explained answer.
*back
Getting back to the example. You'll notice that the policy for `Orderer/Admins` is still set to `(ImplicitMeta, MAJORITY Admins)`: https://github.com/kchristidis/fabric-example/blob/6ade4d8d558e502b9601fc325b767d58de35f0a0/config/configtx.yaml#L92..L94
You should edit that to something like this: https://github.com/kchristidis/fabric-example/blob/6ade4d8d558e502b9601fc325b767d58de35f0a0/config/configtx.yaml#L14..L16
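Putting option 1 together, the edited stanza would look something like this (a sketch; the MSP ID is a placeholder for your Org1):

```yaml
Orderer:
  Policies:
    Admins:
      Type: Signature
      # Only Org1's admin can modify the consortium definitions.
      Rule: "OR('Org1MSP.admin')"
```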
That's option 1.
2. I think the proper way to address this would be via a configuration update transaction that targets the system channel and modifies the modification policy of MyConsortium from majority of orderer org admins to Org1 admin.
This is less constraining in the sense that Org 1 is not required to be an ordering org.
However Org1 would still need to get the majority of the orderer org admins to sign this configuration update transaction that modifies the modification policy of MyConsortium.
And this path is obviously longer than option 1.
@Unni_1994: Your system only "sees" 1 Kafka broker as being up.
I strongly suggest that you forget about running Fabric for a sec, and just try to run the first 6 steps of the Apache Kafka Quickstart guide with your existing hardware setup.
Build it iteratively and try to get the quickstart example to work with a replication factor of 3.
Once you reach that stage, getting Fabric to work on top should be trivial.
@knagware9: This is going to be a vague and kind of obvious answer, but something's up with the way you're setting up TLS in your client-server comms. No orderer bug here.
@kostas if the `Orderer` `Readers` and `Admins` policies fail, what actions will be affected,
while `Writers` passes instead?
https://hastebin.com/atetuhodeh.coffeescript
@kostas this is my configtx.yaml. I am intentionally failing the `Readers` and `Admins` policies, but I was able to do everything.
Define "everything"?
@kostas like `peer channel create`, `peer channel join`, `peer chaincode instantiate` and `peer chaincode query`
I didn't face a problem anywhere.
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=j3wSQrQTdZA8tFZPX) @kostas Thank you for the reply!! Actually I am using fabric-samples/balance-transfer with no modification to the code or certificate files.. don't know why this is happening.. other people are also facing this issue
Has joined the channel.
Has joined the channel.
Can anyone help me with the mentioned error: [orderer/multichain] newLedgerResources -> CRIT 066 Error creating configtx manager and handlers: Error deserializing key Capabilities for group /Channel: Unexpected key Capabilities
panic: Error creating configtx manager and handlers: Error deserializing key Capabilities for group /Channel: Unexpected key Capabilities
Has joined the channel.
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=2redSXchzX4oJNsYQ) @pankajcheema None of these actions are affected by a broken Orderer/Readers or Orderer/Admins policy.
@VinayChaudharyOfficial It sounds like you have used a newer version of `configtx.yaml` to bootstrap an orderer network at v1.0.x levels of code
Please make sure that the version of all of your binaries and config files match.
Hi, please have a look at my question >> https://stackoverflow.com/questions/51393748/hyperledger-fabric-orderer-variable-configuration
Hi Experts I have set Org1 Writers policy as ``` Writers:
Type: Signature
Rule: "AND('Org1MSP.member','Org2MSP.member')"```
and In Channel/Application/Writers as ``` Writers:
Type: ImplicitMeta
Rule: "ANY Writers"```
Then getting error `Error: got unexpected status: FORBIDDEN -- Failed to reach implicit threshold of 1 sub-policies, required 1 remaining: permission denied`
Then I know I need to provide two signatures, but how do I give them using the peer's environment variables?
this is my complete configtx.yaml https://hastebin.com/gicepikibu.coffeescript
@kostas
@jyellick thanks a lot... working on the same version of binaries and images (v1.1.0); now the error is: ca_peerNsp1 | Error: Failed to find private key for certificate in '/etc/hyperledger/fabric-ca-server-config/ca.nsp1.orh.in-cert.pem': Could not find matching private key for SKI: Failed getting key for SKI [[19 183 185 215 150 209 173 189 141 211 33 75 11 34 106 63 95 245 80 175 29 192 167 149 238 113 87 25 84 178 116 179]]: Key with SKI 13b7b9d796d1adbd8dd3214b0b226a3f5ff550af1dc0a795ee71571954b274b3 not found in /etc/hyperledger/fabric-ca-server - FABRIC_CA_SERVER_CA_NAME=ca-nsp1/msp/keystore
couchdb2 | [notice] 2018-08-01T14:27:26.753716Z nonode@nohost <0.335.0> -------- chttpd_auth_cache changes listener died database_does_not_exist at mem3_shards:load_shards_from_db/6(line:403) <= mem3_shards:load_shards_from_disk/1(line:378) <= mem3_shards:load_shards_from_disk/2(line:407) <= mem3_shards:for_docid/3(line:91) <= fabric_doc_open:go/3(line:38) <= chttpd_auth_cache:ensure_auth_ddoc_exists/2(line:187) <= chttpd_auth_cache:listen_for_changes/1(line:134)
couchdb2 | [error] 2018-08-01T14:27:26.753921Z nonode@nohost emulator -------- Error in process <0.473.0> with exit value:
couchdb2 | {database_does_not_exist,[{mem3_shards,load_shards_from_db,"_users",[{file,"src/mem3_shards.erl"},{line,403}]},{mem3_shards,load_shards_from_disk,1,[{file,"src/mem3_shards.erl"},{line,378}]},{mem3_shards,load_shards_from_disk,2,[{file,"src/mem3_shards.erl"},{line,407}]},{mem3_shards,for_docid,3,[{file,"src/mem3_shards.erl"},{line,91}]},{fabric_doc_open,go,3,[{file,"src/fabric_doc_open.erl"},{line,38}]},{chttpd_auth_cache,ensure_auth_ddoc_exists,2,[{file,"src/chttpd_auth_cache.erl"},{line,187}]},{chttpd_auth_cache,listen_for_changes,1,[{file,"src/chttpd_auth_cache.erl"},{line,134}]}]}
@jyellick How would one go about renaming/reconfiguring an orderer's name/crypto material after the network has started?
Hi! I've set up a small Fabric network which uses 4 kafka servers and 3 zookeeper nodes as the ordering service. But how do I know that it's actually being used? I can see docker logs in both the kafka servers and the zookeeper nodes, but there seem to be no log entries after setup?
Thanks!
https://chat.hyperledger.org/channel/fabric-orderer?msg=qWq4hYzmdbKN6Xfv2
@pankajcheema ^ The above policy will never work. The `Writers` policy must be satisfiable by a single signature. Use `OR` instead of `AND`
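For reference, a sketch of the same policy rewritten with `OR`, so that a single signature from either org satisfies it:

```
Writers:
    Type: Signature
    Rule: "OR('Org1MSP.member','Org2MSP.member')"
```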
@VinayChaudharyOfficial These errors are a bit across the board and not ordering related. I'd suggest you take the certificate issue to #fabric-ca and the couchdb issue to #fabric-ledger
@SaraEmily: Check the logs of any of your ordering service nodes. At the very top, you'll see a line similar to this:
> 2018-07-28 13:39:59.444 UTC [orderer/commmon/multichannel] NewRegistrar -> INFO 005 Starting system channel 'systemchain' with genesis block hash c03fd210a9813e3a4d26c91b7394b1f616bb32ad5cff8cedd7fed9e2ef17c9fb and orderer type solo
^^ This is an ordering service running on solo.
If you see a reference to "orderer type kafka", you're good.
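As a quick programmatic sanity check, the consensus type can be pulled out of that startup line; a minimal sketch using the sample line quoted above:

```python
import re

# Sample orderer startup line (as quoted above).
line = ("2018-07-28 13:39:59.444 UTC [orderer/commmon/multichannel] NewRegistrar -> "
        "INFO 005 Starting system channel 'systemchain' with genesis block hash "
        "c03fd210a9813e3a4d26c91b7394b1f616bb32ad5cff8cedd7fed9e2ef17c9fb "
        "and orderer type solo")

# Extract the word after "orderer type" (e.g. "solo" or "kafka").
match = re.search(r"orderer type (\w+)", line)
consensus = match.group(1) if match else "unknown"
print(consensus)  # prints: solo
```

In practice you'd feed it the output of `docker logs <your orderer container>` instead of the hard-coded sample.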
Thanks @kostas, it was kafka :) I was just surprised because I thought the latency of my system would go down more than it did when I switched from SOLO to kafka
@SaraEmily you are running everything on the same machine... :rolling_eyes: I don't think you can measure latency very well. You can try to use `tc` (traffic control) to induce artificial latency
@yacovm I know, but still, i thought I would see something. But I guess not
And I'd argue that for simple workloads, it wouldn't surprise me if solo behaved better, latency-wise.
There are fewer roundtrips involved.
maybe the logs slow you down? if you have everything in debug
then everything is slow
good point, thanks
Is a full history of the ledger kept on Kafka, or at some point do older entries "fall off" the end? Operationally, do I need to be ready to iteratively expand Kafka's backing storage?
Until we get to a point where we can support pruning, you're looking at the full history being kept on Kafka, and your storage needs growing as more entries are added.
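To size that storage, a rough back-of-the-envelope estimate helps; the figures below are hypothetical, so adjust them to your own workload:

```python
# Hypothetical workload figures for capacity planning (not measured values).
blocks_per_day = 8640        # one block cut every 10 seconds
avg_block_mb = 0.1           # average block size in MB
days_retained = 365          # full history kept, since pruning is unsupported

total_gb = blocks_per_day * avg_block_mb * days_retained / 1024
print(f"~{total_gb:.0f} GB of Kafka log per channel per year")
```

Since retention is per topic and each channel maps to its own topic, multiply by your channel count.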
BTW, what if the full kafka topic log is not retained (some old messages are deleted)? Will only new orderer nodes be unable to start, while existing orderers continue to work?
Hi guys, I think we need to improve the ACL documentation. @kostas @jyellick
https://hyperledger-fabric.readthedocs.io/en/release-1.2/access_control.html?highlight=ACl
Hi all! I have a question regarding the ordering service using kafka. For now, the most straightforward way to create a network is having one org act as the ordering service, a third party providing the trust in the network.
I was wondering if, instead of having just one org do all the ordering, the process could be split across multiple orgs belonging to different companies. I think the orderers themselves are not a problem, but the kafka/zookeeper service could be, as a leader must be defined to orchestrate the kafka service. Am I right? (I don't have much experience with kafka.)
Also, I suppose that when using a BFT variant the ordering service is going to be distributed among orgs directly, as there's no need for a leader.
In configtx.yaml only one consortium can be defined; it contains all orgs in the network and is used by the orderer. Am I right? thank you
@qsmen I think not; you can define all the organizations and then, using subgroups of them, generate several consortiums that will form the different channels
I don't know. I mean, in the orderer profile the consortium is defined and all orgs are listed there. In each channel profile, only some of the orgs are listed. If you are right, then many consortiums would appear in the orderer profile part.
yeah, I dunno, it's still one of those obscure parts for me
I think the organizations section inside the orderer profile defines who the orderer should know, and the consortium defines the channel participants. The confusing part is the application section, which defines the organizations of the consortium that are able to interact with the chaincode (?)
no, all orgs in a channel should be listed in the channel profile. Whether an org can interact with chaincode is defined by the endorsement policy
In the "adding a new org to a channel" doc, we only do the application channel config update, not the orderer channel config update. So is the new org new to the application channel or new to the network? If the org is new to the network, why not update the orderer channel config?
@pankajcheema: Can you post your observations and suggestions for improvement in #fabric-documentation?
> Am I right? (I don't have much experience with kafka.)
@dsanchezseco: You are 100% right.
> If you are right, then many consortiums would appear in the orderer profile part.
@qsmen: Actually, multiple consortiums _are_ supported, it's just that the sample configuration file only lists one. /cc @dsanchezseco
> in link "adding a new org to a channel", we only do the applicatin channel config update, but don't do order channel config update. So the new org is new to the application channel or is new to the network? If the org is new to the network, why not update the order channel config?
@qsmen: Once created, a channel can grow autonomously, i.e. it can add and remove orgs without being constrained by what's in the consortium. The consortium acts as a constraint when the channel is created by dictating how many signatures are needed by the channel participants, and constraining the channel participants (again, at creation time) to consortium members.
hi, I cannot create the 5th channel `auditch`; it failed with the error message `Failed to reach implicit threshold of 1 sub-policies, required 1 remaining: permission denied`. The log from orderer0 is below:
```2018-08-03 09:01:08.991 CST [cauthdsl] func2 -> DEBU b1a 0xc42000e190 identity 0 does not satisfy principal: the identity is a member of a different MSP (expected OrdererMSP, got PxMSP)
2018-08-03 09:01:08.991 CST [cauthdsl] func2 -> DEBU b1b 0xc42000e190 principal evaluation fails
2018-08-03 09:01:08.991 CST [cauthdsl] func1 -> DEBU b1c 0xc42000e190 gate 1533258068991226326 evaluation fails
2018-08-03 09:01:08.991 CST [policies] Evaluate -> DEBU b1d Signature set did not satisfy policy /Channel/Orderer/OrdererMSP/Writers
2018-08-03 09:01:08.991 CST [policies] Evaluate -> DEBU b1e == Done Evaluating *cauthdsl.policy Policy /Channel/Orderer/OrdererMSP/Writers
2018-08-03 09:01:08.991 CST [policies] func1 -> DEBU b1f Evaluation Failed: Only 0 policies were satisfied, but needed 1 of [ OrdererMSP.Writers ]
2018-08-03 09:01:08.991 CST [policies] Evaluate -> DEBU b20 Signature set did not satisfy policy /Channel/Orderer/Writers
2018-08-03 09:01:08.991 CST [policies] Evaluate -> DEBU b21 == Done Evaluating *policies.implicitMetaPolicy Policy /Channel/Orderer/Writers
2018-08-03 09:01:08.991 CST [policies] func1 -> DEBU b22 Evaluation Failed: Only 0 policies were satisfied, but needed 1 of [ Consortiums.Writers Orderer.Writers ]
2018-08-03 09:01:08.991 CST [policies] Evaluate -> DEBU b23 Signature set did not satisfy policy /Channel/Writers
2018-08-03 09:01:08.991 CST [policies] Evaluate -> DEBU b24 == Done Evaluating *policies.implicitMetaPolicy Policy /Channel/Writers
2018-08-03 09:01:08.991 CST [orderer/common/broadcast] Handle -> WARN b25 [channel: auditch] Rejecting broadcast of config message from 172.18.0.1:37042 because of error: Failed to reach implicit threshold of 1 sub-policies, required 1 remaining: permission denied
2018-08-03 09:01:08.991 CST [orderer/common/server] func1 -> DEBU b26 Closing Broadcast stream```
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=43MCf67SvfCQfrK7y) @kostas thank you very much. Then, suppose consortium1 and consortium2 are defined in the orderer profile; the first consists of org1, org2, org3 and the second of org4, org5. In the channel1 profile, consortium1 is referenced and org1, org2 are listed, so creating channel1 needs two signatures: the org1 admin's and the org2 admin's. Now I want to create a channel with org1, org4 and org5 as members. How do I do that? Define a new consortium consisting of org1, org4 and org5? Or reference consortium1 and consortium2 in the new channel's profile and list org1, org4, org5?
can I set up 4 orderers in my fabric network?
orderer0~orderer3.example.com
Hi all
During invoke, while processing a proposal with the endorser client in the node SDK on a custom first-network with TLS enabled, I am getting this error:
https://hastebin.com/befaviletu.js
I have a network with six peers, one solo orderer, and TLS enabled
I followed https://fabric-sdk-node.github.io/tutorial-mutual-tls.html for node sdk
> Now I want to create a channel with org1,org4 and org5 as members, how to do? define a new consortium consisting of org1,org4 and org5? or referece consortium 1 and consortium2 in the new channel's profile and lists org1,org4,org5?
@qsmen: If you want to *create* a channel with orgs 1, 4, 5 then these orgs need to belong to the same consortium. You can't mix and match consortiums for channel creation.
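So for orgs 1, 4 and 5 you'd define an additional consortium; a hypothetical configtx.yaml sketch (the profile and consortium names are made up):

```
Profiles:
    MultiConsortiumOrdererGenesis:
        Orderer:
            <<: *OrdererDefaults
            Organizations:
                - *OrdererOrg
        Consortiums:
            SampleConsortium:
                Organizations:
                    - *Org1
                    - *Org2
                    - *Org3
            AuditConsortium:
                Organizations:
                    - *Org1
                    - *Org4
                    - *Org5
```

On an already-running network, adding a consortium like this means a config update to the orderer system channel rather than re-bootstrapping.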
> can i setup 4 orderers in my fabric network?
@bh4rtp: You can.
Could you please edit the message above regarding the 5th channel? Include a link to Hastebin instead of pasting the logs directly here. (See the channel's rules.)
@Gaurav6794: This Q is not related to this channel.
I don't remember what I changed in configtx.yaml; the peer0 log frequently prints this error, and the SDK client says the channel event hub shut down when instantiating chaincode.
`2018-08-04 22:27:03.559 CST [ConnProducer] NewConnection -> ERRO 1e42 Failed connecting to orderer1.energyx.com:8050 , error: context deadline exceeded`
another kind of error message; I don't know what it means.
```2018-08-04 23:05:33.206 CST [blocksProvider] DeliverBlocks -> ERRO 30fd [tradech] Got error &{FORBIDDEN}
2018-08-04 23:05:33.206 CST [blocksProvider] DeliverBlocks -> CRIT 30fe [tradech] Wrong statuses threshold passed, stopping block provider```
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=Tmkac6xxCDBq9cFrS) @kostas thank you,Kostas.
@kostas Zookeeper is used by the kafka brokers for state and transaction management. If we lose the file systems of all ZK nodes (leader + followers), is there a way to recover them using Kafka? My understanding is no. Please validate my answer.
@Kostas our company deals in title insurance for real estate. We want to build a consortium of organizations where every organization can share its insurance policies so that other organizations can make better decisions when providing insurance for the same assets or property.
With that said, we now want strong reasons why we should use Hyperledger Fabric for this use case instead of Ethereum and Corda. Could you please provide your suggestions?
@kostas so for now, with the kafka ordering service, only one org can manage it, right? Until BFT is out, only a "not so decentralized" mode is available, with kafka run by a trusted third party for the channel?
We will have a Raft orderer
before BFT
@dsanchezseco
Do I need to expose kafka's 9092 port to the host when using kafka ordering?
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=8QPuHLFtGDwKhswBj) Hi @ddurnev, were you able to get an answer to your question? I am interested too. According to the HLF Kafka document, we should set log.retention.ms = -1 and disable log.retention.bytes.
With those settings the Kafka log will keep growing. During our internal testing,
the log size grew really big. Is there anything we can do about it besides getting huge hard disks for those logs?
Hi. Yes, I checked this myself (fabric 1.0), and for me it works the following way: a new, empty orderer does not even create its local ledger if the orderer channel topic is empty. I guess it will not cut/save local blocks if it cannot read the old messages from the topic partition. Existing orderers do not care about old messages in the kafka topic/partition if they have already cut the blocks and saved them in their local FS ledger.
thanks @ddurnev, so can I assume there will be no impact from enabling log.retention.bytes (set to, say, 1 GB with a block size of about 200 MB) as long as we do not add a new orderer to the system? And once log.retention.bytes is enabled, no new orderer can be added to the system?
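For reference, the retention settings discussed above map to broker environment variables roughly like this on the `hyperledger/fabric-kafka` image (a sketch; verify the exact variable names against your image's documentation):

```
environment:
    - KAFKA_LOG_RETENTION_MS=-1   # Fabric's recommendation: retain messages forever
    # Size-based retention (e.g. KAFKA_LOG_RETENTION_BYTES=1073741824 for 1 GB)
    # would cap disk usage, but a newly added orderer could then no longer
    # replay the channel history from Kafka.
```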
Hello, I haven't compared the kafka consensus code between v1.0 and v1.1+; I wanted to do a quick check: did the Kafka consensus protocol for ordering blocks change from v1.1 onwards?
@rahulhegde: I'm lost as to which versions you wish to compare?
@kostas @jyellick in the kafka consensus model, does each channel map to one topic?
A single-partition topic, correct.
how do you keep the data and prevent it from being lost?
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=a6yukLvLTtTuM5NBa) @kostas Ok - I meant Fabric release v1.0.x vs Fabric v1.1.x/1.2.x.
@rahulhegde: The biggest change between 1.0.x and 1.1 in Kafka was: https://jira.hyperledger.org/browse/FAB-5284
hi experts.. Can you please help me with the error I am getting in the orderer log while creating the channel: "Rejecting broadcast of message from <
Can someone please suggest?
@chandrika SERVICE_UNAVAILABLE usually means you have either:
1. Not waited for the Kafka cluster to start up, wait a few minutes and try again.
2. Misconfigured the Kafka cluster.
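For case 1, instead of sleeping a fixed amount, you can poll the broker's port before starting the orderer; a minimal sketch (the host/port in the example are assumptions for your deployment, and a successful TCP connect only shows reachability, not full broker health):

```python
import socket
import time

def wait_for_broker(host: str, port: int, timeout: float = 120.0) -> bool:
    """Poll a TCP port until something (e.g. a Kafka broker) accepts connections."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        try:
            with socket.create_connection((host, port), timeout=2):
                return True     # port is accepting connections
        except OSError:
            time.sleep(1)       # not up yet; retry until the deadline
    return False

# Example: wait for a hypothetical broker at kafka0:9092 before launching the orderer.
# ready = wait_for_broker("kafka0", 9092)
```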
thanks @jyellick for your response.
To answer your question...
1. My Kafka cluster is up and running and my orderer can connect to it
2. This is my kafka configuration...
```
kafka:
  image: hyperledger/fabric-kafka
  restart: always
  environment:
    - KAFKA_MESSAGE_MAX_BYTES=103809024 # 99 * 1024 * 1024 B
    - KAFKA_REPLICA_FETCH_MAX_BYTES=103809024 # 99 * 1024 * 1024 B
    - KAFKA_UNCLEAN_LEADER_ELECTION_ENABLE=false
  network_mode: "host"

kafka0:
  container_name: kafka0
  ports:
    - 9092:9092
    - 9093:9093
  extends:
    file: base/docker-compose-base.yaml
    service: kafka
  # network_mode: "host"
  environment:
    - KAFKA_BROKER_ID=0
    - KAFKA_MIN_INSYNC_REPLICAS=2
    - KAFKA_DEFAULT_REPLICATION_FACTOR=3
    - KAFKA_ZOOKEEPER_CONNECT=zookeeper0:2181
    - KAFKA_PORT=9092
    - KAFKA_MESSAGE_MAX_BYTES=103809024
    - KAFKA_REPLICA_FETCH_MAX_BYTES=103809024
    - KAFKA_UNCLEAN_LEADER_ELECTION_ENABLE=false
    - KAFKA_MIN_INSYNC_REPLICAS=1
    - KAFKA_DEFAULT_REPLICATION_FACTOR=1
    - KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://kafka0:9092
  depends_on:
    - zookeeper0
```
Can you please suggest if anything needs to be changed in my Kafka configuration?
> My Kafka cluster is up and running and my orderer can connect to my kafka cluster
How can you tell that your orderer can connect to the Kafka cluster?
tls
@jyellick can you take a look #fabric-peer-endorser-committer
Hi,
Facing an issue: "[2018-08-12 15:01:53.778] [DEBUG] *Create-Channel - response ::{"status":"BAD_REQUEST","info":"config does not validly parse: cannot enable application capabilities without orderer support first"}*
[2018-08-12 15:01:53.778] [ERROR] Create-Channel -
!!!!!!!!! Failed to create the channel 'mychannel' !!!!!!!!!
"
*Background*:
1. Used the balance transfer example
2. Added chaincode - marbles private data collections
3. Earlier faced an issue while instantiating the marbles private chaincode on the balance transfer network, where it seems channel capabilities were not set to 1.2.
4. Added application, channel, and orderer capabilities to the channel tx file, similar to the first-network script.
5. Re-generated mychannel.tx only (not the crypto materials)
6. Now, creation of the channel itself is failing with the above error.
Unable to find where I should enable orderer capabilities.
Requesting assistance here, since I couldn't find anything on Google, Stack Overflow, or other Fabric forums.
```
nukulsharma$ docker run --rm hyperledger/fabric-tools:latest peer version | sed -ne 's/ Version: //p' | head -1
1.2.0
```
Update:
Resolved it: it seems the genesis block was not generated with the new capabilities. Regenerating it resolved the issue.
Hi experts,
Can someone please help me with configuring multiple CouchDBs in a host network?
I am facing a problem starting multiple CouchDBs on separate ports with network_mode=host:
the error says "port already in use", even though I am giving each instance a separate port in the CouchDB configuration.
@chandrika This channel is about ordering, you might want to try #fabric-ledger
Has joined the channel.
Has joined the channel.
Hi!
So if I've understood correctly, the ordering service fills up a block with all incoming transactions (until the block is "full") within a small time window before sending it to the peers for the validation phase?
How big is this time window, and can I adjust it? Is the time window ignored if the block fills up before the time is out?
Thanks!
@SaraEmily: The block is "cut" when either `BatchSize.MaxMessageCount` transactions have come in or `BatchTimeout` has elapsed since the first transaction, whichever comes first.
Both of these settings can be adjusted per channel.
Look at `sampleconfig/configtx.yaml`.
@kostas Aha, great, thanks a lot!
hi
any idea what this message means, exactly?
Rejecting broadcast of config message from 172.18.90.174:49173 because of error: error authorizing update: error validating ReadSet: readset expected key [Group] /Channel/Application at version 0, but got version 1
It's saying you are trying to update version 0 of that key, but the key is currently at version 1.
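The check behind that error is essentially a version pin; a minimal sketch (illustrative only, not the Fabric source):

```go
// Minimal sketch of config-update read-set validation: the update records
// the version of each config element it was computed against, and if the
// channel has since advanced that element, the update is stale and must be
// recomputed against the current config.
package main

import "fmt"

func validateReadSet(key string, expected, current uint64) error {
	if expected != current {
		return fmt.Errorf("readset expected key %s at version %d, but got version %d",
			key, expected, current)
	}
	return nil
}

func main() {
	// The update was computed against version 0, but the channel is at version 1.
	fmt.Println(validateReadSet("[Group] /Channel/Application", 0, 1))
	// → readset expected key [Group] /Channel/Application at version 0, but got version 1
}
```

The fix is the same in both sketch and practice: fetch the latest channel config and rebuild the update against it.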
Has joined the channel.
hi, I initialized a 2-orderer network with 4 Kafka and 3 ZooKeeper nodes, all running on different hosts without a Docker network between them. During channel creation I noticed the orderer trying to connect to the Kafka broker by its container ID; this means the orderer first connected to Kafka through the domain name (which is how it obtained the container ID), and then tried to reconnect through the container ID. As this container ID is not resolvable (no Docker network), channel creation fails. Why is the orderer trying to connect to Kafka through its container ID? Can I configure the orderer's behaviour to use the domain name instead?
Hi
Facing an issue querying private data from another org (though it is added to the policy):
1. Added a transaction to private data and public data
2. Able to query both public and private data on peer0 of Org1, which initiated the transaction
3. Able to query public data (on the ledger, not private data collections) from Org2. Note: I am using a mixed model, i.e. using PDC only for private data while persisting public data in the normal ledger. The policy allows both Org1 and Org2 members to access the private data.
4. Unable to query private data from peer0 of Org2
Following error is noticed whilst querying private data from peer0 of Org2.
`[2018-08-19 12:41:15.784] [INFO] Query - 1461 now has Error: {"Error":"Failed to get private details for 1461: GET_STATE failed: transaction ID: b9c1a98fc99cb0ff5b9124ff6efb8e95aa9bf8a261d98a27a24f378665dea772: Private data matching public hash version is not available. Public hash version = &version.Height{BlockNum:0x2, TxNum:0x0}, Private data version = (*version.Height)(nil)"}`
Hi all, is there any notion of a timestamp produced by the ordering service on each committed transaction or block? I know it's a difficult problem (the same one that NTP was designed to solve), but it seems like it would be a very useful and common feature and perhaps exists already?
@vdods There is deliberately no block timestamp (as you point out, this is a difficult problem, and one that is especially difficult in BFT systems). The content of each transaction does include a client-set timestamp, however.
@sandman Please make sure you can communicate with the Kafka cluster using the provided Kafka sample clients. This is not a Fabric problem, but a Kafka configuration one. The Kafka protocol has the client connect in and receive a list of broker addresses, as reported by the individual brokers. This allows the Kafka client to discover all brokers, even with only a subset of them defined. Most likely, you need to set KAFKA_ADVERTISED_HOST_NAME and KAFKA_ADVERTISED_PORT for each of your brokers.
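As a rough docker-compose sketch (hostname and port here are illustrative; each broker must advertise an address that is resolvable by its clients):

```yaml
kafka0:
  environment:
    - KAFKA_BROKER_ID=0
    - KAFKA_ADVERTISED_HOST_NAME=kafka0   # name clients should use to dial this broker
    - KAFKA_ADVERTISED_PORT=9092
```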
Hi all, I'm currently trying to deploy a Fabric network with a Kafka-based ordering service. Everything seems fine and I am able to commit transactions. However, I get a quite annoying warning: "This orderer is running in compatibility mode". I did some research to see if anyone else has encountered this; I found several messages that matched my issue and corrected what was supposed to be the problem (the capabilities were not enabled). I also checked the version of my images (v1.1.0, and 0.4.6 for third-party) and binaries (v1.1.0). I still have the warning and I'm running out of ideas. Does anyone who has seen this warning know how to deal with it? Thanks in advance.
Has joined the channel.
Has left the channel.
@AnthonyRoux This is indeed related to capabilities. Are you certain they are enabled on all channels?
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=vf4q3EP8aaZh93Hd3) @jyellick There is only one channel, but I might miss something. Do you want to take a look at my configtx file ?
Your `configtx.yaml` file is only used during bootstrapping and channel creation
When you bootstrap the orderer, you are creating a channel called the orderer system channel
When you run `peer channel create` you are creating a new application channel
The easiest way to enable capabilities is prior to bootstrap, set them in your `configtx.yaml` and proceed.
If you need to enable them after bootstrap, then you will need to follow the upgrade guides provided in the documentation.
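For reference, a minimal sketch of how the sample `configtx.yaml` wires capabilities into a profile (anchor and profile names follow the sample config; the `V1_1` keys are illustrative and depend on your release):

```yaml
Capabilities:
  Channel: &ChannelCapabilities
    V1_1: true
  Orderer: &OrdererCapabilities
    V1_1: true
  Application: &ApplicationCapabilities
    V1_1: true

Profiles:
  SampleDevModeKafka:
    Capabilities:
      <<: *ChannelCapabilities
    Orderer:
      # ...orderer settings...
      Capabilities:
        <<: *OrdererCapabilities
    Application:
      # ...application settings...
      Capabilities:
        <<: *ApplicationCapabilities
```

Defining the anchors alone does nothing; each profile must reference them, which is the step that was missed in the exchange above.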
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=ECggjpW6vh9FumDSG) @jyellick It is set in my configtx.yaml. I created the capabilities section but I forgot to add them in the profile section. Must be that. Thank you for opening my eyes
Has joined the channel.
Currently have a kafka topic for a channel that has no data in it. Orderers keep trying to seek to the current offset, but the topic is empty so the attempt fails.
attempt message: `Received seekInfo (0xc420d689a0) start:
Peer logs messages related to the block delivery:
```
[deliveryClient] connect -> DEBU 17f7b Connected to orderer-1.orderer:7050
[deliveryClient] connect -> DEBU 17f7c Establishing gRPC stream with orderer-1.orderer:7050 ...
[deliveryClient] afterConnect -> DEBU 17f7d Entering
[deliveryClient] RequestBlocks -> DEBU 17f7e Starting deliver with block [98] for channel foo-channel
[deliveryClient] afterConnect -> DEBU 17f7f Exiting
[blocksProvider] DeliverBlocks -> WARN 17f80 [foo-channel] Got error &{NOT_FOUND}
```
I'm seeing a related issue in another setup where the requested offset is before what is currently retained in the kafka topic itself. Suspect these issues are related - at least in the approach for fixing them?
Kafka retention has been set to indefinite, so it shouldn't happen again in normal circumstances. Getting kafka back into a good state is the current goal
Has joined the channel.
https://chat.hyperledger.org/channel/fabric-orderer?msg=SzZPwdCAR9PMZNjLW
@MikeEmery: ^^ Would I be right to guess that for this network as well, Kafka retention was not set to indefinite?
correct, both networks were running with the default retention of 7 days
Understood. This is why your network is broken then. [As the instructions note](https://hyperledger-fabric.readthedocs.io/en/release-1.1/kafka.html#steps), time-based retention should be disabled.
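Concretely, disabling time-based retention means something like the following broker setting (the env-var form is what the docker image consumes; adjust to your deployment):

```
# Kafka broker server.properties
log.retention.ms=-1        # disable time-based retention so the topic keeps all transactions

# or, in docker-compose env form:
# KAFKA_LOG_RETENTION_MS=-1
```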
Now, going back to the other matter:
> Getting kafka back into a good state is the current goal
There is unfortunately no tooling built yet that would allow you to do that.
It is something that we'll solve indirectly when we get the Raft-based ordering service (and the Kafka migration tool) ready, but until then, you are unfortunately out of luck. (Sorry about that.)
Even though it's not what I was hoping to hear, thanks for the response
Is there a way to force the chain to replay from block zero?
All chain and index data is backed up on both peers and orderers
So, the issue here is that every orderer reads the `Metadata` field of the most recent block it has for a given channel.
Every block consists of one or more transactions. And every transaction corresponds to a message posted on the Kafka topic/partition for that channel. So every transaction has an offset.
In the `Metadata` field then, we encode the offset of the last transaction included in that block.
So the orderer is programmed to spin up a Kafka consumer and have it read from the most recently persisted offset going forward.
This is a long-winded way of describing what's going on behind the scenes. And it is not addressing your most recent question. Figured I'd post that either way though.
Now, on to your question:
> Is there a way to force the chain to replay from block zero?
The short answer is that there is no way to force the chain to replay from block zero. This goes back to what I wrote earlier - we do not have the tooling written for such a recovery yet.
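In sketch form, the resume logic described above looks roughly like this (illustrative only, not the Fabric source):

```go
// Illustrative sketch of how an OSN decides where to resume consuming:
// the metadata of the newest block it has persisted carries the Kafka
// offset of that block's last transaction, so a freshly started consumer
// seeks to lastPersistedOffset + 1. If Kafka has already deleted that
// offset (e.g. time-based retention expired it), the seek fails.
package main

import "fmt"

// blockMetadata stands in for the orderer-metadata field of a block.
type blockMetadata struct {
	LastOffsetPersisted int64
}

// nextConsumerOffset returns the offset the restarted consumer should seek to.
func nextConsumerOffset(newest blockMetadata) int64 {
	return newest.LastOffsetPersisted + 1
}

func main() {
	meta := blockMetadata{LastOffsetPersisted: 41} // last tx in newest block sat at offset 41
	fmt.Println(nextConsumerOffset(meta))          // 42
}
```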
Out of curiosity, do you think I could get the logs from one of the orderers that is giving you the NOT_FOUND error?
I want to check something that I'm fuzzy on. We wrote this code more than a year ago, and as I'm reviewing it now, I want to make sure I'm not missing anything.
thanks very much for the explanation. we have off-chain recovery options as well. it's obviously not as good as getting fabric back, but it's better than nothing
I'll DM what I can of the logs, along with any extra configuration you'd like to see
Right, I was about to say: realistically, your quickest --and totally not user-friendly and you-definitely-deserve-better-- solution would be to write an application (using one of our SDKs) that reads the existing transactions from `broken_channel` and pushes them to `new_channel`.
As I'm looking at this:
This is an issue that you have when a Deliver request comes your way for a specific channel, right?
That's correct
Are you perhaps targeting an ordering service node that you just brought up?
To put it a bit differently:
Is this an ordering service node that has served this request in the past?
That would tell us whether we should expect that OSN to have the blocks for that channel in its local ledger already.
It should be. As part of the recovery attempts, the entire chain and index were copied to every orderer from the peers
there have been many restarts though, the test cluster has had a tough day
That is very odd then, and pretty much everything I wrote above (RE: Kafka) is off-topic.
Because, the OSN _should_ serve this call from its local ledger.
No need for reaching out to Kafka.
So the Kafka cluster not having these blocks anymore is not an issue.
~@jyellick Any ideas?~ (see below, we resolved it with Mike in DMs)
standby, I have two different clusters. errors were from the cluster with partial data on the orderer. will restore there and verify
Has joined the channel.
Has joined the channel.
update: OSN with same state as peers was able to proceed without errors, despite the truncated kafka logs
Has joined the channel.
In v1.2, are there any required details that should be part of the orderer profile declaration in the configtx.yaml file that allow the signature to be satisfied when creating the channel.block from the peer? I'm looking at the configtx.yaml that comes out of the box for the orderer and I'm trying to use the profile SampleDevModeKafka with my own org details. Is there any reason why I would get this message in the orderer logs when running the peer channel create command on a peer:
```2018-08-21 19:25:30.957 UTC [cauthdsl] func2 -> DEBU 1a5 0xc42000e740 identity 0 does not satisfy principal: the identity is a member of a different MSP (expected SampleOrg, got SampleOrgMSP)
2018-08-21 19:25:30.957 UTC [cauthdsl] func2 -> DEBU 1a6 0xc42000e740 principal evaluation fails
2018-08-21 19:25:30.957 UTC [cauthdsl] func1 -> DEBU 1a7 0xc42000e740 gate 1534879530956937482 evaluation fails
2018-08-21 19:25:30.957 UTC [policies] Evaluate -> DEBU 1a8 Signature set did not satisfy policy /Channel/Application/SampleOrg/Admins
2018-08-21 19:25:30.957 UTC [policies] Evaluate -> DEBU 1a9 == Done Evaluating *cauthdsl.policy Policy /Channel/Application/SampleOrg/Admins
2018-08-21 19:25:30.957 UTC [policies] func1 -> DEBU 1aa Evaluation Failed: Only 0 policies were satisfied, but needed 1 of [ SampleOrg.Admins ]
2018-08-21 19:25:30.957 UTC [policies] Evaluate -> DEBU 1ab Signature set did not satisfy policy /Channel/Application/ChannelCreationPolicy
2018-08-21 19:25:30.957 UTC [policies] Evaluate -> DEBU 1ac == Done Evaluating *policies.implicitMetaPolicy Policy /Channel/Application/ChannelCreationPolicy
2018-08-21 19:25:30.957 UTC [orderer/common/broadcast] Handle -> WARN 1ad [channel: mychannel] Rejecting broadcast of config message from xxx.xxx.xxx.xxx:1234 because of error: error authorizing update: error validating DeltaSet: policy for [Group] /Channel/Application not satisfied: Failed to reach implicit threshold of 1 sub-policies, required 1 remaining
2018-08-21 19:25:30.957 UTC [orderer/common/server] func1 -> DEBU 1ae Closing Broadcast stream
2018-08-21 19:25:30.959 UTC [grpc] Printf -> DEBU 1af transport: http2Server.HandleStreams failed to read frame: read tcp xxx.xxx.xxx.xxx:7050->xxx.xxx.xxx.xxx:1234: read: connection reset by peer
2018-08-21 19:25:30.959 UTC [common/deliver] Handle -> WARN 1b0 Error reading from xxx.xxx.xxx.xxx:1234: rpc error: code = Canceled desc = context canceled
2018-08-21 19:25:30.959 UTC [orderer/common/server] func1 -> DEBU 1b1 Closing Deliver stream```
2018-08-21 19:25:30.957 UTC [orderer/common/server] func1 -> DEBU 1ae Closing Broadcast stream
2018-08-21 19:25:30.959 UTC [grpc] Printf -> DEBU 1af transport: http2Server.HandleStreams failed to read frame: read tcp xxx.xxx.xxx.xxx:7050->xxx.xxx.xxx.xxx:1234: read: connection reset by peer
2018-08-21 19:25:30.959 UTC [common/deliver] Handle -> WARN 1b0 Error reading from xxx.xxx.xxx.xxx:1234: rpc error: code = Canceled desc = context canceled
2018-08-21 19:25:30.959 UTC [orderer/common/server] func1 -> DEBU 1b1 Closing Deliver stream```
@xiven Yes, you must declare a `BlockValidation` policy which must be set if you are manually setting policies for the orderer org. https://github.com/hyperledger/fabric/blob/73404f558f21a6eac9cc8f4902993defaec59098/sampleconfig/configtx.yaml#L284-L288
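For reference, the linked section of the v1.2 sample `configtx.yaml` defines that policy roughly as follows (excerpt only; the other orderer policies and keys are omitted):
```yaml
# Excerpt in the spirit of sampleconfig/configtx.yaml (v1.2);
# only the BlockValidation policy is shown here.
Orderer: &OrdererDefaults
  Policies:
    BlockValidation:
      Type: ImplicitMeta
      Rule: "ANY Writers"
```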
@jyellick That exists already from the configtx.yaml that is provided out of the box. Does that policy need to exist anywhere else in the file? Does this need to be declared under policies in the profile section as well?
Profiles typically inherit from the top level definitions via the `<<: *` yaml reference directives.
If you reference the orderer top level group in the import, and do not override the policies within the profile, then it should not be necessary to do anything else.
So if I currently have a reference directive in my Profile that points to say `<<: * SampleOrgPolicies` should that `BlockValidation` policy declaration be added to that as well as the place you provided a link to in the file?
If you are referencing `SampleOrgPolicies` for your orderer policy definitions, then you should ensure that SampleOrgPolicies contains a `BlockValidation` policy. Remember, `configtx.yaml` is really a plain yaml file. Each profile could declare its entire contents without the references, the sample uses references to re-use common pieces of the configuration, but ultimately, before parsing the profile, the yaml document is transformed by effectively 'pasting' the referenced sections into the profiles first.
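The 'pasting' described above is plain YAML merge-key expansion. A small illustration (the names here are illustrative, not taken from any real configtx.yaml):
```yaml
# Before expansion: the profile references a shared anchor.
SampleOrgPolicies: &SampleOrgPolicies
  Readers:
    Type: Signature
    Rule: "OR('SampleOrg.member')"

Profiles:
  MyProfile:
    Policies:
      <<: *SampleOrgPolicies       # merge key: the anchored mapping is pasted here
      BlockValidation:             # keys declared inline are added alongside
        Type: ImplicitMeta
        Rule: "ANY Writers"

# After expansion, MyProfile.Policies is equivalent to declaring
# Readers and BlockValidation directly, with no reference at all.
```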
So I've added that policy to the Org Policies definitions. Should I add it under policies with the profile as well?
Under Admins block:
```
Policies:
  <<: *SampleOrgPolicies
  Admins:
    Type: Signature
    Rule: "OR('SampleOrg.member')"
```
@jyellick Here is my `configtx.yaml` for your review. https://hastebin.com/etiregolun.coffeescript Let me know if I missed something. It's still giving the same message in the logs.
@xiven Your configtx.yaml looks good to me
where else may a policy conflict be occurring from? could something in the `orderer.yaml` cause this problem?
No. Are you certain you are re-bootstrapping the orderer? The `configtx.yaml` is only used once in bootstrapping the orderer, once the ledger is in place it will use the existing config. So you must remove the orderer ledger prior to start.
Yes I do remove the orderer ledger prior to generating a new genesis block and channel tx file.
Am I missing a step?
No, that should be sufficient. Let me re-examine your logs quickly.
~Ah, are you sure you are submitting your channel creation request with an admin user?~
Actually, it looks like you have the MSP ID wrong
Be sure to set `CORE_PEER_LOCALMSPID=SampleOrg` not `SampleOrgMSP`
If I change it to be the same as the Org name it gives me a deduplicate error when getting the cert
Ah, I see the problem
Actually I have my env var name as `CORE_PEER_GENERAL_LOCALMSPID`
Please modify your sample org definition. You may safely remove lines 68-70. And then for lines 55, 61, 67, and 402, you should use `SampleOrgMSP`, not `SampleOrg` when referencing the org
Policy definitions refer to orgs by MSPID, not by Org name.
In general, I would recommend that you make the org name and msp ID match, it will make your life simpler.
So, you may alternatively replace all instances of `SampleOrgMSP` with `SampleOrg` in your `configtx.yaml`
Then, specify the MSP ID on your peer channel creation to be `SampleOrg`. This is actually probably the best solution. (Though you can and should still remove lines 68-70)
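A sketch of that setup using the sample peer CLI environment variables (the orderer endpoint, MSP path, and file names below are placeholders for your environment, not values from this conversation):
```
# Make the org name and MSP ID match everywhere (here: SampleOrg),
# then have the peer CLI sign as that MSP before creating the channel.
export CORE_PEER_LOCALMSPID=SampleOrg
export CORE_PEER_MSPCONFIGPATH=/path/to/sampleorg/admin/msp   # placeholder path
peer channel create -o orderer.example.com:7050 -c mychannel -f channel.tx
```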
can someone shed some light on how authentication and authorization works for the orderer? e.g. what determines who can connect to the orderer rpc and call deliver/broadcast? how does the orderer know who can create channels? what about call broadcast/deliver on specific channels? what has to be true for peers to accept blocks delivered and signed by the orderer?
i have some familiarity with the msp structure, though i'm not sure how the channel msp differs from the local msp (and the differences between client/peer/orderer msps)
@moodysalem
> what determines who can connect to the orderer rpc and call deliver/broadcast?
The orderer takes the union of all TLS CAs from all channels for the TLS layer
> how does the orderer know who can create channels?
The orderer system channel has one or more consortium definitions. The consortium definition defines the orgs in the consortium (including their MSPs) and a channel creation policy (by default, any admin of the orgs in the consortium). This policy is evaluated at channel creation time (selected by the consortium name specified in the channel creation)
> what about call broadcast/deliver on specific channels?
The channel configuration defines policies, the /Channel/Readers, and /Channel/Writers policies (which authenticate Deliver and Broadcast respectively). By default, these correspond to any member of any org in the channel.
> what has to be true for peers to accept blocks delivered and signed by the orderer?
The orderer group in `configtx.yaml` defines a `BlockValidation` policy. Peers check that this policy is satisfied before accepting a block. By default, any member of the ordering org may sign blocks.
"The orderer takes the union of all TLS CAs from all channels for the TLS layer" does this apply only when mutual TLS is enabled?
Correct, if mutual TLS is not enabled, then there is no client auth to be had. Orderers never communicate directly with each other (as of v1.2) but instead communicate through Kafka, which has its own TLS authentication mechanisms.
hmmm.. the orderer group in configtx.yaml.. is not configtx.yaml only used for the system channel?
Ah, yes, `configtx.yaml` is only used to generate bootstrapping elements (and some limited updates), I was being overly broad.
i'm wondering if peers necessarily need to be able to read the system channel
I meant that typically, the value is specified in `configtx.yaml`, but is encoded into the channel configuration.
ahhh i see so each channel's config transaction determines the blockvalidation policy
Correct
In the channel configuration, it would canonically be called the `/Channel/Orderer/BlockValidation` policy.
When a channel is created, it inherits this policy from the orderer system channel, and usually this policy is uniform across all channels (though it could be managed independently)
so, say i'm setting up a consortium where there is a single org that is responsible for ordering across all the channels and running the orderer and nothing else, does that mean that only that org should be allowed to create channels and perform config updates? and their msp should be in every channel? because otherwise other orgs could create channels that have different block validation policies
Actually, the config permission system is much more granular than that
Config is hierarchical, and at the top level split into orderer parameters and application parameters
The orderer parameters include things like the batch size, and the block validation parameters.
The application parameters include things like the application orgs, acls, and application level policies.
The actual channel creation process first creates a 'template configuration' based on the orderer system channel. It copies the orderer configuration as is, and creates a template for the application portion based on the consortium definition.
Then, it applies the 'channel creation transaction' as an update. This update must follow the modification rules dictated by the template configuration. Modifying the 'application' parameters requires satisfying the channel creation policy. Modifying the orderer parameters, like the block validation policy, requires satisfying the /Channel/Orderer/Admins policy (by default).
So, you may safely allow application admins without worrying about them modifying ordering parameters they should not have access to.
If you wanted, you could certainly not allow consortium members to create channels for themselves, and instead require an orderer admin to do so.
If that were the case, then you can modify the channel creation policy to require the approval of an orderer member, instead of an application admin.
Although you did not ask, I would also mention that the individual orgs have their own modification rules. So, for instance, when creating a channel, an org's MSP cannot be corrupted by the creator.
The idea is, that if an admin sees a genesis block signed by the orderer, it has assurance that all of the org definitions are correct according to the consortium definition (as attested by the orderer)
i think i'm starting to get it, thanks so much for the explanation.. does the path '/Channel/Orderer/Admin' refer to that item in the config tree somewhere?
or are these paths defined somewhere in the docs
Correct. The config tree is organized into three types: Groups, Policies, and Values. Each group may have sub-groups, values, and policies. You may think of 'groups' as nodes in a tree, where 'values' and 'policies' are leaves. So, for a policy path `/Channel/Orderer/BlockValidation`, we really mean the policy named `BlockValidation` attached to the `Orderer` sub-group of the root `Channel` group.
Some of this is documented https://hyperledger-fabric.readthedocs.io/en/latest/config_update.html
https://hyperledger-fabric.readthedocs.io/en/latest/policies.html
https://hyperledger-fabric.readthedocs.io/en/latest/configtx.html
All in all, I do think more/better documentation around channel config is needed ( @joe-alewine @pandrejko I'm happy to assist if you would have time to write something up )
Has joined the channel.
@jyellick would you please explain the concept and usage of the `ImplicitMeta` policy? The doc is abstract and hard to understand. Thank you in advance.
@bh4rtp An implicit meta policy evaluates sub-policies in the config tree. Please read the discussion above, but the config is hierarchical. So, an implicit meta policy evaluates the policies one level deeper in the tree, and then evaluates a rule against those policy evaluations: namely, whether all of those policies, any of those policies, or a majority of those policies were satisfied (depending on the rule specified in the implicit meta policy).
@jyellick Yeah, that sounds good. Pam's gonna create a jira and we can discuss in there what kind of scope you're thinking of
what is the point of the local msp? isn't the whole msp stored in the channel config tx?
@moodysalem The local MSP contains a signing identity. For the orderer, this is the only purpose. For the peer, it is used to authenticate local admin operations like installing a chaincode or joining a peer to a channel. Remember, there is no channel context when a peer is first started, it only exists after joining a peer to a channel (unlike an orderer, which starts with the orderer system channel)
is the `--cafile` required when running the `peer channel create` command?
@xiven In general yes, otherwise the peer cannot authenticate the orderer's TLS connection
and joining a peer to channel involves sending a command from the client to the peer (valid because it's signed by the admin cert on the peer), and then the peer creates/signs a config transaction to add itself and sends it directly to the orderer?
what if we aren't doing TLS for the time being?
is it still required
> and joining a peer to channel involves sending a command from the client to the peer (valid because it's signed by the admin cert on the peer)
This part is right
> then the peer creates/signs a config transaction to add itself and sends it directly to the orderer?
Not really, the peer will usually contact the orderer to receive additional blocks, but it does not need to be 'added'. The peer's CA should already be in the channel config, so no further registration step is needed.
@xiven If you are not using TLS, then you do not need to specify TLS CAs
@xiven If you are not using TLS, then you do not need to specify TLS CAs (for production, I would always recommend TLS)
ah i see, 'adding the peer to the channel' does not equate 'adding the org to the channel'
An org may be part of the channel config, even without peers. For instance, it might only have clients (though this is a bit unusual)
when starting the orderer i see the following show at the beginning of the logs
`2018-08-22 19:33:25.597 UTC [msp] getMspConfig -> DEBU 00f crls folder not found at [/etc/hyperledger/msp/crls]. Skipping. [stat /etc/hyperledger/msp/crls: no such file or directory]
2018-08-22 19:33:25.597 UTC [msp] getMspConfig -> DEBU 010 MSP configuration file not found at [/etc/hyperledger/msp/config.yaml]: [stat /etc/hyperledger/msp/config.yaml: no such file or directory]`
are these optional or required?
Hi, I run a binary orderer and a binary peer with TLS,
but I cannot join the peer to a channel or install the CC on the peer.
For example, when joining the channel, an error is thrown on the peer like https://hastebin.com/tamugezuva.css
and the command ends with the error https://hastebin.com/amorufasov.coffeescript
How do I fix this one?
Thanks
Hi, during the "peer channel create ..." command I receive an error stating "transport: http2Server.HandleStreams failed to read frame: read tcp 172.18.0.15:7050->172.18.0.16:48580: read: connection reset by peer" in my orderer logs. Can someone help me resolve this, please?
And during "peer channel join ..."
peer-log.png
orderer-log.png
In my orderer logs I am getting this message when running the peer channel create command: `error authorizing update: error validating ReadSet: readset expected key [Group] /Channel/Application at version 0, but got version 1` From what I read, that is basically saying that the channel already exists. Is there a way to see where that is stored and remove it? I don't remember it ever being created unless something happens behind the scenes when I generate the genesis block and the channel tx file.
The error above was a result of creating the genesis block using the -channelID flag. The warning message says that it is being deprecated, but I guess that is not taken into account yet in v1.2.
@xiven The deprecation should say that "omitting the flag is deprecated".
> Is there a way to see where that is stored and remove it?
As a blockchain, channels cannot simply be 'removed'. You should be able to pull the genesis block for the channel, and you may decode it to see the transaction which created it, who signed it, etc. You may also repurpose the channel through reconfiguration.
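A sketch of pulling and decoding the genesis block with standard tooling (the orderer endpoint and channel name are placeholders; `configtxlator proto_decode` is available as a CLI subcommand in recent releases):
```
# Fetch block 0 (the genesis block) of the channel, then decode it to JSON
# to inspect the creation transaction, its signatures, and the policies.
peer channel fetch 0 mychannel.block -o orderer.example.com:7050 -c mychannel
configtxlator proto_decode --type common.Block --input mychannel.block > mychannel.json
```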
@OviiyaDominic It sounds like you have a networking error between your client and the orderer, I'd suggest debugging using standard non-fabric tools like netcat etc.
@OviiyaDominic It sounds like you have a networking error between your client and the orderer, I'd suggest debugging using standard non-fabric tools like netcat, tcpdump, wireshark, etc.
@jyellick Yes, that is what it says. I'm not sure why it was not working for me with the channel id specified. I did pull the genesis block. The only thing I can't seem to grasp now is the syntax for the endorsement policy in a chaincode instantiation command. I tried `./peer chaincode instantiate -o orderer1.example.com:7050 -C mychannel -n mycc -v 1.0 -c '{"Args":["init","a", "100", "b","200"]}' -P "OR ('SampleOrgMSP.member')" --cafile /etc/hyperledger/fabric/msp/orderer1-certs/msp/cacerts/cert.pem` But this results in the following error: `Error: could not assemble transaction, err Proposal response was not successful, error code 500, msg instantiation policy violation: signature set did not satisfy policy` How should the out-of-the-box instantiation policy be declared in the `peer chaincode instantiate` command?
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=ohJP562NTpwrZQeHR) Should I be using an admin cert if, for dev purposes, I'm not using TLS? Now that ACLs are in place, do I need to set a policy like I've tried to do, or can I just leave that out since I'm only using one peer?
Has joined the channel.
@jyellick, could you please let me know the ways to invoke a system chaincode from an application
I was not able to find any docs related to this
do we need to invoke it in the context of the system channel?
Has joined the channel.
@jyellick I set up data persistence for each node of the fabric network, and then when I restart it, the orderer gives me the following error
Clipboard - August 31, 2018 10:19 PM
can't build consenter, and one kafka is restarting
@kostas
Has joined the channel.
So if I restart a kafka cluster which was configured with non-persistent storage and then restart the orderer, it starts throwing these
```
2018-09-02 15:33:59.201 UTC [orderer/consensus/kafka] try -> DEBU 26e [channel: genesischannel] Connecting to the Kafka cluster
2018-09-02 15:33:59.203 UTC [orderer/consensus/kafka] try -> DEBU 26f [channel: genesischannel] Need to retry because process failed = kafka server: The requested offset is outside the range of offsets maintained by the server for the given topic/partition.
```
Is there a way to fix this that doesn't involve wiping my ledger?
EDIT: for full disclosure I should add that the `ORDERER_FILELEDGER_LOCATION` is backed to persistent storage.
Has joined the channel.
A block is created after every transaction, even when the orderer settings are modified to support a larger batch size and longer timeouts.
Can someone help me with this?
hi, how to fix kafka connection refused?
```Failed to connect to broker kafka0.example.com:9092: dial tcp 172.20.0.19:9092: connect connection refused```
sorry, the issue was fixed by waiting for the kafka cluster to start up.
I wonder why it takes almost 300 seconds for the kafka cluster to start up successfully; usually it only needs 30 seconds to be available.
After chaincode instantiation I get the following log on the orderer
ordererlogs.png
Instead it should look something like this:
localorderer.png
Can anyone help me?
Hi! I'm running a kafka orderer service on Kubernetes. The kafka broker containers were recycled and the orderer cached the previous service IP addresses. How can I update the kafka broker addresses on the orderer service? Is it possible to run an update in the level db?
Suppose transaction A is submitted by a client app and transaction B is submitted by another client app a few seconds later. Transactions A and B will end up updating the status of Asset X. How does Fabric ensure transaction B is not committed to the ledger before transaction A if transaction A takes longer to complete?
How does Fabric ensure transactions are committed in the order they are received?
can anyone help me to understand how the orderer system channel works? First, how is it created? Am I right that it can be created by using the configtxgen command (configtxgen -outputBlock) to produce the genesis block, and configuring this as the genesis file for an orderer? Do I need to specify a channel id when doing this? And does the channel id value matter? For example, must it be called something like "orderer-system-channel"?
what must be defined when creating the genesis block? the "Orderer" group and "Consortiums" group, I assume; can I define multiple consortiums here?
what should I do if I will have multiple orderers in the network? do I have to use the same channel id for all orderers when creating the genesis block?
and what should I do if I add a new orderer to the network after the initial bootstrap? do I need to re-generate the genesis block and restart the existing orderer service nodes?
what should I do if I want to update a consortium after the initial bootstrap, for example to add/remove orgs?
When I try to instantiate chaincode, the orderer is able to write a new block, but fails to deliver it. What could be the possible reasons?
I have a scenario where I am calling stub.GetHistoryForKey from another chaincode using stub.InvokeChaincode(), but it returns a value only when on the same channel; otherwise an empty response is returned without an error. Can someone help??
@xiven Yes, to instantiate chaincode, you should be using an admin cert.
How does Fabric ensure transactions are committed in the order received? Is there a scenario where transaction A takes a while for the 3 phases of consensus to complete, while transaction B comes in a second or two later and commits earlier than transaction A? However, the asset being updated needs transaction A to update the state before transaction B.
Has joined the channel.
Hi! I have a dead channel that I've migrated away from due to a Kafka misconfiguration (default 7 day message expire time). The orderers are working with the new channel, but still complain about the old broken channel. Unfortunately, after some period of time, they log at CRIT for the bad channel and exit. Is there any way to get the orderers to ignore the channel or at least stop exiting?
Orderer logs all look like:
```
2018-09-04 12:04:16.099 UTC [common/deliver] deliverBlocks -> WARN 31f6 [channel: broken-channel] Rejecting deliver request for 10.0.7.6:33796 because of consenter error
2018-09-04 12:04:16.099 UTC [common/deliver] Handle -> DEBU 31f7 Waiting for new SeekInfo from 10.0.7.6:33796
2018-09-04 12:04:16.099 UTC [common/deliver] Handle -> DEBU 31f8 Attempting to read seek info message from 10.0.7.6:33796
2018-09-04 12:04:26.100 UTC [common/deliver] Handle -> WARN 31f9 Error reading from 10.0.7.6:33796: rpc error: code = Canceled desc = context canceled
2018-09-04 12:04:26.100 UTC [orderer/common/server] func1 -> DEBU 31fa Closing Deliver stream
...
2018-09-04 12:05:44.150 UTC [orderer/consensus/kafka] try -> DEBU 3205 [channel: broken-channel] Connecting to the Kafka cluster
2018-09-04 12:05:44.151 UTC [orderer/consensus/kafka] try -> DEBU 3206 [channel: broken-channel] Need to retry because process failed = kafka server: The requested offset is outside the range of offsets maintained by the server for the given topic/partition.
2018-09-04 12:05:44.151 UTC [orderer/consensus/kafka] startThread -> CRIT 3207 [channel: broken-channel] Cannot set up channel consumer = kafka server: The requested offset is outside the range of offsets maintained by the server for the given topic/partition.
panic: [channel: broken-channel] Cannot set up channel consumer = kafka server: The requested offset is outside the range of offsets maintained by the server for the given topic/partition.
goroutine 82 [running]:
github.com/hyperledger/fabric/vendor/github.com/op/go-logging.(*Logger).Panicf(0xc4201faf30, 0xd1d562, 0x31, 0xc4206b6200, 0x2, 0x2)
/opt/gopath/src/github.com/hyperledger/fabric/vendor/github.com/op/go-logging/logger.go:194 +0x134
github.com/hyperledger/fabric/orderer/consensus/kafka.startThread(0xc420234000)
/opt/gopath/src/github.com/hyperledger/fabric/orderer/consensus/kafka/chain.go:261 +0xb33
created by github.com/hyperledger/fabric/orderer/consensus/kafka.(*chainImpl).Start
/opt/gopath/src/github.com/hyperledger/fabric/orderer/consensus/kafka/chain.go:126 +0x3f
```
As far as I can tell, there is no way to get a peer to leave a channel? Can it be done by hand somehow?
Peer logs look as expected for this:
```
2018-09-04 15:34:23.809 UTC [blocksProvider] DeliverBlocks -> WARN c0696 [broken-channel] Got error &{SERVICE_UNAVAILABLE}
2018-09-04 15:34:33.809 UTC [deliveryClient] Disconnect -> DEBU c0697 Entering
2018-09-04 15:34:33.809 UTC [deliveryClient] Disconnect -> DEBU c0698 Exiting
2018-09-04 15:34:33.817 UTC [deliveryClient] connect -> DEBU c0699 Connected to orderer-1.orderer:7050
2018-09-04 15:34:33.817 UTC [deliveryClient] connect -> DEBU c069a Establishing gRPC stream with orderer-1.orderer:7050 ...
2018-09-04 15:34:33.817 UTC [deliveryClient] afterConnect -> DEBU c069b Entering
2018-09-04 15:34:33.817 UTC [deliveryClient] RequestBlocks -> DEBU c069c Starting deliver with block [18] for channel broken-channel
2018-09-04 15:34:33.818 UTC [deliveryClient] afterConnect -> DEBU c069d Exiting
```
@Sreesha: I'll need logs from the orderer at DEBUG level, and trimmed around the point of the failed delivery to let you know what's going on.
@DennisM330: There is no 3-phase transaction going on currently with the Kafka option. That said, you don't have any guarantees that input order == output order, nor will you find a blockchain system offering that.
@MikeEmery: Argh, I can see how that can be frustrating. There are two asks here:
1. Whether a peer can leave a channel; I'm fairly certain the answer to that is still a no, but please confirm in #fabric-peer-endorser-committer
2. Whether the orderer can stop loading that channel on bootstrap. I believe this comes down to just deleting the "channel-name" folder in the orderer's ledger directory, as this is what the orderer consults when booting up in order to detect how many channels are there: https://github.com/hyperledger/fabric/blob/release-1.2/common/ledger/blkstorage/fsblkstorage/fs_blockstore_provider.go#L58
Let's see if @manish-sethi can confirm.
@kostas thanks, will try moving the channel dir away and restart
Error message has changed, but may be ok going forward (as in not causing the orderer to exit):
```
2018-09-04 16:15:50.178 UTC [common/deliver] deliverBlocks -> DEBU 60d Rejecting deliver for 10.0.7.6:55346 because channel broken-channel not found
2018-09-04 16:15:50.179 UTC [common/deliver] Handle -> DEBU 60e Waiting for new SeekInfo from 10.0.7.6:55346
2018-09-04 16:15:50.179 UTC [common/deliver] Handle -> DEBU 60f Attempting to read seek info message from 10.0.7.6:55346
2018-09-04 16:16:00.180 UTC [common/deliver] Handle -> WARN 610 Error reading from 10.0.7.6:55346: rpc error: code = Canceled desc = context canceled
2018-09-04 16:16:00.180 UTC [orderer/common/server] func1 -> DEBU 611 Closing Deliver stream
```
peer message has changed to match:
```
2018-09-04 16:18:20.447 UTC [deliveryClient] connect -> DEBU c0ecd Connected to orderer-0.orderer:7050
2018-09-04 16:18:20.447 UTC [deliveryClient] connect -> DEBU c0ece Establishing gRPC stream with orderer-0.orderer:7050 ...
2018-09-04 16:18:20.447 UTC [deliveryClient] afterConnect -> DEBU c0ecf Entering
2018-09-04 16:18:20.447 UTC [deliveryClient] RequestBlocks -> DEBU c0ed0 Starting deliver with block [18] for channel broken-channel
2018-09-04 16:18:20.448 UTC [deliveryClient] afterConnect -> DEBU c0ed1 Exiting
2018-09-04 16:18:20.449 UTC [blocksProvider] DeliverBlocks -> WARN c0ed2 [broken-channel] Got error &{NOT_FOUND}
```
@MikeEmery Alright. So you'd say we're good for now?
I think so, yes. I'll continue to monitor it for unexpected exits but it doesn't look like a message that would eventually cause a CRIT. Thanks @kostas
Sure thing.
Has joined the channel.
Hi, I am interested in creating an adaptation of the PBFT algorithm as a consensus module for fabric; I was wondering where I can find documentation on that
Has joined the channel.
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=qoYHWkAvMGbzE6ytL) @kostas I believe the correct answer is during the endorsement process, the read/write set is captured for the proposed transaction. So, if for example, transaction B commits first, then transaction A will fail since a check is made during validation to query the world state against the read/write set for transaction A. Since B has committed, the key/value will not match the read/write set for trans A, since B has changed it.
Has joined the channel.
Has joined the channel.
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=X3Ps7JssPNtG48vEM) @MikeEmery It looks like you have only deleted the `broken-channel` dir from the orderer's ledger, but not from the peer's ledger, so the peer is still under the assumption that `broken-channel` exists. You should try moving the `broken-channel` dir away even from the peers.
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=cPWCi6vyYrfQSGYzE) @huxiangdong You are right about the orderer system channel being created using the genesis block configured as genesis file for the orderer. The channel id here does not matter. The way an orderer identifies the system channel is by checking if the consortiums group is defined in the channel's config. Application channels _must not_ have the consortiums group (`/Channel/Consortiums`) defined.
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=WP3btEjZFqDPA9L9L) @huxiangdong Yes you can define multiple consortiums here.
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=5Gcmry4NbELww2XF7) @huxiangdong Yes you must use the same channel id. As a thought experiment, if you do not then how will you send transactions intended for the orderer system channel? What channel id will you use for such transactions?
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=jj9gL2BZ2vAmFKvKQ) @huxiangdong You can submit update transactions on the orderer system channel for the same.
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=43csc8YXBk9e4AJ3Z) @huxiangdong Note, currently this is possible for only kafka orderers. Although I am not entirely sure about this, you should be able to do this too with a config update to add the orderer org. The new orderer node should then be bootstrapped using the same genesis block as was used to bootstrap the other nodes. Once up, it should be able to connect to the kafka cluster and consume the kafka partition from offset 0 to eventually come up to speed with all the transactions and blocks.
@kostas @jyellick please correct me if I am wrong.
Has joined the channel.
The (peer, orderer) binaries are not working properly on my ubuntu system; whenever I try to check their version they throw this error: `2018-09-05 14:19:26.915 IST [main] main -> ERRO 001 Fatal error when initializing core config : error when reading core config file: Unsupported Config Type ""`. I figured this might be an issue with FABRIC_CFG_PATH, which was blank by default, but even after setting this path the binaries did not work. Does anybody have any idea what the issue might be? @jyellick
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=dn4765AaH9tnYX5cv) @thakurnikk You might have set the wrong FABRIC_CFG_PATH. Please check.
this is my FABRIC_CFG_PATH=go/src/github.com/hyperledger/fabric/sampleconfig @adarshsaraf123 , is it wrong?
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=hfWTSYzjfdQZgedyD) @thakurnikk Are you missing `~/go` in the path?
yup, I missed out the $HOME @adarshsaraf123, and after adding it, it's working
Getting the following error when I use the InvokeChaincode() function inside a chaincode to call another chaincode function which uses the getHistoryForKey() built-in function. Can someone explain the issue?
```
[ERROR]20180905 08:47:56,521 org.hyperledger.fabric.sdk.Channel.sendQueryProposalToPeers(Channel.java:2913) Sending proposal to buyer12peer1 failed because of: gRPC failure=Status{code=UNKNOWN, description=error executing chaincode: transaction returned with failure: Failed to get policy manager for channel [main], cause=null}
java.lang.Exception: io.grpc.StatusRuntimeException: UNKNOWN: error executing chaincode: transaction returned with failure: Failed to get policy manager for channel [main]
at org.hyperledger.fabric.sdk.Channel.sendQueryProposalToPeers(Channel.java:2913)
... 25 more
[ERROR]20180905 08:47:56,523 com.oracle.bcs.gateway.Gateway.query(Gateway.java:1213) Failed query proposal from peer buyer12peer1 status: FAILURE. Messages: Sending proposal to buyer12peer1 failed because of: gRPC failure=Status{code=UNKNOWN, description=error executing chaincode: transaction returned with failure: Failed to get policy manager for channel [main], cause=null}. Was verified : false.
[INFO ]20180905 08:47:56,553 com.oracle.bcs.gateway.Gateway.query(Gateway.java:1221) Success query proposal from peer buyer11peer1.
[INFO ]20180905 08:47:56,554 com.oracle.bcs.gateway.BcsClientRestAPI.query(BcsClientRestAPI.java:79) /v1/transaction/query from admin@sofbang.com:REQ: sellers/chaincode/v1/invokecc/[chaincode, main, getHistory, 1]/transientMap:null/ :REP:Success/com.oracle.bcs.gateway.ResultInfo@159e771d/null.
```
getHistoryForKey(), when called via InvokeChaincode(), works fine if both chaincodes are on the same channel but returns an empty result iterator if the chaincodes are on different channels. Can someone help?
Also, this is not an issue with cross-channel calling itself, as the query function works fine in all scenarios.
Hi, I'd like to know in which instances fabric has to run PBFT.
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=TwaGWAWS4y39sSCoE) @kostas Yes, that should work... haven't tried ever though.
Has joined the channel.
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=8PTb8L8jbsStJcE62) @adarshsaraf123 Thanks - tried it! Unfortunately the peer creates a new folder and block for the channel, then tries to catch it up to latest. There aren't any errors logged, but the peer shuts down immediately after that.
`[peer] Initialize -> INFO 38c Loading chain broken-channel`
https://chat.hyperledger.org/channel/fabric-orderer?msg=NGExn3tjBqErS63Wi
@DennisM330: What you wrote is correct, however I don't think it addresses your original question? (You know better of course.) What I wrote above stands: you have no guarantees that the order with which the ordering service will receive transactions is the order with which it will output them to committing peers.
> Hi, I am interested in creating an adaptation of the PBFT algorithm as consensus module for fabric, I was wondering where I can find documentation on that
@Raycoms: PBFT or how to develop a Fabric consensus module?
A consensus plugin needs to implement the `Consenter` and `Chain` interfaces defined here: https://github.com/hyperledger/fabric/blob/master/orderer/consensus/consensus.go
There are two plugins built against these interfaces already:
https://github.com/hyperledger/fabric/tree/master/orderer/consensus/solo
https://github.com/hyperledger/fabric/tree/master/orderer/consensus/kafka
You can study them to take cues for your own implementation.
The entire orderer code can be found here:
https://github.com/hyperledger/fabric/tree/master/orderer
The design document we've put out for the Raft plugin (which, like PBFT, is a leader-based protocol) will also be handy: https://docs.google.com/document/d/138Brlx2BiYJm5bzFk_B0csuEUKYdXXr7Za9V7C76dwo/edit
https://chat.hyperledger.org/channel/fabric-orderer?msg=nYfpDPXwGYDbC759v
@adarshsaraf123 is correct in everything, just a clarification: a config update that adds the new orderer to _the system channel_, will ensure that this orderer belongs to every channel that is created from that point on.
In order to add this new orderer to all _existing_ channels, you'll need to target them one-by-one with this config update.
^^ /cc @huxiangdong
> Hi, I'd like to know in which instances fabric has to run PBFT.
@Raycoms: The ordering service nodes.
@kostas An adaptation of PBFT for byzantine consensus; thanks for the docs. Since the consensus modules are used for ordering, in this case aren't there solo, kafka, bft-smart, and pbft itself (the last one designed with byzantine failures in mind)?
I would have imagined that it is used to decide which of the servers will be allowed to append the block to the blockchain, but the whitepaper wasn't very clear on that
is there a function in the go chaincode to query block by number?
@kostas another consensus option is Raft
I have a case, and I'm not sure whether this is an issue or not; can you explain?
setup Orderer, peer version: v1.1.1
I have 2 Orderer nodes running which belong to ordererOrg, Orderer0 and Orderer1,
1. update channel config to remove Orderer1,
2. shutdown Orderer0
3. Send update transaction,
observed: this transaction was sent to Orderer1 and peer updated the ledger.
is this case valid or invalid?
can someone help? Thanks for explaining.
Has joined the channel.
Hello! Are other ordering services (for multiple orderers) supported, besides Kafka?
Has joined the channel.
Hi,
can anyone tell me where I can read about reader/writer policies?
Has joined the channel.
panic: [channel: testchainid] Cannot post CONNECT message = circuit breaker is open
creating the orderer node fails, can someone help me?
Has joined the channel.
Has joined the channel.
@username343 [the docs](https://hyperledger-fabric.readthedocs.io/en/release-1.2/policies.html) have a good overview
Has joined the channel.
Can anyone tell me about this issue? Because of it my container goes to an exited state:
```
2018-09-10 12:32:30.311 UTC [orderer/common/server] initializeServerConfig -> INFO 003 Starting orderer with TLS enabled
2018-09-10 12:32:30.319 UTC [orderer/common/server] initializeMultichannelRegistrar -> INFO 004 Not bootstrapping because of existing chains
2018-09-10 12:32:30.324 UTC [orderer/commmon/multichannel] newLedgerResources -> CRIT 005 Error creating channelconfig bundle: initializing channelconfig failed: could not create channel Orderer sub-group config: setting up the MSP manager failed: the supplied identity is not valid: x509: certificate signed by unknown authority (possibly because of "x509: ECDSA verification failure" while trying to verify candidate authority certificate "ca.indirasoft.com")
panic: Error creating channelconfig bundle: initializing channelconfig failed: could not create channel Orderer sub-group config: setting up the MSP manager failed: the supplied identity is not valid: x509: certificate signed by unknown authority (possibly because of "x509: ECDSA verification failure" while trying to verify candidate authority certificate "ca.indirasoft.com")
```
@Ashish_ydv This occurs because some certificate (possibly an admin certificate or intermediate CA) is not appropriately signed by the CA for the MSP.
I am seeing panics for the fabric-ca-orderer build (amd64-1.3.0-snapshot-0a5ff43) ```2018-09-10 14:56:12.795 UTC [orderer/commmon/multichannel] checkResourcesOrPanic -> CRIT 092 [channel behavesyschan] config requires unsupported channel capabilities: Channel capability V1_3 is required but not supported: Channel capability V1_3 is required but not supported
panic: [channel behavesyschan] config requires unsupported channel capabilities: Channel capability V1_3 is required but not supported: Channel capability V1_3 is required but not supported``` If I turn off V1_3 capability for the Channel and Application the panic goes away
@latitiah looks like it uses a configtx.yaml from v1.3 ?
https://github.com/hyperledger/fabric/blob/master/sampleconfig/configtx.yaml#L86
yes
maybe the fabric-ca-orderer is a v1.2 binary?
(taking a guess)
Yes, that was my thought
but I removed all images and rebuilt master for fabric then fabric-ca
the orderer logs its version at the startup
Actually I read through the build - the base is v1.2 in fabric-ca!
I'll look in the log...
ok, I don't see it for the orderer (I may be just missing it), but I do see the version for the peer and it is based on v1.2
I'll post this to fabric-ca. Thx @yacovm !
Hi! I would like to ask how the ordering service delivers a transaction to the peers. By URL or something else?
@ArianStef The orderer has a GRPC service `Deliver` which streams blocks to peers
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=ySXyxejPCq3uYoCLE) @jyellick Thank you a lot. I have another question, if you have time: I haven't found any material about how to distribute an ordering service across the network. Is the best choice to have just one organization for the ordering service, or to distribute it across multiple organizations?
and why?
It is best to have a dedicated ordering organization, since ordering is only CFT (crash-fault-tolerant). We are actively working to produce a BFT (byzantine-fault-tolerant) ordering implementation, at which point, distributing the ordering service across multiple organizations will make more sense.
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=zpTqRc7M5JfBTWAKD) @jyellick thank you. I would like just one more clarification: why, if the ordering service is CFT, is it best to have a dedicated ordering organization?
Has joined the channel.
hello everybody, I want to know what the system channel is.
I have read this: "The ordering service is bootstrapped with a genesis block on the system channel. This block carries a configuration transaction that defines the ordering service properties. The current production implementation consists of ordering-service nodes (OSNs) that implement the operations described here and communicate through the system channel."
Does one orderer communicate with another orderer through the system channel, just like peers communicate with each other through a standard channel with gossip?
I am confused.
Hello All,
I am using the Ordering Service with Kafka.
When I tried to send 3000 transactions using the Node SDK, some transactions failed while the orderer was creating blocks, with the following log:
```
[orderer/consensus/kafka] enqueue -> ERRO 881e [channel: smschannel] cannot enqueue envelope because = kafka server: Message was too large, server rejected it to avoid allocation error.
[orderer/common/broadcast] Handle -> WARN 881f [channel: smschannel] Rejecting broadcast of normal message from 172.32.0.98:59906 with SERVICE_UNAVAILABLE: rejected by Order: cannot enqueue
[orderer/common/server] func1 -> DEBU 8820 Closing Broadcast stream
```
Would you please help me solve this issue?
Has joined the channel.
@yj511608130 a channel is a _logical_ concept to separate data; it's not a communication channel.
@Rosan kafka says message is too large...
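For what it's worth, this error usually means the batch the orderer tried to post exceeds the Kafka broker's message size limit. One common remedy is to keep the orderer's batch limits below the broker's limits (or raise the broker's). The values below are illustrative, not recommendations:

```yaml
# configtx.yaml (illustrative values): keep AbsoluteMaxBytes at or below
# the Kafka brokers' message.max.bytes and replica.fetch.max.bytes.
Orderer:
  BatchSize:
    MaxMessageCount: 500
    AbsoluteMaxBytes: 1 MB
    PreferredMaxBytes: 512 KB
```

On the Kafka side, `message.max.bytes` and `replica.fetch.max.bytes` in the brokers' `server.properties` should be set larger than `AbsoluteMaxBytes`, with some headroom for envelope overhead.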
> I would like just one more clarification: why, if the ordering service is CFT, is it best to have a dedicated ordering organization?
@ArianStef The more parties involved in a CFT system, the greater the chances that one of them will act in a byzantine/malicious fashion. Since CFT does not protect against byzantine faults, it is better to decrease the number of parties with the ability to inject byzantine faults.
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=A7LwMzEMbRfoSwnyj) @jyellick Thank you a lot
Hi all, I'm calling configtxgen (using the same arguments as worked in Fabric 1.0.5) to create my orderer genesis block, and getting this warning:
`[common/tools/configtxgen] main -> WARN 001 Omitting the channel ID for configtxgen is deprecated. Explicitly passing the channel ID will be required in the future, defaulting to 'testchainid'.`
It seems like the orderer genesis block should be independent of the channels to be created, since the orderer is created before the channels. Is this warning correct?
@vdods there is a system channel in orderer which you should give a name to
Ah, is that the channel that the config data goes into?
@vdods part of them, i.e. channel creation configs go to system channel
Hi..can anybody please help me resolve this error from peer logs:
```
2018-09-14 07:21:10.222 UTC [ConnProducer] NewConnection -> ERRO 95f Failed connecting to testf_orderer0:7050 , error: context deadline exceeded
2018-09-14 07:21:10.226 UTC [deliveryClient] connect -> DEBU 960 Connected to
2018-09-14 07:21:10.226 UTC [deliveryClient] connect -> ERRO 961 Failed obtaining connection: Could not connect to any of the endpoints: [testf_orderer0:7050]
```
This might not be the *best* place for this question, but I figured it's not a totally unreasonable spot. Can you add private data collections to an existing channel?
@aatkddny yes
Is there an example anywhere? I'm guessing it's a similar idea to adding an org to the channel, but that's literally a guess at this point.
@aatkddny yeah of course.... you do that during chaincode upgrade
did you look at the docs?
@aatkddny https://hyperledger-fabric.readthedocs.io/en/latest/private-data-arch.html#upgrading-a-collection-definition
And note, collections are scoped to a channel's chaincode, not the channel itself!
got the idea at least, thx - I didn't rtfm in this case and as such wasn't sure if the existence of the collection was recorded in the channel in the same way as members.
i saw it was done as part of the instantiate and didn't think any further than that. i'll try an upgrade and see what that does for me.
Has joined the channel.
I'm a bit confused about the meaning of the Orderer Genesis block.
Is this the configuration block where the consortiums are defined with their CAs' certificates?
Has joined the channel.
Hi, little question:
AFAIK, the orderer doesn't store a copy of a channel's ledger.
If the orderer is restarted, how does it know the next block number it should use to create the next block?
Does it store some kind of persistent state for each channel in some other form?
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=veWXcuzBwcZr6pG4Q) @waxer the orderer does store all the blocks that are part of a channel's ledger. It doesn't store the "official" ledger in the sense that it does not know which transactions will ultimately be valid or invalid based on the validation and commitment phase of the transaction flow, and thus it doesn't store the channel's state database, but it does indeed keep the blocks of the channel
@silliman , oh great. Now makes sense :) thank you
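The behavior described above can be checked from the CLI; a rough sketch (channel and orderer names are illustrative):

```shell
# Fetch the newest block directly from the orderer's deliver service;
# this works because the orderer keeps every block of the channel,
# even though it does not maintain the state database.
peer channel fetch newest latest.block -c mychannel -o orderer.example.com:7050
```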
hi, i have a question about fail to join channel. the orderer log printed a warning:
`[common/deliver] deliverBlocks -> WARN 1160 [channel: registerch] Rejecting deliver request for 172.21.0.1:35686 because of consenter error`
and the sdk client prompts an error:
`error: [Orderer.js]: sendDeliver - rejecting - status:SERVICE_UNAVAILABLE`
i am using fabric release-1.2.
@bh4rtp This usually indicates a misconfiguration in the Kafka cluster or that you have not allowed the cluster sufficient time to start and finish provisioning your topic. Try again in a few minutes, and the problem will likely have gone away.
@jyellick thanks.
@jyellick the log of orderer prints `[orderer/consensus/kafka/sarama] NewClient -> DEBU 935 ClientID is the default of 'sarama', you should consider setting it to something application-specific`. how to set the `ClientID`?
@bh4rtp you cannot... we are using the default ClientID provided by sarama. It shouldn't matter much from Fabric's point of view.
I'm getting this error in my orderer logs when trying to invoke: `SERVICE_UNAVAILABLE: rejected by Consenter: will not enqueue, consenter for this channel hasn't started yet` I have been able to invoke but not recently with this error now. Is something tripped up in kafka?
Has joined the channel.
Has joined the channel.
Has joined the channel.
When I try to update the configtx.yaml file to add a new channel in the profile section, and then execute the peer channel create command, I get the error "Unknown consortium name: my_consoritum_name". Any solution?
Has joined the channel.
Hello guys, I'm trying to create a network with 3 organizations. I have created configtx.yaml and crypto-config.yaml and could run docker successfully, but when I want to create a channel I get an error: Attempted to include a member which is not in the consortium. I have defined each organization under the consortium in configtx.yaml. If it would help, I can share the files with you. Have a nice day ! 🙂
Has joined the channel.
i am trying to create a channel using this command:
CORE_PEER_MSPCONFIGPATH=$GOPATH/src/github.com/hyperledger/fabric/peer/crypto/orderer/localMspConfig CORE_PEER_LOCALMSPID="OrdererMSP" peer channel create -o orderer0:7050 -c mychannel -f crypto/orderer/channel.tx --tls $CORE_PEER_TLS_ENABLED --cafile $GOPATH/src/github.com/hyperledger/fabric/peer/crypto/orderer/localMspConfig/cacerts/ordererOrg0.pem
and this is my version I am not sure how I am wrong
```CORE_PEER_MSPCONFIGPATH=$GOPATH/src/github.com/hyperledger/fabric/peer/crypto/ordererOrganizations/voting.org/users/Admin@voting.org/msp CORE_PEER_LOCALMSPID="OrdererMSP" peer channel create -o orderer.voting.org:7050 -c VotingChannel -f ./channel-artifacts/channel.tx --tls $CORE_PEER_TLS_ENABLED --cafile $GOPATH/src/github.com/hyperledger/fabric/peer/crypto/ordererOrganizations/voting.org/orderers/orderer.voting.org/msp/cacerts/ca.voting.org-cert.pem
Error: failed to create deliver client: orderer client failed to connect to orderer.voting.org:7050: failed to create new connection: context deadline exceeded
```
Has joined the channel.
Hello everyone, I have a question about the Kafka-based consensus: it's not safe! Why? Because when I log in to a Kafka container, I can produce messages into the queue of the topic that the Fabric network channel is using.
Am I right? Has anyone thought about this?
Hi everyone, do we need to make a signature with the orderer's identity on the block before writing the block to the file?
Has joined the channel.
Has joined the channel.
I am using the peer channel upgrade command and getting this error: Error: could not assemble transaction, err Proposal response was not successful, error code 500, msg cannot get package for chaincode (mycc:2.0). Any solution?
Has joined the channel.
Has left the channel.
> I have been able to invoke but not recently with this error now. Is something tripped up in kafka?
@xiven Yes, usually this error means that Kafka has simply not finished starting up, or there is something wrong with the cluster
> When i try to update the configtx.yaml file to add new channel in profile section and when i execute the peer channel create command then m getting the error "Unknown consortium name: my_consoritum_name" ...........Any solution?
@yousaf When you bootstrapped your orderer, you defined a number of orgs in a consortium. New channels may only be constructed with these members. If you wish to add a new member to the consortium, you must update the orderer system channel.
> `Error: failed to create deliver client: orderer client failed to connect to orderer.voting.org:7050: failed to create new connection: context deadline exceeded`
@hypere This looks like some sort of networking issue between your hosts, can you ping, port-scan etc. to that address?
> Am I right? Have anyone thought about this
@JaccobSmith Certainly, you must protect the integrity of the Kafka cluster and only allow the orderers to insert messages. The same can be said for any system, for instance, if an attacker gains access to your laptop and modifies the contents of the hard drive, you could see a different blockchain.
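One way to enforce that only the orderers may write to the channel's topic is a broker-side ACL using Kafka's stock tooling; a rough sketch (the principal, topic, and zookeeper names are illustrative, and the brokers must run with an authorizer and client authentication enabled):

```shell
# Allow only the orderer's authenticated identity to produce to the
# Fabric channel topic; all other principals are denied Write by default
# once an authorizer is configured on the brokers.
kafka-acls.sh --authorizer-properties zookeeper.connect=zookeeper:2181 \
  --add --allow-principal "User:CN=orderer0.example.com" \
  --operation Write --topic testchainid
```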
> Hi , everyone, Do we need make a signature with orderer's Identity on the block before writing block to the file?
@baoyangc The orderer signs blocks before it commits and disseminates them. The peers check that blocks have been signed by an orderer before committing them.
@jyellick I am using this command to update the orderer: `../bin/configtxgen -profile TwoOrgsOrdererGenesis -outputBlock ./channel-artifacts/genesis.block`. Am I making some mistake?
@yousaf This would be if you are bootstrapping the network only. To modify the network, you must use a config update. https://hyperledger-fabric.readthedocs.io/en/release-1.2/config_update.html
The process for updating a config is necessarily a bit more challenging than bootstrapping the system, as you must ensure the update is authorized, prevent replay, etc.
@jyellick So you mean that we need to manually update the config.json file like for adding new consortium and add the updated configuration to the network?
You should follow the guide in the link I posted. Roughly, you will need to pull down the current orderer system channel config, modify it, compute and sign an update, and submit it to the orderer.
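For reference, the flow in that guide looks roughly like this on the CLI; this is a sketch only, with illustrative channel and file names:

```shell
# 1. Pull the current config block of the orderer system channel
peer channel fetch config config_block.pb -c testchainid -o orderer.example.com:7050

# 2. Decode it to JSON and extract the config section
configtxlator proto_decode --input config_block.pb --type common.Block \
  | jq .data.data[0].payload.data.config > config.json

# 3. Edit config.json into modified_config.json (e.g. with jq), then encode both
configtxlator proto_encode --input config.json --type common.Config --output config.pb
configtxlator proto_encode --input modified_config.json --type common.Config --output modified_config.pb

# 4. Compute the delta between the two configs
configtxlator compute_update --channel_id testchainid \
  --original config.pb --updated modified_config.pb --output config_update.pb

# 5. Wrap config_update.pb in an Envelope (see the docs), then collect
#    signatures; signconfigtx is run once per required admin, and the
#    final update call adds the last signature and submits it.
peer channel signconfigtx -f config_update_in_envelope.pb
peer channel update -f config_update_in_envelope.pb -c testchainid -o orderer.example.com:7050
```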
@jyellick Thanks sir :)
Good morning. I am trying to get my orderer node to stand up using fabric v1.2, but keep receiving an error regarding the MSP not being able to obtain the certification chain. The certificates were generated via fabric-ca and are correctly configured as far as I and the good folks over at #fabric-ca can tell. The certificates are also located in the correct locations for the orderer's MSP structure. Two peer nodes I am bootstrapping concurrently start with no errors. My localMSPDir variable in the orderer.yaml points to the correct location for the orderer's MSP information. The error relates to the root administrator's certificate which registered the orderer. Please let me know what other information you need to help solve this problem.
Error38.PNG
@jvsclp Have you tried using `openssl verify` on the admin certs in the local MSP dir? This error would indicate to me that they are not appropriately issued by the CA.
As a workaround, those admin certs in the local MSP are actually unused for the orderer (they are only used in the peer) so you could simply try deleting them.
Thank you for the suggestion on the certificate deletion. I'll give it a shot and report back. The `openssl verify` returns `ok` for all certificates.
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=iBjZhrkbqq5C2Gs87) @jyellick I removed the files from the admincerts folder and received a new critical error due to the empty directory. Image below.
Crit.PNG
Ah, sorry, my mistake, I thought no admin certs would be allowed.
jq -s '.[0] * {"channel_group":{"groups":{"Application":{"groups": {"Org3MSP":.[1]}}}}}' config.json ./channel-artifacts/org3.json > modified_config.json
Can anyone tell me the similar command to add a new consortium for configuration update ??
@yousaf
```jq -s '.[0] * {"channel_group":{"groups":{"Consortiums":{"
Capture.PNG
I'm still working through the issue where standing up my orderer node gives me a certification error. I managed to get a new error at least, but it's still along the same lines as the previous ones. The node startup fails when trying to initialize the orderer's MSP. The certificate that is failing in this case (all the other admin certificates also fail in some fashion) is the RootAdmin-cert.pem which was used when registering the orderer's signing certificate with the fabric-ca-server. The openssl verification chains are valid. Here's the structure of the MSP for the orderer:
|-admincerts - RootAdmin-cert.pem
|-cacerts - RootCA-cert.pem
|-keystore - Orderer-cert.key
|-signcerts - Orderer-cert.pem
|-tls - Orderer-tlscert.pem, Orderer-tlscert.key
|-tlscacerts - RootTLSCA-cert.pem
Crit01f.PNG
On a separate note, is there a way to decode the array of numbers following the error?
@yousaf There is a `Consortium` value, which is defined in channel group for application channels. This is the consortium the channel was created for. In the _orderer system channel_, there is a `Consortiums` (note the 's' at the end) which contains all of the consortium definitions which may create channels. You must modify the orderer system channel if you wish to change consortium membership.
@jvsclp Yes, you may decode that array of numbers.... save them to a file, then:
```cat
You should inspect your admin certs with `openssl verify` to confirm they are issued by your CA chain.
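As for decoding the array of numbers from the error: it is a decimal-encoded byte array, usually just ASCII PEM text. A small shell sketch (the sample bytes below are illustrative, just the start of a PEM header):

```shell
# Convert decimal byte values (as printed in the orderer's error log)
# back into readable text; a PEM certificate starts with "-----BEGIN".
for b in 45 45 45 45 45 66 69 71 73 78; do
  printf "\\$(printf '%03o' "$b")"   # decimal -> octal escape -> character
done
echo
```

If the full array from the log is decoded this way, the output should be the PEM certificate that failed validation.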
Verify.PNG
@jyellick Thanks sir. I just wanted to clear this confusion.
Verify.PNG
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=ZtEgjMFLMJLaDm7Ap) @jyellick So, if I control the orderer, I control the whole blockchain. Does this mean that Fabric has a single point of failure at the orderer?
It's quite different from what I expected. In Bitcoin, a lone peer can't influence the whole blockchain.
@JaccobSmith It depends on your point of view. Bitcoin uses a "Proof of Work" consensus, which says the longest blockchain is the authoritative one. Any party with sufficient computational power, no matter the number of nodes, may 'control the whole blockchain'. The premise in Bitcoin is that doing so would be financially prohibitive and that consenters are incentivized to behave properly. Still, if a single node advertised a blockchain that was 10 blocks ahead, then later advertised a blockchain that was 20 blocks ahead, it could effectively perform a double-spending attack.
Fabric ordering is designed to be modular with respect to its consensus. In v1.0/v1.1/v1.2 the production consensus mechanism of choice is "Kafka", which assumes that the parties running the Kafka brokers are well behaved. Because Fabric is a permissioned blockchain, the identity attesting to the valid order is recorded, so there is a significant disincentive for the orderer to behave badly. I would also note that the identities in ordering that are authorized to sign blocks are not authorized to transact on the network. This implies that there must minimally be collusion between an orderer identity and a transacting identity for any sort of attack to take place.
There is a BFT consensus option based on Smart-BFT, which recently announced a new milestone. And, in the future there will be more BFT-based solutions. The BFT solutions allow some number of individual orderers to behave maliciously without impacting the network's integrity. Strictly speaking, Fabric _could_ add a proof-of-work-type consensus model for ordering, but there are many drawbacks to a proof-of-work model, and it's nothing I've seen any particular interest in.
@jvsclp The screenshot you sent seems to be inspecting certs in a local folder (that does not appear to be organized as a 'local MSP'). Are you certain this is 'all' of the certs that appear in the MSP directory?
Hi all,
I am using Fabric 1.1.0 and an OSN with (1 Orderer, 4 Kafka and 3 Zookeepers).
I was trying to measure Kafka's leader election time while processing 3000 transactions.
So, I stopped the leader Kafka in the middle of processing transactions.
Some transactions failed between the Kafka leader going down and the new leader being elected.
The following logs are printed in the orderer log:
```
2018-09-20 04:32:05.959 UTC [orderer/consensus/kafka] enqueue -> ERRO 16e25 [channel: testchannel] cannot enqueue envelope because = dial tcp 18.654.72.469:9092: getsockopt: connection refused
2018-09-20 04:32:05.959 UTC [orderer/common/broadcast] Handle -> WARN 16e26 [channel: testchannel] Rejecting broadcast of normal message from 172.32.0.71:51082 with SERVICE_UNAVAILABLE: rejected by Order: cannot enqueue
2018-09-20 04:32:05.959 UTC [orderer/common/server] func1 -> DEBU 16e27 Closing Broadcast stream
```
After the new Kafka leader is elected and the orderer gets connected to the new leader, it can process the remaining transactions.
But the orderer does not even retry the failed transactions. Is this normal, or is there any way to make the orderer retry the failed transactions?
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=xZCD3BhhGS773cTKQ) @jyellick Thanks very much for your generous answer
I am trying to update the policy. In BYFN, I initialized the endorsement policy to OR('Org1MSP.admin', 'Org2MSP.admin'), but when I send the endorsed transaction to the orderer using the peer channel update command, I get this error: Error: got unexpected status: BAD_REQUEST -- error authorizing update: error validating DeltaSet: policy for [Policy] /Channel/Admins not satisfied: Failed to reach implicit threshold of 2 sub-policies, required 1 remaining. Any solution?
How do I set environment variables to use the orderer node, just like we do for the orgs' peers?
@yousaf I think you have some terminology wrong. There are policies in the channel config, though they are not called 'endorsement policies'. But, in general, to update the `/Channel/Admins` policy for BYFN, you would need a signature from the ordering admin and a signature from each of the peer admins
How do I get a signature from the ordering admin? I mean, I don't understand how to use the ordering admin just like we use the peer admins.
It discusses this in the channel update docs, but, in general, you use `peer channel signconfigtx` executed by two admins (each invocation adds a signature to the transaction), then `peer channel update` by the third (which adds a signature and submits it).
Configure your environment to point to the correct crypto material (ie local MSP dir) before executing each command.
Can you give me an environment variable configuration set that points to the ordering admin?
Because I can get access to the peers, but I don't know how to point to the ordering admin using these env variables.
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=HPiCndPADq3FaLk5a) @jyellick That was just to make the openssl commands shorter, but here are the commands given in the Orderer's MSP directory to verify the certificates
Verify.jpg
And those are the only certificates in the MSP directory?
@jvsclp Could you paste the long series of digits in the error message via a service like hastebin.com and I will decode them for you?
@jyellick Who is the ordering admin in byfn?
@yousaf `crypto-config/ordererOrganizations/example.com/users/Admin\@example.com/`
@jyellick This was for the MSPCONFIGPATH. Can you please tell me the values for the remaining three variables too?
The problem is that the ordering admin environment variables were not available in the docs. That's why I was stuck.
@yousaf You may find this defined in your original `configtx.yaml`, the MSP ID is `OrdererMSP`
I'm not sure what third parameter you require?
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=BXnCWuWFBaSp8wDrs) @jyellick As of now, yes. I was trying to isolate which certificates were causing problems. Right now it's the admin certificates. Probably because when the node is standing up certificates are checked in a sequential manner and the first to fail causes the certificate verification routine to exit without checking any of the other certificates. I have some intermediate certificates and intermediate admins, but they are not in the orderer's MSP directory at the moment. If they were in the directory `openssl verify` still returns ok.
I will post the array to hastebin. I was trying to decode the array with the command you posted yesterday, but I kept running into issues with the configtxlator portion of the pipeline, even after doing each step sequentially to produce input and output.
@jyellick CORE_PEER_TLS_ROOTCERT_FILE...this one?
@jvsclp Are your certificates perhaps concatenated more than one per file? I realize this is standard practice, but the MSP structure wants the intermediate CAs to be in their own files in the `intermediatecerts` folder.
@yousaf I do not believe you should need to set this variable for this command to work.
For the `peer channel` commands, it is interacting with the orderer and takes the orderer's TLS ca as an explicit parameter
@jyellick Should I set CORE_PEER_ADDRESS to orderer.example.com:7051 too? Or leave it at peer0.org1's address?
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=2HsQWDsd6utHCepkL) @jyellick That's a good question, the certificates are not concatenated. The intermediate CA certificates did reside in the `intermediatecerts`. Here's an example of one of my intermediate certificates.
Nevermind, I'll post the certificate to hastebin as well. Rocket chat does not like the .pem format
Array from the error regarding obtaining certification chain: https://hastebin.com/fadericumi.json
@jyellick I have set the env variables for the orderer, but I ran the peer node status command and am getting this error: status:UNKNOWN
Error: Error trying to connect to local peer: rpc error: code = Unknown desc = access denied
Intermediate certificate authority certificate: https://hastebin.com/utuyazerad.diff
This is the corporate certificate so the chain is CLPRootCA -> CLPCorpCA
@yousaf Yes, the orderer admin is not authorized to interact with peers, only orderers
@jyellick So what is the solution to use orderer admin to sign my configuration update transactions?
Because i have signed it from orgs peer admins. Only orderer admin is left to sign it and i am stuck at this point
@jvsclp I'm having trouble decoding the hex string you sent. Are you sure you got all of it? Maybe just the full log line would be best.
@yousaf Set the local MSP ID and dir variables, and execute the `peer channel update` as the orderer admin. I'm not sure where you are stuck?
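For BYFN-style crypto material, 'set the local MSP ID and dir' amounts to something like the following sketch (the paths assume the Admin@example.com layout mentioned earlier in the thread; the envelope file name and $ORDERER_CA variable are illustrative):

```shell
# Point the peer CLI at the orderer org admin's MSP before signing/submitting
export CORE_PEER_LOCALMSPID="OrdererMSP"
export CORE_PEER_MSPCONFIGPATH=${PWD}/crypto-config/ordererOrganizations/example.com/users/Admin@example.com/msp

# This invocation now adds the orderer admin's signature and submits the update
peer channel update -f config_update_in_envelope.pb -c mychannel \
  -o orderer.example.com:7050 --tls --cafile "$ORDERER_CA"
```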
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=8gLYzXLSWcSorp7Tq) @jyellick Hmm, it is the full string. I'll see if I can get the error to generate again to get you the full log line; I've tried several configurations in the MSP since then, so I'm not getting the exact error, but still similar certificate errors. By the way, though this is frustrating for me, I appreciate you taking your time to work through this.
@jyellick Thank you very much sir. Issue resolved. :)
@jvsclp Ah, I see my mistake... I thought it was hex encoded, but it's actually decimal encoded, let me try this again...
Ah, @jvsclp I feel like I may have been a little blind. Your openssl verification command uses only the CA to verify the admin cert, it does not additionally reference the intermediate CA. Am I to believe then that your admin cert was issued directly by the CA and not by the intermediate?
Yes, the CLPRootAdmin was the bootstrap admin used to stand up the CLPRootCA server. The CLPRootAdmin is also the administrator which registered the Orderer. The Orderer-cert is issued by the CLPRootCA, so:
CLPRootCA -> CLPRootAdmin-cert
CLPRootCA -> Orderer-cert
If you have an intermediate CA, then the user certs must be issued by that intermediate CA, they cannot be issued directly by the root CA. The reason for this is, you can imagine that Verisign issues you an intermediate CA cert. You configure your MSP to use Verisign as the root, and your intermediate as the intermediate. Now, you do not want Verisign to be able to directly issue your user certs, only your intermediate CA.
As the error message indicates, user certs must be 'leafs' in the trust graph.
So, you may have only a root CA, and directly issued user certs. Or, if you have one or more intermediate CAs, then your user certs must be issued by the intermediates.
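The 'leaf via intermediate' rule can be demonstrated with plain openssl; a self-contained sketch (all names are made up, not the thread's actual CA setup):

```shell
# Build a toy chain RootCA -> IntCA -> Admin, then verify the leaf.
openssl req -x509 -newkey rsa:2048 -nodes -keyout root.key -out root.pem -subj "/CN=RootCA"

# The intermediate must itself be a CA (basicConstraints=CA:TRUE)
printf 'basicConstraints=CA:TRUE\n' > ca_ext.cnf
openssl req -newkey rsa:2048 -nodes -keyout int.key -out int.csr -subj "/CN=IntCA"
openssl x509 -req -in int.csr -CA root.pem -CAkey root.key -CAcreateserial \
  -extfile ca_ext.cnf -out int.pem

# The leaf (user/admin) cert is issued by the intermediate, not the root
openssl req -newkey rsa:2048 -nodes -keyout leaf.key -out leaf.csr -subj "/CN=Admin"
openssl x509 -req -in leaf.csr -CA int.pem -CAkey int.key -CAcreateserial -out leaf.pem

# The leaf only verifies when the intermediate is supplied as the untrusted chain
openssl verify -CAfile root.pem -untrusted int.pem leaf.pem
```

If the chain is correct, the last command prints `leaf.pem: OK`.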
So my RootAdmin is okay, but should not be included in the MSP directory and my Orderer needs to fall under one of the intermediate CAs? For example, CLPRootCA -> CLPCorpCA -> Orderer
Yes, correct
Hmm, thank you! I was not aware this was a required configuration despite reading the MSP Identity Validity Rules. I figured since the CLPRootCA was part of the organizational structure it would not be an issue since the intermediate certificate authorities represent divisions and the Orderer would be used for the entire organization. I will have to do some reconfiguration and I will let you know how it goes.
That also explains why the peer nodes would stand up without the same errors, because they're issued from intermediate certificate authorities.
I have used this command to add a consortium of Org1 and Org3: `jq -s '.[0] * {"channel_group":{"groups":{"Consortiums":{"OneThreeConsortium":{"groups": {"Org1MSP":.[1] , "Org3MSP":.[2]}}}}}}' config.json ./channel-artifacts/org1.json ./channel-artifacts/org3.json > modified_config.json`. While converting modified_config.json to modified_config.pb, I am getting this error: Error decoding: error decoding input: *common.Config: error in PopulateFrom for field channel_group for message *common.Config: *common.DynamicChannelGroup: error in PopulateFrom for map field groups with key Consortiums for message *common.DynamicChannelGroup: *common.DynamicConsortiumsGroup: unknown field "OneThreeConsortium" in common.ConfigGroup
Any solution?
@jyellick sir??
@yousaf I had to step away for a few. You must modify your command to be `... {"groups":{"Consortiums":{"groups":{"OneThreeConsortium" ...`
I think I may have originally omitted the 'groups' piece when I first told you. Note, this is simply a JSON path; you may inspect the document yourself to see the elements it refers to.
@jyellick Now I am getting this error: Error decoding: error decoding input: *common.Config: error in PopulateFrom for field channel_group for message *common.Config: *common.DynamicChannelGroup: error in PopulateFrom for map field groups with key Consortiums for message *common.DynamicChannelGroup: *common.DynamicConsortiumsGroup: error in PopulateFrom for map field groups with key OneThreeConsortium for message *common.DynamicConsortiumsGroup: *common.DynamicConsortiumGroup: unknown field "Org1MSP" in common.ConfigGroup
That makes it sound to me like you are missing the `"groups"` between `OneThreeConsortium` and your org definition
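Putting the missing `"groups"` levels together, the full merge would look something like this; a sketch using toy stand-in JSON files (the consortium and org names follow the thread's example, the real org1.json/org3.json would come from configtxgen/configtxlator output):

```shell
# Toy inputs standing in for the decoded channel config and the org definitions
echo '{"channel_group":{"groups":{"Consortiums":{"groups":{}}}}}' > config.json
echo '{"values":{}}' > org1.json
echo '{"values":{}}' > org3.json

# Note the "groups" key at every level of the config tree:
# Consortiums -> groups -> OneThreeConsortium -> groups -> OrgMSPs
jq -s '.[0] * {"channel_group":{"groups":{"Consortiums":{"groups":{"OneThreeConsortium":{"groups":{"Org1MSP":.[1],"Org3MSP":.[2]}}}}}}}' \
  config.json org1.json org3.json > modified_config.json
cat modified_config.json
```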
@jyellick Now I'm facing this error with the peer channel update command: Error: got unexpected status: BAD_REQUEST -- error authorizing update: error validating DeltaSet: invalid mod_policy for element [Group] /Channel/Consortiums: mod_policy not set
@jyellick After clearing out the old Orderer certificates and reissuing the identity under an intermediate certificate, as you mentioned, the orderer node is now active. This problem stopped my work for about a week, thank you!
@yousaf Is there a mod policy set on that element in your `modified_config.json`?
@jyellick no
What version of `configtxgen` did you use to bootstrap your network? What version of fabric are you using?
v 1.2
@jyellick can you tell me the command to set this mod_policy too along with consortium so that this issue could be fixed?
If you are working with v1.2 fabric binaries, I do not see how this mod policy could be unset
In the `original_config.json` is it also unset?
@jyellick In original_config.json it is set, but for other sections. Since BYFN doesn't contain a "Consortiums" section by default and we are including it using the jq tool, it might be that we need to include the mod_policy through the command too
But the issue i am facing now is that how to include mod_policy along with adding new consortium in jq command
I am using this command to set the mod_policy.... jq -s '.[0] * {"channel_group":{"groups":{"Consortiums":{"groups":{"mod_policy":"Admins"}}}}}' config.json > modified_config.json
But getting this error............. Error decoding: error decoding input: *common.Config: error in PopulateFrom for field channel_group for message *common.Config: *common.DynamicChannelGroup: error in PopulateFrom for map field groups with key Consortiums for message *common.DynamicChannelGroup: *common.DynamicConsortiumsGroup: expected map field groups value for mod_policy for message *common.DynamicConsortiumsGroup to be assignable from map[string]interface {} but was not. Is string
@yousaf The orderer system channel, ie `testchainid` should include a `Consortiums` section. It sounds like you are trying to perform this operation on an application channel, which will not work.
kafka orderer doesn't work stably. sometimes fails to join channel. it tells SERVICE_UNAVAILABLE.
the client sleeps 5 seconds after creating 5 channels. should i set it longer?
Has joined the channel.
@jyellick Thanks man for your response :). I have changed the command as you said but still facing the problem of "unknown field Consortiums":
`jq -s '.[0] * {"channel_group":{"groups": {"Orderer": {"Consortiums":{"groups":{"OneThreeConsortium": {"groups": {"Org1MSP":.[1] , "Org3MSP":.[2] , "mod_policy":"Admins"}}}}}}}}' config.json ./channel-artifacts/org1.json ./channel-artifacts/org3.json > modified_config.json`
`configtxlator proto_encode --input modified_config.json --type common.Config --output modified_config.pb`
`configtxlator: error: Error decoding: error decoding input: *common.Config: error in PopulateFrom for field channel_group for message *common.Config: *common.DynamicChannelGroup: error in PopulateFrom for map field groups with key Orderer for message *common.DynamicChannelGroup: *orderer.DynamicOrdererGroup: unknown field "Consortiums" in common.ConfigGroup`
I have setup a fabric network with more than one orderer and analyzing few scenarios on how it is working. Have two questions.
1. One of the advantages of multi orderer network is to avoid a single point of failure. So if one orderer fails it has to automatically take another orderer into the picture and continue the work. But in the actual scenario for peer chaincode invoke through cli we pass arguments of orderer and cafile of orderer to make a transaction. Here we are passing the orderer info so if the orderer we choose is down the transaction will not be done. My question is - this is not the objective of multi orderer network so why we need to pass the orderer related arguments?
2. I deployed this network with 4 kafka brokers and 3 zookeepers. Even after stopping all the three zookeepers the fabric network is giving the correct response. What is the significance of zookeeper?
@jyellick Kindly correct me if i did some mistake in command. OR provide me a corrected one.
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=8sJA5h9jtymhADRAL) @yousaf While your objective is not clear to me, if you are trying to update the Consortiums as defined in the system channel, then the `Consortiums` key does not sit within `Orderer`. It sits on the same level as the `Orderer`. So just try removing the `Orderer` key and have `{"channel_group":{"groups": {"Consortiums":{"groups":{"OneThreeConsortium": {"groups": {"Org1MSP":.[1] , "Org3MSP":.[2] , "mod_policy":"Admins"}}}}}}}`. Let us know if that works.
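The path fix can be seen on a toy document. This is a minimal sketch on a stub config, assuming `jq` is available; the file names are illustrative and the empty objects stand in for real org definitions:

```shell
# Stub config (not a real channel config): Orderer and Consortiums are
# siblings directly under channel_group.groups.
echo '{"channel_group":{"groups":{"Orderer":{},"Consortiums":{"groups":{}}}}}' > /tmp/toy_config.json

# Merge the new consortium under Consortiums.groups, mirroring the
# corrected jq path from this thread.
jq '. * {"channel_group":{"groups":{"Consortiums":{"groups":
      {"OneThreeConsortium":{"groups":{"Org1MSP":{},"Org3MSP":{}}}}}}}}' \
   /tmp/toy_config.json > /tmp/toy_merged.json

# Confirm Orderer and Consortiums remain siblings after the merge:
jq -c '.channel_group.groups | keys' /tmp/toy_merged.json
# → ["Consortiums","Orderer"]
```

The `*` operator is jq's recursive merge, which the `jq -s '.[0] * {...}'` commands in this thread rely on: keys present on both sides are merged, so the existing `Orderer` group is left untouched.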
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=ctHcmDXr6hAinae3K) @jyellick The above is how it was specified by @jyellick in this reply.
@jyellick Hi Jason, Could you please help me out on my query ?
@jyellick @adarshsaraf123 Error: got unexpected status: BAD_REQUEST -- error authorizing update: error validating DeltaSet: invalid mod_policy for element [Group] /Channel/Consortiums: mod_policy not set
@jyellick @adarshsaraf123 In that case, I am facing this invalid mod_policy issue
How did you create the genesis block? Which `configtx.yaml` file did you use?
I am using the default first-network containing configtx.yaml file and followed the steps in official documentation of byfn
hello guys, I am trying to build the network with Kafka in docker swarm mode but my orderers are not connecting to Kafka; it is showing the error below: `2018-09-24 07:21:20.239 UTC [orderer/common/server] initializeServerConfig -> INFO 002 Starting orderer with TLS enabled
2018-09-24 07:21:20.517 UTC [orderer/common/server] initializeMultichannelRegistrar -> INFO 003 Not bootstrapping because of existing chains
2018-09-24 07:21:20.711 UTC [orderer/commmon/multichannel] newLedgerResources -> CRIT 004 Error creating channelconfig bundle: initializing channelconfig failed: could not create channel Orderer sub-group config: Invalid broker entry: kafka0_orderer:9092
panic: Error creating channelconfig bundle: initializing channelconfig failed: could not create channel Orderer sub-group config: Invalid broker entry: kafka0_orderer:9092
goroutine 1 [running]:
github.com/hyperledger/fabric/vendor/github.com/op/go-logging.(*Logger).Panicf(0xc4201ec1b0, 0xd15b3b, 0x27, 0xc4202935a0, 0x1, 0x1)
/opt/gopath/src/github.com/hyperledger/fabric/vendor/github.com/op/go-logging/logger.go:194 +0x134
github.com/hyperledger/fabric/orderer/common/multichannel.(*Registrar).newLedgerResources(0xc420108230, 0xc420134300, 0xc420134300)
/opt/gopath/src/github.com/hyperledger/fabric/orderer/common/multichannel/registrar.go:253 +0x391
github.com/hyperledger/fabric/orderer/common/multichannel.NewRegistrar(0x1393300, 0xc4200c4420, 0xc420120cf0, 0x138fd80, 0x13f3e20, 0xc42000ea50, 0x1, 0x1, 0x0)
/opt/gopath/src/github.com/hyperledger/fabric/orderer/common/multichannel/registrar.go:144 +0x352
github.com/hyperledger/fabric/orderer/common/server.initializeMultichannelRegistrar(0xc4201d5680, 0x138fd80, 0x13f3e20, 0xc42000ea50, 0x1, 0x1, 0xc42036ea10)
/opt/gopath/src/github.com/hyperledger/fabric/orderer/common/server/main.go:262 +0x277
github.com/hyperledger/fabric/orderer/common/server.Start(0xcfa0bc, 0x5, 0xc4201d5680)
/opt/gopath/src/github.com/hyperledger/fabric/orderer/common/server/main.go:103 +0x24c
github.com/hyperledger/fabric/orderer/common/server.Main()
/opt/gopath/src/github.com/hyperledger/fabric/orderer/common/server/main.go:82 +0x20f
main.main()` Can you please help me out with this problem?
Has joined the channel.
Hi All, good day! I'm having the following error in my peer: [blocksProvider] DeliverBlocks -> WARN 502 [someachannel] Got error &{SERVICE_UNAVAILABLE}
When I check my Orderer I'm having the following error: [orderer/common/deliver] Handle -> WARN 721 [channel: someachannel] Rejecting deliver request because of consenter error
2018-09-24 09:23:05.519 UTC [orderer/main] func1 -> DEBU 722 Closing Deliver stream
Is it because of the error from Orderer?
> kafka orderer doesn't work stably. sometimes fails to join channel. it tells SERVICE_UNAVAILABLE.
@bh4rtp SERVICE_UNAVAILABLE is transient (if your Kafka cluster is configured correctly), I would recommend you add retry logic to your application. FWIW, 5 seconds is usually plenty in my experience, but it will depend on the configuration of your Kafka cluster.
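The retry logic suggested above can be sketched as a small shell wrapper. This is a generic sketch, not a Fabric API; `submit_tx` is a hypothetical placeholder for whatever invoke your client actually performs (peer CLI or SDK call):

```shell
# Generic retry helper for transient errors such as SERVICE_UNAVAILABLE.
# Usage: retry <attempts> <delay-seconds> <command...>
retry() {
  local attempts=$1 delay=$2 i
  shift 2
  for i in $(seq 1 "$attempts"); do
    "$@" && return 0
    echo "attempt $i failed, retrying in ${delay}s..." >&2
    sleep "$delay"
  done
  return 1
}

# Example (submit_tx is hypothetical, standing in for your actual invoke):
# retry 5 5 submit_tx
```

A fixed delay is the simplest choice; for a busy Kafka cluster, an exponential backoff between attempts is a common refinement.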
> I have changed the command as you said but still facing the problem of "unknown field consortiums"
@yousaf It sounds like you are still trying to do this on the application channel. There is the initial channel the orderer is bootstrapped with; in the case of 'first-network', the name of this channel is `testchainid`. Then first-network creates an application channel called `mychannel`. I believe you are trying to do these operations on `mychannel`; it should be on `testchainid`. If you switch the channel ID, and go back to the version that has `Consortiums` directly under `channel_group.groups`, it should work.
> 1. One of the advantages of multi orderer network is to avoid a single point of failure. So if one orderer fails it has to automatically take another orderer into the picture and continue the work. But in the actual scenario for peer chaincode invoke through cli we pass arguments of orderer and cafile of orderer to make a transaction. Here we are passing the orderer info so if the orderer we choose is down the transaction will not be done. My question is - this is not the objective of multi orderer network so why we need to pass the orderer related arguments?
The point of multiple orderers is to eliminate a single point of failure and to allow the ordering service to scale horizontally. The peer CLI is really not intended to be used for invokes in a production application. Typically, an SDK such as Node or Java would be used, and on failure, the invoke would be retried to another orderer.
> 2. I deployed this network with 4 kafka brokers and 3 zookeepers. Even after stopping all the three zookeepers the fabric network is giving the correct response. What is the significance of zookeeper?
You may think of Kafka and Zookeeper as a single logical service. The Kafka brokers use Zookeeper to manage leader election, and generally orchestrate changes in the Kafka cluster. I would expect that with zookeeper down, eventually you will experience problems with the cluster.
@SudeepS 2 ^
@vudathasaiomkar You have attempted to reference your Kafka brokers with a hostname that includes an underscore. Underscores are not valid in hostnames, so, the orderer has refused to start because it detects this as a misconfiguration.
@reggiefelias Yes, the error in your peer (SERVICE_UNAVAILABLE) is because the orderer is returning this error to the peer. The orderer typically returns this because the Kafka cluster is still starting up, or because it is misconfigured.
@jyellick I followed your command but I'm still getting the error of mod_policy not set
@jyellick thanks. i have increased sleeping time to be 30 seconds. and now it is ok to join channel. are there any changes with kafka?
@yousaf What channel are you trying to update?
@bh4rtp Making Kafka perform efficiently is a bit outside of the scope of Fabric, you can find many many good resources online however. If you deploy your Kafka cluster where each broker and zookeeper is on its own machine, with fast storage, and a fast network between them, I would anticipate that 5 seconds is more than sufficient to create a new channel.
@jyellick I am not trying to update a channel. I am trying to add a new consortium to add a new channel in my existing network.
@yousaf Consortiums are defined _only_ in the orderer system channel. If you wish to simply add an org to a channel, then you may follow the detailed guide in the docs. If you wish to add an org to a consortium, this means modifying the orderer system channel where the consortiums are defined. Adding an org to a consortium allows that org to be included in initial channel creation and to create channels. It does not give that org access to any existing channels.
@jyellick Sir, I am not talking about adding a new organization to a channel. I have one channel in my network and I want to add a new consortium through a configuration transaction so that I'll be able to create a new channel through it. It's just that I want to extend my existing network from one channel to two channels. Adding a new channel to my existing network seems like something I'd have to do through a configuration transaction on the orderer system channel, but in the configuration transaction steps I am facing the issues I described above.
@yousaf You are correct that you need to perform a config update against the orderer system channel. The error you pasted me some time ago (that there was no mod policy on the consortiums group) indicates to me that it was not the orderer system channel.
Perhaps you can upload the artifacts or a way to reproduce somewhere for me to look at
@jyellick Do you want to look at config.json file or wanna see the files of "channel-artifacts"?
How about the `original_config.json` and `modified_config.json` files, via a service like hastebin.com
This is original_config.json: https://hastebin.com/ufepebudov.json
@jyellick This one is modified_config.json: https://hastebin.com/ataselanov.json
The `original_config.json` as suspected is not from an orderer system channel.
Can you create a link for the config block you pulled using `peer channel fetch`?
Can you create a link for the config block you pulled using `peer channel fetch`? (after converting to JSON)
Actually, also what is the command you used to fetch the config block? It should include the channel ID
peer channel fetch config config_block.pb -o orderer.example.com:7050 -c $CHANNEL_NAME --tls --cafile $ORDERER_CA
What is `$CHANNEL_NAME`?
mychannel
Correct, `mychannel` is an application channel, you must use the name of your orderer system channel.
For first-network, its name is `testchainid`
And is there any way to change this default name for the orderer system channel?
Okay sir. Let me try this.
When you bootstrap the network, via the `configtxgen -outputBlock ...` command, you may (and should) specify a channel id with the `-channelID` flag. You will see a warning with newer versions of `configtxgen` when you omit this.
@jyellick Got it sir.
@jyellick Now, while using the peer channel update command, I am facing this error again:
`Error: got unexpected status: BAD_REQUEST -- error authorizing update: error validating DeltaSet: invalid mod_policy for element [Group] /Channel/Consortiums/OneThreeConsortium: mod_policy not set`
Ah, in this case yes, you must now set a mod_policy in the JSON you inject. Set it to `/Channel/Orderer/Admins`
Do this by adding a `mod_policy` field to the `OneThreeConsortium` group
Actually
This should not be needed
What did you originally name your consortium?
OneThreeConsortium
Are you certain? The default name of the consortium in first-network is `SampleConsortium`, did you change this in your `configtx.yaml`?
I want to name the new consortium OneThreeConsortium. But my default one is SampleConsortium
You may create a new consortium via the update. In this case, you would need to set a `mod_policy` (I recommend `/Channel/Orderer/Admins`)
However, most likely, you want to modify your existing consortium.
Only organizations defined in the same consortium may create channels with each other.
(Though once a channel is created, its membership may expand beyond the consortium which created it)
Okay sir. Now I'm going to modify my command to set the mod_policy along with the new consortium. Kindly check this command if there is some mistake:
`jq -s '.[0] * {"channel_group":{"groups": {"Consortiums":{"groups":{"OneThreeConsortium": {"groups": {"Org1MSP":.[1] , "Org3MSP":.[2] , "mod_policy":"Admins" }}}}}}}' config.json ./channel-artifacts/org1.json ./channel-artifacts/org3.json > modified_config.json`
The `mod_policy` should be set outside of the `groups` element, as a sibling (not as a child)
Also, the value should not be `"Admins"`, it should be `"/Channel/Orderer/Admins"`
Is it okay now?
`jq -s '.[0] * {"channel_group":{"groups": {"Consortiums":{"groups":{"OneThreeConsortium": {"groups": {"Org1MSP":.[1] , "Org3MSP":.[2] } , "mod_policy": "/Channel/Orderer/Admins" }}}}}}' config.json ./channel-artifacts/org1.json ./channel-artifacts/org3.json > modified_config.json`
Yes, that looks correct to me
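The placement confirmed here can be checked on a stub config. A minimal sketch, assuming `jq` is available (illustrative file names, empty objects standing in for real org definitions), showing `mod_policy` landing as a sibling of `groups` inside the new consortium:

```shell
# Stub config (not a real channel config):
echo '{"channel_group":{"groups":{"Consortiums":{"groups":{}}}}}' > /tmp/mp_base.json

# mod_policy sits NEXT TO the "groups" key of the new consortium,
# not inside it.
jq '. * {"channel_group":{"groups":{"Consortiums":{"groups":{"OneThreeConsortium":
      {"groups":{"Org1MSP":{},"Org3MSP":{}},
       "mod_policy":"/Channel/Orderer/Admins"}}}}}}' \
   /tmp/mp_base.json > /tmp/mp_merged.json

jq -r '.channel_group.groups.Consortiums.groups.OneThreeConsortium.mod_policy' /tmp/mp_merged.json
# → /Channel/Orderer/Admins
```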
Okay sir. let me try this one
@jyellick I salute you sir. Mission successful. Thank you so much that you have always helped me whenever I'm stuck. :)
@yousaf Happy to help, and glad that you have been successful
Thanks sir. :) One more thing: to create a channel configuration transaction, how would I know which profile name I have to specify for the newly added consortium, to create a channel.tx for it?
You can update your `configtx.yaml` to match the current state of the configuration of your orderer system channel. The organization info like MSPs/Certificates is not needed for the channel creation tx, only the org names.
For the profile, you must specify a consortium name, in this case, you would use your new `OneThreeConsortium` name.
Got it sir. Should I create a new profile for this new consortium? Is it necessary? Or should I add this new consortium to the existing profile named "TwoOrgsChannel"?
I would create a new profile, referring to the new consortium, like:
``` OneThreeChannel:
Consortium: OneThreeConsortium
Application:
<<: *ApplicationDefaults
Organizations:
- *Org1
- *Org3
Capabilities:
<<: *ApplicationCapabilities
```
Then refer to it when generating your channel creation tx
Done boss. I faced this error while using the peer channel create command. It seems that I also need to add this type of policy along with the consortium definition, as the error says:
`Error: got unexpected status: BAD_REQUEST -- initializing policymanager failed: policy ChannelCreationPolicy at path Channel/Application has unknown policy type: 0`
But I'll try to customize the jq command to add it too. Hopefully it will work; if it doesn't, I'm gonna ask for your help. By the way, thanks man again for your support :)
Ah, yes, you are quite right, I forgot that this value must be defined. You may see an example channel creation policy defined in the other consortium section
@jyellick Yes sir i have seen that. I would try to add it too :)
Hi experts here. Orderers keeping configuration transactions for the system channel is reasonable, but why do orderers also keep all transactions for each application channel? Thank you
@jyellick Thanks Jason for the reply..
Now I am running my network with 2 orderers, 4 Kafka nodes and 3 zookeeper instances. After instantiating the channel and invoking once,
When I stop Orderer0, invoking is not working, getting error saying connecting to
Has joined the channel.
@jyellick Hi sir. I am trying to add the ChannelCreationPolicy as we discussed yesterday, but facing some syntax issues while executing the jq command. Kindly correct this command if I'm wrong somewhere:
`jq -s '.[0] * {"channel_group":{"groups": {"Consortiums": {"groups": {"OneThreeConsortium": {"values": {"ChannelCreationPolicy": {"mod_policy":"/Channel/Orderer/Admins" , {"value": {"type": 3 , "value": {"rule": "ANY" , "sub_policy": "Admins"}}}}}}}}}}}' config.json > modified_config.json`
@SudeepS 2 Your application must handle failing over between orderers, if orderer0 does not work, you should retry on orderer1
@yousaf You wrote:
```jq -s '.[0] * {"channel_group":{"groups": {"Consortiums": {"groups": {"OneThreeConsortium": {"values": {"ChannelCreationPolicy": {"mod_policy":"/Channel/Orderer/Admins" , {"value": {"type": 3 , "value": {"rule": "ANY" , "sub_policy": "Admins"}}}}}}}}}}}' config.json > modified_config.json
```
I believe it should be:
```jq -s '.[0] * {"channel_group":{"groups": {"Consortiums": {"groups": {"OneThreeConsortium": {"values": {"ChannelCreationPolicy": {"mod_policy":"/Channel/Orderer/Admins" , "value": {"type": 3 , "value": {"rule": "ANY" , "sub_policy": "Admins"}}}}}}}}}}' config.json > modified_config.json
```
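A toy run of the corrected filter on a stub config (illustrative file names, everything else stripped down) confirms how the `ChannelCreationPolicy` value nests: `value` is a sibling of `mod_policy`, not nested inside it, and type 3 here is the ImplicitMeta policy type:

```shell
# Stub config with an empty "values" map for the consortium:
echo '{"channel_group":{"groups":{"Consortiums":{"groups":{"OneThreeConsortium":{"values":{}}}}}}}' > /tmp/ccp_base.json

jq '. * {"channel_group":{"groups":{"Consortiums":{"groups":{"OneThreeConsortium":
      {"values":{"ChannelCreationPolicy":
        {"mod_policy":"/Channel/Orderer/Admins",
         "value":{"type":3,"value":{"rule":"ANY","sub_policy":"Admins"}}}}}}}}}}' \
   /tmp/ccp_base.json > /tmp/ccp_merged.json

jq -r '.channel_group.groups.Consortiums.groups.OneThreeConsortium.values.ChannelCreationPolicy.value.type' /tmp/ccp_merged.json
# → 3
```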
@jyellick Thanks sir. Now, I've been successful in creating a new channel. :)
@jyellick Thank you very much for response. I am not specifying any orderer while making transaction. I don't know how handling needs to be done between orderers on failure. I tried searching over internet, but didn't get any response. Can you please help me on how to handle by providing any link which you know or any suggestions ? Thanks..
Can I ask here, regarding capability requirements? Can someone help answer my concern:
I am running a fabric network with `Channel Capability V1_1`, `Orderer Capability V1_1`, and `Application Capability V1_1` set;
my orderer and peers run on binary v1.1.1 and work fine.
I set up one peer, say org0-peer1, with version 1.0.6, and tried to join the channel; this peer cannot join, which is understandable.
However, when I joined an additional peer, say org0-peer2, built on `v1.2.0`, into the channel successfully, capability V1_1 is enabled for this peer, as the log shows below:
`2018-09-26 04:43:45.801 UTC [common/capabilities] Supported -> DEBU 295 Application capability V1_1 is supported and is enabled
2018-09-26 04:43:45.801 UTC [common/capabilities] Supported -> DEBU 296 Channel capability V1_1 is supported and is enabled`
Is it correct that org0-peer2 could join the channel? I thought that when I enable Application capability V1_1, only peers with version v1.1.x can join?
Thanks!
Has joined the channel.
Hello! I'm wondering: who is supposed to maintain the Orderer service in a multi-organization network?
@SudeepS 2 How are you doing your invoke? Each SDK has its own way of selecting an orderer
@Ryan2 The purpose of capabilities is to ensure that older binaries do not connect to channels which they cannot process. Newer peers generally support processing in the older models. The capabilities change the way transactions are processed (and transactions must be processed the same way by all peers on the network). This is working as designed.
At some point, for instance, a 2.0 peer may not be able to join a channel with capability lower than say.. v1.4, but for the time being, all newer peers support all older capabilities.
@krabradosty In a multi-organization network, if using Kafka, we recommend that a neutral third party administer ordering. For BFT consensus models (such as SmartBFT [released but not in main fabric], and eventually SBFT, currently under development), a more distributed ownership model of ordering makes sense.
Question: Say we have a running Fabric network with only one Orderer, and that Orderer is completely destroyed. If I spin up a new Orderer (with the right MSPs, etc.), will it sync its ledger with the leader peers so as to continue processing new transactions as expected? To say it differently, will it sync with the network to reach the same state as when it was destroyed?
@waxer No, orderers never sync from peers. In this case, manual recovery by copying a peer ledger to the orderer might be possible, however this would not be an automatic process.
@jyellick , oh ok. By 'copying a peer ledger' do you mean something as simple as copying a file (the whole ledger is persisted as a unique file?)?, or the procedure would be more complicated?
I believe that it would be as simple as copying the `/var/production/hyperledger` directory to `/var/production/hyperledger/orderer` on your orderer... I can't claim to have tried it though
@jyellick , would it be advisable to make backups of the orderer service, maybe?
Forget it
@waxer Certainly, I would suggest that the orderer service should be backed up, and you should never run an ordering service with only a single node in production.
The most critical component is the Kafka cluster; if the Kafka cluster is intact, you should be able to start a new orderer from the same genesis block and have it rebuild all ledgers.
@jyellick , you mean setting the kafka offset to 1, and let everything run again?
That sounds reasonable
Like in an event-sourcing system
@jyellick , in this context, two questions:
1- Would the re-issued blocks be exactly identical? Wouldn't the orderer sign the 'new' blocks with different timestamps?
2- The Orderer will send the 'new' blocks to the peers, which already have another block with the same BlockNumber... wouldn't this be an issue?
I'm sorry, I understood the Kafka cluster incorrectly. The blocks are in the Kafka cluster
@waxer You would not reset the Kafka offsets, rather, the new orderer would connect at the original offset 0, and replay all txes
The transactions are in the Kafka cluster, and they may be deterministically cut into blocks
The blocks are stored at the orderer
The re-issued blocks will indeed be identical. The 'batch timeout' and other non-deterministic seeming operations are actually driven in a deterministic fashion through the Kafka log
Oh, the batch timeouts was going to be my next question hehe.
So I understand that 'timestamps' are not generated when building blocks. Correct?
Yes, when an orderer's batch timer expires, it sends a message saying "time to cut" to Kafka. The first of these messages triggers the block cutting, afterwards, other messages are ignored.
Correct, blocks do not have a timestamp, only the txes in them
And what would happen when the orderer tries to re-send 'old' blocks to leading peers? Would the peers just dismiss them, since they already have that BlockNumber and the block hash would match too?
Orderers never 'push' blocks to peers. Peers connect to the orderer, and specify the block range they would like to receive.
If a peer already has the older blocks, it would simply connect to the orderer and ask for the newer blocks only.
The orderer would, for some period of time, reply 'not found' to the peer, until the orderer caught up to the point in the blockchain where the peer was.
After this, the orderer would begin servicing the requests of the peer normally.
This is why the orderer interface is an 'atomic broadcast interface' (like the common distributed systems term). The client pushes transactions as 'broadcasts', then the peer does not receive, rather, the peer instructs the orderer to 'deliver'
@jyellick , oh, the other day I asked the same thing about whether the peer fetches or the orderer pushes, and got a different explanation... (or maybe I misunderstood it). This was the explanation:
"The peer sends a single message saying the first block to get from the orderer
And then from that block onward, when the orderer has a new block it sends it down the stream"
Is that explanation correct?
Yes. The peer initiates the connection to the orderer `Deliver` interface, indicating the blocks it would like to receive (from block N onwards). The orderer then sends the blocks it has N and beyond, and, once it runs out, waits until a new block is produced, then sends this to the peer, then goes back to waiting.
So, the orderer does 'push' blocks to the peer, but only in response to a peer initiated request
@jyellick , oh ok.. its like an streaming RPC in gRPC?
It is a streaming gRPC RPC
https://github.com/hyperledger/fabric/blob/release-1.2/protos/orderer/ab.proto#L79
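For reference, the service in the linked `ab.proto` is declared roughly as follows (paraphrased; message fields omitted):

```protobuf
service AtomicBroadcast {
    // Clients submit transactions as a stream of signed envelopes.
    rpc Broadcast(stream common.Envelope) returns (stream BroadcastResponse);

    // Peers send a seek request (an envelope wrapping SeekInfo) and then
    // receive a stream of blocks, including newly produced ones.
    rpc Deliver(stream common.Envelope) returns (stream DeliverResponse);
}
```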
Great.. makes sense.
@jyellick , just one more question. If I have multiple orderers using the Kafka cluster, now I understand why every orderer can read the Kafka partition and generate the blocks in parallel since this is a deterministic process. That is smart. (Did I understand it correctly?). My question is: The multiple orderers *don't* belong to the same consumer group in Kafka, right?
I mean, there's no concept of a 'master' Orderer and fallback orderers, as there would be if they were all in the same consumer group.
> If I have multiple orderers using the Kafka cluster, now I understand why every orderer can read the Kafka partition and generate the blocks in parallel since this is a deterministic process.
Correct.
> The multiple orderers *don't* belong to the same consumer group in Kafka, right?
Correct, each orderer consumes all messages, (so they are effectively each in their own group).
@jyellick , great! Nice design. I hope to be able to help the project at some point.
Thanks for your help.
Happy to help. If you see weak points in our documentation or would generally like to be helpful, please feel free to submit change requests or open JIRA issues so that other newcomers will not have to experience your same pain :slight_smile:
I've been working on getting a peer to create and join a channel, but I don't seem to be making any progress lately. I am using fabric v1.2. When I issue the `peer channel create ...` command in the peer's CLI I receive this message:
`Error: got unexpected status: BAD_REQUEST -- error authorizing update: error validating DeltaSet: policy for [Group] /Channel/Application not satisfied: Failed to reach implicit threshold of 1 sub-policies, required 1 remaining`
On the orderer's side I receive a similar message: `OrdererContainer | 2018-09-26 20:55:33.425 UTC [orderer/common/broadcast] Handle -> WARN 14d [channel: timetestchannel] Rejecting broadcast of config message from 172.22.0.5:39192 because of error: error authorizing update: error validating DeltaSet: policy for [Group] /Channel/Application not satisfied: Failed to reach implicit threshold of 1 sub-policies, required 1 remaining`
The thing is, I've changed the configtx.yaml `/Channel/Policies/Admins/Type: ImplicitMeta, Rules: "Any Admins"` to allow any admin signature to satisfy the policy, I've read through the Policies in Hyperledger page, scoured stackExchange, and looked at yousaf's recent issues above, but I cannot find a Channel/Application policy requirement anywhere. Adding a Channel/Application line in the configtx.yaml does not seem to help. I know I'm missing something, but I don't know where to look next. What can I focus on or what further information can I provide to help resolve this issue?
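For comparison, the v1.x sample `configtx.yaml` defines the `/Channel/Application` policies in the Application defaults section, roughly like this (the rule strings shown are illustrative; adjust to your own network):

```yaml
Application: &ApplicationDefaults
    Organizations:
    Policies:
        Readers:
            Type: ImplicitMeta
            Rule: "ANY Readers"
        Writers:
            Type: ImplicitMeta
            Rule: "ANY Writers"
        Admins:
            Type: ImplicitMeta
            Rule: "ANY Admins"
```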
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=MvfEPCHMwrv6mWe4y) @jyellick Hi Jason, I am using Node SDK and balance-transfer project.
@SudeepS 2 Then the #fabric-sdk-node channel would be more appropriate, but in general, you declare orderers, then target the broadcast function on the orderer instance.
I'm still working on resolving my above error. Here's the configtx.yaml for the channel defaults: https://hastebin.com/axagesezom.makefile
My `CORE_PEER_MSPCONFIGPATH` points to the msp directory of the administrator who registered the peer in the docker-compose-cli.yaml
Any ideas on what to correct?
Did you set your `CORE_PEER_LOCALMSPID`?
Yes, it is set to CorpMSP in both the docker-compose-cli.yaml and docker-compose-base.yaml for the peer trying to create the channel.
```bash
7 18:11:36.348 UTC [channelCmd] InitCmdFactory -> INFO 001 Endorser and orderer connections initialized
2018-09-27 18:11:36.826 UTC [channelCmd] executeJoin -> INFO 002 Successfully submitted proposal to join channel
```
Does this mean that a peer has successfully joined the channel? If not, how do I approve that proposal?
@jvsclp Can you set logging to debug on the orderer (`ORDERER_GENERAL_LOGLEVEL=debug`) and reproduce the failure, then paste the orderer logs to hastebin?
@hypere Yes, this would imply that the peer joined the channel.
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=zso8P8SAhtDoN2YuM) @jyellick Here are the orderer logs: https://hastebin.com/ugiquvazud.cs
And here are the peer logs, just in case: https://hastebin.com/qoserufale.m
If I have anything else misconfigured feel free to call me out. I know there will be more bugs going forward
See line 1611 of the orderer logs:
```ERRO 147 Principal deserialization failure (MSP CorpMSP is unknown)```
Looks like the MSP ID should be `CLPMSP`? Not `CorpMSP`?
Yes and no. I think I may have a fundamental misunderstanding of how the MSP structure should be then. Right now, this is a network with one organization and several organizational units: Corporate, site1, site2, and so on. The way I understand the MSP structure, if the OUs each have their own intermediate certificate authority then each node should have their own local MSP which is why you are seeing CLPMSP and CorpMSP. CLPMSP is the MSP structure for the entire organization and CorpMSP only covers identities within the corporate headquarters. In this case I can't have all the signing certs under one MSP, e.g., CLPMSP, but I have to break each node out with its own local MSP. Would I have to represent each organizational unit as an organization in the `configtx.yaml` for the peer nodes' MSPs to be pulled in? Or can I point to the CLPMSP as the `CORE_PEER_LOCALMSPID`, in the `docker-compose-cli.yaml`, but set the `CORE_PEER_MSPCONFIGPATH` to point to the administrator who registered the peer? Or am I completely off track?
Has joined the channel.
hi all, I have changed the orderer type from solo to kafka in the compose file and also added kafka and zookeeper in the compose file. How can I confirm that the orderer is using the Kafka cluster using the docker logs of the orderer and Kafka?
@jvsclp Naturally, you can really approach it either way. You may define a single organization, with a common root CA and multiple intermediate CAs, then associate an OU with each intermediate CA, and write your policies by MSP+OU [support for this exists, but the tooling is not great]. IMO, this is the much harder way to do things though. Instead you may simply create multiple orgs, each with their own MSP definition. And you may identify the unit by the MSPID instead of by OU.
@srinivasd The orderer prints out a message when starting at info level:
```INFO 004 Starting system channel 'test-system-channel-name' with genesis block hash 84d4f2255f8ae8091533fa1498714a9274b482b4c00e28d4164ab1bda4c23cf2 and orderer type solo
```
This is an example using solo, but you may see the same for Kafka
e.g. `and orderer type kafka`
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=iuE7B2QrnCEN8z7kS) @jyellick Where would I be able to find more information on the first scenario, MSP+OU? That is, how to format and call it correctly in the configuration files? While more difficult this is the setup my team agreed on as being the most flexible in meeting our organization's needs.
@jvsclp This option is certainly the less well documented one. To begin with, in your MSP definition you should associate each of your intermediate CA certs with a particular OU. https://github.com/hyperledger/fabric/blob/release-1.2/sampleconfig/msp/config.yaml#L6-L8 is for a root CA requiring an OU, but the syntax is the same for the intermediates.
Then, when authoring your policies, you must use the principal type of OU.
https://github.com/hyperledger/fabric/blob/release-1.2/protos/msp/msp_principal.proto#L56
https://github.com/hyperledger/fabric/blob/release-1.2/protos/msp/msp_principal.proto#L87-L103
The CLI tools with the string friendly syntax do not support OUs, so you will need to use an SDK and create the protos directly.
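For orientation, the relevant messages in the linked `msp_principal.proto` look roughly like this (paraphrased; comments and some enum values omitted):

```protobuf
message MSPPrincipal {
    enum Classification {
        ROLE = 0;
        ORGANIZATION_UNIT = 1;  // the principal bytes hold an OrganizationUnit
        IDENTITY = 2;
    }
    Classification principal_classification = 1;
    bytes principal = 2;
}

message OrganizationUnit {
    string msp_identifier = 1;                 // which MSP this OU belongs to
    string organizational_unit_identifier = 2; // e.g. "corp"
    bytes certifiers_identifier = 3;           // identifies the certifying CA chain
}
```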
> While more difficult this is the setup my team agreed on as being the most flexible in meeting our organization's needs.
I'd be curious to hear more about why multiple OUs within a single org is superior for your needs. Perhaps it's something we can address
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=RMAWRGqnbh8ABFCyd) @jyellick We are using hyperledger with two internal use cases.
The first is maintaining an immutable timekeeping record across the organization, with each organizational unit (site) represented as its own entity for government auditing purposes. We chose to keep the organizational units within the organization rather than as separate entities because employees can do work for other sites, and government audits can be at the organizational level or the OU level.
Our second use case is focused on inventory tracking. The same setup as before, except we want the sites to be able to communicate their inventories across the organization; some of the items are extremely capital-intensive or require long manufacturing lead times, so being unable to "hide" assets or have them forgotten in a warehouse or yard is extremely beneficial when the cost to ship an item is less than the alternative of waiting for a newly manufactured piece of equipment.
We thought of using a permissioned database for the inventory case, but given the work we were already doing for the first case, and leadership's desire to have a system which can be leveraged to show technical leadership and expand possible business opportunities, it seemed too good to pass up.
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=EJQPZaNgDXj6oPLzK) @jyellick I'm good here. My config.yaml was already set up for organizational units under an intermediate certificate authority.
`OrganizationalUnitIdentifiers:`
`- Certificate: "intermediatecerts/CLPCorpCA-cert.pem"`
`OrganizationalUnitIdentifier: "corp"`
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=WGfbm6N5q8omAxMT3) @jyellick Here is where I get an opportunity to learn something new. Not that this isn't all new to me!
Question:
Consider the case of a replay attack. In which part of the transaction flow would the tx be rejected?
My thoughts (which may be wrong):
In the commit phase, since the txID certainly exists in the ledger, *the replayed tx is committed as invalid in the ledger*. There's also the possibility of rejection in the endorsement phase *if* the first tx was already committed to the ledger. And it could also be rejected at endorsement phase if the first one wasn't committed yet, but only if the endorsers keep track of 'pending' txIDs (not sure this is the case; it sounds odd to keep a separate 'temporary' txID collection).
How is this implemented?
@waxer -
1) Endorsers check whether the txID already exists in the ledger
2) At commit time it's also checked - in the validation phase
There is no point in keeping track of pending in-flight transactions... this isn't bitcoin.
a transaction would only be in the air for "so much" time, so if you get 2 transactions with the same ID in the same block - one of them should be refused indexing by the ledger's index ( @manish-sethi is that correct?)
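Conceptually, the duplicate-txID check described above is a set-membership test against IDs already committed. A toy sketch in Go (purely illustrative; the type and function names are made up, and a real peer consults its block index rather than an in-memory map):

```go
package main

import "fmt"

// ledger is a toy stand-in for the committer's view of committed txIDs.
type ledger struct {
	committed map[string]bool // txIDs already written to the ledger
}

func newLedger() *ledger { return &ledger{committed: map[string]bool{}} }

// commit reports whether the transaction is valid: a previously seen txID
// is a replay and is flagged invalid (the block still stores it, but with
// an invalid validation code).
func (l *ledger) commit(txID string) bool {
	if l.committed[txID] {
		return false // duplicate txID: replay, marked invalid
	}
	l.committed[txID] = true
	return true
}

func main() {
	l := newLedger()
	fmt.Println(l.commit("tx-123")) // first submission: valid (true)
	fmt.Println(l.commit("tx-123")) // replay: invalid (false)
}
```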
This may not be the right room, but can I ask about a scenario I tested, as below:
My fabric network is a simple one: 1 org (2 peers: 1 endorser, peer0, + 1 committer, peer1) and 1 orderer.
1. I record variableA with a value of 200 marbles, and this variable is committed into the ledger and inserted into couchdb
2. Someone might log into peer0's couchdb and modify variableA to 500 marbles (at this point, the document version in couchdb has already changed and no longer matches the rest of the peers' couchdbs)
2.1 I then make a transfer transaction of 500 marbles from variableA to variableB, and this transaction succeeds.
My understandings are:
1. This means that fabric does not maintain consistency of the document versions across the peers' statedbs.
2. And so protection (security) of the statedb against modification of any key's value becomes critically important?
Is my understanding correct?
Thanks for the help!
@Ryan2 The endorsement policy protects against this - peers from different orgs as specified in the endorsement policy must return the same chaincode execution results for transactions to be validated. If ledger state data had been altered or corrupted (in CouchDB or the LevelDB file system) on a peer, then the chaincode execution results would be inconsistent across endorsing peers, the 'bad' peer/org will be found out, and the application client can throw out the results from the bad peer/org before submitting the transaction for ordering/commit. If a client application tries to submit a transaction with inconsistent endorsements regardless, this will be detected on all the peers at validation time and the transaction will be invalidated.
Of course, it is still recommended to secure CouchDB: 1) it is recommended to have the peer and couchdb on the same host and not expose the couchdb port on the host, so that nothing beyond the peer can connect. 2) set a username/password in the couchdb and peer config; this will also engage database-level security such that no other user can read/write the data
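Following that advice, a peer/CouchDB pairing in a docker-compose file might look roughly like this (service names and credentials are placeholders, not taken from the thread):

```yaml
services:
  couchdb0:
    image: hyperledger/fabric-couchdb
    environment:
      - COUCHDB_USER=peer0admin        # enables CouchDB's own authentication
      - COUCHDB_PASSWORD=peer0adminpw
    # note: no "ports:" mapping, so CouchDB is not exposed on the host

  peer0:
    image: hyperledger/fabric-peer
    environment:
      - CORE_LEDGER_STATE_STATEDATABASE=CouchDB
      - CORE_LEDGER_STATE_COUCHDBCONFIG_COUCHDBADDRESS=couchdb0:5984
      - CORE_LEDGER_STATE_COUCHDBCONFIG_USERNAME=peer0admin
      - CORE_LEDGER_STATE_COUCHDBCONFIG_PASSWORD=peer0adminpw
    depends_on:
      - couchdb0
```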
@dave.enyeart thank you very much for the explanation, I got it.
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=s9bwcJbu55446hMBg) @yacovm - @waxer Yes, this is correct.
@jyellick Hi, I have changed the orderer type to kafka. How can I confirm that data is passing through the kafka brokers to the peers when I do a transaction? Please advise. Thanks in advance
@srinivasd If the orderer indicates the consensus type is Kafka on startup, then I would consider that confirmation. If you are really suspicious/curious, you may inspect the offsets of the Kafka brokers to see them advance as you transact.
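If you do want to inspect offsets, Kafka ships a utility for it; something along these lines (this needs a running cluster, and the broker address and topic/channel name are placeholders for your deployment):

```
# Print the latest offset (-1) for the topic backing channel "mychannel".
# Run it before and after a transaction: the offset should advance.
bin/kafka-run-class.sh kafka.tools.GetOffsetShell \
  --broker-list kafka0:9092 \
  --topic mychannel \
  --time -1
```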
Has joined the channel.
Has joined the channel.
@jyellick Ok Thanks
@jyellick I am using the balance-transfer project, and changed the ordering service to kafka with two orderers. Now if I make transactions, which orderer will process the transaction? And if one of the orderers goes down, will transactions be handled by the other orderer? Please help me out with my queries..
Hey guys, I am trying to configure ACL policies for my chaincode, but I’ve only been able to add restrictions to the `peer/Propose` resource in `configtx.yaml`. My question is: how can I add policies so, for example, only Org1 members can use `Put` and only Org2 members can use `Get`, where `Put` and `Get` are specified operations in my chaincode? Is there any way to do something like `mycc/Put: /Channel/Application/Org1MemberPolicy`?
Folks, if we have multiple orderers, should we assign an organisation to each orderer? If so, how do you specify it in configtx.yaml?
@SudeepS 2 https://chat.hyperledger.org/channel/fabric-orderer?msg=WjwSsRNWBAicqRqkw
@venzi The ACLs are only for system chaincode functions, they are not applicable to user chaincodes.
@paul.sitoh There is no way to tie a particular organization to a particular orderer. This is being enhanced in the development of the new Raft orderer, but is not present as of yet.
ok thanks
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=vsyMfmZzdrqxhzwGR) @jyellick Alright, thank you!
Has joined the channel.
Has joined the channel.
Has joined the channel.
Hello guys, do you have an example of how to configure the orderer and kafka cluster for TLS authentication? Do you know if one orderer can use multiple kafka clusters with auth? In orderer.yaml I see only one set of keys that can be used. Thank you
you can use multiple kafka clusters only if they share the same root CA @jiribroulik
so if one has a larger fabric network, getting the kafka cluster to create the topic for a new channel can take longer than one would wish.
which in turn causes the code in the sdk that does the create call to time out.
ok, that timeout value is just a parameter and I can raise it so my sample works, but that seems fundamentally wrong. One is still hoping it will complete in (some longer) fixed time.
is there any way to make this deterministic - have the caller wait and return when the call is complete and throw an error only if kafka can't be reached or there's a networking issue?
@aatkddny can you elaborate? You're saying topic creation takes too much time, and you want the orderer to return some intermediate code that would make the SDK poll it ?
@yacovm what I would want is for the orderer to not return until it is GTG or if it throws an error rather than having to look for 503s and loop.
The timeout is in the SDK code to get the genesis block - it loops on 503s and throws an exception if it takes longer than a predefined (property) number of ms.
the root cause *appears* to be the amount of time kafka is taking in my case, but honestly i don't care what's causing it.
creating a channel should be deterministic.
you are totally hosed if this fails halfway through.
but how do you distinguish between trying to create a channel and timing out prematurely and trying to create a channel and timing out because you'll never make it work?
if you "hold the line" it may be detrimental for the orderer's memory, no?
imagine many clients try to submit transactions
if kafka is unreachable.... and you "hold" the line - you also hold the resources no?
screw transactions. something as fundamental as creating a channel is a special case.
if that fails halfway through you are hosed.
right so you want special care for channel creation, right?
you can't retry
yes.
anything else can be retried. this totally hoses my network.
hmmmmm @kostas @jyellick what is your opinion?
(thanks for the clarifications @aatkddny )
my pleasure. or not - it takes me a half hour to stand up my network and it's failed three times now due to this timeout. i may be a little *passionate* about it right now :)
you're passionate in general, or - at least your comments make it sound so
no i was annoyed. i think a polling loop with a timeout for a must complete transaction isn't the best way to do things.
Has joined the channel.
Hi! Can I change the log level of my orderer nodes when the docker containers are running? I currently have a 3-OSN setup with Kafka.
Hello again! This question is about orderer storage.
Are transactions saved forever in all orderer nodes? I can see after sending a few thousand transactions that the production/orderer/chains/(mychannel) in the orderer node contains the same data as the peer node production/ledgerdata/..../(mychannel).
- What is the point of the orderer keeping my transactions and can it be turned off? It's taking a lot of space.
- Are all transactions also kept in the Kafka node partitions? And for how long?
- And do the blockfile_000xxx contain the actual blocks of the blockchain? Is it the only place where they are stored?
Thanks!
Yeah they are currently saved forever
And yes- they are also kept in kafka
But kafka has a retention policy if I remember correctly, @emiliastk
The block files contain aggregated blocks
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=Mi7mXo3cbzMmufsk5) @emiliastk No, you cannot. A JIRA that may be of interest to you is open on this topic: https://jira.hyperledger.org/browse/FAB-12265
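As a startup-time alternative: the level can't be changed while the container runs, but it can be set when the container is (re)started, e.g. via the environment in docker-compose. The service name below is just an example:
```yaml
services:
  orderer.example.com:
    image: hyperledger/fabric-orderer
    environment:
      # v1.3 and earlier; from v1.4 on use FABRIC_LOGGING_SPEC=debug instead
      - ORDERER_GENERAL_LOGLEVEL=debug
```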
BFT
Has joined the channel.
Is there a good diagram of how a distributed orderer is configured with Kafka? If I have a channel with 3 orgs, which instances run at which orgs? Does each org have 2 kafkas, and 1 zookeeper? The documentation is confusing, because it refers to a separate orderer org.
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=wcyEgDdvTs8BT5HhX) @f2632799 I would say that you would use the same kafka cluster with different "topics" for the orderers, and a minimum 3-node zk cluster.
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=3iCmoZvkSaGLMLuaH) @nsabharwal I'm confused. Are you saying that orgs2 and 3 trust org1 with all of the orderers in the cluster? Or are you saying that org1, org2 and org3 hire independent orgX to host/own the orderers?
I have a certificate affiliation misconfiguration issue that I'm not sure how to resolve. Our Orderer admin cert has an affiliation of 'org1'. Our policy for updating Orderer configurations (e.g. upgrading to 1.2 capabilities) requires an admin from the 'ordererMSP'. Since our admin cert has the wrong affiliation, we're not able to change any channel configurations: `identity 0 does not satisfy principal: the identity is a member of a different MSP (expected ordererMSP, got org1MSP)`. I thought of updating config block ```"values": {
"MSP": {
"mod_policy": "Admins",
"value": {
"config": {
"admins": [
"LS0...snip" <-- this admin cert has the wrong affiliation
],
"fabric_node_ous": null,
"name": "ordererMSP",``` admins but figured I'd run into the same affiliation mismatch. Thoughts?
Has joined the channel.
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=fwq2QAP9uWXw22Tkh) @f2632799 Let me find something for you
Hi. In dockerhub there are images for Kafka and zookeeper. There's any guidelines of how to use them? How are those images made? (Dockerfile?)
Hi! Can anyone help me to find some documentation about how ordering service in solo is implemented at software level, how does it manage parallel transaction?
Folks, I was wondering if this is the way to specify multiple channel definitions in a single configtx.yaml? In the following example spec, I am planning to create definitions for two channels -- one named TwoOrgs and the other named ThreeOrgs. Is that correct? ```Profiles:
TwoOrgOrdererGenesis:
Capabilities:
<<: *ChannelCapabilities
Orderer:
<<: *OrdererDefaults
Organizations:
- *OrdererOrg
Capabilities:
<<: *OrdererCapabilities
Consortiums:
TwoOrgConsortum:
Organizations:
- *Org1
- *Org2
TwoOrgChannel:
Consortium: TwoOrgConsortum
Application:
<<: *ApplicationDefaults
Organizations:
- *Org1
- *Org2
Capabilities:
<<: *ApplicationCapabilities
ThreeOrgOrdererGenesis:
Capabilities:
<<: *ChannelCapabilities
Orderer:
<<: *OrdererDefaults
Organizations:
- *OrdererOrg
Capabilities:
<<: *OrdererCapabilities
Consortiums:
ThreeOrgConsortum:
Organizations:
- *Org1
- *Org2
- *Org3
ThreeOrgChannel:
Consortium: ThreeOrgConsortum
Application:
<<: *ApplicationDefaults
Organizations:
- *Org1
- *Org2
- *Org3
Capabilities:
<<: *ApplicationCapabilities
```
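Assuming the profile names above, the invocations would look something like the following command sketch. One caveat: only one genesis profile can bootstrap the ordering service, and a channel can only be created against a consortium defined in that system channel, so as written the two genesis profiles are mutually exclusive. To be able to create both channels, a single genesis profile would need to define both TwoOrgConsortum and ThreeOrgConsortum.
```shell
# bootstrap the ordering service from ONE of the genesis profiles
configtxgen -profile ThreeOrgOrdererGenesis -outputBlock genesis.block

# channel creation transactions (channel IDs are placeholders)
configtxgen -profile TwoOrgChannel -outputCreateChannelTx twoorg.tx -channelID twoorgchannel
configtxgen -profile ThreeOrgChannel -outputCreateChannelTx threeorg.tx -channelID threeorgchannel
```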
Question: any documentation about Kafka vs Raft as a consensus mechanism? At least some key points?
@waxer: https://drive.google.com/open?id=11qDdi0-93f7CwYxsdmTl9BWITYy5VUtl
@kostas , great video. Seen the first 5 mins and it already answered my question. TL;DR: fewer hosts to provide CFT consensus, and removing third-party dependencies lowers complexity in implementation and operation.
Later I'll check the full video to see the details 👍
Has joined the channel.
```bash
Error: error endorsing chaincode: rpc error: code = Unknown desc = access denied: channel [xxx] creator org [xxxMSP]
```
I received the following error when trying to instantiate the channel. I have made sure to set CORE_PEER_MSPCONFIGPATH,CORE_PEER_ADDRESS,CORE_PEER_LOCALMSPID,CORE_PEER_TLS_ROOTCERT_FILE to the correct org and to generate the crypto before i run anything. what could be the possible reason for this?
I want to do test the connection to my orderer using `grpcurl`, but running `grpcurl -insecure orderer.example.com:7050 list` returns:
```
Failed to list services: server does not support the reflection API
```
Where can I find out the methods I can call for the Orderer via GRPC?
take a look at protos defining service?
@guoger, good point, but where to find this? I'm not highly versed in Go (yet) and documentation of orderer is scarce on the ReadTheDocs Fabric site
@alexvicegrab: https://github.com/hyperledger/fabric/blob/release-1.2/protos/orderer/ab.proto
Thank you @kostas
this might not be the best place for this but I'll give it a try.
i have a process that i can run to create a fabric network from scratch.
i can create new channels as required as long as they are in my original (templated) configtx and i run the `-outputCreateChannelTx` on configtxgen.
if i want to add a new channel that wasn't part of the original set it's a chore to recreate the configtx with this new channel in there so I can run configtxgen.
is there a minimal configtx file i can get away with for this purpose - and if so what would it look like; or am I required to reconstruct the whole thing?
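Untested sketch: since `configtxgen -outputCreateChannelTx` only reads the named profile plus the Organizations it references, a cut-down file along these lines might be enough. All names, MSP paths and the consortium are placeholders, and the consortium must match one already defined in the running system channel:
```yaml
# minimal configtx.yaml sketch for generating only a channel creation tx
Organizations:
  - &Org1
    Name: Org1MSP
    ID: Org1MSP
    MSPDir: crypto-config/peerOrganizations/org1.example.com/msp
  - &Org2
    Name: Org2MSP
    ID: Org2MSP
    MSPDir: crypto-config/peerOrganizations/org2.example.com/msp

Profiles:
  NewChannel:
    # must name a consortium already present in the system channel
    Consortium: SampleConsortium
    Application:
      Organizations:
        - *Org1
        - *Org2
```
then `configtxgen -profile NewChannel -outputCreateChannelTx newchannel.tx -channelID newchannel`.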
Can someone give me some insight into who owns the KAFKA orderers and the zookeeper instances in a hyperledger network? If I have 3 orgs in my network, does each org own its own kafka orderer instance? What about zookeeper?
The documentation is vague on which company is hosting the orderers. It's clear that each company/org owns its own peer instances. But the docs refer to an orderer org. What does that translate into in the real world?
Hi @kostas, thanks for the tip. I'm still however unable to run any commands. For instance, having installed the basic network with `byfn.sh` locally, and setting 127.0.0.1 to point to orderer.example.com I attempt this:
```
grpcurl -insecure orderer.example.com:7050 orderer.AtomicBroadcast/Deliver
```
and see this same error message:
```
Error invoking method "orderer.AtomicBroadcast/Deliver": failed to query for service descriptor "orderer.AtomicBroadcast": server does not support the reflection API
```
Even though I can do this (actual root domain replaced with `example.com`), using this https://github.com/kubernetes/ingress-nginx/blob/master/images/grpc-fortune-teller/proto/fortune/fortune.proto:
```
grpcurl fortune.example.com:443 build.stack.fortune.FortuneTeller/Predict
{
"message": "Regression analysis:\n\tMathematical techniques for trying to understand why things are\n\tgetting worse."
}
```
Has joined the channel.
Hi, I had a question related to the orderer of fabric.
The ordering nodes are only responsible to find a valid order.
Does this include the analysis of conflicts?
Or is the validity of the block only checked at the learners after the ordering?
@alexvicegrab never tried it, but you need to modify orderer code to enable reflection service, see [this](https://github.com/grpc/grpc-go/blob/master/Documentation/server-reflection-tutorial.md)
@Raycoms if you are talking about the validity of tx, then yes, they are checked by committing peers.
@guoger so the ordering phase practically only receives transactions on one side and returns blocks with the ordered transactions inside on the other side. Do the ordering nodes have to do any validation of this, or only make sure all of them return the same block to the committing peers?
@guoger, thanks for the link. I will check it out.
Still, I'm wondering if there is a straightforward way of testing and debugging the connection with the orderer node (by somehow simulating a peer connection) without needing to update the source code, which will take a while (firstly for me to improve my Golang sufficiently to contribute and then the review process, and finally wait for the release of Fabric 1.4 in 3 months time).
I'm asking myself what the worst thing is the ordering service could do if it would be byzantine besides:
- Sending different blocks to different committing peers
- Being slow on purpose
- skip transactions
Question: Where is the Docker file of hyperledger/fabric-kafka and hyperledger/zookeeper?
Found it: https://github.com/hyperledger/fabric-baseimage
> @alexvicegrab never tried it, but you need to modify orderer code to enable reflection service, see this
FWIW, this is correct. I have tested `grpcurl` in the past and had to modify our code for it.
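As a workaround that avoids modifying the orderer, `grpcurl` can load the service definition from the `.proto` files directly instead of using server reflection - a command sketch, assuming a local checkout of the fabric repo:
```shell
# grab the proto definitions (branch matching your Fabric version)
git clone --depth 1 -b release-1.2 https://github.com/hyperledger/fabric.git

# describe the orderer service from the .proto instead of server reflection
grpcurl -insecure \
  -import-path fabric/protos \
  -proto orderer/ab.proto \
  orderer.example.com:7050 describe orderer.AtomicBroadcast
```
Actually invoking Broadcast/Deliver still requires a properly signed Envelope message, so this mostly helps with connectivity/TLS debugging.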
@f2632799: You would probably want to assign the ordering service (i.e. the ordering service nodes, the Kafka cluster, and the ZK ensemble) to a third party that all participants are cool with. Obviously not ideal from a trust-nobody perspective; it works better in networks where censorship is not a concern and you want all the hash-chain benefits for auditability. The BFT work that we'll be resuming soon will address the censorship resistance concern.
> I'm asking myself what the worst thing is the ordering service could do if it would be byzantine besides:
> - Sending different blocks to different committing peers
> - Being slow on purpose
> - skip transactions
@Raycoms: You nailed it. These are exactly the things it could do.
i.e. it can censor and reshuffle, but it cannot modify the transactions themselves.
Thanks @kostas so if I write my own ordering service I "only" have to make sure that 2f+1 equal blocks come out and reach the committing peers while making sure that the above can't happen?
I mean reshuffle is something any primary which is selected for defining the order can do
Correct RE: reshuffling. I missed the 2f+1 proposal though.
Regarding the committing peers: is there some documentation on what the response to them has to look like, so they can verify that the result of the orderer is correct?
Has joined the channel.
Has left the channel.
Where can i find a list of all available Orderer env variables?
@gen_el , I'm not sure there's a documentation explaining that in detail. But I looked the code: https://github.com/hyperledger/fabric/blob/release-1.3/orderer/common/localconfig/config.go
Using Viper is a usual way of handling configuration in golang. Maybe some expert may check if this is ok.
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=qFa43ZjMnsKsnibdP) @waxer I guess this should do. Thanks
Can't find any documentation regarding the committing peers on how they verify the block of the ordering service, in terms of: how many messages do they expect, are they able to verify signatures, am I able to customize this, etc.
Does an orderer container need the genesis block file when spinning up the network? I am trying to manually spin up a network on multiple hosts with docker swarm. When I spin up the orderer container, I get this error: `panic: Unable to bootstrap orderer. Error reading genesis block file: open /etc/hyperledger/fabric/genesisblock: no such file or directory` Do I need to create channel artifacts before running?
Hello everyone, are multiple "Orderer orgs" possible?
Hi all
how to enable tx re-submitting to avoid warnings that orderer works in compatibility mode?
@MohammadObaid , AFAIK, yes. You should mount the genesis block to bootstrap the ordering service.
Hey @waxer thanks for the response :). That means first I have to generate the crypto material, then the genesis block file, and then spin up the network? I am asking related to a production environment. My fabric-ca and intermediate ca are running somewhere else and I can get keys by using fabric-ca-client.
Hi all, question about chaincode execution - during the transaction commit phase, do orderers re-execute chaincode? I'd assumed they just ordered and transmitted the blocks - but this question came up and I wasn't 100% sure. Thanks!
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=oHxXZJzTbH34Q7EM2) @jdfigure Yeah, they just order and transmit.
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=BFhwHqcM2bMsctPxd) @NoLimitHoldem When you look at the `configtx.yaml`, it seems to support a multi orderer org configuration. I just haven't tried it yet. It should work as long as the orderers can access the kafka cluster.
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=5DDZtaM72Jd4LEFXG) @gen_el Thanks!
I am still trying to figure out how the committing peers figure out that the output of the orderer is correct if configured to PBFT
Hi everyone. Is there any limit for a single ordering service node to bear the maximum specified number of channels and maximum number of organizations within each channel?
Has joined the channel.
Has joined the channel.
Regarding a BFT implementation of the ordering service... the "Deliver()" interface should mutate to some kind of majority "Deliver()" from the Ordering cluster?. Even though the Orderers could reach consensus about the order of transactions considering N byzantine orderers, I guess there's some extra work to let the peers 'trust' the consensus achieved by the Ordering service. The 'Deliver()' interface is in the context of a particular Orderer, or in the context of the Ordering service as a whole?
Has joined the channel.
Hi, I would like to setup kafka and zookeeper on my project but I have no idea where to start other than changing the orderer from solo to kafka! Can someone link me with a good tutorial on how to operate. Hyperledger's website isn't exactly the best.
Has joined the channel.
Hi, I'm not fully sure if this the right channel for me to post my question. But I'm facing an issue with my orderer client not being able to connect to orderer (orderer.example.com:7050). If this is not the right section- appreciate alternate areas to post.
Error: failed to create deliver client: orderer client failed to connect to orderer.example.com:7050: failed to create new connection: context deadline exceeded
!!!!!!!!!!!!!!! Channel creation failed !!!!!!!!!!!!!!!!
This earlier was preventing me from instantiating my private collection chaincode on the channel.
I felt it was a network issue between containers and added 'network_mode: host' and '- CORE_VM_DOCKER_HOSTCONFIG_NETWORKMODE=host' to almost all docker-compose-base.yaml and peer-base.yaml files.
reference: https://stackoverflow.com/questions/44775844/error-starting-up-a-network-nodes-in-different-vm-failed-connecting-to-ordere
https://jira.hyperledger.org/browse/FAB-3337
my goal is to create a working private collection demo on fabric, which code sample/config etc is not relevant. I just need a working version of the feature.
If someone can share their working configs (yaml file(s)) it would be of real help.
Orderer container is not being bootstrapped and is exiting. I have checked the orderer logs and found this error. Can someone explain it better?
"Error creating channelconfig bundle: cannot enable channel capabilities without orderer support first"
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=ABniLnQtCrmCRCBbi) @Msaleh97 Have you taken a look at https://hyperledger-fabric.readthedocs.io/en/release-1.3/kafka.html ?
@DeepakMP I've answered your duplicate question on #fabric-samples and #fabric
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=ecReavpKZHaEPfexf) @dave.enyeart I'm doing the same right now. Thank you.
Hey @mastersingh24, to generate the genesis block using configtxgen we have to pass the configtx.yaml file, right? In that file we have to pass the MSP directory path of each organization and orderer. Now in a production environment all the generated MSPs of the organizations will not be in the same directory, due to the private keys. In such cases how do we pass the MSP directory path in configtx.yaml?
Or does configtx.yaml just need `cacerts` and `signcerts` from the msp directory?
@jyellick I am facing these two issues.
1) Orderer container is not being bootstrapped and is exiting. I have checked the orderer logs and found this error. Can someone explain it better?
"Error creating channelconfig bundle: cannot enable channel capabilities without orderer support first"
2) Peer fails to join the channel and getting this error:
grpc: addrConn.createTransport failed to connect to {peer0.org1.example.com:7051 0
@yousaf
> "Error creating channelconfig bundle: cannot enable channel capabilities without orderer support first"
You have attempted to enable either the channel or application group capabilities without also enabling the orderer group V1_1 capability. The orderer group capability is a prerequisite for enabling any other capabilities.
> transport: authentication handshake failed: x509: certificate signed by unknown authority (possibly because of \"x509: ECDSA verification failure\"
This error indicates a TLS handshake error. Ensure that the TLS CA certs are correctly configured for your invocation.
@jyellick But i have defined V1_1 for orderer in configtx.yaml file in orderer section of genesis block but i am still getting this error.
Can you please paste your `configtx.yaml` to a service like hastebin.com ? Also, the full orderer startup log which fails?
@jyellick From where i would have to check the correction of TLS CA?
Usually it is set using the variable `CORE_PEER_TLS_ROOTCERT_FILE`, for example:
```export CORE_PEER_TLS_ROOTCERT_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/tls/ca.crt
```
@jyellick Sir i was facing the orderer related issue before and when i removed the orderer.example.com line from volume section of docker-compose-base.yaml file then orderer worked fine and these were the logs https://hastebin.com/ratiwodale.js but now i have added that line too and i don't know how but its working fine. And this one is my configtx.yaml file....https://hastebin.com/cawovigofu.php
@jyellick I have used echo CORE_PEER_TLS_ROOTCERT_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/tls/ca.crt which is already setup correctly.
If you use a volume mount, then when you restart the container, it has already been bootstrapped, so your `configtx.yaml` will have no effect.
> echo CORE_PEER_TLS_ROOTCERT_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/tls/ca.crt
"echo" or `export`?
echo $CORE_PEER_TLS_ROOTCERT_FILE
Are you certain that file exists and is correct?
I mean it is set correctly when i get output of this env
I see
yes i have checked that it exists.
and didn't get that what was the exact issue related to orderer bootstrap and why that is working fine now?
"Bootstrap" encodes the genesis block of the orderer system channel. If a ledger already exists (because you have preserved a previous volume mount) then per the immutability of a blockchain, there is no way to 're-boostrap', ie replace that genesis block, without deleting the whole ledger. So, if you preserve data when restarting the orderer then any modifications to your bootstrap configuration will not be reflected.
So you mean that if i use the docker volume prune command and then i bootstrap the orderer then it will restart correctly according to the modified docker-compose-base.yaml file?
If you remove the container and its volumes before restarting it, it should fix your bootstrap problem
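A command sketch of that cleanup (the volume name is a placeholder):
```shell
# remove the containers AND their anonymous volumes (-v), so the orderer
# re-reads the genesis block on the next start
docker-compose down -v

# for named volumes, remove the orderer's ledger volume explicitly
docker volume rm <orderer_ledger_volume>
```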
okay okay boss. got it :)
@jyellick but i am still stuck in TLS CA related issue :(
@jyellick What can be the issue sir even if the CORE_PEER_TLS_ROOTCERT_FILE variable is set.?
If it is set, and the certificate is present, then it must not be the correct CA cert. You should be able to pull the TLS client cert from the peer, and use openssl to verify whether the client cert is issued by the CA cert. Most likely, you copied the TLS CA cert into this container, then rebootstrapped the crypto material on the peers such that the TLS CA changed, but you still have the old cert.
@jyellick I haven't used the openssl can you tell me that how to use that to check the verification of certificate? and is there any alternative way to that i can get the correct CA cert after rebootstraping the crypto material?
```openssl verify -CAfile <path-to-tls-ca.crt> <path-to-peer-tls-client.crt>```
@jyellick What would be the
@jyellick I didn't try openssl because I don't know the paths of the root_ca and client_cert, but you were right about the incorrect TLS CA. When I restarted the docker containers of the peers, they were able to get the correct CA and peer channel join worked successfully. Thanks always for your support sir :)
@jyellick I have again killed all the containers of my network and again started it and i am now getting this error on peer channel join command: Error: proposal failed (err: bad proposal response 500)
@yousaf These problems are not directly related to ordering, you'll likely find a faster response in one of the more general channels. For joining a peer to a channel, most likely, you are not using an admin cert authorized by the peer's local MSP.
@jyellick okay sir. Thankyou very much for your support. fixed it :)
@jyellick I'm trying to figure out for a while how the committing nodes detect byzantine behavior of servers in the ordering cluster. In terms of: if f out of 3f+1 members of the ordering cluster create invalid blocks, how do the committing nodes detect that? Are they configured to wait for f+1 equal blocks? And can these blocks be delivered in batches as well?
@Raycoms There is a 'BlockValidation' policy defined in the channel config. It indicates how many orderers must sign a block in order for the peer to accept it as valid. In the CFT case, this is only a single orderer, in the BFT case, this would be at least f+1 orderers (or depending on consensus implementation, potentially more)
Thanks =) This makes it configurable as well, nice
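For reference, the policy mentioned above sits in `configtx.yaml`; the shape below is taken from the release-1.3 sampleconfig (the CFT default, where any single orderer-org signature validates a block):

```yaml
Orderer: &OrdererDefaults
  Policies:
    BlockValidation:
      Type: ImplicitMeta
      Rule: "ANY Writers"
```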
@jyellick When I use the peer chaincode instantiate command, I get this error: Error: could not assemble transaction, err Proposal response was not successful, error code 500, msg failed to execute transaction 25ad587011e9e130cd5f5fd23cabcfeba0a802bf0713f2fd6c39d613010f644c: error starting container: error starting container: API error (404): network _byfn not found
But when I replace the line CORE_VM_DOCKER_HOSTCONFIG_NETWORKMODE=${COMPOSE_PROJECT_NAME}_byfn with CORE_VM_DOCKER_HOSTCONFIG_NETWORKMODE=${COMPOSE_PROJECT_NAME}, there is no error, but the console hangs: peer chaincode instantiate keeps running but gives no output.
Has joined the channel.
@yousaf This is off topic for the #fabric-orderer channel, I will post it to #fabric-questions
okay sir :)
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=y9YCKqZHG8T3HspXW) @mastersingh24 Yes, this documentation is not really clear. First of all, what is orderer.yaml and what is it supposed to contain? Steps 6+ don't make any sense.
@Msaleh97
> what is orderer.yaml and what is it supposed to contain
[here](https://github.com/hyperledger/fabric/blob/release-1.3/sampleconfig/orderer.yaml) is a sample `orderer.yaml`. At a high level, it contains the config options to run an orderer. I find the comments in this file quite descriptive.
> Steps 6+ doesn't make any sense.
may I ask which part of it doesn't make sense? It is not intended to provide an exhaustive Kafka/ZK guide. You may consult the [kafka doc](http://kafka.apache.org/documentation/) for that.
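To give a flavour, a few of the options from that sample `orderer.yaml` (values shown are the sample defaults, not recommendations):

```yaml
General:
  ListenAddress: 127.0.0.1    # address the orderer binds to
  ListenPort: 7050            # port for the Broadcast/Deliver gRPC services
  GenesisMethod: provisional  # or "file", with GenesisFile pointing at a bootstrap block
  LocalMSPDir: msp            # the orderer's local MSP material
  LocalMSPID: SampleOrg
```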
Has joined the channel.
Has joined the channel.
Hi @jyellick, if a peer of an organization is attached to multiple channels and the same chaincode is instantiated on multiple channels, how will that peer behave w.r.t. each channel? Will it differentiate the chaincodes based on their versions? Or are versions only upgraded when a new network component is added dynamically to the existing network?
@yousaf This is really not orderer specific, in the future I'd suggest #fabric-peer-endorser-committer, but yes, chaincode instantiation is managed per channel. If it's the same version running on another channel, they will share the docker container, but if one is upgraded and another is not, then you will simply get two different versions of the chaincode running.
@jyellick I am really sorry sir. Followed . I'll take care next time :)
@jyellick Is there any limit on the number of peers, organizations, and channels that a single ordering node can handle?
@yousaf There is likely a practical limit, but no, there is no artificial limit.
Has joined the channel.
Has joined the channel.
Doesn't the peer:orderer connection require different ports for each of the per-channel connections? If that's the case there's certainly a limit. ISTR in the discovery phase we were told about 150 channels was the max you could have - unless things improved with 1.2 and up.
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=jpF4CL8bqaaMzhqy7) @aatkddny Not sure what you mean by *different ports* .... there's always a per process limit for file descriptors when it comes to TCP connections .... this can be increased as required at the OS level
@aatkddny ~It is per peer, not per channel~ Actually, let me verify this. The peer could re-use the same socket, but I'm not sure if it does.
I was under the impression that the peer-orderer connection set up an anonymous grpc port for each channel. They didn't multiplex because unravelling the crypto was too onerous.
And that provided a hard upper bound
But certainly, as @mastersingh24 points out, the maximum number of sockets a process may have open is in the tens of thousands. A limit of 150 would be an artificial security limit imposed by the OS which may be configured.
Can you run `ulimit -n` in the container? This is the number of file descriptors your process is limited to opening (and each socket counts against this number)
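A quick check, plus (as an assumption about a docker-based setup) one way the limit is commonly raised:

```shell
# print the per-process open-file limit; every TCP socket counts against it
ulimit -n
# for a container, the limit can be raised at run time, e.g.:
#   docker run --ulimit nofile=65536:65536 hyperledger/fabric-peer ...
```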
Channels, not sockets. And I don't have that problem right now; this came up because our design requires bi-party privacy and therefore channel proliferation.
I get that the max number of sockets is a 16-bit number, but if you think about a network running several orderers and even as few as 50 peers, each connected to 100 or so channels, that's quite a lot of potential sockets in play just for the peer-orderer traffic alone.
If you have gossip enabled, typically only one peer from each organization connects per channel. It then disseminates the blocks to the other peers in the org.
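The relevant peer settings, for reference (key names from the peer's sample `core.yaml`; treat the values as illustrative):

```yaml
peer:
  gossip:
    useLeaderElection: true  # peers dynamically elect one org leader per channel
    orgLeader: false         # or pin a static leader: set this true and election false
```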
So,
@jyellick By "practical limit", did you mean that adding a lot of channels can make our network slow, but there is actually no limit on adding new organizations and channels to our network?
@yousaf Yes. Eventually your network may become so slow as to no longer work. But I would expect this number to be hardware dependent
@jyellick Got it sir.
Has joined the channel.
@jyellick
hello
I'm running a network with fabric v.1.2.1 and during orderer startup I see this message:
```
This orderer is running in compatibility mode
```
Could you please explain what it means, what impact it has on the network, and what to do to get rid of it?
thanks in advance
Question: the dockerfile of Hyperledger Fabric was last changed in January, but the image in dockerhub was changed 7 days ago. Does this make sense?
(sorry, dockerfile of Kafka)
https://hub.docker.com/r/hyperledger/fabric-kafka/
https://github.com/hyperledger/fabric-baseimage/blob/master/images/kafka/Dockerfile.in
@waxer I think the docker images are periodically rebuilt and re-pushed, just built with the same dockerfile
@gravity I assume this is ~enabled~ disabled? https://github.com/hyperledger/fabric/blob/release-1.3/sampleconfig/configtx.yaml#L100
check out the comment around that toggle to understand the implications :)
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=LqMBErti7kq6K8GGk) @guoger I did not provide this option explicitly. If I omitted this option in config, which version will be set here?
@guoger , do you mean that all minor versions of that image are in fact exactly the same image?
@gravity edited my previous reply. This toggle enables v1.1.x features which are not backwards compatible.
so if you are running a v1.1.x (or higher) binary without this enabled, it's running in compatibility mode
@Jgnuid the base of the kafka image is `fabric-baseimage`, which may change. So even if the kafka dockerfile does not change, the produced image may differ.
@guoger , thanks 🙂
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=fvDJvKD8exBJqT3Wh) @guoger so if I get it right, to get rid of those warnings, I should add `V1_2: true` to configtx.yaml?
@gravity you just need to enable it
I believe it's `V1_1: false` or not specified for you, right?
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=2tBqgqQxox8hycNtW) @guoger it's not specified at all. but why `V1_1`? my network is v1.2.1
In the orderer container hyperledger/fabric-orderer, there is a variable ORDERER_GENERAL_GENESISFILE. Can we specify more than one genesis block? For example, can we specify env ORDERER_GENERAL_GENESISFILE=/var/hyperledger/fabric/crypto-config/channel-artefacts/twoOrgs/genesis.block, /var/hyperledger/fabric/crypto-config/channel-artefacts/threeOrgs/genesis.block?
Hi guys,
When I run the orderer benchmark test with the following commands:
first run: docker-compose up -d
Then execute: BENCHMARK=true go test -run TestOrdererBenchmarkKafkaBroadcast
I get this error: initializeGrpcServer -> FATA 002 Failed to listen: listen tcp 127.0.0.1:7050: bind: address already in use
Any one could help?
In the context of a multi-channel network, do we need to create multiple genesis blocks, one for each channel? Or do we simply create one genesis block? Doesn't each channel have its own MSP and therefore require its own genesis block?
We need to create the channel.tx file before the channel creation command, right?
Has joined the channel.
@gravity Yes. Because you may run a mix of v1.0.x, v1.1.x, and v1.2.x orderers, the newer binaries run in 'compatibility mode' by default.
To eliminate this condition, ensure you are basing your bootstrapping off of an up to date `configtx.yaml` and `configtxgen`.
If you have an existing network, you will need to follow the upgrade guide, paying special attention to the capabilities section: https://hyperledger-fabric.readthedocs.io/en/release-1.1/upgrading_your_network_tutorial.html#enable-capabilities-for-the-channels
@jyellick thanks for the explanation, now it's clear
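For anyone else following along, the capabilities toggle lives in `configtx.yaml`, roughly like this (shape from the release-1.3 sampleconfig; only enable a level once every node in the network can support it):

```yaml
Capabilities:
  Channel: &ChannelCapabilities
    V1_3: true
  Orderer: &OrdererCapabilities
    V1_1: true
  Application: &ApplicationCapabilities
    V1_3: true
```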
sorry to crosspost between here and #fabric-peer-endorser-committer but I am not sure if my error lies with the peer or my orderer configuration
I have a question regarding batch timeouts and block validation. I successfully updated my channel configuration (a v1.1 network) to have a BatchTimeout of 100ms, but am still getting MVCC errors, specifically in the following scenario:
txnA updates key1 => val1. ~300 ms later txnB updates key1 => val2. txnB fails with an MVCC error.
I'm assuming that since they occur 300ms apart, the orderer would place them in separate blocks. Is it the VSCC that is rejecting with the MVCC error? If so, why would the transaction be rejected if my assumption is correct and they are in separate blocks?
Do the orderer nodes authenticate the client?
Hey guys, I have a question. Channel MSPs are those MSPs which we defined in the configtx.yaml file and which were produced into the genesis block, right?
@jrosmith MVCC conflicts are independent of block boundaries. In general, it means that you executed one transaction, then another which uses those keys modified by the first before the first committed
@jyellick ah okay, that makes sense. thank you so much
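A toy, Fabric-free illustration of why block boundaries don't matter here: each transaction records the version of the key it read at endorsement time, and validation fails if that version has moved by commit time:

```shell
ver=1            # committed version of key1
txA_read=$ver    # txA endorses against version 1
txB_read=$ver    # txB also endorses against version 1, before txA commits
ver=2            # txA commits first, bumping key1's version
# txB's validation: its read version must still match the committed version
if [ "$txB_read" -ne "$ver" ]; then echo "txB: MVCC_READ_CONFLICT"; fi
# prints: txB: MVCC_READ_CONFLICT
```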
Has joined the channel.
Hi all
I am facing this error during network bootstrap:
```
orderer.example.com | 2018-10-24 20:44:37.841 UTC [orderer/commmon/multichannel] newLedgerResources -> PANI 050 Error creating channelconfig bundle: initializing channelconfig failed: could not create channel Consortiums sub-group config: setting up the MSP manager failed: admin 0 is invalid: The identity is not valid under this MSP [Org1MSP]: could not validate identity's OUs: none of the identity's organizational units [[0xc42015c1e0]] are in MSP Org1MSP
orderer.example.com | panic: Error creating channelconfig bundle: initializing channelconfig failed: could not create channel Consortiums sub-group config: setting up the MSP manager failed: admin 0 is invalid: The identity is not valid under this MSP [Org1MSP]: could not validate identity's OUs: none of the identity's organizational units [[0xc42015c1e0]] are in MSP Org1MSP
orderer.example.com |
orderer.example.com | goroutine 1 [running]:
orderer.example.com | github.com/hyperledger/fabric/vendor/go.uber.org/zap/zapcore.(*CheckedEntry).Write(0xc42025a000, 0x0, 0x0, 0x0)
orderer.example.com | /opt/gopath/src/github.com/hyperledger/fabric/vendor/go.uber.org/zap/zapcore/entry.go:229 +0x4f4
orderer.example.com | github.com/hyperledger/fabric/vendor/go.uber.org/zap.(*SugaredLogger).log(0xc4201661e0, 0x4, 0xe14c6d, 0x27, 0xc420351958, 0x1, 0x1, 0x0, 0x0, 0x0)
orderer.example.com | /opt/gopath/src/github.com/hyperledger/fabric/vendor/go.uber.org/zap/sugar.go:234 +0xf6
orderer.example.com | github.com/hyperledger/fabric/vendor/go.uber.org/zap.(*SugaredLogger).Panicf(0xc4201661e0, 0xe14c6d, 0x27, 0xc420351958, 0x1, 0x1)
orderer.example.com | /opt/gopath/src/github.com/hyperledger/fabric/vendor/go.uber.org/zap/sugar.go:159 +0x79
orderer.example.com | github.com/hyperledger/fabric/common/flogging.(*FabricLogger).Panicf(0xc4201661e8, 0xe14c6d, 0x27, 0xc420351958, 0x1, 0x1)
orderer.example.com | /opt/gopath/src/github.com/hyperledger/fabric/common/flogging/zap.go:74 +0x60
orderer.example.com | github.com/hyperledger/fabric/orderer/common/multichannel.(*Registrar).newLedgerResources(0xc4202a21b0, 0xc4204c0e60, 0xc4204c0e60)
orderer.example.com | /opt/gopath/src/github.com/hyperledger/fabric/orderer/common/multichannel/registrar.go:256 +0x2ea
orderer.example.com | github.com/hyperledger/fabric/orderer/common/multichannel.NewRegistrar(0xea36a0, 0xc42025c280, 0xc4204af5c0, 0xe9b060, 0x15a78b0, 0xc420166130, 0x1, 0x1, 0x0)
orderer.example.com | /opt/gopath/src/github.com/hyperledger/fabric/orderer/common/multichannel/registrar.go:142 +0x312
orderer.example.com | github.com/hyperledger/fabric/orderer/common/server.initializeMultichannelRegistrar(0xc4200e8580, 0xe9b060, 0x15a78b0, 0xc420166130, 0x1, 0x1, 0x0)
orderer.example.com | /opt/gopath/src/github.com/hyperledger/fabric/orderer/common/server/main.go:258 +0x250
orderer.example.com | github.com/hyperledger/fabric/orderer/common/server.Start(0xdf7a5a, 0x5, 0xc4200e8580)
orderer.example.com | /opt/gopath/src/github.com/hyperledger/fabric/orderer/common/server/main.go:96 +0x226
orderer.example.com | github.com/hyperledger/fabric/orderer/common/server.Main()
orderer.example.com | /opt/gopath/src/github.com/hyperledger/fabric/orderer/common/server/main.go:75 +0x1d6
orderer.example.com | main.main()
orderer.example.com | /opt/gopath/src/github.com/hyperledger/fabric/orderer/main.go:15 +0x20
orderer.example.com exited with code 2
```
I have modified the MSP of peer org `Org1` and added a new Organizational Unit Identifier; the content of config.yaml is:
```
NodeOUs:
  Enable: true
  ClientOUIdentifier:
    Certificate: cacerts/ca.org1.example.com-cert.pem
    OrganizationalUnitIdentifier: client
  PeerOUIdentifier:
    Certificate: cacerts/ca.org1.example.com-cert.pem
    OrganizationalUnitIdentifier: peer
OrganizationalUnitIdentifiers:
  - Certificate: cacerts/ca.org1.example.com-cert.pem
    OrganizationalUnitIdentifier: TEST
```
Does anyone know what's wrong here?
Question: say I have a long-running network with multiple orderers and Kafka. Then I destroy all the orderers, and spin the orderers up again with the same genesis block. Will they detect the existing Kafka topics as existing channels, automatically set up consumers for them, and start replaying the messages? So that in the end all of them will be 'up to date' like the original orderers before destruction?
as long as you don't destroy kafka you should be OK
Is that a yes? 😄
Has joined the channel.
can I ask a question:
on the genesis block, https://hastebin.com/otaqelesur.json
following the path "channel_group/groups/Application/groups/Org0MSP/values/MSP/value/intermediate_certs", which certificate from "crypto-config" is embedded into the genesis block as the intermediate_certs value?
besides, can you point out the code section doing that?
Thanks!
If anyone willing to answer on SO please follow this url: https://stackoverflow.com/questions/52982952/implementation-of-organization-unit-identifier-in-peer-organisation-causes-order
Has joined the channel.
Has joined the channel.
@akshay.sood I've replied to your SO post
@Ryan2
> from which certificate from "crypto-config" the intermediate_certs value is embedded into genesis block
There may be an `intermediatecerts` folder in the MSP directory containing iCA certs. These are set into the `intermediate_certs` field in the MSP structure in the genesis block.
@jyellick Thanks for the reply. Do you know how to add an OU to the admin cert, or can you provide me a URL?
I followed this tutorial https://hyperledger-fabric.readthedocs.io/en/release-1.3/msp.html
How are you generating your certificates?
`cryptogen generate --config crypto-config.yaml`
do I need to register the CA in `crypto-config.yaml` with `TEST` as the OU?
Yes, I was just testing, will update my SO response
Sure
If I have 2-3 OUs, then how should it be done? I mean, multiple CAs or a single CA?
You may specify OUs with or without editing the MSP's `config.yaml`. Requiring OUs is generally used to integrate with existing CA infrastructures where the CA is also responsible for issuing certs for other purposes.
If you wish to use multiple OUs, you may simply list multiple entries in the `config.yaml` for the same cert, or you may split them across separate CAs.
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=6ZNWhBKTz35M7iFxG) is it possible to register an OU without modifying the MSP's `config.yaml`?
Certainly. If you do not require OUs, then the OU will not be taken into account when validating the certificate (with the exception of any NodeOU requirements).
You may for instance, issue certs with specific OUs, and write policies based on those OUs.
The only time specifying the OU in `config.yaml` is required is when you do not want certs without specific OUs specified to be considered valid.
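To make the 'list multiple entries' suggestion concrete, something like this in the MSP's `config.yaml` (the second OU here, `OPS`, is purely illustrative):

```yaml
OrganizationalUnitIdentifiers:
  - Certificate: cacerts/ca.org1.example.com-cert.pem
    OrganizationalUnitIdentifier: TEST
  - Certificate: cacerts/ca.org1.example.com-cert.pem
    OrganizationalUnitIdentifier: OPS
```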
Ok
Can you look into that error?
What error?
mentioned on SO
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=8B888uJ3GwqGg5ZbR) you were testing that
Oh
My SO response should be complete. The error in your original post is because you are requiring a particular OU to be set (`TEST`), but your admin cert does not have that OU.
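A quick, self-contained way to see which OU a cert actually carries; the cert generated here is a throwaway stand-in for a real admin cert:

```shell
# issue a throwaway self-signed cert carrying OU=TEST, then read its subject back
openssl req -x509 -newkey ec -pkeyopt ec_paramgen_curve:prime256v1 -nodes \
  -keyout admin.key -out admin.crt -days 1 \
  -subj "/OU=TEST/CN=Admin@org1.example.com"
openssl x509 -in admin.crt -noout -subject
```

If the subject printed for the real admin cert has no `OU = TEST` entry, it will fail the OU requirement.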
You already did
Thanks :)
I will check it
Is Hyperledger Fabric using openssl for x509?
No, it's simply using `x509` certificate formats, which `openssl` understands. You may use `fabric-ca`, `cryptogen`, `openssl`, or whatever other implementation of x509 you would like.
I have seen some users who use public CAs like Verisign/Digicert/etc.
ok
Thanks @jyellick I will test it and get back to you :)
Has joined the channel.
Error reading from 172.20.0.2:35181: rpc error: code = Canceled desc = context canceled
I am getting this error while invoking a transaction with kafka as the messaging service.
any help would be appreciated.
I set up 2 orderers along with kafka.
I stopped one orderer before the invoke; the request got transferred to orderer 2, but it is failing to execute the transaction.
@npc0405 What version of fabric? Oftentimes, clients do not cleanly close the connection and hang up instead. We found this error to be misleading, so we reduced its priority in the logging at some point.
Hi
while trying to instantiate a chaincode I am having this error
2018-10-25 19:45:14.055 UTC [gossip/state] commitBlock -> ERRO 0b3 Got error while committing(unexpected Previous block hash. Expected PreviousHash = [2e2048fcf6bcc4d99c210d7cdd99e135e0de419e35d6ccc7f5a1d58e25bc6ad0], PreviousHash referred in the latest block= [19ee18412db71ea5ddbc52a9e6fd4d8a56c8de695e15f7755e2dd3db7b675370]
the peer is immediately killed
I have no idea what it can be :s
here's the whole stack trace in case it helps: https://gist.github.com/alacambra/4f7d3e2c488bf4282c22d9910e3bb54d
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=pqNdT9yLzgs4u328v) Thank you for your response @jyellick
@albert.lacambra This panic occurs to detect/prevent a state fork. The peer has a block which has been validly signed by the ordering service as being a valid block, but it does not fit into the hash chain. So, something is fatally wrong. Most likely this is caused because the network was partially rebuilt, leaving crypto in place, but not deleting all ledgers.
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=gFubPxq4rN8PQShk6) @jyellick It's Fabric version 1.2
@jyellick You are right. I had expected something like that, but it was the first TX in a new infra. However, I have found out that I was using the wrong block to join the network. Thanks for your help!
The queryMarblesWithPagination function gives a strange pagination result. When the start key is 1 and the end key is 2, it includes all results starting with 1, like 11, 112, 12, etc.
> I stopped 1 orderer before invoke; so it got transferred to orderer 2 but failing to execute transaction
@npc0405 What mechanism made it transfer? Do you have custom application code for this?
@BikashPal
> queryMarblesWithPagination function strange pagination result. When start key is 1 and end key is 2. It includes all result with 1 like 11,12,12 etc
It is using simple string comparison for all keys `>= "1"` and `< "2"`. So, it would effectively include all keys beginning with `1`.
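A shell-only illustration of that lexicographic behaviour (nothing Fabric-specific; awk's string comparison stands in for the state database's key ordering):

```shell
# keys satisfying "1" <= k < "2" under string comparison: everything starting with "1"
printf '%s\n' 1 11 112 12 2 25 | awk '$0 >= "1" && $0 < "2"'
# prints 1, 11, 112, 12 (one per line); "2" and "25" fall outside the range
```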
@jyellick I suppose it is internally performed by the kafka and zookeeper service.
Correct me if I'm wrong.
I have not written any custom application code for that
```
[2018-10-26 05:39:48.391] [DEBUG] Helper - [BasicCommitHandler]: _commit - starting orderer orderer0.example.com
[2018-10-26 05:39:51.315] [ERROR] invoke-chaincode - REQUEST_TIMEOUT:localhost:7051
[2018-10-26 05:39:51.321] [ERROR] invoke-chaincode - Error: ChannelEventHub has been shutdown
at ChannelEventHub.disconnect (/home/fabric-samples/myNetwork/node_modules/fabric-client/lib/ChannelEventHub.js:440:21)
at Timeout.setTimeout (/home/fabric-samples/myNetwork/app/invoke-transaction.js:92:10)
at ontimeout (timers.js:475:11)
at tryOnTimeout (timers.js:310:5)
at Timer.listOnTimeout (timers.js:270:5)
[2018-10-26 05:39:51.367] [ERROR] invoke-chaincode - Error: ChannelEventHub has been shutdown
at ChannelEventHub.disconnect (/home/fabric-samples/myNetwork/node_modules/fabric-client/lib/ChannelEventHub.js:440:21)
at Timeout.setTimeout (/home/fabric-samples/myNetwork/app/invoke-transaction.js:92:10)
at ontimeout (timers.js:475:11)
at tryOnTimeout (timers.js:310:5)
at Timer.listOnTimeout (timers.js:270:5)
[2018-10-26 05:39:51.378] [ERROR] invoke-chaincode - Failed to invoke chaincode. cause:Error: ChannelEventHub has been shutdown
(node:6865) UnhandledPromiseRejectionWarning: Unhandled promise rejection (rejection id: 3): Error: Failed to invoke chaincode. cause:Error: ChannelEventHub has been shutdown
(node:6865) [DEP0018] DeprecationWarning: Unhandled promise rejections are deprecated. In the future, promise rejections that are not handled will terminate the Node.js process with a non-zero exit code.
error: [Remote.js]: Error: Failed to connect before the deadline
error: [Orderer.js]: Orderer grpcs://localhost:7050 has an error Error: Failed to connect before the deadline
[2018-10-26 05:39:51.420] [DEBUG] Helper - [BasicCommitHandler]: _commit - Caught: Error: Failed to connect before the deadline
[2018-10-26 05:39:51.422] [DEBUG] Helper - [BasicCommitHandler]: _commit - finished orderer orderer0.example.com
[2018-10-26 05:39:51.422] [DEBUG] Helper - [BasicCommitHandler]: _commit - starting orderer orderer1.example.com
[2018-10-26 05:39:51.491] [DEBUG] Helper - [BasicCommitHandler]: _commit - Successfully sent transaction to the orderer orderer1.example.com
```
@npc0405 There is no automatic failover when an orderer goes offline, only when a Kafka broker goes offline
It is up to the application to retry on a different orderer if there is an error broadcasting.
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=LN4Cv8zCd9BFPq4Xv) @yacovm Revisiting the past here. I'm having a hard time finding the docs that explain the Orderer validation process in detail. Perhaps I am overlooking them.
For example, in the architecture deep dive, sections 2.3-2.4 [1] go from Client broadcast to Peer delivery, and seem to skip detail about the Orderer internal processes.
Can you point me to the correct section of the documentation?
[1] https://hyperledger-fabric.readthedocs.io/en/release-1.3/arch-deep-dive.html#the-submitting-client-collects-an-endorsement-for-a-transaction-and-broadcasts-it-through-ordering-service
Has joined the channel.
Does anyone know why I keep getting a timeout on the peer channel create command? I added a timeout of 10 seconds, and my kafka and zookeeper containers are up and running.
Do you see the channel being created in the orderer logs?
I have seen that in the logs before. But what is the cause of this?
Has joined the channel.
Can you please suggest which ordering service should be used between kafka and solo for higher throughput and performance?
Hi, I have a kafka orderer setup issue:
Failed to connect to broker kafka0:9092: dial tcp: lookup kafka0 on 127.0.0.11:53: no such host
Screenshot from 2018-10-30 17-39-20.png
Screenshot from 2018-10-30 17-39-00.png
I have set up a Kafka-based orderer service with 2 orderers configured. While testing, I took down the 1st orderer and am trying to transact with the second.
It gets redirected to the second orderer, but it's not able to return the response; however, the transaction is getting posted to the ledger.
Any help would be greatly appreciated.
The transaction hash is not being returned.
Hello Team
Could you please help me resolve an issue related to creating a private channel on a newly onboarded organization?
Below is a detailed explanation of the scenario and the steps to reproduce the issue:
1. First I spun up the orderers and the first organization, with a genesis block that contains only the first organization's MSP and the orderer MSP
2. Second I created one dedicated channel for the first organization, instantiated chaincode on it, and verified it by invoke and query
3. Third I created one more channel from the first organization, instantiated chaincode on it, and verified it by invoke and query
4. Fourth I spun up another organization
5. Fifth I onboarded this second organization created in step 4
6. Sixth I fetched block 0 of the second channel (created from the first organization), joined the second organization to this channel, upgraded the chaincode, and invoked and queried from both organizations
7. Now I want to create another channel from the second organization which should act like a private channel for the second organization. Here I am facing an issue while creating this channel from the second organization
8. I am getting the error "Error: got unexpected status: BAD_REQUEST -- Attempted to include a member which is not in the consortium".
9. So I learned that this second organization needs to be added to the orderer system channel (testchainid) through a channel config update, and then I can create a private channel for the second organization.
10. But when I tried to fetch the testchainid channel config block using the peer channel fetch command after setting the orderer environment variables, I got an error on the orderer side "Principal deserialization failure (MSP SampleOrg is unknown) for identity"
and the error on the cli container side "readBlock -> INFO 002 Got status: &{FORBIDDEN} Error: can't read the block: &{FORBIDDEN}"
> I have seen that in the logs before. But what is the cause of this?
@zimabry There are a number of possible reasons, which is why I was asking whether you saw the channel created in the orderer logs. It would be useful to see when, relative to the start of the channel creation command, the channel is actually created (or whether it is at all).
@ShaikSharuk
> can you please suggest which ordering service can be used in between kafka or solo for higher throughput and performance ?
Kafka allows you to scale, whereas Solo does not.
@knagware9 It looks fairly clear from your posts that Kafka is not working. Are you using an unmodified fabric-samples?
@npc0405
> but Its not able to return the response, however transaction is getting posted to ledger.
> Any help would be greatly appreciated.
> Transaction hash is not being returned.
I don't understand. If the transaction is found on the blockchain, then it would imply that it was accepted by the orderer. The orderer does not return a transaction hash, only a status.
@javrevasandeep
> 8. I am getting the error "Error: got unexpected status: BAD_REQUEST -- Attempted to include a member which is not in the consortium".
~You must update the orderer system channel to include the new organization in the consortium definition before you may create new channels involving that organization.~
I had not read your next steps yet
> 10. But when I tried to fetch the testchainid channel config block using the peer channel fetch command after setting the orderer environment variables, I got an error on the orderer side "Principal deserialization failure (MSP SampleOrg is unknown) for identity" and the error on the cli container side "readBlock -> INFO 002 Got status: &{FORBIDDEN} Error: can't read the block: &{FORBIDDEN}"
Only the orderer org is authorized to transact on the orderer system channel. You may use the orderer admin credentials to fetch the config block.
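For anyone else in this situation, the fetch looks roughly like this. This is a sketch only: the host names, MSP path, and the `testchainid` system-channel name are sample defaults and must match your own network:

```
export CORE_PEER_LOCALMSPID=OrdererMSP
export CORE_PEER_MSPCONFIGPATH=/path/to/ordererOrganizations/example.com/users/Admin@example.com/msp
peer channel fetch config config_block.pb -o orderer.example.com:7050 -c testchainid
```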
Thanks @jyellick
Any idea where I must look then.
Now I have successfully resolved the issue: I was not setting CORE_PEER_MSPID to the orderer MSP. With this fix I am able to fetch the channel config, make updates to it, sign that envelope from the four orderers in my network, and push a channel update tx. It all went fine up to this point.
But now I am getting a different error while trying to create a private channel on the newly onboarded (second) organization. The error says
UTC [cli/common] readBlock -> INFO 17b Got status: &{SERVICE_UNAVAILABLE}
@javrevasandeep Typically this means that the orderer is still finishing setting up the channel, retry in a minute, if there are still problems, we can investigate
@npc0405 I still don't understand your question
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=vPhbsyAtQDBgzKz9P) @jyellick I tried to recreate the channel and now I am getting this error on cli side
Error: got unexpected status: SERVICE_UNAVAILABLE -- will not enqueue, consenter for this channel hasn't started yet
and orderer logs shows this
2018-10-30 20:07:16.259 UTC [orderer/common/broadcast] Handle -> WARN 54d [channel: btic-dedicated] Rejecting broadcast of message from 10.64.37.219:59440 with SERVICE_UNAVAILABLE: rejected by Consenter: will not enqueue, consenter for this channel hasn't started yet
2018-10-30 20:07:16.259 UTC [orderer/common/server] func1 -> DEBU 54e Closing Broadcast stream
2018-10-30 20:07:16.262 UTC [grpc] Printf -> DEBU 54f transport: http2Server.HandleStreams failed to read frame: read tcp 172.19.0.3:7050->10.64.37.219:59440: read: connection reset by peer
2018-10-30 20:07:16.262 UTC [common/deliver] Handle -> WARN 550 Error reading from 10.64.37.219:59438: rpc error: code = Canceled desc = context canceled
2018-10-30 20:07:16.262 UTC [orderer/common/server] func1 -> DEBU 551 Closing Deliver stream
@javrevasandeep But you have been able to successfully create other channels before?
Yes, I resolved it now. I just restarted one of the Kafka broker nodes and then it worked.
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=7Lw4GD8P9hHByDtE3) @jyellick actually I am using a modified balance-transfer sample. I got the issue with the docker-compose file while extending the docker-compose-base file; it's not picking up the environment variables set in the base file...
Anyone tried to use Kafka Mirroring/MirrorMaker, is it recommended to use it for backup and recovery of the Kafka cluster? (so that Fabric network data could be restored in disaster case, even when one Kafka cluster is down).
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=dsvrJ43nyzq9d244G) Fixed the issue, now all containers are running fine... but my orderer is not able to connect to Kafka
@jyellick
Orderer0 works fine with orderer1 down.
But vice versa is not working.
Also transaction is getting committed in ledger but I am not getting hash in return in response
its getting stuck.
Also, If I check the logs of orderer1...nothing erroneous getting printed
@jyellick ...All my Kafka, orderer and ZooKeeper containers are running, but the orderer is not able to connect to Kafka
2018-10-31 10:22:44.991 UTC [orderer/consensus/kafka/sarama] withRecover -> DEBU 128 Failed to connect to broker kafka3:9092: dial tcp 172.18.0.14:9092: connect: connection refused
2018-10-31 10:22:44.991 UTC [orderer/consensus/kafka] try -> DEBU 129 [channel: testchainid] Need to retry because process failed = error creating topic [testchainid]; failed to retrieve metadata for the cluster
2018-10-31 10:22:45.990 UTC [orderer/consensus/kafka] try -> DEBU 12a [channel: testchainid] Creating Kafka topic [testchainid] for channel [testchainid/0]
2018-10-31 10:22:45.990 UTC [orderer/consensus/kafka/sarama] Open -> DEBU 12b ClientID is the default of 'sarama', you should consider setting it to something application-specific.
2018-10-31 10:22:45.990 UTC [orderer/consensus/kafka/sarama] withRecover -> DEBU 12c Failed to connect to broker kafka0:9092: dial tcp 172.18.0.13:9092: connect: connection refused
2018-10-31 10:22:45.990 UTC [orderer/consensus/kafka/sarama] Open -> DEBU 12d ClientID is the default of 'sarama', you should consider setting it to something application-specific.
2018-10-31 10:22:45.991 UTC [orderer/consensus/kafka/sarama] withRecover -> DEBU 12e Failed to connect to broker kafka1:9092: dial tcp 172.18.0.15:9092: connect: connection refused
2018-10-31 10:22:45.991 UTC [orderer/consensus/kafka/sarama] Open -> DEBU 12f ClientID is the default of 'sarama', you should consider setting it to something application-specific.
2018-10-31 10:22:45.991 UTC [orderer/consensus/kafka/sarama] withRecover -> DEBU 130 Failed to connect to broker kafka2:9092: dial tcp 172.18.0.16:9092: connect: connection refused
2018-10-31 10:22:45.991 UTC [orderer/consensus/kafka/sarama] Open -> DEBU 131 ClientID is the default of 'sarama', you should consider setting it to something application-specific.
2018-10-31 10:22:45.991 UTC [orderer/consensus/kafka/sarama] withRecover -> DEBU 132 Failed to connect to broker kafka3:9092: dial tcp 172.18.0.14:9092: connect: connection refused
2018-10-31 10:22:45.991 UTC [orderer/consensus/kafka] try -> DEBU 133 [channel: testchainid] Need to retry because process failed = error creating topic [testchainid]; failed to retrieve metadata for the cluster
2018-10-31 10:22:46.990 UTC [orderer/consensus/kafka] try -> DEBU 134 [channel: testchainid] Creating Kafka topic [testchainid] for channel [tes
2018-10-31 10:23:02.782 UTC [orderer/consensus/kafka] try -> DEBU 1c2 [channel: testchainid] Initial attempt failed = kafka server: In the middle of a leadership election, there is currently no leader for this partition and hence it is unavailable for writes.
Hi, how can I fix the following issue? And can it cause chaincode instantiation to fail?
`2018-10-31 19:34:12.687 CST [common/deliver] deliverBlocks -> WARN 421d [channel: registerch] Client authorization revoked for deliver request from 172.24.0.14:50960: Failed to reach implicit threshold of 1 sub-policies, required 1 remaining: permission denied`
hi
We are running the orderer with Kafka, and after some days of running, the peers have begun to show this error: 2018-10-31 12:58:49.632 UTC [blocksProvider] DeliverBlocks -> WARN 1f3acd [channel-dciwbp-01] Got error &{SERVICE_UNAVAILABLE}
When looking at the orderer we see this:
2018-10-31 13:03:59.895 UTC [orderer/common/server] Deliver -> DEBU f84 Starting new Deliver handler
2018-10-31 13:03:59.895 UTC [common/deliver] Handle -> DEBU f85 Starting new deliver loop for 10.192.50.1:60066
2018-10-31 13:03:59.895 UTC [common/deliver] Handle -> DEBU f86 Attempting to read seek info message from 10.192.50.1:60066
2018-10-31 13:03:59.896 UTC [common/deliver] deliverBlocks -> WARN f87 [channel: channel-dciwbp-01] Rejecting deliver request for 10.192.50.1:60066 because of consenter error
*2018-10-31 13:03:59.896 UTC [common/deliver] deliverBlocks -> WARN f88 [channel: channel-dciwbp-01] Rejecting deliver request for 10.192.50.1:35218 because of consenter error*
2018-10-31 13:03:59.896 UTC [common/deliver] Handle -> DEBU f89 Waiting for new SeekInfo from 10.192.50.1:60066
2018-10-31 13:03:59.896 UTC [common/deliver] Handle -> DEBU f8a Attempting to read seek info message from 10.192.50.1:60066
2018-10-31 13:03:59.896 UTC [common/deliver] Handle -> DEBU f8b Waiting for new SeekInfo from 10.192.50.1:35218
2018-10-31 13:03:59.896 UTC [common/deliver] Handle -> DEBU f8c Attempting to read seek info message from 10.192.50.1:35218
2018-10-31 13:04:09.898 UTC [common/deliver] Handle -> WARN f8d Error reading from 10.192.50.1:35218: rpc error: code = Canceled desc = context canceled
2018-10-31 13:04:09.899 UTC [orderer/common/server] func1 -> DEBU f8f Closing Deliver stream
Does someone have an idea of what could be happening? How can we recover Kafka and ZooKeeper? @jyellick
@NoLimitHoldem
> Anyone tried to use Kafka Mirroring/MirrorMaker, is it recommended to use it for backup and recovery of the Kafka cluster? (so that Fabric network data could be restored in disaster case, even when one Kafka cluster is down).
I can't speak to any personal experience, however, for use in recover, I'd suggest that you ensure that the mirroring is setup to preserve sequence numbers, as this is how Fabric indexes into the Kafka partition to resume processing.
@npc0405
> Orderer0 works fine with orderer1 down.
> But viceversa is not working
It sounds to me like your application is not failing over to the second orderer.
> Also transaction is getting committed in ledger but I am not getting hash in return in response
How can you tell the tx is being committed to the ledger? What would you usually get the hash in response to?
> Also, If I check the logs of orderer1...nothing erroneous getting printed
I suggest you enable debug logging at orderer1, this will log requests as they arrive, so we can see if the application is appropriately failing over.
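For a v1.x orderer, debug logging can be enabled via an environment variable, e.g. (in v1.4+ this variable was replaced by `FABRIC_LOGGING_SPEC`):

```shell
# When running the binary directly:
ORDERER_GENERAL_LOGLEVEL=debug orderer

# Or in a docker-compose service definition:
#   environment:
#     - ORDERER_GENERAL_LOGLEVEL=debug
```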
> 2018-10-31 10:23:02.782 UTC [orderer/consensus/kafka] try -> DEBU 1c2 [channel: testchainid] Initial attempt failed = kafka server: In the middle of a leadership election, there is currently no leader for this partition and hence it is unavailable for writes.
It sounds like your Kafka cluster is not setup correctly. Please try following the [Kafka Quickstart Guide](https://kafka.apache.org/quickstart) and ensure that you can successfully produce and consume messages to the Kafka cluster before attempting to hook Fabric to it.
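A quick smoke test from the orderer's host, roughly following the quickstart (the broker/ZooKeeper addresses are placeholders for your own; the `--zookeeper` flag for topic creation applies to the older Kafka versions Fabric v1.x ships against):

```shell
# Create a throwaway topic, produce one message, and consume it back.
bin/kafka-topics.sh --create --zookeeper zk0:2181 --replication-factor 3 --partitions 1 --topic smoketest
echo "hello" | bin/kafka-console-producer.sh --broker-list kafka0:9092 --topic smoketest
bin/kafka-console-consumer.sh --bootstrap-server kafka0:9092 --topic smoketest --from-beginning --max-messages 1
```

If the consumer does not print `hello`, fix the Kafka cluster before involving Fabric.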
@albert.lacambra
> Has someone some idea what can be happening? How can we rcover kafka and zookeper?
I suggest that you look earlier in your orderer logs for any errors/warnings indicating that the Kafka connection has failed. I'd also suggest that you look at your Kafka/ZK logs to check the health of the cluster itself.
Can it in some way be related to a corrupted chain?
looking the code: chain, ok := h.ChainManager.GetChain(chdr.ChannelId)
is that a look-up to kafka?
kafka and zookeper looks good
No, that is a lookup in a map from channel ID to channel object. It is standard convention in Go to do: `value, ok := myMap[key]`, where `ok` indicates whether the key was found or not.
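A runnable illustration of that idiom (the `chain` type and `lookup` helper below are just stand-ins for the orderer's internal registrar, not the actual Fabric code):

```go
package main

import "fmt"

// chain is a stand-in for the orderer's per-channel consenter object.
type chain struct{ name string }

// chains mimics the registrar's map of channel ID -> channel resources.
var chains = map[string]*chain{
	"testchainid": {name: "testchainid"},
}

// lookup uses Go's standard "comma ok" idiom: ok reports whether the key exists.
func lookup(channelID string) (*chain, bool) {
	c, ok := chains[channelID]
	return c, ok
}

func main() {
	if c, ok := lookup("testchainid"); ok {
		fmt.Println("found channel:", c.name) // the orderer proceeds with this chain
	}
	if _, ok := lookup("missing-channel"); !ok {
		fmt.Println("not in map") // the orderer would answer NOT_FOUND
	}
}
```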
unfortunatly logs of the error has been lost by the operators :(
Are you certain that you disabled log expiration on your Kafka cluster? By default, Kafka prunes the partition logs after some period of time, resulting in data loss.
then that means the channel is not on the map?
will ask that. I do not know
If the channel were not in the map, I would expect a status of `NOT_FOUND`, but you are getting `SERVICE_UNAVAILABLE`, which implies that `ok` is `true` (and it is in the map)
you are right
erroredChan := chain.Errored()
that is the message
Yes, the consenter may signal the `Deliver` service that there is something wrong with its connection to Kafka by closing that channel, resulting in the error you saw. When the consenter piece closes that channel, it should log the reason why, without that log message, there's very little I can tell you.
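The closed-channel signaling pattern described here can be sketched in plain Go (the `consenter` type below is a simplified stand-in for the orderer's `Chain`, not the actual Fabric code):

```go
package main

import "fmt"

// consenter mimics the orderer's Chain: Errored returns a channel that is
// closed when the connection to the backing Kafka cluster is unhealthy.
type consenter struct {
	errCh chan struct{}
}

func newConsenter() *consenter { return &consenter{errCh: make(chan struct{})} }

// Errored mirrors the interface method the Deliver service selects on.
func (c *consenter) Errored() <-chan struct{} { return c.errCh }

// halt simulates losing the Kafka connection: closing the channel wakes
// every goroutine selecting on Errored().
func (c *consenter) halt() { close(c.errCh) }

func healthy(c *consenter) bool {
	select {
	case <-c.Errored(): // a closed channel is always ready to receive
		return false
	default: // channel still open: consenter is healthy
		return true
	}
}

func main() {
	c := newConsenter()
	fmt.Println("healthy:", healthy(c))
	c.halt()
	fmt.Println("healthy:", healthy(c)) // Deliver would now reject with SERVICE_UNAVAILABLE
}
```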
ok
thanks a lot. Will try to find something.
there some steps we can follow to rectreate kafka and zookeper?
Unless you have backups, if Kafka has pruned the data, there is very little to be done recovery-wise.
Has joined the channel.
hi,
I am running Fabric v1.1.0. I've setup a distributed network with 3 orderers, 4 kafka servers and 3 ZK servers. I created a channel and joined 2 peers to it. All of these are on different VMs.
I am able to deploy and executed chaincodes fine. So far so good.
I shutdown the central ordering service nodes, kafka & zk nodes. I may have not shut them down in the right order (zk, followed by kafka, followed by OSNs).
While starting up the network, I started ZK servers, then kafka servers and finally one of the OSNs. The ordering service on the OSN crashes with the following logs -
---------
Oct 31 13:33:04 orderer0 arcvx[1307]: 2018-10-31 13:33:04.761 UTC [orderer/commmon/multichannel] NewRegistrar -> INFO 006#033[0m Starting system channel 'testchainid' with genesis block hash 9c6dbde0a7fb4ded67a4fe0a7f4c3a3c0b94766542bf5353fed077a89072bf3c and orderer type kafka
Oct 31 13:33:04 orderer0 arcvx[1307]: #033[35m2018-10-31 13:33:04.771 UTC [orderer/commmon/multichannel] newLedgerResources -> CRIT 007#033[0m Error umarshaling config envelope from payload data: proto: bad wiretype for field common.Config.Sequence: got wiretype 2, want 0
Oct 31 13:33:04 orderer0 arcvx[1307]: panic: Error umarshaling config envelope from payload data: proto: bad wiretype for field common.Config.Sequence: got wiretype 2, want 0
Oct 31 13:33:04 orderer0 arcvx[1307]:
Oct 31 13:33:04 orderer0 arcvx[1307]: goroutine 1 [running]:
Oct 31 13:33:04 orderer0 arcvx[1307]: github.com/hyperledger/fabric/vendor/github.com/op/go-logging.(*Logger).Panicf(0xc4201ec1b0, 0xd209be, 0x37, 0xc420369cc0, 0x1, 0x1)
Oct 31 13:33:04 orderer0 arcvx[1307]: #011/opt/gopath/src/github.com/hyperledger/fabric/vendor/github.com/op/go-logging/logger.go:194 +0x134
Oct 31 13:33:04 orderer0 arcvx[1307]: github.com/hyperledger/fabric/orderer/common/multichannel.(*Registrar).newLedgerResources(0xc42010a230, 0xc4202029c0, 0xc4202029c0)
Oct 31 13:33:04 orderer0 arcvx[1307]: #011/opt/gopath/src/github.com/hyperledger/fabric/orderer/common/multichannel/registrar.go:248 +0x2af
Oct 31 13:33:04 orderer0 arcvx[1307]: github.com/hyperledger/fabric/orderer/common/multichannel.NewRegistrar(0x1393300, 0xc4200c6320, 0xc420120f00, 0x138fd80, 0x13f3e20, 0xc42000ea50, 0x1, 0x1, 0x0)
Oct 31 13:33:04 orderer0 arcvx[1307]: #011/opt/gopath/src/github.com/hyperledger/fabric/orderer/common/multichannel/registrar.go:144 +0x352
Oct 31 13:33:04 orderer0 arcvx[1307]: github.com/hyperledger/fabric/orderer/common/server.initializeMultichannelRegistrar(0xc4201d5680, 0x138fd80, 0x13f3e20, 0xc42000ea50, 0x1, 0x1, 0xc420368600)
Oct 31 13:33:04 orderer0 arcvx[1307]: #011/opt/gopath/src/github.com/hyperledger/fabric/orderer/common/server/main.go:262 +0x277
Oct 31 13:33:04 orderer0 arcvx[1307]: github.com/hyperledger/fabric/orderer/common/server.Start(0xcfa0bc, 0x5, 0xc4201d5680)
Oct 31 13:33:04 orderer0 arcvx[1307]: #011/opt/gopath/src/github.com/hyperledger/fabric/orderer/common/server/main.go:103 +0x24c
Oct 31 13:33:04 orderer0 arcvx[1307]: github.com/hyperledger/fabric/orderer/common/server.Main()
Oct 31 13:33:04 orderer0 arcvx[1307]: #011/opt/gopath/src/github.com/hyperledger/fabric/orderer/common/server/main.go:82 +0x20f
Oct 31 13:33:04 orderer0 arcvx[1307]: main.main()
Oct 31 13:33:04 orderer0 arcvx[1307]: #011/opt/gopath/src/github.com/hyperledger/fabric/orderer/main.go:15 +0x20
---------
Any idea what could be going wrong? I would expect the ordering service to startup without any issues. All the volumes are loaded, to the best of my knowledge.
Thanks
@parags Please do not post large segments of config or logs to this channel, it makes it very hard to read, please use a service like hastebin.com
```
CRIT 007#033[0m Error umarshaling config envelope from payload data: proto: bad wiretype for field common.Config.Sequence: got wiretype 2, want 0
```
This message indicates to me that the on-disk storage of your orderer has been corrupted. Assuming you configured Kafka correctly, you should be able to backup and delete the orderer's ledger directory and re-bootstrap it with the original genesis block and have it rebuild the chains.
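The recovery steps, as a hedged sketch (paths assume the default FileLedger location and that `ORDERER_GENERAL_GENESISFILE` still points at the original genesis block; adapt the stop/start commands to however you run the orderer):

```shell
systemctl stop orderer        # or: docker stop orderer0

# Back up (rather than delete) the orderer's ledger directory,
# which contains the 'chains' and 'index' subdirectories.
mv /var/hyperledger/production/orderer /var/hyperledger/production/orderer.bak

systemctl start orderer       # the orderer re-bootstraps from the genesis block
                              # and replays the channels from the Kafka logs
```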
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=LjeaFGaGusAzyPHLc) @jyellick Noted @jyellick. Thanks.
So, on my orderer vm, under /var/hyperledger/production/orderer, i have both 'chains' and 'index' directories. Should I just delete both these directories and start the ordering service (after making a backup of course)? The original genesis file is present on disk at the correct location.
@parags Yes, this _should_ fix your problem. It is a little troubling that the ledger was corrupted on disk, even with an unclean shutdown.
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=Pkx4ncnevmLMMPETR) @jyellick Understand the concern, @jyellick. Do you want me to do any debugging? I am happy to provide inputs.
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=Pkx4ncnevmLMMPETR) @jyellick Thanks much... that worked :grinning: my existing channels are up and CONNECT message to kafka is posted successfully.
@parags
> Do you want me to do any debugging? I am happy to provide inputs.
If you can reproduce this problem, a detailed JIRA explaining how (ideally using one of the fabric-samples) would be greatly appreciated!
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=TQFcGwYZoNPmcG3mi) @jyellick does it means, there is no way to recover the chain???
Well, the chain exists, but if your Kafka logs are deleted, there is no way to continue to modify the chain. You may attempt to recover by hand, by recreating an environment with the correct offsets, but this would be a tedious and technical procedure. The short answer is that if your Kafka logs were deleted, and you have no backups, then you have lost the data and recovery is not possible.
Ok. It is an OpenShift infra, so these logs should be somewhere. Will ask tomorrow.
In case I do have these logs, would just starting Kafka and ZooKeeper be enough to get everything working again?
I'm not sure I understand, but assuming your Kafka logs are intact, it should be possible to get your network going again. If Kafka/ZK are down, you will get the SERVICE_UNAVAILABLE message due to consenter error.
@jyellick so I've seen the "if the kafka logs are pruned the network cannot recover" thing before, but I don't think I fully understand the mechanism behind it. If the peers in a network are all in sync (same ledgers, same world states) and there are no new transactions, couldn't an orderer recover from the peer information alone? Why are the kafka logs critical to orderer recovery?
@jrosmith, I'm not such an expert as jyellick, but I think what you say is not right. Even if all the peers agree that the last block was X, you can never be sure that X+1 was not already created and in transit. If it was, you would be in trouble, since peers could receive two different blocks numbered X+1, both correctly signed by the orderer but with different hashes.
There are other problems too, for example introducing trust from the peers to the orderer... That is not in the design, I guess for good reasons, since malicious peers could hide existing confirmed blocks on purpose.
(I should have said, from the orderers to the peers)
> It sounds to me like your application is not failing over to the second orderer.
It is, otherwise the transaction wouldn't be committed to the ledger.
> How can you tell the tx is being committed to the ledger. What would you usually get the hash in response to?
Since we store data as key-value pairs, if I query the ledger with the key I get back the payload I stored.
I get the hash in response to invoking chaincode to store data in the ledger.
> I suggest you enable debug logging at orderer1, this will log requests as they arrive, so we can see if the application is appropriately failing over.
Will try.
Thanks @jyellick
@jyellick @mastersingh24 ...I am getting this error while setting up the orderer with Kafka. I referred to a couple of solutions but with no success. @mastersingh24, I saw on JIRA that you resolved FAB-6250 for the same issue. I am facing the same issue, getting the above when creating a channel; it seems to be a DNS resolution issue. Please help to resolve it.
Screenshot from 2018-11-01 16-44-59.png
Has joined the channel.
@jrosmith
> @jyellick so I've seen the "if the kafka logs are pruned the network cannot recover" thing before, but I don't think I fully understand the mechanism behind it. If the peers in a network are all in sync (same ledgers, same world states) and there are no new transactions, couldn't an orderer recover from the peer information alone? Why are the kafka logs critical to orderer recovery?
What @waxer indicates is true, unless you assume honesty among the peers, there are some attacks when recovering from the peers. However, the bigger issue is that the orderers store the Kafka log offset in the block metadata, so that when an orderer starts up, it knows where in Kafka's log to begin playing from. So, if the Kafka logs are gone, and the orderer attempts to startup, Kafka will tell the orderer that the offsets do not exist, and the orderer will give up processing. To truly 'fix' a Kafka which has lost its data, you would need to ensure all of the orderers are exactly in sync, then, for each channel, add a new dummy block, encoding a last offset of '0' for each channel, then start back up. And, as of today, the only way to accomplish this is by hand, which would require some pretty extensive knowledge of the internals of Fabric. It would be nice to release a tool to allow users to do this in the future, but for the time being the official statement is "if your Kafka logs are gone, your network is irrecoverable".
@knagware9 Please go through the [Kafka Quickstart Guide](https://kafka.apache.org/quickstart), running the sample clients from the machine/vm/container which will execute one of your orderers. Unless you can connect using the sample clients, Fabric will never be able to connect.
Has joined the channel.
Has joined the channel.
Hi all, I am facing an error while creating a channel - Error: got unexpected status: FORBIDDEN -- Failed to reach implicit threshold of 1 sub-policies, required 1 remaining: permission denied
orderer log - {"log":"\u001b[36m2018-10-31 10:29:39.138 UTC [policies] func1 -\u003e DEBU 13c\u001b[0m Evaluation Failed: Only 0 policies were satisfied, but needed 1 of [ OrdererOrg.Writers ]\n","stream":"stderr","time":"2018-10-31T10:29:39.138516673Z"}
This is with HLF 1.3
The Fabric network is set up with 5 orderers and 10 peers.
The policies set for the orderer org are:
```
Policies:
  Readers:
    Type: Signature
    Rule: "OR('OrdererMSP.member')"
  Writers:
    Type: Signature
    Rule: "OR('OrdererMSP.member')"
  Admins:
    Type: Signature
    Rule: "OR('OrdererMSP.admin')"
```
@jyellick Please help
@rkrish82 Can you post more (ideally all) of the orderer log to a service like hastebin.com and paste the link here?
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=c32ZsoJPvPvYDPFub) @jyellick https://hastebin.com/ihugedefez.json
```
identity 0 does not satisfy principal: the identity is a member of a different MSP (expected OrdererMSP, got Org1MSP)\
```
It sounds like your orderer's identity was issued by Org1, but in order to act as an orderer, it must be a member of the ordering org
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=mYSABcjR4SkXxxYff) @jyellick Are there any changes required in configtx.yaml to fix this?
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=QrPsGoLpHpjjkmQyz) @jyellick ok, I will try... one more thing: do you have any Kafka Fabric setup with the Node SDK?
Yes I have kafka fabric 1.3.0 with go sdk
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=qepJisC6YsJdABh8y) @knagware9 I have used the configtx.yaml and modified for 5 orderers and 10 peers.. https://github.com/hyperledger/fabric/blob/release-1.3/examples/e2e_cli/configtx.yaml
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=huTvDzjyuqdxyjHM3) @rkrish82 Thanks. Actually I have some Kafka-based networks that are running, but when I try to create a Kafka setup for balance-transfer I face errors
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=7XXHnfj8BpvEsiWRb) @knagware9 By errors, do you mean the same error that I am facing?
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=CSYrpQrS4FGwJN2df) Is this error because i have specified multiple orderers in different orgs..?
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=CSYrpQrS4FGwJN2df) @rkrish82 no... an error in DNS resolution
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=hveY7g3YwX5yKfYFg) @knagware9 What is the resolution for the error that i am getting?
@rkrish82 Can you please post a link to your modified `configtx.yaml`?
@knagware9 If you are having DNS resolution problems with Kafka and cannot make the sample clients work, then this is outside of the scope of Fabric, I'd refer you to the excellent [Kafka Documentation](https://kafka.apache.org/documentation/)
@jyellick hi, does release-1.3 support the V1_3 orderer capability in `configtx.yaml`?
```Orderer: &OrdererCapabilities
V1_3: true```
I use release-1.3. The network works if the configtx.yaml file is configured as:
``` Capabilities:
Channel: &ChannelCapabilities
V1_1: true
Orderer: &OrdererCapabilities
V1_1: true
Application: &ApplicationCapabilities
V1_2: true
v1_1: false
V1_1_PVTDATA_EXPERIMENTAL: false
V1_1_RESOURCETREE_EXPERIMENTAL: false
V1_2_CHAINCODE_LIFECYCLE_EXPERIMENTAL: false```
and it does not work if i configure it like this (take https://github.com/hyperledger/fabric-samples/blob/release-1.3/first-network/configtx.yaml as example):
```Capabilities:
Channel: &ChannelCapabilities
V1_3: true
Orderer: &OrdererCapabilities
V1_1: true
Application: &ApplicationCapabilities
V1_3: true
V1_2: false
v1_1: false```
there are errors reported in the peer logs:
`2018-11-03 07:58:44.411 CST [blocksProvider] DeliverBlocks -> ERRO 333c [regch] Got error &{FORBIDDEN}`
Has joined the channel.
@bh4rtp Orderer capabilities have a max value of V1_1
There have been no incompatible changes made in ordering since v1.1, so there is no need for a new capability
I expect that a new capability will be introduced with the upcoming Raft consensus protocol
I am running 3 ZooKeeper instances and 1 Kafka instance. When I start my orderer node, I get the error `panic: Error creating channelconfig bundle: initializing channelconfig failed: could not create channel Orderer sub-group config: Invalid broker entry: node_kafka1st:9092`. What could be the possible reason for that? I have opened port 9092 in the Kafka service. I am running this on Docker Swarm and there are no errors in either the Kafka or ZooKeeper containers. @jyellick Can you help me with this issue?
@MohammadObaid `_` is not a valid character in a hostname
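A small check of the hostname rule (a sketch: the regex below follows RFC 1123 hostname labels, which is why `node_kafka1st` fails the orderer's broker-entry validation while a hyphenated name passes):

```go
package main

import (
	"fmt"
	"regexp"
)

// RFC 1123 label: alphanumerics and hyphens, no leading/trailing hyphen,
// 1-63 characters. Underscores are NOT allowed in hostnames.
var label = regexp.MustCompile(`^[a-zA-Z0-9]([a-zA-Z0-9-]{0,61}[a-zA-Z0-9])?$`)

func validHostLabel(s string) bool { return label.MatchString(s) }

func main() {
	fmt.Println(validHostLabel("node_kafka1st")) // false: '_' is illegal
	fmt.Println(validHostLabel("node-kafka1st")) // true: hyphens are fine
}
```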
Thanks @jyellick, got it. Just one more thing: if we have multiple orderers, then the genesis block and MSPs should be mounted on each orderer's container, right?
@MohammadObaid You must bootstrap every orderer with the same genesis block. They should each have their own MSP directory (but be issued by an orderer org)
Got it, but each would have the `signcerts` of the other peers, including endorsing and committing peers, right?
@MohammadObaid No, all of the validation info like CAs of other orgs is embedded into the genesis block. The MSP directory of the orderer is (currently) only used for signing, never validation.
Alright. We provide the paths to all necessary peer certificates in the configtx.yaml file, which embeds all certs in the genesis block, and then the orderer uses only that genesis block!
Hi all, I started the orderer with a CRL, but now I want to re-enable one certificate from the CRL list. How can I do that?
Has joined the channel.
Hi, guys.
Is it necessary to save kafka-data into host dirs when the brokers are running in containers?
@JaccobSmith
> Hi all, I started the orderer with a CRL, but now I want to re-enable one certificate from the CRL list. How can I do that?
Per the x.509 spec this is not allowed.
@luckydogchina
> Is it necessary to save kafka-data into host dirs when the brokers are running in containers?
Yes, your Kafka data must be persisted in any sort of production deployment
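For reference, one way to do that is a host volume per broker. The paths and the `KAFKA_LOG_DIRS` setting below are assumptions, not from this thread (plain Kafka defaults `log.dirs` to `/tmp/kafka-logs`; check what your image uses):

```yaml
# Sketch only: persist each broker's data dir on the host so the channel data
# survives container restarts. The mount target must match KAFKA_LOG_DIRS.
kafka0:
  image: hyperledger/fabric-kafka
  environment:
    - KAFKA_LOG_DIRS=/var/kafka-data
  volumes:
    - /opt/kafka-data/kafka0:/var/kafka-data
```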
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=QqP5hpfySHLJptHdA) @jyellick ok, thanks a lot.
Has joined the channel.
hello, when is the e2e-orderer-syschan channel generated in an e2e test? posted a problem in #fabric-orderer-dev
hello, trying to customize an e2e test in Fabric 1.3-rc1 but the orderer keeps failing
the orderer still cannot find e2e-orderer-syschan?
and the cli keeps printing "Attempting to fetch system channel 'e2e-orderer-syschan' ... XXX secs"
or is there detailed orderer setup documentation?
I did regenerate the crypto keys, certs, and the tx blocks successfully, and also inspected them to confirm all domains are correctly updated
question is, how do I verify that the default channels like e2e-orderer-syschan are created in the orderer?
```
../../.build/bin/cryptogen generate --config=crypto-config.yaml
../../.build/bin/configtxgen -profile TwoOrgsOrdererGenesis -outputBlock ./channel-artifacts/genesis.block
export CHANNEL_NAME=custom_domain_channel && ../../.build/bin/configtxgen -profile TwoOrgsChannel -outputCreateChannelTx ./channel-artifacts/channel.tx -channelID $CHANNEL_NAME
.. anchor peer .tx files
```
../../.build/bin/configtxgen -channelID custom_channel -outputBlock custom_channel_genesisblock.pb -inspectBlock custom_channel_genesisblock.pb -profile TwoOrgsOrdererGenesis
also returns all MSPs and orderers properly; it looks like the orderer is not coming up fine, i.e., its system channel is not created and I don't know how to investigate this
Orderer logs: https://hastebin.com/wifodekipo.cs
show the system channel missing?
@kisna The orderer system channel name is derived from the block you used to bootstrap the orderer. This is typically the output of `configtxgen`; if you did not pass a channel ID to `configtxgen` when outputting the genesis block, the channel ID is `testchainid`.
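As a sketch, the two bootstrap variants side by side (profile and output path are illustrative, not from this thread):

```shell
# If -channelID is omitted, the system channel id defaults to "testchainid";
# with -channelID you pick the name the orderer will expect.
SYS_CHANNEL=e2e-orderer-syschan
DEFAULT_CMD="configtxgen -profile TwoOrgsOrdererGenesis -outputBlock ./genesis.block"
NAMED_CMD="configtxgen -profile TwoOrgsOrdererGenesis -channelID ${SYS_CHANNEL} -outputBlock ./genesis.block"
echo "$DEFAULT_CMD"
echo "$NAMED_CMD"
```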
@jyellick the issue here is due to e2e-orderer-syschan
are you saying this is set up as part of the e2e test? I don't see it anywhere in the e2e
omg, you are right, when I did a grep I see it in another script
e2e_cli/generateArtifacts.sh: $CONFIGTXGEN -profile TwoOrgsOrdererGenesis -channelID e2e-orderer-syschan -outputBlock ./channel-artifacts/genesis.block
thanks @jyellick
how does an orderer differentiate between system and application channels?
Is it true that genesis.block contains both the orderer and consortiums,
and an application channel's block contains just the consortium orgs? I don't see the orderer in channel.tx
@kisna Yes, the orderer system channel contains an `Orderer` and `Consortiums` group. The application channels contain an `Orderer` and `Application` group, where the `Application` group contains a subset of the orgs from one of the consortiums.
You may inspect the configuration blocks using a tool like `configtxgen` or `configtxlator` to observe this for yourself.
so we have to use configtxgen to generate genesis.block without any application channel info
To be more precise
`configtxgen` generates a genesis block for the orderer system channel.
yes, I see that
../../.build/bin/configtxgen -channelID custom_channel -outputBlock custom_channel_genesisblock.pb -inspectBlock custom_channel_genesisblock.pb -profile TwoOrgsOrdererGenesis
but, I accidentally added a regular channel to genesis.block
Then `configtxgen` can be used to generate channel creation transactions. These transactions are used to assemble new channels, using the seed information contained in the orderer system channel.
it seems like genesis.block only has system channel no application channels
Each channel has its own genesis block
aah, got it
The channel creation transaction is processed to create the genesis block of a new channel.
so a channel has a genesis block,
what confused me is the naming convention: the system channel uses -outputBlock genesis.block,
whereas application channels use -outputCreateChannelTx ./channel-artifacts/channel.tx to create a configuration transaction, not a block
Yes. The key here is that the orderer takes a channel creation transaction, processes it using a combination of the transaction contents and the current configuration of the orderer system channel, and produces the new genesis block for the channel.
interesting, let me explore the contents of both channel transaction and genesis blocks
If you inspect the channel creation transaction, you will see that there is actually no crypto information etc., only instructions on which org names to include in the channel. The basic information about the organizations is all inherited from the current orderer system channel configuration.
so a system channel is a SINGLE superset of all orgs and orderers
how do we add/update an org in the genesis block later?
It is done through a channel config update.: https://hyperledger-fabric.readthedocs.io/en/latest/config_update.html
wow, that's quite a complex jq query
thanks
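The flow in that doc, condensed into a sketch (the channel name, file names, and the jq edit are placeholders; see the linked page for the authoritative steps):

```shell
# Condensed sketch of the documented channel config update flow.
# "mychannel" and all file names are placeholders; the jq edit depends on
# what you are changing (e.g. adding an org's MSP definition).
STEPS=$(cat <<'EOF'
peer channel fetch config config_block.pb -o orderer.example.com:7050 -c mychannel
configtxlator proto_decode --input config_block.pb --type common.Block | jq .data.data[0].payload.data.config > config.json
jq '<your modification>' config.json > modified_config.json
configtxlator proto_encode --input config.json --type common.Config --output original.pb
configtxlator proto_encode --input modified_config.json --type common.Config --output modified.pb
configtxlator compute_update --channel_id mychannel --original original.pb --updated modified.pb --output update.pb
EOF
)
echo "$STEPS"
```

The resulting `update.pb` still has to be wrapped in an envelope, signed by the required admins, and submitted with `peer channel update`.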
why do we have both channel_name and channel_id - specified in create channel transaction, say, mychannel.tx
and channelID has some restrictions on naming: `-c, --channelID string   In case of a newChain command, the channel ID to create. It must be all lower case, less than 250 characters long and match the regular expression: [a-z][a-z0-9.-]*`
and channel_name is actually passed to channel_id
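That quoted rule can be checked locally; a sketch (note that a name like `custom_domain_channel`, used earlier in this thread, would fail it because of the underscore):

```shell
# Check the documented channelID rule: all lower case, less than 250
# characters, matching [a-z][a-z0-9.-]*.
valid_channel_id() {
  printf '%s' "$1" | grep -Eq '^[a-z][a-z0-9.-]{0,248}$'
}
valid_channel_id "mychannel" && echo ok || echo bad              # -> ok
valid_channel_id "custom_domain_channel" && echo ok || echo bad  # underscore -> bad
```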
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=pcjTfD4CTXiBKxxwP) @jyellick I see, thanks!
@kisna Where is `channel_name`? They are synonyms. Depending on your version of Fabric, you should be able to omit the channel ID when running `peer channel create`, and it will be extracted from the file.
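As a sketch (the orderer address and tx file path are illustrative):

```shell
# With -c the channel ID is given explicitly; without it, newer peer CLI
# versions extract the ID from the channel creation tx file.
EXPLICIT="peer channel create -o orderer.example.com:7050 -c mychannel -f ./channel-artifacts/channel.tx"
IMPLICIT="peer channel create -o orderer.example.com:7050 -f ./channel-artifacts/channel.tx"
echo "$EXPLICIT"
echo "$IMPLICIT"
```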
Has joined the channel.
Has joined the channel.
20181109151445.png
Hello. I see that orderers persist channel data in their local storage. Why?
@fanliyan That warning message indicates the client hung up without properly closing the stream; it is generally benign.
@krabradosty
> Hello. I see that orderers persist channel data in their local storage. Why?
I don't understand. The orderers order messages for the channels and send them in blocks to the peers, how could they not persist channel data?
@jyellick The orderer could store only block hashes and a few recent blocks, for example. What else is needed? That's what I'm asking: why does the orderer need to have the full ledger? I care because when I have a lot of ledgers, the orderer will run out of disk space very fast.
@krabradosty There are plans for the orderer to be able to prune chains to only some recent set of blocks. It's actually not terribly hard to implement on the orderer side; the harder part is on the peer side, where there are situations in which the peer must pull very early blocks from the orderer in order to catch up with the gossip network. But it is a problem being actively worked on.
Hi, when I instantiate a chaincode, I get this error in the orderer node: `grpc: Server.Serve failed to complete security handshake from "172.20.0.13:57700": tls: first record does not look like a TLS handshake`.
It just happened recently. Any idea? Thanks.
The command I am using is `peer chaincode instantiate -o orderer.example.com:7050 -v 0.1 -c '{"Args":["init","a","100","b","200"]}' -C mychannel -n mycc --logging-level debug`
Never mind, I found that I forgot the TLS CLI options.
Has joined the channel.
Hi! I've set up a kafka cluster and tried to run my orderer (1.3.0) (just one for a start) with orderer type kafka. The kafka cluster starts without errors, but the orderer fails due to a SIGSEGV segmentation violation after registering the producer and consumer.
```
2018-11-11 22:33:27.222 UTC [orderer/consensus/kafka] startThread -> INFO 0c6 [channel: testchainid] Parent consumer set up successfully
2018-11-11 22:33:27.222 UTC [orderer/consensus/kafka] setupChannelConsumerForChannel -> INFO 0c7 [channel: testchainid] Setting up the channel consumer for this channel (start offset: -2)...
2018-11-11 22:33:27.222 UTC [orderer/consensus/kafka] try -> DEBU 0c8 [channel: testchainid] Connecting to the Kafka cluster
2018-11-11 22:33:27.222 UTC [orderer/consensus/kafka/sarama] Open -> DEBU 0c9 ClientID is the default of 'sarama', you should consider setting it to something application-specific.
fatal error: unexpected signal during runtime execution
[signal SIGSEGV: segmentation violation code=0x1 addr=0x47 pc=0x7f1862def259]
```
The stack trace is much longer. I can provide it as well if needed.
That's the relevant part from the configtx.yaml. Before changing the orderer type from solo to kafka, the network started without issues.
```
Orderer: &OrdererDefaults
    OrdererType: kafka
    Addresses:
        - orderer.example:7050
    BatchTimeout: 2s
    BatchSize:
        MaxMessageCount: 10
        AbsoluteMaxBytes: 99 MB
        PreferredMaxBytes: 512 KB
    Kafka:
        Brokers:
            - kafka-0.kafka.kafka:9092
            - kafka-1.kafka.kafka:9092
            - kafka-2.kafka.kafka:9092
```
@holzeis I think the stack trace would be helpful; if it's too long, you could just paste the top of it, or use pastebin or gist and copy the link here.
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=hmnh8Yh6RJtYqCKcW) @guoger ```
fatal error: unexpected signal during runtime execution
[signal SIGSEGV: segmentation violation code=0x1 addr=0x47 pc=0x7f1862def259]
runtime stack:
runtime.throw(0xe183a7, 0x2a)
/opt/go/src/runtime/panic.go:616 +0x81
runtime.sigpanic()
/opt/go/src/runtime/signal_unix.go:372 +0x28e
goroutine 45 [syscall]:
runtime.cgocall(0xb04b30, 0xc42033edf8, 0x29)
/opt/go/src/runtime/cgocall.go:128 +0x64 fp=0xc42033edb8 sp=0xc42033ed80 pc=0x402124
net._C2func_getaddrinfo(0xc4203ae210, 0x0, 0xc420d18030, 0xc42000e728, 0x0, 0x0, 0x0)
_cgo_gotypes.go:92 +0x55 fp=0xc42033edf8 sp=0xc42033edb8 pc=0x51fa65
net.cgoLookupIPCNAME.func1(0xc4203ae210, 0x0, 0xc420d18030, 0xc42000e728, 0x26, 0x26, 0xc420392900)
/opt/go/src/net/cgo_unix.go:149 +0x13b fp=0xc42033ee40 sp=0xc42033edf8 pc=0x5267cb
net.cgoLookupIPCNAME(0xc4203ae0f0, 0x25, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0)
/opt/go/src/net/cgo_unix.go:149 +0x174 fp=0xc42033ef38 sp=0xc42033ee40 pc=0x5210d4
net.cgoIPLookup(0xc420cf8840, 0xc4203ae0f0, 0x25)
/opt/go/src/net/cgo_unix.go:201 +0x4d fp=0xc42033efc8 sp=0xc42033ef38 pc=0x52179d
runtime.goexit()
/opt/go/src/runtime/asm_amd64.s:2361 +0x1 fp=0xc42033efd0 sp=0xc42033efc8 pc=0x45b6d1
created by net.cgoLookupIP
/opt/go/src/net/cgo_unix.go:211 +0xaf
goroutine 1 [IO wait]:
internal/poll.runtime_pollWait(0x7f1869351f00, 0x72, 0x0)
/opt/go/src/runtime/netpoll.go:173 +0x57
internal/poll.(*pollDesc).wait(0xc420164098, 0x72, 0xc42039c000, 0x0, 0x0)
/opt/go/src/internal/poll/fd_poll_runtime.go:85 +0x9b
internal/poll.(*pollDesc).waitRead(0xc420164098, 0xffffffffffffff00, 0x0, 0x0)
/opt/go/src/internal/poll/fd_poll_runtime.go:90 +0x3d
internal/poll.(*FD).Accept(0xc420164080, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0)
/opt/go/src/internal/poll/fd_unix.go:372 +0x1a8
net.(*netFD).accept(0xc420164080, 0x0, 0x5b80d46066dbcb28, 0x0)
/opt/go/src/net/fd_unix.go:238 +0x42
net.(*TCPListener).accept(0xc4200b21d0, 0x42a409, 0xc4200b7270, 0xc42032bc18)
/opt/go/src/net/tcpsock_posix.go:136 +0x2e
net.(*TCPListener).Accept(0xc4200b21d0, 0xe35298, 0xc420256540, 0xc420322760, 0x0)
/opt/go/src/net/tcpsock.go:259 +0x49
github.com/hyperledger/fabric/vendor/google.golang.org/grpc.(*Server).Serve(0xc420256540, 0xea7820, 0xc4200b21d0, 0x0, 0x0)
/opt/gopath/src/github.com/hyperledger/fabric/vendor/google.golang.org/grpc/server.go:544 +0x21a
github.com/hyperledger/fabric/core/comm.(*GRPCServer).Start(0xc42021a2a0, 0xc42032be40, 0x1)
/opt/gopath/src/github.com/hyperledger/fabric/core/comm/server.go:197 +0x41
github.com/hyperledger/fabric/orderer/common/server.Start(0xdf7a5a, 0x5, 0xc4200f8580)
/opt/gopath/src/github.com/hyperledger/fabric/orderer/common/server/main.go:106 +0x575
github.com/hyperledger/fabric/orderer/common/server.Main()
/opt/gopath/src/github.com/hyperledger/fabric/orderer/common/server/main.go:75 +0x1d6
main.main()
/opt/gopath/src/github.com/hyperledger/fabric/orderer/main.go:15 +0x20
goroutine 20 [syscall]:
os/signal.signal_recv(0x0)
/opt/go/src/runtime/sigqueue.go:139 +0xa6
os/signal.loop()
/opt/go/src/os/signal/signal_unix.go:22 +0x22
created by os/signal.init.0
/opt/go/src/os/signal/signal_unix.go:28 +0x41
goroutine 6 [chan receive]:
github.com/hyperledger/fabric/orderer/consensus/kafka.init.1.func1(0xc420082960)
/opt/gopath/src/github.com/hyperledger/fabric/orderer/consensus/kafka/logger.go:46 +0x31
created by github.com/hyperledger/fabric/orderer/consensus/kafka.init.1
/opt/gopath/src/github.com/hyperledger/fabric/orderer/consensus/kafka/logger.go:43 +0x6c
goroutine 7 [select]:
github.com/hyperledger/fabric/vendor/github.com/syndtr/goleveldb/leveldb/util.(*BufferPool).drain(0xc4201782a0)
/opt/gopath/src/github.com/hyperledger/fabric/vendor/github.com/syndtr/goleveldb/leveldb/util/buffer_pool.go:206 +0x152
created by github.com/hyperledger/fabric/vendor/github.com/syndtr/goleveldb/leveldb/util.NewBufferPool
/opt/gopath/src/github.com/hyperledger/fabric/vendor/github.com/syndtr/goleveldb/leveldb/util/buffer_pool.go:237 +0x171
```
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=hmnh8Yh6RJtYqCKcW) @guoger You can find the complete stack trace here https://gist.github.com/holzeis/5dd8af48c083927f9951f3b66e93c692
@holzeis what's your kafka version?
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=yLg67MooBv5nctyeJ) @jyellick Is that normal? No control?
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=FuZZ2XDC444uuqHut) @guoger hyperledger/fabric-kafka:0.4.14 -> from the docker file KAFKA_VERSION=1.0.0
Has left the channel.
@guoger interesting detail: I've kept the kubernetes cluster running with the crashing orderer. I just checked again and noticed that the orderer eventually started successfully. It took Kubernetes 30 restarts of the orderer pod, but after that it was running. However, after restarting the cluster the orderer is back to a CrashLoopBackOff.
Clipboard - November 15, 2018 9:13 AM
:rofl:
Has joined the channel.
Has joined the channel.
Hi All,
Has any performance benchmarking been done to establish a limit on the number of orderers?
Has joined the channel.
@fanliyan You should simply have your client appropriately close the socket before exiting.
https://chat.hyperledger.org/channel/fabric-orderer-dev?msg=PRWZeDwK8uLnnvFSP
Has joined the channel.
The orderer exposes two gRPC APIs, `Broadcast` and `Deliver`. It is the `Deliver` API which delivers blocks to the peers. If a peer is a leader peer (as specified by config, or as elected by gossip), it will initiate an HTTP2/gRPC connection to the orderer and invoke `Deliver`, specifying the first block it wishes to receive and that it would like to receive blocks indefinitely. This opens a bidirectional stream, and the orderer replies asynchronously with the requested blocks until the connection is terminated.
See https://github.com/hyperledger/fabric/blob/6c073551a117b8281d676bded20826d2516640ce/protos/orderer/ab.proto#L79
This is really helpful, thank you! Is it possible to use grpc instead of grpcs within the balance-transfer example? I would like to see the deliver api invocation and other details in the packet capture, if possible. The corporate proxy is blocking hastebin url so unable to attach the existing packet capture that I have with me.
From what I see in the existing capture (two orgs with 2 peers each, one orderer), there are connections coming from 5 different ports to the orderer port 7050. These seem to be the random tcp ports and not the peer ports. An unencrypted capture could prove to be more helpful in such case.
@magar36 I've never attempted to run without TLS enabled, I'm not sure that it can be accomplished without code changes. As I mentioned, there are two APIs, `Broadcast`, and `Deliver`. The `Broadcast` API is used to send transactions for ordering, typically this is invoked by the clients (CLI/SDK)
You may enable debug logging in the orderer. It will log the source IP and port and RPC invoked.
I understand! Unfortunately, I do not see such an invocation in the debug log. I am attaching the text file taken from the console here since 'hastebin' is blocked.
Debug_log.txt
@magar36 It looks to me like you only have client logs at debug
What version of Fabric are you running?
1.3.0
(There are other services, pastebin.com gist.github.com as well if they are not blocked by your employer)
this is blocked too
You should be able to set `ORDERER_GENERAL_LOGLEVEL=debug` in your compose setup for the orderer to enable the debug logging
this is already set: - ORDERER_GENERAL_LOGLEVEL=debug
and in base yaml: CORE_LOGGING_LEVEL=DEBUG
I see no output from your containers other than the application at all. Perhaps you can use `docker logs` to retrieve it directly from the container.
ok let me try getting container logs once and see what's going on there
Jason, thank you for the help. I got good info from the docker logs; will try to setup tcpdump on the container as well and see how it looks.
Meanwhile, it would be great if you could find out a way to change the setup from grpcs to grpc as that would be really helpful for us in understanding the fabric communications over http2 at the network level. It is something that I am very much interested in.
```orderer.example.com | 2018-11-16 06:47:38.768 UTC [orderer/common/broadcast] Handle -> WARN 00e Error reading from 172.19.0.7:39144: rpc error: code = Canceled desc = context canceled ``` I am getting this error, yet the ledger is still able to get updated. What might the issue be? Any suggestions?
`172.19.0.7:39144` belongs to cli of first-network @jyellick
I have been facing this since 1.1
hi, can someone tell me if there's a way to change the orderer to kafka other than manually setting up kafka instances and pointing fabric to use them.
can someone provide me with a link to docs with steps to change the orderer to kafka
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=pb95TrmbMfGRaPmY2) @sushmitha Official documentation...
https://hyperledger-fabric.readthedocs.io/en/release-1.3/kafka.html
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=4dFZssH53Xtjf73uM) I am also getting the same error but everything works fine. Could you look into this? @jyellick, can you explain it?
@magar36 You may disable TLS at the orderer via `ORDERER_GENERAL_TLS_ENABLED=false`
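For reference, a minimal compose fragment combining this with the debug logging discussed earlier (the service name is illustrative, not from this thread):

```yaml
# Hypothetical docker-compose fragment; env var names follow the
# ORDERER_GENERAL_* convention used in this thread.
orderer.example.com:
  environment:
    - ORDERER_GENERAL_TLS_ENABLED=false
    - ORDERER_GENERAL_LOGLEVEL=debug
```

With TLS disabled, the peers and clients must also be configured to use plain `grpc` URLs, or the handshake will fail from the other side.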
@akshay.sood @pankajcheema This is not an error, it is a warning, simply indicating that the client closed the socket before performing the appropriate cleanup procedures. You may in general ignore this warning, though clients should hang up appropriately.
Thank you @jyellick for the explanation
Has joined the channel.
When I tried to use my orderer to connect to the kafka cluster, the orderer log showed an error that looks like a connection failure. Why?
```
2018-11-19 05:45:49.489 UTC [orderer/consensus/kafka] setupTopicForChannel -> INFO 0ba [channel: testchainid] Setting up the topic for this channel...
2018-11-19 05:45:49.489 UTC [orderer/consensus/kafka] try -> DEBU 0bb [channel: testchainid] Creating Kafka topic [testchainid] for channel [testchainid/0]
2018-11-19 05:45:49.489 UTC [orderer/consensus/kafka/sarama] Open -> DEBU 0bc ClientID is the default of 'sarama', you should consider setting it to something application-specific.
2018-11-19 05:45:49.492 UTC [orderer/consensus/kafka/sarama] withRecover -> DEBU 0bd Connected to broker at kafka1:9092 (unregistered)
2018-11-19 05:45:49.508 UTC [orderer/consensus/kafka/sarama] func1 -> DEBU 0be Closed connection to broker kafka1:9092
2018-11-19 05:45:49.508 UTC [orderer/consensus/kafka] try -> DEBU 0bf [channel: testchainid] Error is nil, breaking the retry loop
2018-11-19 05:45:49.508 UTC [orderer/consensus/kafka] setupProducerForChannel -> INFO 0c0 [channel: testchainid] Setting up the producer for this channel...
2018-11-19 05:45:49.508 UTC [orderer/consensus/kafka] try -> DEBU 0c1 [channel: testchainid] Connecting to the Kafka cluster
2018-11-19 05:45:49.508 UTC [orderer/consensus/kafka/sarama] NewAsyncProducer -> DEBU 0c2 Initializing new client
2018-11-19 05:45:49.508 UTC [orderer/consensus/kafka/sarama] NewClient -> DEBU 0c3 ClientID is the default of 'sarama', you should consider setting it to something application-specific.
2018-11-19 05:45:49.509 UTC [orderer/consensus/kafka/sarama] Open -> DEBU 0c4 ClientID is the default of 'sarama', you should consider setting it to something application-specific.
2018-11-19 05:45:49.509 UTC [orderer/consensus/kafka/sarama] RefreshMetadata -> DEBU 0c5 client/metadata fetching metadata for all topics from broker kafka3:9092
2018-11-19 05:45:49.510 UTC [orderer/consensus/kafka/sarama] withRecover -> DEBU 0c6 Connected to broker at kafka3:9092 (unregistered)
2018-11-19 05:45:49.537 UTC [orderer/consensus/kafka/sarama] updateMetadata -> DEBU 0c7 client/brokers registered new broker #2 at kafka2:9092
2018-11-19 05:45:49.537 UTC [orderer/consensus/kafka/sarama] updateMetadata -> DEBU 0c8 client/brokers registered new broker #4 at kafka4:9092
2018-11-19 05:45:49.537 UTC [orderer/consensus/kafka/sarama] updateMetadata -> DEBU 0c9 client/brokers registered new broker #1 at kafka1:9092
2018-11-19 05:45:49.538 UTC [orderer/consensus/kafka/sarama] updateMetadata -> DEBU 0ca client/brokers registered new broker #3 at kafka3:9092
2018-11-19 05:45:49.538 UTC [orderer/consensus/kafka/sarama] NewAsyncProducer -> DEBU 0cb Successfully initialized new client
```
@nainiubaba Please use a service like hastebin.com for any snippets of text longer than a few lines, otherwise it makes it very difficult to read this channel. That being said, those logs look fairly normal to me. It is all debug/info, no warnings or errors.
So this is a new one...
Trying to update a channel. It failed with an orderer timeout.
Looking at the logs we see this little gem:
Is there a mitigation strategy for this?
https://hastebin.com/raw/esoqemotal
Edit: after trawling the kafka logs, it's a "logging directory full" problem. Is there a way to stop this, and more importantly, why wouldn't it be configured to handle it out of the box?
```
2018-11-19 16:07:03.776 UTC [orderer/consensus/kafka] processMessagesToBlocks -> ERRO 162d92 [channel: uber-mediastar-channel] Error during consumption: kafka: error while consuming uber-mediastar-channel/0: kafka server: In the middle of a leadership election, there is currently no leader for this partition and hence it is unavailable for writes.
2018-11-19 16:07:03.776 UTC [orderer/consensus/kafka] processMessagesToBlocks -> WARN 162d93 [channel: uber-mediastar-channel] Closed the errorChan
2018-11-19 16:07:03.776 UTC [orderer/consensus/kafka] sendConnectMessage -> INFO 162d94 [channel: uber-mediastar-channel] About to post the CONNECT message...
2018-11-19 16:07:03.776 UTC [orderer/consensus/kafka] processMessagesToBlocks -> ERRO 162d95 [channel: accuweather-mediastar-channel] Error during consumption: kafka: error while consuming accuweather-mediastar-channel/0: kafka server: In the middle of a leadership election, there is currently no leader for this partition and hence it is unavailable for writes.
2018-11-19 16:07:03.776 UTC [orderer/consensus/kafka] processMessagesToBlocks -> WARN 162d96 [channel: accuweather-mediastar-channel] Closed the errorChan
2018-11-19 16:07:03.776 UTC [orderer/consensus/kafka] sendConnectMessage -> INFO 162d97 [channel: accuweather-mediastar-channel] About to post the CONNECT message...
2018-11-19 16:07:03.776 UTC [orderer/consensus/kafka] processMessagesToBlocks -> ERRO 162d98 [channel: managed-mediastar-channel] Error during consumption: kafka: error while consuming managed-mediastar-channel/0: kafka server: In the middle of a leadership election, there is currently no leader for this partition and hence it is unavailable for writes.
2018-11-19 16:07:03.776 UTC [orderer/consensus/kafka] processMessagesToBlocks -> WARN 162d99 [channel: managed-mediastar-channel] Closed the errorChan
2018-11-19 16:07:03.776 UTC [orderer/consensus/kafka] sendConnectMessage -> INFO 162d9a [channel: managed-mediastar-channel] About to post the CONNECT message...
2018-11-19 16:07:04.778 UTC [orderer/consensus/kafka] processMessagesToBlocks -> ERRO 162d9b [channel: genesischannel] Error during consumption: kafka: error while consuming genesischannel/0: kafka server: In the middle of a leadership election, there is currently no leader for this partition and hence it is unavailable for writes.
2018-11-19 16:07:04.778 UTC [orderer/consensus/kafka] processMessagesToBlocks -> WARN 162d9c [channel: genesischannel] Closed the errorChan
2018-11-19 16:07:04.778 UTC [orderer/consensus/kafka] sendConnectMessage -> INFO 162d9d [channel: genesischannel] About to post the CONNECT message...
2018-11-19 16:07:07.281 UTC [orderer/consensus/kafka] processMessagesToBlocks -> ERRO 162d9e [channel: accuweather-mediastar-channel] Error during consumption: kafka: error while consuming accuweather-mediastar-channel/0: kafka server: In the middle of a leadership election, there is currently no leader for this partition and hence it is unavailable for writes.
2018-11-19 16:07:07.281 UTC [orderer/consensus/kafka] processMessagesToBlocks -> ERRO 162da0 [channel: uber-mediastar-channel] Error during consumption: kafka: error while consuming uber-mediastar-channel/0: kafka server: In the middle of a leadership election, there is currently no leader for this partition and hence it is unavailable for writes.
2018-11-19 16:07:07.281 UTC [orderer/consensus/kafka] processMessagesToBlocks -> WARN 162da1 [channel: accuweather-mediastar-channel] Closed the errorChan
2018-11-19 16:07:07.281 UTC [orderer/consensus/kafka] processMessagesToBlocks -> WARN 162da2 [channel: uber-mediastar-channel] Closed the errorChan
2018-11-19 16:07:07.281 UTC [orderer/consensus/kafka] sendConnectMessage -> INFO 162da3 [channel: accuweather-mediastar-channel] About to post the CONNECT message...
2018-11-19 16:07:07.281 UTC [orderer/consensus/kafka] sendConnectMessage -> INFO 162da4 [channel: uber-mediastar-channel] About to post the CONNECT message...
2018-11-19 16:07:07.281 UTC [orderer/consensus/kafka] processMessagesToBlocks -> ERRO 162d9f [channel: managed-mediastar-channel] Error during consumption: kafka: error while consuming managed-mediastar-channel/0: kafka server: In the middle of a leadership election, there is currently no leader for this partition and hence it is unavailable for writes.
2018-11-19 16:07:07.281 UTC [orderer/consensus/kafka] processMessagesToBlocks -> WARN 162da5 [channel: managed-mediastar-channel] Closed the errorChan
2018-11-19 16:07:07.281 UTC [orderer/consensus/kafka] sendConnectMessage -> INFO 162da6 [channel: managed-mediastar-channel] About to post the CONNECT message...
```
```
2018-11-19 16:07:08.283 UTC [orderer/consensus/kafka] processMessagesToBlocks -> ERRO 162da7 [channel: genesischannel] Error during consumption: kafka: error while consuming genesischannel/0: kafka server: In the middle of a leadership election, there is currently no leader for this partition and hence it is unavailable for writes.
2018-11-19 16:07:08.283 UTC [orderer/consensus/kafka] sendConnectMessage -> INFO 162da9 [channel: genesischannel] About to post the CONNECT message...
2018-11-19 16:07:10.785 UTC [orderer/consensus/kafka] processMessagesToBlocks -> ERRO 162daa [channel: managed-mediastar-channel] Error during consumption: kafka: error while consuming managed-mediastar-channel/0: kafka server: In the middle of a leadership election, there is currently no leader for this partition and hence it is unavailable for writes.
2018-11-19 16:07:10.785 UTC [orderer/consensus/kafka] processMessagesToBlocks -> ERRO 162dac [channel: uber-mediastar-channel] Error during consumption: kafka: error while consuming uber-mediastar-channel/0: kafka server: In the middle of a leadership election, there is currently no leader for this partition and hence it is unavailable for writes.
2018-11-19 16:07:10.785 UTC [orderer/consensus/kafka] sendConnectMessage -> INFO 162dae [channel: managed-mediastar-channel] About to post the CONNECT message...
2018-11-19 16:07:10.785 UTC [orderer/consensus/kafka] processMessagesToBlocks -> WARN 162daf [channel: uber-mediastar-channel] Closed the errorChan
2018-11-19 16:07:10.785 UTC [orderer/consensus/kafka] processMessagesToBlocks -> ERRO 162dab [channel: accuweather-mediastar-channel] Error during consumption: kafka: error while consuming accuweather-mediastar-channel/0: kafka server: In the middle of a leadership election, there is currently no leader for this partition and hence it is unavailable for writes.
2018-11-19 16:07:10.785 UTC [orderer/consensus/kafka] processMessagesToBlocks -> WARN 162db1 [channel: accuweather-mediastar-channel] Closed the errorChan
2018-11-19 16:07:10.785 UTC [orderer/consensus/kafka] sendConnectMessage -> INFO 162db0 [channel: uber-mediastar-channel] About to post the CONNECT message...
2018-11-19 16:07:10.785 UTC [orderer/consensus/kafka] sendConnectMessage -> INFO 162db2 [channel: accuweather-mediastar-channel] About to post the CONNECT message...
2018-11-19 16:07:11.787 UTC [orderer/consensus/kafka] processMessagesToBlocks -> ERRO 162db3 [channel: genesischannel] Error during consumption: kafka: error while consuming genesischannel/0: kafka server: In the middle of a leadership election, there is currently no leader for this partition and hence it is unavailable for writes.
2018-11-19 16:07:11.788 UTC [orderer/consensus/kafka] processMessagesToBlocks -> WARN 162db4 [channel: genesischannel] Closed the errorChan
2018-11-19 16:07:11.788 UTC [orderer/consensus/kafka] sendConnectMessage -> INFO 162db5 [channel: genesischannel] About to post the CONNECT message...
2018-11-19 16:07:13.435 UTC [common/deliver] Handle -> WARN 162db6 Error reading from 172.21.0.28:42594: rpc error: code = Canceled desc = context canceled
2018-11-19 16:07:13.440 UTC [common/deliver] deliverBlocks -> WARN 162db7 [channel: accuweather-mediastar-channel] Rejecting deliver request for 172.21.0.36:45114 because of consenter error
2018-11-19 16:07:13.445 UTC [common/deliver] Handle -> WARN 162db8 Error reading from 172.21.0.30:54760: rpc error: code = Canceled desc = context canceled
2018-11-19 16:07:13.447 UTC [common/deliver] deliverBlocks -> WARN 162db9 [channel: uber-mediastar-channel] Rejecting deliver request for 172.21.0.25:56640 because of consenter error
2018-11-19 16:07:13.448 UTC [common/deliver] Handle -> WARN 162dba Error reading from 172.21.0.28:42608: rpc error: code = Canceled desc = context canceled
2018-11-19 16:07:13.456 UTC [common/deliver] deliverBlocks -> WARN 162dbe [channel: uber-mediastar-channel] Rejecting deliver request for 172.21.0.35:37242 because of consenter error
2018-11-19 16:07:13.458 UTC [common/deliver] deliverBlocks -> WARN 162dbf [channel: managed-mediastar-channel] Rejecting deliver request for 172.21.0.25:56654 because of consenter error
2018-11-19 16:07:13.462 UTC [common/deliver] deliverBlocks -> WARN 162dc0 [channel: genesischannel] Rejecting deliver request for 172.21.0.24:33512 because of consenter error
2018-11-19 16:07:14.289 UTC [orderer/consensus/kafka] processMessagesToBlocks -> ERRO 162dc1 [channel: uber-mediastar-channel] Error during consumption: kafka: error while consuming uber-mediastar-channel/0: kafka server: In the middle of a leadership election, there is currently no leader for this partition and hence it is unavailable for writes.```
@aatkddny The message:
```Error during consumption: kafka: error while consuming uber-mediastar-channel/0: kafka server: In the middle of a leadership election, there is currently no leader for this partition and hence it is unavailable for writes.```
Indicates to me that your Kafka cluster itself is not ready. This could be because the cluster hardware is slow and the leader election has not taken place yet. If you try again in a few seconds/minutes often this problem is transient and goes away. On the other hand, it could be that the Kafka cluster is misconfigured or has some other problem. You can/should attempt to connect with the Kafka sample clients and look at the Kafka logs to determine the root cause.
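For the "connect with the sample clients" step, a sketch along these lines shows whether each partition actually has a leader (the `zookeeper:2181` hostname is an example from the typical fabric-samples compose files; adjust to yours):

```shell
# Guarded so this is a no-op on machines without the Kafka CLI on PATH;
# normally you would run it inside one of the kafka containers.
if command -v kafka-topics.sh >/dev/null 2>&1; then
  # Prints per-partition Leader/Isr; "Leader: -1" means no leader elected.
  kafka-topics.sh --zookeeper zookeeper:2181 --describe
else
  echo "kafka-topics.sh not on PATH; run inside one of the kafka containers"
fi
```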
Re-read the edit pls.
The problem is that kafka has filled 3 of the 4 log directories allocated to it by docker (Kafkas 0, 1, 3) after a few days of running without issue.
That brought down those instances.
That meant it couldn't elect a leader - only one instance was there - and that caused the failure.
*My point was that this is using the default hyperledger kafka containers in an ootb docker install.*
Configuring these kafka containers to not roll over the logs is going to catch more people than just in this small install. Or if they are configured to do so (and I'll admit I haven't looked at the definition file) it's not working.
It only happened to us because we are running a qa cycle on this thing. I doubt it's seen that much activity before now.
edit: error here:
```
[2018-11-15 23:24:14,952] ERROR [ReplicaManager broker=0] Error while writing to highwatermark file in directory /tmp/kafka-logs (kafka.server.ReplicaManager)
org.apache.kafka.common.errors.KafkaStorageException: Error while writing to checkpoint file /tmp/kafka-logs/replication-offset-checkpoint
Caused by: java.io.FileNotFoundException: /tmp/kafka-logs/replication-offset-checkpoint.tmp (No space left on device)```
@kostas ^
@jyellick I'm sorry, I will post the log information on hastebin.com later, but thank you anyway
@aatkddny: I share the frustration with this not working "better", but this is very clearly spelt out in the documentation: https://hyperledger-fabric.readthedocs.io/en/release-1.3/kafka.html
Step 6 "Configure your Kafka brokers appropriately", sub-step 5 reads:
> `log.retention.ms = -1`. Until the ordering service adds support for pruning of the Kafka logs, you should disable time-based retention and prevent segments from expiring. (Size-based retention —see `log.retention.bytes`— is disabled by default in Kafka at the time of this writing, so there’s no need to set it explicitly.)
And as best I can tell, the Docker image is making sure this is set this way OOTB, as it should: https://github.com/hyperledger/fabric-baseimage/blob/944f7a6e618be1b3e93f665f0133e2946a79eda5/images/kafka/docker-entrypoint.sh#L47
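For anyone configuring brokers by hand rather than via that image, the retention-related knobs from that doc step would look roughly like this in `server.properties` (values per the guide at the time of writing; double-check against your Fabric release):

```
# server.properties (Kafka broker) - settings the Fabric Kafka guide calls for
unclean.leader.election.enable = false
min.insync.replicas = 2
default.replication.factor = 3
log.retention.ms = -1            # never expire segments by time
# log.retention.bytes is disabled by default; leave it that way
message.max.bytes = 103809024    # must exceed Orderer.AbsoluteMaxBytes (+ overhead)
replica.fetch.max.bytes = 103809024
```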
So back to your point:
> Configuring these kafka containers to not roll over the logs is going to catch more people than just in this small install.
It is not configured to roll over the logs because it should not roll over the logs.
So you have an unbounded *log*?
Inside a containerized application where figuring out what went TU requires someone that actually knows a little about how it works.
That's crazy.
Are we skipping over the part where this is explicitly called out in the setup guide?
Not at all.
What I'm saying is that if you make retention==forever you make the size of the logs unbounded.
Which in any system is going to cause issues as soon as it gets busy.
This is a QA setup - it has 4 nodes of two peers each and about 40-50 Gb of storage for the entire docker installation.
We filled it up in 10 days with almost no activity except a little QA Friday and today.
The knock-on effect of that was that the testers were stuck.
The application devs were also stuck.
Our internal operations support had no idea what was going on.
I had to stop what I was doing and ssh into the box and find and look at the logs to figure out why it was failing.
Ok it's a single server docker instance. bfd.
But this same setup going into any kind of production environment is going to have the same issues.
It'll just be a question of time before it runs out of disk.
Worse (for us) is that we are running the much much larger demo instance inside K8S but with kafka mapped to nfs. The configuration of that lot predates the instructions that are referenced, so the implications weren't clear.
I'm going to have to fix that pdq...
I hear you. And I truly do share the frustration. I've got a couple of questions if I may:
go for it.
1. How do you suggest we address this issue? (Other than "make it work w/ rolling logs". This one's a bit more complicated than it looks like.) Big bold letters in a README close to the Docker image link that says "map this to a persistent volume"? Something else?
2. I suspect the answer to this lies in the fact that you're mapping Kafka to NFS, but at the risk of nitpicking, may I argue that the notion of an unbounded log in itself is not that exotic _in Fabric_? For instance, the very same thing happens to the peer and orderer ledgers. (ATTN: I'm not saying it's acceptable. Just commenting on the unboundedness of the log.)
^^ Edited this one a bit.
1. Prune is the obvious flip answer. There's no reason I can think of (at least for my application) why the messaging subsystem needs to keep logs after a certain timeframe. I can't remember the last time I went to an MQ queue manager and looked to see what the thing did a week ago and I don't see any difference here.
Gigantic letters about the size possibly getting ridiculously large and you better plan for it might have fixed this - except in my case I did it all before those instructions were there and because it was working I never went back to re-RTFM.
Also as an aside - we weren't mapping to NFS inside this instance that had a problem. It's a toy - it runs inside a single docker container with the logs going into a virtualized piece of docker disk. Restarting the containers (sort of) fixed it. NFS at least allows us to slap on something - probably nagios - to monitor the usage at a more granular level and to alert someone that gets paid to allocate more disk that they need to do so.
2. The peer ledger(s) are expected to be unconstrained. They are well known to be the repository for the sum of the transactions to the ledger for any channel. They'd better have everything...
Although TBH I never considered that the orderers would also suffer from the same issue. I figured that they would also prune after the Tx was committed to the peers.
So let me flip this around. What's BMX going to do here? Doesn't it (and all the other providers so don't think I'm singling one out) constrain the amount of storage allocated per node/network/org inside the IFLs? I'm pretty sure there's a per account limit to both compute and storage.
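A minimal cron-able sketch of the kind of usage check mentioned above, assuming the default `/tmp/kafka-logs` data dir from the sample image (the budget default and the WARN wiring are placeholders; point them at whatever pages your ops team):

```shell
# Alert when the Kafka data dir nears its disk budget.
# KAFKA_DIR and BUDGET_KB defaults are assumptions; override via environment.
KAFKA_DIR="${KAFKA_DIR:-/tmp/kafka-logs}"
BUDGET_KB="${BUDGET_KB:-41943040}"   # ~40 GiB, matching the box described above
used_kb=$(du -sk "$KAFKA_DIR" 2>/dev/null | cut -f1)
used_kb="${used_kb:-0}"              # dir may not exist yet
if [ "$used_kb" -gt "$BUDGET_KB" ]; then
  echo "WARN: $KAFKA_DIR at ${used_kb} KB exceeds budget of ${BUDGET_KB} KB"
else
  echo "OK: $KAFKA_DIR at ${used_kb} KB of ${BUDGET_KB} KB"
fi
```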
i am using fabric master branch. i modified sampleconfig/orderer.yaml and core.yaml, added `PKCS11` options for GENERAL.BCCSP with `default: SW`. but running orderer will prompt an error:
`2018-11-20 11:24:14.269 CST [orderer.common.server] Main -> ERRO 001 failed to parse config: Error unmarshaling config into struct: 1 error(s) decoding:`
> What's BMX going to do here? Doesn't it (and all the other providers so don't think I'm singling one out) constrain the amount of storage allocated per node/network/org inside the IFLs? I'm pretty sure there's a per account limit to both compute and storage.
I would expect them to handle it the same way they handle the unbounded ledger on the peer or the orderer.
Has joined the channel.
@kostas @jyellick I have two questions, hope to get an answer.
The first: when I want to update the channel, it needs A and B to sign it. Do the signatures have to be in a particular order? If so, how can I find out that order?
The second: when I want to update the channel, I am modifying two parts, the ordererAddresses and adding an org, so how can I find out who should sign it?
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=FdHYGDQGF27XvRqoD) I meant with regards to the size limits. I can't see any provider allowing unconstrained growth for a fixed monthly fee.
@jyellick can you give me an answer
@asaningmaxchain123 the order of signatures does not matter. if you want to modify two parts of the channel network config you must satisfy both mod_policies
so how can I work out which mod_policies apply?
the configtxlator tool just gives me the updated payload, so I should parse it to find out
Hi all! Where can I find the default channel policies? I try to create a channel with a custom `/Channel/Admins` policy, but the validator requests signatures following an `ImplicitMetaPolicy`, which I didn't create, because my policy is a `SignaturePolicy`
@asaningmaxchain123
> so how can i to calculate the mod_policies?
The `mod_policy` of each element is specified in the channel config. By default, orderer elements (like batch size, orderer addresses, etc.) require that the orderer admin signs, application elements (like ACLs, and membership) require that a quorum of application admins sign, and individual org elements (like MSPs, anchor peers) require only a signature of that org's admin.
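The defaults described here are built from Fabric's `ImplicitMetaPolicy` type, which is satisfied once enough of its sub-policies (ANY, ALL, or MAJORITY of them) are satisfied. A minimal Python sketch of that threshold logic (illustrative only, not Fabric's actual API; the real implementation lives in `common/policies`):

```python
# Sketch of ImplicitMetaPolicy-style evaluation. The rule names (ANY, ALL,
# MAJORITY) match Fabric's; everything else is illustrative.

def threshold(rule: str, n: int) -> int:
    """Number of satisfied sub-policies required for a given rule."""
    if rule == "ANY":
        return 1
    if rule == "ALL":
        return n
    if rule == "MAJORITY":
        return n // 2 + 1  # strict majority
    raise ValueError(f"unknown rule: {rule}")

def evaluate(rule: str, sub_policy_results: list) -> bool:
    """The meta policy passes when enough sub-policies pass."""
    satisfied = sum(1 for ok in sub_policy_results if ok)
    return satisfied >= threshold(rule, len(sub_policy_results))

# E.g. a default like /Channel/Application/Admins is MAJORITY over the
# per-org Admins policies: two of three org admins signing is enough.
print(evaluate("MAJORITY", [True, True, False]))  # True
```

This is also why extra signatures are harmless: once the threshold is met, additional satisfied sub-policies don't change the outcome.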
@iamdm Please look at https://github.com/hyperledger/fabric/blob/release-1.3/sampleconfig/configtx.yaml which enumerates all of the default policies. You may also decode your channel config using `configtxlator` and investigate them directly.
@jyellick thank you, Jason, but I want to create the channel another way :laughing: I'm trying to construct the channel in Go, building `common.Groups` etc. New channel creation is a `ConfigUpdate`, right? Does it try to calculate a deltaSet between my channel config and something?
I got next error: `2018-11-20 14:00:13.756 UTC [orderer/common/broadcast] Handle -> WARN 107d [channel: mychannel-ddddddd] Rejecting broadcast of config message from 10.1.0.1:60320 because of error: error authorizing update: error validating DeltaSet: policy for [Group] /Channel not satisfied: Failed to reach implicit threshold of 2 sub-policies, required 1 remaining`
@iamdm Yes, so, the process is described in more detail [here](https://hyperledger-fabric.readthedocs.io/en/release-1.3/configtx.html#channel-creation), but essentially, if the orderer receives a config update for a channel which does not exist, it generates an ephemeral channel configuration based on the content of the orderer system channel, and applies the update to create the new channel (if the update is authorized and successful).
The content of the ephemeral configuration is roughly the /Channel and /Channel/Orderer groups from the orderer system channel, and the application orgs taken from the /Channel/Consortiums/
The place where you as a submitter have free rein to modify is the /Channel/Application section
Modifying any of the other groups will require additional signatures.
@jyellick does it mean that if I want to modify the `/Channel/Admins` policy, I should do it in the system channel?
You may do it during your channel creation, but you must include a signature from the orderer admin as well
I would note, the default policies are there for a good reason, they are of course configurable so that assorted use cases may be met, but more often than not, only the /Channel/Application policies need modification. What is your end goal?
I try to create channel using only orderer MSP
Is it possible?
If you add the orderer org to a consortium, you may. Only consortium members are authorized to create channels.
Can i create channel by orderer MSP, but not include it in channel?
The orderer MSP is always included in channels, as it is required to validate the block signatures from the orderer.
You may modify the channel creation policy to allow creation by the orderer users without making them application group members.
What is your end goal?
Orderer MSP included in `/Channel/Orderer` section. Can I create channel with Orderer MSP, but don't include it in `/Channel/Application` section?
The policy which governs who may create channels is stored in the /Consortiums/
Keep in mind, if you did this, you would also need to include the consortium member admins in this policy, or break the standard channel creation flow.
The nice thing about the 'implicit' policies is that they are relatively maintenance free. As you add and remove members to/from the consortium, they continue to operate as you want/expect. By explicitly including the orderer org via a signature policy, you will have to maintain this policy as you add and remove consortium members. (Not the end of the world, but something to be aware of)
@jyellick I created new consortium and set `ChannelCreationPolicy` using SignaturePolicy for orderer msp. It means that only orderer msp can create channels with this consortium
Note, that you must set this through `configtxlator`, not `configtx.yaml`, this is a Value embedding a policy, not a policy
But yes
I' m trying to create new channel using orderer msp and consortium, but i have troubles with `/Channel/Admins` :(
If it is the /Channel/Admins policy, then you have modified an element in the /Channel group
What element did you modify?
I'm trying to set new `Admins` policy for new channel
Ah, I see the problem. There is no Admins policy defined for the application group yet, so it is not possible to satisfy the Channel Admins policy.
What is your end goal? Do you only want the orderer org to have control over the /Channel level elements?
@jyellick this is how my config update looks like: https://pastebin.com/L3TdYKQY
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=HpEH4ZszG3PjdgTEH) @jyellick Yes, this is what I want
Yes, I understand. It is not possible today using the default policies to modify the /Channel/Admins policy during channel creation. However, you may accomplish your goal much more simply by modifying the /Channel/Admins policy in the orderer system channel to be what you desire. All new channels will automatically inherit this value.
@jyellick if I add more signatures than needed to modify one element, will Fabric fail?
I have questions regarding Kafka/ZooKeeper security. Is there any document that describes setting up TLS between Kafka and the orderer (I have only found a sample on JIRA using openssl)? Also, should SASL SCRAM be configured, and how would the orderer authenticate itself? And how should I secure ZooKeeper?
@LazarLukic Here is a good resource to get you started https://docs.confluent.io/3.2.0/kafka/security.html
@asaningmaxchain123 In general, Fabric ignores extra signatures once a policy has been satisfied
Hi @jyellick
I'm running fabric 1.3.0 and getting warnings about the default policy emission when generating the genesis block:
```
bootstrap.testOrg.com | 2018-11-22 14:13:50.490 UTC [common/tools/configtxgen/encoder] NewChannelGroup -> WARN 002 Default policy emission is deprecated, please include policy specifications for the channel group in configtx.yaml
bootstrap.testOrg.com | 2018-11-22 14:13:50.490 UTC [common/tools/configtxgen/encoder] NewOrdererGroup -> WARN 003 Default policy emission is deprecated, please include policy specifications for the orderer group in configtx.yaml
bootstrap.testOrg.com | 2018-11-22 14:13:50.490 UTC [common/tools/configtxgen/encoder] NewOrdererOrgGroup -> WARN 004 Default policy emission is deprecated, please include policy specifications for the orderer org group orderer.testOrg.com in configtx.yaml
bootstrap.testOrg.com | 2018-11-22 14:13:50.491 UTC [common/tools/configtxgen/encoder] NewOrdererOrgGroup -> WARN 005 Default policy emission is deprecated, please include policy specifications for the orderer org group testOrg.com in configtx.yaml
```
could you please explain what `default policy emission` is and how to get rid of these warnings?
`bootstrap.testOrg.com` is a `hyperledger/fabric-tools` container
What policy regulates chaincode instantiation?
`/Channel/Writers`?
Has joined the channel.
How can I tell whether peer nodes are connected?
Has joined the channel.
**Bug on prod, please help!**
After several months of stable operation, the Fabric orderer stopped working.
Orderer logs:
```
enqueue -> ERRO 561c9 [channel: innochannel] cannot enqueue envelope because = kafka server: Tried to send a message to a replica that is not the leader for some partition. Your metadata is out of date.
2018-11-26 13:37:18.220 UTC [orderer/common/broadcast] Handle -> WARN 561ca [channel: innochannel] Rejecting broadcast of normal message from 192.168.128.1:51658 with SERVICE_UNAVAILABLE: rejected by Order: cannot enqueue
```
Kafka broker logs:
https://imgur.com/tKho8CR
I did not change anything in the settings, the error occurred on the production system, which has been used for a long time.
I restarted one Kafka broker to force a leader election, but it did not work. Now I see this in the Kafka logs:
```
2018-11-26 16:24:51.312 UTC [orderer/consensus/kafka/sarama] tryRefreshMetadata -> DEBU 3ac client/metadata found some partitions to be leaderless
2018-11-26 16:24:51.312 UTC [orderer/consensus/kafka] try -> DEBU 3ad [channel: innochannel] Need to retry because process failed = kafka server: In the middle of a leadership election, there is currently no leader for this partition and hence it is unavailable for writes.
```
Kafka has not been able to choose a leader for an hour. How do I fix it?
PS: And it's not a problem with log pruning; I set `KAFKA_LOG_RETENTION_MS=-1`.
@gravity This is simply warning you that you are using an out of date `configtx.yaml`. Simply pull the latest `configtx.yaml` and those errors will go away.
@VadimInshakov It sounds like your Kafka cluster is not healthy for some reason. I'd suggest you investigate your Kafka broker logs as well as your zookeeper logs to see if there are any errors.
@jyellick yes, orderer can't find leader, but I don't know why
You can try connecting to the Kafka brokers with the kafka sample clients, but I suspect they will have a similar error. I believe your problem has nothing to do with the orderer, rather the state of your Kafka cluster.
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=6eHrcjikHgDiMuuJo) @jyellick thanks
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=kqvc5mJqTJY6gtzJB) Wanna bet whether kafka ran out of disk in the container?
Has joined the channel.
Has joined the channel.
Has joined the channel.
Has joined the channel.
Hello. I use the kafka orderer type to set up my HLF version 1.2. I want to know: what's the difference between one orderer and three orderers?
@githubcpc , fault tolerance in block generation, and load balancing.
Okay.I get it.Thanks
I have a problem bringing up the network using Kafka. I am running 2 CAs, 2 orgs, 4 peers, 1 orderer, 4 Kafka brokers, and 3 ZooKeepers. The network comes up with all the required docker containers and the e2e scenario also works fine, but when I install chaincode (I am using Composer), I get the following error:
```
✖ Installing business network. This may take a minute...
Error: Error trying install business network. Error: No valid responses from any peers.
Response from attempted peer comms was an error: Error: 2 UNKNOWN: access denied: channel [] creator org [Org2MSP]
Response from attempted peer comms was an error: Error: 2 UNKNOWN: access denied: channel [] creator org [Org2MSP]
Command failed
```
@jyellick I ran into a scenario where the Kafka log says: FATAL [ReplicaFetcher replicaId=0, leaderId=3, fetcherId=0] Exiting because log truncation is not allowed for partition btic-dedicated-0, current leader's latest offset 53004 is less than replica's latest offset 53017 (kafka.server.ReplicaFetcherThread)
This caused 2 of the Kafka brokers to shut down unrecoverably. Below are the steps we followed:
1. We had 7 channels in the network and we tried removing an organization from one of the channels, and that's when this problem started. This may not be related to this activity.
2. We checked the logs and found a "service unavailable" error from the orderer. Then we looked in the Kafka logs and found this message:
FATAL [ReplicaFetcher replicaId=0, leaderId=3, fetcherId=0] Exiting because log truncation is not allowed for partition btic-dedicated-0, current leader's latest offset 53004 is less than replica's latest offset 53017 (kafka.server.ReplicaFetcherThread). We tried restarting the exited Kafka containers, but they repeatedly shut down again.
3. Then we stopped and removed all of the Kafka, ZooKeeper, and orderer containers along with their volumes, and recreated the containers.
4. After this we created the same channel on the orderer again and then directly invoked the chaincode and queried it without joining the peers, and it was successful.
5. Now I was able to get all of the data that was stored earlier on the channel ledger from the peer. I was also able to fetch block 0 for that channel.
6. My question is: since I didn't join the peers to the channel after recreating the containers and creating the same channel on the orderer, how am I still able to invoke and query data on the peers? Is it a feature or a bug?
7. Another question: how do we recover Kafka once we hit this issue? I have read that there is no easy way once Kafka brokers get out of sync and that network recovery is quite complicated, but is there any documentation or process for doing it the hard way?
Has joined the channel.
I am getting this error frequently while executing transactions using the Node client.
The first request fails with this error:
```
{ Error: 14 UNAVAILABLE: TCP Write failed
at Object.exports.createStatusError (/app/node_modules/grpc/src/common.js:87:15)
at Object.onReceiveStatus (/app/node_modules/grpc/src/client_interceptors.js:1188:28)
at InterceptingListener._callNext (/app/node_modules/grpc/src/client_interceptors.js:564:42)
at InterceptingListener.onReceiveStatus (/app/node_modules/grpc/src/client_interceptors.js:614:8)
at callback (/app/node_modules/grpc/src/client_interceptors.js:841:24)
code: 14,
metadata: Metadata { _internal_repr: {} },
details: 'TCP Write failed' }
```
when I request again the error is:
```
error: [Orderer.js]: sendBroadcast - on error: "Error: 14 UNAVAILABLE: TCP Write failed\n at Object.exports.createStatusError (/app/node_modules/grpc/src/common.js:87:15)\n at ClientDuplexStream._emitStatusIfDone (/app/node_modules/grpc/src/client.js:235:26)\n at ClientDuplexStream._receiveStatus (/app/node_modules/grpc/src/client.js:213:8)\n at Object.onReceiveStatus (/app/node_modules/grpc/src/client_interceptors.js:1290:15)\n at InterceptingListener._callNext (/app/node_modules/grpc/src/client_interceptors.js:564:42)\n at InterceptingListener.onReceiveStatus (/app/node_modules/grpc/src/client_interceptors.js:614:8)\n at /app/node_modules/grpc/src/client_interceptors.js:1110:18"
sending response inside catch
Error: SERVICE_UNAVAILABLE
at ClientDuplexStream.broadcast.on (/app/node_modules/fabric-client/lib/Orderer.js:172:22)
at emitOne (events.js:116:13)
at ClientDuplexStream.emit (events.js:211:7)
at ClientDuplexStream._emitStatusIfDone (/app/node_modules/grpc/src/client.js:236:12)
at ClientDuplexStream._receiveStatus (/app/node_modules/grpc/src/client.js:213:8)
at Object.onReceiveStatus (/app/node_modules/grpc/src/client_interceptors.js:1290:15)
at InterceptingListener._callNext (/app/node_modules/grpc/src/client_interceptors.js:564:42)
at InterceptingListener.onReceiveStatus (/app/node_modules/grpc/src/client_interceptors.js:614:8)
at /app/node_modules/grpc/src/client_interceptors.js:1110:18
```
And when I send the request a third time, it executes the transaction successfully.
I don't know what the problem could be. I am using Fabric 1.3 with fabric-shim 1.3.
Hello
I've set up a fabric network v1.3.0, added orderer capability
```
Orderer: &OrdererCapabilities
    V1_1: false
    V1_2: false
    V1_3: true
```
to `configtx.yaml`
but on orderer startup I'm getting the error:
```
panic: [channel genesis] config requires unsupported orderer capabilities: Orderer capability V1_3 is required but not supported: Orderer capability V1_3 is required but not supported
```
Is the capability configuration correct in the `configtx.yaml`?
@sushmitha
> access denied: channel
This sounds like you are not using a valid credential. You can check the peer logs for more detail. This is unrelated to Kafka.
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=eY4bK4L4c3Cj496Z7) Please help
Has joined the channel.
Hi, I have a general question about network design. Let's assume that I have one channel with two organisations in it. Is it possible for each of the organisations to host its own orderer?
Has joined the channel.
Has joined the channel.
Hello. I am running a Hyperledger Fabric network with 4 Kafkas, 1 peer, and 2 orderers for my dev env; they are running on a single VM. Yesterday the VM ran out of memory and 3 of the Kafkas died. The last one threw an error that a leader election failed. If a leader Kafka dies, one of the followers should become the leader immediately, so even with one Kafka the network should work fine. I tried to reproduce the issue by killing Kafka containers, but now everything is working fine. Do you have an idea what might cause the problem?
@migrenaa read operations should work with only 1 Kafka, but write operations should fail. If not, your min ISR setting is not safe.
Hello. Before generating the genesis block and the channel creation transaction, I added intermediatecerts and tlsintermediatecerts dirs with the appropriate certificates to the msp directory of one of my organizations (not the orderer). Then I generated the configuration files and started the network. The orderer failed with this error:
```
2018-11-30 13:49:45.971 UTC [orderer/commmon/multichannel] newLedgerResources -> PANI 05f Error creating channelconfig bundle: initializing channelconfig failed: could not create channel Consortiums sub-group config: setting up the MSP manager failed: CA Certificate did not have the Subject Key Identifier extension, (SN: 579386462760088976293046332568118716706373949620)
panic: Error creating channelconfig bundle: initializing channelconfig failed: could not create channel Consortiums sub-group config: setting up the MSP manager failed: CA Certificate did not have the Subject Key Identifier extension, (SN: 579386462760088976293046332568118716706373949620)
goroutine 1 [running]:
github.com/hyperledger/fabric/vendor/go.uber.org/zap/zapcore.(*CheckedEntry).Write(0xc4200d3970, 0x0, 0x0, 0x0)
/opt/gopath/src/github.com/hyperledger/fabric/vendor/go.uber.org/zap/zapcore/entry.go:229 +0x4f4
github.com/hyperledger/fabric/vendor/go.uber.org/zap.(*SugaredLogger).log(0xc42000e2a0, 0x4, 0xe14c6d, 0x27, 0xc420075958, 0x1, 0x1, 0x0, 0x0, 0x0)
/opt/gopath/src/github.com/hyperledger/fabric/vendor/go.uber.org/zap/sugar.go:234 +0xf6
github.com/hyperledger/fabric/vendor/go.uber.org/zap.(*SugaredLogger).Panicf(0xc42000e2a0, 0xe14c6d, 0x27, 0xc420075958, 0x1, 0x1)
/opt/gopath/src/github.com/hyperledger/fabric/vendor/go.uber.org/zap/sugar.go:159 +0x79
github.com/hyperledger/fabric/common/flogging.(*FabricLogger).Panicf(0xc42000e2a8, 0xe14c6d, 0x27, 0xc420075958, 0x1, 0x1)
/opt/gopath/src/github.com/hyperledger/fabric/common/flogging/zap.go:74 +0x60
github.com/hyperledger/fabric/orderer/common/multichannel.(*Registrar).newLedgerResources(0xc4201e6360, 0xc4200a3540, 0xc4200a3540)
/opt/gopath/src/github.com/hyperledger/fabric/orderer/common/multichannel/registrar.go:256 +0x2ea
github.com/hyperledger/fabric/orderer/common/multichannel.NewRegistrar(0xea36a0, 0xc420376620, 0xc420167c20, 0xe9b060, 0x15a78b0, 0xc42000eca0, 0x1, 0x1, 0x0)
/opt/gopath/src/github.com/hyperledger/fabric/orderer/common/multichannel/registrar.go:142 +0x312
github.com/hyperledger/fabric/orderer/common/server.initializeMultichannelRegistrar(0xc4200e0580, 0xe9b060, 0x15a78b0, 0xc42000eca0, 0x1, 0x1, 0x0)
/opt/gopath/src/github.com/hyperledger/fabric/orderer/common/server/main.go:258 +0x250
github.com/hyperledger/fabric/orderer/common/server.Start(0xdf7a5a, 0x5, 0xc4200e0580)
/opt/gopath/src/github.com/hyperledger/fabric/orderer/common/server/main.go:96 +0x226
github.com/hyperledger/fabric/orderer/common/server.Main()
/opt/gopath/src/github.com/hyperledger/fabric/orderer/common/server/main.go:75 +0x1d6
main.main()
/opt/gopath/src/github.com/hyperledger/fabric/orderer/main.go:15 +0x20
```
Without intermediatecerts and tlsintermediatecerts directories Orderer starts successfully. Any suggestions?
@krabradosty - how did you generate the intermediate root certs?
Started a new CA server:
```
fabric-ca-server start -b admin:adminpw --tls.enabled -u https://admin:adminpw@tlsca.org1.example.com:7054 --intermediate.tls.certfiles /data/tlsca/tlsca.org1.example.com-cert.pem --csr.hosts int.ca.org1.example.com
```
The new server has started without errors and I copied `$FABRIC_CA_HOME/ca-cert.pem` and `$FABRIC_CA_HOME/tls-cert.pem` to the msp directory of the org1
Ok ... so from this my best guess is that the root CA did not actually issue the intermediate certificates
https://hyperledger-fabric-ca.readthedocs.io/en/latest/users-guide.html#enrolling-an-intermediate-ca
For an MSP, the intermediate cert(s) must be issued by the root cert in cacerts or tlscacerts
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=7dHg6TYn5k5jRGtJ5) @mastersingh24 Yes, verification of the signature of intermediate tls certificate failed. Thanks for help.
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=JPFk7TSdeKmShi7An) @jyellick @kostas @mastersingh24 any idea on this scenario/issue
@javrevasandeep - did you actually invoke the chaincode and submit a new transaction to the orderer, or did you just invoke a query? If you only invoked a query, then this will work because the peers still have the ledgers for the channels from when you joined them initially. I suspect that you might get into an issue if you actually try to submit a new transaction to the orderer.
You might have been able to restart your brokers if you set `unclean.leader.election.enable=true` although this might result in data loss on the brokers.
I believe there were a few known issues in Kafka 0.10.x which might have caused this. They have *supposedly* been resolved in Kafka v0.11.x and later
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=38hLSeBbNnjPcFNtu) @mastersingh24 I was able to invoke the chaincode and store new data on the ledger and also was able to query new as well as old data. Yes I am using 0.10.x version. So is there any documentation on how to upgrade to 0.11.x without any data loss and impact on the existing network
Hmm ... that's a bit odd ... but I guess I can see how that's possible.
In terms of upgrading Kafka, Kafka has very good documentation on this
Has joined the channel.
Hi, is there any doc on how multiple orderer nodes interact with each other to share the ledger, and how channel management across ordering nodes works exactly?
@ArpitKhurana1 in kafka mode, they don't really interact with each other. Think of orderers as a replicated state machine: if the input (incoming envelopes) is the same and the current state (ledger) is the same, then the output (updated ledger) is guaranteed to be the same.
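The replicated-state-machine point can be illustrated with a toy sketch: two "orderers" that apply the same deterministic batching rule to the same input stream end up with identical ledgers. (Illustrative only; real orderers cut blocks on batch size and timeout, with Kafka providing the shared input order.)

```python
# Toy illustration: orderers as a replicated state machine. Given the same
# deterministic cut rule and the same envelope stream, every replica builds
# the same ledger, so they never need to talk to each other.

class ToyOrderer:
    def __init__(self, batch_size: int):
        self.batch_size = batch_size
        self.pending = []
        self.ledger = []  # list of blocks, each a tuple of envelopes

    def receive(self, envelope: str):
        self.pending.append(envelope)
        if len(self.pending) >= self.batch_size:  # deterministic cut rule
            self.ledger.append(tuple(self.pending))
            self.pending = []

stream = ["tx1", "tx2", "tx3", "tx4"]  # the shared, ordered input
a, b = ToyOrderer(2), ToyOrderer(2)
for env in stream:
    a.receive(env)
    b.receive(env)

print(a.ledger == b.ledger)  # True: same input, same state, same output
```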
Got it thanks @guoger
How can the orderer verify the transactions? What do the endorsers sign???
@maxrobot The orderers explicitly do _not_ validate endorsements. They simply validate that the client is authorized to submit transactions.
An endorsement contains a certificate and signature
what is it that the endorsers sign to create the signature? The transaction payload? But what is the payload?
The endorsement is a signature over the proposal response: https://github.com/hyperledger/fabric/blob/b350b2697908ffcd85c9d1224f69c41393a3aafc/protos/peer/proposal_response.proto#L27-L54
if I get a block, I have three sections: metadata, header, and data. In the data section I have an array corresponding to each individual transaction. Now, looking at the element for a specific transaction, I will have a creator and a payload with some endorsements...
what is it that the endorsers have signed? the proposal_response_payload?
proposal_hash???
Yes, the bytes of the `proposal_response_payload` concatenated with endorser's identity
aaah ok that's interesting
from the json block?
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=xhRR7NEf6fLrgxzeR) Hi @jyellick
and local peer using peer's private key to sign this concatenation?
@maxrobot
> from the json block?
I don't understand, the transaction format is encoded in binary protobuf. The endorser's identity is the bytes included in the `endorser` field of the endorsement
@Ryan2 Yes, the peer uses its private key to sign this concatenation to produce the endorsement.
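To make the construction concrete, here is a self-contained sketch of signing and verifying the concatenation of the proposal response payload bytes and the endorser's serialized identity. HMAC stands in for the peer's ECDSA key purely to keep the example dependency-free; Fabric actually uses ECDSA keys managed by the peer's MSP, and the key material below is hypothetical:

```python
import hashlib
import hmac

# Sketch of endorsement signing/checking. The signed message is
# proposal_response_payload bytes concatenated with the endorser's serialized
# identity. HMAC is a stand-in for the peer's ECDSA key (illustrative only).

peer_key = b"peer0-org1-private-key"  # hypothetical key material

def endorse(proposal_response_payload: bytes, endorser_identity: bytes) -> bytes:
    message = proposal_response_payload + endorser_identity
    return hmac.new(peer_key, message, hashlib.sha256).digest()

def verify(payload: bytes, identity: bytes, signature: bytes) -> bool:
    return hmac.compare_digest(endorse(payload, identity), signature)

sig = endorse(b"payload-bytes", b"serialized-identity")
print(verify(b"payload-bytes", b"serialized-identity", sig))  # True
print(verify(b"tampered", b"serialized-identity", sig))       # False
```

Note that because the identity bytes are part of the signed message, one endorser's signature cannot be replayed under another endorser's certificate.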
If I request a block from a peer,
which returns `mychannel.block`, then I use `configtxgen -inspectBlock mychannel.block` to convert it into JSON format
Ah, yes, `configtxlator` can be used to decode the block into a more human readable JSON form, but obviously signature checking would only work against the binary form
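For reference, the `configtxlator` invocation to decode a fetched block into JSON looks roughly like this (the channel and file names are illustrative):

```sh
configtxlator proto_decode --type common.Block --input mychannel.block --output mychannel.json
```

As noted, signature verification must be done against the original binary block, not against the decoded JSON, since the expansion of byte fields is not safely invertible.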
ok that makes sense
I would like to be able to recreate the payload off-chain to verify the signature I find in the block
is it not possible to convert
as a corollary to my previous questions, what does the orderer sign when it propagates the ordered blocks back to the leader peers?
If you wish to do signature verification, you'll need to use the protobufs directly. The `configtxlator` tools automatically expand byte fields into their underlying message types which is in general not safely invertible.
> as a corollary to my previous questions, what does the orderer sign when it propagates the ordered blocks back to the leader peers?
The orderer signs in the block metadata; the signature covers the block header, which in turn covers the block data.
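A small sketch of why a signature over the header covers the data: the header embeds the hash of the block data, and each header embeds the hash of the previous header, so one signature transitively covers the whole chain. (Fabric actually hashes an ASN.1 encoding of the header; plain concatenation is used here for brevity, so the exact bytes are illustrative.)

```python
import hashlib

# Why signing the header is enough: the header contains data_hash, and each
# header contains the previous header's hash. Tampering with any block's data
# breaks every hash (and signature) from that block onward.

def data_hash(transactions) -> bytes:
    return hashlib.sha256(b"".join(transactions)).digest()

def header_hash(number: int, prev_hash: bytes, d_hash: bytes) -> bytes:
    # Fabric uses an ASN.1 encoding of (number, prev_hash, data_hash);
    # simple fixed-width concatenation keeps this sketch short.
    return hashlib.sha256(number.to_bytes(8, "big") + prev_hash + d_hash).digest()

h0 = header_hash(0, b"\x00" * 32, data_hash([b"configtx"]))
h1 = header_hash(1, h0, data_hash([b"tx1", b"tx2"]))

# Changing block 1's data changes its data hash, so a signature over the
# original header (or any later header) would no longer verify.
tampered = header_hash(1, h0, data_hash([b"tx1", b"evil"]))
print(h1 != tampered)  # True
```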
@jyellick where is it possible to find this written in the documentation?
Has joined the channel.
@MaxHuang I'm not certain that these implementation details are anywhere in the formal documentation. The high-level facts (that the endorsers sign the transaction result, and that the orderer signs blocks) are all there, but if you want to know exactly how the signatures are constructed and checked, your best bet is to look at the code.
Has joined the channel.
@jyellick ok great what part of the code should I check?
For the orderer side you can start looking at invocations of https://github.com/hyperledger/fabric/blob/b350b2697908ffcd85c9d1224f69c41393a3aafc/orderer/common/multichannel/blockwriter.go#L176
that is super helpful thank you
I am however surprised that the documentation doesn't exist. Is there no yellow paper like in Ethereum? If I want to build on top of or fork Hyperledger, it is very difficult to get started
Most of these details likely do exist in documents somewhere, though I'm unaware of any consolidated resource. In my experience, reading code is always more reliable than reading documentation.
true, but the code is implementing a protocol, right? The code *may not* be implementing said protocol correctly, so there must be some architecture documentation somewhere to guide the devs in what to deploy
Has joined the channel.
In fabric-samples/basic-network/docker-compose.yaml, what does the following orderer mount do? ./crypto-config/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/:/etc/hyperledger/msp/peerOrg1
And should i change this folder name if i have a different organisation
IINM, it's used by the orderer to authenticate requests sent by peers, and yes, you should alter them if you are using a different name
@guoger What is the naming scheme? peerOrg1 seems like Org1 is msp
Hi, in one Fabric blockchain network, all channels share the same ordering service. Can each channel have its own parameters, like block time and the number of transactions in one block? Thanks
@qsmen yes
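For reference, those per-channel values come from the Orderer section of each channel's config. A hedged configtx.yaml sketch follows (the numbers are illustrative assumptions, not recommendations); because every channel carries its own copy of this config, the values can differ per channel and be changed later via a channel config update:

```yaml
Orderer: &OrdererDefaults
  OrdererType: kafka
  # How long to wait before cutting a block (the "block time").
  BatchTimeout: 2s
  BatchSize:
    # Maximum number of transactions in one block.
    MaxMessageCount: 100
    AbsoluteMaxBytes: 10 MB
    PreferredMaxBytes: 2 MB
```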
@ArpitKhurana1 it's user defined, i don't think we check the name scheme
obviously some exotic chars are not allowed i think, e.g. '/', '.'
I have tried many combinations like (peerlp, peerLp, lpMSP), but none works. Does it require any variable to be set? (lp is the org, lpMSP is the MSP ID)
Has joined the channel.
@jyellick i can't pull block from orderer
```2018-12-05 14:09:49.878 UTC [msp] DeserializeIdentity -> INFO 0a6 Obtaining identity
2018-12-05 14:09:49.882 UTC [common/deliver] Handle -> WARN 0a7 Error reading from 172.18.0.18:36382: rpc error: code = Canceled desc = context canceled
2018-12-05 14:09:52.997 UTC [common/deliver] Handle -> WARN 0a8 Error reading from 172.18.0.18:36384: rpc error: code = Canceled desc = context canceled
```
@asaningmaxchain123 All that error indicates is that there is some connection error between your client and the orderer.
@jyellick can you take a look #fabric-peer-endorser-committer
Hi @jyellick
I'm running a fabric network v1.3.0 with the Kafka consensus mode.
When the network is configured and running, sometimes orderer nodes report this message in logs:
```
orderer0 | 2018-12-05 16:08:55.950 UTC [grpc] infof -> DEBU 1a7f transport: loopyWriter.run returning. connection error: desc = "transport is closing"
orderer1 | 2018-12-05 16:08:55.952 UTC [grpc] infof -> DEBU 14c2 transport: loopyWriter.run returning. connection error: desc = "transport is closing"
```
Is it something to pay attention to? Is it connected to the configuration of the orderers?
@gravity that's generally benign
(connection was interrupted by the other end)
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=SJPgd4jk5ZtgNGmq2) @guoger and what is the "other end" in this case?
could be clients submitting envelopes to orderers
@guoger ok, I see. thanks
@guoger
are you familiar with this log message from orderer nodes?
`2018-11-21 13:56:25.304 UTC [grpc] newHTTP2Transport -> DEBU 0ac grpc: Server.Serve failed to create ServerTransport: connection error: desc = "transport: http2Server.HandleStreams failed to receive the preface from client: EOF"`
it appears when orderer runs in solo mode
i feel that's when the grpc connection is established but fails to receive anything. How do you reproduce it?
@guoger one thing I know for sure is that there were no transactions. The orderer had just started, and later these logs appeared. I will try to define steps to reproduce it
hi, i'm working with an HLF network (v1.2) which contains a SOLO orderer. During a chaincode upgrade procedure, by mistake, the orderer container was re-created, resulting in losing the file ledger directory and the rest of the stuff there. Is it possible to recover from this situation, or is this a dead end?
When adding a new org to an existing channel, it is unrelated to the orderer admin, am I right? When using a new org to build a new channel, it will lead to updating of the orderer channel genesis block, so it will need the orderer admin's signature. Am I right?
the version of fabric is 1.2
in the first case, after the new org is added into the existing channel: now I want to use the newly added org to build a new channel, will it need the orderer admin's signature?
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=Tfi6hZomswk8pRp2R) @IgorSim it depends... if the ledger files are persisted, then you could just restart that container; it will load the ledger and resume from where it stopped. Otherwise, it's pretty much a dead end :(
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=PTWwF3LRsmnSYzi7t) @guoger tnx for reply...one more question, is it possible to switch ordering service in existing HLF network(that have blocks) from solo ->kafka w/o losing data?
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=6gaZz5nqgomzFY68f) @qsmen for the first part of question, yes.
for the second, to be accurate, you are not updating the genesis block (it's immutable). It requires orderer admin's sig to create new channel (of course it depends on your policy)
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=MY3yBXe2wCRnszmwz) @IgorSim we don't support that yet (it's not hard to implement, just we don't see a clear use case for that, since solo is just for dev purpose). But in the future, we will support migrating from kafka to other consensus, i.e. raft/bft
@guoger tnx for reply
@guoger ,thank you for the reply. for the second case, yes, as you said i should say update the system channel configureation.
When adding a new org to an existing channel, it is unrelated to the orderer admin. After this, I want to use the newly added org to build a new channel; will it need the orderer admin's signature?
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=fhj4dRaa86uKo9gH4) Hi @guoger
I've clarified the flow to reproduce this behavior: it happens only in solo mode right after the orderer starts. No transactions are sent to the orderer.
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=FM8zdMa4aym8Fxrh4) @qsmen i'm assuming you are using one of fabric samples, where only orderer admin is allowed to create channel by default. This is not affected by whether an org is newly added or not.
yes, indeed all channels must be created by orderers. Continuing the topic above: will I still need to update the system channel configuration when I want to build a new channel for the newly added org? I think it's unnecessary, because the orderers already know of the existence of the newly added org and its MSP when its admin submits a channel creation request.
Trying to create a channel and it's timing out intermittently in the orderer with a `Getting block time exceeded N seconds for channel xxx-chain-channel`. I've tried upping N.
Error on the orderer is
```2018-12-06 23:02:36.538 UTC [orderer/consensus/kafka] sendConnectMessage -> INFO 01d [channel: xxx-chain-channel] About to post the CONNECT message...
2018-12-06 23:02:36.583 UTC [common/deliver] deliverBlocks -> WARN 01e [channel: xxx-chain-channel] Rejecting deliver request for 10.42.20.12:61014 because of consenter error```
So it's kafka. Which is the bane of my life right now.
This unfortunately is causing the whole initialization to fail. Of course kafka was the reason it fell over in the first place, but I digress.
If you try to create the failing channel channel again you get something like this:
```
{
org.hyperledger.fabric.sdk.exception.TransactionException: Channel xxx-chain-channel orderer orderer0-orderer status returned failure code 400 (BAD_REQUEST) during orderer next:Channel xxx-chain-channel, send transaction failed on orderer OrdererClient-xxx-chain-channel-orderer0-orderer(grpc://my-server:30510). Reason: Channel xxx-chain-channel orderer orderer0-orderer status returned failure code 400 (BAD_REQUEST) during orderer next
}
```
Second error on the orderer itself is `Rejecting broadcast of config message from 10.42.20.12:61038 because of error: error authorizing update: error validating ReadSet: readset expected key [Group] /Channel/Application at version 0, but got version 1`
Surely I can't be the only person that has seen this - it's hit me 4 times in a row at varying points in my init and it's requiring me to recreate a huge network each time.
Is there some mitigation strategy I don't know about? Is there a way to delete the block that's causing such angst and get a do-over?
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=JT6BsTY3XEJyGy9qL) @qsmen but as system channel admin, if you want to grant channel creation privilege to a new org, you'll need to update policy
Has joined the channel.
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=ks5B85NzoLuBjp7vm) @guoger indeed, solo is meant for dev purposes. The issue is that we're in a 'pilot' phase where our network is running with a solo orderer (the ledger and orderer MSP are backed up). Why did we start with a solo orderer? Simply because operating a Kafka cluster (4 Kafka + 3 ZooKeeper) costs additional money and resources, especially for small startups (with tight budgets :) ) whose main effort goes to on-boarding new organizations, building the network, etc. Once there is some traction and participants are happy and see value in the blockchain network, the 'pilot' will become 'production', but we don't want to lose data. You mention the implementation isn't difficult; I would really appreciate it if you could share more details on what should be implemented and whether there is some kind of workaround?
@IgorSim you should wait for 2019 where you'll have an etcd-raft orderer
much cheaper in terms of servers, and easier to set up
also more flexible (you can add new nodes to the cluster, and remove them easily)
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=MCEzobZhDtpvHQY5E) @guoger do you mean that if I want to create a new channel, it is the orderer admin who submits the request in any case?
hi experts, can anyone pls help me with this error:
```
Failed obtaining connection: Could not connect to any of the endpoints: [orderer0:7055]
```
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=coo3THAibfGoZQjsB) @yacovm tnx for answer. Is that planned for Q1/Q2 of 2019 or Q3/Q4? Btw, an etcd-raft orderer can make our lives easier if we create a new network; the question is whether there is any (reasonable) way to switch from solo->kafka (or etcd-raft in future) on a network that contains transactions/blocks w/o losing that data?
Q1... it's almost done
there will be a migration path
from Kafka to etcdraft
the idea is that you send a special transaction to the kafka orderer
and you shut it down
when it wakes up again, it is raft :)
so i don't think migration from solo to etcdraft makes any sense.... why would you use solo in production anyway?
@IgorSim
@IgorSim ^
well, the idea was to start with solo, build the network, on-board a few organizations, get familiar with HLF and what the benefit is for them, etc. Basically, a POC.
Customers can see real value in the network, and now they want to switch to real production mode, but they are asking if it's possible to keep all the transactions in the network.
Of course, I wouldn't start with solo in production, but this is the reality and the situation we're facing. Therefore, I asked if there is some way to migrate the data, or whether we need to find some other way to 'replay' all the transactions...
so I think you can use the same migration of Kafka, to do the solo --> etcdraft migration
@tock correct me if I'm wrong
Has joined the channel.
@jyellick I am following the `balance-transfer` tutorial however I see the following error `"success":false,"message":"failed TypeError: Cannot read property 'curve' of undefined"}`
I cannot find any explanation anywhere. I have double and triple checked that I have followed the instructions correctly.
why did you tag Jason?
@yacovm he previously answered my question and I don't know any other mods on the channel
is this an issue?
a bit out of context, I'd say...
@yacovm @IgorSim currently I do not plan to support solo to raft migration, but in principle, it should be the same as kafka to raft. Once I finish the first pass on the kafka-to-raft migration, I'll take a look of what it takes. However, another option would be to start the POC with a single node kafka and single node zookeeper in two containers, as will soon be demonstrated in the fabric-samples project (see https://jira.hyperledger.org/browse/FAB-13011).
but you're not doing anything Kafka specific @tock , right ?
so it should work (in theory) for solo too no?
well, apart from checking that the consensus-type is "kafka", no ;-)
well, apart from checking that the consensus-type is "kafka", and some code in the kafka/chain.go ... but not much
that's a minor obstacle
and writing a bunch of code in the kafka/chain...
so yes
@yacovm @tock i appreciate looking into this, if you can make solo->raft migration working w/o too much trouble that would definitely make our lives easier
Has joined the channel.
are there any preliminary numbers on scaling out with etcd/raft vs kafka? max channels is what i'm particularly interested in.
@aatkddny there shouldn't be much difference cuz max channels is agnostic to consensus
Has anyone tried splitting an orderer cluster across geographically dispersed data centres for resiliency ? The kafka documentation "https://kafka.apache.org/documentation/#datacenters"
says "It is generally not advisable to run a single Kafka cluster that spans multiple datacenters over a high-latency link". Does that therefore apply to the orderers cluster ?
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=xRaLX6NrcPwWnMZPC) @guoger except there are underlying physical constraints.
The Node client got the following issue when trying to connect to the orderer, any suggestion? Thanks!
```
info: [PTE 0 util]: [getTLSCert] key: orderer, subkey: orderer0
info: [PTE 0 main]: [clientNewOrderer] orderer: grpcs://zaci-43.pok.ibm.com:5005
E1212 17:27:53.080721112 44583 ssl_transport_security.cc:1227] Handshake failed with fatal error SSL_ERROR_SSL: error:1416F086:SSL routines:tls_process_server_certificate:certificate verify failed.
E1212 17:27:54.081753813 44583 ssl_transport_security.cc:1227] Handshake failed with fatal error SSL_ERROR_SSL: error:1416F086:SSL routines:tls_process_server_certificate:certificate verify failed.
E1212 17:27:55.391927501 44583 ssl_transport_security.cc:1227] Handshake failed with fatal error SSL_ERROR_SSL: error:1416F086:SSL routines:tls_process_server_certificate:certificate verify failed.
```
Has joined the channel.
I have just modified the fabric sample first network into a new org name and started the network: I am seeing these DEBUG logs in orderer. Is it something should be handled ?
```grpc: Server.Serve failed to complete security handshake from "172.18.0.2:44144": remote error: tls: bad certificate```
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=8dXL4b7zweztHu8ks) @qizhang try regenerate crypto artifacts?
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=bTmRgQSK7jQhWc5Kp) @javapriyan crypto certs misuse?
@guoger Yeah, I have no clue what I did wrong. But after clearing the network, it worked. Thanks
@guoger Thanks. I solved this problem by changing the openssl version on the client machine. However, the following problem occurs, any suggestion?
```
info: [PTE 0 main]: [clientNewOrderer] orderer: grpcs://zaci-43.pok.ibm.com:5005
error: [Remote.js]: Error: Failed to connect before the deadline URL:grpcs://zaci-43.pok.ibm.com:5005
error: [Orderer.js]: Orderer grpcs://zaci-43.pok.ibm.com:5005 has an error Error: Failed to connect before the deadline URL:grpcs://zaci-43.pok.ibm.com:5005
error: [PTE 0 main]: Failed to create/update the channel (testorgschannel1)
error: [PTE 0 main]: Error: Failed to connect before the deadline URL:grpcs://zaci-43.pok.ibm.com:5005
at checkState (/home/root/git/src/github.com/hyperledger/fabric-test/tools/PTE/node_modules/grpc/src/client.js:720:16)
```
`zaci-43.pok.ibm.com:5005` is the orderer
Has joined the channel.
HI All,
I have deployed Hyperledger Fabric on AWS using Cello Ansible with Docker. Everything worked fine: I was able to do a transaction using Composer Playground. In the process of vertically scaling up the system, I created a bigger AWS instance using the old AMI. I have corrected all the DNS, docker and flannel settings, updated the /etc/hosts files on both VMs, and brought all the docker containers up and running. Now when I try to do a transaction, I get the error below from composer-playground:
Error: Error trying invoke business network. Error: Failed to send peer responses for transaction '02be502e532dfe5c153fa2fc5ecbb599a387834e32f4eb5b1806949335cfcd26' to orderer. Response status 'SERVICE_UNAVAILABLE
I have even tried creating a participant user via the Composer CLI but still get the same error.
I have checked all the docker logs (i.e. orderer, peer, kafka, zookeeper) but could not find the exact error.
Also, after a day, the orderer container shuts down with the error below:
```
2018-12-12 00:33:33.799 UTC [orderer/consensus/kafka] startThread -> CRIT 1a3 [channel: orderersystemchannel] Cannot set up channel consumer = kafka server: The requested offset is outside the range of offsets maintained by the server for the given topic/partition.
panic: [channel: orderersystemchannel] Cannot set up channel consumer = kafka server: The requested offset is outside the range of offsets maintained by the server for the given topic/partition.
goroutine 30 [running]:
github.com/hyperledger/fabric/vendor/github.com/op/go-logging.(*Logger).Panicf(0xc42024e0f0, 0xd1d562, 0x31, 0xc420114540, 0x2, 0x2)
	/opt/gopath/src/github.com/hyperledger/fabric/vendor/github.com/op/go-logging/logger.go:194 +0x134
github.com/hyperledger/fabric/orderer/consensus/kafka.startThread(0xc42015f8c0)
	/opt/gopath/src/github.com/hyperledger/fabric/orderer/consensus/kafka/chain.go:261 +0xb33
created by github.com/hyperledger/fabric/orderer/consensus/kafka.(*chainImpl).Start
```
Can anyone please tell me what I am missing here?
Has joined the channel.
Hello, does Fabric 1.3 support Kafka 2.0? Sarama supports it, but the doc says this: "Fabric uses the sarama client library and vendors a version of it that supports Kafka 0.10 to 1.0." Based on this, I guess there is an old version of sarama vendored which does not support it. Has anyone played with Kafka 2.0?
what is kafka 2.0? 0.9.0.0 ?
@jiribroulik
@yacovm its newer, here are the releases https://kafka.apache.org/downloads
so yeah i think it does.... you can try it
https://github.com/hyperledger/fabric/blob/master/sampleconfig/orderer.yaml#L300
set this property to be `2.11`
and it should work, i think....
@yacovm not sure, I am getting the same error kafka: error decoding packet: CRC didn't match expected 0x0 got 0xe38a6876
did you try what I suggested?
yep
hmmm @kostas any ideas?
1.4 fixes it https://github.com/hyperledger/fabric/commit/cbd917c @yacovm
How is deliver api able to utilize grpc to create a two-way client-server model? We know grpc supports bidirectional communication but typically it's a client-server setup wherein client generates requests and server responds to those requests and all this can happen in parallel on a single tcp connection. In fabric though, orderer dispatches the blocks to the peers which are not a direct consequence of the request coming from the peer.
So the peer establishes a grpc connection with the orderer (and invokes the deliver api), sends some data, gets the response. This is typical grpc, but after some time the orderer is able to use the same connection to send a block to the peer as and when a new block is generated. How does the orderer know when to send the block to the peer? It must be keeping the connection details for all the peers in memory somewhere to be able to use those when dispatching new blocks, right?
hi experts here, suppose I want to set up a fabric network with kafka as the consensus. How many hosts should I prepare for this consensus, and how should I configure each host? Thank you.
@yacovm have you ever hit Unable to decode public/private key pair: tls: failed to find any PEM data in certificate input when enabling KAFKA_TLS on orderer? Even though I have valid self-signed pem keypair
kafka.txt
@qsmen ^^ here is an example for docker
@qsmen see https://hyperledger-fabric.readthedocs.io/en/latest/kafka.html for setting up kafka ordering service
The orderer got this problem when the client tries to connect to the orderer: `[grpc] handleRawConn -> DEBU 2fe grpc: Server.Serve failed to complete security handshake from "9.47.152.79:52258": read tcp 172.18.0.4:5005->9.47.152.79:52258: read: connection reset by peer`. Any advise? Thanks!
@guoger @jiribroulik , Thank you very much. I will read them carefully.
@qizhang could you provide orderer logs? i suspect that tls is not enabled on server side, but client expects secure connection
Has joined the channel.
Hello,
I have been trying to run our development environment network locally but am unable to create the channel after the user is created (this is similar to the fabric network sample). In Postman, I get the error below:
```
{
  "success": false,
  "message": "Failed to initialize the channel: Error: SERVICE_UNAVAILABLE"
}
```
Below is the error that I get in the application logs:
```
(node:67700) [DEP0079] DeprecationWarning: Custom inspection function on Objects via .inspect() is deprecated
E1220 13:24:57.070129000 4362040768 ssl_transport_security.cc:1229] Handshake failed with fatal error SSL_ERROR_SSL: error:1416F086:SSL routines:tls_process_server_certificate:certificate verify failed.
E1220 13:24:57.072593000 4362040768 ssl_transport_security.cc:1229] Handshake failed with fatal error SSL_ERROR_SSL: error:1416F086:SSL routines:tls_process_server_certificate:certificate verify failed.
error: [Orderer.js]: sendBroadcast - on error: "Error: 14 UNAVAILABLE: Connect Failed\n at Object.exports.createStatusError (/
```
my client failed to connect to the orderer and the following msgs are shown in the orderer log, anyone can help?Thanks!
```
2018-12-22 01:40:31.730 UTC [core.comm] ServerHandshake -> ERRO 008 TLS handshake failed with error EOF {"server": "Orderer", "remote address": "9.47.152.36:44934"}
2018-12-22 01:40:32.729 UTC [core.comm] ServerHandshake -> ERRO 009 TLS handshake failed with error EOF {"server": "Orderer", "remote address": "9.47.152.36:44936"}
2018-12-22 01:40:34.427 UTC [core.comm] ServerHandshake -> ERRO 00a TLS handshake failed with error EOF {"server": "Orderer", "remote address": "9.47.152.36:44938"}
```
Is there any estimate for a BFT consensus version of Fabric?
Has joined the channel.
Hello All,
When I tried to create a new channel, a "SERVICE_UNAVAILABLE" error occurred, and I was told that this may be because of the connection between the orderer and the Kafka cluster.
Then I checked the orderer log and found the following:
```
2018-12-24 09:05:56.347 UTC [orderer/consensus/kafka] try -> DEBU 261 [channel: testchainid] Connecting to the Kafka cluster
2018-12-24 09:05:56.348 UTC [orderer/consensus/kafka] try -> DEBU 262 [channel: testchainid] Need to retry because process failed = kafka server: The requested offset is outside the range of offsets maintained by the server for the given topic/partition.
```
Is there any way to make these offsets equal between orderer and kafka cluster? Please...
Hi everyone.
I have 2 queries. One: is it possible to set up multiple channels between the same set of organizations, for example, two channels between org1 and org2? Another query: if we want to store data for multiple tables, as we commonly do in databases, how can it be stored in LevelDB or CouchDB? Since Hyperledger Fabric supports only these two databases, and data is stored only in the form of key-value pairs in the Hyperledger Fabric blockchain.
```2018-12-27 06:21:47.103 UTC [orderer/consensus/kafka] processConnect -> DEBU 0e2 [channel: testchainid] It's a connect message - ignoring
2018-12-27 06:21:53.905 UTC [grpc] Println -> DEBU 0e3 grpc: Server.Serve failed to create ServerTransport: connection error: desc = "transport: http2Server.HandleStreams failed to receive the preface from client: EOF"```
Anyone have any idea what does these logs mean
everything is working fine, except on aws
using native binary of orderer
specially this one `2018-12-27 06:21:53.905 UTC [grpc] Println -> DEBU 0e3 grpc: Server.Serve failed to create ServerTransport: connection error: desc = "transport: http2Server.HandleStreams failed to receive the preface from client: EOF"`
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=BSdnKTheCbDgEY5Jt) @yousaf 1) Yes you can have multiple channels between the same orgs. 2) For different data types within a chaincode, use composite keys where first part indicates data type and second part indicates unique key within that data type. You can do this on your own, or use CreateCompositeKey() chaincode api
why does `common/grpclogging/fields.go` use `github.com/gogo/protobuf/proto`, but not `github.com/golang/protobuf/proto`?
```import (
"github.com/gogo/protobuf/proto"
"github.com/golang/protobuf/jsonpb"
"go.uber.org/zap"
"go.uber.org/zap/zapcore"
)```
sorry. i should post in #fabric .
@dave.enyeart Thank you so much for your response, sir. :)
@dave.enyeart But can you tell me what the use case would be for the same set of orgs needing multiple channels? If they already have a private channel, why would they define a new channel between them? Their previous channel already existed as a private means of communication, and there was no third-party org in that channel.
@yousaf You're right, a single channel between the orgs would be sufficient, I was simply answering your question that it is technically possible, if you would like to do it.
will release-1.4 support etcdraft orderer?
@dave.enyeart Got it, sir. Thank you :)
Has joined the channel.
orderer2.OrdererOrg.com.log
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=Z7TLmLB6uEocD23bJ) @mamtabhardwaj12 Do you have debug logging enabled for your orderer node(s)? `ORDERER_GENERAL_LOGLEVEL=DEBUG`
Hi @mastersingh24 ,
I am running it in a Docker container; where can I set loglevel=debug? Can you please help me?
Are you using docker compose?
Has joined the channel.
Are there any examples of using Kafka for the orderer? tks~~
Has joined the channel.
yes
you can set `ORDERER_GENERAL_LOGLEVEL` to debug in your compose file
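For reference, that looks something like the fragment below in a compose file (the service name and image tag here are placeholders; adjust to your own setup):

```yaml
services:
  orderer.example.com:            # placeholder service name
    image: hyperledger/fabric-orderer:x86_64-1.0.1
    environment:
      # Turn on debug logging for the orderer
      - ORDERER_GENERAL_LOGLEVEL=DEBUG
```

Note that from Fabric v1.4 onward this variable is replaced by `FABRIC_LOGGING_SPEC` (e.g. `FABRIC_LOGGING_SPEC=debug`).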
btw - why are you using v1.0.1? We are about to release 1.4
Hey @mastersingh24
I have resolved that problem by commenting out the line below in /etc/resolv.conf on the orderer and peer servers.
#options single-request-reopen
Now I am getting another error while installing the chaincode. The first organization is supposed to create 3 more chaincodes for those who are going to join that channel. I am getting the error below:
Error: Error endorsing chaincode: rpc error: code = Unknown desc = chaincode error (status: 500, message: could not find chaincode with name 'channelipdcanddistributor')
Hello, I tried to install and instantiate two chaincodes concurrently using `locust`; the SDK freezes after sending the instantiate proposals to the orderer. The orderer logs look fine and the proposals have been endorsed, but nothing else happens in the SDK. Any ideas what might be happening?
@StefanKosc By 'freezes', I expect it is waiting for the transactions to commit in a block at the peer the SDK is connected to for events. You should be able to check whether the transaction commits into a block at the orderer, and whether it commits into a block at the peer. My first guess would be that the peer is having trouble receiving blocks from the orderer.
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=BCtb6Q38WPsCZFznq) @jyellick ok, thanks for the response. It looks like the `orderer` is delivering the block (log: `2019-01-04 15:21:49.713 UTC [common/deliver] deliverBlocks -> DEBU 35f [channel: aa.id-1] Delivering block for (0xc420a2d000) for 172.27.0.7:36800`) and the `peer` receives it (logs: `2019-01-04 15:21:49.781 UTC [cceventmgmt] HandleStateUpdates -> INFO 031 Channel [aa.id-1]: Handling LSCC state update for chaincode [0ow1_id-T1]
2019-01-04 15:21:49.786 UTC [cceventmgmt] HandleStateUpdates -> INFO 032 Channel [aa.id-1]: Handling LSCC state update for chaincode [inkm_id-T2]`), but the SDK does not receive events and there is no information in the logs that a new block is committed. Maybe there would be more information with a different log level?
If the transaction is committing at the peer, but the SDK is not receiving notification that it has committed, then you should likely be looking at the SDK. I'm not sure which language you are working in, but you might find better resources in #fabric-sdk-node #fabric-sdk-java #fabric-sdk-go #fabric-sdk-py depending on the language of the SDK you are using.
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=eQEvcTzBASo4E6Aca) @jyellick ok, thanks for your time
Has joined the channel.
Has joined the channel.
Has joined the channel.
Hey guys! In our current setup we have 2 peers and 2 orderers. We have a specific transaction which is executed every 15 min. So every 15 min I see something like this in the peer logs: `2018-10-11 01:15:29.224 UTC [gossip/privdata] StoreBlock -> INFO 719c Received block [33327]`
`2018-10-11 01:15:29.758 UTC [kvledger] CommitWithPvtData -> INFO 719d Channel [bccchannel]: Committed block [33327] with 1 transaction(s)`. After some time, though, the logs won't show up anymore and nothing is written to the ledger. After a restart of the peer, the logs show up again. My assumption is that the connection between orderer and peer gets lost/was closed. Does anyone have an idea why this could happen? Any help is very much appreciated!
@deelthor peer and orderer do not communicate via gossip. this question may be better answered in #fabric-gossip
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=58d6cwFb6pTkaeeAz) @guoger Thanks a lot!
My understanding was that the orderer delivers new blocks via gossip to the leader peers.
yes
not via gossip
directly
@yacovm Please help. I came across some scenarios while changing the block creation configuration in the configtx.yaml file: 1) It was not taking BatchTimeout in double digits like 60s, and channel creation failed. 2) While using the balance-transfer sample with an updated block creation setup (BatchTimeout: 9s, MaxMessageCount: 50), sending transactions still creates a block per transaction; the block creation config has no effect.
@knagware9 When using `configtxgen` to generate channel creation txes, only the parameters of the Application section are considered. This is by design as channels are generally created by application admins who do not know the configuration of the ordering parameters nor are authorized to change them. It is possible to modify the ordering parameters at channel creation time, but it requires a more sophisticated channel creation tx (which would need to be generated by hand) as well as the signature of an ordering admin.
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=tDNKWGm3SeXZN6TCB) @jyellick ok. BatchTimeout and MaxMessageCount are part of the Application section... I have checked the orderer logs and it's taking effect with a batch timeout of 9s, but not with a double-digit time. And regarding the balance-transfer example, I think it's coded to put only one transaction per block.
2019-01-07 13:40:40.833 UTC [orderer.common.blockcutter] Ordered -> DEBU a2b Enqueuing message into batch
2019-01-07 13:40:40.833 UTC [orderer.consensus.solo] main -> DEBU a2c Just began 9s batch timer
2019-01-07 13:40:40.834 UTC [orderer.common.broadcast] Handle -> DEBU a2d Received EOF from 172.18.0.1:48888, hangup
2019-01-07 13:40:40.834 UTC [orderer.common.server] func1 -> DEBU a2e Closing Broadcast stream
2019-01-07 13:40:40.834 UTC [comm.grpc.server] 1 -> INFO a2f streaming call completed {"grpc.start_time": "2019-01-07T13:40:40.833Z", "grpc.service": "orderer.AtomicBroadcast", "grpc.method": "Broadcast", "grpc.peer_address": "172.18.0.1:48888", "grpc.code": "OK", "grpc.call_duration": "1.555784ms"}
2019-01-07 13:40:49.834 UTC [orderer.consensus.solo] main -> DEBU a30 Batch timer expired, creating block
Has joined the channel.
Hello,
Is it possible to have one peer connect to two different ordering services via two different channels?
If yes, is it recommended?
@GuillaumeTong Yes, it should be possible, though I'm not certain it's seen a lot of testing
@knagware9 The ordering section, including the batch size parameters, is only included by `configtxgen` when generating the genesis block for the orderer system channel. If you have observed the batch parameters changing during channel creation through `configtxgen`, then I believe it is most likely a coincidence with re-bootstrapping your orderer system channel before testing.
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=Awgca8nrX43wZZW5E) @jyellick ok, thanks. Could you please help me change the batch timeout and block size? I have to set up the ordering system to create blocks with multiple transactions in them, which I am not able to achieve.
@knagware9 I just configured an orderer with a batch timeout of 200s with no problems. Can you provide a (minimal) set of detailed steps which reproduces your problem?
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=paZFvNrJ2S7rQJcBM) @jyellick trying with 200s and BatchSize MaxMessageCount: 100... will message you if it fails
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=ycroRpyw5unbDv7DP) it's successful (see steps), but when I send 4 transactions at the same time, it creates a block per transaction instead of putting all transactions in one block. It waits for the defined batch timeout and then creates a block: https://pastebin.com/2ULNNQVk
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=AavtTCokEy8ugnWGi) 2019-01-08 06:50:46.857 UTC [orderer.common.blockcutter] Ordered -> DEBU 958 Enqueuing message into batch
2019-01-08 06:50:46.857 UTC [orderer.consensus.solo] main -> DEBU 959 Just began 3m20s batch timer
2019-01-08 06:50:46.858 UTC [orderer.common.broadcast] Handle -> DEBU 95a Received EOF from 172.18.0.1:32978, hangup
2019-01-08 06:50:46.858 UTC [orderer.common.server] func1 -> DEBU 95b Closing Broadcast stream
2019-01-08 06:50:46.858 UTC [comm.grpc.server] 1 -> INFO 95c streaming call completed {"grpc.start_time": "2019-01-08T06:50:46.857Z", "grpc.service": "orderer.AtomicBroadcast", "grpc.method": "Broadcast", "grpc.peer_address": "172.18.0.1:32978", "grpc.code": "OK", "grpc.call_duration": "839.519µs"}
2019-01-08 06:54:06.858 UTC [orderer.consensus.solo] main -> DEBU 95d Batch timer expired, creating block
2019-01-08 06:54:06.858 UTC [msp] GetDefaultSigningIdentity -> DEBU 95e Obtaining default signing identity
2019-01-08 06:54:06.858 UTC [msp] GetDefaultSigningIdentity -> DEBU 95f Obtaining default signing identity
2019-01-08 06:54:06.858 UTC [msp.identity] Sign -> DEBU 960 Sign: plaintext: 0A95060A0A4F7264657265724D535012...2FF95FEB03E1AD10D3D82ECB41A50676
2019-01-08 06:54:06.858 UTC [msp.identity] Sign -> DEBU 961 Sign: digest: 2625C6BD9FE5E685776861BE006C33565820600A1B3D915D7F744572B66B1FB8
2019-01-08 06:54:06.858 UTC [msp] GetDefaultSigningIdentity -> DEBU 962 Obtaining default signing identity
2019-01-08 06:54:06.858 UTC [orderer.commmon.multichannel] addLastConfigSignature -> DEBU 963 [channel: mychannel] About to write block, setting its LAST_CONFIG to 2
2019-01-08 06:54:06.858 UTC [msp] GetDefaultSigningIdentity -> DEBU 964 Obtaining default signing identity
2019-01-08 06:54:06.858 UTC [msp.identity] Sign -> DEBU 965 Sign: plaintext: 08020A95060A0A4F7264657265724D53...2FF95FEB03E1AD10D3D82ECB41A50676
2019-01-08 06:54:06.858 UTC [msp.identity] Sign -> DEBU 966 Sign: digest: 9A21090CC7C243F9275FD78C85351BD95145B0CFC607B990531EBF6A823F7E5A
2019-01-08 06:54:06.910 UTC [fsblkstorage] indexBlock -> DEBU 967 Indexing block [blockNum=5, blockHash=[]byte{0x4c, 0x93, 0x20, 0x6b, 0xc1, 0x3, 0xef, 0x6d, 0x88, 0x86, 0xb, 0x5b, 0xd4, 0x60, 0x7e, 0x66, 0x23, 0x3f, 0xba, 0x9c, 0x70, 0x75, 0x14, 0x61, 0xac, 0x14, 0xc5, 0xf0, 0x79, 0x77, 0x68, 0xa4} txOffsets=
txId=a0cc77d0dc3a58a43a270352575a963a3a1d34a20405087b608cdb60433d923a locPointer=offset=70, bytesLength=4085
]
2019-01-08 06:54:06.952 UTC [fsblkstorage] updateCheckpoint -> DEBU 968 Broadcasting about update checkpointInfo: latestFileChunkSuffixNum=[0], latestFileChunksize=[58063], isChainEmpty=[false], lastBlockNumber=[5]
2019-01-08 06:54:06.952 UTC [orderer.commmon.multichannel] commitBlock -> DEBU 969 [channel: mychannel] Wrote block 5
Has joined the channel.
I have gone through the tutorial to add a new org to an existing channel. This requires getting a signed configuration transaction from all the participating peers. Is it possible to change the policy such that to add a new org, only a specified organisation can add it?
To make it clearer..
Let's say OrgA, OrgB and OrgC are in the channel. The channel was created by OrgA. OrgD needs to be added. To perform this addition, I want only OrgA to sign it.
Follow up question:
Is it possible to define non-default values in configtx.yaml itself while configuring the network, instead of updating the channel every time? Can I override the default mod_policy in the configtx.yaml file itself?
Someone please help with this
Hi, I'm trying to run the balance-transfer sample. On Ubuntu 16.04 there is no problem. When I apply the same steps on macOS, I get an SSL handshake error during channel creation.
Node.js app and ordering service logs:
https://hastebin.com/pazozihahe.sql
I'm using all artifacts and crypto-materials from the sample project. I didn't re-generate.
@knagware9 It sounds like your application is written to send the transactions serially, ie, your application is waiting to send the next transaction until the previous one commits. This is very common in the samples. If you are interested in high throughput, you should engineer your application to decouple dependencies between your transactions.
@NeelKantht If you use the most recent version of `configtxlator` and `configtx.yaml` you may specify the value of policies in `configtx.yaml`. Rather than specifying a `Majority` for the /Channel/Application/Admins policy, you may specify 'Any', or, a policy requiring a specific member.
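As a sketch (assuming OrgA's MSP ID is `OrgAMSP`, which is an illustrative name; substitute your own), the Application Admins policy in `configtx.yaml` could be pinned to a specific member like this:

```yaml
Application: &ApplicationDefaults
  # ... Organizations, Capabilities, etc. ...
  Policies:
    Readers:
      Type: ImplicitMeta
      Rule: "ANY Readers"
    Writers:
      Type: ImplicitMeta
      Rule: "ANY Writers"
    Admins:
      # Only an admin of OrgA may sign channel config updates
      Type: Signature
      Rule: "OR('OrgAMSP.admin')"
```

With this in place, config updates governed by the /Channel/Application/Admins policy require only OrgA's admin signature rather than a majority of org admins.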
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=JaM9kd5zxT2FDmkqn) @jyellick Thanks. Actually I had to send multiple transactions to the API, I mean hit the invoke API concurrently; now there are multiple transactions in a block as per the config
Has joined the channel.
Hi All,
I was using a Kafka-based ordering service; the network was stable and performance was moderate, and I had a log retention policy on Kafka of either 5 GB or 7 days. But 6 days ago the orderer somehow stopped connecting to Kafka and shut down.
Now when I try restarting the orderer, it raises an error saying the offsets from Kafka are out of range.
i found this JIRA issue
https://jira.hyperledger.org/browse/FAB-7352
it is pointing to the same problem
Now can someone help me find a way to update this offset in the orderer so it can connect to Kafka again and sync the offset?
Currently the orderer which stopped syncing was the primary orderer, and because of this the ledger is not being updated by any transactions, which should not be the case per fault tolerance, as 2 out of 3 orderers were syncing perfectly previously.
please help
can i somehow update the kafka offset to the latest kafka offset on the orderer
@NeerajKumar - we would need to see the orderer logs. Also note that the documentation does state that you should not prune the Kafka logs
Has joined the channel.
Has joined the channel.
Hi @mastersingh24 , let me put the orderer logs in a link:
https://drive.google.com/file/d/1EB91aHmdK7_xFc2yXly1Z-pAFEUhaZZN/view?usp=sharing
kafkaLogs.log
here let me tell you that orderers are holding an offset of 214 on the channel "indiaadhaar"
so this issue is mainly due to the mismatch of the offset
But my question/confusion is: why did it happen in the first place?
please somebody help
Not sure if this should be here, in #fabric-ledger, or #fabric-peer-endorser-committer tbh.
We have a situation where a committed transaction has been written to one peer but not another, and I'm really not sure how to get us out of this mess.
It looks (from the orderer logs) like Kafka was doing one of its regular leadership elections, followed by a couple of consenter errors and some other stuff.
The client logs are inconclusive as they only note the second failure (which would have been an update).
The upshot is that the tx is in peer0 (and can be read) but gets a not found in the chaincode for peer1 for a particular org.
New transactions are going through ok, so the network is still working - and it's been up for months now, so this isn't code we futzed with. Has anyone ever seen this? And if so, how did they fix it?
@NeerajKumar Most likely, you have configured Kafka to prune logs, ie, to delete data. Kafka has deleted data at the point the orderer needs to recover from, so it cannot.
@aatkddny I would try pulling the block containing that tx from each peer. Are the block contents different? If so, this would represent a chain fork (this should not happen ever, and I would be very curious to discover how it did). If they are the same, we can look at the validation bitmask for each, if those are different, it would represent a state fork (also very bad and should not happen). Or, if both match, then it is likely an application problem.
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=hgqTy83AbjbdmRx3b) @jyellick We figured it out. Kafka. Again. The bane of my life in Fabric.
I misrepresented significantly when I said it was still working - when I actually looked at it, rather than relying on what I was told, the whole thing was broken for updates. Reads were unaffected. It's QA, so they replay data a lot.
This was an old Docker setup - it only has a dozen nodes, so it fits on a single Docker host, and we never got round to moving it.
Kafka logs weren't redirected to a permanent directory and they filled up inside the container.
That killed Kafka, at the same time as it was updating the peers.
Someone brought the Kafkas back, but with a fresh start it was a dismal failure. Lesson learned. Every single time we've had a failure that forces a restart - in every environment we run this in - it's that requirement to keep the logs around causing something to fill up.
I really, really hope 1.4 and etcd fix this.
Yes, getting Kafka out of the prereqs is big on our list; we think etcd/Raft should simplify life for administrators significantly
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=qqxxGpvk3vuD2nRbM) @jyellick I wasn't joking. Every single catastrophic failure requiring a total restart and data loss we've had can be directly traced back to the same root.
Kafka log files filling up *something* required for happy container functioning.
Docker or Kubernetes the same.
And I'm not kidding either :slight_smile: Raft is our #1 priority for v2.0
@aatkddny don't worry, just survive until March... you won't be needing Kafka orderer anymore
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=fd6gB8ozfGYA5rg9p) @yacovm Counting the days...
Perhaps I shouldn't bite, but I will.
>We figured it out. Kafka. Again. The bane of my life in Fabric.
You're already dealing with at least two processes in your setup that need an unbounded volume for data storage (orderer, peer). Until you switch to a solution that works like you expect it to (whether it's Fabric or not), what prevents you from temporarily doing the same for your Kafka processes, so that this "bane of your Fabric life" ceases to exist?
What's that quote? "Insanity is doing the same thing over and over again and expecting different results."
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=Z93Smbmv4Qzb9yPwW) @jyellick yeah, that I understand, but can I do something to recover my orderer right now? Also, it should have pruned these logs, so why did the orderer suddenly stop working?
The orderer is refusing to start because it cannot guarantee data safety. It wants to resume processing from the last message it processed, but Kafka tells it that message is no longer there. The orderer assumes that those messages might have been committed to blocks by other orderers, and therefore cannot proceed.
There is currently no tooling available to recover from this situation. You could start a new ordering service and resubmit the transactions from your blockchain in order to rebuild the world state. Or, you could try manually editing the block metadata, but this process requires considerably more understanding of fabric datastructures.
Has joined the channel.
Hi, I am currently considering the case of wanting to rotate all of the network's certs (in the case where I want to migrate from cryptogen-generated certs to fabric-ca managed ones, or when the certs are almost expired, or worse, have already expired), and part of it involves the MSP service.
This section in the Hyperledger documentation mentions a Network Channel which I suppose must be updated to include the new CA(s) of the new orderer and orderer-admin certs so that they can keep working.
https://hyperledger-fabric.readthedocs.io/en/release-1.4/network/network.html
So here are a few questions:
1. Assuming I keep using the same CA, can I simply replace the orderer's certs in its msp and tls folder, then restart the node and expect it to work?
2. How do I update the "network channel" to insert the new CA certs? I have been able to update regular channels before.
3. Assuming all Admin certs in the orderer organisation have expired, is there any way left to update the channel? And if the orderer and CA certs of the organisation have also expired, can there be any more transactions?
Thank you for your help
@GuillaumeTong I believe you mean "orderer system channel" when you say "network channel". The MSP structures are stored on-chain in structures known as "configuration blocks", to update them, you need to do a config update transaction, see https://hyperledger-fabric.readthedocs.io/en/release-1.4/config_update.html
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=SGTQsuooKcYZsC5rv) @jyellick For a "regular" channel update I would use a command similar to this to retrieve the config as a first step :
```./peer.sh channel fetch config config_block.pb -c
When you bootstrapped your network, you created a genesis block for the orderer, which contained your initial consortium definitions, crypto material, etc.
You used a tool called `configtxgen` to do this.
Yes
If you passed in a `-channelID` flag, then that is the channel name. If you did not, then it defaulted to `testchainid`
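For reference, the bootstrapping step looks roughly like this (a sketch; `SampleOrgGenesis` is a placeholder profile name, yours will differ depending on your `configtx.yaml`):

```shell
# generate the orderer genesis block for the system channel
configtxgen -profile SampleOrgGenesis -outputBlock genesis.block -channelID my-sys-channel
```

If `-channelID` is omitted here, the system channel gets the default name `testchainid`.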
Ah I see.
Normally when would I need to update the orderer system channel? When the admin certs change for example?
As best practice, we recommend that the MSP definitions across all channels be kept in sync.
The orderer uses the MSP definitions in the orderer system channel to authorize the creation of new channels, and as the initial set of crypto populated into new channels.
That is quite important then. I'll keep it in mind. Having to do the same operation on all channels is however somewhat annoying.
Thank you for your help
> Having to do the same operation on all channels is however somewhat annoying.
Certainly this is something you'll want to put an automation framework around. The requirement to do it for each channel is a strong technical one. Since channels do not have order relative to each other, using a single copy of crypto material would not allow for deterministic validation of signatures.
@yacovm will the new etcd/raft based ordering service in v2.0 support the ability to distribute OSNs across multiple orgs? So, given a channel X shared across org1 and org2, will I be able to define a ordering service implemented by a raft cluster with nodes both in org1 and org2?
yep
you can run a Raft channel on a subset of orgs
i.e - if you have orgs 1,2,3,4,5 total in the entire network
the system channel needs to have them all
but you can create a channel only for 1,2,3
and 4,5 won't get the blocks for the channel
but will know it exists
excellent; that's the answer I wanted to hear; thanks
Has joined the channel.
What happens if the orderer process dies, and then I start it again? Will I lose only transactions that have not been commited, or something else?
If the orderer dies, you will lose transactions that are not yet included in a block and written to your local ledger
@VadimInshakov If you are using Kafka, and you have received a status SUCCESS in response to `Broadcast`, the message will be ordered regardless of orderer crashes. In Raft, this guarantee will be weaker, but in general, SUCCESS indicates that the tx has entered the consensus process
@jyellick and what if I use solo mode? Will I lose the orderer ledger? And what is it exactly?
In solo mode, if the orderer crashes, any data which has not already committed to the blockchain is lost.
Has joined the channel.
@jyellick What is the difference between a orderer ledger and a peer ledger?
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=pTAJdNEmSmgHpdL93) @VadimInshakov The orderer keeps all the blocks but technically it isn't the "ledger" as the final validation phase (checking for things like endorsement policy failure, MVCC read conflict, etc.) is done at the peers after having received the blocks from the orderer (or from another peer via gossip), so the orderer has no knowledge of whether the transactions in the blocks it delivers are valid or invalid transactions.
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=pTAJdNEmSmgHpdL93) @VadimInshakov also the ledger is usually thought of as the "blockchain" (the flat files containing the blocks) and the "world state" (the current value for each name/value) and orderers do not maintain world state.
Has joined the channel.
Has joined the channel.
Has joined the channel.
Has joined the channel.
Has joined the channel.
Has joined the channel.
Greetings Fabric Gurus! Previously I was wondering whether the order of submitted transactions is maintained when they are recorded on the blockchain. After reading the docs, it seems like this depends on which ordering mechanism is used, right? If so, does Kafka ensure ordering/sequencing of transactions in the order they were submitted by clients?
@unlimited Kafka ensures sequencing of transactions in the order they are written to the Kafka topic of the channel, which is used as input by the ordering service node that has been elected by ZooKeeper as leader of the Kafka topic used for cutting blocks. If client A submits a transaction and later client B submits one, network latency or other factors could cause the transaction from client B to arrive first, and hence be written to the block first.
hi friends, I have created a fabric network with 4 Kafka nodes and 3 orderers. The network is working perfectly, but when I tried to find the block details inside the orderer, the /var/hyperledger/production/orderer folder is empty. I have successfully created a channel, and installed and instantiated chaincode; queries also work. But there is nothing inside the production folder in the orderer containers. Can anyone help me? I don't know what I have done wrong. The orderer yaml file configuration is pasted below.
```
  orderer0.example.com:
    extends:
      file: docker-compose-base.yaml
      service: orderer
    container_name: orderer0.example.com
    environment:
      - ORDERER_HOST=orderer0.example.com
      - CONFIGTX_ORDERER_ORDERERTYPE=kafka
      - CONFIGTX_ORDERER_KAFKA_BROKERS=[kafka0:9092,kafka1:9092,kafka2:9092,kafka3:9092]
      - ORDERER_KAFKA_RETRY_SHORTINTERVAL=1s
      - ORDERER_KAFKA_RETRY_SHORTTOTAL=30s
      - ORDERER_KAFKA_VERBOSE=true
      - ORDERER_GENERAL_GENESISPROFILE=SampleInsecureKafka
      - ORDERER_ABSOLUTEMAXBYTES=${ORDERER_ABSOLUTEMAXBYTES}
      - ORDERER_PREFERREDMAXBYTES=${ORDERER_PREFERREDMAXBYTES}
    volumes:
      - ./crypto-config/ordererOrganizations/example.com/orderers/orderer0.example.com/msp:/var/hyperledger/msp
      - ./crypto-config/ordererOrganizations/example.com/orderers/orderer0.example.com/tls:/var/hyperledger/tls
      - ./artifacts/:/var/hyperledger/configs
      - /var/hyperledger/orderer0:/var/hyperledger
    depends_on:
      - kafka0
      - kafka1
      - kafka2
      - kafka3
    ports:
      - 7050:7050
```
and the following is under the docker-compose-base.yaml file:
```
  orderer:
    image: hyperledger/fabric-orderer
    environment:
      - CORE_VM_DOCKER_HOSTCONFIG_NETWORKMODE=NETWORK_NAME
      - ORDERER_HOME=/var/hyperledger/orderer
      - ORDERER_GENERAL_LOGLEVEL=debug
      - ORDERER_GENERAL_LOCALMSPDIR=/var/hyperledger/msp
      - ORDERER_GENERAL_LOCALMSPID=ordererMSP
      - ORDERER_GENERAL_LISTENADDRESS=0.0.0.0
      - ORDERER_GENERAL_LISTENPORT=7050
      - ORDERER_GENERAL_LEDGERTYPE=ram
      - ORDERER_GENERAL_GENESISMETHOD=file
      - ORDERER_GENERAL_GENESISFILE=/var/hyperledger/configs/orderer.block
      - CONFIGTX_ORDERER_ORDERERTYPE=kafka
      - CONFIGTX_ORDERER_BATCHSIZE_MAXMESSAGECOUNT=10
      - CONFIGTX_ORDERER_BATCHTIMEOUT=2s
      - CONFIGTX_ORDERER_ADDRESSES=[127.0.0.1:7050]
      # TLS settings
      - ORDERER_GENERAL_TLS_ENABLED=false
      - ORDERER_GENERAL_TLS_PRIVATEKEY=/var/hyperledger/tls/server.key
      - ORDERER_GENERAL_TLS_CERTIFICATE=/var/hyperledger/tls/server.crt
      - ORDERER_GENERAL_TLS_ROOTCAS=[/var/hyperledger/tls/ca.crt]
      - ORDERER_TLS_CLIENTAUTHREQUIRED=false
      - ORDERER_TLS_CLIENTROOTCAS_FILES=/var/hyperledger/users/Admin@example.com/tls/ca.crt
      - ORDERER_TLS_CLIENTCERT_FILE=/var/hyperledger/users/Admin@example.com/tls/client.crt
      - ORDERER_TLS_CLIENTKEY_FILE=/var/hyperledger/users/Admin@example.com/tls/client.key
    volumes:
      - ./artifacts/:/var/hyperledger/configs
      - ./crypto-config/ordererOrganizations/example.com/users:/var/hyperledger/users
    working_dir: /opt/gopath/src/github.com/hyperledger/fabric/orderer
    command: orderer
    ports:
      - '7050'
```
Hi, my orderer container failed due to this issue, and I found there is a bug ticket https://jira.hyperledger.org/browse/FABN-185 for Fabric 1.1, but now I am getting this error in Fabric 1.4:
```
Error creating channelconfig bundle: initializing channelconfig failed: could not create channel Orderer sub-group config: setting up the MSP manager failed: the supplied identity is not valid: x509: certificate signed by unknown authority (possibly because of "x509: ECDSA verification failure" while trying to verify candidate authority certificate "ca.example.com")
panic: Error creating channelconfig bundle: initializing channelconfig failed: could not create channel Orderer sub-group config: setting up the MSP manager failed: the supplied identity is not valid: x509: certificate signed by unknown authority (possibly because of "x509: ECDSA verification failure" while trying to verify candidate authority certificate "ca.example.com")
```
Also, someone please help with another issue in channel creation:
```
2019-01-23 09:52:25.707 UTC [orderer.common.server] Start -> INFO 007 Beginning to serve requests
2019-01-23 09:54:43.933 UTC [cauthdsl] deduplicate -> ERRO 008 Principal deserialization failure (the supplied identity is not valid: x509: certificate signed by unknown authority (possibly because of "x509: ECDSA verification failure" while trying to verify candidate authority certificate "ca.org1.example.com")) for identity 0
2019-01-23 09:54:43.933 UTC [cauthdsl] deduplicate -> ERRO 009 Principal deserialization failure (the supplied identity is not valid: x509: certificate signed by unknown authority (possibly because of "x509: ECDSA verification failure" while trying to verify candidate authority certificate "ca.org1.example.com")) for identity 0
2019-01-23 09:54:43.933 UTC [orderer.common.broadcast] ProcessMessage -> WARN 00a [channel: mychannel] Rejecting broadcast of config message from 172.18.0.1:40746 because of error: error authorizing update: error validating DeltaSet: policy for [Group] /Channel/Application not satisfied: Failed to reach implicit threshold of 1 sub-policies, required 1 remaining
2019-01-23 09:54:43.933 UTC [comm.grpc.server] 1 -> INFO 00b streaming call completed {"grpc.start_time": "2019-01-23T09:54:43.91Z", "grpc.service": "orderer.AtomicBroadcast", "grpc.method": "Broadcast", "grpc.peer_address": "172.18.0.1:40746", "grpc.code": "OK", "grpc.call_duration": "22.635378ms"}
```
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=aKpzCjuDPF3umu3vs) Tried on another machine and its working
@alokkv You have selected a ledger type of RAM
```- ORDERER_GENERAL_LEDGERTYPE=ram
```
This causes the ledger to be held in memory only (and truncated as memory exhausts). It is not intended for a production system and is really only useful for test.
@knagware9 The first error you describe is because your channel config is no longer consistent. If it is an expiration problem, I suggest you set the system time to be in the past, start the orderer, and apply a config update to replace the expired certificates.
For the second error you described, most likely you are not using an admin cert for one of the orgs. You can turn on debug on the orderer and retry for more details in the orderer logs.
@silliman Ah I see, thanks for clearing that up. Is there a way to reference the transactions in chronological order though? I suppose that requires processing the whole chain to see which transactions occurred first, but that seems rather inefficient.
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=fi37X5Q2J9HNPT2nx) @unlimited I don't know, but that would probably be a function of one of the chaincode shim APIs against the ledger stored on the peers and you might try asking in #fabric-peer-endorser-committer or #fabric-chaincode-dev
@silliman Okay, will try those channels and look into the API. Thanks!
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=2LEHMHFjHN2mXRZgi) @jyellick Thank you :) I later solved the issue but am not sure what I changed; I followed the same steps which I followed earlier
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=8WLZi5CDn2qop6tB9) @jyellick thank you for the reply. So what should I set?
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=vQBdhcZKmXk8Qz4RQ) @alokkv ORDERER_GENERAL_LEDGERTYPE=file
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=5X6vBSgPZt4fGR8yY)
```
General:
    # Ledger Type: The ledger type to provide to the orderer.
    # Two non-production ledger types are provided for test purposes only:
    #  - ram: An in-memory ledger whose contents are lost on restart.
    #  - json: A simple file ledger that writes blocks to disk in JSON format.
    # Only one production ledger type is provided:
    #  - file: A production file-based ledger.
    LedgerType: file
```
Has joined the channel.
hey all, i have been directed here with my orderer on kafka query . i hope you can help me
when using a Kafka ordering service, I keep getting the following error when trying to create a channel:
```
ProcessMessage -> WARN 008 [channel: messagebus] Rejecting broadcast of message from 10.0.1.1:1234 with SERVICE_UNAVAILABLE: rejected by Consenter: backing Kafka cluster has not completed booting; try again later
```
..although the Kafka nodes seem to have started fine. What should I do to investigate this error?
e.g., on a follower node I can see:
```
Replica loaded for partition testchainid-0 with initial high watermark 0 (kafka.cluster.Replica)
```
..so it appears that the orderer has set up the test chain id successfully:
```
2019-01-24 12:29:06.338 UTC [orderer.consensus.kafka] setupProducerForChannel -> INFO 00c [channel: testchainid] Setting up the producer for this channel...
2019-01-24 12:29:06.366 UTC [orderer.consensus.kafka] startThread -> INFO 00d [channel: testchainid] Producer set up successfully
```
Has joined the channel.
Hi everyone, in the BYFN demo the ordering service fails to start. I checked the logs and saw that the orderer container exited with an error; the orderer container logs are given in the link below. To summarise, the problem is that the orderer container fails to start with the error that the *channel ID contains invalid characters* (*SYS_CHANNEL*); the *ChannelID* cannot contain special characters. I tried to set the channel flag using the following commands: `./byfn.sh generate -c mychannel` ran successfully, but `./byfn.sh up -c mychannel` failed with the same error. On running `docker ps -a` I can see that the orderer container has exited.
https://hastebin.com/mayilemini.makefile
version 1.4 used
Hi Team, has anyone tried to connect to the operations REST endpoint (e.g. logspec) of the orderer in Fabric v1.4? I couldn't see the 8443 REST server port running on my server after the complete network deployment. Thanks.
Hello. Could somebody briefly describe how to update the configuration of the orderer system channel?
@krabradosty https://hyperledger-fabric.readthedocs.io/en/release-1.4/config_update.html
@mfaisaltariq https://github.com/hyperledger/fabric-samples/blob/release-1.4/first-network/byfn.sh#L413-L415 these lines set the channel ID of the orderer. It is not configurable, so you must have modified them?
hi, can I ask a question:
If a fabric network is running with solo consensus and I want to change to kafka, do we need to provision the network from scratch with the kafka type configured in the configtx.yaml file? Or is there any way to achieve this goal through the update process?
Thanks
@Ryan2 There is currently no supported path to switch from Solo to other consensus types
@jyellick I checked and it appears that the lines have not been modified. To be on the safe side I even copied the complete file from the link you provided and ran that. Still no luck
Can you re-run `./byfn.sh generate` before the `up` to confirm you are not using old artifacts?
I have deleted the old artifacts and crypto-config folder before doing that and destroyed all containers too using `docker rm -f $(docker ps -aq)`
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=xYCT4xAg3RaQjqpeW) thank you for your information,
let me clone the repo again and try; maybe I have changed something in the other files.
pruned all containers and images. Downloaded all images again with all binaries and samples. Tried running the network again. Failed with the same error log
@jyellick
the ordering service fails to start and the container status is *Exited*
hi friends, is there a way to auto-delete log files inside docker?
@mfaisaltariq Did you prune the shared volumes? Try a `./byfn.sh down` in between?
hi friends, does anyone know how to auto-delete old logs from containers? Please help.
hi friends, I have set ORDERER_GENERAL_LOGLEVEL=ERROR as my orderer environment variable, but when I check the logs they are still showing debug. Why is that? Is there any other variable that needs to be set? Please help
```
2019-01-26 05:19:33.106 UTC [orderer/consensus/kafka/sarama] tryRefreshMetadata -> DEBU 1f7 client/metadata fetching metadata for all topics from broker kafka1:9092
2019-01-26 05:29:10.801 UTC [orderer/consensus/kafka/sarama] tryRefreshMetadata -> DEBU 1f8 client/metadata fetching metadata for all topics from broker kafka0:9092
2019-01-26 05:29:13.966 UTC [orderer/consensus/kafka/sarama] tryRefreshMetadata -> DEBU 1f9 client/metadata fetching metadata for all topics from broker kafka0:9092
2019-01-26 05:29:31.401 UTC [orderer/consensus/kafka/sarama] tryRefreshMetadata -> DEBU 1fa client/metadata fetching metadata for all topics from broker kafka3:9092
2019-01-26 05:29:33.106 UTC [orderer/consensus/kafka/sarama] tryRefreshMetadata -> DEBU 1fb client/metadata fetching metadata for all topics from broker kafka1:9092
2019-01-26 05:39:10.801 UTC [orderer/consensus/kafka/sarama] tryRefreshMetadata -> DEBU 1fc client/metadata fetching metadata for all topics from broker kafka0:9092
2019-01-26 05:39:13.966 UTC [orderer/consensus/kafka/sarama] tryRefreshMetadata -> DEBU 1fd client/metadata fetching metadata for all topics from broker kafka0:9092
2019-01-26 05:39:31.401 UTC [orderer/consensus/kafka/sarama] tryRefreshMetadata -> DEBU 1fe client/metadata fetching metadata for all topics from broker kafka3:9092
2019-01-26 05:39:33.106 UTC [orderer/consensus/kafka/sarama] tryRefreshMetadata -> DEBU 1ff client/metadata fetching metadata for all topics from broker kafka1:9092
2019-01-26 05:49:10.801 UTC [orderer/consensus/kafka/sarama] tryRefreshMetadata -> DEBU 200 client/metadata fetching metadata for all topics from broker kafka0:9092
2019-01-26 05:49:13.966 UTC [orderer/consensus/kafka/sarama] tryRefreshMetadata -> DEBU 201 client/metadata fetching metadata for all topics from broker kafka0:9092
2019-01-26 05:49:31.401 UTC [orderer/consensus/kafka/sarama] tryRefreshMetadata -> DEBU 202 client/metadata fetching metadata for all topics from broker kafka3:9092
2019-01-26 05:49:33.106 UTC [orderer/consensus/kafka/sarama] tryRefreshMetadata -> DEBU 203 client/metadata fetching metadata for all topics from broker kafka1:9092
```
Hi. How to check if the new orderer i have spun up has joined the network?
Has joined the channel.
Has left the channel.
Has joined the channel.
hello @shivann! I'm interested in connecting to the Operations endpoint. Did you have any success with your request?
@alokkv What version of fabric? Do you have `FABRIC_LOGGING_SPEC` set? In v1.4 this variable replaces old methods of configuring logging.
@Jamie you may fetch blocks from it (for instance using `peer channel fetch`) to see if it has synced with the network
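A quick way to do that check (a sketch; the channel name `mychannel`, orderer address `orderer2.example.com:7050`, and the `$ORDERER_CA` TLS CA path are placeholders for your own values):

```shell
# ask the new orderer for the newest block on the channel;
# if it returns the current block, it has caught up with the network
peer channel fetch newest latest.block -c mychannel \
  -o orderer2.example.com:7050 --tls --cafile $ORDERER_CA
```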
@jyellick cool thanks. I will try and update here.
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=oSMvHziM7xsoRqemR) @jyellick i am using fabric 1.1
@alokkv In Fabric v1.1.x, `ORDERER_GENERAL_LOGLEVEL` should control your logs. Are you certain it's not being overridden somewhere?
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=SK8jqAqW76QJQeDxj) @jyellick yes, I am certain. Only ORDERER_GENERAL_LOGLEVEL is set in my yaml file. Is there anywhere else I should set any parameters?
@alokkv
> is set in my yaml file
Your yaml file uses fields: top level `General`, next level `LogLevel`. You may set the log level there without using an environment variable. The `ORDERER_GENERAL_LOGLEVEL` environment variable, if set, will override the value in the yaml file.
> is there any place else i should set any parameters
Base configuration is defined in `orderer.yaml`, it is overridden via environment. There shouldn't be anywhere else to check.
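As an illustration, the two equivalent ways to set the orderer log level in v1.1 (a sketch of the override convention described above; the env var wins if both are present):

```shell
# in orderer.yaml:
#   General:
#       LogLevel: error
# or, equivalently, via environment, following the
# ORDERER_<SECTION>_<KEY> naming convention:
ORDERER_GENERAL_LOGLEVEL=error
```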
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=nceuRgvi3vBFeXCTP) @jyellick about ORDERER_GENERAL_GENESISPROFILE: do you know what this field is, and does it need to be set?
@alokkv The sample config file contains documentation https://github.com/hyperledger/fabric/blob/release-1.4/sampleconfig/orderer.yaml#L88-L93
Is there some part of the description you find confusing?
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=t8YP8CG5sqLDbyxgG) @jyellick thank you for the link. I am pasting my orderer variables; can you please tell me if anything is wrong in there?
```
- CORE_VM_DOCKER_HOSTCONFIG_NETWORKMODE=byfn
- ORDERER_HOME=/var/hyperledger/orderer
- ORDERER_GENERAL_LOGLEVEL=ERROR
- ORDERER_GENERAL_LOCALMSPDIR=/var/hyperledger/msp
- ORDERER_GENERAL_LOCALMSPID=ordererMSP
- ORDERER_GENERAL_LISTENADDRESS=0.0.0.0
- ORDERER_GENERAL_LISTENPORT=7050
- ORDERER_GENERAL_LEDGERTYPE=file
- ORDERER_GENERAL_GENESISMETHOD=file
- ORDERER_GENERAL_GENESISFILE=/var/hyperledger/configs/orderer.block
- CONFIGTX_ORDERER_ORDERERTYPE=kafka
- CONFIGTX_ORDERER_BATCHSIZE_MAXMESSAGECOUNT=10
- CONFIGTX_ORDERER_BATCHTIMEOUT=2s
- CONFIGTX_ORDERER_ADDRESSES=[127.0.0.1:7050]
# TLS settings
- ORDERER_GENERAL_TLS_ENABLED=true
- ORDERER_GENERAL_TLS_PRIVATEKEY=/var/hyperledger/tls/server.key
- ORDERER_GENERAL_TLS_CERTIFICATE=/var/hyperledger/tls/server.crt
- ORDERER_GENERAL_TLS_ROOTCAS=[/var/hyperledger/tls/ca.crt]
- ORDERER_TLS_CLIENTAUTHREQUIRED=false
- ORDERER_TLS_CLIENTROOTCAS_FILES=/var/hyperledger/users/Admin@example.com/tls/ca.crt
- ORDERER_TLS_CLIENTCERT_FILE=/var/hyperledger/users/Admin@example.com/tls/client.crt
```
@alokkv
```- CONFIGTX_ORDERER_ORDERERTYPE=kafka
- CONFIGTX_ORDERER_BATCHSIZE_MAXMESSAGECOUNT=10
- CONFIGTX_ORDERER_BATCHTIMEOUT=2s
- CONFIGTX_ORDERER_ADDRESSES=[127.0.0.1:7050]
```
These parameters will have no effect if you use the `file` genesis method. You should set these variables (or, better yet, modify `configtx.yaml`) before generating your genesis block.
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=NkQFfzyPCpnsZxJvd) @jyellick I don't have an orderer.yaml file like the one you sent the link to. I have configured the configtx.yaml file for kafka, so I think I can comment out the ORDERER_GENERAL_GENESISPROFILE section, right?
If you are using genesis method of file, then genesisprofile will have no effect.
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=QHsEBWafSsurDkf4j) also the following needs to be commented out, right? ORDERER_GENERAL_GENESISMETHOD=file
@jyellick thanks man. `./byfn.sh down` did the trick. Can you please explain why we need to do that in order to avoid this situation? Are there any other services running behind the scenes that prevent a network with the same configuration from starting?
@mfaisaltariq Yes, the containers are brought up with persistent docker volumes. So, when you bootstrapped your orderer with the genesis block with the bad name, it got written into the volume. No matter that you stopped or even destroyed the container, the volume was still there, and docker compose was binding it back in.
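A minimal way to see and clear that leftover state by hand (a sketch; the actual volume names depend on your compose project):

```shell
# list the volumes docker compose left behind
docker volume ls

# ./byfn.sh down removes the containers AND the associated volumes/artifacts;
# the manual equivalent for a single stray volume is:
docker volume rm <volume_name>
```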
@jyellick thanks again
Hello,
When I install chaincode for the first time everything works fine, but some days later, after upgrading the chaincode, I get a channel not found error. Here are the suspicious orderer logs:
```
2019-01-28 08:57:41.050 UTC [common/deliver] deliverBlocks -> ERRO dedb [channel: eprocurechannel] Error reading from channel, cause was: NOT_FOUND
```
peer logs:
```
2019-01-28 07:55:18.143 UTC [blocksProvider] DeliverBlocks -> WARN 64c9 [eprocurechannel] Got error &{NOT_FOUND}
```
I am unable to find the cause of it. Any help would be appreciated.
Has joined the channel.
@HoneyShah What happens if you try to fetch blocks from the orderer directly for that channel with `peer channel fetch`?
Hello, I'm getting the following error when trying to start my network `Failed to initialize local MSP: could not load a valid signer certificate from directory /etc/hyperledger/orderer/msp/signcerts: stat /etc/hyperledger/orderer/msp/signcerts: no such file or directory`
I have the following path configured `ORDERER_GENERAL_LOCALMSPDIR=/etc/hyperledger/orderer/msp`
Tried these paths already:
`/etc/hyperledger/msp/orderer/msp`
`/var/hyperledger/msp`
Complete orderer container log:
```
2019-01-28 20:15:08.983 UTC [localconfig] completeInitialization -> INFO 001 Kafka.Version unset, setting to 0.10.2.0
2019-01-28 20:15:08.983 UTC [bccsp_sw] createKeyStoreIfNotExists -> DEBU 002 KeyStore path [/etc/hyperledger/orderer/msp/keystore] missing [true]: [
```
And part of the log for the start network script:
```
2019-01-28 20:15:52.111 UTC [bccsp_sw] loadPrivateKey -> DEBU 033 Loading private key [dfb17cf51dc061d585b4850599be0e4b8b7cc8cc363a67c23bc03c6c5393b0e0] at [/etc/hyperledger/peer/msp/keystore/dfb17cf51dc061d585b4850599be0e4b8b7cc8cc363a67c23bc03c6c5393b0e0_sk]...
2019-01-28 20:15:52.111 UTC [msp/identity] newIdentity -> DEBU 034 Creating identity instance for cert -----BEGIN CERTIFICATE-----
MIICGTCCAb+gAwIBAgIQTx2TvwYtAf62KKQliP6UoTAKBggqhkjOPQQDAjBzMQsw
CQYDVQQGEwJVUzETMBEGA1UECBMKQ2FsaWZvcm5pYTEWMBQGA1UEBxMNU2FuIEZy
YW5jaXNjbzEZMBcGA1UEChMQb3JnMS5leGFtcGxlLmNvbTEcMBoGA1UEAxMTY2Eu
b3JnMS5leGFtcGxlLmNvbTAeFw0xNzA2MjYxMjQ5MjZaFw0yNzA2MjQxMjQ5MjZa
MFsxCzAJBgNVBAYTAlVTMRMwEQYDVQQIEwpDYWxpZm9ybmlhMRYwFAYDVQQHEw1T
YW4gRnJhbmNpc2NvMR8wHQYDVQQDExZwZWVyMC5vcmcxLmV4YW1wbGUuY29tMFkw
EwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAEPPHUp7+EYb2xIbleWfRClMgqbtQqRmIS
2a5F8T0L3J6IZp9wm7K+w4LIBIgw1Cz9D8nqHW6f4OYBrbp0cSGnR6NNMEswDgYD
VR0PAQH/BAQDAgeAMAwGA1UdEwEB/wQCMAAwKwYDVR0jBCQwIoAgGatlq7sEgH2t
EuTAqaqmZJ5who46vQIXoyLYnkfhpq4wCgYIKoZIzj0EAwIDSAAwRQIhAK4i2Hz2
K398TvjJk62neDoenYhkMY7rBN3BN/GI0G0SAiAOTx36wuy9/4BBV8NVBCZ9V+Iw
msdI9CyZ59oVMVmNYQ==
-----END CERTIFICATE-----
2019-01-28 20:15:52.111 UTC [msp] setupSigningIdentity -> DEBU 035 Signing identity expires at 2027-06-24 12:49:26 +0000 UTC
2019-01-28 20:15:52.111 UTC [msp] Validate -> DEBU 036 MSP Org1MSP validating identity
2019-01-28 20:15:52.112 UTC [msp] GetDefaultSigningIdentity -> DEBU 037 Obtaining default signing identity
2019-01-28 20:15:52.112 UTC [grpc] DialContext -> DEBU 038 parsed scheme: ""
2019-01-28 20:15:52.112 UTC [grpc] DialContext -> DEBU 039 scheme "" not registered, fallback to default scheme
2019-01-28 20:15:52.112 UTC [grpc] watcher -> DEBU 03a ccResolverWrapper: sending new addresses to cc: [{orderer0.example.com:7050 0
```
Has joined the channel.
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=Nh3nRbMTccqrCsB99) @jyellick I didn't try that command at the time, and I have restarted the network to resume work, so I can't check it now. I did check the channel list with `peer channel list` and it showed the channel. But I want to know how to solve this kind of problem if it occurs in the future.
Hi All, I have a query: if Kafka is ahead of the orderer in topic offset count, and Kafka's log retention policy has removed older topic offsets, and at a later point in time the orderer tries to broadcast a new block but ends up with a Kafka offset-out-of-range issue, can we reset the Kafka offset to save our orderer from this issue? Please help, as this is a major issue where I am stuck.
Has joined the channel.
I tried to implement Kafka in Hyperledger Fabric. When I try to create a channel, it says the kafka-cluster has not completed booting.
I increased the sleep to 100 after my Kafka cluster was up, but that did not work either. I have tried to edit the code of the byfn network of Hyperledger Fabric, making 4 orderers, 4 Kafka brokers and 1 zookeeper ensemble.
While trying to create a channel it shows
```
+ peer channel create -o orderer0.example.com:7050 -c mychannel -f ./channel-artifacts/channel.tx --tls true --cafile /opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/ordererOrganizations/example.com/orderers/orderer0.example.com/msp/tlscacerts/tlsca.example.com-cert.pem
+ res=1
+ set +x
2019-01-29 10:29:26.155 UTC [channelCmd] InitCmdFactory -> INFO 001 Endorser and orderer connections initialized
Error: got unexpected status: SERVICE_UNAVAILABLE -- backing Kafka cluster has not completed booting; try again later
!!!!!!!!!!!!!!! Channel creation failed !!!!!!!!!!!!!!!!
========= ERROR !!! FAILED to execute End-2-End Scenario ===========
ERROR !!!! Test failed
```
The logs in orderer0.example.com is
```
2019-01-29 10:29:26.171 UTC [common.deliver] Handle -> WARN 00c Error reading from 192.168.176.14:41538: rpc error: code = Canceled desc = context canceled
2019-01-29 10:29:26.171 UTC [comm.grpc.server] 1 -> INFO 00d streaming call completed {"grpc.start_time": "2019-01-29T10:29:26.156Z", "grpc.service": "orderer.AtomicBroadcast", "grpc.method": "Deliver", "grpc.peer_address": "192.168.176.14:41538", "error": "rpc error: code = Canceled desc = context canceled", "grpc.code": "Canceled", "grpc.call_duration": "14.906454ms"}
```
@rodolfofranco You likely need a path like `crypto-config/ordererOrganizations/example.com/orderers/orderer.example.com/msp`
@HoneyShah As a rule, I wouldn't expect to encounter the problem you described unless the network were misconfigured
@bibek54 Please follow the [Kafka quickstart guide](https://kafka.apache.org/quickstart) to ensure your Kafka cluster is functioning without Fabric before attempting to plug Fabric into it. There are a multitude of good resources online around configuring a Kafka cluster. Note, we realize the pain users feel in learning to administer another non-Fabric service, and this is one of the reasons we are moving to Raft in the coming releases.
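Following on from the quickstart advice above, a minimal smoke test with the console clients that ship in the Kafka distribution might look like the following; the broker/zookeeper addresses and topic name are placeholders for your own setup, and the paths assume the stock Kafka tarball layout:

```shell
# Create a throwaway topic (on Kafka 0.10.x, topic creation goes
# through zookeeper)
bin/kafka-topics.sh --create --zookeeper zookeeper1:2181 \
  --replication-factor 3 --partitions 1 --topic connectivity-test

# Produce a message...
echo "hello" | bin/kafka-console-producer.sh \
  --broker-list kafka1.network:9092 --topic connectivity-test

# ...and read it back; if this hangs or errors, fix the Kafka
# cluster before pointing the Fabric orderer at it
bin/kafka-console-consumer.sh --bootstrap-server kafka1.network:9092 \
  --topic connectivity-test --from-beginning --max-messages 1
```

If this round trip works from the machine where the orderer will run, Fabric should be able to reach the cluster too.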
Hello,
I have a Kafka-based ordering service with 2 orderers on separate VMs. I am using docker swarm to connect them.
I am getting the following WARN message in the orderer nodes. Any idea what it means?
``` 2019-01-30 08:37:19.372 UTC [orderer.consensus.kafka] setupChannelConsumerForChannel -> INFO 00e [channel: testchainid] Setting up the channel consumer for this channel (start offset: -2)...
2019-01-30 08:37:19.384 UTC [orderer.consensus.kafka] startThread -> INFO 00f [channel: testchainid] Channel consumer set up successfully
2019-01-30 08:37:19.384 UTC [orderer.consensus.kafka] startThread -> INFO 010 [channel: testchainid] Start phase completed successfully
2019-01-30 08:53:02.080 UTC [orderer.consensus.kafka] processRegular -> WARN 011 [channel: testchainid] This orderer is running in compatibility mode
2019-01-30 08:53:02.082 UTC [comm.grpc.server] 1 -> INFO 012 streaming call completed {"grpc.start_time": "2019-01-30T08:53:01.931Z", "grpc.service": "orderer.AtomicBroadcast", "grpc.method": "Broadcast", "grpc.peer_address": "10.0.0.2:52624", "grpc.code": "OK", "grpc.call_duration": "150.82111ms"}
2019-01-30 08:53:02.082 UTC [comm.grpc.server] 1 -> INFO 013 streaming call completed {"grpc.start_time": "2019-01-30T08:53:01.93Z", "grpc.service": "orderer.AtomicBroadcast", "grpc.method": "Deliver", "grpc.peer_address": "10.0.0.2:52622", "grpc.code": "OK", "grpc.call_duration": "152.003659ms"}```
I am not sure what *compatibility mode* is, and I cannot find anything in the docs
@jyellick Hi sir, I am facing an error: `Error: got unexpected status: SERVICE_UNAVAILABLE -- will not enqueue, consenter for this channel hasn't started yet`
My 4 Kafka brokers are working properly, as I have checked them by sending some messages. I have set `advertised.listeners=PLAINTEXT://zookeeper-1:9092` in my Kafka configuration properties, and I have listed zookeeper-1:9092 in the Kafka broker section of configtx.yaml, along with 3 other Kafka brokers. I am using the same GKE VM instances for zookeeper and Kafka, just for practice and testing. Can you help me fix the issue, or tell me where I am making a mistake?
@yousaf Have you tried using the sample clients from the same container/VM/origin as the orderer binaries? Usually errors in this case can be traced back to problems with the 'advertised' properties.
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=qLfjMjTGgLYzKyTHz) @jyellick Sir, I didn't get it. Could you clarify what you mean by that?
@jyellick one more thing is that the zookeeper instances are resolving to internal IP addresses. Is that issue might be because of the fact that the zookeepers should resolve to external IP addresses?
@yousaf As you can see in the kafka documentation, the brokers and zookeepers can advertise different address internally and externally. When a client cannot communicate with the brokers, usually it is because the externally advertised addresses are configured incorrectly. You can test this by using the Kafka sample clients from the same container/VM where you will be executing the orderer. If the sample clients cannot connect, then the orderer will not be able to either.
Okay sir. Thanks. Let me try this...
@jyellick Sir, by Kafka sample client testing, do you mean I should set up all the Kafka configuration in the container in which the orderer is executing, and then check communication from this orderer container to the destination Kafka brokers (like sending some messages)?
@yousaf Correct. Leave your Kafka cluster as it is, but run the sample clients from where the orderer would run to ensure there is connectivity and the advertised names and ports are all correct.
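To make the 'advertised' distinction concrete, a broker's `server.properties` can expose different listeners internally and externally (Kafka 0.10.2+; the listener names, hosts and ports below are placeholders):

```properties
# What the broker binds to
listeners=INTERNAL://0.0.0.0:9092,EXTERNAL://0.0.0.0:9093
# What clients are told to connect back to; EXTERNAL must be
# resolvable from wherever the orderer (or sample client) runs
advertised.listeners=INTERNAL://kafka1.network:9092,EXTERNAL://kafka1.example.com:9093
listener.security.protocol.map=INTERNAL:PLAINTEXT,EXTERNAL:PLAINTEXT
inter.broker.listener.name=INTERNAL
```

If the sample clients fail against the EXTERNAL address, the orderer will fail in the same way.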
@jyellick I have changed some Kafka configs related to advertised addresses, and I was able to ping and telnet to both the internal and external IP addresses of the Kafka brokers from the orderer container. But I am getting this error from the logs: `Error: timeout waiting for channel creation`. Even though I have passed a timeout of 120s to `peer channel create`, I am still facing the same issue.
Capture.PNG
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=QurzLXCi6tq7zmHFy) @jyellick These are the logs
@yousaf Ping and telnet are not particularly useful for debugging Kafka problems. Please use the Kafka sample clients.
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=NTFoF9qrTwx7uiKt3) Hi @jyellick, any idea about the *Compatibility Mode* of the Kafka Orderer?
@gaijinviki Compatibility mode is used when the network does not have the `V1_1` ordering capability enabled.
It is designed to allow you to operate your network with a mix of v1.0.x and v1.1.x orderers.
Has joined the channel.
Is anyone able to guide me in the direction of creating a new channel? I am working with a new SDK that I am developing for C# and have gotten a simple construction working but now I am trying to add anchor peers to them along with an MSP. Every time I try and create the new channel with an MSP config set, it gives me back the following error:
```
2019-01-31 04:05:28.986 UTC [common/channelconfig] initializeProtosStruct -> DEBU 292a Processing field: MSP
2019-01-31 04:05:28.986 UTC [common/channelconfig] Validate -> DEBU 292b Anchor peers for org org0 are anchor_peers:
```
Then I try specifying all the certificates for the MSP and I get this error:
```
2019-01-31 04:05:08.351 UTC [common/channelconfig] initializeProtosStruct -> DEBU 2806 Processing field: MSP
2019-01-31 04:05:08.351 UTC [common/channelconfig] Validate -> DEBU 2807 Anchor peers for org org0 are anchor_peers:
```
Nvm I got it!! I just wasn't wrapping the `FabricMSPConfig` in a `MSPConfig` and it wasn't able to get the type :sweat_smile:
Hmm but now how do I go about making the peer aware of the new channel?
@ConnorChristie You must join peers to channels by invoking the join channel API with the genesis block of the channel as a parameter
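For reference, the CLI equivalent of joining a peer (fetching block 0, i.e. the channel's genesis block, from the orderer, then joining with it) is roughly the following; the orderer address, channel name and `$ORDERER_CA` path are assumptions matching the samples:

```shell
# Fetch the channel's genesis block from the orderer
peer channel fetch 0 mychannel.block -o orderer0.example.com:7050 \
  -c mychannel --tls --cafile $ORDERER_CA

# Join this peer to the channel using that block
peer channel join -b mychannel.block
```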
@jyellick Thanks sir. But now I am getting this error on the `peer channel create` command: `Error: got unexpected status: FORBIDDEN -- Failed to reach implicit threshold of 1 sub-policies, required 1 remaining: permission denied`
I am getting the below error when starting the orderer:
`Channel capability V1_4 is required but not supported`
Did someone set up 1.4?
in my configtx.yml:
```
Capabilities:
    # Channel capabilities apply to both the orderers and the peers and must be
    # supported by both. Set the value of the capability to true to require it.
    Channel: &ChannelCapabilities
        # V1.1 for Global is a catchall flag for behavior which has been
        # determined to be desired for all orderers and peers running v1.0.x,
        # but the modification of which would cause incompatibilities. Users
        # should leave this flag set to true.
        V1_4: true

    # Orderer capabilities apply only to the orderers, and may be safely
    # manipulated without concern for upgrading peers. Set the value of the
    # capability to true to require it.
    Orderer: &OrdererCapabilities
        # V1.1 for Order is a catchall flag for behavior which has been
        # determined to be desired for all orderers running v1.0.x, but the
        # modification of which would cause incompatibilities. Users should
        # leave this flag set to true.
        V1_1: true

    # Application capabilities apply only to the peer network, and may be safely
    # manipulated without concern for upgrading orderers. Set the value of the
    # capability to true to require it.
    Application: &ApplicationCapabilities
        # V1.1 for Application is a catchall flag for behavior which has been
        # determined to be desired for all peers running v1.0.x, but the
        # modification of which would cause incompatibilities. Users should
        # leave this flag set to true.
        V1_3: true
        V1_2: false
        # V1.1 for Application enables the new non-backwards compatible
        # features and fixes of fabric v1.1 (note, this need not be set if
        # V1_2 is set).
        V1_1: false
```
@kiranarshakota Please do not post long snippets of files in this channel, use a service like hastebin.com instead
To answer your question, there are no new capabilities flags in v1.4 (it is entirely backwards compatible with v1.3). Please see the `configtx.yaml` distributed with v1.4 to see all allowable capabilities.
@jyellick, https://github.com/hyperledger/fabric/blob/release-1.4/sampleconfig/configtx.yaml — could I use something similar to this one?
Also, I see an error: *Channel capability V1_4 is required but not supported*
What could be the reason?
Hello, I have a Kafka-based ordering node, and the log level in the environment variables is set to debug like so: `ORDERER_GENERAL_LOGLEVEL=debug`, but the orderer is printing only *INFO* logs. Is there any other setting to make it print *debug* logs?
@gaijinviki presumably you are using fabric 1.4, and you should check docs here: https://hyperledger-fabric.readthedocs.io/en/latest/logging-control.html
(it's `FABRIC_LOGGING_SPEC` now)
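A minimal sketch of the new variable; the module name `orderer.consensus.kafka` is taken from the log lines above, the rest is illustrative:

```shell
# Everything at info, but the Kafka consenter at debug
export FABRIC_LOGGING_SPEC="info:orderer.consensus.kafka=debug"
echo "$FABRIC_LOGGING_SPEC"
```

Set this in the orderer container's environment (in place of the old `ORDERER_GENERAL_LOGLEVEL`).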
Thank you
@kiranarshakota Yes, that yaml file should be fine. Notice that there is no channel capability (or any other capability for that matter) named `V1_4` in that file. Because it does not exist, the orderer of course cannot support it
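For comparison, the capabilities section of the v1.4 sample `configtx.yaml` is along these lines (abridged, comments removed); note there is no `V1_4` key:

```yaml
Capabilities:
    Channel: &ChannelCapabilities
        V1_3: true
    Orderer: &OrdererCapabilities
        V1_1: true
    Application: &ApplicationCapabilities
        V1_3: true
```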
@jyellick Which service is the "join channel API" a part of? Is that under the peer gRPC or ca service?
@ConnorChristie This is an API off the CSCC chaincode -- I suggest you look at the first network tutorials which cover these sort of details https://hyperledger-fabric.readthedocs.io/en/release-1.4/build_network.html
Has joined the channel.
@jyellick Got that fixed. Issue was related to some kafka server configuration. Thanks for your support sir :)
Hi all,
I have a question regarding the orderer in Kafka mode.
If, for example, there were no requests to an orderer during a day, and then I try to send a transaction to the orderer, it fails.
It looks like Kafka and zookeeper are not available: Kafka reports that leader election failed, the orderer says it cannot read the genesis block for a channel (the transaction was to create a channel), and zookeeper says the following:
```
[Thread-396:NIOServerCnxn@1044] - Closed socket connection for client /127.0.0.1:51650 (no session established for client)
```
and kafka says the next:
```
[channel: genesis] Need to retry because process failed = kafka server: In the middle of a leadership election, there is currently no leader for this partition and hence it is unavailable for writes.
RRO 17d9 [channel: channel154915294645711549273158528] Error during consumption: kafka: error while consuming channel154915294645711549273158528/0: kafka server: Replication-factor is invalid.
```
Are there any properties for zookeeper and Kafka to keep them from going into an idle state?
Thanks in advance
Also, the orderer sometimes spams this message to logs:
```
2019-02-04 10:04:37.857 UTC [grpc] newHTTP2Transport -> DEBU 2f6 grpc: Server.Serve failed to create ServerTransport: connection error: desc = "transport: http2Server.HandleStreams failed to receive the preface from client: EOF"
```
at this moment nothing happens with the orderer, it just started/restarted and no transactions were sent when I got these messages
Has joined the channel.
@gravity If the Kafka cluster stays in 'leadership election' for long periods of time, most likely this is a misconfiguration of your Kafka cluster or other networking problem between Kafka cluster members. There is no 'general' tweak for preventing this, other than to identify and rectify the misconfiguration.
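One way to check whether a partition is stuck without a leader is to describe the channel's topic with the Kafka admin tooling; the zookeeper address and channel name here are placeholders:

```shell
# Each Fabric channel maps to a Kafka topic named after the channel.
# "Leader: -1" or an empty Isr set in the output indicates the
# leadership-election problem described above.
bin/kafka-topics.sh --describe --zookeeper zookeeper1:2181 --topic mychannel
```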
@jyellick ok, thanks. I'm going to debug that behavior
and what about that message from an orderer? is it something to worry about?
```
2019-02-04 10:04:37.857 UTC [grpc] newHTTP2Transport -> DEBU 2f6 grpc: Server.Serve failed to create ServerTransport: connection error: desc = "transport: http2Server.HandleStreams failed to receive the preface from client: EOF"
```
Sounds to me like something on your network is connecting but not even attempting the TLS handshake, maybe a port-scanner or similar?
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=CjbAFhArebuZzjkJD) @jyellick currently TLS is disabled for the network
@gravity Ah, still, the socket does not appear to be opening all the way as HTTP would expect it to, my bet would still be something like a port scanner
@jyellick get it, thanks
This might be a complex question to ask, but I am going to try. It might need a few rounds of queries, so if anyone is volunteering to help, maybe you could do it in my private channel after this question.
Basically, I have two orderers wired up to a cluster of 4 Kafka and 3 zookeepers. As far as I can tell, these containers are up and running.
Docker logs of those don't seem to suggest errors.
Next, from a cli container in the same docker-compose network I executed this command: `peer channel create -o $ORDERER -c $CHANNEL_ONE_NAME -f ./channel-artefacts/$CHANNEL_ONE_NAME.tx --tls --cafile $ORDERER_CA` but I am getting this error in the orderer container:
```
2019-02-06 16:52:53.299 UTC [channelCmd] InitCmdFactory -> INFO 001 Endorser and orderer connections initialized
Error: got unexpected status: SERVICE_UNAVAILABLE -- backing Kafka cluster has not completed booting; try again later
```
Question: What might be the cause of this error?
This is my orderer container's environment settings:
```
- CONFIGTX_ORDERER_ORDERERTYPE=kafka
- CONFIGTX_ORDERER_KAFKA_BROKERS=[kafka1.network:9092,kafka2.network:9092,kafka3.network:9092,kafka4.network:9092]
- ORDERER_GENERAL_LISTENADDRESS=0.0.0.0
- ORDERER_GENERAL_LISTENPORT=7050
- ORDERER_GENERAL_GENESISMETHOD=file
- ORDERER_GENERAL_QUEUESIZE=1000
- ORDERER_GENERAL_MAXWINDOWSIZE=1000
- ORDERER_RAMLEDGER_HISTORY_SIZE=100
- ORDERER_GENERAL_BATCHSIZE=10
- ORDERER_GENERAL_BATCHTIMEOUT=10s
- ORDERER_GENERAL_LOGLEVEL=debug
- ORDERER_GENERAL_GENESISFILE=/var/hyperledger/fabric/crypto-config/channel-artefacts/genesis.block
- ORDERER_GENERAL_LOCALMSPID=OrdererMSP
- ORDERER_GENERAL_LOCALMSPDIR=/var/hyperledger/fabric/crypto-config/orderer.kafka.network/msp
- ORDERER_GENERAL_TLS_ENABLED=true
- ORDERER_GENERAL_TLS_CERTIFICATE=/var/hyperledger/fabric/crypto-config/orderer.kafka.network/tls/server.crt
- ORDERER_GENERAL_TLS_PRIVATEKEY=/var/hyperledger/fabric/crypto-config/orderer.kafka.network/tls/server.key
- ORDERER_GENERAL_TLS_ROOTCAS=[/var/hyperledger/fabric/crypto-config/orderer.kafka.network/tls/ca.crt, /var/hyperledger/fabric/crypto-config/peerOrganizations/org2.kafka.network/tls/ca.crt]
- ORDERER_KAFKA_RETRY_SHORTINTERVAL=1s
- ORDERER_KAFKA_RETRY_SHORTTOTAL=30s
- ORDERER_KAFKA_VERBOSE=true
- ORDERER_GENERAL_GENESISPROFILE=SampleInsecureKafka
- ORDERER_ABSOLUTEMAXBYTES=1048576
- ORDERER_PREFERREDMAXBYTES=1048576```
@paul.sitoh Could you first try to diagnose the Kafka cluster with http://kafka.apache.org/quickstart ?
If you confirm the Kafka cluster is properly functioning, please post the orderer log (maybe using pastebin or gist, and share the URL)
ok
Hello, I am using the Java SDK to do service discovery during channel initiation. The peer external endpoint is set to `CORE_PEER_GOSSIP_EXTERNALENDPOINT=mycustomurl:8051`, so the SDK can connect to the peer, but the discovery peer is returning the orderer address as `orderer0.example.com`, which the SDK cannot resolve. How can I change the external address of the orderer, preferably via some environment variable?
@gaijinviki you have to set it in the `configtx.yaml` when you created the channel
or you do a configuration update that changes the channel config afterwards
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=vRfs8d8Ne4BXpfmCD) @yacovm Okay got it. Thank you!
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=t8zrmmbWE7vSQ3Ajo) Just to confirm, in the configtx.yaml, should I use the host port or the docker internal port? For example my docker-compose has the following:
```
ports:
  - 8050:7050
```
depends from where you want to get to the orderer node
if unsure you can just add both ports, via adding 2 different entries
From the sdk, which is running on another VM
so the 8050
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=aBaYRC2DBMQZ8B33o) @yacovm
```
Addresses:
  - orderer0.example.com:7050
  - orderer0.example.com:8050
```
So I can do this in configtx.yaml?
yeah
Great. Thank you.
@yacovm If I change the `Addresses` field in `configtx.yaml` from `orderer0.example.com:7050` to `myurl:7050`, the peer node is not able to connect to the orderer. My peer and orderer nodes are running on different VMs, but connected to a docker swarm network.
```
2019-02-08 08:43:45.103 UTC [deliveryClient] connect -> ERRO 11bf Failed obtaining connection: Could not connect to any of the endpoints: [myurl:8050 myurl:7050]
```
Is there anything else that I need to update as well ?
add also as `orderer0.example.com:7050` :) @gaijinviki
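So the `Addresses` list under the `Orderer` section of `configtx.yaml` would carry both names, something like (hostnames are from the discussion above):

```yaml
Orderer: &OrdererDefaults
    Addresses:
        # name resolvable inside the swarm overlay network (peers)
        - orderer0.example.com:7050
        # externally resolvable name (SDK on another VM)
        - myurl:7050
```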
Hey guys! I have a question concerning the disk space of an orderer. Currently we have 142251 blocks and the disk space used by the orderer is about 3.2GB. Is there a way to free up disk space?
@yacovm, I added the additional `Addresses` lines in `configtx.yaml`, regenerated the config and crypto-config files, and restarted the network, but the SDK is still getting the same old address `orderer0.example.com:7050`
```
[error 2019/02/08 18:49:33.365 JST server1
```
I noticed that it is also trying to connect to the new address `myurl:7050`, but even that fails
```
[error 2019/02/08 18:49:27.006 JST server1
```
it cannot resolve the host
Yes, but it should be able to, because on a similar url, I have a peer running, and the SDK can resolve that host and send a discovery msg to it.
```
[info 2019/02/08 18:48:40.853 JST server1
```
Has joined the channel.
hey All,
any idea how to back up an orderer via the docker-compose volume mapping so it can survive unexpected crashes and be restarted again?
Hi, when the Raft-based orderer is released, is the expectation that Kafka will be retired in a future release, or that both will exist for the foreseeable future?
@rsherwood: For the foreseeable future, both Kafka and Raft will coexist.
I will personally campaign so that we retire the Kafka option as soon as we possibly can though.
There is any reasonable way of deleting channels? I want to keep the network clean from testing channels.
@waxer: No.
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=v52m3DX7PrE2h6a9F) is it a silly question?
@NeerajKumar if you create a volume, it won't be deleted if container exits, so you can recover from there
but I don't know how to maintain an orderer's volume. Can you please share which host directory should be mapped in this container's volume mapping to recover my orderer?
right now if something happens and my network fails, I have to recreate the channel and chaincode states
I want to safeguard against this situation
and I am already halfway there, as the entire ledger is already backed by volume mapping for couchdb
but the orderers are still hanging
please guide me how I can back them up as well
@NeerajKumar: You'll need to persist whatever path you've used for the `FileLedger.Location` value in your `orderer.yaml`: https://github.com/hyperledger/fabric/blob/release-1.4/sampleconfig/orderer.yaml#L159
I assume you use solo.
If you use Kafka, you'll need to make sure that the path in `log.dirs` is also persisted somewhere permanently.
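A minimal docker-compose sketch of the above: persist the orderer's ledger directory on a named volume (service and volume names are hypothetical; the path matches the default `FileLedger.Location`):
```
# Hypothetical docker-compose fragment; service and volume names are examples.
services:
  orderer0.example.com:
    image: hyperledger/fabric-orderer:1.4
    environment:
      # Must agree with FileLedger.Location in orderer.yaml (this is the default):
      - ORDERER_FILELEDGER_LOCATION=/var/hyperledger/production/orderer
    volumes:
      # A named volume survives container removal/recreation.
      - orderer0-ledger:/var/hyperledger/production/orderer
volumes:
  orderer0-ledger:
```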
First of all, thank you so much @kostas for pointing me in the right direction.
and second, I am using Kafka
please also share a sample Kafka config with which I can set this `log.dirs`
last time I had a major meltdown of my entire staging network because of this Kafka `log.dirs`
please guide me
hey, I found this in the official Kafka docs:
```
config/server-1.properties:
    broker.id=1
    listeners=PLAINTEXT://:9093
    log.dirs=/tmp/kafka-logs-1

config/server-2.properties:
    broker.id=2
    listeners=PLAINTEXT://:9094
    log.dirs=/tmp/kafka-logs-2
```
but not sure how to put that into kafka config for fabric
please guide me
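With the `hyperledger/fabric-kafka` image, Kafka's `server.properties` settings are typically supplied as `KAFKA_*` environment variables in the compose file. A sketch (the variable-to-property mapping is an assumption; verify against the image you use):
```
# Hypothetical fragment; assumes the image maps KAFKA_LOG_DIRS onto log.dirs.
services:
  kafka0:
    image: hyperledger/fabric-kafka
    environment:
      - KAFKA_BROKER_ID=0
      - KAFKA_LOG_DIRS=/var/kafkalogs
      # Broker settings Fabric's Kafka docs call for:
      - KAFKA_LOG_RETENTION_MS=-1
      - KAFKA_UNCLEAN_LEADER_ELECTION_ENABLE=false
      - KAFKA_MIN_INSYNC_REPLICAS=2
      - KAFKA_DEFAULT_REPLICATION_FACTOR=3
    volumes:
      # Persist the broker's log segments so channel data survives restarts.
      - kafka0-logs:/var/kafkalogs
volumes:
  kafka0-logs:
```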
I'm trying to create a channel in `first-network` (I modified configtx.yaml for 3 orgs) and get this error:
```
root@d41215:/var/seeds/hlf/1.2.1/fabric-samples/first-network# docker exec -e "CHANNEL_NAME=mychannel" -e "CORE_PEER_MSPCONFIGPATH=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1.example.com/users/Admin@org1.example.com/msp" -e "CORE_PEER_ADDRESS=peer0.org1.example.com:7051" -e "CORE_PEER_LOCALMSPID='Org1MSP'" -e "CORE_PEER_TLS_ROOTCERT_FILE=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/tls/ca.crt" cli peer channel create -o orderer.example.com:7050 -c channel2 -f /channel.tx --tls --cafile /opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/ordererOrganizations/example.com/orderers/orderer.example.com/msp/tlscacerts/tlsca.example.com-cert.pem
2019-02-14 16:20:37.940 UTC [channelCmd] InitCmdFactory -> INFO 001 Endorser and orderer connections initialized
Error: got unexpected status: BAD_REQUEST -- Attempted to include a member which is not in the consortium
```
`configtx.yaml`: https://gist.github.com/VadimInshakov/897aaef92cdf1dcd38aab16cecfc82b0
Hi, please help me resolve this error, which I am getting during chaincode instantiation in a multi-host docker swarm setup...
007a6abf-af14-4e65-8149-4417c5b4b34f.jpg
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=ZbKury4fCMJe2mnmy) @kostas @yacovm
Question: if I have a consortium with participants A and B, can I create a channel with participants A and C?
Because I'm getting an 'Attempted to include a member which is not in the consortium' error.
Besides, the one creating the channel is an admin from A.
Do I necessarily need to add C to the consortium?
Well... I created the channel only with A. And then modified the channel config to add C. Seemed to work. Sound reasonable?
@knagware9 first of all, pls use some tools like [pastebin](https://pastebin.com/) or [gist](https://gist.github.com/) to paste your logs and copy url here. it's hard to look at screenshot/photo and diagnose. Secondly, it says `bad certificate`, and pls confirm you are using correct tls cert to communicate with orderer
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=HN5N4yCEjXoREoLxp) @guoger yes, I will use pastebin from next time onwards. I checked that the certificate is correct; I think the peer is not able to reach the orderer, as the orderer is on another machine. Do we need to provide the orderer IP address in the config block/channel.tx via the configtx file?
Hello all, why does the orderer call the function extractBootstrapChannel twice?
sorry the function extractBootstrapBlock
fabric 4.0
Has joined the channel.
Hi all, I would like to know if there exists a sample or an example where an ordering service between different organisations is created. The samples now only have one organisation that manages the ordering service. I can't find an example where different organisations have an OSN in the network. Thank you very much!
@braduf Especially with Kafka, we recommend that only one org administer the ordering service. You'll see more examples produced as Raft support is released in the coming months.
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=z6tjxBoLErSRmSp45) @jyellick @jyellick hmm, so you mean one organisation that runs the kafka cluster, or also the fabric ordering nodes? Because that takes away the principle of decentralization, no? The orderer org can just pull out the plug of the network by stopping the ordering service then...?
@braduf It's not exactly that simple. The channel configuration can still be configured to be jointly administered, and, should the org running the ordering service pack up its orderers and go home, the channel could be reconfigured to use a new ordering service. With Raft support, this will be much more natural however.
With Raft, it will be much simpler to have multiple organizations contribute nodes to ordering, and if an org leaves, the channel may remove that org's orderers from consideration.
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=koWvak7cNk6iafvMu) So ideal is that every organisation have a kafka cluster and ordering nodes ready, for the case there is something wrong with the current ordering organisation to be ready to switch...?
Ok, I understand, and so is it really not possible for the moment or just really difficult?
> So ideal is that every organisation have a kafka cluster and ordering nodes ready, for the case there is something wrong with the current ordering organisation to be ready to switch...?
I would say the better plan would be to ensure that the organization running ordering does not pack up and leave. This could be a joint venture between participants or similar. There is no requirement that the ordering org be one of the application orgs, in fact we discourage this.
But, the difficulty in distributing ordering responsibilities is one of the motivations for the impending Raft support.
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=CbGQ3zwq5P65GRgw4) @jyellick So for the moment, every organization in the network has its own CA; should we all together create another CA for the "venture" to represent this ordering organization?
Yes, the ordering org should have a different root of trust from the other application orgs.
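Since `cryptogen` generates a distinct root CA per org, a separate root of trust for ordering falls out of simply declaring the orderer org on its own in `crypto-config.yaml` (names here are illustrative):
```
# Illustrative crypto-config.yaml: the ordering org gets its own CA,
# separate from every application org.
OrdererOrgs:
  - Name: OrdererOrg
    Domain: ordererorg.example.com
    Specs:
      - Hostname: orderer0
PeerOrgs:
  - Name: Org1
    Domain: org1.example.com
    Template:
      Count: 2
    Users:
      Count: 1
```
With a production CA (e.g. fabric-ca), the equivalent is running a dedicated CA instance for the ordering org.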
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=nQfbhKtGyGKscGHRY) @jyellick Ok, and so, is this really the only way for now, or is it just difficult to add orderers of other organisations?
@braduf Adding orderers from other organizations is not the difficult part. You may do this, so long as they have access to the backing Kafka cluster. However, Kafka does not lend itself to being administered by multiple organizations and should generally not be spread across datacenters.
And if you have the hosting of the Kafka cluster centralized, decentralizing your orderers does not buy you much.
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=h3CRNQAc4Lyq9oHFf) @jyellick Ok, I understand, and do you have any idea when more or less the raft ordering service comes out?
And with the experience they have in Hyperledger Burrow using Tendermint, haven't there been any tests of using Tendermint as a consensus engine for Fabric?
To be able to have a BFT ordering service...
There has been some work with tendermint, and bringing BFT consensus to ordering is absolutely a priority in the roadmap.
We had BFT consensus in Fabric v0.5/v0.6, but based on user feedback, we decided to use Kafka to get v1.0 going quickly, with the long-term goal of bringing BFT back to ordering. The introduction of Raft is once more a step towards BFT, as the Raft and PBFT consensus protocols are topologically very similar.
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=fH8cJSjX6N64Nu9qC) @jyellick and the BFT consensus of v0.6 cannot be used anymore in v1.3? I have also seen an unofficial initiative that developed a bft-smart ordering service, but I don't know if it is ready and recommended to use? For our case, with the involved parties, it will be difficult to create a venture, and it is a priority for the choice of platform to have a decentralized ordering service...
@braduf The architecture in v0.6 to v1.0 shifted drastically. One of the big breaks was splitting the ordering service from the executing/committing environment (orderer vs. peer), so there's no obvious way to port between the two. There is an implementation of ordering using bft-smart, and I know they made a release not too long ago addressing the shortcomings of their initial release. I've not had a chance to play too much with it, but it could be an option. Alternatively, Raft support should debut in a release within the next month or so. If you are really eager to get going it is largely functional in master.
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=E2S2XtBEn6Wjk8ZYn) @jyellick That's great to know, thanks a lot for the extensive and clear information!
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=Nh3nRbMTccqrCsB99) Hi @jyellick , I have tried the `peer channel fetch` command and the configuration displayed is appropriate. However, I am still facing the issue of the channel not found by orderer. Here is the short log of the issue from the orderer:```
Rejecting deliver for 10.0.0.2:43558 because channel eprocurechannel not found
``` and from one of the peers:
```
DeliverBlocks -> WARN 330d6 [eprocurechannel] Got error &{NOT_FOUND}
```
Do you have multiple orderers? Did you try the command against each of them?
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=PEipNuyK5k7yF976r) @jyellick Yes I have multiple orderers. I have tried this command in one peer container
Please try it against all of the orderers. If I were to guess, you likely have several orderers, each running in Solo mode, operating independently.
You can also check your orderer logs for the consensus type
You should see a log statement like:
```2019-02-21 00:42:09.700 EST [orderer.commmon.multichannel] Initialize -> INFO 008 Starting system channel 'test-system-channel-name' with genesis block hash f9611e0e1d1e547c7a348c7b904a7e2fcf97a1c8b18f212b052310f93df2d257 and orderer type solo
```
If it says "orderer type *solo*", then this is your problem. If it says "orderer type *kafka*" then it is something else, though not immediately obvious what this might be.
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=SyXHx7xReBTN6RRBw) @jyellick No, I am using kafka for running multiple orderers
Are you sure? Can you check your logs? I have seen this misconfiguration before.
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=4WD4jyTB6sdjv3BLT) @jyellick Yes, I am sure. Is there any command to check the consensus type so I can send you the exact details?
When your orderer starts up, it should emit a log message like the one I described above. If you can grep for "and orderer type" in each orderer's log, you should find it.
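A quick way to pull that field out of such a line; this is a sketch, and the sample line below is just the example from above (in practice it comes from `docker logs <orderer-container>`):

```shell
# Sample log line; in practice: docker logs <orderer-container> 2>&1 | grep 'and orderer type'
line='2019-02-21 00:42:09.700 EST [orderer.commmon.multichannel] Initialize -> INFO 008 Starting system channel ... and orderer type solo'

# Extract the consensus type from the end of the line.
echo "$line" | grep -o 'orderer type .*'
# prints: orderer type solo
```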
Here are some of the logs:
https://pastebin.com/ZQc8Afzz
It does look like that orderer is at least talking to Kafka
Actually, all was working fine before, and suddenly there was an error, so I tried updating the docker services, but it is not working.
Was the container recreated? Is it possible the ledger was stored directly in the container filesystem and not on a persistent volume?
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=GmiKxE2Kvjn8WKHjd) @jyellick I have 3 peers with 1 organization. One of them was recreated, but I am not using that one for endorsement, and the other two peers have all the data.
I mean your orderer container
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=wJYTg5J5SNLWAowv7) @jyellick I have 4 orderers, and two of them were started again as that system was down for some time. The other 2 were up the whole time.
The peer connects to a random orderer for delivering blocks, so if the orderer filesystem were destroyed, it might be mid-rebuild and report a not found error.
I would be curious whether, if you tried `peer channel fetch` against each orderer, you would find 2 working and 2 failed.
So, I need to persist the orderer filesystem as well for fault tolerance. Can I know the container path that needs to be persisted?
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=mqoeuAdJnDu5hGz8w) @jyellick I will check and let you know
It is of course configurable, but by default, it is `/var/hyperledger/production/orderer`
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=mqoeuAdJnDu5hGz8w) @jyellick It is not working with 2 orderers.
but can we solve this problem for this running network or need to persist first and restart the network?
And for production, I am assuming I need to persist data of peer, couchdb, ca and orderer.
Correct
> but can we solve this problem for this running network or need to persist first and restart the network?
It should be possible to recover this network. The orderers should reconnect to Kafka and replay the log after restart. If you find complaints about offset not found, this would imply that your Kafka logs have rolled and recovery will be much more challenging.
Okay. Got it. Thanks. So it is not always simple to recover from a fault without persisted data.
Hello! I'm trying to get the configuration of the orderer system channel. I'm doing it by invoking the `channel.getChannelConfig` method of the node SDK. The invoking identity is the orderer admin. TLS is enabled. Fabric 1.4.
I got a handshake error. Peer logs:
```
TLS handshake failed with error tls: failed to verify client's certificate: x509
```
What am I doing wrong? Can't the orderer admin establish a connection with a peer? How is the admin supposed to work with the orderer channel?
Note: channel transactions work correctly, so the orderer's MSP is configured properly.
@krabradosty The orderer admin is generally not allowed to invoke chaincode on a peer (this is part of an important separation of powers between ordering and peers)
You may retrieve the config block directly from an orderer however.
@jyellick Got it. What about updating the config of the orderer system channel? Should I give orderer admins access to invoke chaincode on peers in that case?
In general, peers should not have access to join the orderer system channel
You would once more want to retrieve the config block directly from ordering.
@jyellick I mean I need to update orderer system channel config. For example, I want to update BatchSize, so all new channels will have this value. Or add new organization to the Consortium.
Yes, you may retrieve the latest config block from the orderer, for instance using the `peer channel fetch` command, or, using the SDK to call the orderer's Deliver API directly.
Has joined the channel.
I have recently setup kafka orderer in fabric, but after executing some transactions it is consuming more disk space when compared to solo orderer and also the speed at which transactions are executed also came down. Are there any suggestions for optimizing disk space and performance.
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=H5eqZGBHC3CxzgDTK) @jyellick After that how can I send a transaction to update channel config?
@krabradosty You may submit it as a normal config update transaction to ordering. Config updates are not endorsed by peers, rather signatures are gathered out of band.
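That flow can be sketched with the standard CLI tools (channel name, file names, and orderer address here are illustrative; the envelope-wrapping step is abbreviated):
```
# 1. Fetch the latest config block from an orderer.
peer channel fetch config config_block.pb -c mychannel -o orderer0.example.com:7050

# 2. Decode it and pull out the config section.
configtxlator proto_decode --input config_block.pb --type common.Block \
  | jq .data.data[0].payload.data.config > config.json

# 3. Edit a copy (e.g. BatchSize, or a new consortium org), then
#    re-encode both versions and compute the delta.
cp config.json modified_config.json   # edit by hand or with jq
configtxlator proto_encode --input config.json --type common.Config --output config.pb
configtxlator proto_encode --input modified_config.json --type common.Config --output modified_config.pb
configtxlator compute_update --channel_id mychannel \
  --original config.pb --updated modified_config.pb --output update.pb

# 4. Wrap update.pb in a config-update envelope (decode, wrap in a payload,
#    re-encode; omitted here), gather the required admin signatures...
peer channel signconfigtx -f update_envelope.pb
# ...and submit it to ordering:
peer channel update -f update_envelope.pb -c mychannel -o orderer0.example.com:7050
```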
Hey @Techie
I'm also setting up Kafka. Can you share the logs & documentation you used for setting it up? @Techie [ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=ZB2pAFXhETdgTymWh)
> I have recently setup kafka orderer in fabric, but after executing some transactions it is consuming more disk space when compared to solo orderer and also the speed at which transactions are executed also came down. Are there any suggestions for optimizing disk space and performance.
@Techie Solo is not a crash fault tolerant ordering service, it only stores one copy of the data. Kafka is, so you will be storing multiple copies of the data, so more disk usage is inevitable.
Hi @jyellick Someone told me that dynamic inclusion of a new peer or organization in the running network requires fabric-samples v1.1 because configtxlator only works properly on that version. Is it true that versions v1.2, 1.3 and 1.4 do not support extending the network?
@yousaf No, I suspect they meant that it requires fabric-samples v1.1+, extending the network is part of our standard test suite and runs routinely.
@jyellick Thanks sir. Got it
Has joined the channel.
Hello! I can't add anchor peer to the orderer system channel config. I got error from configtxlator:
```
status: 400,
text: '*common.Config: error in PopulateFrom for field channel_group for message *common.Config: *common.DynamicChannelGroup: error in PopulateFrom for map field groups with key Consortiums for message *common.DynamicChannelGroup: *common.DynamicConsortiumsGroup: error in PopulateFrom for map field groups with key OptheriumConsortium for message *common.DynamicConsortiumsGroup: *common.DynamicConsortiumGroup: error in PopulateFrom for map field groups with key Org1MSP for message *common.DynamicConsortiumGroup: *common.DynamicConsortiumOrgGroup: error in PopulateFrom for map field values with key AnchorPeers for message *common.DynamicConsortiumOrgGroup: *common.DynamicConsortiumOrgConfigValue: error in PopulateFrom for field value for message *common.DynamicConsortiumOrgConfigValue: unknown Consortium Org ConfigValue name: AnchorPeers\n',
method: 'POST',
path: '/protolator/encode/common.Config' },
```
It seems like the orderer config doesn't support the `AnchorPeers` field. Am I right? Why?
So our demo K8S cluster has gone TU. One of the orderers is failing to initialize with this little gem:
```
2019-02-26 00:17:28.598 UTC [orderer/common/server] initializeMultichannelRegistrar -> INFO 003 Not bootstrapping because of existing chains
2019-02-26 00:17:28.633 UTC [common/ledger/blockledger/file] Next -> ERRO 004 Entry not found in index
2019-02-26 00:17:28.634 UTC [orderer/commmon/multichannel] getConfigTx -> CRIT 005 Config block does not exist
panic: Config block does not exist
```
This isn't one I've seen before. Anyone able to tell what's been overwritten/fubar'd and if there's a mitigation strategy? System has been up for 3 months without issue up to now.
EDIT: There's also something strange going on with the disk for this orderer. There's a bunch of .ldb files in the /index directory and a MANIFEST-NNNNN (numeric). The other two are clean - .log and MANIFEST-00000 files.
hi friends, I have been trying to create a multi-node network with kafka consensus and 3 orderers. On pc1 I have created 3 zookeepers, 4 kafkas and the first orderer, but when I try to bring up an orderer on pc2 it is not recognising kafka. All PCs are in the same docker swarm. Can anyone please help? Has anyone done this?
Has joined the channel.
So to add to the post two above this in case anyone else finds themselves in this situation.
We tried the obvious stuff - clear out the files in the orderer directory and see if it would restart as if it were a new orderer joining the cluster.
That didn't work. Instead we just got the genesis block in there. It failed miserably to catch up.
Going back to kafka we saw that there was a problem with one of the kafkas going down and coming back. At about that time it also threw a failed to write exception (it wasn't out of disk). It came back but it too didn't catch up properly.
We tried untold combinations of restarting things after that to no avail.
We ended up junking the setup and reconstituting it, which is quite a lot of wasted effort given how many orgs are in there.
I'm curious if there was anything that we could have done to make this better.
As a post mortem we found this initially kicked off when vSphere moved a k8s node automagically. That at least is not going to happen again...
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=PtF9sSqBHbAz47cty) @alokkv Can you show the yaml files too?
@krabradosty
> It seems like orderer config doesn't support `AnchorPeers` field. Am I right? Why?
Orderer orgs run orderers, not peers. So it doesn't make any sense to specify peers for an orderer org.
@aatkddny If a single orderer is corrupt, you should be able to simply delete its ledger, and re-bootstrap it from the genesis block. It will pull the transaction logs from Kafka and rebuild everything. If you wanted to be especially ambitious, you could even copy the ledger from a good orderer to the bad one.
Has joined the channel.
Hi, my network has two CAs, i.e. `ca.org1.example.com` and `ca.org2.example.com`. To create `crypto-config` using `fabric-ca-client` directly, should the certs of `ca.org1.example.com`, `ca.org2.example.com` and `orderer.example.com` be created or signed by the root CA?
do both `ca.org1.example.com` and `ca.org2.example.com` act as intermediate ca?
Has joined the channel.
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=CMLuXagiMdjwLQkvQ) @jyellick Yeah. Should... It didn't work though. It bootstrapped from the genesis file and then just sat there. The orderer logs were - shall we say - inconclusive.
Wasn't aware the orderer ledgers - plural - were directly compatible. I will try that if we hit this again.
I was thinking about what might have happened last night - as I was rebuilding this thing.
It looks like we had 2 of 4 kafkas on the node that was moved (so went down and was restarted) and we run with a replication factor of 3. I'm guessing that's what caused it to fail in the manner we saw.
Which leads me to a wave-my-ignorance-about question. Is there any simple way to scale up the Kafkas and have the orderers recognize they exist? ISTR they are baked into config.tx and I'm not sure if there's a simple way to notify the orderers. It would have been much easier (for me) if they had adopted the ZooKeeper approach and expected them to be specified in the yaml.
@aatkddny
> Should... It didn't work though. It bootstrapped from the genesis file and then just sat there. The orderer logs were - shall we say - inconclusive.
This should be effectively the same as joining a new orderer, late to the game. It wouldn't work if the entire initial set of Kafka replicas were no longer available.
> Is there any simple way to scale up the kafkas and to have the orderers recognize they exist?
You may of course update the channel configuration which specifies the Kafka brokers via a config update transaction. Although so long as there is one good broker in the config, the orderer should be able to discover the others through it.
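For reference, the broker list that such a config update would modify originates from the `Kafka.Brokers` section of `configtx.yaml` when the channel is created; a sketch of the relevant fragment (broker names are illustrative):

```yaml
Orderer:
  OrdererType: kafka
  Kafka:
    # Seed brokers written into the channel config; the orderer can discover
    # the remaining brokers in the cluster through any one healthy entry.
    Brokers:
      - kafka0:9092
      - kafka1:9092
      - kafka2:9092
```

To change the list on a live channel you would fetch the current config block, edit this section, and submit a config update transaction, as described above.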
Has joined the channel.
Hi, I'm trying to secure the connection between my orderer and my Kafka-ZooKeeper cluster. Does anybody have some clear documentation on how to set up TLS?
Has joined the channel.
Hi, I am trying to set up TLS-based authentication from the orderers to the Kafka brokers. There is documentation on how to achieve it from peer to orderer, but I cannot find anything regarding TLS-based auth from orderer to Kafka broker. Anybody with a clue?
I think it supports SASLPlain - check the section in `orderer.yaml`
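For reference, the relevant knobs live in the orderer's `orderer.yaml` under the `Kafka` section; a sketch of the shape (file paths are illustrative, check the sample `orderer.yaml` shipped with your Fabric version):

```yaml
Kafka:
  TLS:
    Enabled: true
    PrivateKey:
      File: /etc/hyperledger/orderer/tls/client.key   # orderer's client key
    Certificate:
      File: /etc/hyperledger/orderer/tls/client.crt   # orderer's client cert
    RootCAs:
      File: /etc/hyperledger/orderer/tls/kafka-ca.crt # CA that signed the broker certs
  SASLPlain:
    Enabled: false
    User:
    Password:
```

The broker side then needs a matching TLS (or SASL_SSL) listener configured to trust the CA that issued the orderer's client certificate.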
query.txt
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=ntzAMxvQkFNPjKg7h) @DeepakMule Did you verify if the certificates are valid?
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=Z9DjWQ4FRPAaQEMGo) @benjamin.verhaegen yes
@DeepakMule I am also working on the same topic as @benjamin.verhaegen , TLS + TLS based auth for orderer to kafka communication, I would very much appreciate it if there is some feedback on that.
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=ncCeoaL6ejXXyyhGi) @nikolas I'm also trying to figure it out, working on it today.
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=6u2R6rwaFpxdtkLQZ) @benjamin.verhaegen I have followed the steps given in "https://lists.hyperledger.org/g/fabric/message/1241".
Hi, i'm getting following error from my kafka container:
[2019-03-01 09:11:38,538] WARN [SocketServer brokerId=0] Unexpected error from /172.29.0.15; closing connection (org.apache.kafka.common.network.Selector)
org.apache.kafka.common.network.InvalidReceiveException: Invalid receive (size = 369295616 larger than 104857600)
help would be appreciated
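As an aside, that oversized "receive" is itself diagnostic. A minimal sketch (assuming the client was speaking TLS to a plaintext Kafka listener, which fits the TLS setup work in this thread) showing how the first bytes of a TLS handshake decode to exactly that bogus length prefix:

```python
import struct

# Kafka framing: the first 4 bytes of a connection are a big-endian message length.
# A TLS ClientHello record begins 0x16 0x03 0x01 ..., so a TLS client hitting a
# plaintext listener makes the broker "see" an absurdly large message and bail out.
tls_record_start = bytes([0x16, 0x03, 0x01, 0x00])
(bogus_size,) = struct.unpack(">I", tls_record_start)
print(bogus_size)  # 369295616, the exact size in the error above
```

If the reported size is 369295616 (0x16030100), the likely mismatch is TLS-vs-plaintext on port 9092 rather than a genuinely oversized block.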
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=PiYut2oQL9bd8EMoG) @benjamin.verhaegen is it possible for you to share deployment configuration ?
docker-compose-kafka.txt
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=C4tuhkvK2obre34TN) @DeepakMule
i'm trying to get the invalid size error away
@benjamin.verhaegen ensure the block-size and the kafka message size align
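To make that concrete: Fabric's guidance is that Kafka's `message.max.bytes` and `replica.fetch.max.bytes` must be larger than the channel's `AbsoluteMaxBytes` (plus roughly 1 MB of headroom for headers). A sketch of the two sides (values illustrative, mirroring the container env vars used elsewhere in this thread):

```yaml
# configtx.yaml (channel side)
Orderer:
  BatchSize:
    MaxMessageCount: 10
    AbsoluteMaxBytes: 10 MB     # must fit inside the Kafka limits below
    PreferredMaxBytes: 512 KB

# Kafka side (e.g. container env), must exceed AbsoluteMaxBytes + ~1 MB headroom:
#   KAFKA_MESSAGE_MAX_BYTES=103809024
#   KAFKA_REPLICA_FETCH_MAX_BYTES=103809024
```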
hi I am using solo to run the first network example. Looking through the `fabric/peer/node/start.go` I can't see where a peer becomes aware of the signature of the orderer
do they query the msp permanently?
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=HhEKiyawKDFDr24YR) @benjamin.verhaegen Thank you for sharing your deployment configuration. Comparing it with mine, the difference I see is in the client (orderer) certs. Do you have a procedure to generate the client & Kafka certs?
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=78a4mEDTgys6D5tk8) @nikolas where can i check the block size?
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=2QCJ66NY8ib9KtBjz) got it
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=kd975xmppGchMN9cM) @benjamin.verhaegen Was your issue resolved?
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=kd975xmppGchMN9cM) @benjamin.verhaegen Is it possible for you to share the client certificate generation process?
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=dLMDGXrgsYyrARPqv) @DeepakMule i'm still searching for it
Has joined the channel.
Who can help me?
I have this error
TLS handshake failed with error tls: first record does not look like a TLS handshake {"server": "Orderer", "remote address": "172.18.0.23:42230"}
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=38qmRLwKYGZWompC4) @benjamin.verhaegen any update on your issue?
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=aGXZZgcYcvnnCkb4d) @DeepakMule No, I have other work now prior to the research I do, on wednesday i'll proceed with the TLS connection
I am also looking into SASL_PLAIN-based auth for orderer to Kafka communication; I would very much appreciate any feedback on that.
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=v63uKbv8y4zyyYmJw) @yousaf do you happen to remember how you managed to overcome this error?
Has joined the channel.
Hi all
Is it possible to recover blockchain data from the ledger file (`block_0000` on the orderer) if the Kafka nodes are lost?
@gravity It will automatically sync from the existing peers and from the channel ledger.
@Chandoo But a peer's world state database only has the latest data for all keys.
And as for the channel ledger, you mean the ledger file on the orderers, don't you?
I'm having some strange errors bringing up a network that was working. It's causing me some angst - as in I've been looking at problems for quite some time now. Has anyone seen this before?
```
2019-03-06 17:00:00.482 UTC [orderer/common/broadcast] Handle -> WARN 00d [channel: some-channel] Rejecting broadcast of message from 10.42.85.47:57282 with SERVICE_UNAVAILABLE: rejected by Consenter: will not enqueue, consenter for this channel hasn't started yet
```
My peers all *appear* to have started. Is there anything simple I've overlooked?
@aatkddny do you use tls?
I've seen this warning previously. The root of the problem was in the advertisement properties of the zookeeper/kafka cluster. Make sure all of your zookeeper nodes have started correctly and all the kafka nodes are running and in sync.
also, sharing the configuration properties of zookeeper nodes and kafka nodes would be useful
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=L2RAfAiXPaLEPD9J6) @gravity This is a test network. No TLS.
It was working and has been stable for 3 months - until one of the VMs it's running on was moved by vSphere. Now I can't even reconstitute the thing from scratch.
This is just the latest in a series of problems that this thing is having - mostly to do with channels - rebuilding and restarting. This is code that I've run dozens of times too.
FWIW here are samples of the config. We run 3 zookeepers and 6 kafkas. Because we've had "issues" with kafka in the past.
Sample zk
```
apiVersion: v1
kind: Service
metadata:
name: zoo1-headless
spec:
ports:
- name: follower
protocol: TCP
port: 2888
- name: leader-election
protocol: TCP
port: 3888
selector:
name: zookeeper1
clusterIP: None
---
apiVersion: v1
kind: Service
metadata:
name: zoo1
spec:
ports:
- name: client
protocol: TCP
port: 2181
selector:
name: zookeeper1
clusterIP: None
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: zookeeper1
spec:
replicas: 1
strategy:
type: Recreate
template:
metadata:
labels:
name: zookeeper1
app: zookeeper
spec:
affinity:
podAntiAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- weight: 1
podAffinityTerm:
labelSelector:
matchExpressions:
- key: "app"
operator: In
values:
- "zookeeper"
topologyKey: "kubernetes.io/hostname"
containers:
- name: zookeeper1
image: hyperledger/fabric-zookeeper:0.4.12
imagePullPolicy: IfNotPresent
env:
- name: ZOO_MY_ID
value: "1"
- name: ZOO_SERVERS
value: server.1=0.0.0.0:2888:3888:participant server.2=zoo2-headless:2888:3888:participant server.3=zoo3-headless:2888:3888:participant
- name: GODEBUG
value: netdns=go
volumeMounts:
- mountPath: /datalog
name: zoo
subPath: zookeeper/zookeeper1/datalog
- mountPath: /data
name: zoo
subPath: zookeeper/zookeeper1/data
ports:
- containerPort: 2181
- containerPort: 2888
- containerPort: 3888
restartPolicy: Always
volumes:
- name: zoo
persistentVolumeClaim:
claimName: bc-storage
---
```
Sample kafka:
```
apiVersion: v1
kind: Service
metadata:
name: kafka0
spec:
ports:
- name: "9092"
protocol: TCP
port: 9092
- name: server
protocol: TCP
port: 9093
selector:
name: kafka0
clusterIP: None
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: kafka0
spec:
replicas: 1
strategy:
type: Recreate
template:
metadata:
labels:
name: kafka0
app: kafka
spec:
affinity:
podAntiAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- weight: 1
podAffinityTerm:
labelSelector:
matchExpressions:
- key: "app"
operator: In
values:
- "kafka"
topologyKey: "kubernetes.io/hostname"
terminationGracePeriodSeconds: 60
restartPolicy: Always
dnsPolicy: ClusterFirst
schedulerName: default-scheduler
containers:
- name: kafka0
image: hyperledger/fabric-kafka:0.4.12
imagePullPolicy: IfNotPresent
env:
- name: KAFKA_BROKER_ID
value: "0"
- name: KAFKA_ZOOKEEPER_CONNECT
value: zoo1:2181 zoo2:2181 zoo3:2181
- name: KAFKA_LOG_RETENTION_HOURS
value: "-1"
- name: KAFKA_MESSAGE_MAX_BYTES
value: "103809024"
- name: KAFKA_REPLICA_FETCH_MAX_BYTES
value: "103809024"
- name: KAFKA_UNCLEAN_LEADER_ELECTION_ENABLE
value: "false"
- name: KAFKA_DEFAULT_REPLICATION_FACTOR
value: "3"
- name: KAFKA_MIN_INSYNC_REPLICAS
value: "2"
- name: KAFKA_ADVERTISED_HOST_NAME
value: kafka0
- name: KAFKA_CONTROLLED_SHUTDOWN_ENABLED
value: "true"
- name: KAFKA_LOG_DIR
value: /share
- name: KAFKA_LOG_DIRS
value: /share
- name: GODEBUG
value: netdns=go
volumeMounts:
- mountPath: /share
name: kfk
subPath: kafka/kafka0
lifecycle:
preStop:
exec:
command:
- /opt/kafka/bin/kafka-server-stop.sh
volumes:
- name: kfk
persistentVolumeClaim:
claimName: bc-storage
---
```
Hi all, I am trying to start my orderer node and it gives the following error:
```
2019-03-06 18:53:30.600 UTC [orderer.common.server] initializeMultichannelRegistrar -> INFO 003 Not bootstrapping because of existing chains
orderer_bancolombia | 2019-03-06 18:53:30.616 UTC [orderer.commmon.multichannel] newLedgerResources -> PANI 004 Error creating channelconfig bundle: initializing channelconfig failed: could not create channel Orderer sub-group config: setting up the MSP manager failed: admin 0 is invalid: could not obtain certification chain: invalid validation chain. Parent certificate should be a leaf of the certification tree
```
I already checked all certificates in the MSPs with openssl to verify that the chain from admin to intermediate to root is valid, and everything seems OK there. Does anyone have an idea what the problem might be, please? Thanks in advance!
@braduf Did you run this check against the msp directory of the orderer org for the directory you used with `configtxgen`?
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=HQhQNJynr4KgJ98cn) @jyellick Hi @jyellick, yes, verified the MSP used for the creation of the genesis block...
I assume you're trying to bootstrap the orderer? If so, could you try deleting the ledger and trying again?
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=PwZEvGdh5T4W3jTqi) @jyellick Yes, that's what we are trying to do. We are running the orderer with docker and we already removed the exited container and removed the volumes it created already... And we keep getting the same error when we try to run the container again. So I think we already removed the ledger, but I am not sure...
```2019-03-06 18:53:30.600 UTC [orderer.common.server] initializeMultichannelRegistrar -> INFO 003 Not bootstrapping because of existing chains``` would indicate that you have not
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=YrR3PXiKtxDvHMZoT) @jyellick Yeah, indeed, that's the strange thing. Where on the host machine can it store the ledger? It is only stored in the volume - /orderer/data:/var/hyperledger/production/orderer, no? We removed the /orderer directory on the host machine, removed the exited container etc...
Right, that should be the only spot, unless you overrode the location
As to whether /orderer/data is the volume... hard to say, that would be up to your setup
Typically we let docker manage the volumes
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=4Q5htLC4LNfyWYK5E) @jyellick And do you think the problem is the existing chains or this part:
```
Error creating channelconfig bundle: initializing channelconfig failed: could not create channel Orderer sub-group config: setting up the MSP manager failed: admin 0 is invalid: could not obtain certification chain: invalid validation chain. Parent certificate should be a leaf of the certification tree
```
Well, existing chains is not a problem, but, if there is an existing chain which contains a bad MSP definition, no matter how correct the profile used with `configtxgen` it will not matter, it will continue to try to start using this already persisted bad definition.
Essentially as that message indicates, the genesis block you have provided is being ignored because the system is already bootstrapped.
(We did not want to require users to reconfigure their system after bootstrap before restarting the orderer binary, so multiple attempts to bootstrap are simply ignored)
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=kbmD92syRYAHwaaQk) @jyellick Ok, yeah, that seems to be the problem, I just have to find where it is taking the existing chains from. Can this be from Kafka? Normally if I create a genesis block with a new channelID for the system channel, it should create a new topic, right?
This would not be Kafka. The genesis block is persisted locally (and embeds the information needed to connect to Kafka). Based on the error above, I don't even think you're getting as far as connecting to Kafka. That being said, you should destroy your Kafka cluster's storage in addition to your orderer ledger when attempting to re-bootstrap a network, or you will run into problems.
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=3kkPjxS4v6Btj7Kxu) @jyellick Yes, indeed, it doesn't reach Kafka yet. But good to know, I will try again removing everything and destroying the Kafka cluster's storage too, just to start clean
(To your specific question, if you chose a new orderer system channel id, then you should get a new topic, and there should be no collisions)
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=H34meGasAy5BZi42J) @jyellick We could run it, thanks a lot! But I don't know if it was only by deleting the ledger data. Something we also noticed is that one organization in the network added a certificate generated by its root CA instead of its intermediate CA to the admincerts of their MSP. Could this be something that is not allowed? Should the admincerts always be generated by the intermediate CA when the org has an intermediate CA?
Ah yes, @braduf absolutely, certs _must_ be issued only by intermediate CAs, if an intermediate CA is used.
You can imagine for instance, that you have Verisign issue you an intermediate certificate, which you then use to issue all of your peer certs. You would not want me to be able to go to Verisign and get a client certificate which is authorized to transact on your network.
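That rule can be sanity-checked locally with openssl; a self-contained sketch (all file and subject names are made up) that builds a root -> intermediate -> admin chain and verifies the admin cert through the intermediate:

```shell
cd "$(mktemp -d)"
# Root CA (self-signed)
openssl req -x509 -newkey rsa:2048 -nodes -keyout root.key -out root.pem \
  -subj "/CN=root-ca" -days 1
# Intermediate CA: CSR signed by the root, marked CA:TRUE
openssl req -newkey rsa:2048 -nodes -keyout int.key -out int.csr -subj "/CN=int-ca"
printf "basicConstraints=CA:TRUE" > ca.ext
openssl x509 -req -in int.csr -CA root.pem -CAkey root.key -CAcreateserial \
  -extfile ca.ext -out int.pem -days 1
# Admin cert: signed by the intermediate (this is what belongs in admincerts/)
openssl req -newkey rsa:2048 -nodes -keyout admin.key -out admin.csr -subj "/CN=admin"
openssl x509 -req -in admin.csr -CA int.pem -CAkey int.key -CAcreateserial \
  -out admin.pem -days 1
# Should report "admin.pem: OK" only when the intermediate is in the chain:
openssl verify -CAfile root.pem -untrusted int.pem admin.pem
```

Running the last command without `-untrusted int.pem` should fail, which is exactly the "parent certificate should be a leaf of the certification tree" situation described above.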
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=PPHjdtaTb55PJiTpR) @jyellick Great! Got it! And that was the error in the end (combined with the ledger data). Thanks a lot for the great info and help!
Has joined the channel.
Hi, anyone seen this error before? ```2019-03-07 06:02:32.321 UTC [orderer.common.broadcast] ProcessMessage -> WARN 013 [channel: mychannel] Rejecting broadcast of normal message from 1.2.3.4:42566 with SERVICE_UNAVAILABLE: rejected by Order: cannot enqueue
2019-03-07 06:04:33.194 UTC [orderer.consensus.kafka] enqueue -> ERRO 014 [channel: mychannel] cannot enqueue envelope because = read tcp 1.2.3.4:34502->1.2.3.4:9092: i/o timeout
2019-03-07 06:35:06.950 UTC [orderer.consensus.kafka] try -> DEBU 39a [channel: mychannel] Need to retry because process failed = circuit breaker is open
2019-03-07 06:35:11.956 UTC [orderer.consensus.kafka.sarama] tryRefreshMetadata -> DEBU 39d Unexpected topic-level metadata error: kafka server: Replication-factor is invalid.
2019-03-07 06:35:41.955 UTC [orderer.consensus.kafka] try -> DEBU 3ae [channel: mychannel] Need to retry because process failed = kafka server: Replication-factor is invalid.
```
@JayJong is your kafka cluster properly running? maybe try diagnosing with http://kafka.apache.org/quickstart
Has joined the channel.
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=u2xS26DuHZPczpJDQ) @bricakeld +1
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=L2RAfAiXPaLEPD9J6) @gravity Could you share with us a working TLS config on kafka - orderer please? having the same issues
Hi All,
I am working on developing an application using a Kafka/ZK based ordering service. The application works fine with the Solo ordering service, but when I switch to Kafka-based ordering I get the following error in the orderer log:
2019-03-07 06:02:21.621 UTC [orderer/common/broadcast] Handle -> WARN 15b [channel: mychannel] Rejecting broadcast of config message from 172.24.0.10:47586 because of error: Failed to reach implicit threshold of 1 sub-policies, required 1 remaining: permission denied
2019-03-07 06:02:21.621 UTC [orderer/common/server] func1 -> DEBU 15c Closing Broadcast stream
Complete code for the application is available at this link:
https://github.com/Dpkkmr/HL_Fabric_Kafka.git
NOTE: I am using generateArtifacts.sh file to generate the certificates.
Any help in resolving this issue will be really helpful.
Thanks!
Has joined the channel.
Hi all,
I've tried to start up my orderer, however, it encounters the following errors, does anyone have an idea? Thanks
```
panic: Error opening leveldb: file does not exist
goroutine 1 [running]:
github.com/hyperledger/fabric/common/ledger/util/leveldbhelper.(*DB).Open(0xc000079e80)
	/opt/gopath/src/github.com/hyperledger/fabric/common/ledger/util/leveldbhelper/leveldb_helper.go:79 +0x271
github.com/hyperledger/fabric/common/ledger/util/leveldbhelper.NewProvider(0xc0003afaa0, 0xc0003afaa0)
	/opt/gopath/src/github.com/hyperledger/fabric/common/ledger/util/leveldbhelper/leveldb_provider.go:40 +0xda
github.com/hyperledger/fabric/common/ledger/blkstorage/fsblkstorage.NewProvider(0xc0003d17a0, 0xc0003d17c0, 0x7c6765, 0xc00000e238)
	/opt/gopath/src/github.com/hyperledger/fabric/common/ledger/blkstorage/fsblkstorage/fs_blockstore_provider.go:34 +0x7f
github.com/hyperledger/fabric/common/ledger/blockledger/file.New(0xc00003b1d0, 0x23, 0x2, 0x2)
	/opt/gopath/src/github.com/hyperledger/fabric/common/ledger/blockledger/file/factory.go:71 +0xea
github.com/hyperledger/fabric/orderer/common/server.createLedgerFactory(0xc000348000, 0x1a186a8, 0xc16058, 0x1771a8f, 0x4b)
	/opt/gopath/src/github.com/hyperledger/fabric/orderer/common/server/util.go:32 +0x1dd
github.com/hyperledger/fabric/orderer/common/server.Start(0xf4dfc0, 0x5, 0xc000348000)
	/opt/gopath/src/github.com/hyperledger/fabric/orderer/common/server/main.go:96 +0x7e
github.com/hyperledger/fabric/orderer/common/server.Main()
	/opt/gopath/src/github.com/hyperledger/fabric/orderer/common/server/main.go:87 +0x1ce
main.main()
	/opt/gopath/src/github.com/hyperledger/fabric/orderer/main.go:15 +0x20
```
Hi, regarding the "leveldb: file does not exist" issue, does anyone know what the expected leveldb path is in the orderer?
okay
HI
Has joined the channel.
Sorry, where is the Kafka configuration in the byfn multi-organization example? I'd like to have 3 organizations and an orderer for each.
byfn's multi-organization configtx.yaml uses solo as the orderer; how can I change it to kafka?
I need each organization to maintain its own Kafka orderer. Where should I make the necessary changes?
@Kosalayb Simply run:
```./byfn.sh down
./byfn.sh generate -o kafka
./byfn.sh up -o kafka
```
-o ?
sorry, I don't see -o ?
https://github.com/hyperledger/fabric-samples/blob/026aa9ec01ad8e0826635230bca917ae054620d1/first-network/byfn.sh#L51
@Kosalayb ^
Orderer container exits with the following error.
```
2019-03-07 19:26:01.490 UTC [orderer.common.server] initializeServerConfig -> INFO 003 Starting orderer with TLS enabled
2019-03-07 19:26:01.494 UTC [orderer.common.server] initializeMultichannelRegistrar -> INFO 004 Not bootstrapping because of existing chains
2019-03-07 19:26:01.510 UTC [orderer.commmon.multichannel] newLedgerResources -> PANI 005 Error creating channelconfig bundle: initializing configtx manager failed: error converting config to map: Illegal characters in key: [Group]
panic: Error creating channelconfig bundle: initializing configtx manager failed: error converting config to map: Illegal characters in key: [Group]
goroutine 1 [running]:
github.com/hyperledger/fabric/vendor/go.uber.org/zap/zapcore.(*CheckedEntry).Write(0xc0000cdad0, 0x0, 0x0, 0x0)
	/opt/gopath/src/github.com/hyperledger/fabric/vendor/go.uber.org/zap/zapcore/entry.go:229 +0x515
github.com/hyperledger/fabric/vendor/go.uber.org/zap.(*SugaredLogger).log(0xc00000e368, 0x1056c04, 0xf6f983, 0x27, 0xc000293808, 0x1, 0x1, 0x0, 0x0, 0x0)
	/opt/gopath/src/github.com/hyperledger/fabric/vendor/go.uber.org/zap/sugar.go:234 +0xf6
github.com/hyperledger/fabric/vendor/go.uber.org/zap.(*SugaredLogger).Panicf(0xc00000e368, 0xf6f983, 0x27, 0xc000293808, 0x1, 0x1)
	/opt/gopath/src/github.com/hyperledger/fabric/vendor/go.uber.org/zap/sugar.go:159 +0x79
github.com/hyperledger/fabric/common/flogging.(*FabricLogger).Panicf(0xc00000e370, 0xf6f983, 0x27, 0xc000293808, 0x1, 0x1)
	/opt/gopath/src/github.com/hyperledger/fabric/common/flogging/zap.go:74 +0x60
github.com/hyperledger/fabric/orderer/common/multichannel.(*Registrar).newLedgerResources(0xc00017a5a0, 0xc0002bd4f0, 0xc0002bd4f0)
	/opt/gopath/src/github.com/hyperledger/fabric/orderer/common/multichannel/registrar.go:260 +0x2d7
github.com/hyperledger/fabric/orderer/common/multichannel.(*Registrar).Initialize(0xc00017a5a0, 0xc0002979b0)
	/opt/gopath/src/github.com/hyperledger/fabric/orderer/common/multichannel/registrar.go:147 +0x242
github.com/hyperledger/fabric/orderer/common/server.initializeMultichannelRegistrar(0xc000223300, 0xc0002d4330, 0x0, 0xc00017a510, 0x19b4420, 0xc0002d44f0, 0x2, 0x2, 0xc0002d4500, 0x2, ...)
	/opt/gopath/src/github.com/hyperledger/fabric/orderer/common/server/main.go:414 +0x2ec
github.com/hyperledger/fabric/orderer/common/server.Start(0xf4dfc0, 0x5, 0xc00014cc00)
	/opt/gopath/src/github.com/hyperledger/fabric/orderer/common/server/main.go:142 +0x52d
github.com/hyperledger/fabric/orderer/common/server.Main()
	/opt/gopath/src/github.com/hyperledger/fabric/orderer/common/server/main.go:87 +0x1ce
main.main()
	/opt/gopath/src/github.com/hyperledger/fabric/orderer/main.go:15 +0x20
```
I had this same issue when I ran the first-network demo, and @jyellick told me to run `byfn.sh down` to remove the volume from the previous container with the same name. But this is a custom network to be created; how can I remove the volume now? I have started the network once successfully.
This was the reply by @jyellick
*Yes, the containers are brought up with persistent docker volumes., So, when you bootstrapped your orderer with the genesis block with the bad name, it got written into the volume. No matter that you stopped or even destroyed the container, the volume was still there, and the docker compose was binding it back in.*
removed all the volumes and destroyed all containers, still no luck
@mfaisaltariq It sounds like perhaps you have given one of your orgs an odd name? Maybe something with punctuation or accents?
okay let me double check. @jyellick thank you.
can I use "underscore" ?
for orgnames ?
No
okay my bad. Thank you @jyellick
```
configAllowedChars = "[a-zA-Z0-9.-]+"
maxLength = 249
```
thank you @jyellick it solved the problem.
For future reference: the channel ID cannot contain uppercase letters.
`AllowedChars: "[a-z0-9.-]+"`
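A quick pre-flight check along those lines. The regex and length limit are copied from the messages above; the script itself is only an illustrative sketch, and the channel name is a placeholder:

```shell
# Illustrative pre-check: channel IDs must match [a-z0-9.-]+ (lowercase only)
# and stay within the 249-character limit mentioned above.
CHANNEL_ID="mychannel"
if printf '%s' "$CHANNEL_ID" | grep -Eq '^[a-z0-9.-]+$' && [ "${#CHANNEL_ID}" -le 249 ]; then
  echo "ok: $CHANNEL_ID"
else
  echo "invalid channel ID: $CHANNEL_ID" >&2
fi
```

Running this before `configtxgen`/`peer channel create` avoids bootstrapping a genesis block with a bad name in the first place.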
I had to redo everything after a bit of searching online for the solution.
@guoger Hi, are the slides from the Raft playback on Jan 8 available? I see the video of the playback, but can't find the slide deck anywhere.
guoger slides
@kostas ^
@rangak https://docs.google.com/presentation/d/1H_aajW2mDsKa8Q-mayvEI-p2brl4O0jRsfcC6O8t2rg/edit?usp=sharing
Hey, Is there anyone who have done IBM Blockchain platform on IBM cloud private?
Has joined the channel.
@mamtabhardwaj12 yes, we have teams working it. PM me if you need contact
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=3WXsawyzd2poCcMTD) @mamtabhardwaj12 Yes, Cateina Technologies is working on it. Please contact them; they will support you.
Hello Team,
May I know the steps to migrate a solo-based blockchain setup to a Kafka-based setup (solo-to-Kafka migration)?
#fabric-orderer #fabric-questions
```
Evaluation Failed: Only 0 policies were satisfied, but needed 1 of [ Orderer.Readers Consortiums.Readers ]
```
why can't I create a new channel?
I've been inspecting my block the whole afternoon
and nothing is different from byfn
Hello. In the proposal of a Raft-based ordering service, I see notes about a BFT orderer. Where can I find more about the plans the Fabric community has regarding a BFT orderer? An initial proposal? A timeline?
Has joined the channel.
Hi, I'm trying to start up my orderer, but it encounters the error below. My certs were just generated by cryptogen. Does anyone have an idea what the problem is?
19-03-10 18:50:21.598 UTC [bccsp_sw] loadPrivateKey -> DEBU 0c5 Loading private key [c85e853c92e5bcecfe0bcf4d42cca479a8419027df6e50703712d2524cb38a7b] at [/var/hyperledger/orderer/msp/keystore/c85e853c92e5bcecfe0bcf4d42cca479a8419027df6e50703712d2524cb38a7b_sk]...
2019-03-10 18:50:21.598 UTC [msp.identity] newIdentity -> DEBU 0c6 Creating identity instance for cert -----BEGIN CERTIFICATE-----
MIICDDCCAbOgAwIBAgIRALI/Vjargf2ea+rNQH6OrdkwCgYIKoZIzj0EAwIwaTEL
MAkGA1UEBhMCVVMxEzARBgNVBAgTCkNhbGlmb3JuaWExFjAUBgNVBAcTDVNhbiBG
cmFuY2lzY28xFDASBgNVBAoTC2V4YW1wbGUuY29tMRcwFQYDVQQDEw5jYS5leGFt
cGxlLmNvbTAeFw0xOTAzMTEwMjMwMDBaFw0yOTAzMDgwMjMwMDBaMFgxCzAJBgNV
BAYTAlVTMRMwEQYDVQQIEwpDYWxpZm9ybmlhMRYwFAYDVQQHEw1TYW4gRnJhbmNp
c2NvMRwwGgYDVQQDExNvcmRlcmVyLmV4YW1wbGUuY29tMFkwEwYHKoZIzj0CAQYI
KoZIzj0DAQcDQgAEIZ9qj17OTbTrgiLBmu/x3vqeJ3lIjhBKTTRhivh+KBtbZ1gT
ILyfbWM5hyCIL1JVIk8WbcLVXidYxjR/GFHhEqNNMEswDgYDVR0PAQH/BAQDAgeA
MAwGA1UdEwEB/wQCMAAwKwYDVR0jBCQwIoAgHU43oz8n5Vx53eZdv4DrLc/AGD5N
kZuNTPuTUwtHnrYwCgYIKoZIzj0EAwIDRwAwRAIgYvLj/kvgK0e0pQvfBfGtwo3n
/ExgDIRdhVSKJWaDzcgCIDdL942YFbdpDsoiU8QHvLZwODUO6cwG3zuF/lW36Ngr
-----END CERTIFICATE-----
2019-03-10 18:50:21.598 UTC [msp] setupSigningIdentity -> DEBU 0c7 Signing identity expires at 2029-03-08 02:30:00 +0000 UTC
2019-03-10 18:50:21.599 UTC [orderer.common.server] initializeLocalMsp -> FATA 0c8 Failed to initialize local MSP: CA Certificate is not valid, (SN: 134024305439508509025471136214548940168): could not obtain certification chain: the supplied identity is not valid: x509: certificate has expired or is not yet valid
The certs were generated this morning
@stephenman - one possibility is that the time is different on the host where you generated the certs and the host running the container. If you are using Docker for Mac or Docker for Windows, you occasionally need to restart Docker to resolve the time difference
```
func (msp *bccspmsp) getValidityOptsForCert(cert *x509.Certificate) x509.VerifyOptions {
// First copy the opts to override the CurrentTime field
// in order to make the certificate passing the expiration test
// independently from the real local current time.
// This is a temporary workaround for FAB-3678
var tempOpts x509.VerifyOptions
tempOpts.Roots = msp.opts.Roots
tempOpts.DNSName = msp.opts.DNSName
tempOpts.Intermediates = msp.opts.Intermediates
tempOpts.KeyUsages = msp.opts.KeyUsages
tempOpts.CurrentTime = cert.NotBefore.Add(time.Hour)
return tempOpts
}
```
@stephenman above is probably relevant to your issue. `fabric/msp/mspimplvalidate.go`
Has joined the channel.
Hello,
I am frequently getting the following error:
https://stackoverflow.com/questions/53445400/hyperledger-fabric-error-14-unavailable-tcp-write-failed
Can anyone help please?
Hi, referring to the attachment in the link below, is there any better way to manage the TLS certificates for the Kafka-orderer communications? Suppose I want a trusted root CA for my organization; any advice on how I should systematically get it to sign all the certs without having to pass keys around and mount keystores outside of the Kafka and orderer containers? (That was the way they did it in the sample below, but I'm not sure it's secure enough.)
https://jira.hyperledger.org/browse/FAB-5226
Hi, I am running my orderers in Kubernetes v1.10.11 in Kafka mode. Everything works fine until I install chaincode, which produces a `failed to invoke chaincode name:"lscc"` error; see the gist file for more details. However, the chaincode container nevertheless gets built and started successfully.
https://gist.github.com/holzeis/b4682bf4ed23869cb192bfe5ba3a0187
Do you have any ideas why this error happens?
Hi, we are trying to create an application channel and we get the following error when we send the channel create tx: `Error: got unexpected status: SERVICE_UNAVAILABLE -- backing Kafka cluster has not completed booting; try again later`. We already took down our zookeeper and kafka cluster, removed all data (also from the orderer), then started everything up again and created a new system channel, which works, but we get the same error again when trying to create the application channel. And our kafka cluster is booted and running well. Does anyone have an idea what might be the problem? Thanks in advance.
Same error that @braduf here!
@braduf you should try and turn the kafka verbose logging on
https://github.com/hyperledger/fabric/blob/release-1.4/sampleconfig/orderer.yaml#L239
and then see what is logged
Thanks for the response, this is all that is logged concerning the error:
```
2019-03-12 14:56:24.318 UTC [orderer.common.broadcast] ProcessMessage -> WARN 00c [channel: col-fin-channel] Rejecting broadcast of message from 3.88.235.210:49698 with SERVICE_UNAVAILABLE: rejected by Consenter: backing Kafka cluster has not completed booting; try again later
2019-03-12 14:56:24.318 UTC [comm.grpc.server] 1 -> INFO 00d streaming call completed {"grpc.start_time": "2019-03-12T14:56:24.206Z", "grpc.service": "orderer.AtomicBroadcast", "grpc.method": "Broadcast", "grpc.peer_address": "3.88.235.210:49698", "grpc.code": "OK", "grpc.call_duration": "112.460971ms"}
2019-03-12 14:56:24.321 UTC [common.deliver] Handle -> WARN 00e Error reading from 3.88.235.210:49696: rpc error: code = Canceled desc = context canceled
2019-03-12 14:56:24.322 UTC [comm.grpc.server] 1 -> INFO 00f streaming call completed {"grpc.start_time": "2019-03-12T14:56:24.205Z", "grpc.service": "orderer.AtomicBroadcast", "grpc.method": "Deliver", "grpc.peer_address": "3.88.235.210:49696", "error": "rpc error: code = Canceled desc = context canceled", "grpc.code": "Canceled", "grpc.call_duration": "116.509472ms"}
2019-03-12 15:08:26.800 UTC [orderer.common.broadcast] ProcessMessage -> WARN 010 [channel: col-fin-channel] Rejecting broadcast of message from 3.88.235.210:49702 with SERVICE_UNAVAILABLE: rejected by Consenter: backing Kafka cluster has not completed booting; try again later
```
@yacovm Seems like a problem with the broadcast, do you know what can be the cause of this?
@mauricio is this the same message you are having too?
Yes, it's the same message @braduf @yacovm
this is not a problem with the broadcast
and it doesn't seem you activated the config param I asked about
```
services:
  orderer.org.com:
    container_name: orderer.org.com
    image: hyperledger/fabric-orderer
    environment:
      - FABRIC_LOGGING_SPEC=INFO
      - ORDERER_GENERAL_LISTENADDRESS=0.0.0.0
      - ORDERER_GENERAL_GENESISMETHOD=file
      - ORDERER_GENERAL_GENESISFILE=/var/hyperledger/orderer/orderer.genesis.block
      - ORDERER_GENERAL_LOCALMSPID=OrgMSP
      - ORDERER_GENERAL_LOCALMSPDIR=/var/hyperledger/orderer/msp
      - ORDERER_GENERAL_TLS_ENABLED=false
      - ORDERER_KAFKA_RETRY_PERIOD=3s
      - ORDERER_KAFKA_RETRY_STOP=10s
      - ORDERER_KAFKA_RETRY_SHORTINTERVAL=5s
      - ORDERER_KAFKA_RETRY_SHORTTOTAL=10m
      - ORDERER_KAFKA_RETRY_LONGINTERVAL=5m
      - ORDERER_KAFKA_RETRY_LONGTOTAL=12h
      - ORDERER_KAFKA_TOPIC_REPLICATIONFACTOR=1
      - ORDERER_KAFKA_VERBOSE=true
    working_dir: /opt/gopath/src/github.com/hyperledger/fabric
    command: orderer
    volumes:
      - ./network/network-artifacts/genesis.block.pb:/var/hyperledger/orderer/orderer.genesis.block
      - ./msp:/var/hyperledger/orderer/msp
      - /orderer/data:/var/hyperledger/production/orderer
    ports:
      - 7050:7050
```
This is my config, I've the environment variable `ORDERER_KAFKA_VERBOSE=true`
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=mvTYGR5GkkFQNMFxh) @yacovm Is it possible the orderer is not taking the environment variables specified in the docker-compose file for some reason, because we also have `ORDERER_KAFKA_VERBOSE=true`there. Where can we usually find the configuration file that the orderer image generates in the container? To check if the parameter is set there or not, I can't seem to find it in the container. Thanks in advance.
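For what it's worth, the orderer doesn't write out a rendered config file for env overrides; the `ORDERER_*` variables are read directly on top of `orderer.yaml`. One way to verify what the container actually sees is to inspect its environment, e.g. `docker exec orderer.org.com env | grep ORDERER_KAFKA` (the container name here is taken from the compose file above). The same filter, demonstrated locally as a sketch:

```shell
# The override only takes effect if it is present in the process environment;
# grep the environment the same way you would inside the container.
export ORDERER_KAFKA_VERBOSE=true
env | grep '^ORDERER_KAFKA'
```

If the variable doesn't show up in `docker exec ... env`, the compose file you edited is not the one the container was started from.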
Has joined the channel.
Has joined the channel.
I am getting a service unavailable error when restarting the server.
Here are the orderer and kafka logs:
orderer logs: https://pastebin.com/4xa7HSQN
kafka logs: https://pastebin.com/VpbbysvV
When I try to fetch channel I encounter the following:
```
2019-03-13 04:45:36.574 UTC [cli/common] readBlock -> INFO 043 Got status: &{SERVICE_UNAVAILABLE}
Error: can't read the block: &{SERVICE_UNAVAILABLE}
```
I have persisted the data of the orderer, peer, CA and couchdb.
Has joined the channel.
Regarding *Multi Orderers*
If my network has 3 orderers say (orderer0, orderer1, orderer2) and 4 Orgs say (org1, org2, org3, org4)
Should all the orgs reach out to orderer0, or should there be a proxy load balancer across all running orderers?
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=uNZrXbjwhxvvKR4Qf) @mastersingh24 Thank you so much, singh!
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=ZhvRA4tw5NabF6np6) @iramiller Thank you so much, too! Iramiller!
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=HRqafjgDHav8NrDvd) @Mahesh-Raj will reach out to orderer0 (or any specific orderer); there won't be a load balancer. All the orderers keep their data in sync with each other, and in case of one orderer's failure another orderer will take charge.
I have already built the Prometheus environment and I get metrics from my peer and orderer, but I can't find some of the metrics from https://hyperledger-fabric.readthedocs.io/en/release-1.4/metrics_reference.html, such as consensus_etcdraft_cluster_size and logging_entries_written...
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=dTucnMoHkTgdJN7FB) @knagware9 Does that mean no load balancing is possible in Fabric? Doesn't that restrict the scaling of Fabric?
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=cv5ngWrC5ni2oLQXF) @Mahesh-Raj It doesn't restrict scaling; you can scale Fabric by creating a number of channels and through other aspects of scaling, but this is just how the Kafka ordering service works. Raft/PBFT may address orderer scaling.
@knagware9 Thanks a lot. Then how do you achieve better tps? Currently with Fabric 1.4 I am only getting 150 tx/sec.
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=rZxuwxDgmxshmodJj) @Mahesh-Raj check this https://github.com/nexledger/accelerator/tree/master/innovation-sandbox
you can use high performing infra
Has joined the channel.
@knagware9 I have seen `Caliper` and have used it as well. With simple chaincodes it gives 600 tps (that's the max, again), and with slightly more complex chaincodes the tps falls to 150-200.
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=ZzvZPPfW4HhYfAELM) @Mahesh-Raj okay
@knagware9 Do you have any other recommendations? Are you able to achieve better throughput?
Has joined the channel.
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=8w5PZzf64Bd3KdbYu) @braduf Any solution for this error?
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=qg2Q4PrYDvtQ3QQ25) @mauricio It looks like after setting up the producer, the orderer is stuck at `About to post the CONNECT message...`, so it never finishes its startup process and doesn't create a consumer. But we haven't found the cause of this yet. Will tell you when we get it solved.
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=cCPjNBD9h7u9S9976) @Mahesh-Raj Adding more endorsers in the network and channels increases TPS. I achieved it by adding more endorsers and separating the committer & endorser roles.
Hi experts, can someone please explain the error?
*Rejecting broadcast of config message from 172.35.2.158:50698 because of error: error authorizing update: error validating ReadSet: readset expected key [Group] /Channel/Application at version 2, but got version 3*
I'm getting this when I'm trying to add an org to my existing channel. The block I fetched has version 2, and after updating the config.json file the version is still 2, since it's mostly adding new details. When I submit this block update I get this error. Why is it telling me it got version 3?
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=WhToaave5E64NrY9R) Separating committer and endorser! Do you have more details on that? Any sample code would be great.
Has joined the channel.
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=rEAmbGxNnN3LzvX7X) @Mahesh-Raj It's not code; it's related to how we set up our Fabric network.
Adding an org to an existing channel, whether the system orderer channel or an application channel, only needs the signatures of the majority of the channel orgs' admins; it's unnecessary to add the new org to a consortium. My question: is it possible for the admin of the new org to propose setting up a new channel?
Has joined the channel.
Hello Everyone,
I have one question: do we need to create a separate organization for orderer nodes, or can existing orgs (providing peers) be used for ordering?
I have two organizations with 2 peers in each org. I want each org to provide one orderer node as well (I don't want to create a new third org for the orderer as is done in the tutorials). Is that possible?
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=uzrQiX36aP5nSg3hx) @bilalahmed I think this can be achieved via Raft consensus, which will be released at the end of March; with Kafka I am not sure.
Has joined the channel.
@knagware9 So, currently we will have to involve a 3rd organization for orderers?
Does anyone know how to specify the capability so the orderer does not throw this error during startup and exit?
```
2019-03-18 16:56:07.713 EDT [orderer.commmon.multichannel] checkResourcesOrPanic -> PANI 008 [channel test-system-channel-name] config requires unsupported channel capabilities: Channel capability V2_0 is required but not supported: Channel capability V2_0 is required but not supported
panic: [channel test-system-channel-name] config requires unsupported channel capabilities: Channel capability V2_0 is required but not supported: Channel capability V2_0 is required but not supported
goroutine 1 [
```
@aambati as part of your configtx.yaml you should be able to set your channel versions: https://github.com/hyperledger/fabric/blob/8da563d978f0a471f93f75bff6a23415bc20cc25/sampleconfig/configtx.yaml#L100
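As a sketch of what that looks like: the key names below follow the v1.4 sample configtx.yaml linked above, but the capability versions shown are an assumption (pick the highest version your binaries actually support rather than V2_0):

```yaml
# Illustrative configtx.yaml fragment only; V1_4_2 is an assumed choice for a
# v1.4.x deployment, not copied from the reporter's config.
Capabilities:
    Channel: &ChannelCapabilities
        V1_4_2: true
    Orderer: &OrdererCapabilities
        V1_4_2: true
    Application: &ApplicationCapabilities
        V1_4_2: true
```

After changing this, the genesis block has to be regenerated and the old channel artifacts removed, since the capability is baked into the bootstrap block.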
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=PiyKv2kQfWz8rkYKS) @bilalahmed Currently we need to set up a separate orderer organization, with solo or a Kafka cluster as the option for multiple fault-tolerant orderers.
okay thanks @knagware9
Hey guys :) I am seeing the following warning in the logs of both orderers. Do you know what it means?
```
2019-03-19 09:16:43.120 UTC [orderer/consensus/kafka] processRegular -> WARN 1a0 [channel: orbchannel] This orderer is running in compatibility mode
```
@migrenaa It means you have not enabled the V1_1 capability in your channel config but you are running a v1.1+ orderer.
See https://hyperledger-fabric.readthedocs.io/en/release-1.1/upgrade_to_one_point_one.html
Hi all, when having a root and an intermediate TLS CA, what exactly should be put in this parameter when TLS is enabled: ORDERER_GENERAL_TLS_CLIENTROOTCAS = fully qualified path of the file that contains the certificate chain of the CA that issued the TLS server certificate?
The file that contains the chain, i.e. one file that contains both the intermediate CA cert and the root CA cert? Or should it be an array with two elements, one the root CA cert and the other the intermediate CA cert?
I'm having a TRANSIENT_FAILURE error when trying out the raft orderer, and I think I don't have tls configured correctly
Hey. I am load testing and I am getting this error once every few requests:
`Failed to send transaction successfully to the orderer status:FORBIDDEN `
I have this error in orderer logs:
```
Rejecting broadcast of normal message from 10.0.11.21:40194 because of error: Failed to reach implicit threshold of 1 sub-policies, required 1 remaining: permission denied
```
Do you have any idea?
@migrenaa This indicates that the signing identity is not authorized, or the signature of the identity is not correct.
If you are trying to do load testing, is it possible you are accidentally re-using some structures or mutating a pointer before it's actually been sent?
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=fJFRbR2qqc7AiA8T3) @jyellick I will check it and will write to you . Thanks for the response.
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=nrrWxfvFeZ8zHL9EL) @jyellick But actually what could it be? Client certificate (the one signing invoke transaction)?
If you turn on debug at the orderer, particularly for `msp`, `cauthdsl` and `policies`, you can get much more detailed information.
But typically, it is that the identity (certificate) of the submitter is not authorized to transact on the channel.
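With Fabric 1.4's `FABRIC_LOGGING_SPEC` that would look roughly like the sketch below (the module names come from the message above; treat the exact spec string as an assumption to check against your version):

```shell
# Raise the named modules to debug while keeping everything else at info,
# then start the orderer with this environment.
export FABRIC_LOGGING_SPEC="info:msp=debug:cauthdsl=debug:policies=debug"
echo "$FABRIC_LOGGING_SPEC"
```

In a compose file the same string would go into the `FABRIC_LOGGING_SPEC` environment entry of the orderer service.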
Hi friends,
I am using the Kafka ordering service and it's working fine. I have a stack deployment of zookeeper, kafka and the orderer, but when I use `configtxlator` on the orderer config file I get the following error. I was able to use the same tool on the orderer config file when deploying with docker-compose. Can anyone help, please? I am using Hyperledger version 1.1.
configtxlator: error: Error decoding: error decoding input: *common.Config: error in PopulateFrom for field channel_group for message *common.Config: *common.DynamicChannelGroup: error in PopulateFrom for map field groups with key Consortiums for message *common.DynamicChannelGroup: *common.DynamicConsortiumsGroup: error in PopulateFrom for map field groups with key SampleConsortium for message *common.DynamicConsortiumsGroup: *common.DynamicConsortiumGroup: error in PopulateFrom for map field groups with key Org1MSP for message *common.DynamicConsortiumGroup: *common.DynamicConsortiumOrgGroup: error in PopulateFrom for map field values with key MSP for message *common.DynamicConsortiumOrgGroup: *common.DynamicConsortiumOrgConfigValue: error in PopulateFrom for field value for message *common.DynamicConsortiumOrgConfigValue: *msp.MSPConfig: error in PopulateFrom for field config for message *msp.MSPConfig: *msp.FabricMSPConfig: unknown field "FabricNodeOUs" in msp.FabricMSPConfig
Hi @jyellick How are you sir. I am facing this issue. What could be the possible reasons for this or any recommendations about fixes? .................Error: Error getting endorser client channel: endorser client failed to connect to peer1.org1.example.com:7051: failed to create new connection: context deadline exceeded
Are there plans to add Prometheus support to the orderer? It doesn't look like it can currently be configured.
```
type Metrics struct {
	Provider string
	Statsd   Statsd
}
```
https://github.com/hyperledger/fabric/blob/0b67afda0e38fad055301ddb852f798bb984b9d7/orderer/common/localconfig/config.go#L196
@iramiller Metrics are available to prometheus from the orderer via the operations endpoint. The metrics struct in the config is if you wanted to configure a push based metric system like statsd
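A sketch of the corresponding `orderer.yaml` sections (v1.4+); the listen address and TLS setting here are example values, not defaults you must use:

```yaml
# Operations hosts the health/metrics HTTP server; Prometheus scrapes
# /metrics on this address when Metrics.Provider is "prometheus".
Operations:
  ListenAddress: 0.0.0.0:9443
  TLS:
    Enabled: false

Metrics:
  # "prometheus" exposes a pull endpoint via Operations;
  # "statsd" pushes to a statsd collector instead.
  Provider: prometheus
```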
@alokkv What version of `configtxlator`? I suspect it is an older build which does not know about some of these newer proto structures, if you update it, I expect your error will go away.
thanks @jyellick
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=zhAXPAGBRGHHtCvSv) @jyellick Hi jyellick. Like you said, I updated the version and it's working fine. Thank you
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=EQtP8jMmkgqQmG8e8) Just as a POI I have it running locally on my orderers and it seems fine. Actually it's more than fine - with grafana configured it makes for a compelling demo when trying to show someone it's real rather than vaporware.
If you didn't put it in yet add these two to the deployment and 9443 to your service. Same as the peer but swap "ORDERER" for "CORE".
```
- name: ORDERER_OPERATIONS_LISTENADDRESS
value: "0.0.0.0:9443"
- name: ORDERER_METRICS_PROVIDER
value: prometheus
```
I have some (basic) prometheus config too, but I'm assuming since you were only asking about the orderer you are all squared away there.
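A minimal Prometheus scrape job for the operations endpoint above might look like this (job name and target are examples; the operations server serves metrics at `/metrics`):

```yaml
scrape_configs:
  - job_name: fabric-orderer
    metrics_path: /metrics
    static_configs:
      - targets: ['orderer.example.com:9443']
```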
Has joined the channel.
Has joined the channel.
Hi All,
We are intending to run a fabric network with 26 organizations comprising:
- one orderer ("solo" based consensus)
- two peers per organization, each having a couchdb database. Of the two, one peer is designated
as the anchor peer, and we are setting "CORE_PEER_GOSSIP_USELEADERELECTION=true" for electing the
leader dynamically.
On the whole, we are spinning 105 containers (52 peers + 52 couchdb + one orderer).
We based our network configuration on the "BYFN" example. In addition to this,
we are setting the values of "CORE_PEER_TLS_CLIENTAUTHREQUIRED" and "ORDERER_GENERAL_TLS_CLIENTAUTHREQUIRED"
to "true" - also setting the value of "ORDERER_GENERAL_TLS_CLIENTROOTCAS" to contain the list of client CA's.
We were able to successfully spin up a network with 14 organizations, but when we tried to extend it by 12 more
organizations we faced multiple issues - by extend, I mean we are trying to create the peers for
all 26 organizations in one go. We are running the network on a single machine.
We were able to create the channel and join the peers to the channel. But when doing an "Anchor" peer update,
the network stops when updating the "Anchor" peer for 19th organization with the following error: "Context deadline exceeded."
We added a sleep of 100 seconds between each update and set the following environment variables.
- CORE_PEER_GOSSIP_ALIVETIMEINTERVAL=100s
- CORE_PEER_GOSSIP_ALIVEEXPIRATIONTIMEOUT=100s
- CORE_PEER_GOSSIP_RECONNECTINTERVAL=100s
- CORE_PEER_GOSSIP_DIALTIMEOUT=1000s
- CORE_PEER_GOSSIP_RESPONSEWAITTIME=1000s
- CORE_PEER_DISCOVERY_PERIOD=300s
- CORE_PEER_DISCOVERY_TOUCHPERIOD=30s
- CORE_LEDGER_STATE_COUCHDBCONFIG_REQUESTTIMEOUT=300s
- CORE_PEER_KEEPALIVE=300s
- CORE_PEER_KEEPALIVE_CLIENT_INTERVAL=300s
We weren't sure if they were required, but we tried them anyway.
This time, the anchor peer update was successful, despite the following errors/warnings:
2019-03-25 11:04:35.518 UTC [gossip/comm] sendToEndpoint -> WARN a8c Failed obtaining connection for peer0.organization17.com:7051, PKIid:[213 58 75 254 200 30 208 175 202 1 180 210 202 31 30 144 176 199 65 190 121 87 70 35 133 236 10 32 167 113 116 112] reason: context deadline exceeded
2019-03-25 11:07:36.109 UTC [gossip/discovery] func1 -> WARN b4c Could not connect to {peer0.organization6.com:7051 [] [] peer0.organization6.com:7051
@aatkddny -- Grafana is ok but we transitioned to telegraf/chronograf/kapacitor and have been much happier. It is probably worth a look if you haven't considered it.
We previously used the statsd reporting but recently switched over to prometheus because the statsd formats are not very usable.
As a side note, we abandoned the use of ENV configuration in our environment, preferring configuration files, due to the 'magic' that extensive use of overrides introduced. Our custom containers also delete the stock configuration files entirely to prevent accidentally loading a default config, which is wrong in almost all cases - we preferred to have the process crash rather than silently continue with a bad default. Mapping in a Kubernetes ConfigMap with the config file greatly simplified our administration and auditing.
Has joined the channel.
Hello! In Fabric terminology, I can understand the concept of Channel which is something like creating a private (sub network) between two organizations. While going through some of the docs, I see *Orderer System Channel*. So a question pops out to me is, does Orderer also have a Channel? If that is the case, what all are included in that Channel?
Thanks!
@klkumar369 There's a bit of documentation which is pending release; there is a temporary build of it here: https://logs.hyperledger.org/production/vex-yul-hyp-jenkins-3/fabric-docs-build-x86_64/1582/html/orderer/ordering_service.html#orderer-nodes-and-channel-configuration
But in general, the orderer system channel is the channel the orderers use to orchestrate creation of other channels.
Has joined the channel.
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=4X2hdPW4qwo4Skaiv) @jyellick Thank you!
Has joined the channel.
Has joined the channel.
Hi,
I'm curious about the current (not Raft) implementation (workflow) regarding the setup with multiple orderers.
Let's say one of the orderers is a malicious orderer (different codebase) from a legitimate organization (all cryptomaterial is valid).
For simplicity, let's assume the malicious orderer will form a block with fewer transactions than the rest of the orderers (reading them from Kafka using a changed algorithm).
I assume, that with a modified code base, this will be possible.
Then the block will be signed by the orderer and delivered to the organizations' leader peers. They will resend the block to the rest of the organizations' peers.
The good orderers will also do the same, broadcasting their block versions to the "same" leader peers.
I hope my understanding till here is correct.
The question is: what will happen on the leader peers, when they receive the same block number (with different content) twice.
How/when does the mechanism that will synchronize the ledger work.
Thanks,
Venko Ivanov
@ivanovv - the first block that reaches the peer, is the block to enter its ledger
if a peer gets block 100 twice, it will ignore the second block, even if it is different
as long as it has a valid signature from an orderer
if there is a malicious orderer and the orderer type is not BFT - it can create a fork
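A toy sketch of that acceptance rule (purely illustrative, not actual Fabric code; `shouldAppend` and `validSig` are made-up names, and `validSig` stands in for verifying the orderer's signature over the block):

```go
package main

import "fmt"

// block is a minimal stand-in for a delivered block (hypothetical type).
type block struct {
	number   uint64
	validSig bool // result of verifying the orderer's signature
}

// shouldAppend mimics the rule described above: a peer appends a block only
// if it carries a valid orderer signature and is the next expected block
// number. A second block arriving with an already-committed number is
// ignored, even if its contents differ - which is exactly how a non-BFT
// ordering service can fork the ledger without peers noticing.
func shouldAppend(b block, nextExpected uint64) bool {
	if !b.validSig {
		return false
	}
	return b.number == nextExpected
}

func main() {
	fmt.Println(shouldAppend(block{number: 100, validSig: true}, 100)) // accepted: next expected block
	fmt.Println(shouldAppend(block{number: 100, validSig: true}, 101)) // ignored: number 100 already committed
}
```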
@yacovm Thanks!
@yacovm and the fork will not be resolved anymore?
if you detect it then you can fix it
but in my opinion detecting is too late
because some of the money may already be double spent
so it depends on use case
Has joined the channel.
hello!
my java program is failing to commit messages with the following issue
Caused by: org.hyperledger.fabric.sdk.exception.TransactionException: Channel messagebus, send transaction failed on orderer OrdererClient-messagebus-orderer.group(grpc://10.x.x.x:7050). Reason: UNAVAILABLE: io exception
at org.hyperledger.fabric.sdk.OrdererClient.sendTransaction(OrdererClient.java:223)
seems the connection is reset.. i dont see any recent error logs in the orderer or the peer. everything looks fine
i see these warnings from a few days ago.. when we started the OSN
2019-03-26 11:45:50.079 UTC [orderer.common.broadcast] Handle -> WARN 02e Error reading from 10.0.0.4:44794: rpc error: code = Canceled desc = context canceled
any ideas what could cause this? Kafka is running fine without any issues / connectivity problems
hi @yacovm i am trying to enable mutual TLS for peers and orderer but am facing this error while trying to join channel for a peer
this is extracted from the peer container's logs:
`[core.comm] ServerHandshake -> ERRO 008 TLS handshake failed with error tls: client didn't provide a certificate {"server": "PeerServer",`
i have already specified --clientauth and --keyfile and --certfile following the instructions on https://hyperledger-fabric.readthedocs.io/en/release-1.4/enable_tls.html
is there any missing orderer config that i should have set to pass in its keys/certs as a client to the peer? would appreciate any help!
write your full command with environment variables @bricakeld
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=McW4vpSBQGQEENhFe) @yacovm `peer channel join -b /path/to/channel/block --clientauth --keyfile /path/to/peer0/org1/key --certfile /path/to/peer0/org1/crt`
from what i understand from the logs it seems the orderer did not present any certificates as a client when establishing connection with the peer?
@bricakeld you also need to specify root certificates I think
is the answer from here : https://lists.hyperledger.org/g/fabric/message/4012
really the best way to allow orderer fault tolerance?
> have a proxy for Orderer which then forwards request to available Orderer.
For client connections, yes. For block delivery to peers, this is all handled transparently.
ok that makes sense. thanks
hey, can anyone clarify how to specify orderer ports while fetching a channel when using Kafka?
Has joined the channel.
@SahithiDyavarashetti I don't understand your question. Do you mean specify the orderer's port to the `peer channel fetch` command?
Has joined the channel.
Has joined the channel.
Hi. So i noticed the raft ordering and broadcast time isn't consistent. Sometimes it takes 2s (batch timeout time) other times it takes *60seconds*. See the logs --> Why does this happen?
Screenshot 2019-04-03 at 12.34.03 PM.png
Can this time be modified in a config?
@gen_el pls refer to the section in config:
```
# Batch Size: Controls the number of messages batched into a block.
# The orderer views messages opaquely, but typically, messages may
# be considered to be Fabric transactions. The 'batch' is the group
# of messages in the 'data' field of the block. Blocks will be a few kb
# larger than the batch size, when signatures, hashes, and other metadata
# is applied.
BatchSize:

    # Max Message Count: The maximum number of messages to permit in a
    # batch. No block will contain more than this number of messages.
    MaxMessageCount: 500

    # Absolute Max Bytes: The absolute maximum number of bytes allowed for
    # the serialized messages in a batch. The maximum block size is this value
    # plus the size of the associated metadata (usually a few KB depending
    # upon the size of the signing identities). Any transaction larger than
    # this value will be rejected by ordering. If the "kafka" OrdererType is
    # selected, set 'message.max.bytes' and 'replica.fetch.max.bytes' on
    # the Kafka brokers to a value that is larger than this one.
    AbsoluteMaxBytes: 10 MB

    # Preferred Max Bytes: The preferred maximum number of bytes allowed
    # for the serialized messages in a batch. Roughly, this field may be considered
    # the best effort maximum size of a batch. A batch will fill with messages
    # until this size is reached (or the max message count, or batch timeout is
    # exceeded). If adding a new message to the batch would cause the batch to
    # exceed the preferred max bytes, then the current batch is closed and written
    # to a block, and a new batch containing the new message is created. If a
    # message larger than the preferred max bytes is received, then its batch
    # will contain only that message. Because messages may be larger than
    # preferred max bytes (up to AbsoluteMaxBytes), some batches may exceed
    # the preferred max bytes, but will always contain exactly one transaction.
    PreferredMaxBytes: 2 MB
```
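A toy model of how those size settings interact (illustrative only, not Fabric code; the batch timeout, which also cuts batches, is deliberately left out, and all names here are made up):

```go
package main

import "fmt"

// Example values mirroring the config above.
const (
	maxMessageCount   = 500
	preferredMaxBytes = 2 * 1024 * 1024 // 2 MB
)

// batch tracks the pending (not yet cut) batch.
type batch struct {
	count int
	bytes int
}

// addMessage returns how many batches get cut when a message of n bytes
// arrives: 0 (message buffered), 1 (one batch cut), or 2 (the pending batch
// is cut, then the oversize message is cut as its own single-message batch).
func (b *batch) addMessage(n int) int {
	cuts := 0
	// A message larger than PreferredMaxBytes closes the pending batch
	// and then forms a batch of its own.
	if n > preferredMaxBytes {
		if b.count > 0 {
			cuts++
		}
		b.count, b.bytes = 0, 0
		return cuts + 1
	}
	// If adding the message would overflow PreferredMaxBytes, cut the
	// pending batch first, then start a new one with this message.
	if b.bytes+n > preferredMaxBytes && b.count > 0 {
		cuts++
		b.count, b.bytes = 0, 0
	}
	b.count++
	b.bytes += n
	// Hitting MaxMessageCount also cuts a batch.
	if b.count >= maxMessageCount {
		cuts++
		b.count, b.bytes = 0, 0
	}
	return cuts
}

func main() {
	var b batch
	fmt.Println(b.addMessage(1024))            // 0: small message, just buffered
	fmt.Println(b.addMessage(3 * 1024 * 1024)) // 2: pending batch cut, then oversize message in its own batch
}
```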
Hello:
When the solution of this problem will be available: https://jira.hyperledger.org/browse/FAB-14551
@bilalahmed it should be solved already. What particular problem do you encounter?
Has joined the channel.
I went through the ordering service link 'https://hyperledger-fabric.readthedocs.io/en/release-1.4/orderer/ordering_service.html'. I still don't understand whether multiple organisations can maintain OSNs with a Kafka-based ordering service. If so, then how? I don't understand the usability of multiple OSNs run by different organisations with all these nodes connecting to a single Kafka cluster. More precisely, in a Kafka-based ordering service, how do multiple OSNs in different organisations co-ordinate with each other, and which OSN will be selected to create the block and distribute it to peers? Please help me to understand.
As I understood, using Raft based ordering service with leader-follower model, we can utilize multi organisation ordering service (purely decentralized).
> I don't understand the usability of multiple OSNs run by different organisations and all these nodes connecting to a single Kafka cluster?
If it's really desired, yes, multiple orgs would need to connect to single kafka cluster. But often there might be a pseudo-dictator in consortium to operate OSN and having peers of other orgs connect to it. It's not optimum, and that's why we'd like to make Raft the preferred consensus mechanism for now.
> More precisely in kafka based ordering service, how multiple OSNs within different organisations co-ordinate with each other, which OSN will be selected to create block and distribute to peers?
Kafka-based OSN consent on *transactions*, instead of blocks. It's guaranteed all orderers would produce identical blocks based on ordered transaction stream from Kafka.
hope this helps @biksen
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=8DePz8EYPTQKTt2PM) @guoger @guoger I'm unable to add 3rd Organisation by following the EYFN script and getting the exactly same error described here: https://jira.hyperledger.org/browse/FAB-14551
I checked the git but didn't find any commit regarding this issue.
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=MqKt74xRXD5Qa8NDZ) @guoger Thank you @guoger for the details. One more point I would like to understand, how different OSNs running in different organisations co-ordinate with each other to produce/consume messages from Kafka partition and which OSN would be selected to create block and distribute to peers? How client can determine which OSN it needs to connect? Please help me to understand in bit details please.
@jyellick yeah exactly. If I have multiple orderers, and I created a channel using orderer1, then stopped it and am trying to fetch the channel using orderer2, what ports do I need to specify for orderer2?
Has joined the channel.
Hey @jyellick, I am also working with multiple orderers using Kafka. I have the same issue about ports as @SahithiDyavarashetti mentioned. Can you help us?
hi, I am facing an error when I instantiate the chaincode; the following error is shown in the orderer logs:
"2019-04-04 07:58:52.923 UTC [core.comm] ServerHandshake -> ERRO 011 TLS handshake failed with error tls: first record does not look like a TLS handshake {"server": "Orderer", "remote address": "172.22.0.3:52378"}"
@bilalahmed could you check that you are using correctly built binary (in particular configtxgen)? `configtxgen --version` should say:
```
configtxgen:
Version: 1.4.1
Commit SHA: d5901e1
Go version: go1.11.6
OS/Arch: linux/amd64
```
@biksen all OSNs produce identical blocks, so peers can pull blocks from any of them.
```
configtxgen:
 Version: 1.4.0
 Commit SHA: d700b43
 Go version: go1.11.1
 OS/Arch: darwin/amd64
```
@guoger I get the above output
@bilalahmed then you need to checkout tag `v1.4.1`, build binary, and give it another try
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=esDJ9B7733CqYf5qa) @itg1996 You'll need to enable TLS for all components if you are trying the Raft orderer
I did checkout v1.4.1 a while ago, so do I need to check it out again? @guoger
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=riteikhBhe6DQy8iS) @bilalahmed your binary says 1.4.0
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=GRuZmY7yKBTZjunY4) @AkhilKura in orderer.yaml, there should be address:port that orderer listens on
if you start the orderer in a container with port mapping, you'll need to use the mapped port instead (ultimately, though, you connect to the addr:port specified above)
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=hQpeivAr6nqq6JRfb) @guoger okay, I will give it a try and update you. Thanks @guoger for your help.
what is the purpose of
General.Authentication.TimeWindow = 15m0s
in orderer config please?
@adamhardie - it prevents replay attacks for deliver API
Has joined the channel.
Can I set the consumer poll interval?
@yacovm thanks - how does it do this? my sdk seems to have problems receiving transaction commit responses after 15 minutes - im wondering if i have somehow triggered this
when you send a Deliver request, you put a timestamp into it. The request is signed
now, if the peer you sent the request to, is evicted from the channel after a while - it cannot use your request from the past to impersonate a client and ask for blocks
if you want a more powerful replay attack protection- then you need to use mutual TLS
then it works without any time window (for all time)
understood. thanks for the info!
Has joined the channel.
Has joined the channel.
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=gdtafsf6GQE8Krzcb) @guoger @guoger you meant v1.4.1-rc1?
How does the voting mechanism work in Raft-based consensus? Does it have any discovery service?
i read the Raft documentation, I am good now.
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=P636Hv6zBB56cxAWD) I get the following warning when I try to up my network:
```
LOCAL_VERSION=1.4.1
DOCKER_IMAGE_VERSION=1.4.1-rc1
=================== WARNING ===================
Local fabric binaries and docker images are
out of sync. This may cause problems.
===============================================
```
@guoger
@bilalahmed are you able to bring up the network though?
my orderer is set to DEBUG
but all i see in logs is
2019-04-08 10:16:52.916 UTC [comm.grpc.server] 1 -> INFO e845 streaming call completed {"grpc.start_time": "2019-04-08T10:16:52.91Z", "grpc.service": "orderer.AtomicBroadcast", "grpc.method": "Broadcast", "grpc.peer_address": "10.255.0.2:47244", "grpc.code": "OK", "grpc.call_duration": "5.277802ms"}
is
- ORDERER_GENERAL_LOGLEVEL=debug
- ORDERER_KAFKA_VERBOSE=true
enough to provide detailed debug logging ?
hi everyone, when I try to update the anchor peer of a channel, I get the following error in the orderer logs: [channel: channel0] Rejecting broadcast of config message from 172.20.0.5:44015 because of error: error applying config update to existing channel 'channel0': error authorizing update: error validating ReadSet: proposed update requires that key [Group] /Channel/Application/Org1MSP be at version 0, but it is currently at version 1
@itg1996 : The error clearly says the ReadSet key is a version ahead.
This almost definitely means you are attempting to create a channel which already exists
The anchor peer update will only work on unmodified channels, when generated by `configtxgen`, this is stated in the help output of `configtxgen`
Has joined the channel.
To update anchor peers multiple times or on channels where other changes have been made, use the normal `configtxlator` channel update flow
See https://hyperledger-fabric.readthedocs.io/en/release-1.4/config_update.html
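The flow from that tutorial looks roughly like this (channel name, orderer address, and file names are examples; the edit step in the middle depends on what you're changing, e.g. the AnchorPeers value). These commands require a running network and the channel admin's identity:

```shell
# Fetch the latest config block and extract the config as JSON
peer channel fetch config config_block.pb -o orderer.example.com:7050 -c mychannel
configtxlator proto_decode --input config_block.pb --type common.Block \
  | jq .data.data[0].payload.data.config > config.json

# ... copy config.json to modified_config.json and make your edits ...

# Encode both versions and compute the delta
configtxlator proto_encode --input config.json --type common.Config --output config.pb
configtxlator proto_encode --input modified_config.json --type common.Config --output modified_config.pb
configtxlator compute_update --channel_id mychannel \
  --original config.pb --updated modified_config.pb --output config_update.pb

# Wrap the update in an envelope and submit it
configtxlator proto_decode --input config_update.pb --type common.ConfigUpdate | jq . > config_update.json
echo '{"payload":{"header":{"channel_header":{"channel_id":"mychannel","type":2}},"data":{"config_update":'$(cat config_update.json)'}}}' | jq . > config_update_in_envelope.json
configtxlator proto_encode --input config_update_in_envelope.json --type common.Envelope --output config_update_in_envelope.pb
peer channel update -f config_update_in_envelope.pb -c mychannel -o orderer.example.com:7050
```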
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=wNgrQ35k47LHpGnKo) @jyellick link doesn't load
@brockhager Link loads fine for me, are you able to access anything else on readthedocs?
link loads fine for me too
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=fm9nePgvMfPo372Pi) @jyellick Thanks a lot
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=8wFqQuTJ8jfpw7Q4a) @guoger no @guoger, I'm getting exactly the same error which I was getting with 1.4.0, mentioned here: https://jira.hyperledger.org/browse/FAB-14551
So I'm not sure whether the fix for this issue has been merged or not.
@bilalahmed eyfn works fine for me. pls check if binary being used by script is correct
@guoger for the command `./configtxgen --version`, I'm getting the following output:
```
configtxgen:
 Version: 1.4.1
 Commit SHA: 29433f0
 Go version: go1.11.5
 OS/Arch: darwin/amd64
```
I'm getting the exactly same issue which is described here: https://jira.hyperledger.org/browse/FAB-14551
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=aEZ6PfwQ4xKBFTS22) @guoger @guoger Did you modify it or use exactly the same as described here: https://hyperledger-fabric.readthedocs.io/en/latest/channel_update_tutorial.html ? Can you please share your files?
no i didn't modify anything, just ran those two scripts
is it possible that your script is loading binary from somewhere else? (possibly a stale version)
how about `configtxlator`?
scripts/step1Telco3.sh: line 60: signConfigtxAsPeerTelco: command not found
```
./configtxlator version
configtxlator:
 Version: 1.4.1
 Commit SHA: 29433f0
 Go version: go1.11.5
 OS/Arch: darwin/amd64
```
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=wZs27zbe2rMfRccmp) @guoger Sorry, how can we verify it? I mean we do have binaries in bin directory of first-samples
sometimes people have binaries in PATH
I am facing an issue while upgrading the HL network from 1.2 to 1.3. Below are the steps that I followed:
1. First I created the HL network with 1.2.1. I used kafka zookeeper ordering service with orderer and peer ledger mounted to host using volumes.
2. Second I downloaded the HL 1.3 binaries and upgraded the network including orderers, peers and couchDB.
3. Third, while trying to enable capabilities, I am getting an error when adding capabilities for the testchainid channel. I am using the first-network byfn upgrade as reference. Below are the orderer logs:
```
UTC [orderer/common/broadcast] Handle -> WARN 5df19 [channel: testchainid] Rejecting broadcast of config message from 10.64.37.220:35842 because of error: cannot enable channel capabilities without orderer support first
2019-04-10 08:01:47.367 UTC [orderer/common/server] func1 -> DEBU 5df1a Closing Broadcast stream
2019-04-10 08:01:47.370 UTC [common/deliver] Handle -> WARN 5df1b Error reading from 10.64.37.220:35840: rpc error: code = Canceled desc = context canceled
2019-04-10 08:01:47.370 UTC [orderer/common/server] func1 -> DEBU 5df1e Closing Deliver stream
2019-04-10 08:01:47.370 UTC [grpc] infof -> DEBU 5df1c transport: loopyWriter.run returning. connection error: desc = "transport is closing"
2019-04-10 08:01:47.370 UTC [grpc] infof -> DEBU 5df1d transport: loopyWriter.run returning. connection error: desc = "transport is closing"
```
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=RnqnsmdfxKBSa6F58) @guoger For example I see the following command: cryptogen generate --config=./org3-crypto.yaml
this command executes fine, but I don't know where 'cryptogen' is being invoked from, because when I run it on the command line directly it says -bash: cryptogen: command not found
when you execute the scripts, the absolute path of binaries should be printed
yes, I saw it. Its serving frm bin directory inside fabric-samples
has anyone ever run multiple Kafka brokers on one single computer?
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=hvfFkqAdxY7fWKtMm) @spartucus yes
hi @bilalahmed, have you ever met the "panic: kafka: client has run out of available brokers to talk to (Is your cluster reachable?)" error while starting orderer?
No, I didn't face any such issues. Apparently something seems wrong with your Kafka confs. Can you please share the conf of any Kafka node? And have you had a look into the logs?
Does anyone know why fabric-ca-orderer image is discontinued from 1.4 version
Is there any reason why the Kafka client is 1.0.0?
Is the orderer compatible with Kafka 2?
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=tWfJ8MXncJTdLcPC5) @kariyappal fabric-ca and fabric-orderer are different separate images
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=Sxnwe5qJbwLsrwNcq) @guoger Finally, I'm able to run this network. one binary file wasn't being pointed correctly. Thanks for your help.
I am getting an issue while trying to enable a channel's capabilities. This channel has just one org. I fetched the config.json for the channel and found that the /Channel mod_policy is set to blank. How do I resolve this issue? Below are the logs:
```
2019-04-10 10:38:18.680 UTC [policies] GetPolicy -> ERRO 5ea3e Returning dummy reject all policy because no policy ID supplied
2019-04-10 10:38:18.680 UTC [orderer/common/broadcast] Handle -> WARN 5ea3f [channel: fatico-dedicated] Rejecting broadcast of config message from 10.64.37.220:58638 because of error: error authorizing update: error validating DeltaSet: unexpected missing policy for item [Group] /Channel
2019-04-10 10:38:18.680 UTC [orderer/common/server] func1 -> DEBU 5ea40 Closing Broadcast stream
2019-04-10 10:38:18.683 UTC [common/deliver] Handle -> WARN 5ea42 Error reading from 10.64.37.220:58636: rpc error: code = Canceled desc = context canceled
```
@javrevasandeep You _must_ enable the V1_1 orderer capabilities first, in a single config update.
This will repair the broken mod_policy you see above
Once the V1_1 orderer capabilities are enabled, you may do the rest of the capability updates at once, or separately.
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=g55sqH2SPDwpABFH5) @jyellick yes i did the same. I added the V1_1 orderer capabilities first and successfully able to update the orderer capabilities as well as orderer system channel(testchainid) capabilities. After that when i tried to enable the capability for the channel that i created, then I am getting the error *Error: got unexpected status: BAD_REQUEST -- error authorizing update: error validating DeltaSet: unexpected missing policy for item [Group] /Channel*
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=g55sqH2SPDwpABFH5) @jyellick testchainid config https://hastebin.com/noluzirude.json
other channel config https://hastebin.com/hizayeleve.json
@javrevasandeep Did you create the channel after enabling the capabilities, or before? It looks like before. Capabilities must be enabled on a per channel basis. If you enable them before channel creation, the orderer and channel level capabilities are inherited, otherwise you must add them manually.
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=g3RHgFHnC22tNLDEP) @jyellick Yes i created the channel before adding capabilities. so I need to add orderer capabilities for each channel that i have created before adding channel capabilities rite?
like jq -s '.[0] * {"channel_group":{"groups":{"Orderer":{"values":{"Capabilities": .[1]}}}}}'
Correct, once a channel is created, you must manage its configuration independently, this includes capabilities.
Hi Experts I am getting these logs at orderer ```2019-04-11 09:04:48.929 UTC [common.deliver] Handle -> WARN 022 Error reading from 172.19.0.12:44564: rpc error: code = Canceled desc = context canceled
2019-04-11 09:04:48.929 UTC [comm.grpc.server] 1 -> INFO 023 streaming call completed {"grpc.start_time": "2019-04-11T09:04:48.926Z", "grpc.service": "orderer.AtomicBroadcast", "grpc.method": "Deliver", "grpc.peer_address": "172.19.0.12:44564", "error": "rpc error: code = Canceled desc = context canceled", "grpc.code": "Canceled", "grpc.call_duration": "2.727059ms"}
```
Apart from that, nothing else; everything is working fine, like invoke.
Any suggestion what I am doing wrong here?
@jyellick
@pankajcheema It looks to me like your client is not appropriately terminating the gRPC stream before hanging up.
Has joined the channel.
Hi all, I have replaced my certs for the admin and a peer, and now while creating a channel it throws the error below. Could anyone help? Thanks!
2019-04-12 02:46:06.798 UTC [orderer.common.broadcast] ProcessMessage -> WARN 011 [channel: mychannel] Rejecting broadcast of config message from 10.0.0.196:38572 because of error: Failed to reach implicit threshold of 1 sub-policies, required 1 remaining: permission denied
2019-04-12 02:46:06.798 UTC [comm.grpc.server] 1 -> INFO 012 streaming call completed {"grpc.start_time": "2019-04-12T02:46:06.796Z", "grpc.service": "orderer.AtomicBroadcast", "grpc.method": "Broadcast", "grpc.peer_address": "10.0.0.196:38572", "grpc.code": "OK", "grpc.call_duration": "2.107804ms"}
2019-04-12 02:46:06.801 UTC [common.deliver] Handle -> WARN 013 Error reading from 10.0.0.196:38570: rpc error: code = Canceled desc = context canceled
2019-04-12 02:46:06.801 UTC [comm.grpc.server] 1 -> INFO 014 streaming call completed {"grpc.start_time": "2019-04-12T02:46:06.794Z", "grpc.service": "orderer.AtomicBroadcast", "grpc.method": "Deliver", "grpc.peer_address": "10.0.0.196:38570", "error": "rpc error: code = Canceled desc = context canceled", "grpc.code": "Canceled", "grpc.call_duration": "7.115516ms"}
Has joined the channel.
Error while executing contract.submitTransaction(): Error: "orderer" request parameter is missing and there are no orderers defined on this channel in the common connection profile. *The orderer is defined in the connection profile as specified by the common connection profile format.*
did somebody succeed in getting the first-network running with raft ordering?
@stephenman did you load proper private keys?
@benjamin.verhaegen yes, many did
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=BuZJYZ3ku8PDm7LGv) @guoger do I need to change something in the docker-compose files? Because no Fabric-CA is being booted, and I can't get it to work either
@benjamin.verhaegen i don't think byfn script actually starts CA
you don't need to modify anything to get that script to work though
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=oJKKe2y2L7QSPJkBm) @guoger i'm using the newest fabric-samples, but it keeps going back to 'solo'
completeInitialization -> INFO 005 orderer type: solo
because you need to specify raft with flag `-o etcdraft`
otherwise it defaults to solo
`byfn.sh -h` could help
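For reference, the `-o etcdraft` flag goes on the byfn.sh invocation itself; a typical sequence (a sketch, assuming the release-1.4 fabric-samples first-network scripts) looks like:

```shell
# regenerate crypto material and channel artifacts for the Raft profile,
# then bring the network up with the etcdraft orderer type
./byfn.sh generate -o etcdraft
./byfn.sh up -o etcdraft
```

If configtxgen still logs `orderer type: solo` after this, the binaries on the PATH are usually the culprit, as noted below.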
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=osCkfamuL4kktLZXc) @guoger yes, already doing that
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=Q2cWFtjrr9iJ2Qtfi) weird thing is that it says: CONSENSUS_TYPE=etcdraft
but then changes to: 2019-04-12 00:34:38.323 PDT [common.tools.configtxgen.localconfig] completeInitialization -> INFO 003 orderer type: solo
please make sure you've checked out the correct tag of the repo, and that binaries with the proper version are actually being used by the scripts (sometimes people have binaries elsewhere on their PATH)
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=L6uCE45fQgfWuTJx7) @guoger I've been using the latest binaries/repo/docker images/...
Hello team, I am running a Hyperledger Fabric 1.4.1 Kafka orderer, with the Kafka cluster on AWS and the orderer on my local machine. Sometimes I get a consumption error as in the error log below, and sometimes it logs "Marked consenter as available again". I am not sure why. Please help me to understand.
Clipboard - April 15, 2019 12:37 PM
@biksen it says i/o timeout; most likely it's a network issue
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=7m6yNK3zN3kyB6dWz) @guoger Ahhhhh!! Thank you!
Hi, I'm experiencing some issues with the HLF 1.4.1. After I updated the HLF Orderer from 1.4.0 to 1.4.1 the Orderer is not able to connect to the Kafka cluster (0.4.15).
```
[comm.grpc.server] 1 -> INFO 00c streaming call completed grpc.service=orderer.AtomicBroadcast grpc.method=Deliver grpc.request_deadline=2019-04-15T12:49:29.609Z grpc.peer_address=52.58.183.15:27655 grpc.code=OK grpc.call_duration=338.224µs
[common.deliver] deliverBlocks -> WARN 00b [channel: testchainid] Rejecting deliver request for 52.58.183.15:27655 because of consenter error
```
check the orderer logs
These are actually the orderer logs :) - the *same configuration works out of the box with an HLF 1.4.0 Orderer*.
```
[orderer.common.broadcast] ProcessMessage -> WARN 1cd [channel: channel2] Rejecting broadcast of message from 52.58.183.15:43585 with SERVICE_UNAVAILABLE: rejected by Consenter: backing Kafka cluster has not completed booting; try again later
```
ah right
odd....
It's interesting: even for channels I created with an HLF 1.4.0 orderer, it rejects the deliver request after I switched to 1.4.1
```
[common.deliver] deliverBlocks -> WARN 001 [channel: channel2] Rejecting deliver request for 52.57.67.100:33524 because of consenter error
```
Hi, where does the kafka container save its files? Where should I mount the volume for the data?
Has joined the channel.
After the update to HLF 1.4.1 Kafka rejects the SSL handshake:
```
09:12:33 kafka-network-thread-1-ListenerName(SSL)-SSL-2, closeOutboundInternal()
09:12:33 [Raw write]: length = 7
09:12:33 0000: 15 03 03 00 02 02 50 ......P
09:12:33 Using SSLEngineImpl.
09:12:33 Allow unsafe renegotiation: false
09:12:33 Allow legacy hello messages: true
09:12:33 Is initial handshake: true
09:12:33 Is secure renegotiation: false
09:12:33 Ignoring unsupported cipher suite: TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA384 for TLSv1
09:12:33 Ignoring unsupported cipher suite: TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384 for TLSv1
09:12:33 Ignoring unsupported cipher suite: TLS_RSA_WITH_AES_256_CBC_SHA256 for TLSv1
09:12:33 Ignoring unsupported cipher suite: TLS_ECDH_ECDSA_WITH_AES_256_CBC_SHA384 for TLSv1
09:12:33 Ignoring unsupported cipher suite: TLS_ECDH_RSA_WITH_AES_256_CBC_SHA384 for TLSv1
09:12:33 Ignoring unsupported cipher suite: TLS_DHE_RSA_WITH_AES_256_CBC_SHA256 for TLSv1
09:12:33 Ignoring unsupported cipher suite: TLS_DHE_DSS_WITH_AES_256_CBC_SHA256 for TLSv1
09:12:33 Ignoring unsupported cipher suite: TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA384 for TLSv1.1
09:12:33 Ignoring unsupported cipher suite: TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384 for TLSv1.1
09:12:33 Ignoring unsupported cipher suite: TLS_RSA_WITH_AES_256_CBC_SHA256 for TLSv1.1
09:12:33 Ignoring unsupported cipher suite: TLS_ECDH_ECDSA_WITH_AES_256_CBC_SHA384 for TLSv1.1
09:12:33 Ignoring unsupported cipher suite: TLS_ECDH_RSA_WITH_AES_256_CBC_SHA384 for TLSv1.1
09:12:33 Ignoring unsupported cipher suite: TLS_DHE_RSA_WITH_AES_256_CBC_SHA256 for TLSv1.1
09:12:33 Ignoring unsupported cipher suite: TLS_DHE_DSS_WITH_AES_256_CBC_SHA256 for TLSv1.1
09:12:33 kafka-network-thread-1-ListenerName(SSL)-SSL-0, fatal error: 80: problem unwrapping net record
09:12:33 javax.net.ssl.SSLException: Unrecognized SSL message, plaintext connection?
09:12:33 kafka-network-thread-1-ListenerName(SSL)-SSL-0, SEND TLSv1.2 ALERT: fatal, description = internal_error
09:12:33 kafka-network-thread-1-ListenerName(SSL)-SSL-0, WRITE: TLSv1.2 Alert, length = 2
09:12:33 kafka-network-thread-1-ListenerName(SSL)-SSL-0, called closeOutbound()
09:12:33 kafka-network-thread-1-ListenerName(SSL)-SSL-0, closeOutboundInternal()
09:12:33 kafka-network-thread-1-ListenerName(SSL)-SSL-0, called closeInbound()
09:12:33 kafka-network-thread-1-ListenerName(SSL)-SSL-0, fatal: engine already closed. Rethrowing javax.net.ssl.SSLException: Inbound closed before receiving peer's close_notify: possible truncation attack?
09:12:33 kafka-network-thread-1-ListenerName(SSL)-SSL-0, called closeOutbound()
09:12:33 kafka-network-thread-1-ListenerName(SSL)-SSL-0, closeOutboundInternal()
```
The cipher I'm using on the Kafka side:
```
Protocol : TLSv1.2
Cipher : ECDHE-ECDSA-AES256-GCM-SHA384
```
@david_dornseifer Did you upgrade only fabric, or also your Kafka cluster?
What do you see on the orderer side if you enable debug logging?
@jyellick The kafka cluster is running on 0.4.15 and has not been updated. The orderer I started with 1.4.0 and updated the docker container to HLF 1.4.1
```
13:43:50 [orderer.consensus.kafka.sarama] Open -> DEBU 869 ClientID is the default of 'sarama', you should consider setting it to something application-specific.
13:43:50 [orderer.consensus.kafka.sarama] func1 -> DEBU 86a client/metadata fetching metadata for all topics from broker kafka3:9092
13:43:50 [orderer.consensus.kafka.sarama] Open -> DEBU 86b ClientID is the default of 'sarama', you should consider setting it to something application-specific.
13:43:50 [orderer.consensus.kafka.sarama] func1 -> DEBU 86c client/metadata fetching metadata for all topics from broker kafka2:9092
13:43:50 [orderer.consensus.kafka.sarama] withRecover -> DEBU 86d Connected to broker at kafka2:9092 (unregistered)
13:43:50 [orderer.consensus.kafka.sarama] withRecover -> DEBU 86e Connected to broker at kafka3:9092 (unregistered)
13:43:50 [orderer.consensus.kafka.sarama] withRecover -> DEBU 86f Connected to broker at kafka4:9092 (unregistered)
13:43:50 [orderer.consensus.kafka.sarama] func1 -> DEBU 870 client/metadata got error from broker while fetching metadata: unexpected EOF
13:43:50 [orderer.consensus.kafka.sarama] tryRefreshMetadata -> DEBU 871 Closed connection to broker kafka2:9092
13:43:50 [orderer.consensus.kafka.sarama] Open -> DEBU 872 ClientID is the default of 'sarama', you should consider setting it to something application-specific.
13:43:50 [orderer.consensus.kafka.sarama] func1 -> DEBU 873 client/metadata fetching metadata for all topics from broker kafka1:9092
13:43:50 [orderer.consensus.kafka.sarama] func1 -> DEBU 874 client/metadata got error from broker while fetching metadata: unexpected EOF
13:43:50 [orderer.consensus.kafka.sarama] tryRefreshMetadata -> DEBU 875 Closed connection to broker kafka3:9092
13:43:50 [orderer.consensus.kafka.sarama] Open -> DEBU 876 ClientID is the default of 'sarama', you should consider setting it to something application-specific.
13:43:50 [orderer.consensus.kafka.sarama] func1 -> DEBU 877 client/metadata fetching metadata for all topics from broker kafka1:9092
13:43:50 [orderer.consensus.kafka.sarama] func1 -> DEBU 878 client/metadata got error from broker while fetching metadata: unexpected EOF
13:43:50 [orderer.consensus.kafka.sarama] tryRefreshMetadata -> DEBU 879 Closed connection to broker kafka4:9092
13:43:50 [orderer.consensus.kafka.sarama] Open -> DEBU 87a ClientID is the default of 'sarama', you should consider setting it to something application-specific.
13:43:50 [orderer.consensus.kafka.sarama] func1 -> DEBU 87b client/metadata fetching metadata for all topics from broker kafka1:9092
13:43:50 [orderer.consensus.kafka.sarama] withRecover -> DEBU 87c Connected to broker at kafka1:9092 (unregistered)
13:43:50 [orderer.consensus.kafka.sarama] withRecover -> DEBU 87d Connected to broker at kafka1:9092 (unregistered)
13:43:50 [orderer.consensus.kafka.sarama] withRecover -> DEBU 87e Connected to broker at kafka1:9092 (unregistered)
13:43:50 [orderer.consensus.kafka.sarama] func1 -> DEBU 87f client/metadata got error from broker while fetching metadata: unexpected EOF
13:43:50 [orderer.consensus.kafka.sarama] tryRefreshMetadata -> DEBU 880 Closed connection to broker kafka1:9092
13:43:50 [orderer.consensus.kafka.sarama] func1 -> DEBU 881 client/metadata no available broker to send metadata request to
13:43:50 [orderer.consensus.kafka.sarama] tryRefreshMetadata -> DEBU 882 client/brokers resurrecting 4 dead seed brokers
```
It just keeps trying to read the metadata
Yes, it would seem that it's unable to complete the SSL handshake
The odd part is that there was no upgrade of the Kafka lib that Fabric uses (Sarama) in between v1.4.0 and v1.4.1
I don't know of any other library changes which would affect the TLS negotiation either.
it could be that it just tries to connect over plaintext... @david_dornseifer you can try and do `tcpdump` and see if there is a TLS client hello or not
Has joined the channel.
Has joined the channel.
when i run `peer channel create` i get ...
```
[channelCmd] InitCmdFactory -> INFO 003 Endorser and orderer connections initialized
Error: got unexpected status: SERVICE_UNAVAILABLE -- backing Kafka cluster has not completed booting; try again later
```
but my Kafka seems fine: I have 4 Kafka brokers, 3 ZooKeeper nodes, 3 orderers and 3 peers
@JuanSuero did you wait some period of time and retry? Did you just start up your network?
Hi Jason,
We are noticing the errors below in the orderer logs:
2019-04-17 11:51:17.755 UTC [orderer/consensus/kafka] try -> DEBU 1baea [channel: ctschannel] Need to retry because process failed = kafka server: The requested offset is outside the range of offsets maintained by the server for the given topic/partition.
2019-04-17 11:51:17.755 UTC [orderer/consensus/kafka] startThread -> CRIT 1baeb [channel: ctschannel] Cannot set up channel consumer = kafka server: The requested offset is outside the range of offsets maintained by the server for the given topic/partition.
panic: [channel: ctschannel] Cannot set up channel consumer = kafka server: The requested offset is outside the range of offsets maintained by the server for the given topic/partition.
goroutine 66 [running]:
github.com/hyperledger/fabric/vendor/github.com/op/go-logging.(*Logger).Panicf(0xc42023ba70, 0xd1d562, 0x31, 0xc4215fbaa0, 0x2, 0x2)
/opt/gopath/src/github.com/hyperledger/fabric/vendor/github.com/op/go-logging/logger.go:194 +0x134
github.com/hyperledger/fabric/orderer/consensus/kafka.startThread(0xc420181600)
/opt/gopath/src/github.com/hyperledger/fabric/orderer/consensus/kafka/chain.go:261 +0xb33
created by github.com/hyperledger/fabric/orderer/consensus/kafka.(*chainImpl).Start
/opt/gopath/src/github.com/hyperledger/fabric/orderer/consensus/kafka/chain.go:126 +0x3f
We are running on fabric v1.1 and have noticed this issue in the past as well. Tried setting the flag to KAFKA_UNCLEAN_LEADER_ELECTION_ENABLE=false but still see the issue.
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=yoZPxTJgaFCtYWz5Q) @jyellick I started the zookeepers first, then the brokers, then the orderers, then the peers. Then I waited 2 days and `peer channel create` still fails with that error. How long do I have to wait between starting the different services?
@magar36 Are you certain you have disabled Kafka log pruning, particularly by setting the max retention bytes and ms to -1?
@JuanSuero Typically, a kafka ordering system will come up in under a minute. It sounds to me like your Kafka cluster is misconfigured. I would check your Kafka logs to determine what is wrong. If Kafka seems healthy, try using the Kafka sample consumer from one of your orderer nodes to confirm the problem.
Hi Jason,
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=r24s8KRy6Db92FCpr) @jyellick We have not set any properties other than the ones below, so it should take the defaults for the others. Is that right?
# Batch Timeout: The amount of time to wait before creating a batch
BatchTimeout: 2s
# Batch Size: Controls the number of messages batched into a block
BatchSize:
# Max Message Count: The maximum number of messages to permit in a batch
MaxMessageCount: 10
# Absolute Max Bytes: The absolute maximum number of bytes allowed for
# the serialized messages in a batch.
AbsoluteMaxBytes: 99 MB
# Preferred Max Bytes: The preferred maximum number of bytes allowed for
# the serialized messages in a batch. A message larger than the preferred
# max bytes will result in a batch larger than preferred max bytes.
PreferredMaxBytes: 512 KB
@magar36 Those are fabric properties, these would be properties of your Kafka configuration
Notably `log.retention.bytes` and `log.retention.ms`
See https://kafka.apache.org/documentation/
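In a docker-compose based deployment those broker settings are usually passed as environment variables; a sketch (assuming an image that maps `KAFKA_*` env vars onto the corresponding `server.properties` keys, as the Fabric Kafka image does):

```yaml
kafka:
  environment:
    # disable both time- and size-based pruning of the Kafka log,
    # since Fabric cannot tolerate segments expiring
    - KAFKA_LOG_RETENTION_MS=-1
    - KAFKA_LOG_RETENTION_BYTES=-1
    # never elect a leader from outside the in-sync replica set
    - KAFKA_UNCLEAN_LEADER_ELECTION_ENABLE=false
```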
This is what we have in dc-kafka-base.yaml:
# log.retention.ms
# Until the ordering service in Fabric adds support for pruning of the
# unclean.leader.election.enable
# Data consistency is key in a blockchain environment. We cannot have a
# leader chosen outside of the in-sync replica set, or we run the risk of
# overwriting the offsets that the previous leader produced, and --as a
# result-- rewriting the blockchain that the orderers produce.
- KAFKA_UNCLEAN_LEADER_ELECTION_ENABLE=false
#
# log.retention.ms
# Until the ordering service in Fabric adds support for pruning of the
# Kafka logs, time-based retention should be disabled so as to prevent
# segments from expiring. (Size-based retention -- see
# log.retention.bytes -- is disabled by default so there is no need to set
# it explicitly.)
# - KAFKA_LOG_RETENTION_MS=-1
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=QPu5B4gTfBEuvzEXe) @jyellick where can i find the "Kafka sample consumer"
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=4DvGkqKYmtF9STd7t) @jyellick the orderer container doesnt have apt-get or yum so cant install java easy
It is part of the standard Kafka distribution, see https://kafka.apache.org/quickstart
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=4DvGkqKYmtF9STd7t) @jyellick ok i created a bastion container and installed java and gradle there
FAILURE: Build failed with an exception.
* Where:
Build file '/root/kafka-2.2.0-src/build.gradle' line: 57
* What went wrong:
A problem occurred evaluating root project 'kafka-2.2.0-src'.
> Failed to apply plugin [id 'org.owasp.dependencycheck']
> Could not create task of type 'Analyze'.
./bin/kafka-topics.sh --create --zookeeper zookeeper1-example-com:2181 --replication-factor 1 --partitions 1 --topic my-topicj
Classpath is empty. Please build the project first e.g. by running './gradlew jar -PscalaVersion=2.12.8'
> the orderer container doesnt have apt-get or yum so cant install java easy
@JuanSuero: You do not (and in fact should not) launch this process from inside the orderer container. Instead, launch the Kafka producer/consumer on your localhost, and target the Kafka container that is spun up when you launch the Fabric ordering service.
Just follow the Kafka Quickstart guide, and instead of targeting the Kafka broker they suggest, target any of the Kafka brokers that you brought up when launching the ordering service.
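Concretely, from a host that can reach the brokers, something like the following checks that a consumer can attach (broker/ZooKeeper addresses and the channel/topic name are placeholders; the scripts ship with the standard Kafka distribution):

```shell
# list the topics the ordering service created (one per channel)
bin/kafka-topics.sh --zookeeper zookeeper0:2181 --list

# try to consume the topic backing a channel, e.g. the system channel
bin/kafka-console-consumer.sh --bootstrap-server kafka0:9092 \
    --topic testchainid --from-beginning
```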
@yacovm I used the following tcp dump command to capture the TLS handshakes:
```
tcpdump -i any -vvvnnSeXX "src host
```
I can see the kafka interbroker communication but not the orderer - so yes it seems that the orderer tries to establish a plaintext communication. As I said, the config (TLS ...) is working fine with the HLF 1.4.0 broker.
@david_dornseifer The config parsing was changed slightly in v1.4.1 to address a bug, it's possible it introduced a regression. How are you setting your TLS properties?
@jyellick thanks for your great help. I solved it by adding KAFKA_ADVERTISED_HOST_NAME. But now, after installing and joining the channel and installing chaincode, I get access denied on instantiation: peer chaincode instantiate -o orderer0-example-com:7050 -C twoorgschannel -n mycc github.com/chaincode -v v0 -c '{"Args": ["a", "100"]}'
2019-04-18 17:33:52.238 UTC [main] InitCmd -> WARN 001 CORE_LOGGING_LEVEL is no longer supported, please use the FABRIC_LOGGING_SPEC environment variable
2019-04-18 17:33:52.246 UTC [main] SetOrdererEnv -> WARN 002 CORE_LOGGING_LEVEL is no longer supported, please use the FABRIC_LOGGING_SPEC environment variable
2019-04-18 17:33:52.255 UTC [chaincodeCmd] checkChaincodeCmdParams -> INFO 003 Using default escc
2019-04-18 17:33:52.255 UTC [chaincodeCmd] checkChaincodeCmdParams -> INFO 004 Using default vscc
Error: error endorsing chaincode: rpc error: code = Unknown desc = access denied: channel [twoorgschannel] creator org [Org1MSP]
I'm running Hyperledger Fabric 1.4 with a Kafka orderer from open source (not IBP) and the peer logs have lots of ``` 2019-04-18 21:05:01.738 UTC [ConnProducer] DisableEndpoint -> WARN 09f Only 1 endpoint remained, will not black-list it
2019-04-18 21:05:01.743 UTC [blocksProvider] DeliverBlocks -> ERRO 0a0 [twoorgschannel] Got error &{FORBIDDEN} ```
hi experts, would you please tell me how to obtain an org's root certificate in application chaincodes? I wonder if it is possible, because system chaincodes need the root certificate to verify the correctness of certificates
Thanks
@JuanSuero That would imply that your channel does not contain the organization for your peer, or that the org's crypto is wrong.
@qsmen It is not generally possible to obtain the root cert. However, you may obtain the 'creator' cert, which will have a reference to its signer.
Has joined the channel.
Hi, I am using fabric 1.4.1 with raft
I am unable to get the orderers running - I followed the examples with 5 orderer nodes
# TickInterval is the time interval between two Node.Tick invocations.
# if it is commented out - panic: [channel: byfn-sys-channel] Error creating consenter: failed to parse TickInterval () to time duration
# it also fails with "ms" when generating the genesis block
# with no "ms" and a plain 500 it fails too - 2019-04-20 12:41:12.955 UTC [orderer.commmon.multichannel] newChainSupport -> PANI 455 [channel: byfn-sys-channel] Error creating consenter: failed to parse TickInterval () to time duration
# panic: [channel: byfn-sys-channel] Error creating consenter: failed to parse TickInterval () to time duration
TickInterval: 500
What should I be doing here? The orderers don't get created. Is this a bug? The same bug was in JIRA but now it is closed.
https://jira.hyperledger.org/browse/FAB-14451?workflowName=FAB:+Bug+Workflow&stepId=6
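For what it's worth, TickInterval is parsed as a Go time duration, so it needs a unit suffix; in configtx.yaml the etcdraft section would look roughly like the following sketch (values illustrative, taken from the sample configtx.yaml defaults):

```yaml
Orderer:
  OrdererType: etcdraft
  EtcdRaft:
    Options:
      # must be a duration string ("500ms"), not a bare number
      TickInterval: 500ms
      ElectionTick: 10
      HeartbeatTick: 1
```

An empty value in the panic message (`TickInterval ()`) usually means the field was not picked up at all, e.g. because an older configtxgen binary generated the genesis block.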
Has joined the channel.
Has joined the channel.
Has joined the channel.
Hey all, I am trying to get raft consensus working without cryptogen, using a CA. I have a question regarding configtx.yaml: https://github.com/hyperledger/fabric-samples/blob/release-1.4/first-network/configtx.yaml#L362
Would I have to provide the ClientTLSCert and ServerTLSCert for each orderer (register and enroll the orderer with the CA) before running the configtxgen command to generate the genesis block? It kind of confuses me.
Am I missing something? Thanks
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=7ubdJmMAbdHZFWotS) Because i am getting: cannot marshal metadata for orderer type etcdraft: cannot load client cert for consenter orderer1-org0:7050: open /etc/hyperledger/orderer/tls/server.crt: no such file or directory
nvm, managed to get it running
Anyone faced this issue in Orderer logs: Rejecting broadcast of config message from 172.28.0.1:33424 because of error: error applying config update to existing channel 'mychannel': error authorizing update: error validating ReadSet: proposed update requires that key [Group] /Channel/Application be at version 0, but it is currently at version 1
Dear team, I am following https://hyperledger-fabric.readthedocs.io/en/release-1.4/channel_update_tutorial.html to add a new organisation, Org3. I am stuck at joining the peer peer0.org3 to the channel (https://hyperledger-fabric.readthedocs.io/en/release-1.4/channel_update_tutorial.html#join-org3-to-the-channel). I am getting the error "*Authentication failed: failed classifying identity: Unable to extract msp.Identity from peer Identity: Peer Identity*". I fetched block 0 instead of the latest block. As per my understanding, all peers need to join the channel before any channel update. Please help me.
Am I missing anything?
Org1 and Org2 already running
@jyellick I'm using the following parameters to set up the orderer / Kafka TLS communication:
```
'ORDERER_KAFKA_TLS_PRIVATEKEY_FILE=/.../server.key',
'ORDERER_KAFKA_TLS_CERTIFICATE_FILE=/.../server.crt',
'ORDERER_KAFKA_TLS_ROOTCAS_FILE=/.../ca.crt',
```
Every time I start the orderer with HLF 1.4.1, I can see a handshake attempt via `tcpdump` on the Kafka side (a total of 14 packets). No further handshake happens afterwards. Starting the HLF 1.4.0 orderer, I see a successful TLS handshake (10 packets) on the Kafka side.
Hello Team, any suggestion how to resolve the issue? https://jira.hyperledger.org/browse/FAB-11192
in fabric 1.4 another consensus type, raft, is provided. How do I configure and use it?
Thank you
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=8bYjz9jLFdnHgXtow) @qsmen https://hyperledger-fabric.readthedocs.io/en/latest/raft_configuration.html
@guoger ,thank you.
Has joined the channel.
Hi all!! Now with raft we need to share even more certificates with the members of the network, now including the TLS certs of each of the orderers. But as far as I know there isn't any function to query an org's CA and get the public material of one of its nodes unless you are a member of that org; is that right?
Does anyone know of a service that works as a bulletin board where the certs can be published, so anyone can go and get them easily?
@dsanchezseco you probably want to check out the work from Interop working group https://wiki.hyperledger.org/display/fabric/Fabric+Interop+Working+Group
thanks, I don't see anything related at first glance, I'll look at it in more detail
but I still feel like most of the process cannot be automated/programmed and has to be done manually
most likely you can get proposals from management chaincode there, which contain certificates
@guoger umm but for that you need to have a channel set already, which is what i'm trying to simplify
but is cool tho
what you described looks like a cold start problem - inevitably you'll need to have some orgs defined in your genesis block, which is shared out of band
IMO that's out of scope of fabric
yep, it's that. I know it's out of scope; I'm trying to improve that part to help my company mates create testnets with different topologies easily, and later improve it into something that helps you create the resources needed to connect to a production net without having to manually edit all the peer/orderer/... deployments with the org name, CA, default orderer and such
maybe it's an oversimplification, but it really speeds up the PoCs of teams that do not know anything about Fabric. It's easier to just write the chaincode than to set up a network (in my opinion the most difficult part of Fabric)
i'm just asking if someone knows about a tool to do a generic exchange of certificates (maybe a simple nginx is enough) and to use it with fabric myself, not asking to put it in fabric
hold on, hold on
@dsanchezseco you can just get the certificates from the orderer
not sure what is the issue here?
Is there any way to receive a notification, as a client app, when a new channel is created in the ordering service - no matter which flavor of ordering service you are using?
are you asking how you get all certificates in the first place?
> Is there any way to receive a notification, as a client app, when a new channel is created in the ordering service - no matter which flavor of ordering service you are using?
you need admin permissions on the orderer system channel
but yes
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=FXrknChHQ4BzTrK7Y) @yacovm ...and then you subscribe - through a peer - to events on that channel?
no, you subscribe to the orderer for blocks
i'm not sure if we have a client binary that can do that
but it's easy to modify the existing deliver client binary to support that
we have a sample somewhere
to create a channel you need the root certificates of all the orgs plus the TLS ones of each orderer, so you need to exchange them before creating the channel; that exchange is what i'm trying to automate
(there's also the admin certificate/key of the org in the org elements, but i'm doing that part with vault)
i think... @guoger do you remember where is the deliver sample?
> to create a channel you need the root certificates of all the orgs plus the tls ones of each orderer, so you need to exchange them prior to create the channel, that exchange is what i'm trying to do
yes but you have all of them in the system channel
so you can just take the config block of the system channel
and look what is inside of it
I assume that the orderer is running already
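For reference, pulling those certs out of the system channel can be sketched with the stock CLI tools. The channel name, orderer endpoint and file names below are assumptions, and the jq path follows the layout of a decoded config block:

```shell
# Fetch the latest config block of the system channel (needs orderer-admin
# credentials in the local MSP; names are placeholders).
peer channel fetch config config_block.pb -c syschannel \
  -o orderer.example.com:7050 --tls --cafile "$ORDERER_CA"

# Decode it and list the root certs of every orderer org.
configtxlator proto_decode --input config_block.pb --type common.Block --output config_block.json
jq '.data.data[0].payload.data.config.channel_group.groups.Orderer.groups[].values.MSP.value.config.root_certs' config_block.json
```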
mainly it's like sending you an email with my PGP key so you can send me an encrypted email, but not manually
fabric won't do out of band stuff for you....
i mean, it can't setup itself for you
but once you have a system channel
the certificates are the same ones
it's prior to the orderer running, that's why you need to exchange the certs manually
it would be something like a bulletin board for each org on which to make the certificates public
but why would fabric have a bulletin board ?
the orderer is the first thing that comes into the network
i mean, without it.... you can't create blocks
like https://www.amazontrust.com/repository/
but this is centralized
this is literally, listed in amazon, no?
GPG is decentralized
and you need to exchange PGP keys in the first place too no?
so what is the difference between exchanging PGP keys and certificates for fabric?
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=sWWPAwbKp8qLPcK2T) @dsanchezseco I'm not aware of such tool... i see how it can be useful for a consortium. but as yacov stated, once you have orderer in place (with orgs already joined), you can pull those info from system channel
However, I assume you wanted to share the cert *before* creating ordering service?
i believe that is what he wants
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=BZTfsihGf2gnuLJez) @yacovm mind elaborating?
we had some sample somewhere that lets you connect to the peer's deliver API
let me look
https://github.com/hyperledger/fabric/tree/release-1.3/examples/events/eventsclient
when you pass around genesis block, it contains certificates for each org
@minollo https://github.com/hyperledger/fabric/tree/release-1.4/examples/events/eventsclient
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=XGcSurtqd8HsR53Gb) @yacovm Thanks!
@dsanchezseco maybe there is something in https://github.com/hyperledger-labs
but it shouldn't be part of Fabric or any official repository I think
this is how i understand the cycle, correct me if i'm wrong
prior to running the `orderer` command you have to give the orderer the genesis block, in which, for each org defined there, there's a variable saying in which folder (msp) the root certificate for that organization lives, and the same with the TLS certs for the other orderers in Raft.
this means that you have to provide the orderer with the genesis block and the required certificates in its file system before starting the ordering service (the system channel), so there's no network yet.
this exchange is supposed to be done manually, out of band, by the orgs that are going to form the network. this makes the creation of networks tedious and error-prone; no problem if you only do it for a couple of production nets, BUT for testing purposes it sucks, mainly for teams that are business-only, not architecture.
why does it suck for testing?
just generate the certificate material and invoke `configtxgen` ;)
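For a throwaway test network that really is just two commands. The profile, channel ID and paths below are assumptions based on the fabric-samples first-network layout:

```shell
# Generate MSP/TLS material for every org listed in crypto-config.yaml.
cryptogen generate --config=./crypto-config.yaml --output=./crypto-config

# Build the ordering-service genesis block from configtx.yaml.
export FABRIC_CFG_PATH=$PWD
configtxgen -profile SampleMultiNodeEtcdRaft -channelID byfn-sys-channel \
  -outputBlock ./channel-artifacts/genesis.block
```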
i would say it sucks for production
i'm not asking for it to be officially supported, just ideas of tools already available or ideas to do it myself.
hmmm.... i don't know anything of that sort but why is that such a problem in real life? send the certificates via email or facebook or something
or whatsapp
it sucks when the solution team bypasses you and uses configtx in prod without asking
this way at least they don't use configtx
bypass you?
who is you
makes sense to have something with an API that is not manual, when you have a tool that can take something like this and create the whole network in Kubernetes for realistic dev environments :wink:
```
# Definition of the organizations to be created
# Caution, an organization or department can only contain one of the following:
# [ departments | (( orderers &| peers ) & users & wrappers & bucket )]
#
# the org names, mspId, peer names and orderer names must be unique
organizations:
  org0:
    caAdminCreds:
      user: ca-org0
      pass: ca-org0-pw
    departments:
      org0-orderers:
        caAdminCreds:
          user: ca-org0-orderers
          pass: ca-org0-orderers-pw
        mspId: org0-orderingMSP
        orderers:
          - name: orderer00
            pass: orgfadf.av,kjasbd
          - name: orderer01
            pass: aaaaaaaaaaa
        peers:
          - name: peer0-ordering
            pass: peer0_randomorddd
            isAnchor: true
          - name: random0name-ordering
            pass: something_secure
            isAnchor: false
        users:
          - name: orderer0Admin
            pass: lkjbvlsrkub
            isAdmin: true
  org1:
    caAdminCreds:
      user: ca-org1
      pass: ca-org-1password
    mspId: org1MSP
    peers:
      - name: peer01
        pass: apajklsbd
        isAnchor: true
      - name: peer00
        pass: passsssss
      - name: peer23-accounting
        pass: aasdcas
    users:
      # just one admin for now # TODO: check if it is possible to have more than one for each org
      - name: userA
        pass: passw123
      - name: userB
        pass: qwerty
        isAdmin: true
    wrappers: 1

# Define the channels to be created to relate the different peers and orderers
# put org names instead of peer/orderer names; a channel is supposed to include all the peers of an org, for easier management
channels:
  - channel_name: channeltest
    channel_peers:
      - org1
      - org0-1
    channel_creator_org: org0-1
  - channel_name: channeltwo
    channel_peers:
      - org1
      - org0-orderers
    channel_creator_org: org1

# all the orderer organizations defined in this file are used in all the channels,
# so this orderers configuration is now global;
# if another set of orderers is needed it makes more sense to have a whole new network
# choose the ordering service type
orderers_props:
  ####### RAFT ########
  # notice that raft makes the usage of tls mandatory!!!
  # so the previous tls option is ignored
  # type: etcdraft
  ######## KAFKA #########
  type: kafka
  kafka_host_number: 3
  zookeeper_host_number: 3
  kafka_zoo_suffix: def-net
```
[REDACTED]
it's trimmed but you get the idea
i'm part of the global architecture team at my company, and with this the solutions teams only have to do the chaincode and integration for their PoCs, not the network too
so instead of two months of work it's more like a week or a week and a half
maybe you can create one
and share with the community
the generator or the exchanger?
both i guess
though i don't know what you mean by either
i have the generator, but it's not mine, it's my company's. And regarding the exchanger, that was the next step if nothing existed already
you can open source it i guess
i want to, but my boss's boss doesn't
i work in a bank...
:)
@jyellick @yacovm any idea how I should continue with the Orderer / Kafka TLS issue?
I am getting this error when i issue a `peer channel update` command, anybody know what this means:
```
2019-04-23 19:36:08.662 UTC [channelCmd] InitCmdFactory -> INFO 001 Endorser and orderer connections initialized Error: got unexpected status: BAD_REQUEST -- error applying config update to existing channel 'channel-1': error authorizing update: error validating ReadSet: existing config does not contain element for [Policy] /Channel/Application/Org1MSP/Endorsement but was in the read set
```
```panic: [channel syschannel] config requires unsupported orderer capabilities: Orderer capability v1_3 is required but not supported: Orderer capability v1_3 is required but not supported```
how to enable this on orderer?
in configtx.yaml
well, I had to put orderer capabilities as false for all
Hi, i have a question related to Raft orderers: what is the minimum number of orderers i can run using Raft? and if i have 3 and one goes down, is my ordering service still functional?
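To the question above: a Raft cluster of n orderers needs a majority (quorum) to keep ordering, so the practical minimum for any fault tolerance is 3, and with 3 nodes the service survives 1 being down. A quick check of the arithmetic:

```shell
# quorum = floor(n/2) + 1; the cluster tolerates n - quorum failures.
for n in 1 3 5; do
  quorum=$(( n / 2 + 1 ))
  echo "n=$n quorum=$quorum tolerated=$(( n - quorum ))"
done
# -> n=3 gives quorum=2, tolerated=1: with 3 orderers, one can go down.
```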
Hello Team, I followed the link "https://hyperledger-fabric.readthedocs.io/en/release-1.4/channel_update_tutorial.html" to add a new organisation, Org3. But I am a bit confused about how to update the anchor peer of the newly added organisation Org3, which is missing from the document. Please help me.
Hi all, I want to add a Raft orderer of a new organization to the system channel in a running network. The documentation says the first thing is adding the TLS certs of the new node to the channel through a channel configuration update transaction, and then, as the last step, adding the endpoint of the new node through another channel configuration transaction. It is not so clear to me what "just adding the TLS certificates first" means: is this only adding the organization's MSP with the root and intermediate TLS certs to the system channel, or is this also already adding the node to the list of consenters?
I am confused because in the list of consenters, together with the TLS certs of the node itself, the endpoint of the node should also already be specified...
Is adding the endpoint in the latest step only adding the orderer to the Orderer.Addresses section?
Thanks in advance for your answers!
@braduf
> It is not so clear for me what is just adding the TLS certificates first
in the configtx.yaml there is this array of consenters, right? each has a TLS cert
adding the TLS certficates to that, is what i meant
now, as for the organization's MSP, root CA, etc. - that is an entirely different thing, and of course if you add a new org, you also need to do that
> Is adding the endpoint in the latest step only adding the orderer to the Orderer.Addresses section?
the endpoints of the new orderer is indeed the *last* step, and it's different than adding it to the consenter list
so, as an example - if you have a new orderer, you first add all its TLS certificates (server/client) and endpoint to the consenter list
make sure it onboards the cluster
and after everything - only then update the global orderer addresses
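A sketch of those steps against the system channel, using configtxlator to compute the delta. File names, the channel ID, endpoints and the exact jq edit are assumptions; the consenter certs are the base64-encoded TLS PEMs of the new node:

```shell
# 1. Fetch and decode the current system-channel config.
peer channel fetch config config_block.pb -c syschannel \
  -o orderer.example.com:7050 --tls --cafile "$ORDERER_CA"
configtxlator proto_decode --input config_block.pb --type common.Block \
  | jq '.data.data[0].payload.data.config' > config.json

# 2. Append the new node (TLS certs + endpoint) to the consenter list.
CERT=$(base64 -w0 new-orderer-tls-cert.pem)
jq --arg cert "$CERT" \
  '.channel_group.groups.Orderer.values.ConsensusType.value.metadata.consenters += [{"host":"orderer5.example.com","port":7050,"client_tls_cert":$cert,"server_tls_cert":$cert}]' \
  config.json > modified_config.json

# 3. Compute the update, then wrap/sign/submit it with `peer channel update`.
configtxlator proto_encode --input config.json --type common.Config --output config.pb
configtxlator proto_encode --input modified_config.json --type common.Config --output modified_config.pb
configtxlator compute_update --channel_id syschannel \
  --original config.pb --updated modified_config.pb --output update.pb

# Only after the new node has caught up, submit a *second* config update
# that adds its endpoint to the global Orderer.Addresses section.
```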
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=K2ppcanjhoJ7GNQ9u) @yacovm Got it, thanks alot!
In Fabric 1.4, is Raft part of the orderer, or does it run outside of the orderer process independently? How does the message subscription work?
@Chandoo first of all Raft is not an acronym
it's Raft, not RAFT
and second of all - it is embedded in the same orderer process
Raft is a library - we took https://github.com/etcd-io/etcd/tree/master/raft
and just built a blockchain on top of it
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=yMPX7ChpFddBfjKcJ) Hello Team, Can anyone please let me know whether it is at all possible or not?
@biksen it is
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=8A6YYtcTYQ3Tw5yKS) @guoger Thank you!! Could you please provide any reference or help link on how to do this? I am really struggling to understand how.
probably you can use the anchor peer update for org1/org2 as an example?
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=94LBT5X3MnJAvWmdb) @guoger But how do I get the profile where the new Org3 is added?
is this what you were looking for? https://github.com/hyperledger/fabric-samples/blob/release-1.4/first-network/org3-artifacts/configtx.yaml
Yes. My network is already running with two organisations Org1 and Org2. Now I want to add new organisation Org3 with the running network
I followed this link https://hyperledger-fabric.readthedocs.io/en/latest/channel_update_tutorial.html
but the document is missing to update anchor peer for Org3
I hope you understand my question?
yeah, but that's not different from updating anchor peers for org1/org2
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=6r5LKArCYBxZ9GKvs) @guoger Then how do I create the anchor peer update transaction file using configtxgen? I don't want to stop my running network.
quoting doc [here](https://hyperledger-fabric.readthedocs.io/en/latest/channel_update_tutorial.html#add-the-org3-crypto-material):
> The steps you’ve taken up to this point will be nearly identical no matter what kind of config update you’re trying to make. We’ve chosen to add an org with this tutorial because it’s one of the most complex channel configuration updates you can attempt.
basically you need to fetch the current config, alter it, and compute a diff (config update) with `configtxlator`
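For the anchor-peer case specifically, that fetch/alter/diff cycle might look like this (configtxlator computes the diff; org name, host, port and file names are assumptions):

```shell
# Fetch and decode the current application-channel config.
peer channel fetch config config_block.pb -c mychannel \
  -o orderer.example.com:7050 --tls --cafile "$ORDERER_CA"
configtxlator proto_decode --input config_block.pb --type common.Block \
  | jq '.data.data[0].payload.data.config' > config.json

# Add Org3's anchor peer under the Application group.
jq '.channel_group.groups.Application.groups.Org3MSP.values += {"AnchorPeers":{"mod_policy":"Admins","value":{"anchor_peers":[{"host":"peer0.org3.example.com","port":11051}]},"version":"0"}}' \
  config.json > modified_config.json

# Compute the delta between old and new config.
configtxlator proto_encode --input config.json --type common.Config --output config.pb
configtxlator proto_encode --input modified_config.json --type common.Config --output modified_config.pb
configtxlator compute_update --channel_id mychannel \
  --original config.pb --updated modified_config.pb --output org3_anchor_update.pb
# wrap in an envelope, sign as an Org3 admin, then: peer channel update -f ...
```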
im using raft with hyperledger 1.4.1 and 5 orderers. all my orderers are getting this error:
```
2019-04-25 12:54:30.375 UTC [orderer.common.server] updateTrustedRoots -> DEBU 3dd adding app root CAs for MSP [Org0MSP]
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0xacb884]
goroutine 1 [running]:
github.com/hyperledger/fabric/core/comm.(*GRPCServer).SetClientRootCAs(0xc0001bc000, 0xc0003a6360, 0x4, 0x6, 0x0, 0x0)
```
is this something to do with the path to some certificates being off? I translated the byfn 2-org raft setup to kubernetes YAML files.
Has joined the channel.
Hi Experts,
I am using Raft with fabric 1.4.1. I am re-using a client SDK app to create a channel. Channel creation fails at `client.CreateChannel`. Below is the error trail:
```
[2019-04-26 19:57:08.489] [INFO] TradeApp - <<<<<<<<<<<<<<<<< C R E A T E C H A N N E L >>>>>>>>>>>>>>>>>
Successfully extracted the config update from the configtx envelope
Successfully enrolled user 'admin' for org1
Successfully signed config as admin of org1
Successfully enrolled user 'admin' for org2
Successfully signed config as admin of org2
Successfully enrolled user 'admin' for org3
Successfully signed config as admin of org3
Successfully enrolled user 'admin' for every org and obtained channel config signatures
Successfully enrolled user 'admin' for orderer
error: [Remote.js]: Error: Failed to connect before the deadline
Channel powerchannel does not exist yet
Successfully signed config update as orderer admin
error: [Remote.js]: Error: Failed to connect before the deadline
error: [Orderer.js]: Orderer grpcs://localhost:7050 has an error Error: Failed to connect before the deadline
```
What might be happening here?
while running `peer chaincode install` from the CLI container. The DNS name convention that crypto-config.yaml creates certificates for doesn't mix well with Kubernetes/IKS internal URLs...
```
2019-04-26 16:07:53.460 UTC [grpc] createTransport -> DEBU 044 grpc: addrConn.createTransport failed to connect to {peer0-org0-idtraft-uat-com.feature-addiksraft-1.svc:7051 0
```
@yacovm, do you know what could cause this error?
` [channel syschannel] config requires unsupported orderer capabilities: Orderer capability v1_1 is required but not supported: Orderer capability v1_1 is required but not supported`
Orderer is at v1.4
in configtx.yaml the orderer capability v1_1 is set to true, as the test files on github suggest
I'm getting an "Access Denied" on Org0 when I try to instantiate the chaincode. My setup is Raft on Hyperledger 1.4.1. I first ran my network with a modified byfn install, then I transposed that to IKS templates. I previously did this successfully with a Hyperledger 1.1 Composer network by using Docker-in-Docker in IKS; that works flawlessly right now. So I tried moving byfn over in the same fashion. I make it all the way to instantiating the chaincode and then I get an Access Denied. This happens whether or not I turn off TLS. I'm stumped. A detail I don't understand is why, when I successfully join, I get the following message:
```
Channels peers has joined:
byfn-sys-channel
```
why doesn't it say "mychannel"? maybe this points to my underlying issue?
``` 2019-04-27 23:01:55.744 UTC [grpc] HandleSubConnStateChange -> DEBU 04f pickfirstBalancer: HandleSubConnStateChange: 0xc0002784a0, TRANSIENT_FAILURE
2019-04-27 23:01:55.744 UTC [grpc] HandleSubConnStateChange -> DEBU 050 pickfirstBalancer: HandleSubConnStateChange: 0xc0002784a0, CONNECTING
2019-04-27 23:01:55.744 UTC [grpc] HandleSubConnStateChange -> DEBU 051 pickfirstBalancer: HandleSubConnStateChange: 0xc0002784a0, TRANSIENT_FAILURE
2019-04-27 23:01:55.745 UTC [msp.identity] Sign -> DEBU 052 Sign: plaintext: 0ACF070A6708031A0C08E3BE93E60510...30300A000A04657363630A0476736363
2019-04-27 23:01:55.745 UTC [msp.identity] Sign -> DEBU 053 Sign: digest: C10F390C964747DE4120965BCA6AD0E08790E9B980E5A7B452EBEDBF1DEAEDCD
Error: error endorsing chaincode: rpc error: code = Unknown desc = access denied: channel [mychannel] creator org [Org0MSP]
root@infra-tools0-org0-idtraft-uat-com-694bd88764-v9bs8:/opt/gopath/src/github.com/hyperledger/fabric/peer# FABRIC_LOGGING_SPEC=DEBUG CORE_PEER_TLS_ENABLED=false peer chaincode instantiate -o orderer0.orderers-idtraft-uat.com:7050 -C mychannel -n mycc github.com/chaincode/sacc -v v1.0 -c '{"Args": ["a", "100"]}' ```
i am using the 2.0 master branch. it fails to create a channel.
```[2019-04-29 14:35:41.305] [DEBUG] Create-Channel - response ::{"status":"SERVICE_UNAVAILABLE","info":"no Raft leader"}```
in configtx.yaml, i specified `OrdererEndpoints`.
Has joined the channel.
Can anyone point me to a good tutorial on how to implement custom ACL in configtx.yaml?
Does the ACL feature support a custom signature policy with a custom OU, like:
```
MyPolicy:
    Type: Signature
    Rule: "OR('Org1MSP.SALES')"
```
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=3ucCSvotNCrsPixXY) @Vishal3152 No ... you can only use OUs to differentiate MSPs
I am trying to bring up the network with Raft as per the document ("./byfn.sh up -o etcdraft"), but it throws an error: no such option -o
@Chandoo are you sure your `fabric-samples` repo is up-to-date?
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=nDjSq9uMTfXaxztjF) @mastersingh24 Thank you. Is there any other way to restrict access to the (peer/propose) resource to a particular department in the organization, e.g. Org1MSP.Sales?
You can implement the logic in your chaincode
Has joined the channel.
Hi, I just downloaded fabric 1.4.1, and byfn breaks at channel creation:
```
Channel name : mychannel
Creating channel...
+ peer channel create -o orderer.example.com:7050 -c mychannel -f ./channel-artifacts/channel.tx --tls true --cafile /opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/ordererOrganizations/example.com/orderers/orderer.example.com/msp/tlscacerts/tlsca.example.com-cert.pem
+ res=1
+ set +x
2019-04-30 17:28:07.238 UTC [channelCmd] InitCmdFactory -> INFO 001 Endorser and orderer connections initialized
Error: got unexpected status: FORBIDDEN -- implicit policy evaluation failed - 0 sub-policies were satisfied, but this policy requires 1 of the 'Writers' sub-policies to be satisfied: permission denied
!!!!!!!!!!!!!!! Channel creation failed !!!!!!!!!!!!!!!!
```
orderer log:
```
2019-04-30 17:28:06.138 UTC [orderer.common.server] Start -> INFO 009 Beginning to serve requests
2019-04-30 17:28:07.255 UTC [cauthdsl] deduplicate -> ERRO 00a Principal deserialization failure (MSP Org1MSP is unknown) for identity 0
```
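One thing worth trying for the "MSP Org1MSP is unknown" failure above (a guess, not a confirmed diagnosis): stale artifacts from a previous byfn run can leave the orderer booting with a genesis block that doesn't match the freshly generated crypto material. A full teardown before regenerating usually clears it:

```shell
# Remove containers, volumes and previously generated material, then redo.
./byfn.sh down
docker volume prune -f
rm -rf crypto-config channel-artifacts/*.block channel-artifacts/*.tx
./byfn.sh generate
./byfn.sh up
```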
:thumbsup: Thanks @mastersingh24
@guoger I created everything anew using the documentation script. I felt the same and did a git pull; it updated some of the files. Now I am good.
I had made it work by modifying some files both before and after the repo update. not the right way to do it.
I deployed the orderer with Raft, but i still see Kafka variables set in the orderer container. is Kafka still being used?
```
Kafka.Retry.ShortInterval = 5s
Kafka.Retry.ShortTotal = 10m0s
Kafka.Retry.LongInterval = 5m0s
Kafka.Retry.LongTotal = 12h0m0s
Kafka.Retry.NetworkTimeouts.DialTimeout = 10s
Kafka.Retry.NetworkTimeouts.ReadTimeout = 10s
Kafka.Retry.NetworkTimeouts.WriteTimeout = 10s
Kafka.Retry.Metadata.RetryMax = 3
Kafka.Retry.Metadata.RetryBackoff = 250ms
Kafka.Retry.Producer.RetryMax = 3
Kafka.Retry.Producer.RetryBackoff = 100ms
Kafka.Retry.Consumer.RetryBackoff = 2s
Kafka.Verbose = true
Kafka.Version = 0.10.2.0
Kafka.TLS.Enabled = false
Kafka.TLS.PrivateKey = ""
Kafka.TLS.Certificate = ""
Kafka.TLS.RootCAs = []
Kafka.TLS.ClientAuthRequired = false
Kafka.TLS.ClientRootCAs = []
Kafka.SASLPlain.Enabled = false
Kafka.SASLPlain.User = ""
Kafka.SASLPlain.Password = ""
Kafka.Topic.ReplicationFactor = 1
```
@Chandoo that's benign, some kafka config options are still loaded but not used.
Hi all,
in chaincode dev mode, the orderer container does not get created because orderer.block is a directory.
I have my earlier blockchain running, hence I want to change only the ports so that the devmode blockchain also works.
I made a few changes, but orderer.block gets created as a directory and I can't figure out why.
The .yaml file is pasted below.
Thanks for the earlier pointer - this is with hastebin
panic: unable to bootstrap orderer. Error reading genesis block file: read /etc/hyperledger/fabric/orderer.block: is a directory
Any help would be appreciated.
```
version: '2'
services:
  orderer:
    container_name: orderer
    image: hyperledger/fabric-orderer
    environment:
      - FABRIC_LOGGING_SPEC=debug
      - ORDERER_GENERAL_LISTENADDRESS=orderer
      - ORDERER_GENERAL_GENESISMETHOD=file
      - ORDERER_GENERAL_GENESISFILE=orderer.block
      - ORDERER_GENERAL_LOCALMSPID=DEFAULT
      - ORDERER_GENERAL_LOCALMSPDIR=/etc/hyperledger/msp
      - GRPC_TRACE=all=true,
      - GRPC_VERBOSITY=debug
    working_dir: /opt/gopath/src/github.com/hyperledger/fabric
    command: orderer
    volumes:
      - ./msp:/etc/hyperledger/msp
      - ./orderer.block:/etc/hyperledger/fabric/orderer.block
    ports:
      - 7060:7050
  peer:
    container_name: peer
    image: hyperledger/fabric-peer
    environment:
      - CORE_PEER_ID=peer
      - CORE_PEER_ADDRESS=peer:7061
      - CORE_PEER_GOSSIP_EXTERNALENDPOINT=peer:7061
      - CORE_PEER_LOCALMSPID=DEFAULT
      - CORE_VM_ENDPOINT=unix:///host/var/run/docker.sock
      - FABRIC_LOGGING_SPEC=DEBUG
      - CORE_PEER_MSPCONFIGPATH=/etc/hyperledger/msp
    volumes:
      - /var/run/:/host/var/run/
      - ./msp:/etc/hyperledger/msp
    working_dir: /opt/gopath/src/github.com/hyperledger/fabric/peer
    command: peer node start --peer-chaincodedev=true
    ports:
      - 7061:7061
      - 7063:7063
    depends_on:
      - orderer
  clisample:
    container_name: clisample
    image: hyperledger/fabric-tools
    tty: true
    environment:
      - GOPATH=/opt/gopath
      - CORE_VM_ENDPOINT=unix:///host/var/run/docker.sock
      - FABRIC_LOGGING_SPEC=DEBUG
      - CORE_PEER_ID=cli
      - CORE_PEER_ADDRESS=peer:7061
      - CORE_PEER_LOCALMSPID=DEFAULT
      - CORE_PEER_MSPCONFIGPATH=/etc/hyperledger/msp
    working_dir: /opt/gopath/src/chaincodedev
    command: /bin/bash -c './script.sh'
    volumes:
      - /var/run/:/host/var/run/
      - ./msp:/etc/hyperledger/msp
      - ./../chaincode:/opt/gopath/src/chaincodedev/chaincode
      - ./:/opt/gopath/src/chaincodedev/
    depends_on:
      - orderer
      - peer
  chaincode:
    container_name: chaincode
    image: hyperledger/fabric-ccenv
    tty: true
    environment:
      - GOPATH=/opt/gopath
      - CORE_VM_ENDPOINT=unix:///host/var/run/docker.sock
      - FABRIC_LOGGING_SPEC=DEBUG
      - CORE_PEER_ID=example02
      - CORE_PEER_ADDRESS=peer:7061
      - CORE_PEER_LOCALMSPID=DEFAULT
      - CORE_PEER_MSPCONFIGPATH=/etc/hyperledger/msp
    working_dir: /opt/gopath/src/chaincode
    command: /bin/bash -c 'sleep 6000000'
    volumes:
      - /var/run/:/host/var/run/
      - ./msp:/etc/hyperledger/msp
      - ./../chaincode:/opt/gopath/src/chaincode
    depends_on:
      - orderer
      - peer
```
hi all, i have 7 organizations. how many orderers should I create?
@guoger Okay thanks for the comment
@GowriR see if you can list the file "/etc/hyperledger/fabric/orderer.block"; check its file permissions, chmod it to 777 and test it out
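Another likely cause (an assumption, since the host side isn't shown in the log): if `./orderer.block` does not exist on the host when docker-compose starts, Docker creates the bind-mount source as an empty *directory*, which produces exactly this panic. Generating the block before bringing the stack up avoids it; the configtxgen profile and channel ID here are placeholders:

```shell
# A leading "d" in the mode means Docker auto-created a directory.
ls -ld ./orderer.block

# Remove it and generate a real genesis block first.
rm -rf ./orderer.block
configtxgen -profile SampleDevModeSolo -channelID syschannel -outputBlock ./orderer.block
test -f ./orderer.block && echo "ok: orderer.block is a regular file"
```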
In the fabric tutorials where you add an org to a network, the steps include fetching the application channel config block and adding the new org's crypto material/policies to it.
But there is no step to update the system channel block (orderer genesis), which contains the consortium definitions -> is it updated implicitly when updating an application channel block?
@jyellick @yacovm Hi, we are still experiencing the TLS connection issue after we updated the orderer from HLF 1.4.0 to HLF 1.4.1. So far I was able to isolate the commit that introduced the issue: https://github.com/hyperledger/fabric/commit/f7c5221d1bbae8e1ef0daa2efc8ff4b61cea2019
So far I have not yet looked into the issue any deeper - any idea why this commit causes the orderer / kafka communication to break?
Has joined the channel.
i have an issue whereby a new raft orderer doesn't seem to detect that it belongs to an application channel: https://stackoverflow.com/questions/56057187/new-raft-orderer-cannot-detect-that-it-belongs-to-an-application-channel
Hello!
I broadcast a transaction to the orderer to update a channel configuration, and the orderer responds with a "SUCCESS" status. Does that mean this update transaction will eventually be applied?
If not, how can I subscribe to new blocks of the orderer system channel?
@krabradosty try polling `Deliver` API on orderer to see if that update is reflected in latest block
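A minimal polling loop over the orderer's Deliver API using the stock CLI (channel ID, endpoint and the 2-second interval are assumptions):

```shell
# Fetch the newest block of the system channel and print its number;
# stop polling once the config update is reflected in the latest block.
while true; do
  peer channel fetch newest latest.block -c syschannel \
    -o orderer.example.com:7050 --tls --cafile "$ORDERER_CA"
  configtxlator proto_decode --input latest.block --type common.Block | jq '.header.number'
  sleep 2
done
```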
Hi to all. Is there any example/sample with fabric v1.4 with fabric-ca-orderer image?
Has joined the channel.
```
2019-05-13 06:18:13.513 UTC [grpc] HandleSubConnStateChange -> DEBU 04c pickfirstBalancer: HandleSubConnStateChange: 0xc000239b60, READY
Error: got unexpected status: SERVICE_UNAVAILABLE -- backing Kafka cluster has not completed booting; try again later
root@cli-798764bc8b-bs7s7:/opt/gopath/src/github.com/hyperledger/fabric/peer# exit
```
the orderer and kafka are in different kubernetes clusters
Do you have many Kafka replicas, or only one? I got that error when I had a Kafka cluster with 7 organizations and the Kafka for one of those organizations was not properly configured.
I'm trying to start byfn with etcd raft on 1.4 release
./byfn.sh generate -c mychannel
./byfn.sh up -c mychannel -o etcdraft
Docker ps -a shows all orderers are exiting
docker logs orderer.example.com
https://ctrlv.it/id/179265/3831933279
@mastersingh24 ^^^
`newChainSupport -> PANI 007 Error retrieving consenter of type: etcdraft
panic: Error retrieving consenter of type: etcdraft`
```
docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
hyperledger/fabric-tools latest 432c24764fbb 4 weeks ago 1.55GB
hyperledger/fabric-orderer latest ec4ca236d3d4 4 weeks ago 173MB
hyperledger/fabric-peer latest a1e3874f338b 4 weeks ago 178MB
```
Do you have your `configtxgen` updated to the latest raft-capable version? You need to have all the binaries updated (1.4.1) in your `bin` path (generally `$GOPATH/bin`).
I'll double check but I checked out fabric.
`git checkout -b origin/release-1.4`
```
git status
On branch origin/release-1.4
Untracked files:
  (use "git add
```
that's the latest of release-1.4, which should be 1.4.1+
Actually, I'm working with 1.4.1 and thinking to move to 2.0.0-alpha
You need to update your scripts to pick up the new raft changes; maybe you still have scripts from the previous Kafka version
You can do a `git fetch` and `git checkout 1.4.1` then run `make` in the fabric root folder and simply copy the `configtxgen` to `$GOPATH/bin`
```
commit a31b330c1ad442c106f794c02ef8074343cf616d (HEAD -> release-1.4, origin/release-1.4)
Merge: d34754c0e 220d7fb4e
Author:
Date: Mon May 13 08:46:18 2019 +0000
Merge "FAB-15318 Fix Raft UT flake." into release-1.4
```
that's release 1.4 which has fixes on top of 1.4.1 .. where I'm referencing configtxgen and cryptogen
@rickr `-o etcdraft`is working for me on latest release-1.4.
I see in your logs "Not bootstrapping because of existing chains"
Looks like you have existing pre-raft channels
@dave.enyeart finally got v2.0 working.
i have only one kafka cluster
with single organization
:/
Try to delete old data that you have in your Kafka
Has joined the channel.
Has joined the channel.
Actually, this issue occurs on only one channel, where we had been sending automated requests.
Otherwise the whole fabric network is working fine; I'm able to perform transactions on other channels and with other chaincodes as well.
I get this error in the peer:
```
2019-05-16 21:04:52.779 UTC [blocksProvider] DeliverBlocks -> WARN 453 [mychannel] Got error &{SERVICE_UNAVAILABLE}
```
while on the orderer I'm encountering:
```
2019-05-16 21:03:42.737 UTC [common/deliver] Handle -> WARN 49e Error reading from 10.0.0.9:36274: rpc error: code = Canceled desc = context canceled
2019-05-16 21:03:52.746 UTC [common/deliver] deliverBlocks -> WARN 49f [channel: mychannel] Rejecting deliver request for 10.0.0.9:36390 because of consenter error
```
I have a zookeeper-kafka network. This needs to be resolved urgently, so if anyone has any idea please let me know.
And the errors I've mentioned repeat, but the one initiating them is:
```
2019-05-16 21:55:02.537 UTC [orderer/consensus/kafka] processMessagesToBlocks -> WARN 0f1 [channel: mychannel] got right time-to-cut message (for block 25641), no pending requests though; this might indicate a bug
2019-05-16 21:55:02.537 UTC [orderer/consensus/kafka] processMessagesToBlocks -> CRIT 0f2 [channel: mychannel] Consenter for channel exiting
```
Has joined the channel.
Hi, when converting from a Kafka-based orderer to a Raft-based orderer, will the conversion process need Kafka to put transactions on the ledger? Or is this not yet decided? I ask because we may change the way our production system is deployed around the time this is done, and the new deployment model would not have Kafka. If the conversion process needs Kafka, that implies it needs to happen on the old infrastructure first.
A second, different question: have any performance studies been published on the difference between Kafka- and Raft-based orderers? In particular, any effects of latency between orderers.
@rsherwood The migration does require that Kafka be online.
There has been some fairly unscientific testing comparing Raft and Kafka, which has shown comparable throughput with slightly better latency.
Thanks, so no published performance papers as yet?
Correct
Has joined the channel.
I'm receiving the following error in regards to policies: `2019-05-20 20:51:45.119 UTC [orderer.common.broadcast] ProcessMessage -> WARN 04d [channel: mychannel] Rejecting broadcast of config message from 172.31.0.20:33368 because of error: implicit policy evaluation failed - 0 sub-policies were satisfied, but this policy requires 1 of the 'Writers' sub-policies to be satisfied: permission denied`
Does anyone have any insight as to what could be causing this? I can supply my configtx.yaml if needed.
Can anyone please help me out with this?
Has joined the channel.
hello all, currently I am testing End2EndIT and each block contains 1 transaction. Do you know how the orderer can insert many transactions into a block?
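Block cutting is controlled by the `BatchTimeout` and `BatchSize` settings in the `Orderer` section of `configtx.yaml`; a sketch with example values (tune them to your workload):

```yaml
Orderer:
  # Cut a block after this long even if MaxMessageCount isn't reached;
  # a tiny timeout effectively yields one transaction per block.
  BatchTimeout: 2s
  BatchSize:
    # Maximum number of transactions per block
    MaxMessageCount: 10
    # Hard cap on the serialized block size
    AbsoluteMaxBytes: 99 MB
    # Target block size; a single larger tx still fits up to AbsoluteMaxBytes
    PreferredMaxBytes: 512 KB
```

With a test that submits transactions one at a time and waits for each, blocks will still hold one transaction each; you need concurrent submissions within the batch window to see multi-transaction blocks.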
Hi everyone, I have written an article on Raft and its implementation in Fabric. Kindly let me know your feedback.
“Detail Analysis of Raft & its implementation in Hyperledger Fabric” by Shyam Pratap Singh https://link.medium.com/zXJpmy2JOW
Did anyone have time to review my previous message?
Still trying to troubleshoot this issue without success.
@Rajatsharma which version of orderer are you running?
and do you have a way to reproduce this error?
v1.2.0
also, it would be nice to have debug level logs while you reproduce it
we've not been able to recover our data yet.
The thing is, whenever we start the whole network again there's some issue in Kafka, but they're only info messages. Kafka has moved its offset to something greater than 0; if we start a new orderer to sync up all the data, it gives an error because it's expecting the first block while the offset is large. In other cases we encounter this error:
```
2019-05-16 21:55:02.537 UTC [orderer/consensus/kafka] processMessagesToBlocks -> WARN 0f1 [channel: mychannel] got right time-to-cut message (for block 25641), no pending requests though; this might indicate a bug
2019-05-16 21:55:02.537 UTC [orderer/consensus/kafka] processMessagesToBlocks -> CRIT 0f2 [channel: mychannel] Consenter for channel exiting
```
The thing is, we've examined the Kafka logs at length, and we get to this point in Kafka: https://github.com/apache/kafka/blob/ce5ce2d569dd9fead42974d81bc7adfc5e6c7a22/core/src/main/scala/kafka/log/Log.scala#L1057 where Kafka has intentionally flushed out the older blocks. Now we're in a situation where we can't recover the channel even by using another orderer, as the offset has moved.
Could you please help me find a way to resolve this?
oh, has kafka purged data?
Yes
We were performing heavy load testing on the network; due to so much load, the Kafka and Zookeeper services got closed and we had to restart them. It used to work earlier, but on that day, after some time, Kafka had moved its offset to a large value and even updated the epoch.
```
0
25
4 30457
5 72333
6 74837
10 93640
12 99637
13 99684
14 100376
15 100411
17 102268
18 102437
20 105171
23 105172
25 105180
27 105192
29 105196
32 105203
35 105206
37 105209
38 105213
39 105225
41 105246
45 105250
46 105252
53 105253
111 105254
```
This is our leader-epoch-checkpoint file for that channel now.
unfortunately we do require kafka to never purge logs: https://jira.hyperledger.org/browse/FAB-7707
^ see discussion there
Okay I'm going through this one.
I got the point, we're stuck in the exact same issue. So there's no work around ?
i don't think so. if you don't have production data, great, you've avoided a future problem. If you do, sadly you'll need to devise a "client" to migrate data to a new kafka.
But that is not exactly our case, as we've set `KAFKA_LOG_RETENTION_MS=-1`.
And the channel was not idle. I understand we've hit an edge case in Kafka, but the issue you've mentioned is not exactly the one we're experiencing.
so, back to my previous question, has kafka purged old data?
Yes. But we had set `KAFKA_LOG_RETENTION_MS=-1`.
We expected that Kafka won't purge the data after setting this flag, but this issue has still occurred.
how about `log.retention.bytes`?
This is our configuration for that kafka in yaml file
```
kafka0:
  image: hyperledger/fabric-kafka:amd64-0.4.13
  container_name: kafka0
  environment:
    - KAFKA_LOG_RETENTION_MS=-1
    - KAFKA_MESSAGE_MAX_BYTES=103809024
    - KAFKA_REPLICA_FETCH_MAX_BYTES=103809024
    - KAFKA_BROKER_ID=0
    - KAFKA_ZOOKEEPER_CONNECT=zookeeper0:2181,zookeeper1:2181,zookeeper2:2181
    - KAFKA_UNCLEAN_LEADER_ELECTION_ENABLE=false
    - KAFKA_DEFAULT_REPLICATION_FACTOR=3
    - KAFKA_MIN_INSYNC_REPLICAS=2
  ports:
    - 9000:9092
  networks:
    - byfn
```
I'm asking this because we've not experienced this issue in production, but we could face something similar there too, since our config is the same there.
Could you get the kafka configs from its log at boot time? IIRC, kafka does print them when it starts
Okay, I'll send that too.
The logs are really long, is there any other way I could send the configuration to you.
https://hastebin.com
No, I'm not able to search them in the logs. It'll take a bit of time.
i don't understand. you can also send log as file here i suppose
If I restart Kafka, will the config appear then?
I've copied the whole config in https://hastebin.com/hinihiseva.cpp. Please have a look.
your configs look sane to me... i assume you've manually checked that the kafka initial offset has indeed incremented, and that data prior to that has been pruned?
Yes ... When we were debugging we came across those logs.
I'm not able to find those logs currently, but it was stated that kafka is deleting some files and moving the offset too.
These were the logs:
```
INFO Incrementing log start offset of partition mychannel-0 to 30457 in dir /tmp/kafka-logs (kafka.log.Log)
INFO Cleared earliest 0 entries from epoch cache based on passed offset 30457 leaving 6 in EpochFile for partition mychannel-0 (kafka.server.epoch.LeaderEpochFileCache)
INFO Updated PartitionLeaderEpoch. New: {epoch:13, offset:8621}, Current: {epoch:12, offset:8618} for Partition: mychannel1-0. Cache now contains 5 entries. (kafka.server.epoch.LeaderEpochFileCache)
INFO [GroupMetadataManager brokerId=1] Removed 0 expired offsets in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
INFO Found deletable segments with base offsets [0,4883,7464,22368,25071,26892] due to log start offset 30457 breach (kafka.log.Log)
INFO Scheduling log segment 0 for log mychannel-0 for deletion. (kafka.log.Log)
INFO Scheduling log segment 4883 for log mychannel-0 for deletion. (kafka.log.Log)
INFO Scheduling log segment 7464 for log mychannel-0 for deletion. (kafka.log.Log)
INFO Scheduling log segment 22368 for log mychannel-0 for deletion. (kafka.log.Log)
INFO Scheduling log segment 25071 for log mychannel-0 for deletion. (kafka.log.Log)
INFO Scheduling log segment 26892 for log mychannel-0 for deletion. (kafka.log.Log)
INFO Deleting segment 0 from log mychannel-0. (kafka.log.Log)
INFO Deleting segment 4883 from log mychannel-0. (kafka.log.Log)
INFO Deleting segment 7464 from log mychannel-0. (kafka.log.Log)
INFO Deleting segment 22368 from log mychannel-0. (kafka.log.Log)
INFO Deleting segment 25071 from log mychannel-0. (kafka.log.Log)
INFO Deleting segment 26892 from log mychannel-0. (kafka.log.Log)
INFO Deleting index /tmp/kafka-logs/mychannel-0/00000000000000026892.index.deleted (kafka.log.OffsetIndex)
INFO Deleting index /tmp/kafka-logs/mychannel-0/00000000000000022368.index.deleted (kafka.log.OffsetIndex)
INFO Deleting index /tmp/kafka-logs/mychannel-0/00000000000000026892.timeindex.deleted (kafka.log.TimeIndex)
INFO Deleting index /tmp/kafka-logs/mychannel-0/00000000000000022368.timeindex.deleted (kafka.log.TimeIndex)
INFO Deleting index /tmp/kafka-logs/mychannel-0/00000000000000025071.index.deleted (kafka.log.OffsetIndex)
INFO Deleting index /tmp/kafka-logs/mychannel-0/00000000000000000000.index.deleted (kafka.log.OffsetIndex)
INFO Deleting index /tmp/kafka-logs/mychannel-0/00000000000000007464.index.deleted (kafka.log.OffsetIndex)
INFO Deleting index /tmp/kafka-logs/mychannel-0/00000000000000004883.index.deleted (kafka.log.OffsetIndex)
INFO Deleting index /tmp/kafka-logs/mychannel-0/00000000000000025071.timeindex.deleted (kafka.log.TimeIndex)
INFO Deleting index /tmp/kafka-logs/mychannel-0/00000000000000000000.timeindex.deleted (kafka.log.TimeIndex)
INFO Deleting index /tmp/kafka-logs/mychannel-0/00000000000000007464.timeindex.deleted (kafka.log.TimeIndex)
INFO Deleting index /tmp/kafka-logs/mychannel-0/00000000000000004883.timeindex.deleted (kafka.log.TimeIndex)
```
Is there a definite plan about when the kafka-to-raft ordering service migration tool will be available?
Has joined the channel.
@yacovm @dave.enyeart ?
it is not a "tool"
it's a tutorial
though i guess there will be some scripts so you can call it a tool maybe?
as for timeline i don't know... @dave.enyeart might know
I guess in my mind, if it requires multiple script invocations and/or multiple manual steps, it's a tutorial; otherwise it's a "tool"; either way, yes, that "thing"
@minollo it will be available very soon, and then it will go through some more test cycles before a release. can you help test it and report findings?
any test help would expedite delivery into a release
Sure, I would be happy to; and I can also involve some people in the team with me
perfect, stay tuned on this channel, or even better, ask on #fabric-orderer-dev if you'd like to help test initial cut
Sounds good; thanks
@Rajatsharma could you share a more complete log of kafka? in particular, i'm looking for `log start offset` when kafka loads segment files at boot up.
@guoger I'm sharing logs at https://hastebin.com/barudeloro.json
The line we suspect is causing this problem is at line 1105.
```
Incrementing log start offset of partition qachannel-0 to 30457 in dir /tmp/kafka-logs
```
I'm not able to send this message on the previous thread. So I've sent this here.
Has joined the channel.
Has joined the channel.
@guoger did you find anything ?
not really.. something is off. I see this:
```
{"log":"[2019-05-14 07:32:45,471] INFO Completed load of log qachannel-0 with 1 log segments, log start offset 0 and log end offset 0 in 44 ms (kafka.log.Log)\n","stream":"stdout","time":"2019-05-14T07:32:45.47607923Z"}
```
which looks strange to me, and i suspect the log for this channel is not properly loaded. If this problem is reproducible, I'd seek help from the Kafka community.
Hi
I am trying to initialize a channel using the Java SDK but got the error below in the orderer log
error authorizing update: error validating DeltaSet: policy for [Group] /Channel/Application not satisfied
[orderer.common.broadcast] ProcessMessage -> WARN 008 [channel: uppclchannel] Rejecting broadcast of config message from 172.21.0.1:36158 because of error: error validating channel creation transaction for new channel 'uppclchannel', could not succesfully apply update to template configuration: error authorizing update: error validating DeltaSet: policy for [Group] /Channel/Application not satisfied: implicit policy evaluation failed - 0 sub-policies were satisfied, but this policy requires 1 of the 'Admins' sub-policies to be satisfied
Has joined the channel.
Hey guys, I'm having some problems while trying to run a RAFT ordering service
This is the configtx.yaml file used to generate the genesis block: https://pastebin.com/V45MyDnU
These are `orderer0.org1.example.com` logs: https://pastebin.com/jT9dWtAh
Can anyone help me with this? I'm not sure how to troubleshoot it "/
> 2019-05-29 21:10:41.330 UTC [core.comm] ServerHandshake -> ERRO 2b3 TLS handshake failed with error tls: first record does not look like a TLS handshake server=Orderer remoteaddress=100.25.102.3:45486
perhaps this is a port mismatch?
Hi Team,
I am facing a strange issue with chaincode. I have a function in my smart contract that reads an asset record from the ledger by its key and, based on the result, creates the record if it is not there and skips it if it already is; every asset record also has sub-asset records. The asset data comes from legacy systems to the ledger one invocation at a time. What I found is that sometimes a subsequent sub-asset record is not stored for an asset record that already has at least one sub-asset record from a previous invocation.
first of all, this question is better asked in #fabric-chaincode-dev. To answer it we might need more info, especially how your chaincode is written. My guess is that the *same* key-value is altered in consecutive transactions, which leads to the later ones being invalidated during the commit phase because they fail the MVCC check
you might investigate by looking at peer logs or the block metadata
Has joined the channel.
Hello. I'm getting this error (in the orderer log output) when I'm trying to add a new consortium to the orderer system channel:
```
Rejecting broadcast of config message from 172.17.0.1:41862 because of error: error applying config update to existing channel 'orderer-system-channel': initializing channelconfig failed: could not create channel Consortiums sub-group config: Attempted to define two different versions of MSP: Org2MSP
```
Actually what is the problem? Org2MSP is already defined in another consortium.
I use nodejs SDK.
If you define the same MSPID in multiple places in the configuration, it must define exactly the same MSP. This is to prevent ambiguity when evaluating policies. If two different MSP definitions existed for the same MSPID, then it is not clear whether a user should be authorized if one MSP authorizes it, while another for instance claims the cert has been revoked.
Hi again, one more question. I'm stuck with adding a new consortium to orderer system channel. I successfully broadcast update transaction to orderer, orderer accepts it, but fails during handling this transaction. See orderer logs output here: https://hastebin.com/juwunexaru.txt
Seems like something is wrong with the configuration block. The configuration is not the default; see the configtx.yaml which I used to create the genesis block here: https://hastebin.com/aroquheyag.yaml
Any thoughts on what I did wrong? I really appreciate your help, guys.
all orderers are set to use port 7050 for Raft, so I think a port mismatch is unlikely... the port mentioned in the log (45486) is an outbound port which is randomly chosen, right? I don't understand why the TLS handshake is failing
it seems like this happens when the request received is `grpc` not `grpcs`... do I need to enable TLS somewhere other than the `ORDERER_GENERAL_TLS_ENABLED`?
no... you don't need to do it anywhere
if you start a Raft orderer without TLS it will crash
solved it, I wasn't setting `ORDERER_GENERAL_CLUSTER_CLIENTCERTIFICATE`, `ORDERER_GENERAL_CLUSTER_CLIENTPRIVATEKEY`, `ORDERER_GENERAL_CLUSTER_ROOTCAS`. misread documentation and thought these were optional, my bad!
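For anyone else hitting this, a sketch of the TLS and cluster env vars a Raft orderer needs (paths are examples from a typical docker-compose setup; adjust to your volumes):

```yaml
environment:
  - ORDERER_GENERAL_TLS_ENABLED=true
  - ORDERER_GENERAL_TLS_PRIVATEKEY=/var/hyperledger/orderer/tls/server.key
  - ORDERER_GENERAL_TLS_CERTIFICATE=/var/hyperledger/orderer/tls/server.crt
  - ORDERER_GENERAL_TLS_ROOTCAS=[/var/hyperledger/orderer/tls/ca.crt]
  # Raft intra-cluster communication authenticates with these client credentials;
  # they are not optional, even when the general TLS vars above are set.
  - ORDERER_GENERAL_CLUSTER_CLIENTCERTIFICATE=/var/hyperledger/orderer/tls/server.crt
  - ORDERER_GENERAL_CLUSTER_CLIENTPRIVATEKEY=/var/hyperledger/orderer/tls/server.key
  - ORDERER_GENERAL_CLUSTER_ROOTCAS=[/var/hyperledger/orderer/tls/ca.crt]
```

Reusing `server.crt` as the cluster client certificate only works if the cert carries both server and client extended key usages, as discussed later in this thread.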
try to use `tcpdump` and record the information sent in the network
oh... hmmmm ok
thanks
sure
Looks like the problem is here
```
identity 0 does not satisfy principal: could not validate identity's OUs: the identity must be a client, a peer or an orderer identity to be valid, not a combination of them. OUs: [[0xc0001d7b90 0xc0001d7bc0]], MSP: [Org1MSP]
```
I use the same organization to define orderer and consortium. I can't?
Finally, I found the problem. Definition of orderer organization contained `fabric_node_ous` and it caused an error.
Has joined the channel.
Has joined the channel.
Hi all, I have tried out Kafka.
I have an issue: I have 3 orderers. If the last two orderers exit, I am still able to execute transactions, but if the first one exits, I am not able to do any transaction.
Will Kafka take care of this switching of orderers based on availability?
How to define the broadcast rpc in the configuration?
are you using cli? i suspect you are specifying _the first orderer_ with cli... and i don't quite understand your last question, could you elaborate?
I am using the Node-sdk .
I have 3 orderers: orderer0, orderer1 and orderer2. I have configured the Kafka services and it's working fine. But my problem is,
if I bring down orderer0 I am not able to invoke any transaction, but if I bring down orderer1 and orderer2 I am able to. So my doubt is: in a multi-orderer setup, to which orderer is the endorsed transaction sent? Do we need to specify that in the configuration?
have you specified all 3 orderer endpoints via sdk?
~have you specified all 3 orderer endpoints via sdk?~ yes you do
As per my understanding, orderer-to-orderer communication is not happening in the network?
no for kafka-based orderer
yes
So from client side , is it possible to send the transaction to the 3rd orderer , Can we specify that in configuration?
https://fabric-sdk-node.github.io/release-1.4/tutorial-network-config.html
yes you definitely can. pls see this doc
In my network-config.yaml file, I have specified all three orderer endpoints. So by default will it take only the first one?
Has joined the channel.
you'll need to double check, but i think you can actually specify an orderer instance as part of the request params. pls refer to #fabric-sdk-node
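A sketch of the relevant parts of a connection profile listing all three orderers (names and ports are examples; the plain fabric-client does not automatically fail over, so your client code still has to pick or retry against a specific orderer from this list):

```yaml
channels:
  mychannel:
    orderers:
      - orderer0.example.com
      - orderer1.example.com
      - orderer2.example.com
orderers:
  orderer0.example.com:
    url: grpcs://orderer0.example.com:7050
    tlsCACerts:
      # example path to the orderer org's TLS CA cert
      path: crypto/ordererOrganizations/example.com/tlsca/tlsca.example.com-cert.pem
  orderer1.example.com:
    url: grpcs://orderer1.example.com:7050
  orderer2.example.com:
    url: grpcs://orderer2.example.com:7050
```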
I even tried to replicate this same issue, but I'm not able to. Is there any way I could use my network again?
I've backed up all my data and I've tried multiple things:
I tried re-spawning all my Kafka and Zookeeper nodes, but that does not work as they're not able to find any data.
I suspect there's some issue in the block information of a single block. Is there any way I could get to a state before that block, perhaps by changing some offset in the orderers or deleting some data? Do you know any workaround for this?
I'm getting an error from this part of the code: https://github.com/hyperledger/fabric/blob/09e245b2b28072cf99670cabe94adc628d0db340/orderer/consensus/kafka/chain.go#L470
Probably I'm stuck at the TODO at line 474 of this code.
And precisely, I'm getting the error from here. Is there anything wrong with the code?
hi all!!
i'm trying to create a network using Raft but i'm getting the following panic
```
2019-06-10 15:26:35.818 UTC [orderer.common.cluster] ReplicateChains -> PANI 01d Failed pulling system channel: failed obtaining the latest block for channel orderersconfig
panic: Failed pulling system channel: failed obtaining the latest block for channel orderersconfig
goroutine 67 [running]:
github.com/hyperledger/fabric/vendor/go.uber.org/zap/zapcore.(*CheckedEntry).Write(0xc000256580, 0x0, 0x0, 0x0)
/opt/gopath/src/github.com/hyperledger/fabric/vendor/go.uber.org/zap/zapcore/entry.go:229 +0x515
github.com/hyperledger/fabric/vendor/go.uber.org/zap.(*SugaredLogger).log(0xc00014e208, 0x1134304, 0x102ff4c, 0x21, 0xc0006afc00, 0x1, 0x1, 0x0, 0x0, 0x0)
/opt/gopath/src/github.com/hyperledger/fabric/vendor/go.uber.org/zap/sugar.go:234 +0xf6
github.com/hyperledger/fabric/vendor/go.uber.org/zap.(*SugaredLogger).Panicf(0xc00014e208, 0x102ff4c, 0x21, 0xc0006afc00, 0x1, 0x1)
/opt/gopath/src/github.com/hyperledger/fabric/vendor/go.uber.org/zap/sugar.go:159 +0x79
github.com/hyperledger/fabric/common/flogging.(*FabricLogger).Panicf(0xc00014e210, 0x102ff4c, 0x21, 0xc0006afc00, 0x1, 0x1)
/opt/gopath/src/github.com/hyperledger/fabric/common/flogging/zap.go:74 +0x60
github.com/hyperledger/fabric/orderer/common/cluster.(*Replicator).ReplicateChains(0xc00072c000, 0x1061d48, 0xc000666a80, 0xc00072c000)
/opt/gopath/src/github.com/hyperledger/fabric/orderer/common/cluster/replication.go:155 +0x4c2
github.com/hyperledger/fabric/orderer/common/server.(*replicationInitiator).ReplicateChains(0xc000289140, 0xc0005c8440, 0xc0007ee000, 0x1, 0x1, 0x0, 0x0, 0x0)
/opt/gopath/src/github.com/hyperledger/fabric/orderer/common/server/onboarding.go:120 +0x20a
github.com/hyperledger/fabric/orderer/common/server.(*inactiveChainReplicator).replicateDisabledChains(0xc00007b140)
/opt/gopath/src/github.com/hyperledger/fabric/orderer/common/server/onboarding.go:224 +0x1f5
github.com/hyperledger/fabric/orderer/common/server.(*inactiveChainReplicator).run(0xc00007b140)
/opt/gopath/src/github.com/hyperledger/fabric/orderer/common/server/onboarding.go:202 +0x42
created by github.com/hyperledger/fabric/orderer/common/server.initializeEtcdraftConsenter
/opt/gopath/src/github.com/hyperledger/fabric/orderer/common/server/main.go:637 +0x3fc
```
which i think may be produced because of
```
2019-06-10 15:26:11.757 UTC [orderer.consensus.etcdraft] detectSelfID -> WARN 001 Could not find -----BEGIN CERTIFICATE-----
When you configure the Raft cluster, you must enumerate the individual TLS certs for the orderers, not the roots
i have them too on the configtx.yaml
This is the genesis block and you are bootstrapping your orderer for the first time?
wait. is ServerTLSCert: the cert of the orderer or the root?
The orderer
so which is the ClientTLSCert?
You may use the same cert for both if your CA supports it.
oh, i get it, i'm going to try
yeah, i'm using the same CA to generate both the tls and the regular orderer certs
It is less an issue of the same CA, and more of getting both the client and server attributes set inside the TLS certificate. If you are using a public CA (think Verisign, Digicert, etc.) they may not be willing to issue you a cert that may act as both a server and a client. If you are acting as your own CA, you should be able to use the same cert for both.
I'm using Fabric's CA for now
@jyellick i made the change and now both the server and client certs are the server.crt of the orderer, but i'm getting the same result. Answering your previous question: yes, it is the first bootstrap of the ordering service
Are you still seeing the message complaining that your orderer's TLS cert is not among a set of root CAs? It should now be a set of orderer TLS certs.
yes
If you are still seeing the root CAs enumerated, that would indicate to me that the genesis block was not appropriately regenerated. Did you delete your ledger directory?
but the first cert in the set is the "missing" cert
yup, everything gets deleted (it's a kubernetes deployment), but let me try again to check
another question: for the orderer config, must the var ORDERER_GENERAL_CLUSTER_ROOTCAS be set, or, if empty, does the orderer get the values from somewhere (like the genesis block)?
nope @jyellick same result, i'm generating the genesis with
```configtxgen -profile networkProfile -outputBlock $ORDERER_GENERAL_GENESISFILE -channelID orderersconfig```
can provide the configtx.yaml if you want
If you are still seeing `Could not find
That `
yeah, now those are the server.crt of the respective orderer, and the message still appears; in the message i can see the same "missing" cert listed.
I'd just noticed that I'd set the corresponding env vars for the config (`ServerCertificate`, `ServerPrivateKey`) without setting `ListenPort`/`ListenAddress`, even though the docs say not to. Removing those clears a bunch of other warnings, but I still get the warning
BUT the panic has changed to
```
2019-06-10 16:17:38.917 UTC [orderer.common.cluster] createReplicator -> PANI 002 Failed creating puller config from bootstrap block: unable to decode TLS certificate PEM:
panic: Failed creating puller config from bootstrap block: unable to decode TLS certificate PEM:
```
guess it cannot read the tls cert unless i tell it where to locate it :thinking_face:
nevermind, i think somewhere in my automated cert exchange i screwed up and got the regular cert instead of the TLS one
that's why it can't decode the TLS cert: it isn't actually a TLS cert
Ah, yep, that would do it
nope, it was the correct one. Out of laziness i'd used a webpage to decode it instead of openssl, and it didn't show the extended key usage field. Back to the thinking part
Have you looked at https://hyperledger-fabric.readthedocs.io/en/release-1.4/raft_configuration.html and the raft section in https://hyperledger-fabric.readthedocs.io/en/release-1.4/build_network.html ?
The latter gives an example of a working Raft configuration, it might be easier to start from there and identify the differences
yeah, that's where i checked the env vars and config files. i'll look into this in more detail tomorrow
thanks for the help!!
Hi all, I have a multi-orderer setup with Kafka, but if I invoke a transaction it is sent only to the first orderer mentioned in the network-config file. Also, if that same orderer goes down, transactions start failing. Any solutions for handling this issue?
no i don't think so... i feel it's still a problem with kafka mysteriously pruning history data...
@Rajatsharma Can you try using the Kafka console-consumer to try to consume offsets 0-1? If this is successful, likely this is a Fabric problem, if this fails, then your Kafka logs have been pruned (despite your configuration), and the Kafka community would likely be of more help.
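To make that check concrete, a hedged sketch with the stock Kafka console consumer (the broker address `kafka0:9092` and channel name `mychannel` are placeholders; Fabric names each channel's Kafka topic after the channel ID):

```shell
# Try to consume the first two offsets of the channel's topic straight from
# Kafka. The payloads are Fabric protobufs, so expect binary output on
# success. If these offsets have been pruned, Kafka cannot return them
# (the consumer falls back per its offset-reset policy instead).
kafka-console-consumer.sh \
  --bootstrap-server kafka0:9092 \
  --topic mychannel \
  --partition 0 \
  --offset 0 \
  --max-messages 2
```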
dug into the history a bit, and i think the original reporter had confirmed that kafka has pruned data: https://chat.hyperledger.org/channel/fabric-orderer?msg=uqqgnDKhyPFJsAkL2
If Kafka has pruned the data, then this is definitely a Kafka problem. Your configuration indicates data should not be pruned, but somehow it has been.
did you have any luck in #fabric-sdk-node?
Yes, we've confirmed Kafka has pruned the data. Now it's a Kafka issue only; the thing is, I want to recover my network. So I was asking if there's any way I could do something to even partially recover that network.
Also, could you help me get assistance for this issue from the Kafka community?
Likely the easiest way to recover your network would be to simply replay the transactions in the blockchain against a new network. You would end up with a different hash chain, but the same contents.
Do you have a working orderer and the problem is that you cannot bootstrap a new one?
No, currently we're facing the offset issue in all the running orderers, while if we create a new one it looks for the starting offset. That's the issue; I'm not able to use my current network at all.
I was wondering if there's a way I could use the running orderers to sync a new Kafka cluster; then I could use that.
What about the 'replay' suggestion above?
Because that's the only way I can fetch the data Kafka has pruned, since it's still present in the orderer.
How could we replay the transactions ? That would also work fine.
You would bootstrap a new network, with the same MSP information (but new Kafka broker addresses etc.). Then, iterate over your existing blockchain, sending the raw transactions in the blocks to the new orderer. Once this is complete, you will have a new network, with the same transactions in the same order (but with different block boundaries and hashes). You may then transition to using this other network and discard the original.
How do you suggest I go through those raw transactions in the orderer?
Is there any tool for this?
Do you have a peer which is up to date?
Yes !
Then you may use the standard peer APIs for retrieving blocks using your favorite SDK or CLI.
Simply retrieve the blocks in order from 0 to the current, and iterate over the block contents of each. The bytes in the data section are of type 'common.Envelope' which is the data type accepted by the Orderer's Broadcast interface.
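The fetch half of that replay can be sketched with the peer CLI (the channel name and block file names are placeholders; the usual CORE_PEER_* environment is assumed, and resubmitting the extracted envelopes still requires an SDK client):

```shell
CHANNEL=mychannel   # placeholder channel name
# "peer channel getinfo" prints: Blockchain info: {"height":N,...}
HEIGHT=$(peer channel getinfo -c "$CHANNEL" | sed 's/^Blockchain info: //' | jq .height)
# Pull every block, 0 through the current tip, from the up-to-date peer
for i in $(seq 0 $((HEIGHT - 1))); do
  peer channel fetch "$i" "block_${i}.pb" -c "$CHANNEL"
done
# Each block_N.pb is a common.Block; the entries in its data section are
# common.Envelope messages that can be resubmitted to the new ordering
# service via its Broadcast API.
```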
Okay, thanks!! I'll try this out; it seems to be the only way now. If you know of some code or a GitHub repo, please share that too, I could use it for reference.
It's been in the plan to add a nicer way to recover from this sort of scenario to the codebase for a while, but it hasn't been prioritized. I'll see about getting it into 1.4.2, but that obviously wouldn't help you.
Yes!!!! Thanks anyway. I've been stuck on this for a really long time; the data is important, which is why I was trying to find a way to recover it.
No
@jyellick i got it, and it may be a bug in configtxgen or the orderer
what happened is that i was putting the config for raft in OrdererDefaults and then putting the reference in the profile, which makes it crash
instead, what you have to do is put the raft config directly in the profile section of the configtx.yaml
what's strange is that it works with Kafka getting the config from the Defaults
@dsanchezseco That is indeed very odd. In fact, I thought we did our system tests with the config in the orderer defaults.
i'll elaborate more tomorrow in JIRA if you like
in the samples it's the other way around though
Yes, I was actually just going to try to quickly reproduce using the sample
I moved the raft config to the orderer defaults section in byfn and it still seems to work for me
I'll be curious to see your JIRA with details and artifacts
yep, same here. it's in my project where it doesn't work
i'll check carefully that i didn't make any mistake, as i'm using a template to generate the file
If you wanted to check, you could use the `-inspectBlock` flag of `configtxgen` and compare the output for the block generated from defaults against the block generated with profile overrides
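A sketch of that comparison (the profile and channel ID are from the messages above; the two config directories and output file names are placeholders):

```shell
# Block built from the variant with Raft config in OrdererDefaults
configtxgen -configPath ./config-defaults -profile networkProfile \
  -channelID orderersconfig -outputBlock defaults.block
# Block built from the variant with Raft config inlined in the profile
configtxgen -configPath ./config-overrides -profile networkProfile \
  -channelID orderersconfig -outputBlock overrides.block
# Decode both and diff: any difference points at the misbehaving variant
configtxgen -inspectBlock defaults.block > defaults.json
configtxgen -inspectBlock overrides.block > overrides.json
diff defaults.json overrides.json
```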
will do
thanks again
Has joined the channel.
I am working on a raft orderer project using Hyperledger Fabric. I am attempting to install chaincode on one of my peer nodes using this command:
`docker exec $env_vars $peer peer chaincode install -n loan-chain -l node -p /opt/gopath/src/github.com/hyperledger/fabric/peer/loan-chain-node -v v1 --tls --cafile /etc/hyperledger/msp/orderer0.example.com/msp/tlscacerts/tlsca.example.com-cert.pem `
I am experiencing the below error. The error sometimes randomly goes away with repeated attempts. Any advice? I have additional goroutine logs if someone wants to see them.
```
fatal error: unexpected signal during runtime execution
[signal SIGSEGV: segmentation violation code=0x1 addr=0xe5 pc=0x7fd384254638]

runtime stack:
runtime.throw(0x1272c18, 0x2a)
	/opt/go/src/runtime/panic.go:608 +0x72
runtime.sigpanic()
```
@Swhit210 please do not cross-post questions.
Has joined the channel.
Hey guys, I'm trying to add a new orderer to a RAFT ordering service
How should I encode the bootstrap genesis block for the new orderer? I'm using the Node.js SDK, so I'm guessing I should use the `fabProtos.common.Block` format and then encode it to use as the bootstrap block, but I can't find any documentation on how to properly set the fields
Also, docs say
```1: Adding the TLS certificates of the new node to the channel through a channel configuration update transaction. Note: the new node must be added to the system channel before being added to one or more application channels.```
but I couldn't find any operations guide on how to edit the `etcdraft` section of the config block
I have the current system channel block in JSON format in my Node.js program (https://hastebin.com/ijizayejac.json) and as you can see the ConsensusType information is encoded as base64. If you decode it you will find protobuf-encoded data so I'm not really sure what to do with it
Can anyone help me out?
@bandreghetti The standard method for updating channel configuration is to use the `configtxlator` tool. This can decode the nested proto structures into plain JSON for you to operate on. You can of course use the proto structures directly, but this generally requires more intimate knowledge of the system.
See https://hyperledger-fabric.readthedocs.io/en/release-1.4/config_update.html
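As a rough sketch of that workflow (channel name, orderer address, and CA file path are placeholders):

```shell
# Fetch the latest config block for the channel
peer channel fetch config config_block.pb -c mychannel \
  -o orderer.example.com:7050 --tls --cafile "$ORDERER_CA"
# Decode the nested protos to plain JSON
configtxlator proto_decode --input config_block.pb \
  --type common.Block --output config_block.json
# The etcdraft consenter set is decoded under the ConsensusType value
jq '.data.data[0].payload.data.config.channel_group.groups.Orderer.values.ConsensusType.value.metadata.consenters' config_block.json
```

After editing the JSON, `configtxlator proto_encode` plus `compute_update` produce the config update transaction, as the linked doc describes.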
@jyellick Ok I guess my question was too vague, sorry.
I'm using the `channel.getChannelConfigFromOrderer()` method from the Node.js SDK, which according to the docs gives me a `fabProtos.common.ConfigEnvelope`. What I don't know is how to translate a `common.ConfigEnvelope` into a `common.Block` structure so I can use it as the bootstrap block for a new orderer node
Maybe I should ask this in #fabric-sdk-node then?
@jyellick https://gerrit.hyperledger.org/r/c/fabric-test/+/31727/2..5/tools/PTE/ConnProfiles/config.yaml
WHY do we only show peer orgs in the connection profile, each with a list of peers (as in lines 126-127), WHEREAS the orderers are only listed individually? It seems there is an assumption that all the orderers are in one orderer org. Why isn't this connection profile built to have a list of ordererorgs too, each with a list of orderer(s)? We would like our tools to be able to know where to get the orderers and orderer org information, in a standard format, for all connection profiles.
The samples in the node-sdk and java-sdk repos do not show any orderer orgs information.
I noticed cryptogen generates an "ordererOrganizations" directory, but I see a subdirectory named just "example.com" rather than a specific ordererOrgN.example.com in the cert paths.
Should we create a bug for cryptogen?
Hey guys, I'm facing a very random issue. I have a network of 3 Zookeeper, 4 Kafka, 3 orderer, and 2 peer nodes. Everything was working fine, but suddenly the orderer stopped delivering blocks to the peer. When we use a client to process transactions, the transactions get processed and the blocks even get committed, but they are not being delivered to the peer.
If we restart the peer container everything starts working smoothly, but the same issue occurs again after some time. Could anyone help me out with this?
could you share some logs from the peer & orderer from when this problem happens? also, could you try fetching blocks using the client/CLI to see if it works?
peer-send.log
orderer0-send.log
I've sent partial logs for both the peer and orderer; if you require more logs, please let me know.
And getinfo behaves similarly: when the blocks are syncing I get the info, but when the peer is not receiving blocks, the command appears to hang.
> While as we use a client to process the transaction the transactions are getting processed and even the block gets committed. But that is not being communicated to the peer.
what do you mean by "transactions are getting processed"? also, how do you tell that the peer is not getting blocks? does it hang?
it's probably easier to debug if you send the orderer log from when you use the client to fetch a block from the orderer
I can see logs in the orderer that the block got committed, but there's no log in the peer, and I'm not able to fetch information either. I kept processing transactions using the client, and once I restarted the peer, all the logs for all the blocks started appearing.
Okay, I'll send you specific logs now. And by "hang" I mean there's no response in the CLI; it gets stuck as if it's expecting some output.
i'm expecting some debug log like:
```
"[channel: %s] Received seekInfo (%p) %v from %s"
```
on orderer side
Orderer logs from when the request is sent:
Orderer: https://hastebin.com/abafabicof.cs
Peer Debug request:https://hastebin.com/jolifaquju.cs
Peer logs: https://hastebin.com/qesovedase.rb
When we run the command: `peer channel fetch newest -c qawolverine1 -o orderer0.wolverine.com:7050`
But when we run `peer channel getinfo -c qawolverine1 -o orderer0.wolverine.com:7050`, there's no response
While running `peer channel getinfo`, I'm getting this error:
```
github.com/hyperledger/fabric/core/chaincode.(*Handler).Execute
/opt/gopath/src/github.com/hyperledger/fabric/core/chaincode/handler.go:919
github.com/hyperledger/fabric/core/chaincode.(*ChaincodeSupport).execute
/opt/gopath/src/github.com/hyperledger/fabric/core/chaincode/chaincode_support.go:253
github.com/hyperledger/fabric/core/chaincode.(*ChaincodeSupport).Invoke
/opt/gopath/src/github.com/hyperledger/fabric/core/chaincode/chaincode_support.go:239
github.com/hyperledger/fabric/core/chaincode.(*ChaincodeSupport).Execute
/opt/gopath/src/github.com/hyperledger/fabric/core/chaincode/chaincode_support.go:179
github.com/hyperledger/fabric/core/endorser.(*SupportImpl).Execute
/opt/gopath/src/github.com/hyperledger/fabric/core/endorser/support.go:141
github.com/hyperledger/fabric/core/endorser.(*Endorser).callChaincode
/opt/gopath/src/github.com/hyperledger/fabric/core/endorser/endorser.go:136
github.com/hyperledger/fabric/core/endorser.(*Endorser).SimulateProposal
/opt/gopath/src/github.com/hyperledger/fabric/core/endorser/endorser.go:287
github.com/hyperledger/fabric/core/endorser.(*Endorser).ProcessProposal
/opt/gopath/src/github.com/hyperledger/fabric/core/endorser/endorser.go:501
github.com/hyperledger/fabric/core/handlers/auth/filter.(*expirationCheckFilter).ProcessProposal
/opt/gopath/src/github.com/hyperledger/fabric/core/handlers/auth/filter/expiration.go:61
github.com/hyperledger/fabric/core/handlers/auth/filter.(*filter).ProcessProposal
/opt/gopath/src/github.com/hyperledger/fabric/core/handlers/auth/filter/filter.go:31
github.com/hyperledger/fabric/protos/peer._Endorser_ProcessProposal_Handler
/opt/gopath/src/github.com/hyperledger/fabric/protos/peer/peer.pb.go:112
github.com/hyperledger/fabric/vendor/google.golang.org/grpc.(*Server).processUnaryRPC
/opt/gopath/src/github.com/hyperledger/fabric/vendor/google.golang.org/grpc/server.go:923
github.com/hyperledger/fabric/vendor/google.golang.org/grpc.(*Server).handleStream
/opt/gopath/src/github.com/hyperledger/fabric/vendor/google.golang.org/grpc/server.go:1148
github.com/hyperledger/fabric/vendor/google.golang.org/grpc.(*Server).serveStreams.func1.1
/opt/gopath/src/github.com/hyperledger/fabric/vendor/google.golang.org/grpc/server.go:637
runtime.goexit
/opt/go/src/runtime/asm_amd64.s:2361
error sending
failed to execute transaction b5a84effec0b6cc3c1867fc1c73205d2912c52150a1c15e99c5677b49513e8cb
github.com/hyperledger/fabric/core/chaincode.(*ChaincodeSupport).Execute
/opt/gopath/src/github.com/hyperledger/fabric/core/chaincode/chaincode_support.go:181
github.com/hyperledger/fabric/core/endorser.(*SupportImpl).Execute
/opt/gopath/src/github.com/hyperledger/fabric/core/endorser/support.go:141
github.com/hyperledger/fabric/core/endorser.(*Endorser).callChaincode
/opt/gopath/src/github.com/hyperledger/fabric/core/endorser/endorser.go:136
github.com/hyperledger/fabric/core/endorser.(*Endorser).SimulateProposal
/opt/gopath/src/github.com/hyperledger/fabric/core/endorser/endorser.go:287
github.com/hyperledger/fabric/core/endorser.(*Endorser).ProcessProposal
/opt/gopath/src/github.com/hyperledger/fabric/core/endorser/endorser.go:501
github.com/hyperledger/fabric/core/handlers/auth/filter.(*expirationCheckFilter).ProcessProposal
/opt/gopath/src/github.com/hyperledger/fabric/core/handlers/auth/filter/expiration.go:61
github.com/hyperledger/fabric/core/handlers/auth/filter.(*filter).ProcessProposal
/opt/gopath/src/github.com/hyperledger/fabric/core/handlers/auth/filter/filter.go:31
github.com/hyperledger/fabric/protos/peer._Endorser_ProcessProposal_Handler
/opt/gopath/src/github.com/hyperledger/fabric/protos/peer/peer.pb.go:112
github.com/hyperledger/fabric/vendor/google.golang.org/grpc.(*Server).processUnaryRPC
/opt/gopath/src/github.com/hyperledger/fabric/vendor/google.golang.org/grpc/server.go:923
github.com/hyperledger/fabric/vendor/google.golang.org/grpc.(*Server).handleStream
/opt/gopath/src/github.com/hyperledger/fabric/vendor/google.golang.org/grpc/server.go:1148
github.com/hyperledger/fabric/vendor/google.golang.org/grpc.(*Server).serveStreams.func1.1
/opt/gopath/src/github.com/hyperledger/fabric/vendor/google.golang.org/grpc/server.go:637
runtime.goexit
/opt/go/src/runtime/asm_amd64.s:2361
```
Hi Team, For RAFT based orderers (5 by default), are they all registered using the same CA? Should admin@orderer.example.com be different than admin@orderer2.example.com?
@mbanerjee It really depends on how they are administered. If they are all run by the same org, then sharing a CA would be fine/expected. If they are each controlled by different orgs, then I would expect different CAs.
Orderers are independent of the org.
When a Raft cluster is set up, it's not really tied to an org.
The orderer containers are just set up with consensus type etcdraft.
Is that understanding correct?
Yes and no. Raft endpoints are agreed to by all ordering orgs, and as such, they are not specified in a way as to tie a particular node to a particular org. However, each node will need a signing certificate issued by a particular org, and will be managed by some org, so each Raft node is in fact tied to an org.
Can the certificates be issued by ca.orderer.example.com?
It sounds like you only have one orderer org? If so, then you should issue the certs from that org's TLS CA.
```
- Host: orderer.example.com
  Port: 7050
  ClientTLSCert: crypto-config/ordererOrganizations/example.com/orderers/orderer.example.com/tls/server.crt
  ServerTLSCert: crypto-config/ordererOrganizations/example.com/orderers/orderer.example.com/tls/server.crt
```
What is the guideline?
Should the orderer be tied to orgs?
if so, how do we map the orderer raft nodes to each org?
It really depends on your topology needs. If it is acceptable to have a single dedicated ordering organization, this is the easiest setup. If multiple organizations want to be involved in ordering for decentralization reasons, then they should each act as an ordering org. Note, it is always recommended to have different logical organizations for ordering and peers, even if they are the same organization in actuality. In this way, the harm of a private key being compromised is significantly limited.
Agreed
For multi-org networks, how should we set up the orderers? Is it one orderer org per org, using that org's CA for certificates? If so, should each orderer org have 5 orderer nodes?
I would suggest a default of a total of 5 orderer nodes, across all organizations. Only adding additional ones if you have requirements to do so.
Additional Raft nodes do add failure tolerance, but they also decrease throughput, especially on WANs. Although up to 100 or so should still work, in practice keeping the number smaller is better.
For a 2-org network, maybe one org can manage 2 orderer nodes and the other 3.
Certainly
But for say more than 5 org network, how do we determine who manages which orderer?
Is that something that can be updated post network creation?
Yes, it may be updated later. You may change certificates, and add and remove nodes. Who runs nodes would be a business decision. Those running ordering nodes obviously have a stronger stake in the network, but, they must also pay for computation and storage resources. The exact needs of the network will dictate the ordering topology.
could this be a reason for this:
```{"log":"\u001b[36m2019-06-11 22:12:28.630 UTC [common/deliver] deliverBlocks -\u003e DEBU 1ff76\u001b[0m Context canceled, aborting wait for next block\n","stream" :"stderr","time":"2019-06-11T22:12:28.630555963Z"}
```
Has joined the channel.
For a 2-org network, maybe one org can manage 2 orderer nodes and the other 3. With this setup, if org1's 2 nodes fail, the Raft cluster will still work, but then org2 will gain complete control.
Not exactly. The ordering service is purposefully distinct from the peer application components. Just because an org has complete control over ordering does not mean anything about application transactions. Unless the peers are appropriately endorsing, the transactions will still be marked invalid.
OK, got it.
Is a single dedicated ordering organization acceptable as a first cut?
As the business use case becomes clearer, we can tweak it.
Are there any security issues with having one orderer.example.com CA generate the TLS certs for all the Raft orderer nodes?
No, the TLS certificates are pinned, so there is no way for one to masquerade as another.
I have a couple of raft best practise questions.
since i'm about to (try to) change over from a Kafka 1.4 setup to a Raft 1.4.1 one.
It looks on first glance as if the orderer nodes would be best associated to individual orgs in the network and then associated with the channels that org is to access. That seems closer to a true consortium model imo. But then the "orderers can see the system channel" means they can all see what everyone else is connected to, which is not at all desirable. So we appear to be back to a TTP operating the orderers with all the other orgs connecting their peers to them. That's a lot closer to the model we have now, but I was hoping we could do better somehow. Did I misunderstand something there?
Is there a recommended way to associate channels to orderers? Say I have 5 orderers and 100 channels. Is a random selection of n from m recommended, should all 5 be hooked up to all channels, or is there something that'll do that for me built in? I suspect the former, but figured it was worth asking. If it's n from m does that mean that a self-generated network config has to remember explicitly which orderers are attached to which channels? This may be better in the SDK channel rather than here, but figured I'd ask here first while I'm typing.
in the Raft protocol, the term ends and a new election occurs if the leader crashes or stops responding and the first follower's electionTimeout reaches 0. Provided that the leader operates properly all the time, does this mean that the leader can serve indefinitely? Is there any other mechanism to elect a new leader even though the current leader is operating perfectly?
> But then the "orderers can see the system channel" means they can all see what everyone else is connected to, which is not at all desirable.
If you are concerned about seeing who is in what channel, you may create the channel with a single org, and then extend it. Only orderers authorized on the channel can read the current config, the system channel only records the initial member set.
> So we appear to be back to a TTP operating the orderers with all the other orgs connecting their peers to them. That's a lot closer to the model we have now, but I was hoping we could do better somehow. Did I misunderstand something there?
You may run Raft just like you did Kafka, or, you may pick particular subsets of orderers for channels for privacy.
> Say I have 5 orderers and 100 channels. Is a random selection of n from m recommended, should all 5 be hooked up to all channels, or is there something that'll do that for me built in?
Nothing built in. It will vary based on the fault tolerance required, and the load on the channels. On moderate load, 5 orderers serving 100 channels may be fine. Under intense load, you might only want each orderer to service a few channels. Because of the variables concerned, it's left to the operator to determine the appropriate deployment.
There is currently no mechanism to force periodic leader rotation.
hi friends, i have a Hyperledger setup with 3 ZooKeeper, 4 Kafka, and 3 orderer nodes. i created the channel using the first orderer, did some transactions, and everything worked fine. but if i stop the first orderer, everything errors out. so my question is: why do we need 3 orderers? i thought that if one orderer went down, another would take its place because of the Kafka setup, but that is not the case. also, if all orderers are down, all the in-flight transactions should be retriggered by Kafka, right? but that is also not happening, so why do we need Kafka?
@guoger did you find anything in this?
Thats what I figured. Thx.
Has joined the channel.
Has joined the channel.
ArtemFrantsiian - Fri Jun 14 2019 11:31:08 GMT+0300 (Eastern European Summer Time).txt
Hi all, I got this error when I try to run fabric-sample/first-network with etcdraft consensus
```orderer.example.com | 2019-06-14 08:27:07.143 UTC [orderer.common.server] replicateDisabledChains -> INFO 00c Found 1 inactive chains: [byfn-sys-channel]
orderer.example.com | 2019-06-14 08:27:07.143 UTC [orderer.common.cluster] ReplicateChains -> INFO 00d Will now replicate chains [byfn-sys-channel]
orderer.example.com | 2019-06-14 08:27:07.149 UTC [orderer.common.cluster] discoverChannels -> INFO 00e Discovered 1 channels: [byfn-sys-channel]
orderer.example.com | 2019-06-14 08:27:07.149 UTC [orderer.common.cluster] channelsToPull -> INFO 00f Evaluating channels to pull: [byfn-sys-channel]
orderer.example.com | 2019-06-14 08:27:07.149 UTC [orderer.common.cluster] channelsToPull -> INFO 010 Probing whether I should pull channel byfn-sys-channel
orderer.example.com | 2019-06-14 08:27:07.156 UTC [orderer.common.cluster.replication] probeEndpoint -> WARN 011 Failed connecting to orderer4.example.com:7050: failed to create new connection: connection error: desc = "transport: error while dialing: dial tcp: lookup orderer4.example.com on 127.0.0.11:53: no such host"
orderer.example.com | 2019-06-14 08:27:07.156 UTC [orderer.common.cluster.replication] probeEndpoint -> WARN 012 Failed connecting to orderer3.example.com:7050: failed to create new connection: connection error: desc = "transport: error while dialing: dial tcp: lookup orderer3.example.com on 127.0.0.11:53: no such host"
orderer.example.com | 2019-06-14 08:27:07.156 UTC [orderer.common.cluster.replication] func1 -> WARN 013 Received error of type 'failed to create new connection: connection error: desc = "transport: error while dialing: dial tcp: lookup orderer4.example.com on 127.0.0.11:53: no such host"' from orderer4.example.com:7050
orderer.example.com | 2019-06-14 08:27:07.156 UTC [orderer.common.cluster.replication] func1 -> WARN 014 Received error of type 'failed to create new connection: connection error: desc = "transport: error while dialing: dial tcp: lookup orderer3.example.com on 127.0.0.11:53: no such host"' from orderer3.example.com:7050
orderer.example.com | 2019-06-14 08:27:07.156 UTC [orderer.common.cluster.replication] probeEndpoint -> WARN 015 Failed connecting to orderer2.example.com:7050: failed to create new connection: connection error: desc = "transport: error while dialing: dial tcp: lookup orderer2.example.com on 127.0.0.11:53: no such host"
orderer.example.com | 2019-06-14 08:27:07.156 UTC [orderer.common.cluster.replication] func1 -> WARN 016 Received error of type 'failed to create new connection: connection error: desc = "transport: error while dialing: dial tcp: lookup orderer2.example.com on 127.0.0.11:53: no such host"' from orderer2.example.com:7050
orderer.example.com | 2019-06-14 08:27:07.157 UTC [core.comm] ServerHandshake -> ERRO 017 TLS handshake failed with error remote error: tls: bad certificate server=Orderer remoteaddress=172.27.0.3:58786```
and after this orderer service panic with
```orderer.example.com | panic: Failed pulling system channel: failed obtaining the latest block for channel byfn-sys-channel
orderer.example.com |
orderer.example.com | goroutine 31 [running]:
orderer.example.com | github.com/hyperledger/fabric/vendor/go.uber.org/zap/zapcore.(*CheckedEntry).Write(0xc0000efe40, 0x0, 0x0, 0x0)
orderer.example.com | /opt/gopath/src/github.com/hyperledger/fabric/vendor/go.uber.org/zap/zapcore/entry.go:229 +0x515
orderer.example.com | github.com/hyperledger/fabric/vendor/go.uber.org/zap.(*SugaredLogger).log(0xc00000e0a8, 0x1134304, 0x102ff4c, 0x21, 0xc00083bc00, 0x1, 0x1, 0x0, 0x0, 0x0)
orderer.example.com | /opt/gopath/src/github.com/hyperledger/fabric/vendor/go.uber.org/zap/sugar.go:234 +0xf6
orderer.example.com | github.com/hyperledger/fabric/vendor/go.uber.org/zap.(*SugaredLogger).Panicf(0xc00000e0a8, 0x102ff4c, 0x21, 0xc00083bc00, 0x1, 0x1)
orderer.example.com | /opt/gopath/src/github.com/hyperledger/fabric/vendor/go.uber.org/zap/sugar.go:159 +0x79
orderer.example.com | github.com/hyperledger/fabric/common/flogging.(*FabricLogger).Panicf(0xc00000e0b0, 0x102ff4c, 0x21, 0xc00083bc00, 0x1, 0x1)
orderer.example.com | /opt/gopath/src/github.com/hyperledger/fabric/common/flogging/zap.go:74 +0x60
orderer.example.com | github.com/hyperledger/fabric/orderer/common/cluster.(*Replicator).ReplicateChains(0xc00030d980, 0x1061d48, 0xc0005b09a0, 0xc00030d980)
orderer.example.com | /opt/gopath/src/github.com/hyperledger/fabric/orderer/common/cluster/replication.go:155 +0x4c2
orderer.example.com | github.com/hyperledger/fabric/orderer/common/server.(*replicationInitiator).ReplicateChains(0xc00030d860, 0xc000150400, 0xc000386000, 0x1, 0x1, 0x0, 0x0, 0x0)
orderer.example.com | /opt/gopath/src/github.com/hyperledger/fabric/orderer/common/server/onboarding.go:120 +0x20a
orderer.example.com | github.com/hyperledger/fabric/orderer/common/server.(*inactiveChainReplicator).replicateDisabledChains(0xc0000b6d20)
orderer.example.com | /opt/gopath/src/github.com/hyperledger/fabric/orderer/common/server/onboarding.go:224 +0x1f5
orderer.example.com | github.com/hyperledger/fabric/orderer/common/server.(*inactiveChainReplicator).run(0xc0000b6d20)
orderer.example.com | /opt/gopath/src/github.com/hyperledger/fabric/orderer/common/server/onboarding.go:202 +0x42
orderer.example.com | created by github.com/hyperledger/fabric/orderer/common/server.initializeEtcdraftConsenter
orderer.example.com | /opt/gopath/src/github.com/hyperledger/fabric/orderer/common/server/main.go:637 +0x3fc
orderer.example.com exited with code 2```
@Rajatsharma sorry, i was occupied elsewhere... based on the logs in hastebin, the only thing i can tell is that the peer successfully got block 4816 from the orderer. *does the peer hang from that point on??*
also, fwiw, `peer channel fetch` retrieves blocks from the orderer, while `peer channel getinfo` queries the peer for channel info.
btw, which version of fabric are you using?
Okay, thanks for this, I was not sure about these commands!! The thing is, when I run the fetch command I get the block, but the peer is not automatically accepting blocks from the orderer as it should.
I'm running a 1.2 network
hmmm... i'd suspect a deadlock in the peer... could you try `kill -6 peer_pid` next time this happens? it should give you a dump of the running goroutines.
Yes, that's possible... And the machine that hosted these docker containers had more than one peer.
also, i suspect it has something to do with https://jira.hyperledger.org/browse/FAB-11094
So if this happens, you're suggesting I should kill the goroutines.
or is restarting the peer the only option?
kill with `-6` signal to gather more info
not suggesting any solution here
just trying to get more info to diagnose
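For a dockerized peer, one way to capture that dump (the container name is a placeholder; SIGABRT makes the Go runtime print goroutine stacks and then exit):

```shell
# Send SIGABRT to the container's main process (the peer) ...
docker kill --signal=ABRT peer0.org1.example.com
# ... then save the stack dump that lands in the container's stderr
docker logs peer0.org1.example.com > peer-goroutines.txt 2>&1
```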
also, https://jira.hyperledger.org/browse/FAB-10540
try v1.2.1 might be another option
Okay!! Let me just check; I guess we restarted the peer only a while back. When I get this error in the peer again, I'll try `kill -6 peer_pid` and ping you with the output.
sure. meanwhile pls take a look at the jira i posted, i feel that's the issue we are facing here, which is fixed in patch release 1.2.1
Yes, I was going through that only. I suspect I'm facing the same issue.
I'll check and let you know.
I have TLS enabled on the orderers, peers, and CLI. I have tested my certificates by running the byfn script end to end
I see this error on the orderer ' 2019-06-14 22:45:52.439 UTC [grpc] handleRawConn -> DEBU 2e4 grpc: Server.Serve failed to complete security handshake from "192.168.112.16:59930": tls: first record does not look like a TLS handshake'
What could I be missing? I was getting a bad certificate error earlier because I had not mapped the cert path correctly. I fixed that.
got the problem finally but dunno if it could be considered a bug or not.
the matter was that i had extra empty lines before the ---- BEGIN CERTIFICATE ---- (don't ask...) and thus the orderer thought it was a different cert. I don't know if that could be a problem with the cert loader or more a problem in the device between the keyboard and the chair...
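A one-liner sketch for sanitizing such a cert (the file name `cert.pem` is hypothetical): delete only the blank lines before the first non-blank one, so `-----BEGIN CERTIFICATE-----` comes first.

```shell
# Fabricate a PEM with two stray leading blank lines, then clean it up.
printf '\n\n-----BEGIN CERTIFICATE-----\nMIIB...\n-----END CERTIFICATE-----\n' > cert.pem
sed -i '/./,$!d' cert.pem   # keep from the first non-empty line to EOF; drop leading blanks
head -n 1 cert.pem          # the BEGIN line now comes first
```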
@jyellick Is there a worked example anywhere showing how to extend the system channel to add orderers from additional orgs after the initial setup? RTFM only tells me the steps to add a node. That's likely going to involve a significant amount of trial and error to get right and I'd love to circumvent that insofar as possible.
I got this error for a few reasons:
1. TLS was not configured on my Orderers, Peers, CLI, and Certificate Authorities.
2. The path to the certificates, keys and root certs were not correct given the volumes I had.
3. I was not using --tls and --cafile flags when running "peer channel create"/"peer channel fetch"/"peer chaincode install"/"peer chaincode instantiate"/"peer chaincode invoke"
4. I was not using grpcs and https in my fabric-config.yaml files used with my client application.
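For illustration, a TLS-enabled `peer channel create` might look like the sketch below (the orderer address and CA path are placeholders for your own network); the same `--tls --cafile` pair applies to the fetch/install/instantiate/invoke commands above:

```shell
# Hypothetical paths and names -- substitute your own orderer and TLS CA cert.
export ORDERER_CA=/etc/hyperledger/crypto/ordererOrganizations/example.com/tlsca/tlsca.example.com-cert.pem
peer channel create -o orderer.example.com:7050 -c mychannel \
  -f ./channel-artifacts/channel.tx --tls --cafile "$ORDERER_CA"
```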
I am working with Raft at the moment and I am testing how the network behaves when a Raft orderer goes down. When the Raft leader is stopped, a new election occurs and a new leader is chosen from the remaining orderers as expected. I can also continue running transactions using my client application. However, when a Raft follower goes down, I get the below errors in my peer nodes and my transactions begin timing out. Is this the intended behavior?
```
2019-06-17 14:16:18.966 UTC [ConnProducer] NewConnection -> ERRO 0ec Failed connecting to orderer0.example.com:7050 , error: context deadline exceeded
2019-06-17 14:16:18.966 UTC [deliveryClient] connect -> ERRO 0ed Failed obtaining connection: Could not connect to any of the endpoints: [orderer0.example.com:7050]
2019-06-17 14:16:18.966 UTC [deliveryClient] try -> WARN 0ee Got error: Could not connect to any of the endpoints: [orderer0.example.com:7050] , at 9 attempt. Retrying in 4m16
```
I just tested and can confirm on my end that enabled TLS is not necessary for the CAs.
while creating a channel in raft mode i get this
```
2019-06-17 15:47:28.244 UTC [orderer.commmon.multichannel] commitBlock -> PANI 096 [channel: orderersconfig] Could not append block: unexpected Previous block hash. Expected PreviousHash = [d4edb1760d748bb7f37951bc904467135013284183b9f9029a19a638584e5499], PreviousHash referred in the latest block= [75f73fc680be570ad7aa407ca7bc73283f357234433ed1d63bb92992d97561fd]
panic: [channel: orderersconfig] Could not append block: unexpected Previous block hash. Expected PreviousHash = [d4edb1760d748bb7f37951bc904467135013284183b9f9029a19a638584e5499], PreviousHash referred in the latest block= [75f73fc680be570ad7aa407ca7bc73283f357234433ed1d63bb92992d97561fd]
```
but i had to change the writers from Admins and Clients to Member in the configtx, because otherwise it said it wasn't an allowed writer for the channel; i guess it's not related, but just in case
Has joined the channel.
Hey guys, Is it possible to define consenter list for a channel when using raft? If so can anyone point me to an example?
In byfn, we can define a list of consenters and use that profile to generate genesis block but no where I see an option to choose particular set of consenters for a channel.
Trying to start an orderer with raft.
Get this: `panic: panic: Error opening leveldb: file does not exist`
I'm sure it's a configuration problem, but if someone knows the fix off top of their head that will save me some time. TIA and all that.
NM: Found it - if any part of the directory tree for the ledger already exists, it pitches a fit. Deleting the whole thing fixes it. First go around I just deleted the contents of the folder - it appears to want everything, including the empty directories, gone. Leaving this in case it hits someone else.
This sounds to me like there are two different blockchains, with the same crypto material, and conflicting channel names.
You can take a look at this unmerged stuff: https://gerrit.hyperledger.org/r/c/fabric-samples/+/30938
Has joined the channel.
Hello, all. I'm trying to modify the `byfn` sample using **RAFT** protocol to have two orderer orgs, but when I create another orderer org and move some orderers to it, the script fails.
Are additional changes needed?
You need to update the `configtx.yaml` to include the other orderer endpoints, additionally, you may need to change hostnames etc. in scripts, and I assume you have also updated the compose files, crypto files, etc. for the second org.
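For example, the consenter list in `configtx.yaml` would need entries for the second org's orderers too -- a hedged fragment (host names and cert paths are illustrative, following the BYFN layout):

```yaml
EtcdRaft:
  Consenters:
    - Host: orderer.example.com        # first orderer org
      Port: 7050
      ClientTLSCert: crypto-config/ordererOrganizations/example.com/orderers/orderer.example.com/tls/server.crt
      ServerTLSCert: crypto-config/ordererOrganizations/example.com/orderers/orderer.example.com/tls/server.crt
    - Host: orderer3.example2.com      # second orderer org
      Port: 7050
      ClientTLSCert: crypto-config/ordererOrganizations/example2.com/orderers/orderer3.example2.com/tls/server.crt
      ServerTLSCert: crypto-config/ordererOrganizations/example2.com/orderers/orderer3.example2.com/tls/server.crt
```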
it shouldn't be, i'm intending to create just one. I'll try with just one ordering org instead of two
Yes, I updated `configtx.yaml` , `crypto-config.yaml`, `docker-compose-etcdraft2.yaml` to add the new OrdererOrg to the consortium and its respective orderer nodes.
However, I'm getting an error in chaincode:
```
Error: endorsement failure during query. response: status:500 message:"make sure the chaincode mycc has been successfully instantiated and try again: chaincode mycc not found"
!!!!!!!!!!!!!!! Query result on peer0.org1 is INVALID !!!!!!!!!!!!!!!!
```
Do note that running `configtxgen` multiple times to create the genesis block, even with the same `configtx.yaml`, will produce different results, and that would cause this error.
I've tried to increase `cli` delay to 6 seconds
Most likely this is a symptom of an earlier uncaught failure. I would check further up in the logs to see if there are any problems.
In `orderer.example.com` logs I can see:
```
2019-06-18 14:10:20.364 UTC [common.deliver] deliverBlocks -> WARN 043 [channel: mychannel] Rejecting deliver request for 172.23.0.11:47714 because of consenter error
```
then that's the matter, as each orderer is generating its own block
A larger excerpt:
```
2019-06-18 14:10:20.363 UTC [orderer.common.cluster] updateStubInMapping -> INFO 032 Allocating a new stub for node 2 with endpoint of orderer2.example.com:7050 for channel mychannel
2019-06-18 14:10:20.363 UTC [orderer.common.cluster] updateStubInMapping -> INFO 033 Deactivating node 2 in channel mychannel with endpoint of orderer2.example.com:7050 due to TLS certificate change
2019-06-18 14:10:20.363 UTC [orderer.common.cluster] updateStubInMapping -> INFO 034 Allocating a new stub for node 3 with endpoint of orderer3.example2.com:7050 for channel mychannel
2019-06-18 14:10:20.364 UTC [orderer.common.cluster] updateStubInMapping -> INFO 035 Deactivating node 3 in channel mychannel with endpoint of orderer3.example2.com:7050 due to TLS certificate change
2019-06-18 14:10:20.364 UTC [orderer.common.cluster] updateStubInMapping -> INFO 036 Allocating a new stub for node 4 with endpoint of orderer4.example2.com:7050 for channel mychannel
2019-06-18 14:10:20.364 UTC [orderer.common.cluster] updateStubInMapping -> INFO 037 Deactivating node 4 in channel mychannel with endpoint of orderer4.example2.com:7050 due to TLS certificate change
2019-06-18 14:10:20.364 UTC [orderer.common.cluster] updateStubInMapping -> INFO 038 Allocating a new stub for node 5 with endpoint of orderer5.example2.com:7050 for channel mychannel
2019-06-18 14:10:20.364 UTC [orderer.common.cluster] updateStubInMapping -> INFO 039 Deactivating node 5 in channel mychannel with endpoint of orderer5.example2.com:7050 due to TLS certificate change
2019-06-18 14:10:20.364 UTC [orderer.common.cluster] applyMembershipConfig -> INFO 03a 2 exists in both old and new membership for channel mychannel , skipping its deactivation
2019-06-18 14:10:20.364 UTC [orderer.common.cluster] applyMembershipConfig -> INFO 03b 3 exists in both old and new membership for channel mychannel , skipping its deactivation
2019-06-18 14:10:20.364 UTC [orderer.common.cluster] applyMembershipConfig -> INFO 03c 4 exists in both old and new membership for channel mychannel , skipping its deactivation
2019-06-18 14:10:20.364 UTC [orderer.common.cluster] applyMembershipConfig -> INFO 03d 5 exists in both old and new membership for channel mychannel , skipping its deactivation
2019-06-18 14:10:20.364 UTC [orderer.common.cluster] Configure -> INFO 03e Exiting
2019-06-18 14:10:20.364 UTC [orderer.consensus.etcdraft] start -> INFO 03f Starting raft node as part of a new channel channel=mychannel node=1
2019-06-18 14:10:20.364 UTC [orderer.consensus.etcdraft] becomeFollower -> INFO 040 1 became follower at term 0 channel=mychannel node=1
2019-06-18 14:10:20.364 UTC [orderer.consensus.etcdraft] newRaft -> INFO 041 newRaft 1 [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0] channel=mychannel node=1
2019-06-18 14:10:20.364 UTC [orderer.consensus.etcdraft] becomeFollower -> INFO 042 1 became follower at term 1 channel=mychannel node=1
2019-06-18 14:10:20.364 UTC [common.deliver] deliverBlocks -> WARN 043 [channel: mychannel] Rejecting deliver request for 172.23.0.11:47714 because of consenter error
2019-06-18 14:10:20.365 UTC [comm.grpc.server] 1 -> INFO 044 streaming call completed grpc.service=orderer.AtomicBroadcast grpc.method=Deliver grpc.peer_address=172.23.0.11:47714 grpc.code=OK grpc.call_duration=557.053441ms
2019-06-18 14:10:20.526 UTC [orderer.consensus.etcdraft] apply -> INFO 045 Applied config change to add node 1, current nodes in channel: [1 2 3 4 5] channel=mychannel node=1
2019-06-18 14:10:20.526 UTC [orderer.consensus.etcdraft] apply -> INFO 046 Applied config change to add node 2, current nodes in channel: [1 2 3 4 5] channel=mychannel node=1
2019-06-18 14:10:20.526 UTC [orderer.consensus.etcdraft] apply -> INFO 047 Applied config change to add node 3, current nodes in channel: [1 2 3 4 5] channel=mychannel node=1
2019-06-18 14:10:20.526 UTC [orderer.consensus.etcdraft] apply -> INFO 048 Applied config change to add node 4, current nodes in channel: [1 2 3 4 5] channel=mychannel node=1
2019-06-18 14:10:20.526 UTC [orderer.consensus.etcdraft] apply -> INFO 049 Applied config change to add node 5, current nodes in channel: [1 2 3 4 5] channel=mychannel node=1
2019-06-18 14:10:20.567 UTC [common.deliver] deliverBlocks -> WARN 04a [channel: mychannel] Rejecting deliver request for 172.23.0.11:47716 because of consenter error
2019-06-18 14:10:20.568 UTC [comm.grpc.server] 1 -> INFO 04b streaming call completed grpc.service=orderer.AtomicBroadcast grpc.method=Deliver grpc.peer_address=172.23.0.11:47716 grpc.code=OK grpc.call_duration=200.725263ms
```
so nice, one more thing i have to exchange in advance in my automater.... :(
In `orderer3.example2.com`, the other OrdererOrg, everything looks fine:
```
2019-06-18 14:10:20.872 UTC [orderer.consensus.etcdraft] run -> INFO 05d raft.node: 3 elected leader 3 at term 2 channel=mychannel node=3
2019-06-18 14:10:20.896 UTC [orderer.consensus.etcdraft] run -> INFO 05e Leader 3 is present, quit campaign channel=mychannel node=3
2019-06-18 14:10:20.896 UTC [orderer.consensus.etcdraft] serveRequest -> INFO 05f Raft leader changed: 0 -> 3 channel=mychannel node=3
2019-06-18 14:10:20.913 UTC [orderer.consensus.etcdraft] serveRequest -> INFO 060 Start accepting requests as Raft leader at block [0] channel=mychannel node=3
2019-06-18 14:10:46.954 UTC [orderer.consensus.etcdraft] propose -> INFO 061 Created block [1], there are 0 blocks in flight channel=mychannel node=3
2019-06-18 14:10:46.954 UTC [orderer.consensus.etcdraft] serveRequest -> INFO 062 Received config transaction, pause accepting transaction till it is committed channel=mychannel node=3
2019-06-18 14:10:47.104 UTC [orderer.consensus.etcdraft] writeBlock -> INFO 063 Writing block [1] (Raft index: 7) to ledger channel=mychannel node=3
2019-06-18 14:10:53.030 UTC [orderer.consensus.etcdraft] propose -> INFO 064 Created block [2], there are 0 blocks in flight channel=mychannel node=3
2019-06-18 14:10:53.030 UTC [orderer.consensus.etcdraft] serveRequest -> INFO 065 Received config transaction, pause accepting transaction till it is committed channel=mychannel node=3
2019-06-18 14:10:53.141 UTC [orderer.consensus.etcdraft] writeBlock -> INFO 066 Writing block [2] (Raft index: 8) to ledger channel=mychannel node=3
2019-06-18 14:11:19.624 UTC [orderer.consensus.etcdraft] propose -> INFO 067 Created block [3], there are 0 blocks in flight channel=mychannel node=3
2019-06-18 14:11:19.718 UTC [orderer.consensus.etcdraft] writeBlock -> INFO 068 Writing block [3] (Raft index: 9) to ledger channel=mychannel node=3
```
The same applies (looks fine) to `orderer2.example.com` from the first OrdererOrg, and to both `orderer4.example2.com` and `orderer5.example2.com`
Any tips?
When I reset to the original values, it works flawlessly
My suspicion is that it has to do with the scripts, and not with the Raft quorum, but really, there are no shortcuts here, I expect you will need to walk carefully through the steps, perhaps running them manually, comparing output with the original BYFN to identify your issues.
I can say that a multi-org Raft network has been successfully deployed and tested many times, so it is possible.
Thanks for your help! I'll try to execute it manually and carefully. Just to confirm: what happens in a multi-org Raft orderer when orgs start to leave the network? Is a minimum of 3 orderer nodes needed to achieve consensus?
The Raft network may grow and shrink arbitrarily. But, you may only add/remove one node at a time. If you drop below 3 nodes, then you have no crash fault tolerance, because a majority of nodes must be up to maintain quorum (and a majority of 2 is 2, a majority of 1 is 1)
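The majority arithmetic above, spelled out: a quorum of n nodes is floor(n/2)+1, so the crash faults tolerated are f = n - quorum.

```shell
for n in 1 2 3 4 5; do
  quorum=$(( n / 2 + 1 ))     # smallest majority of n
  faults=$(( n - quorum ))    # crash faults tolerated while keeping quorum
  echo "n=$n quorum=$quorum tolerates=$faults"
done
```

Note that n=2 tolerates no failures at all, which is why 3 is the practical minimum for crash fault tolerance.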
Thank you very much, Jason! :)
Has joined the channel.
Hello,
I am getting the following logs in orderer:
```
2019-06-19 05:09:02.057 UTC [common/deliver] Handle -> DEBU 5d0 Attempting to read seek info message from 10.0.0.2:60280
2019-06-19 05:09:02.057 UTC [common/deliver] deliverBlocks -> WARN 5d1 [channel: eprocurechannelall] Rejecting deliver request for 10.0.0.2:60280 because of consenter error
2019-06-19 05:09:02.057 UTC [common/deliver] Handle -> DEBU 5d2 Waiting for new SeekInfo from 10.0.0.2:60280
2019-06-19 05:09:02.057 UTC [common/deliver] Handle -> DEBU 5d3 Attempting to read seek info message from 10.0.0.2:60280
2019-06-19 05:09:03.032 UTC [orderer/consensus/kafka] try -> DEBU 5d4 [channel: eprocurechannelall] Connecting to the Kafka cluster
2019-06-19 05:09:03.035 UTC [orderer/consensus/kafka] try -> DEBU 5d5 [channel: eprocurechannelall] Need to retry because process failed = kafka server: The requested offset is outside the range of offsets maintained by the server for the given topic/partition.
```
and here is the kafka logs:
https://pastebin.com/UwMVWkVD
Can anyone help in this?
Hey, Is there any way for us to remove a channel or remove a peer from a channel through sdk?
Has joined the channel.
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=mdi8e8hBjWNXk2B54) Please help
did kafka prune logs?
Hi we have recently added a new organisation to our network and when this organisation tries to start their orderer, they get the following panic from the etcdraft consensus. I told them to remove any index, etcdraft and chains folder they would have already and try starting the orderer again, but they say the same error is showing and the orderer fails:
```
2019-06-19 19:42:39.795 UTC [orderer.consensus.etcdraft] Step -> INFO ed0 5 [term: 1] received a MsgHeartbeat message with higher term from 2 [term: 16] channel=colombia-sys-channel node=5
2019-06-19 19:42:39.795 UTC [orderer.consensus.etcdraft] becomeFollower -> INFO ed1 5 became follower at term 16 channel=colombia-sys-channel node=5
2019-06-19 19:42:39.795 UTC [orderer.consensus.etcdraft] commitTo -> PANI ed2 tocommit(27) is out of range [lastIndex(0)]. Was the raft log corrupted, truncated, or lost? channel=colombia-sys-channel node=5
panic: tocommit(27) is out of range [lastIndex(0)]. Was the raft log corrupted, truncated, or lost?
goroutine 34 [running]:
github.com/hyperledger/fabric/vendor/go.uber.org/zap/zapcore.(*CheckedEntry).Write(0xc0000c3e40, 0x0, 0x0, 0x0)
/opt/gopath/src/github.com/hyperledger/fabric/vendor/go.uber.org/zap/zapcore/entry.go:229 +0x515
github.com/hyperledger/fabric/vendor/go.uber.org/zap.(*SugaredLogger).log(0xc000822ed8, 0x4, 0x10573d2, 0x5d, 0xc0009cbba0, 0x2, 0x2, 0x0, 0x0, 0x0)
/opt/gopath/src/github.com/hyperledger/fabric/vendor/go.uber.org/zap/sugar.go:234 +0xf6
github.com/hyperledger/fabric/vendor/go.uber.org/zap.(*SugaredLogger).Panicf(0xc000822ed8, 0x10573d2, 0x5d, 0xc0009cbba0, 0x2, 0x2)
/opt/gopath/src/github.com/hyperledger/fabric/vendor/go.uber.org/zap/sugar.go:159 +0x79
github.com/hyperledger/fabric/common/flogging.(*FabricLogger).Panicf(0xc000822ee0, 0x10573d2, 0x5d, 0xc0009cbba0, 0x2, 0x2)
/opt/gopath/src/github.com/hyperledger/fabric/common/flogging/zap.go:74 +0x60
github.com/hyperledger/fabric/vendor/go.etcd.io/etcd/raft.(*raftLog).commitTo(0xc0003ecd20, 0x1b)
/opt/gopath/src/github.com/hyperledger/fabric/vendor/go.etcd.io/etcd/raft/log.go:203 +0x14d
github.com/hyperledger/fabric/vendor/go.etcd.io/etcd/raft.(*raft).handleHeartbeat(0xc001111040, 0x8, 0x5, 0x2, 0x10, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
/opt/gopath/src/github.com/hyperledger/fabric/vendor/go.etcd.io/etcd/raft/raft.go:1324 +0x54
github.com/hyperledger/fabric/vendor/go.etcd.io/etcd/raft.stepFollower(0xc001111040, 0x8, 0x5, 0x2, 0x10, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
/opt/gopath/src/github.com/hyperledger/fabric/vendor/go.etcd.io/etcd/raft/raft.go:1269 +0x450
github.com/hyperledger/fabric/vendor/go.etcd.io/etcd/raft.(*raft).Step(0xc001111040, 0x8, 0x5, 0x2, 0x10, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
/opt/gopath/src/github.com/hyperledger/fabric/vendor/go.etcd.io/etcd/raft/raft.go:971 +0x12db
github.com/hyperledger/fabric/vendor/go.etcd.io/etcd/raft.(*node).run(0xc00036db60, 0xc001111040)
/opt/gopath/src/github.com/hyperledger/fabric/vendor/go.etcd.io/etcd/raft/node.go:357 +0x1101
created by github.com/hyperledger/fabric/vendor/go.etcd.io/etcd/raft.RestartNode
/opt/gopath/src/github.com/hyperledger/fabric/vendor/go.etcd.io/etcd/raft/node.go:246 +0x31b
```
Does this have to do with the configuration of the orderer locally or is it a problem with the raft cluster? Anyone has an idea?
Thanks a lot!
oh, ouch... @guoger @jyellick ^
@braduf i recommend you open a JIRA in any case, should any interesting bugs be found
If that node was joined to the network and started successfully once, and you have deleted the WAL, then you need to remove that node, from the config, and as a separate transaction add it back. This will assign it a new Raft Node ID.
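A rough sketch of that remove/re-add flow with `configtxlator` (the channel name, orderer address, and the consenter host matched in the `jq` filter are all hypothetical; the same steps run twice, once removing the node and once adding it back with its certs):

```shell
CH=mychannel
peer channel fetch config config_block.pb -o orderer0.example.com:7050 -c "$CH" \
  --tls --cafile "$ORDERER_CA"
configtxlator proto_decode --input config_block.pb --type common.Block \
  | jq .data.data[0].payload.data.config > config.json
# Tx 1: drop the stale consenter (matched by host here)
jq '.channel_group.groups.Orderer.values.ConsensusType.value.metadata.consenters
      |= map(select(.host != "orderer5.example2.com"))' config.json > modified_config.json
configtxlator proto_encode --input config.json          --type common.Config --output config.pb
configtxlator proto_encode --input modified_config.json --type common.Config --output modified_config.pb
configtxlator compute_update --channel_id "$CH" \
  --original config.pb --updated modified_config.pb --output update.pb
# ...wrap update.pb in an envelope, sign it, submit with `peer channel update`;
# then repeat the flow adding the consenter back, so it gets a fresh Raft node ID.
```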
he can also wipe out that node's ledger and then give it the latest config block, no? @jyellick
it should replicate everything
No, I believe etcdraft is smarter than that. The other nodes in the network track the last heartbeat sent by a node; if he tries to send one with a lower sequence, they tell him he's corrupt and he dies.
Because if you wipe the WAL, you have lost Raft safety, because he could re-vote causing split majorities
i don't get it - can't that node replicate to a state of an empty WAL and then ask for a snapshot? :thinking_face:
> and he dies.
oh... did it ever occur to you?
Yes, we saw this in SVT
oh, i see....
basically what is missing is some API from the application that consumes Raft, to know if the proposal is sound or not, right? @jyellick
i.e - you can take a look at the ledger and see if the proposal makes any sense
I'm looking at the etcd source, trying to understand where it would be realizing that the WAL has been truncated. Perhaps it is not etcd doing this, though that had been my assumption.
Thanks a lot for your answers, I think some problem also came from adding and removing organisations and consenters at the same time in one config transaction. Could that have caused the new node taking the node ID of the removed node? We will try removing the node and adding him again. Thanks a lot. And so I create a JIRA ticket either way?
@braduf adding and removing a node at the same time is actually *certificate rotation* -- swapping the certificate of an *existing* node with a new one -- and this is a purely Fabric operation. Raft is not aware of it, and therefore expects no change to persisted data.
so, from Raft's point of view, an *existing* node is rebooted, but with all its WAL gone
although Raft leader tracks the progress of followers (commit index) *in memory*, so if you reboot the leader before starting new orderer, i believe this panic should be gone
if you still see problem, pls report back
Sorry, I didn't get you. Can you please elaborate?
fabric expects kafka to *never* delete history logs -- base offset should be zero for all topics. so i suspect that your kafka deleted some old data. maybe you can try to confirm this with a kafka client?
I never deleted any data from kafka. All logs are mounted on the system so restarting kafka also persists the data. But still I will double check. Can you help me with the steps how to confirm this with kafka client?
https://gist.github.com/ursuad/e5b8542024a15e4db601f34906b30bb5
kafka could do log retention if not configured properly...
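For the record, the retention knobs in question -- a docker-compose style fragment (variable names follow the common Kafka image convention; verify against your image's docs) that tells Kafka to keep the log forever, which the Fabric ordering service depends on:

```yaml
environment:
  - KAFKA_LOG_RETENTION_MS=-1       # never expire segments by age
  - KAFKA_LOG_RETENTION_BYTES=-1    # never expire segments by size
```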
I will check.
Hii,
I tried following commands:
```
root@kafka3:/opt/kafka/bin# ./kafka-topics.sh --zookeeper zookeeper0:2181 --list
eprocurechannelall
testchainid
root@kafka3:/opt/kafka/bin# cd ..
root@kafka3:/opt/kafka# bin/kafka-run-class.sh kafka.tools.GetOffsetShell --broker-list kafka0:9092 --topic eprocurechannelall --time -2
eprocurechannelall:0:0
root@kafka3:/opt/kafka# bin/kafka-run-class.sh kafka.tools.GetOffsetShell --broker-list kafka0:9092 --topic testchainid --time -2
testchainid:0:0
root@kafka3:/opt/kafka# bin/kafka-run-class.sh kafka.tools.GetOffsetShell --broker-list kafka0:9092 --topic eprocurechannelall --time -1
eprocurechannelall:0:15426
root@kafka3:/opt/kafka# bin/kafka-run-class.sh kafka.tools.GetOffsetShell --broker-list kafka0:9092 --topic testchainid --time -1
testchainid:0:60
```
Hii,
Here are the commands I tried and it seems base offset is 0 for all the topics.
https://pastebin.com/uexp1NPx
Please help
Hi getting this error when trying to generate genesis.block for raft
```
2019-06-20 13:46:04.914 IST [common.tools.configtxgen] main -> INFO 001 Loading configuration
2019-06-20 13:46:04.944 IST [common.tools.configtxgen.localconfig] completeInitialization -> INFO 002 orderer type: etcdraft
2019-06-20 13:46:04.944 IST [common.tools.configtxgen.localconfig] completeInitialization -> INFO 003 Orderer.EtcdRaft.Options unset, setting to tick_interval:"500ms" election_tick:10 heartbeat_tick:1 max_inflight_blocks:5 snapshot_interval_size:20971520
2019-06-20 13:46:04.944 IST [common.tools.configtxgen.localconfig] Load -> INFO 004 Loaded configuration: /Users/shubham.kumar/Desktop/repos/blockahead_baas/configtx.yaml
2019-06-20 13:46:04.974 IST [common.tools.configtxgen.localconfig] completeInitialization -> INFO 005 orderer type: etcdraft
2019-06-20 13:46:04.975 IST [common.tools.configtxgen.localconfig] completeInitialization -> PANI 006 etcdraft raft configuration missing
2019-06-20 13:46:04.975 IST [common.tools.configtxgen] func1 -> PANI 007 etcdraft raft configuration missing
panic: etcdraft raft configuration missing [recovered]
panic: etcdraft raft configuration missing
goroutine 1 [running]:
github.com/hyperledger/fabric/vendor/go.uber.org/zap/zapcore.(*CheckedEntry).Write(0xc000447080, 0x0, 0x0, 0x0)
/w/workspace/fabric-release-jobs-x86_64/gopath/src/github.com/hyperledger/fabric/vendor/go.uber.org/zap/zapcore/entry.go:229 +0x515
github.com/hyperledger/fabric/vendor/go.uber.org/zap.(*SugaredLogger).log(0xc00000c248, 0xc0001a5604, 0xc00034b980, 0x23, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0)
/w/workspace/fabric-release-jobs-x86_64/gopath/src/github.com/hyperledger/fabric/vendor/go.uber.org/zap/sugar.go:234 +0xf6
github.com/hyperledger/fabric/vendor/go.uber.org/zap.(*SugaredLogger).Panicf(0xc00000c248, 0xc00034b980, 0x23, 0x0, 0x0, 0x0)
/w/workspace/fabric-release-jobs-x86_64/gopath/src/github.com/hyperledger/fabric/vendor/go.uber.org/zap/sugar.go:159 +0x79
github.com/hyperledger/fabric/common/flogging.(*FabricLogger).Panic(0xc00000c250, 0xc0001a5768, 0x1, 0x1)
/w/workspace/fabric-release-jobs-x86_64/gopath/src/github.com/hyperledger/fabric/common/flogging/zap.go:73 +0x75
main.main.func1()
/w/workspace/fabric-release-jobs-x86_64/gopath/src/github.com/hyperledger/fabric/common/tools/configtxgen/main.go:260 +0x1a9
panic(0x163d3a0, 0xc00033f000)
/opt/go/go1.11.5.linux.amd64/src/runtime/panic.go:513 +0x1b9
github.com/hyperledger/fabric/vendor/go.uber.org/zap/zapcore.(*CheckedEntry).Write(0xc000447080, 0x0, 0x0, 0x0)
/w/workspace/fabric-release-jobs-x86_64/gopath/src/github.com/hyperledger/fabric/vendor/go.uber.org/zap/zapcore/entry.go:229 +0x515
github.com/hyperledger/fabric/vendor/go.uber.org/zap.(*SugaredLogger).log(0xc00000c228, 0x4, 0x17659b1, 0x1d, 0xc0001a5c10, 0x1, 0x1, 0x0, 0x0, 0x0)
/w/workspace/fabric-release-jobs-x86_64/gopath/src/github.com/hyperledger/fabric/vendor/go.uber.org/zap/sugar.go:234 +0xf6
github.com/hyperledger/fabric/vendor/go.uber.org/zap.(*SugaredLogger).Panicf(0xc00000c228, 0x17659b1, 0x1d, 0xc0001a5c10, 0x1, 0x1)
/w/workspace/fabric-release-jobs-x86_64/gopath/src/github.com/hyperledger/fabric/vendor/go.uber.org/zap/sugar.go:159 +0x79
github.com/hyperledger/fabric/common/flogging.(*FabricLogger).Panicf(0xc00000c230, 0x17659b1, 0x1d, 0xc0001a5c10, 0x1, 0x1)
/w/workspace/fabric-release-jobs-x86_64/gopath/src/github.com/hyperledger/fabric/common/flogging/zap.go:74 +0x60
github.com/hyperledger/fabric/common/tools/configtxgen/localconfig.(*Orderer).completeInitialization(0xc0001863f0, 0xc0002574c0, 0x32)
/w/workspace/fabric-release-jobs-x86_64/gopath/src/github.com/hyperledger/fabric/common/tools/configtxgen/localconfig/config.go:397 +0xcf3
github.com/hyperledger/fabric/common/tools/configtxgen/localconfig.(*TopLevel).completeInitialization(0xc00050f9f0, 0xc0002574c0, 0x32)
/w/workspace/fabric-release-jobs-x86_64/gopath/src/github.com/hyperledger/fabric/common/tools/configtxgen/localconfig/config.go:303 +0xaf
github.com/hyperledger/fabric/common/tools/configtxgen/localconfig.LoadTopLevel(0x0, 0x0, 0x0, 0x0)
/w/workspace/fabric-release-jobs-x86_64/gopath/src/github.com/hyperledger/fabric/common/tools/configtxgen/localconfig/config.go:243 +0x4ca
main.main()
/w/workspace/fabric-release-jobs-x86_64/gopath/src/github.com/hyperledger/fabric/common/tools/configtxgen/main.go:278 +0xcab
make: *** [generate_arts_raft] Error 2
```
What am i doing wrong? I can share the configtx.yaml file if thats needed
nevermind found the issue
which version are you using? and does this actually prevent network from working?
I am using fabric-1.3. I am not able to perform any transactions. The logs when I submit a transaction are as follows:
```
Failed to order the transaction. Error code: SERVICE_UNAVAILABLE
Transaction failed to be committed to the ledger due to ::TIMEOUT
```
I have posted the orderer logs in the question. I am not sure if this is the only reason write transactions are failing.
can you do this:
- turn on verbose (in `orderer.yaml`)
- turn on debug log (i think you already did)
- submit a tx and post complete log via pastebin
Okay. I will do that.
Here it is:
https://pastebin.com/0z0hbEBt
i suspect there's some problem with the orderer-kafka connection, since your channel is not actually started. do you by any chance have access to the logs since boot?
My network has been up for many weeks. Everything was working fine, but since yesterday I am suddenly getting this error. I don't have the old logs, but I can reboot the system and capture the logs. Will that work?
how many orderer nodes are you running? you can reboot one of them and grab logs
I have 3 orderer nodes. I am removing the docker service of one node and will start it again to grab logs. Is that fine?
yes
Logs on orderer reboot:
https://pastebin.com/FcZhqAen
so the next msg offset fabric is expecting is 17944:
```
2019-06-20 09:04:55.058 UTC [orderer/consensus/kafka] setupChannelConsumerForChannel -> INFO 1b7 [channel: eprocurechannelall] Setting up the channel consumer for this channel (start offset: 17944)...
```
However, the latest offset on kafka is 15426 (according to your cli output)
could you check if that offset is available on other brokers of kafka?
On all kafka brokers, available offset is:
eprocurechannelall:0:15427
could you paste kafka configs?
```
environment:
- KAFKA_MESSAGE_MAX_BYTES=103809024 # 99 * 1024 * 1024 B
- KAFKA_REPLICA_FETCH_MAX_BYTES=103809024 # 99 * 1024 * 1024 B
- KAFKA_UNCLEAN_LEADER_ELECTION_ENABLE=false
- CORE_VM_DOCKER_HOSTCONFIG_NETWORKMODE=hyperledger-ov
- KAFKA_BROKER_ID=1
- KAFKA_DEFAULT_REPLICATION_FACTOR=3
- KAFKA.MIN_INSYNC_REPLICAS=2
- KAFKA_ZOOKEEPER_CONNECT=zookeeper0:2181,zookeeper1:2181,zookeeper2:2181
- KAFKA_LOG_DIRS=/kafka/kafka_logs
```
that's all? if the `KAFKA_LOG_RETENTION_*` configs are left at their defaults, then kafka prunes log segments after a week by default
I haven't overridden them. But the default log.retention.ms=-1, so I thought it would be fine.
I saw there is also log.retention.hours=168
Now, is there any way to recover the network? And what should I set log retention to?
wait, if you indeed set `log.retention.ms=-1`, then it shouldn't prune logs, regardless of retention.hours
I didn't set that either. It is set to -1 by default in Kafka's server.properties.
no, -1 is not the default value
https://kafka.apache.org/documentation/
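For reference, the Fabric Kafka guide recommends pinning the settings below so segments are never pruned and unclean elections cannot lose data. This is a sketch of a compose `environment` section; exact env-var-to-property mapping depends on the Kafka image you use.

```yaml
environment:
  - KAFKA_MESSAGE_MAX_BYTES=103809024        # 99 * 1024 * 1024 B
  - KAFKA_REPLICA_FETCH_MAX_BYTES=103809024  # should be >= message.max.bytes
  - KAFKA_UNCLEAN_LEADER_ELECTION_ENABLE=false
  - KAFKA_MIN_INSYNC_REPLICAS=2              # note the underscore form; the "KAFKA.MIN_INSYNC_REPLICAS" spelling pasted above is likely ignored by the image
  - KAFKA_DEFAULT_REPLICATION_FACTOR=3
  - KAFKA_LOG_RETENTION_MS=-1                # disable time-based segment pruning
```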
as for recovery, you will need to read the existing data out of a peer and write it back to the orderer (with a fresh and properly configured kafka)
unfortunately there are no tools provided by the community, you'll need to handcraft it..
Here is the server.properties. In the last line log.retention.ms=-1
https://we.tl/t-puShtmg3RK
My network had been up for many weeks and I faced this issue for the first time.
How can I write all data to orderer? I don't have any idea about this.
essentially you'll need to write a tool (with the SDK or CLI) to pull blocks from the peer, extract the data out of them, and submit it to the orderer via the `broadcast` api. your situation is somewhat similar to @Rajatsharma , maybe he has experience to share.
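The rough shape of such a replay tool looks like this. Pure sketch: the fetch/broadcast helpers below are stand-ins for real calls (e.g. `peer channel fetch` plus protobuf parsing, and the orderer Broadcast gRPC API via an SDK); only the replay-in-order loop is the point.

```python
def fetch_blocks_from_peer():
    # placeholder: a real tool would run `peer channel fetch <n>` per block
    # and decode the block protobuf to pull out the transaction envelopes
    return [{"number": n, "envelopes": [f"tx-{n}-0", f"tx-{n}-1"]} for n in range(3)]

def broadcast_to_orderer(envelope, sent):
    # placeholder: a real tool would submit via the orderer Broadcast API
    sent.append(envelope)

def replay(blocks):
    """Re-submit every envelope to a fresh orderer, preserving block order."""
    sent = []
    for block in sorted(blocks, key=lambda b: b["number"]):
        for env in block["envelopes"]:
            broadcast_to_orderer(env, sent)
    return sent
```

The ordering matters: envelopes must reach the orderer in their original block sequence so the rebuilt chain matches what the peers already committed.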
As for why kafka offset is lower than fabric, we still need to figure it out... it looks like a huge gap (17944 vs 15426), were you able to perform any tx beyond 15426? do you still have kafka log from that point?
I think the gap is because I tried to perform many transactions but any of them is not committed due to this issue. Is there any way from which I can decrease the fabric offset so system can start again from that stable state without offset gap?
I also read this
```
# log.retention.ms
# Until the ordering service in Fabric adds support for pruning of the
# Kafka logs, time-based retention should be disabled so as to prevent
# segments from expiring. (Size-based retention -- see
# log.retention.bytes -- is disabled by default so there is no need to set
# it explicitly.)
# - KAFKA_LOG_RETENTION_MS=-1
```
So, is it okay to not set log retention explicitly?
the kafka offset is persisted as part of the block metadata, so failed transactions would not result in this gap. I think some data in kafka has been lost.
if you are using kafka image produced by fabric, then you don't need to set that config option.
No... it doesn't support decreasing offset
I am using the Fabric Kafka image only.
So data loss from Kafka is strange.
It happened in a local environment only, so I can restart the network, but I also have a live system, and it would be hard to handle there.
Thanks for the explanation. So one option is removing the panicking node and then adding it again with another transaction, and another solution is rebooting the leader while the panicking node is down and then starting the panicking node again? Or should everything be done (removing the node, rebooting the leader and adding the node again)?
just reboot the leader, then start the new node.
but the leader may be a different node for different channels
so you may just reboot the whole network for the sake of simplicity
So bring all orderers down and then bring them all up again?
yes
let me know if it works for you. i only tried by modifying integration test
Hi, does anyone know where and how the block header is created? I'm curious about the exact flow of how the hash is generated
The block (including the header) is created here
https://github.com/hyperledger/fabric/blob/42b652a0fc42bfcdc0afab7ee2e4a495d3344e9e/orderer/common/multichannel/blockwriter.go#L66-L87
It calls a hash function on the previous block's header to form the hash chain
https://github.com/hyperledger/fabric/blob/42b652a0fc42bfcdc0afab7ee2e4a495d3344e9e/protos/common/block.go#L50-L75
@jyellick Thank you! That helps a lot. Would I be correct to that this is being performed by the ordering service in the `deliver(seqno, prevhash, blob)` function, or is it being called somewhere else?
it is performed during the cutting of the block
at the block cutter
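To illustrate the hash chain in miniature: each new header records the hash of the previous header, so tampering with any block breaks every later link. This is not Fabric's exact encoding — the real code (linked above) hashes an ASN.1 DER serialization of the header's number, previous hash, and data hash — here we just concatenate the fields for clarity.

```python
import hashlib

def header_hash(number: int, previous_hash: bytes, data_hash: bytes) -> bytes:
    # Simplified stand-in for Fabric's ASN.1-based BlockHeaderBytes + SHA-256
    payload = number.to_bytes(8, "big") + previous_hash + data_hash
    return hashlib.sha256(payload).digest()

def build_chain(data_hashes):
    """Build headers where each previous_hash is the hash of the prior header."""
    prev = b"\x00" * 32  # placeholder for the genesis block's previous hash
    headers = []
    for n, dh in enumerate(data_hashes):
        headers.append({"number": n, "previous_hash": prev, "data_hash": dh})
        prev = header_hash(n, prev, dh)
    return headers
```

Verifying the chain is the reverse walk: recompute each header's hash and check it matches the next header's `previous_hash`.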
Has joined the channel.
Could anyone confirm that Orderer 1.4.1 is broken with Kafka when communicating over TLS? I've been testing setting it up and it works fine with 1.4.0, but not 1.4.1. Then I found this: https://jira.hyperledger.org/browse/FAB-15404
I've seen some other reports around TLS and Kafka with v1.4.1, so it seems like there might be an issue. If you are able to produce a patch against fabric-samples/first-network which exhibits this problem, it would be helpful in getting it debugged. You also might want to open a bug in JIRA with details.
Has joined the channel.
I'm not on this F/T so it's taking longer than I'd like, but I was advised for privacy to start a network with the owner in the system channel and then join the other orgs to it. I've done the join an extant org to a channel before, but never added an org to a (in this case single member) consortium. Is there any handy guide? Preferably allowing me to use configtxlator for syntax.
Or am I going to be trying to deconstruct the block and shoving stuff in randomly in the hope it all works?
we have a doc on this topic: https://hyperledger-fabric.readthedocs.io/en/latest/channel_update_tutorial.html
it refers to some scripts that extends byfn network to add new org. pls take a look
Thanks. The add the org3 crypto material is (i hope) what I'm looking for. It wasn't there the last time I was on that page - which was a long time ago I'll admit...
I'll go off and try to write some ugly code to automate the process inline with the Kubernetes yaml file generation and see how I get on.
Finally back onto this - the instructions don't seem like quite enough.
I'm using raft ordering.
I have privacy issues. The best way for me to fix them is to have each org have their own set of orderers, have the network owner come up attached to the system channel and then dynamically add in the other orgs - with their orderers - so they can't see the information in the system channel, but can interact on channels I create for them.
So - `org1` owns the network. It's attached to the system channel with a set of raft orderers. I have a consortium with one member in it - `org1`.
I want to add `org2` through `orgN` to this consortium so I can create channels and attach them selectively thereto, but so they can't see the contents of the system channel. There's a complex relationship between the orgs so the channels all have defined members, and the relationships have to remain private. `org3` is not be allowed to infer who `org2` has channels with.
The channel create stuff I have down pat.
The bit that's got me confused is what needs to be added to this genesis block inside the system channel to get the orderers to talk to one another.
Looking at the decoded block, there's an awful lot of stuff about raft that I don't remember seeing before. It doesn't seem like a single edit here is sufficient. Or is it?
The code I used before adds the configuration json to `channel_group.groups.Consortiums.groups.{consortium-name}.groups`. Is that *really* all that's required?
*edit*: Remember that the orderer nodes I want to add aren't supposed to be able to read anything in the system channel. I think that's where I'm struggling with the concepts here.
If I add the MSP to
```
"Orderer": {
"groups": {
```
start messing with the list in
```
"ConsensusType": {
"mod_policy": "Admins",
"value": {
"metadata": {
"consenters": [
```
wouldn't that allow them to look in the system channel?
Is sticking them in `OrdererAddresses` instead sufficient?
Just as an FYI - If anyone is copying the orderer.yaml file from the latest fabric-samples in the hope it'll provide clues for raft, it's missing a property.
Without the field `General.Cluster.RootCAs` set to `../tls/ca.crt` you struggle to come up cleanly.
Now if someone can tell me how to get these things to use hostnames rather than IPs to communicate while broadcasting, that would be great. I might be able to get past this issue
`TRANSIENT_FAILURE 2019-06-28 13:26:04.362 UTC [core.comm] ServerHandshake -> ERRO 43c TLS handshake failed with error remote error: tls: bad certificate server=Orderer remoteaddress=10.1.0.1:36766`
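For anyone hitting the same thing, the missing property lives under `General.Cluster` in `orderer.yaml`. A sketch of that section (the file paths are examples, not the sample's defaults):

```yaml
General:
  Cluster:
    # client TLS credentials this orderer presents when dialing other cluster members
    ClientCertificate: /var/hyperledger/orderer/tls/server.crt
    ClientPrivateKey: /var/hyperledger/orderer/tls/server.key
    # CA certs used to verify the server certificates of the other orderers
    RootCAs:
      - ../tls/ca.crt
```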
@aatkddny just put hostnames in your `configtx.yaml` and in your TLS certificates
and what is the error you get in case you omit the `RootCAs` from the `orderer.yaml` ?
@yacovm - I did. There's a SAN for each of the orderer names in the cert and same in configtx. This is coming from the orderer.
This problem is inside kubernetes - the only thing I've seen to fix relates to FAB-15648 and the same guy ended up using scraped IPs from his services and using a hostAliases file to fix it.
Surely there has to be a better way to do it than that. You can see it in situ here - https://github.com/APGGroeiFabriek/PIVT - search for `hostAliases.yaml`
re: the RootCA thing - it throws x509 errors initializing. Add it and then I get to the broadcast problem when it tries to talk to the other orderers.
@aatkddny wait... you have TLS termination or something?
how does FAB-15648 help you?
@yacovm It probably doesn't. I'm trying *just* to get the orderers to talk to one another inside a single cluster. Haven't got far enough into it to hit them from the outside.
I thought I'd cracked it when I added the RootCA property and they initialized, but then they started failing as they tried to talk to one another.
well, when they talk to each other they use the hostnames specified in `configtx.yaml`. the logger of the transient failure just prints the IPs
I do not think it's because of the SANs
what you can try to do, is record the TLS handshake using `tcpdump`
and then analyze it with wireshark
and see what the certificate the node really sends
I may not be the best person to do this. Analyzing network traffic isn't my forte. I stuck a tcpdump sidecar on one of the orderer pods and piped the output to a log file, but am struggling to get wireshark to do anything with it. File's human readable - timestamps and flags so it's probably the wrong thing. To be continued...
the error messages only have access to the IP address not the DNS name ... so even though communication from peer to orderer was via hostname resolution we can only show the remote IP in the error logs
`tls: bad certificate` usually occurs when the remote "client" sends a bad client certificate (or does not send a client certificate at all)
So likely you have not set up the cluster TLS configs or the orderer node TLS config properly
If you want to figure out the DNS names to use in your orderer server certs and in the cluster config, see https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/ for DNS names for kube pods and services
@aatkddny you can give me your dump... I can analyze it. Don't worry - since nothing is working for you anyway, you won't leak any data from your orderers intra-cluster connection ;)
you only leak the hostnames and IPs
and the file shouldn't be human readable. it should be in pcap format
use `tcpdump -w`
@yacovm so after a little quality time with google and the tcpdump man pages the problem i have is i'm trying to do this on a mac. so i can't get to the container - or more likely can't figure out how to.
apparently docker0 isn't available and there appears to be no way in.
*edited*
after a little more poking around I managed to hack a prepackaged container with tcpdump in it and get a pcap file that i sent out of band as a pm. i'll start looking at it in wireshark and see if i can figure out what it's finding objectionable, but if you have the opportunity to take a peek i'd be most grateful.
I figured that. I had also already tried adding the obvious names from here to my SAN list, without any luck. This stuff is so much easier without TLS getting in the way. I remember now why I gave up getting a running grpcs setup to my dev cluster with the kafka version. Spent way too long looking at the ingress manuals.
@aatkddny your pcap file doesn't contain a TLS handshake... it tries to establish a TCP connection but the orderer node (or, at least - that thing that is getting port 7050 SYN attempts) always returns an RST back to the client
What I'm trying to say is that the TLS client certificates you set up for your orderers are incorrect ... I don't think this is a SAN issue
I'm sure they are - other people seem to have had no problems doing the same.
I've tried using self signed ones from cryptogen for this thing and had no luck.
I then went down a rabbit hole trying to reverse engineer the cryptogen output - signing our root cert - that cost me a day or so and was no better.
I must be missing something obvious but I apparently don't know enough to know what it is I'm missing.
I feel like one of those "I want blockchain" posters right now...
Yup. I'm an idiot - it was glaringly obvious.
If anyone is following along or having the same issue, I missed the piece of documentation in the sample crypto-config.yaml that shows how to override the CN. I mistakenly assumed having it in the SAN was enough. It isn't. You need to have
`CommonName: "{{.Hostname}}-{{.Domain}}" ` in the spec.
I also have another issue that compounded the problem.
And now I also don't know how to satisfy my use case.
It calls for existing orgs in existing fabrics to be able to join this one.
Data leakage on the system channel is a concern so I decided to have *my* org be attached to the genesis channel, and then the other orgs added in as they join, so they can't see the details of who's doing what to who on the system channel.
That way I can set up channels between each of the orgs and include their orderers on the channels they are allowed to be members of.
The profile has:
```
MSPDir: /opt/blockchain/localhost/generated/crypto-config/peerOrganizations/myOrg/msp
```
Of course when it's configuring an orderer it needs `/ordererOrganizations...` and the certs aren't the same.
Should I define two orgs for this of the form `MyOrgMSP` and `MyOrdererOrgMSP`
Or do we need another profile along the lines of `OrdererMSP`
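For anyone following along, the `CommonName` override discussed above goes in `crypto-config.yaml` under a node `Specs` entry. A hedged sketch (the org, domain, and kube service names are placeholders):

```yaml
OrdererOrgs:
  - Name: MyOrdererOrg
    Domain: example.com
    Specs:
      - Hostname: orderer0
        # overrides cryptogen's default CN of "{{.Hostname}}.{{.Domain}}"
        CommonName: "{{.Hostname}}-{{.Domain}}"
        SANS:
          - "orderer0.my-namespace.svc.cluster.local"  # example kube service DNS name
```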
Hello Guys
Could you please help me in resolving the below issue.
Issue: In our Kafka-based HL network there are 4 Kafka brokers and 3 ZooKeepers running as docker containers; we have 3 channels and one orderer system channel, testchainid. Everything was working fine until Sunday, after which we saw errors in the ordering service during invocations. We then restarted our ZooKeeper services and restarted kafka0, kafka1, kafka2 and kafka3 in that order, with a 10-second gap after each Kafka restart. We have done this process (roughly every 3 weeks) whenever we faced such an issue.
However this time when we did the same process, kafka2 and kafka1 shut down after the restart, and when we checked the logs we found this error *FATAL [ReplicaFetcher replicaId=2, leaderId=0, fetcherId=0] Exiting because log truncation is not allowed for partition testchainid-0, current leader's latest offset 96672 is less than replica's latest offset 96674 (kafka.server.ReplicaFetcherThread)* on broker2, and the same error on broker1 as well. So basically we have two channels with broker0 as leader - ort and testchainid - and the rest of the channels are on the other brokers.
Also, when we stopped the kafka0 broker and then restarted kafka1, kafka2 and kafka3, kafka1 and kafka2 did not shut down. So the problem is that as soon as I restart the kafka0 broker, brokers kafka1 and kafka2 shut down immediately.
Now with kafka0 stopped and the rest of the brokers kafka1, kafka2, and kafka3 running, I am able to invoke on the other channels, but I see this error in the orderer logs for the ort channel: *[orderer/consensus/kafka] processMessagesToBlocks -> ERRO 3a0123d [channel: ort] Error during consumption: kafka: error while consuming ort/0: kafka server: In the middle of a leadership election, there is currently no leader for this partition and hence it is unavailable for writes.*
So I thought I'd got this down, but now I'm not sure if it's working or not.
I just recreated everything from scratch to be sure nothing else was getting in the way. It all seems to come up ok now.
I can't tell from this message set if it's working and I can ignore the debug error or not.
```
2019-07-02 12:46:19.730 UTC [orderer.consensus.etcdraft] Step -> INFO 333 3 is starting a new election at term 1 channel=genesischannel node=3
2019-07-02 12:46:19.731 UTC [orderer.consensus.etcdraft] becomePreCandidate -> INFO 334 3 became pre-candidate at term 1 channel=genesischannel node=3
2019-07-02 12:46:19.731 UTC [orderer.consensus.etcdraft] poll -> INFO 335 3 received MsgPreVoteResp from 3 at term 1 channel=genesischannel node=3
2019-07-02 12:46:19.732 UTC [orderer.consensus.etcdraft] campaign -> INFO 336 3 [logterm: 1, index: 3] sent MsgPreVote request to 1 at term 1 channel=genesischannel node=3
2019-07-02 12:46:19.733 UTC [orderer.consensus.etcdraft] campaign -> INFO 337 3 [logterm: 1, index: 3] sent MsgPreVote request to 2 at term 1 channel=genesischannel node=3
2019-07-02 12:46:19.733 UTC [orderer.consensus.etcdraft] consensusSent -> DEBU 338 Sending msg of 28 bytes to 1 on channel genesischannel took 80.2µs
2019-07-02 12:46:19.734 UTC [orderer.consensus.etcdraft] logSendFailure -> ERRO 339 Failed to send StepRequest to 1, because: rpc error: code = Unavailable desc = all SubConns are in TransientFailure, latest connection error: connection error: desc = "transport: Error while dialing dial tcp 10.99.87.158:7050: connect: connection refused" channel=genesischannel node=3
2019-07-02 12:46:19.735 UTC [orderer.consensus.etcdraft] consensusSent -> DEBU 33a Sending msg of 28 bytes to 2 on channel genesischannel took 368.1µs
2019-07-02 12:46:19.735 UTC [orderer.consensus.etcdraft] logSendFailure -> ERRO 33b Failed to send StepRequest to 2, because: rpc error: code = Unavailable desc = all SubConns are in TransientFailure, latest connection error: connection error: desc = "transport: Error while dialing dial tcp 10.111.89.86:7050: connect: connection refused" channel=genesischannel node=3
```
On one hand the `consensusSent` messages look good on the other the `logSendFailure` ones don't.
It repeats the message set about every 10s.
it sounds like a kafka issue.. have you tried reaching out to kafka community?
Nope, but I found a similar issue reported as an open bug:
KAFKA-3410
Unclean leader election and "Halting because log truncation is not allowed"
I searched for this error and found the bug below, still in the open state:
https://issues.apache.org/jira/browse/KAFKA-3410
have you tried restoring kafka service with persisted data?
I am running zookeeper and kafka brokers as docker containers
You don't have the data persisted to durable storage?
Below is my docker-compose configuration for one of the ZooKeeper nodes and one of the Kafka nodes:
```
zookeeper0:
  container_name: zookeeper0
  image: hyperledger/fabric-zookeeper:latest
  dns_search: .
  # restart: always
  ports:
    - 2181:2181
    - 2888:2888
    - 3888:3888
  environment:
    - ZOO_MY_ID=1
    - ZOO_SERVERS=server.1=zookeeper0:2888:3888 server.2=zookeeper1:2888:3888 server.3=zookeeper2:2888:3888
  networks:
    - fabric-ca
  volumes:
    - ./hosts/zookeeper0hosts/hosts:/etc/hosts
kafka0:
  container_name: kafka0
  image: hyperledger/fabric-kafka:latest
  dns_search: .
  # restart: always
  environment:
    - KAFKA_MESSAGE_MAX_BYTES=103809024 # 99 * 1024 * 1024 B
    - KAFKA_REPLICA_FETCH_MAX_BYTES=103809024 # 99 * 1024 * 1024 B
    - KAFKA_UNCLEAN_LEADER_ELECTION_ENABLE=false
    - KAFKA_BROKER_ID=0
    - KAFKA_HOST_NAME=kafka0
    - KAFKA_LISTENERS=EXTERNAL://0.0.0.0:9092,REPLICATION://0.0.0.0:9093
    - KAFKA_ADVERTISED_LISTENERS=EXTERNAL://10.64.67.212:9092,REPLICATION://kafka0:9093
    - KAFKA_LISTENER_SECURITY_PROTOCOL_MAP=EXTERNAL:PLAINTEXT,REPLICATION:PLAINTEXT
    - KAFKA_INTER_BROKER_LISTENER_NAME=REPLICATION
    - KAFKA_MIN_INSYNC_REPLICAS=2
    - KAFKA_DEFAULT_REPLICATION_FACTOR=3
    - KAFKA_ZOOKEEPER_CONNECT=zookeeper0:2181,zookeeper1:2181,zookeeper2:2181
  ports:
    - 9092:9092
    - 9093:9093
  networks:
    - fabric-ca
  volumes:
    - ./hosts/kafka0hosts/hosts:/etc/hosts
```
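Side note for anyone reading along: neither service above mounts the Kafka log directory or the ZooKeeper data directory to a volume, so re-creating the containers can lose log segments. A sketch of adding named volumes — the volume names, the `KAFKA_LOG_DIRS` env-to-property mapping, and the ZooKeeper mount paths are assumptions about these images, so check your image's actual data dirs:
```
# Hypothetical additions so broker/ZooKeeper state survives container re-creation
zookeeper0:
  volumes:
    - zk0-data:/data          # ZOO_DATA_DIR in the official zookeeper image
    - zk0-datalog:/datalog
kafka0:
  environment:
    - KAFKA_LOG_DIRS=/var/kafka-logs   # mapped to log.dirs by the image's entrypoint
  volumes:
    - kafka0-logs:/var/kafka-logs
volumes:
  zk0-data:
  zk0-datalog:
  kafka0-logs:
```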
Below are the Kafka broker logs after the restart:
Broker0 - https://hastebin.com/zavocatace.sql
Broker1 - https://hastebin.com/latojedemu.sql
Broker2 - https://hastebin.com/poxudijepi.sql
Broker3 - https://hastebin.com/doliqohufa.sql
Hi,
I have been trying to run my application on 1.4.1 with the Raft orderer and I get an error while creating a channel:
"status:FORBIDDEN reason:implicit policy evaluation failed - 0 sub-policies were satisfied, but this policy requires 1 of the 'Writers' sub-policies to be satisfied: permission denied"
In the orderer logs, I see the following error:
"identity 0 does not satisfy principal: the identity is a member of a different MSP"
I have checked my certificates multiple times, and also the policies, but I still can't figure out the error. Any help will be appreciated.
Has joined the channel.
*Does anybody know how to solve this error? I am using the Kafka ordering service.*
```
2019-07-04 05:07:04.629 UTC [common.deliver] deliverBlocks -> WARN 001 [channel: mychannel] Rejecting deliver request for 10.244.1.91:39800 because of consenter error
2019-07-04 05:07:04.844 UTC [common.deliver] deliverBlocks -> WARN 002 [channel: mychannel] Rejecting deliver request for 10.244.1.91:39804 because of consenter error
2019-07-04 05:07:05.058 UTC [common.deliver] deliverBlocks -> WARN 003 [channel: mychannel] Rejecting deliver request for 10.244.1.91:39806 because of consenter error
2019-07-04 05:07:05.278 UTC [common.deliver] deliverBlocks -> WARN 004 [channel: mychannel] Rejecting deliver request for 10.244.1.91:39808 because of consenter error
2019-07-04 05:07:08.992 UTC [common.deliver] Handle -> WARN 005 Error reading from 10.244.1.91:39830: rpc error: code = Canceled desc = context canceled
2019-07-04 05:07:08.992 UTC [orderer.common.broadcast] Handle -> WARN 006 Error reading from 10.244.1.91:39832: rpc error: code = Canceled desc = context canceled
2019-07-04 05:07:17.869 UTC [common.deliver] Handle -> WARN 007 Error reading from 10.244.2.84:59718: rpc error: code = Canceled desc = context canceled
2019-07-04 05:07:17.870 UTC [orderer.common.broadcast] Handle -> WARN 008 Error reading from 10.244.2.84:59720: rpc error: code = Canceled desc = context canceled
2019-07-04 05:07:30.500 UTC [common.deliver] Handle -> WARN 009 Error reading from 10.244.3.84:40092: rpc error: code = Canceled desc = context canceled
2019-07-04 05:07:30.500 UTC [orderer.common.broadcast] Handle -> WARN 00a Error reading from 10.244.3.84:40094: rpc error: code = Canceled desc = context canceled
2019-07-04 05:08:38.136 UTC [orderer.common.broadcast] Handle -> WARN 00b Error reading from 10.244.1.91:40354: rpc error: code = Canceled desc = context canceled
2019-07-04 05:08:42.523 UTC [orderer.common.broadcast] Handle -> WARN 00c Error reading from 10.244.1.91:40372: rpc error: code = Canceled desc = context canceled
2019-07-04 05:08:48.583 UTC [orderer.common.broadcast] Handle -> WARN 00d Error reading from 10.244.1.91:40396: rpc error: code = Canceled desc = context canceled
2019-07-04 05:08:53.367 UTC [orderer.common.broadcast] Handle -> WARN 00e Error reading from 10.244.1.91:40414: rpc error: code = Canceled desc = context canceled
2019-07-04 05:08:59.081 UTC [orderer.common.broadcast] Handle -> WARN 00f Error reading from 10.244.1.91:40196: rpc error: code = Canceled desc = context canceled
2019-07-04 05:09:25.505 UTC [orderer.common.broadcast] Handle -> WARN 010 Error reading from 10.244.1.91:40240: rpc error: code = Canceled desc = context canceled
2019-07-04 05:09:29.679 UTC [orderer.common.broadcast] Handle -> WARN 011 Error reading from 10.244.1.91:40274: rpc error: code = Canceled desc = context canceled
2019-07-04 05:12:49.207 UTC [orderer.common.broadcast] Handle -> WARN 012 Error reading from 10.244.1.91:41024: rpc error: code = Canceled desc = context canceled
2019-07-04 05:15:32.779 UTC [orderer.common.broadcast] Handle -> WARN 013 Error reading from 10.244.1.91:41692: rpc error: code = Canceled desc = context canceled
2019-07-04 05:15:36.192 UTC [orderer.common.broadcast] Handle -> WARN 014 Error reading from 10.244.1.91:41714: rpc error: code = Canceled desc = context canceled
2019-07-04 05:16:42.004 UTC [orderer.common.broadcast] Handle -> WARN 015 Error reading from 10.244.1.91:41752: rpc error: code = Canceled desc = context canceled
```
Hi,
wait 5-10 minutes before re-running your command.
That should resolve the consenter error.
Has joined the channel.
Has joined the channel.
Could you send a more complete orderer log, preferably at `debug` level and with `verbose` turned on in the `kafka` section?
Has joined the channel.
Hi, I set up the ordering service using 5 orderers in Raft mode, but on the first orderer i get this message every minute or so (the port number changes every time):
```WARN 076 Error reading from 172.22.0.15:52944: rpc error: code = Canceled desc = context canceled
2019-07-08 13:38:43.225 UTC [comm.grpc.server] 1 -> INFO 077 streaming call completed grpc.service=orderer.AtomicBroadcast grpc.method=Broadcast grpc.peer_address=172.22.0.15:52944 error="rpc error: code = Canceled desc = context canceled" grpc.code=Canceled grpc.call_duration=338.582048ms
```
@mattiabolzonella1 it's fine.... it just means the client disconnected
Oh I understand, thanks!
Has joined the channel.
Hey guys, I am trying to set up the orderer with Raft and there are a couple of things I'm not clear on yet:
- In the Orderer section of configtx.yaml, should the OrdererType be raft?
- For the certificates, are they the signcerts from the TLS folder of each orderer?
Thanks in advance.
Since it took me way longer than it should have to get this working, I'll try to pay it forward. These are the parameters I have for a 3-node Raft setup in an orderer MSP. Ignore the hyphens - this is for k8s. Just be aware that there are a few wrinkles in orderer.yaml that might trip you up too.
`OrdererType: etcdraft`
```
# EtcdRaft defines configuration which must be set when the "etcdraft"
# orderertype is chosen.
EtcdRaft:
    # The set of Raft replicas for this network. For the etcd/raft-based
    # implementation, we expect every replica to also be an OSN. Therefore,
    # a subset of the host:port items enumerated in this list should be
    # replicated under the Orderer.Addresses key above.
    Consenters:
        - Host: raft0-orderer
          Port: 7050
          ClientTLSCert: /opt/blockchain/localhost/generated/crypto-config/ordererOrganizations/orderer/orderers/raft0-orderer/tls/server.crt
          ServerTLSCert: /opt/blockchain/localhost/generated/crypto-config/ordererOrganizations/orderer/orderers/raft0-orderer/tls/server.crt
        - Host: raft1-orderer
          Port: 7050
          ClientTLSCert: /opt/blockchain/localhost/generated/crypto-config/ordererOrganizations/orderer/orderers/raft1-orderer/tls/server.crt
          ServerTLSCert: /opt/blockchain/localhost/generated/crypto-config/ordererOrganizations/orderer/orderers/raft1-orderer/tls/server.crt
        - Host: raft2-orderer
          Port: 7050
          ClientTLSCert: /opt/blockchain/localhost/generated/crypto-config/ordererOrganizations/orderer/orderers/raft2-orderer/tls/server.crt
          ServerTLSCert: /opt/blockchain/localhost/generated/crypto-config/ordererOrganizations/orderer/orderers/raft2-orderer/tls/server.crt
```
My genesis channel uses this profile.
```
Profiles:
    Genesis:
        <<: *ChannelDefaults
        Capabilities:
            <<: *ChannelCapabilities
        Orderer:
            <<: *OrdererDefaults
            OrdererType: etcdraft
            EtcdRaft:
                Consenters:
                    - Host: raft0-orderer
                      Port: 7050
                      ClientTLSCert: /opt/blockchain/localhost/generated/crypto-config/ordererOrganizations/orderer/orderers/raft0-orderer/tls/server.crt
                      ServerTLSCert: /opt/blockchain/localhost/generated/crypto-config/ordererOrganizations/orderer/orderers/raft0-orderer/tls/server.crt
                    - Host: raft1-orderer
                      Port: 7050
                      ClientTLSCert: /opt/blockchain/localhost/generated/crypto-config/ordererOrganizations/orderer/orderers/raft1-orderer/tls/server.crt
                      ServerTLSCert: /opt/blockchain/localhost/generated/crypto-config/ordererOrganizations/orderer/orderers/raft1-orderer/tls/server.crt
                    - Host: raft2-orderer
                      Port: 7050
                      ClientTLSCert: /opt/blockchain/localhost/generated/crypto-config/ordererOrganizations/orderer/orderers/raft2-orderer/tls/server.crt
                      ServerTLSCert: /opt/blockchain/localhost/generated/crypto-config/ordererOrganizations/orderer/orderers/raft2-orderer/tls/server.crt
            Addresses:
                - raft0-orderer:7050
                - raft1-orderer:7050
                - raft2-orderer:7050
            Organizations:
                - *Orderer
            Capabilities:
                <<: *OrdererCapabilities
        Application:
            ...
```
I've lifted the config from a bunch of different samples, so it might not be perfect as a reference, but it all seems to be working.
Finally...
So I have a question about multiple orderer orgs that's not quite clear in my mind.
Say I want to have two orgs work together, but they each have their own HLF network already.
A set of orderers and a set of peers each that are running doing whatever they are doing (likely with other nodes that aren't related to this discussion).
I'm going to assume I can create a channel between them by adding the second org to the first org's system channel.
2. Org1 speaks only to Org1's orderers and Org2 speaks only to Org2's orderers.
I'll leave chaincode out of it - I'm imagining signing to be an out of band operation. I'm also looking at this from the perspective of automating and I'm using the java SDK so the order of operation is determined by what works there for a separate orderer org.
When I try to connect one of the orgs to the channel I just created (ha), the code loads from a network config file - think of the JSON that comes down in IBP. I'm building this from my network definition right now, so I'm wondering whether the channel needs all the orderers - from all the orgs - defined for each org, or whether it's OK with just the orgN orderers in the definition for orgN.
I'm thinking that our clients might not want to be publicizing their orderer certs outside of the respective orgs.
Is this feasible, or are we constrained to only joining the peers from orgs 1 and 2 to a separate orderer msp as it is now?
I've left a lot of detail out. I wanted this to not turn into war and peace.
1) the name is `etcdraft`
2) specifically for the Raft orderer, you'll need to supply server & client TLS certs
TBH i don't quite get your question here..
> Say I want to have two orgs work together, but they each have their own HLF network already.
If they already have two distinct networks, then you cannot join an orderer from org2 to org1 (they maintain two different chains).
thank you guoger!
Since it took me way longer than it should have to get this up I'll try to pay it forward. These are the parameters I have for a 3 raft setu
ty @aatkddny
Also, for the channel administrators: the "Please Read Before Posting" link under #fabric-orderer goes to a page not found.
Thanks for the reminder... after the Linux Foundation migrated the wiki, that page was archived, I think... although the questions asked recently are fairly sophisticated, and we probably don't need that post anymore.
Hi! I've set up a Fabric network with 5 orderers in Raft mode following the BYFN repo. The logs of each orderer all look OK, and if I invoke chaincode from the CLI I have no problems.
But when I use the Java SDK (I've already asked in the dedicated channel): I set up the HFClient with the admin certificate of Org1 (which administers the channel); I use service discovery, and in the logger I see that all peers and orderers of the channel are discovered. Now, when I try to send an update transaction (with a validated transaction response), I can only send the transaction to the first orderer of the cluster, and only if it's the current leader; if the transaction is sent to any other orderer I get the following exception:
RaftOrderers_&_JavaSDK.txt
Hi guys, how can one migrate from a Solo orderer to Raft?
When starting the orderer, Docker gives me the following error, panic: runtime error: index out of range:
```
Operations.TLS.Enabled = false
Operations.TLS.PrivateKey = ""
Operations.TLS.Certificate = ""
Operations.TLS.RootCAs = []
Operations.TLS.ClientAuthRequired = false
Operations.TLS.ClientRootCAs = []
Metrics.Provider = "disabled"
Metrics.Statsd.Network = "udp"
Metrics.Statsd.Address = "127.0.0.1:8125"
Metrics.Statsd.WriteInterval = 30s
Metrics.Statsd.Prefix = ""
panic: runtime error: index out of range
goroutine 1 [running]:
github.com/hyperledger/fabric/msp.(*bccspmsp).sanitizeCert(0xc0002079e0, 0xc000111700, 0x26, 0xc000531108, 0x1)
/opt/gopath/src/github.com/hyperledger/fabric/msp/mspimpl.go:691 +0x207
github.com/hyperledger/fabric/msp.newIdentity(0xc000111700, 0x1152560, 0xc00000ef98, 0xc0002079e0, 0xc00035e148, 0x1152560, 0xc00000ef98, 0x0)
/opt/gopath/src/github.com/hyperledger/fabric/msp/identities.go:47 +0x70
github.com/hyperledger/fabric/msp.(*bccspmsp).getIdentityFromConf(0xc0002079e0, 0xc000354000, 0x3cd, 0x400, 0x1, 0x1, 0x0, 0x7c8088, 0xc0000ac7e0, 0xff)
/opt/gopath/src/github.com/hyperledger/fabric/msp/mspimpl.go:161 +0x102
github.com/hyperledger/fabric/msp.(*bccspmsp).setupCAs(0xc0002079e0, 0xc00014b1d0, 0x0, 0x0)
/opt/gopath/src/github.com/hyperledger/fabric/msp/mspimplsetup.go:134 +0x65d
github.com/hyperledger/fabric/msp.(*bccspmsp).preSetupV1(0xc0002079e0, 0xc00014b1d0, 0xc0005312f0, 0x7d23a0)
/opt/gopath/src/github.com/hyperledger/fabric/msp/mspimplsetup.go:393 +0x64
github.com/hyperledger/fabric/msp.(*bccspmsp).setupV1(0xc0002079e0, 0xc00014b1d0, 0x1, 0x1)
/opt/gopath/src/github.com/hyperledger/fabric/msp/mspimplsetup.go:373 +0x39
github.com/hyperledger/fabric/msp.(*bccspmsp).setupV1-fm(0xc00014b1d0, 0x1026ec0, 0x1a)
/opt/gopath/src/github.com/hyperledger/fabric/msp/mspimpl.go:112 +0x34
github.com/hyperledger/fabric/msp.(*bccspmsp).Setup(0xc0002079e0, 0xc00034a300, 0x0, 0xc00034a3c0)
/opt/gopath/src/github.com/hyperledger/fabric/msp/mspimpl.go:225 +0x14d
github.com/hyperledger/fabric/msp/cache.(*cachedMSP).Setup(0xc0004f2f90, 0xc00034a300, 0x1159600, 0xc0004f2f90)
/opt/gopath/src/github.com/hyperledger/fabric/msp/cache/cache.go:88 +0x4b
github.com/hyperledger/fabric/common/channelconfig.(*MSPConfigHandler).ProposeMSP(0xc000508550, 0xc00034a300, 0x19, 0xc0005314c8, 0x1, 0x1)
/opt/gopath/src/github.com/hyperledger/fabric/common/channelconfig/msp.go:68 +0xc0
github.com/hyperledger/fabric/common/channelconfig.(*OrganizationConfig).validateMSP(0xc00034a2c0, 0x0, 0xffffffffffffffff)
/opt/gopath/src/github.com/hyperledger/fabric/common/channelconfig/organization.go:80 +0xc0
github.com/hyperledger/fabric/common/channelconfig.(*OrganizationConfig).Validate(0xc00034a2c0, 0xc000531550, 0x1)
/opt/gopath/src/github.com/hyperledger/fabric/common/channelconfig/organization.go:73 +0x2b
github.com/hyperledger/fabric/common/channelconfig.NewOrganizationConfig(0xc0004fcf48, 0x6, 0xc0004f55e0, 0xc000508550, 0x0, 0x0, 0x8)
/opt/gopath/src/github.com/hyperledger/fabric/common/channelconfig/organization.go:54 +0x10e
github.com/hyperledger/fabric/common/channelconfig.NewConsortiumConfig(0xc0004f5590, 0xc000508550, 0xc0005316c0, 0xf07a40, 0xc0004f2e70)
/opt/gopath/src/github.com/hyperledger/fabric/common/channelconfig/consortium.go:44 +0x196
github.com/hyperledger/fabric/common/channelconfig.NewConsortiumsConfig(0xc0004f5540, 0xc000508550, 0xc000531808, 0x4, 0x1b8ac00)
/opt/gopath/src/github.com/hyperledger/fabric/common/channelconfig/consortiums.go:31 +0x103
github.com/hyperledger/fabric/common/channelconfig.NewChannelConfig(0xc0004f5040, 0x0, 0x0, 0x0)
/opt/gopath/src/github.com/hyperledger/fabric/common/channelconfig/channel.go:104 +0x392
github.com/hyperledger/fabric/common/channelconfig.NewBundle(0xc0004fd2e0, 0xc, 0xc0004f2780, 0xc000536510, 0x0, 0x0)
/opt/gopath/src/github.com/hyperledger/fabric/common/channelconfig/bundle.go:196 +0x6b
github.com/hyperledger/fabric/common/channelconfig.NewBundleFromEnvelope(0xc0004f4a50, 0x1444, 0x1500, 0x114b520)
/opt/gopath/src/github.com/hyperledger/fabric/common/channelconfig/bundle.go:187 +0x14d
github.com/hyperledger/fabric/orderer/common/server.ValidateBootstrapBlock(0xc000079940, 0xc000079940, 0xc000531be8)
/opt/gopath/src/github.com/hyperledger/fabric/orderer/common/server/onboarding.go:349 +0xf7
github.com/hyperledger/fabric/orderer/common/server.Start(0x1013e09, 0x5, 0xc0004c8900)
/opt/gopath/src/github.com/hyperledger/fabric/orderer/common/server/main.go:97 +0x59
github.com/hyperledger/fabric/orderer/common/server.Main()
/opt/gopath/src/github.com/hyperledger/fabric/orderer/common/server/main.go:91 +0x1ce
main.main()
/opt/gopath/src/github.com/hyperledger/fabric/orderer/main.go:15 +0x20
```
So the error is thrown in this function:
```
func (msp *bccspmsp) sanitizeCert(cert *x509.Certificate) (*x509.Certificate, error) {
	if isECDSASignedCert(cert) {
		// Lookup for a parent certificate to perform the sanitization
		var parentCert *x509.Certificate
		chain, err := msp.getUniqueValidationChain(cert, msp.getValidityOptsForCert(cert))
		if err != nil {
			return nil, err
		}

		// at this point, cert might be a root CA certificate
		// or an intermediate CA certificate
		if cert.IsCA && len(chain) == 1 {
			// cert is a root CA certificate
			parentCert = cert
		} else {
			parentCert = chain[1]
		}

		// Sanitize
		cert, err = sanitizeECDSASignedCert(cert, parentCert)
		if err != nil {
			return nil, err
		}
	}
	return cert, nil
}
```
on this line:
```
parentCert = chain[1]
```
what am I missing?
Hi, does anyone know what port 8443 for the orderer is used for? Is it supposed to be open to public/who should be using this port?
OK, so I'm missing the tlscacert; that's why it's throwing that error.
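To make the failure mode concrete for anyone who hits the same panic: a standalone sketch (this is not Fabric code; the names are made up) of why a missing CA cert ends with `chain[1]` blowing up. With no CA certs configured in the MSP, the validation chain can contain only the leaf certificate itself, and for a non-CA cert the real code then indexes `chain[1]`:
```go
package main

import "fmt"

// parent mimics the branch structure of sanitizeCert using plain strings
// in place of *x509.Certificate. A root CA with a one-element chain is
// its own parent; anything else is expected to have a parent at chain[1].
func parent(chain []string, leafIsRootCA bool) string {
	if leafIsRootCA && len(chain) == 1 {
		return chain[0] // a root CA certificate is its own parent
	}
	if len(chain) < 2 {
		// Fabric has no guard like this here: it assumes the MSP config
		// supplied a CA, so chain[1] exists. Without the tlscacert,
		// chain[1] is the "index out of range" panic from the trace.
		return "<no parent: chain[1] would panic>"
	}
	return chain[1]
}

func main() {
	fmt.Println(parent([]string{"leaf"}, false))       // the missing-tlscacert case
	fmt.Println(parent([]string{"leaf", "ca"}, false)) // the normal case
}
```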
@guoger so how then would you have a consortium - one where members want to join, but each also has an existing Fabric network?
Have someone act as a network operator who owns a set of orderers, and allow the others to integrate only at the peer level? That's hardly a consortium and likely a tough sell. Or is this simply not possible?
look in orderer.yaml at
```
################################################################################
#
# Operations Configuration
#
# - This configures the operations server endpoint for the orderer
#
################################################################################
```
and https://hyperledger-fabric.readthedocs.io/en/latest/operations_service.html
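To summarize: 8443 is the operations endpoint, not a consensus port, so it normally stays internal (ops tooling, Prometheus) rather than being opened to the public. A sketch of the relevant orderer.yaml section - the listen address is illustrative, and the field names match the `Operations.TLS.*` defaults visible in the panic log above:
```
Operations:
  # The operations server; 8443 is just the conventional default in the
  # samples. It serves /healthz, /metrics and /logspec.
  ListenAddress: 127.0.0.1:8443
  TLS:
    Enabled: false          # enable, plus ClientAuthRequired, to restrict access
    Certificate: ""
    PrivateKey: ""
    ClientAuthRequired: false
    ClientRootCAs: []
```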
Has joined the channel.
Hi all,
Can anybody share a link where I can find details of the orderer and Kafka broker TLS setup? I would be very grateful.
Regards,
Soumya
Hi, is the TLS issue resolved? I am also facing it: whenever I try to start the orderer, I get a TLS handshake error in the Kafka broker logs.
the fix will be part of the upcoming release @soumyanayak
Thanks, Guoger, for the update. Any idea when the upcoming release will land, like a date?
most likely next week
I can provide more information.
Could someone help me, please?
If I run the orderer with debug logging, the error happens after this:
```
2019-07-11 08:55:13.896 UTC [common.channelconfig] NewStandardValues -> DEBU 0e7 Initializing protos for *channelconfig.ChannelProtos
2019-07-11 08:55:13.896 UTC [common.channelconfig] initializeProtosStruct -> DEBU 0e8 Processing field: HashingAlgorithm
2019-07-11 08:55:13.896 UTC [common.channelconfig] initializeProtosStruct -> DEBU 0e9 Processing field: BlockDataHashingStructure
2019-07-11 08:55:13.896 UTC [common.channelconfig] initializeProtosStruct -> DEBU 0ea Processing field: OrdererAddresses
2019-07-11 08:55:13.896 UTC [common.channelconfig] initializeProtosStruct -> DEBU 0eb Processing field: Consortium
2019-07-11 08:55:13.896 UTC [common.channelconfig] initializeProtosStruct -> DEBU 0ec Processing field: Capabilities
2019-07-11 08:55:13.896 UTC [common.channelconfig] NewStandardValues -> DEBU 0ed Initializing protos for *channelconfig.OrdererProtos
2019-07-11 08:55:13.897 UTC [common.channelconfig] initializeProtosStruct -> DEBU 0ee Processing field: ConsensusType
2019-07-11 08:55:13.897 UTC [common.channelconfig] initializeProtosStruct -> DEBU 0ef Processing field: BatchSize
2019-07-11 08:55:13.897 UTC [common.channelconfig] initializeProtosStruct -> DEBU 0f0 Processing field: BatchTimeout
2019-07-11 08:55:13.897 UTC [common.channelconfig] initializeProtosStruct -> DEBU 0f1 Processing field: KafkaBrokers
2019-07-11 08:55:13.897 UTC [common.channelconfig] initializeProtosStruct -> DEBU 0f2 Processing field: ChannelRestrictions
2019-07-11 08:55:13.897 UTC [common.channelconfig] initializeProtosStruct -> DEBU 0f3 Processing field: Capabilities
2019-07-11 08:55:13.897 UTC [common.channelconfig] NewStandardValues -> DEBU 0f4 Initializing protos for *channelconfig.OrganizationProtos
2019-07-11 08:55:13.897 UTC [common.channelconfig] initializeProtosStruct -> DEBU 0f5 Processing field: MSP
2019-07-11 08:55:13.897 UTC [common.channelconfig] validateMSP -> DEBU 0f6 Setting up MSP for org KaytekMSP
2019-07-11 08:55:13.897 UTC [msp] newBccspMsp -> DEBU 0f7 Creating BCCSP-based MSP instance
2019-07-11 08:55:13.897 UTC [msp] New -> DEBU 0f8 Creating Cache-MSP instance
2019-07-11 08:55:13.897 UTC [msp] Setup -> DEBU 0f9 Setting up MSP instance KaytekMSP
2019-07-11 08:55:13.898 UTC [msp.identity] newIdentity -> DEBU 0fa Creating identity instance for cert ---- BEGIN CERTIFICATE---- .....----END CERTIFICATE
```
also I have a question on so: https://stackoverflow.com/questions/56969192/panic-runtime-error-index-out-of-range-when-starting-the-orderer-with-genesis
has anyone implemented raft in balance transfer?
Not going to be done by the core Fabric development team as we've removed that sample
ok..thank you sir
Hello - I see that during the TLS handshake the orderer closes the connection after 5 seconds. Looking at the code, the relevant reference is src/github.com/hyperledger/fabric/core/comm/config.go
```
// Configuration defaults
var (
// Max send and receive bytes for grpc clients and servers
MaxRecvMsgSize = 100 * 1024 * 1024
MaxSendMsgSize = 100 * 1024 * 1024
// Default peer keepalive options
DefaultKeepaliveOptions = &KeepaliveOptions{
ClientInterval: time.Duration(1) * time.Minute, // 1 min
ClientTimeout: time.Duration(20) * time.Second, // 20 sec - gRPC default
ServerInterval: time.Duration(2) * time.Hour, // 2 hours - gRPC default
ServerTimeout: time.Duration(20) * time.Second, // 20 sec - gRPC default
ServerMinInterval: time.Duration(1) * time.Minute, // match ClientInterval
}
// strong TLS cipher suites
DefaultTLSCipherSuites = []uint16{
tls.TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,
tls.TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,
tls.TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,
tls.TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,
tls.TLS_RSA_WITH_AES_128_GCM_SHA256,
tls.TLS_RSA_WITH_AES_256_GCM_SHA384,
}
// default connection timeout
DefaultConnectionTimeout = 5 * time.Second
)
```
How do I change this parameter in docker-compose? I am using the Fabric v1.4.0 release.
Let me take a look
@rahulhegde Unfortunately it looks like that variable isn't exposed to be overridden through the orderer
I can open up a JIRA to try to make it available in v1.4.2, it should hopefully be a simple change
okay Jason - the TCP trace does show the FIN, ACK sent by the orderer soon after 5 seconds. However, I need to ascertain that this is the issue in our environment.
Are you seeing any parts of the HELLO come across from the client?
Ord_Issue_CAAdpt.PNG
yes - I can see it. 9151 is the exposed orderer port running on .53
Question 2 - The snapshot sent shows the configuration is already at the expected value, Readers. Do we still need to perform a channel update, setting it explicitly?
```
"peer/ChaincodeToChaincode": {
"policy_ref": "/Channel/Application/Readers" },
```
The 'snapshot sent'? I thought when I saw your channel config, it had no ACLs section.
So, we're seeing the client send a preamble, the server replies, and then silence, and eventually the server times out the connection?
Right - this channel was created in Fabric v1.0.6, and the JSON was created using the configtxlator of Fabric v1.4 during the migration process (Fabric v1.0.6 to Fabric v1.4.0). Could this be the reason?
Even so, do we need to explicitly set it?
So the ACLs need to go into the 'public' channel. If that channel was created prior to v1.3, most likely it does not have an ACLs section defined. In that case, you would need to define one and include that ACL
okay
okay, so shouldn't the absence of an ACL definition in the channel configuration mean a fallback to the default value in Fabric v1.4?
I checked this, and unfortunately, the default value in `configtx.yaml` is not actually the default value hard coded in the peer. The default value if unset is `/Channel/Application/Writers`
Okay Jason. Thanks.
Has joined the channel.
Hi, I have a question about the orderer design. I saw that the orderer holds the entire blockchain in order to know how to deterministically cut blocks, and that the `system channel` among them stores the network-level config,
my question is:
I can imagine that using solo, kafka, raft, or most leader-based consensus, orderers could deterministically cut the block (max block size or max block interval, whichever is hit first),
but how do the orderers agree on the cut if we move to a BFT assumption where an orderer could be byzantine? Furthermore, what if we consider a different assumption for the underlying network, e.g. partial synchrony with an unknown bound on message delay?
and also, doesn't this violate the `stateless` model that the original fabric paper proposed?
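For reference, the deterministic cutting rule mentioned above (cut when the batch hits its size limit, or when the batch timeout expires, whichever comes first) can be sketched roughly as below - a simplified illustration with made-up names, not Fabric's actual blockcutter code, which also tracks byte-size limits:

```python
class BatchCutter:
    """Simplified sketch: cut on max_message_count or batch_timeout_sec,
    whichever is hit first. Illustrative only."""

    def __init__(self, max_message_count, batch_timeout_sec):
        self.max_message_count = max_message_count
        self.batch_timeout_sec = batch_timeout_sec
        self.pending = []       # transactions waiting to be cut
        self.first_ts = None    # time the oldest pending tx was queued

    def ordered(self, tx, now):
        """Queue tx; return a cut batch (list of txs) or None."""
        if self.first_ts is None:
            self.first_ts = now
        self.pending.append(tx)
        if len(self.pending) >= self.max_message_count:
            return self._cut()
        return None

    def tick(self, now):
        """Called periodically; cut on timeout if anything is pending."""
        if self.pending and now - self.first_ts >= self.batch_timeout_sec:
            return self._cut()
        return None

    def _cut(self):
        batch, self.pending, self.first_ts = self.pending, [], None
        return batch
```

Since every orderer applies the same rule to the same ordered stream of transactions, the cut points come out identical on all nodes - which is exactly what breaks down once you cannot trust the node producing the stream.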
Has joined the channel.
`"success":false,"message":"Channel 'mychannel' failed to create status:SERVICE_UNAVAILABLE reason:no Raft leader"` - could anyone help me out with this?
In case of BFT, it can cut arbitrary blocks according to the transactions it sees. In most BFT algorithms, there is a way to detect that the leader is misbehaving and censoring a particular client and not putting its transactions into the blocks and then advocating a view change.
If enough honest nodes detect this, a view change occurs and the leader is dethroned.
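The sizing behind that argument is the standard BFT bound: tolerating f byzantine nodes requires n >= 3f + 1 nodes, with quorums of 2f + 1 so that any two quorums intersect in at least one honest node. A small sketch of the arithmetic:

```python
def bft_fault_tolerance(n):
    """Maximum number of byzantine faults f tolerable by n nodes,
    from the classic bound n >= 3f + 1."""
    return (n - 1) // 3

def bft_quorum(n):
    """Quorum size 2f + 1: any two quorums of this size overlap in
    at least f + 1 nodes, hence at least one honest node."""
    return 2 * bft_fault_tolerance(n) + 1
```

So a 4-node BFT cluster tolerates one byzantine node with quorums of 3, and a 7-node cluster tolerates two with quorums of 5.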
[2019-07-15 15:35:15.251] [ERROR] Create-Channel - Failed to create the channel. status:SERVICE_UNAVAILABLE reason:no Raft leader
Hi All, is it possible to migrate from the solo orderer to RAFT?
Has joined the channel.
Hi we are facing an issue with kafka in production, "FATAL [ReplicaFetcher replicaId=1, leaderId=2, fetcherId=0] Exiting because log truncation is not allowed for partition headersms-0, current leader's latest offset 7 is less than replica's latest offset 80 (kafka.server.ReplicaFetcherThread)"
Please let me know if this is not correct channel for the query
Screenshot from 2019-07-15 17-55-36.png
I think Raft brings a bigger change to Fabric than Kafka did. With Raft, each channel has its own orderers, and an orderer need not hold all channels' ledgers. Isn't that great progress?
if the orderer does join all channels
it is possible, but will not be documented. steps are the same as kafka->raft migration
i suspect that TLS on client is not enabled..
is it a question or statement?
is my impression of raft wrong?
what's your idea on this?
yes, i agree, it is different from kafka-based ordering service in various aspects.
As transactions are now processed only by orderers in the same channel, efficiency is greatly improved. With this design, consensus is no longer the system bottleneck, and it is even more efficient than Zilliqa, a public blockchain.
not necessarily undermining fabric, but the ordering service is only part of Fabric consensus. I wouldn't conclude that consensus is not the bottleneck anymore... (the orderer was not the bottleneck in kafka-based consensus either)
although, the same argument can be made for peers - only those in a channel are participating in consensus
in kafka, all transactions are ordered by kafka. this tends to be a bottleneck, right?
with raft, is every channel almost a complete blockchain?
now that a channel becomes a complete blockchain, what benefits will it bring us? what should be done to communicate among channels?
hi, is the raft orderer ready to use safely in production?
Has joined the channel.
it is used in fabric 1.4, a long-term support version. should be safe in production
ty
Has joined the channel.
Hi..!!!!
Has anyone added an orderer using the etcdraft protocol in a running network?
I am getting an error when I modify the config.json file and update the channel with the new orderer address and its TLS certs
083 Deactivating node 6 in channel firstchannel with endpoint of orderer6.example.com:7050 due to TLS certificate change
what exactly did you modify - IIRC, this is not an error, but info
I fetched the configuration file
and added the TLS certs with the host and port of the new orderer
but my newly added orderer is not working
and it is also not able to communicate with the other orderers
did you add it to `firstchannel`?
no, that is exactly where I am facing the issue
can you suggest any blog or article I can follow to do so?
https://hyperledger-fabric.readthedocs.io/en/latest/raft_configuration.html#reconfiguration
I was not able to do it with this
could you post orderer log?
yaa sure
pls use pastebin or equivalent
Please give me 15 minutes
will share the logs with you
the logs of main orderer https://pastebin.com/L3AWFcgu
the logs of the orderer which I am adding
https://pastebin.com/5btfff2w
could you describe steps of your experiment? in particular, did you get this correct?
```
4. Starting the new Raft node with the path to the *config block* in the General.GenesisFile configuration parameter.
```
Here are the logs of the new orderer when I try this
https://pastebin.com/4PkVz3TK
Was the config block pulled from system channel?
Yes pulled from the system channel only
Has anyone gotten this error? `TLS handshake failed with error tls: first record does not look like a TLS handshake server=Orderer remoteaddress=172.27.0.7:45610`
Has joined the channel.
Getting error "Failed validating bootstrap block: the block isn't a system channel block because it lacks ConsortiumsConfig"
while starting newly added orderer using raft
Does anyone know how I can fix it?
@YashGupta You must bootstrap your new orderer using the latest config block from the orderer system channel. It looks like you are using the config block from an application channel.
could you describe your steps in detail? if you prefer, you can also file a JIRA bug with reproduce steps
2019-07-17 06:05:59.177 UTC [core.comm] ServerHandshake -> ERRO 387 TLS handshake failed with error tls: first record does not look like a TLS handshake server=Orderer remoteaddress=172.21.0.2:45658
Clipboard - July 17, 2019 12:07 PM
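For what it's worth, the "first record does not look like a TLS handshake" message usually means a plaintext (non-TLS) client connected to a TLS-enabled port: a TLS record begins with content-type byte 0x16 (handshake) followed by a 3.x protocol version, and anything else fails that check. A rough illustration of the heuristic (not the actual Go TLS code):

```python
def looks_like_tls_handshake(first_bytes: bytes) -> bool:
    """Heuristic check mirroring what a TLS server expects to see:
    record content type 0x16 (handshake), then major version 0x03
    (all TLS versions are 3.x on the wire)."""
    if len(first_bytes) < 3:
        return False
    content_type, major = first_bytes[0], first_bytes[1]
    return content_type == 0x16 and major == 0x03
```

A plaintext gRPC/HTTP request sent to a TLS port starts with ASCII bytes, fails this check, and produces exactly that error - so the usual fix is enabling TLS on the client (or disabling it on the server), matching the suspicion voiced earlier in the thread.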
Hi All,
I am trying to create a ZooKeeper cluster using multiple virtual machines.
Using Docker Compose, I wrote a zookeeper service 1 in one VM and another docker-compose file in the other VM for zookeeper service 2. Below are the respective files.
The issue is that the two are not able to identify each other and are failing to bind to the ports. Below are my files; the error description is there.
Zookeeper-1 - VM -1 -- IP - 172.23.155.118
```
version: '2.0'
networks:
  legaldescription:
services:
  zookeeper1:
    image: hyperledger/fabric-zookeeper
    container_name: zookeeper1
    restart: always
    ports:
      - "2181:2181"
      - "2888:2888"
      - "3888:3888"
    environment:
      ZOO_MY_ID: 1
      ZOO_TICK_TIME: 2000
      ZOO_INIT_LIMIT: 10
      ZOO_MAX_CLIENT_CNXNS: 0
      ZOO_SYNC_LIMIT: 5
      ZOO_CLIENT_PORT: 2181
      ZOO_SERVERS: server.1=0.0.0.0:2888:3888 server.2=172.23.155.126:2888:3888
    volumes:
      - ./zookeeper1/data:/data
      - ./zookeeper1/datalog:/datalog
    networks:
      - legaldescription
```
Zookeeper - 2 VM -2 IP -172.23.155.126
```
version: '2'
networks:
  legaldescription:
services:
  zookeeper2:
    image: hyperledger/fabric-zookeeper
    container_name: zookeeper2
    restart: always
    ports:
      - "2181:2181"
      - "2888:2888"
      - "3888:3888"
    environment:
      ZOO_MY_ID: 2
      ZOO_TICK_TIME: 2000
      ZOO_INIT_LIMIT: 10
      ZOO_MAX_CLIENT_CNXNS: 0
      ZOO_SYNC_LIMIT: 5
      ZOO_CLIENT_PORT: 2181
      ZOO_SERVERS: server.2=0.0.0.0:2888:3888 server.1=172.23.155.118:2888:3888
    volumes:
      - ./zookeeper2/data:/data
      - ./zookeeper2/datalog:/datalog
    networks:
      - legaldescription
```
Zookeeper2.PNG
Zookeeper1.PNG
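One thing to note about the two-server ensemble above: ZooKeeper requires a strict majority of the ensemble to be up, so with two servers the quorum is two and the ensemble tolerates zero failures - which is why odd ensemble sizes (3 or 5) are normally recommended. The arithmetic:

```python
def zk_quorum(ensemble_size):
    """Strict majority of the ensemble required for ZooKeeper to serve."""
    return ensemble_size // 2 + 1

def zk_tolerated_failures(ensemble_size):
    """Servers that can fail while a quorum remains."""
    return ensemble_size - zk_quorum(ensemble_size)
```

Note that a 4-server ensemble tolerates no more failures than a 3-server one, which is why even sizes buy nothing.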
@jyellick Thank you, Adding of orderer is done :grinning:
Thanks @guoger
Able to add the orderer now.
glad to hear that! what was the issue?
I was updating the config of only the application channel. Now after updating both configs, i.e. the application channel and the system channel, it's working fine.
are you saying you managed to add an orderer to _application channel_ *without* adding it to _system channel_?
Yes i was doing that first
and that didn't throw an error to you?
It was showing an error, then I added it to the system channel as well
earlier I was using the approach of adding it only to the application channel, which was not working
ah, ok. nice to see that your problem is gone. I'd appreciate it if you could provide a more thorough list of steps next time you report an error; that would reduce the overhead for both of us :)
ok Sure. I will take care of that next time
Has joined the channel.
Has joined the channel.
Hi..!!!
Does anyone have any docs on adding an extra `Orderer` to an existing network? Googled it but found no docs. Can anyone help me?
https://hyperledger-fabric.readthedocs.io/en/latest/raft_configuration.html#reconfiguration
is there a way to do it in a kafka-based ordering service? I want to add one more orderer for another organisation.
yes it's doable but I don't know of any document. essentially the steps are:
- submit config tx to add org to consortium, and orderer endpoint (similar to what it describes in the doc i posted)
- copy the ledger file from existing orderer to the new orderer you are about to add
- start new orderer from existing data
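For the first step, the usual workflow is to decode the latest config block to JSON with configtxlator, edit it, and compute the update delta. Below is a minimal sketch of just the JSON edit; the field path follows the layout produced by `configtxlator proto_decode --type common.Config`, and the endpoint value is a placeholder. (A Raft orderer would additionally need a consenter entry with TLS certs under the ConsensusType metadata.)

```python
import json

def add_orderer_address(config_json: str, endpoint: str) -> str:
    """Append a new endpoint to Channel/OrdererAddresses in a channel
    config that was decoded to JSON by configtxlator. Simplified sketch."""
    config = json.loads(config_json)
    addresses = (config["channel_group"]["values"]
                       ["OrdererAddresses"]["value"]["addresses"])
    if endpoint not in addresses:
        addresses.append(endpoint)
    return json.dumps(config)
```

The edited JSON is then re-encoded with `configtxlator proto_encode`, diffed against the original with `compute_update`, wrapped in an envelope, signed, and submitted as a config transaction.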
@jaswanth Follow the steps given in this link
https://stackoverflow.com/questions/50153905/how-to-add-more-orderer-nodes-to-a-running-hyperledger-fabric-network
Thank you pulkitSarraf
let me know if you face any challenges
Has joined the channel.
hey there, I'm having a connection timeout to my orderer. What are some possible causes of this? I'm running in solo mode
Timeout from the client sending messages, or from the peers retrieving blocks? Are you running with TLS enabled? Most likely, this is a networking problem unrelated to Fabric, so you'll have to resort to looking at TCP traces via something like wireshark
Has joined the channel.
I am getting this in the docker logs of the orderer: orderer.AtomicBroadcast grpc.method=Deliver grpc.peer_address= ipxx x x x :port x error="context finished before block retrieved: context canceled" grpc.code=Unknown grpc.call_duration=5h48m38.655538305s
is it an error or normal behavior?
@Utsav_Solanki do not cross post your questions
This simply indicates that a peer hung up on the orderer. This is normal behavior if the peer is restarted for instance.
thanks for reply
```
2019-07-13 07:35:12.011 UTC [orderer.consensus.kafka] processMessagesToBlocks -> ERRO 033 [channel: mychannel] Error during consumption: kafka: error while consuming mychannel/0: read tcp ip x x x x: port x->ip x x x x: port x: i/o timeout
2019-07-13 07:35:12.011 UTC [orderer.consensus.kafka] processMessagesToBlocks -> WARN 034 [channel: mychannel] Deliver sessions will be dropped if consumption errors continue.
2019-07-13 07:35:12.011 UTC [orderer.consensus.kafka] processMessagesToBlocks -> ERRO 035 [channel: testchainid] Error during consumption: kafka: error while consuming testchainid/0: read tcp ip x x x x: port x->ip x x x x: port x: i/o timeout
2019-07-13 07:35:12.011 UTC [orderer.consensus.kafka] processMessagesToBlocks -> WARN 036 [channel: testchainid] Deliver sessions will be dropped if consumption errors continue.
2019-07-13 07:35:22.012 UTC [orderer.consensus.kafka] processMessagesToBlocks -> WARN 037 [channel: testchainid] Closed the errorChan
2019-07-13 07:35:22.012 UTC [orderer.consensus.kafka] sendConnectMessage -> INFO 038 [channel: testchainid] About to post the CONNECT message...
2019-07-13 07:35:22.012 UTC [orderer.consensus.kafka] processMessagesToBlocks -> WARN 039 [channel: mychannel] Closed the errorChan
2019-07-13 07:35:22.012 UTC [orderer.consensus.kafka] sendConnectMessage -> INFO 03a [channel: mychannel] About to post the CONNECT message...
2019-07-13 07:35:23.174 UTC [orderer.consensus.kafka] processMessagesToBlocks -> INFO 03b [channel: mychannel] Marked consenter as available again
2019-07-13 07:35:23.324 UTC [orderer.consensus.kafka] processMessagesToBlocks -> INFO 03c [channel: testchainid] Marked consenter as available again
2019-07-13 07:37:00.769 UTC [orderer.common.server] handleSignals -> INFO 03d Received signal: 15 (terminated)
```
After these logs, the Docker container of orderer 2 goes to Exited (0)
am I missing some configuration?
Your orderer is timing out when attempting to communicate with Kafka. The orderer realizes that it is in a bad state, and therefore drops the peer connections. You should troubleshoot the connectivity between your orderer and Kafka cluster.
thank you so much
Hi All,
I generated the certs using openssl for establishing TLS communication between orderer 1.4.2 and the Kafka brokers. On starting the orderer I am getting the issue below:
```
orderer1 | 2019-07-19 08:54:22.807 UTC [common.ledger.blockledger.ram] appendBlock -> DEBU 18c Sending signal that block 18446744073709551615 has a successor
orderer1 | 2019-07-19 08:54:22.807 UTC [orderer.consensus.kafka] newBrokerConfig -> PANI 18d Unable to decode public/private key pair: tls: failed to find any PEM data in certificate input
orderer1 | panic: Unable to decode public/private key pair: tls: failed to find any PEM data in certificate input
orderer1 |
orderer1 | goroutine 1 [running]:
orderer1 | github.com/hyperledger/fabric/vendor/go.uber.org/zap/zapcore.(*CheckedEntry).Write(0xc00020d810, 0x0, 0x0, 0x0)
orderer1 | /opt/gopath/src/github.com/hyperledger/fabric/vendor/go.uber.org/zap/zapcore/entry.go:229 +0x515
orderer1 | github.com/hyperledger/fabric/vendor/go.uber.org/zap.(*SugaredLogger).log(0xc00000e388, 0xc0001ff404, 0xc0004f11a0, 0x5f, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0)
orderer1 | /opt/gopath/src/github.com/hyperledger/fabric/vendor/go.uber.org/zap/sugar.go:234 +0xf6
orderer1 | github.com/hyperledger/fabric/vendor/go.uber.org/zap.(*SugaredLogger).Panicf(0xc00000e388, 0xc0004f11a0, 0x5f, 0x0, 0x0, 0x0)
orderer1 | /opt/gopath/src/github.com/hyperledger/fabric/vendor/go.uber.org/zap/sugar.go:159 +0x79
orderer1 | github.com/hyperledger/fabric/common/flogging.(*FabricLogger).Panic(0xc00000e390, 0xc0001ff600, 0x2, 0x2)
orderer1 | /opt/gopath/src/github.com/hyperledger/fabric/common/flogging/zap.go:73 +0x75
orderer1 | github.com/hyperledger/fabric/orderer/consensus/kafka.newBrokerConfig(0x1, 0xc00003a01d, 0x38, 0xc00003a0de, 0x40, 0xc000495d50, 0x1, 0x1, 0x0, 0x0, ...)
orderer1 | /opt/gopath/src/github.com/hyperledger/fabric/orderer/consensus/kafka/config.go:47 +0x259
orderer1 | github.com/hyperledger/fabric/orderer/consensus/kafka.New(0x12a05f200, 0x8bb2c97000, 0x45d964b800, 0x274a48a78000, 0x2540be400, 0x2540be400, 0x2540be400, 0x3, 0xee6b280, 0x3, ...)
orderer1 | /opt/gopath/src/github.com/hyperledger/fabric/orderer/consensus/kafka/consenter.go:32 +0xdb
orderer1 | github.com/hyperledger/fabric/orderer/common/server.initializeMultichannelRegistrar(0xc0001c8c40, 0xc0003151a0, 0xc000070a00, 0x0, 0xc0001703f0, 0x1b25320, 0xc0003e8940, 0x2, 0x2, 0xc0003e8950, ...)
orderer1 | /opt/gopath/src/github.com/hyperledger/fabric/orderer/common/server/main.go:633 +0x27d
orderer1 | github.com/hyperledger/fabric/orderer/common/server.Start(0x1015083, 0x5, 0xc00029c480)
orderer1 | /opt/gopath/src/github.com/hyperledger/fabric/orderer/common/server/main.go:162 +0x83c
orderer1 | github.com/hyperledger/fabric/orderer/common/server.Main()
orderer1 | /opt/gopath/src/github.com/hyperledger/fabric/orderer/common/server/main.go:91 +0x1ce
orderer1 | main.main()
orderer1 | /opt/gopath/src/github.com/hyperledger/fabric/orderer/main.go:15 +0x20
orderer1 exited with code 2
```
Regards,
Soumya
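The "failed to find any PEM data in certificate input" panic means Go's TLS key-pair loader was handed bytes containing no PEM block - typically because a file path or an empty string was supplied where PEM content was expected. A quick stdlib check of the same condition, useful for inspecting what the orderer is actually being fed:

```python
def has_pem_block(data: bytes) -> bool:
    """True if the bytes contain at least one PEM-encoded block,
    i.e. a '-----BEGIN ...-----' header. Mirrors (loosely) the
    condition behind Go's 'failed to find any PEM data' error."""
    return b"-----BEGIN " in data
```

A path string passed where PEM content is expected fails this check, which is exactly the misconfiguration described in the resolution below/above this message.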
This issue is resolved - by adding "_FILE" at the end of the environment variables:
- ORDERER_KAFKA_TLS_PRIVATEKEY_FILE=
- ORDERER_KAFKA_TLS_CERTIFICATE_FILE=
- ORDERER_KAFKA_TLS_ROOTCAS_FILE=
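The `_FILE` suffix matters because the orderer maps environment variables onto `orderer.yaml` keys using the viper convention: prefix with `ORDERER_`, join the key path with underscores, and uppercase. So `Kafka.TLS.PrivateKey.File` becomes `ORDERER_KAFKA_TLS_PRIVATEKEY_FILE`; without `_FILE` you would be setting the inline-PEM field instead of the file-path field. A sketch of the mapping:

```python
def orderer_env_var(config_path: str) -> str:
    """Map a dot-separated orderer.yaml key path to the environment
    variable the orderer reads, per the viper convention Fabric uses."""
    return "ORDERER_" + config_path.replace(".", "_").upper()
```

The same convention applies to the peer (`CORE_` prefix), which is why most docker-compose overrides in the samples follow this pattern.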
Hi, we need to decide between the Kafka and Raft orderer implementations for a production blockchain. I have a few questions regarding this. 1) Are there any data points to be considered while choosing Raft over Kafka? 2) As Raft was recently released, are there any risks associated with it? 3) While Kafka has extensive documentation available for production setups, do we have such guidelines/documentation for a Raft production setup? Is there a scenario in the future where Kafka won't be supported? And lastly, are there any operational considerations for the Raft implementation?
@RahulHundet In general, for new deployments we would recommend utilizing Raft. Although it has seen less testing in production, it has seen more extensive pre-production tests than Kafka did. Additionally, Raft was largely motivated by a desire to simplify operator's lives, by removing the two additional clusters that Kafka requires to be managed. In general for Raft, a deployment should require less administration, fewer resources, and functionally supports more scenarios (such as not having all orderers joined to all channels).
@jyellick Thanks! Do we have any documentation for production setups? For example, for Kafka various configurations are mentioned in the documentation (e.g. minimum in-sync replicas, replication factors, etc.)
Also, is it correct that Raft is for networks with a smaller number of nodes?
We recommend utilizing 5 nodes in a production setup. This allows you to perform maintenance on any node without losing the fault tolerant nature of the ordering network.
This should in general be less than for a Kafka deployment, which recommended 3 orderers, 4 kafka brokers, and 3 zookeepers
There is quite a bit of documentation available https://hyperledger-fabric.readthedocs.io/en/release-1.4/raft_configuration.html
In one particular point it is mentioned that, one shouldn't perform any reconfiguration unless all nodes are available. In our case the nodes would be distributed over different data centers (there could be network glitches). Kafka recommends having all nodes in single data center and mirroring can be done for redundancy. what's RAFT's recommendation, can nodes be spread across data centers?
Yes, one of the additional benefits of Raft is that you may disperse nodes across datacenters. Note, you will need nodes spread across at least 3 datacenters to tolerate losing a datacenter. As you must ensure that in any failure, a majority of Raft nodes are still online.
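Both recommendations above follow from Raft's majority rule: a cluster of n nodes needs n // 2 + 1 connected to stay available, so losing any single datacenter must still leave a majority online. A small sketch (the placements are illustrative):

```python
def raft_quorum(n):
    """Strict majority required for a Raft cluster to make progress."""
    return n // 2 + 1

def survives_dc_loss(nodes_per_dc):
    """True if losing any single datacenter still leaves a majority
    of the total cluster connected."""
    n = sum(nodes_per_dc)
    return all(n - dc >= raft_quorum(n) for dc in nodes_per_dc)
```

With 5 nodes split 3/2 across two datacenters, losing the 3-node site breaks quorum; a 2/2/1 split across three sites survives the loss of any one, matching the "at least 3 datacenters" advice.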
@jyellick Thanks!
Hi All, I am trying to do a Raft setup using the balance-transfer fabric sample. I am getting this issue, can anyone help me: [Create-Channel - Failed to create the channel. status:SERVICE_UNAVAILABLE reason:no Raft leader]
please wait for a few seconds and try again, as the orderers might be going through the leader election process. If the problem persists after a couple of attempts, please observe the orderer logs to see if you have a quorum (>50%) of nodes available (alive and able to communicate with each other)
yes, I tried, but it won't work for me...
orderer logs?
I am getting this issue in the orderer log: [Failed to send StepRequest to 2, because: rpc error: code = Unavailable desc = all SubConns are in TransientFailure, latest connection error: connection error: desc = "transport: Error while dialing dial tcp: lookup orderer2.example.com on 127.0.0.11:53: no such host" channel=testchainid node=1]
then it's a network problem - `orderer2.example.com` is not resolving to the correct destination. Please check your deployment
ok thank you will check it.......
Dear team, we are trying to upgrade a hyperledger blockchain network instance running v1.1.0 to v1.2.1 and all is well until we perform the step - Enable the new v1.2 capability at the URL https://hyperledger-fabric.readthedocs.io/en/release-1.2/upgrading_your_network_tutorial.html#enable-the-new-v1-2-capability. We followed the documented process in specific sequence and for most part all steps worked. We are seeing the error message in the final submission of config update transaction - "error: cannot enable application capabilities without orderer support first" with Kafka/Zk CFT ordering service. Our orderer and peer binaries are upgraded to v1.2.1 and currently running/processing transactions correctly. Our ledger and object stores are storing transaction blocks correctly. This validation error is coming from fabric/common/channelconfig/bundle.go source per our understanding. So how do we resolve and get past this issue? Any help would be gratefully appreciated.
```
Failed creating a block puller channel=testchainid node=1
panic: Failed creating a block puller
goroutine 785 [running]:
github.com/hyperledger/fabric/vendor/go.uber.org/zap/zapcore.(*CheckedEntry).Write(0xc0002a6bb0, 0x0, 0x0, 0x0)
/opt/gopath/src/github.com/hyperledger/fabric/vendor/go.uber.org/zap/zapcore/entry.go:229 +0x515
github.com/hyperledger/fabric/vendor/go.uber.org/zap.(*SugaredLogger).log(0xc0004521c8, 0x4, 0x102d210, 0x1e, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0)
/opt/gopath/src/github.com/hyperledger/fabric/vendor/go.uber.org/zap/sugar.go:234 +0xf6
github.com/hyperledger/fabric/vendor/go.uber.org/zap.(*SugaredLogger).Panicf(0xc0004521c8, 0x102d210, 0x1e, 0x0, 0x0, 0x0)
/opt/gopath/src/github.com/hyperledger/fabric/vendor/go.uber.org/zap/sugar.go:159 +0x79
github.com/hyperledger/fabric/common/flogging.(*FabricLogger).Panicf(0xc0004521d0, 0x102d210, 0x1e, 0x0, 0x0, 0x0)
/opt/gopath/src/github.com/hyperledger/fabric/common/flogging/zap.go:74 +0x60
github.com/hyperledger/fabric/orderer/consensus/etcdraft.(*evictionSuspector).confirmSuspicion(0xc000428230, 0x8bb37a1db5)
/opt/gopath/src/github.com/hyperledger/fabric/orderer/consensus/etcdraft/util.go:604 +0xa4f
github.com/hyperledger/fabric/orderer/consensus/etcdraft.(*evictionSuspector).confirmSuspicion-fm(0x8bb37a1db5)
/opt/gopath/src/github.com/hyperledger/fabric/orderer/consensus/etcdraft/chain.go:347 +0x34
github.com/hyperledger/fabric/orderer/consensus/etcdraft.(*PeriodicCheck).conditionFulfilled(0xc000428280)
/opt/gopath/src/github.com/hyperledger/fabric/orderer/consensus/etcdraft/util.go:566 +0x9f
github.com/hyperledger/fabric/orderer/consensus/etcdraft.(*PeriodicCheck).check(0xc000428280)
/opt/gopath/src/github.com/hyperledger/fabric/orderer/consensus/etcdraft/util.go:546 +0xf2
github.com/hyperledger/fabric/orderer/consensus/etcdraft.(*PeriodicCheck).check-fm()
/opt/gopath/src/github.com/hyperledger/fabric/orderer/consensus/etcdraft/util.go:531 +0x2a
created by time.goFunc
/opt/go/src/time/sleep.go:172 +0x44
```
there should be some error logs before panic, and i think it's still related to your network issue
Hi Faisal,
Hi all,
I added another orderer to the network -- so now when I try to do a chaincode invoke against this orderer2, the orderer throws the error below - channel does not exist
orderer2 | 2019-07-29 14:25:49.831 UTC [orderer.common.broadcast] ProcessMessage -> WARN 3f1 [channel: legaldescriptionchannel] Rejecting broadcast of normal message from 172.23.155.115:34746 because of error: channel does not exist
orderer2 | 2019-07-29 14:25:49.831 UTC [orderer.common.server] func1 -> DEBU 3f2 Closing Broadcast stream
What are the steps we need to carry out to add an orderer to an already existing channel?
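A hedged sketch of the usual flow (channel and file names below are placeholders, not from this thread): for a Kafka orderer the new node mainly needs the system channel genesis block and its MSP material; for a Raft channel you additionally have to add the node as a consenter via a config update before it can participate.

```shell
# Hedged sketch: add an orderer to an existing channel via a config update.
# All names (mychannel, config_block.pb, orderer0.example.com) are placeholders.
# 1. Fetch the latest config block of the channel.
peer channel fetch config config_block.pb -c mychannel \
  -o orderer0.example.com:7050 --tls --cafile "$ORDERER_CA"
# 2. Decode it to editable JSON.
configtxlator proto_decode --input config_block.pb --type common.Block | \
  jq .data.data[0].payload.data.config > config.json
# 3. Edit a copy (modified_config.json): add the new endpoint to the orderer
#    addresses and, for Raft, append the node (host, port, client_tls_cert,
#    server_tls_cert) to
#    .channel_group.groups.Orderer.values.ConsensusType.value.metadata.consenters
# 4. Encode both versions and compute the delta.
configtxlator proto_encode --input config.json --type common.Config --output config.pb
configtxlator proto_encode --input modified_config.json --type common.Config --output modified_config.pb
configtxlator compute_update --channel_id mychannel \
  --original config.pb --updated modified_config.pb --output update.pb
# 5. Wrap update.pb in an envelope, sign it as an orderer admin, submit it,
#    then start the new orderer from the latest config block so it catches up.
```

This is a CLI fragment requiring a live network, so treat it as an outline to adapt rather than a script to run as-is.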
anyone familiar with this: `2019-07-29 15:07:22.617 UTC [orderer.common.server] initializeLocalMsp -> FATA 0c7 Failed to initialize local MSP: the supplied identity is not valid: x509: certificate has expired or is not yet valid`
Has joined the channel.
hello, does anyone know how to access the leveldb database of the orderer? i opened it with python but the keys and values have unknown characters
Hi, I'll start by specifying the network configuration:
- 3 Organizations with 1 orderer per organization
- The orderers are in a raft cluster
- Everything is running in Docker
Problem description and how to reproduce it
All orderers are up and running (all have joined the system channel and the raft leader has been elected). After creating a channel "mychannel", if one of the orderers (I'll call it Ord3) is stopped by deleting its container and the volumes associated with it, the following happens when it is restarted:
Ord3 joins (I suppose it joins) the system channel with an outdated term
Ord3 begins to send vote requests to the other orderers
The requests are rejected several times
At some point Ord3 receives a message from the raft leader
Ord3 becomes a follower at the correct term
Ord3 crashes with the following error:
[orderer.consensus.etcdraft] commitTo -> PANI 737 tocommit(6) is out of range [lastIndex(3)]. Was the raft log corrupted, truncated, or lost? channel=sys-channel node=3
panic: tocommit(6) is out of range [lastIndex(3)]. Was the raft log corrupted, truncated, or lost?
goroutine 44 [running]:
github.com/hyperledger/fabric/vendor/go.uber.org/zap/zapcore.(*CheckedEntry).Write(0xc0000cec60, 0x0, 0x0, 0x0)
/opt/gopath/src/github.com/hyperledger/fabric/vendor/go.uber.org/zap/zapcore/entry.go:229 +0x515
github.com/hyperledger/fabric/vendor/go.uber.org/zap.(*SugaredLogger).log(0xc00000e388, 0x4, 0x10587a9, 0x5d, 0xc0006fb900, 0x2, 0x2, 0x0, 0x0, 0x0)
/opt/gopath/src/github.com/hyperledger/fabric/vendor/go.uber.org/zap/sugar.go:234 +0xf6
github.com/hyperledger/fabric/vendor/go.uber.org/zap.(*SugaredLogger).Panicf(0xc00000e388, 0x10587a9, 0x5d, 0xc0006fb900, 0x2, 0x2)
/opt/gopath/src/github.com/hyperledger/fabric/vendor/go.uber.org/zap/sugar.go:159 +0x79
github.com/hyperledger/fabric/common/flogging.(*FabricLogger).Panicf(0xc00000e390, 0x10587a9, 0x5d, 0xc0006fb900, 0x2, 0x2)
/opt/gopath/src/github.com/hyperledger/fabric/common/flogging/zap.go:74 +0x60
github.com/hyperledger/fabric/vendor/go.etcd.io/etcd/raft.(*raftLog).commitTo(0xc0001b82a0, 0x6)
/opt/gopath/src/github.com/hyperledger/fabric/vendor/go.etcd.io/etcd/raft/log.go:203 +0x14d
github.com/hyperledger/fabric/vendor/go.etcd.io/etcd/raft.(*raft).handleHeartbeat(0xc000217680, 0x8, 0x3, 0x2, 0x3, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
/opt/gopath/src/github.com/hyperledger/fabric/vendor/go.etcd.io/etcd/raft/raft.go:1324 +0x54
github.com/hyperledger/fabric/vendor/go.etcd.io/etcd/raft.stepCandidate(0xc000217680, 0x8, 0x3, 0x2, 0x3, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
/opt/gopath/src/github.com/hyperledger/fabric/vendor/go.etcd.io/etcd/raft/raft.go:1224 +0x7e8
github.com/hyperledger/fabric/vendor/go.etcd.io/etcd/raft.(*raft).Step(0xc000217680, 0x8, 0x3, 0x2, 0x3, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
/opt/gopath/src/github.com/hyperledger/fabric/vendor/go.etcd.io/etcd/raft/raft.go:971 +0x12db
github.com/hyperledger/fabric/vendor/go.etcd.io/etcd/raft.(*node).run(0xc0003e0de0, 0xc000217680)
/opt/gopath/src/github.com/hyperledger/fabric/vendor/go.etcd.io/etcd/raft/node.go:357 +0x1101
created by github.com/hyperledger/fabric/vendor/go.etcd.io/etcd/raft.RestartNode
/opt/gopath/src/github.com/hyperledger/fabric/vendor/go.etcd.io/etcd/raft/node.go:246 +0x31b
I imagine that the problem is the volume deletion, and that it counts as log corruption. But if this happens, how can I restore the orderer?
Hi team,
Are the images hyperledger/fabric-zookeeper and hyperledger/fabric-kafka available on Docker Hub with the latest versions of kafka (2.3) and zookeeper (3.5.5)?
I could not find them on Docker Hub. Or is there any way we can update the existing images to the latest versions?
Regards,
Soumya
Hello all,
Recently I was stuck on an issue where Kafka had pruned its data while the data was still available on the orderer and peers. Even after trying a lot, I was not able to recover the whole network. So I was planning to spawn a new network and carry all the data over in the same format. I know it's not an exact copy since the hash chain is different, but in my situation it's better to keep the data than to lose everything.
So suppose I have a running network and I want to build something so that I can migrate all the data block by block to another network. I'll probably have to send direct calls to the orderer or Kafka. @guoger @jyellick suggested that I should send blocks in the form of `common.Envelope`. I understand what they wanted to convey and how I should approach this problem, but I'm not sure how to go about building this tool.
Could anyone explain at the code level what needs to be done to achieve this? Any input is welcome. I really think this would be very useful and many of us might need it.
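A rough, untested Go sketch of the shape such a replay tool could take, assuming the v1.4 proto packages (`github.com/hyperledger/fabric/protos/...`; newer releases moved these to `fabric-protos-go`). The `fetchOldBlocks` helper and the orderer address are placeholders. The idea, per the suggestion above, is that each entry in a block's `Data.Data` is a marshaled `common.Envelope`, the same unit a client originally submitted, so it can be unmarshaled and re-sent through the Broadcast API:

```go
package main

import (
	"context"
	"log"

	"github.com/golang/protobuf/proto"
	cb "github.com/hyperledger/fabric/protos/common"
	ab "github.com/hyperledger/fabric/protos/orderer"
	"google.golang.org/grpc"
)

// replayBlock unwraps every transaction in a block back into its original
// common.Envelope and re-broadcasts it to the new network's orderer.
func replayBlock(block *cb.Block, stream ab.AtomicBroadcast_BroadcastClient) error {
	for _, txBytes := range block.Data.Data {
		env := &cb.Envelope{}
		if err := proto.Unmarshal(txBytes, env); err != nil {
			return err
		}
		// NOTE: the envelope still carries the old channel ID and the original
		// signatures; the new network will only accept it if it trusts those
		// identities and has a channel with the same name.
		if err := stream.Send(env); err != nil {
			return err
		}
		resp, err := stream.Recv()
		if err != nil {
			return err
		}
		if resp.Status != cb.Status_SUCCESS {
			log.Printf("tx rejected: %v", resp.Status)
		}
	}
	return nil
}

// fetchOldBlocks is a placeholder: in a real tool this would be a Deliver
// client streaming blocks, in order, from the old network's orderer.
func fetchOldBlocks() []*cb.Block { return nil }

func main() {
	conn, err := grpc.Dial("orderer.new-network.example.com:7050", grpc.WithInsecure())
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	stream, err := ab.NewAtomicBroadcastClient(conn).Broadcast(context.Background())
	if err != nil {
		log.Fatal(err)
	}
	for _, block := range fetchOldBlocks() {
		if err := replayBlock(block, stream); err != nil {
			log.Fatal(err)
		}
	}
}
```

This is an illustration of the approach only; TLS credentials, the Deliver side, and ordering/retry concerns are all left out.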
we don't have them. if you want to use latest kafka+zk, it's not hard to build images, based on the dockerfile from Fabric. although, i wouldn't suggest that
ok, can you provide a link for building the images with the latest versions? also, will there be any support for kafka/zookeeper in the future?
The obvious next action would be migrating the orderer to Raft
https://github.com/hyperledger/fabric-baseimage/tree/master/images
i don't think we will deprecate kafka/zk support in foreseeable future (it's in 1.4 LTS anyway)
Hi Rajat,
Just curious: what setting in kafka was missed that led to the pruning of data? Is there anything to watch out for there, as I am also currently a kafka user.
Regards,
SOumya
Thanks Jay for the input
we've been seeing this kind of problem for a while - kafka/zk dies for some reason, and data in Fabric needs to be salvaged. And most recently, i've seen several users reporting that kafka prunes data even when configured properly.
Instead of reading all the data and writing it back to a new Fabric network (essentially replaying it in kafka), I'm wondering if we could manually assemble config blocks and append them to the ledger, to migrate from kafka to raft, and start the orderer from there. If the user wants to keep using kafka, they could always migrate back (do we allow this?)
wdyt? @jyellick @yacovm
replied in mailing list
the kafka property `log.cleaner.enable=false` might help with this pruning of data for unknown reasons.
Got the above setting from the kafka users community
@soumyanayak thanks! do you have a reference for this reply (a url or something)? If this does prevent log pruning from happening, maybe we should consider adding it to the kafka docker build.
i had subscribed to the kafka mailing list -- users@kafka.apache.org.
where one of the users replied -
"instead of tweaking retention time/size, you can try using log.cleaner.enable=false. It's true by default as of 0.9.0.1 "
but I am still waiting for more users to confirm this property or suggest another.
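If I recall correctly, the fabric-kafka image maps `KAFKA_*` environment variables onto broker properties (underscores become dots), so the suggested property would be set roughly like this in a docker-compose service definition (service and surrounding values here are illustrative):

```yaml
# Illustrative docker-compose excerpt; only the KAFKA_LOG_CLEANER_ENABLE line
# is the property suggested above (log.cleaner.enable=false).
kafka0:
  image: hyperledger/fabric-kafka
  environment:
    - KAFKA_LOG_RETENTION_MS=-1          # never expire log segments by time
    - KAFKA_LOG_CLEANER_ENABLE=false     # disable the log cleaner (compaction)
    - KAFKA_MIN_INSYNC_REPLICAS=2
    - KAFKA_DEFAULT_REPLICATION_FACTOR=3
```

Note that `log.cleaner.enable` governs log compaction, which is a separate mechanism from time/size-based retention, so this is a belt-and-braces setting rather than a confirmed fix.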
Has joined the channel.
it appears that the certificate you provided for the orderer is expired. you could be sure by decoding the certificate ( `openssl x509 -in
did that and what i saw was that the notbefore date of the signing cert was 5 mins before the notbefore date of the cacerts
But is your signcert within the date range, i.e. before notAfter and after notBefore?
yeah, the notbefore is now-5mins and not after is now+1yr
```
Validity
Not Before: Jul 30 14:51:00 2019 GMT
Not After : Jul 29 14:55:16 2020 GMT
```
Did you generate the certificates using Fabric CA or cryptogen?
fabric ca
here's a related issue: https://jira.hyperledger.org/browse/FABC-832
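For anyone following along, `openssl x509 -noout -dates` prints just the validity window, which makes it easy to compare the signcert against the cacert. The snippet below generates a throwaway self-signed cert purely to demo the command (the file paths are placeholders; run the same `x509` command against your real certs):

```shell
# Generate a throwaway self-signed cert (1-year validity) just for the demo.
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=demo" \
  -keyout /tmp/demo.key -out /tmp/demo.pem -days 365 2>/dev/null
# Print only the validity window; compare the signcert's window against the
# CA cert's window -- the signcert must fall inside it at validation time.
openssl x509 -in /tmp/demo.pem -noout -dates
```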
hello does anyone know how to fix this:
tocommit(8) is out of range [lastIndex(7)]. Was the raft log corrupted, truncated, or lost? channel=jumpitt-sys-channel node=1
panic: tocommit(8) is out of range [lastIndex(7)]. Was the raft log corrupted, truncated, or lost?
Was the Raft WAL log corrupted, truncated, or lost? If so, the recommended recovery would be to remove that node and add a new one.
to remove the orderer? isn't there a better way to recover it?
From a Raft protocol safety perspective, you want to ensure that if a node's WAL is regressed, that it does not end up voting twice and causing a network split. If your network is otherwise healthy, it should be safe to shut down all orderers, then start them again. This should clear your error.
ty, i will check that. i have another question: how would you recover the orderer from a corrupted blockfile?
Corruption recovery is always difficult. The ordering service is designed to be crash fault tolerant, meaning you should be able to discard the crashed node and bootstrap a new one, as a clean reliable way to recover. However, for common assets like blockfiles, you can usually safely copy them from other non-corrupt nodes.
would it be a problem if i copy more non corrupt blocks than the orderer had originally? or if i copy a block that has more data?
It should be okay if the orderer's ledger grows
in the event that all the orderers get corrupted if i add a new one, will it recover all the data from a peer?
No, orderers never get blocks from peers, only the other way around. The ledger formats should be compatible; in the worst case, you could probably copy a ledger from a peer to the orderer.
Note, all of these are very unsupported paths, and would be considered last-resort efforts. It is far better to simply maintain your network within crash fault constraints.
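With all those caveats, the "copy the blockfiles from a healthy node" idea above might look like the sketch below. Container names and the ledger path are illustrative (the default location is governed by `FileLedger.Location` in orderer.yaml), and all nodes involved should be stopped first:

```shell
# Unsupported last-resort sketch; container names are hypothetical.
HEALTHY=orderer0
BROKEN=orderer2
docker stop "$HEALTHY" "$BROKEN"
# Default file ledger location; channels live under the chains/ subdirectory.
docker cp "$HEALTHY":/var/hyperledger/production/orderer/chains /tmp/chains-backup
docker cp /tmp/chains-backup "$BROKEN":/var/hyperledger/production/orderer/chains
docker start "$HEALTHY" "$BROKEN"
```

Test this end to end in a throwaway environment before ever considering it elsewhere.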
Has joined the channel.
Is it possible to update the batchTimeout in the system channel config block?
If Yes, How?
I am getting error "error authorizing update: error validating DeltaSet: policy for [Value] /Channel/Orderer/BatchTimeout not satisfied: signature set did not satisfy policy".
yes it is, although you'll need the orderer admin's signature on that config update tx
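A hedged sketch of what that update looks like with `configtxlator` and `jq` (channel and file names are placeholders; the `jq` path matches the decoded `common.Config` layout):

```shell
# Hedged sketch: change BatchTimeout on the system channel.
peer channel fetch config config_block.pb -c system-channel \
  -o orderer0:7050 --tls --cafile "$ORDERER_CA"
configtxlator proto_decode --input config_block.pb --type common.Block | \
  jq .data.data[0].payload.data.config > config.json
# Set the new timeout in a modified copy of the config.
jq '.channel_group.groups.Orderer.values.BatchTimeout.value.timeout = "5s"' \
  config.json > modified.json
configtxlator proto_encode --input config.json --type common.Config --output config.pb
configtxlator proto_encode --input modified.json --type common.Config --output modified.pb
configtxlator compute_update --channel_id system-channel \
  --original config.pb --updated modified.pb --output update.pb
# Wrap update.pb in an envelope, sign it as the orderer org admin (this is the
# signature the policy error above is complaining about), then submit it with
# peer channel update.
```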
@jyellick sorry to bother you again, i have some questions about the orderers using raft: if all the orderers' data gets corrupted (ledger, index, etc), is there a way to recover the network using peer data?
If peers have received the blocks from the orderers, then it _should_ be possible to copy the block storage from the most up-to-date peer to the orderers, delete the WAL dirs, and restart the ordering network. Note, this is very risky, and you must ensure that the ledger data is the most recent across all peers. Otherwise you could see a network fork. The safest thing to do would be to discard your existing peers and bootstrap new ones after recovering the ordering network.
As (dis)qualified before, this is an unsupported path, and something you should test and verify in your own non-production environment prior to attempting in production.
yes, we are doing some tests trying to check all the possible system failures and how to recover from them
Any time you manually manipulate the orderer network ledger data or WAL, you risk a blockchain fork. If done properly, you will ensure that orderers have a consistent, and most up to date view of the blockchain. If done incorrectly, some peers in the network could have blocks at higher heights than the orderers, and this would cause a fork when the orderers later commit those block numbers.
we verified what you said about recovering the orderer if it gets corrupted, but you need at least 1 that is not corrupted to recover the ones that are failing (after restarting the ordering network)
we know this is something of a last-resort method
Has joined the channel.
Hi everyone, I am encountering the following problem when I try to run the orderer container. I have checked both the ca cert and the tls ca cert for this msp and both have the CA=true attribute, but I still get the following error. I use fabric CA `[orderer.common.server] Start -> PANI 003 Failed validating bootstrap block: initializing channelconfig failed: could not create channel Consortiums sub-group config: setting up the MSP manager failed: CA Certificate did not have the CA attribute, (SN: 137fd6ac160f5fc32f4eb5b5b0882725590403b3)
panic: Failed validating bootstrap block: initializing channelconfig failed: could not create channel Consortiums sub-group config: setting up the MSP manager failed: CA Certificate did not have the CA attribute, (SN: 137fd6ac160f5fc32f4eb5b5b0882725590403b3)
`
Can anyone please help me with this? How do I troubleshoot and find where the exact problem is?
I have one CA for TLS and another CA for the organisation.
It seems like you are not actually putting the proper root certificate in the `msp/cacerts` when creating the channel config
Has joined the channel.
Has joined the channel.
@jyellick .. Hello .. we are having issues with Kafka all of a sudden (not immediately after starting the Fabric instance, but maybe after 2-4 days of problem-free running) since we upgraded to Fabric 1.4.0 (Kafka and ZK image versions are 0.4.10) on our GNU/Linux servers. We get the following FATAL error in the Kafka logs, and the Kafka instance goes down, which makes our Orderers go down / stop responding as well.
Any idea on this kafka-orderer behavior?
The stack trace for the Kafka error is:
[2019-08-08 18:27:52,758] INFO Truncating cls1obo-cls4obo-0 to 269 has no effect as the largest offset in the log is 268. (kafka.log.Log)
[2019-08-08 18:27:52,759] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Based on follower's leader epoch, leader replied with an offset 345 >= the follower's log end offset 345 in cls1obo-cls2obo-0. No truncation needed. (kafka.server.ReplicaFetcherThread)
[2019-08-08 18:27:52,759] INFO Truncating cls1obo-cls2obo-0 to 345 has no effect as the largest offset in the log is 344. (kafka.log.Log)
[2019-08-08 18:27:52,807] FATAL [ReplicaFetcher replicaId=2, leaderId=0, fetcherId=0] Exiting because log truncation is not allowed for partition cls2obo-cls4obo-0, current leader's latest offset 245 is less than replica's latest offset 253 (kafka.server.ReplicaFetcherThread)
[2019-08-08 18:27:52,808] INFO [ReplicaFetcher replicaId=2, leaderId=0, fetcherId=0] Stopped (kafka.server.ReplicaFetcherThread)
[2019-08-08 18:27:52,810] INFO [KafkaServer id=2] shutting down (kafka.server.KafkaServer)
[2019-08-08 18:27:52,811] INFO [KafkaServer id=2] Starting controlled shutdown (kafka.server.KafkaServer)
Kafka Compose File :
- KAFKA_MESSAGE_MAX_BYTES=103809024 # 99 1024 1024 B
- KAFKA_REPLICA_FETCH_MAX_BYTES=103809024 # 99 1024 1024 B
- KAFKA_UNCLEAN_LEADER_ELECTION_ENABLE=false
- KAFKA_BROKER_ID=0
- KAFKA_MIN_INSYNC_REPLICAS=2
- KAFKA_DEFAULT_REPLICATION_FACTOR=3
- KAFKA_LOG_RETENTION_MS=-1
- KAFKA_ZOOKEEPER_CONNECT=zookeeper01clsorder.cit.clsnet:2181,zookeeper02clsorder.cit.clsnet:2181,zookeeper03clsorder.cit.clsnet:2181
- LICENSE=accept
i stopped the orderer container and started the same orderer container; the logs are ok, but invoke gives the ERROR: Orderer grpc://orderer0.example.com:7050 has an error Error: Failed to connect before the deadline URL:grpc://orderer0.example.com:7050. any idea why it is not invoking the transaction?
I don't see the actual Kafka error? But based on the stack trace, it looks like perhaps the broker is out of disk space? Seems that it attempts to prune the Kafka log (which must never be done), and because of configuration, it cannot, so it exits.
Has joined the channel.
Hi team
Please help me to set up the raft ordering service in hyperledger fabric
https://hyperledger-fabric.readthedocs.io/en/release-1.4/raft_configuration.html
Has joined the channel.
Has joined the channel.
hello!
I have moved to a raft-based OSN and am trying to set up a simple network.
I keep seeing the following in the orderer0 logs: tls: first record does not look like a TLS handshake
eventually the node fails with the below:
2019-08-12 13:20:28.013 UTC [orderer.consensus.etcdraft] confirmSuspicion -> PANI 8b3 Failed creating a block puller channel=testchainid node=1
panic: Failed creating a block puller
[build-your-first-network](https://github.com/hyperledger/fabric-samples/tree/release-1.4/first-network) provides a pretty good example. You need to make sure that TLS is enabled to run a Raft-based OSN
thanks for answering so quickly! I am using that example. I noticed i missed a few TLS fields so will retry :)
Has joined the channel.
@jyellick hi, could you help me understand how raft or bft can prevent a bad player on the network from committing a conflicting transaction?
https://chat.hyperledger.org/channel/fabric-sdk-java?msg=JNgncY9gfm5cMnh8t , it's been a while since i got that reply
one example could be that the Raft leader can simply put censorship on incoming transactions, i.e. refusing tx from particular client
or simply disrupt network by refusing to cooperate at all
i don't think that's the case; even a raft leader can't just simply refuse tx from the client
you can implement one that does
that's not what i'm looking for
but thx anyway
what are you looking for?
raft is cft; it can protect against only a very limited set of attacks
at first, i was confused about why fabric tried to implement a raft or bft based orderer
it seems not much better than a kafka based orderer, and even worse in terms of speed
> at first, i was confused about why fabric tried to implement a raft or bft based orderer
we have a paragraph for this in our doc: https://hyperledger-fabric.readthedocs.io/en/latest/orderer/ordering_service.html
```
New as of v1.4.1, Raft is a crash fault tolerant (CFT) ordering service based on an implementation of Raft protocol in etcd. Raft follows a “leader and follower” model, where a leader node is elected (per channel) and its decisions are replicated by the followers. Raft ordering services should be easier to set up and manage than Kafka-based ordering services, and their design allows different organizations to contribute nodes to a distributed ordering service.
```
> it seems not much better than a kafka based orderer, and even worse at speed
do you have benchmark result?
i don't have benchmark results, but based on CAP, i think the answer is obvious
well, Raft and Kafka are not fundamentally different - they are both CFT
and CAP doesn't really prove this
i'm aware of the features of raft/bft, but why make it decentralized? it can work fine while the orderer is centralized, since the orderer can't fake a transaction on its own
pls see my first response
it can also send different blocks to different peers
maybe my question should be "why try to make orderer decentralized"
centralized orderer can also implement all those feature
if it goes decentralized, transaction speed could slow down significantly due to the network
how do you prevent malicious, centralized orderer from sending out different blocks to different peers, to create a fork in the chain?
also, by centralizing, i suppose you are referring to cft, not solo (just to be clear here)
as i said, the orderer can't fake a transaction on its own
it doesn't need to fake a tx... it can simply drop one and cause dispute
i.e. block 1 sent to orgA, which includes tx 1, but block 1' sent to orgB, which doesn't
IMO, the orderer could be infrastructure like a cloud service, provided by a third party or a trustable org
if you have a _trustable_ org, why would you need blockchain at all?
you are right on some points, but each org has different priorities in every business scenario; i don't have to assume that the orgs who provide the orderer will try to drop transactions to crash the channel
in any case, you still need CFT to provide availability, no?
sure, we need cft to provide availability
but kafka can do that without necessarily going decentralized
again, kafka and raft are not different in terms of decentralization - they are both cft and need to be deployed across different AZs to achieve availability
yes, i agree with you.
let me try to clarify my point. what i mean by orderer decentralization is that the orderer needs to reach consensus across different orgs; that's why i say raft/bft is more decentralized than kafka. IMO, the best part of fabric's design is the consensus flow: adding a centralized orderer can significantly speed up transactions and also prevent forgery attacks. if the orderer goes bft, the design seems to go back to the 0.6 version of fabric; with bft the orderer seems useless, peers can do all the work, and the throughput/performance of bft can't satisfy most business scenarios once the number of participant nodes increases.
a fabric network can surely rely on a semi-trustable org to provide ordering service, and *by introducing Raft-OSN, we simply make that provider's life easier. If it's not desired, OSNs can still be deployed within a single org* and service the rest of the Fabric network.
however, we still need to address the aforementioned issue by introducing bft, which doesn't necessarily deprecate Raft - it's simply one more choice to suit different use cases
agree, but from my perspective, the best part of fabric's design is the transaction flow: adding a centralized orderer (no need for consensus amongst different orgs, it simply orders transactions so there is no channel fork) significantly speeds up transactions. once the orderer goes bft, the design seems to go back to the 0.6 version of fabric; the orderer seems useless, peers can do all the jobs.
as fabric is not an anonymous framework, i think we don't have to assume the participants are trying to crash the channel all the time; there are other ways, like cc/ddos attacks, that can crash the service
frankly i'm a bit confused about our argument now... which part of my statement do you *not* agree with so far?
not that i disagree with your statement :joy: i'm just saying how i see bft being provided by fabric.
and about the question i posted at first: lehors says "when two transactions conflict with one another, the first one will be marked valid and the second one invalid at the time the block is committed to the ledger", but i can't see how raft/bft solves this
i think his full statement would be - if tx1 conflicts with tx2, OSN could send orgA blockX that includes tx1 only, whereas sends orgB blockX' that includes tx2 only
effectively, OSN forks the chain, and tricks orgA and orgB to believe in different values
what Arnaud described there was the expected behavior, and we want to achieve that even when part of the osn is malicious
yeah, that make sense:joy:
:thumbsup:
thx for all the reply
welcome :)
hello everyone, does anyone know the path of the orderer's ledger if it's set up to be a json ledger?
The JSON ledger is really only meant for testing; it's not intended for production and will perform very badly.
i know, but we are doing some tests and i can't find where it is located
What version? The JSON ledger has been removed for some time, but I can check
1.4.2
https://github.com/hyperledger/fabric/blob/release-1.4/orderer/common/server/util.go#L40
You can specify its location using the `FileLedger.Location` config element, e.g. `ORDERER_FILELEDGER_LOCATION` in the env
Or look in the orderer.yaml
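For reference, the relevant knob in orderer.yaml (values shown are, to my recollection, the sample defaults; the env var mentioned above overrides the YAML):

```yaml
# orderer.yaml (excerpt) -- equivalent env var: ORDERER_FILELEDGER_LOCATION
FileLedger:
  Location: /var/hyperledger/production/orderer
  Prefix: hyperledger-fabric-ordererledger   # used for temp dirs when Location is empty
```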
thanks
FYI, the JSON ledger is removed in v2.0, so I'd recommend avoiding dependencies on it.
yes, we are using the file ledger for prod, but we want to check the json ledger just to verify the structures inside; we won't use it for anything after this
Ah okay. FYI, you can always run `configtxlator` against blocks to look at them in JSON, this is actually how the JSON ledger is implemented.
to use configtxlator do i need to fetch an undecoded block and pass it to it?
You do need to fetch a block and parse it. There is a sample client which will stream all blocks from the orderer as they are created and dump them as JSON to your screen.
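The fetch-and-decode flow can be sketched in two commands (channel name, orderer address, and file names are placeholders):

```shell
# Fetch a block (here: the newest one) from the orderer...
peer channel fetch newest block.pb -c mychannel \
  -o orderer0:7050 --tls --cafile "$ORDERER_CA"
# ...and render it as JSON with configtxlator.
configtxlator proto_decode --type common.Block --input block.pb --output block.json
```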
@jyellick one of the additional parameters that has been added to the Kafka docker compose file in fabric-samples since 1.0.6 is `- KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR` - do we need to set it? Similarly there is an addition to the orderer compose file, `ORDERER_KAFKA_TOPIC_REPLICATIONFACTOR`. We use 3 Orderers - 4 Kafka - 3 Zookeeper with the below configuration
```
- KAFKA_MIN_INSYNC_REPLICAS=2
- KAFKA_DEFAULT_REPLICATION_FACTOR=3
- KAFKA_LOG_RETENTION_MS=-1
```
There is one error we received in our environment, as reported by @swelankarcls recently - do you see it as related, since the error does state an offset-related problem?
```
The stack trace for the Kafka error is:
[2019-08-08 18:27:52,758] INFO Truncating cls1obo-cls4obo-0 to 269 has no effect as the largest offset in the log is 268. (kafka.log.Log)
[2019-08-08 18:27:52,759] INFO [ReplicaFetcher replicaId=2, leaderId=1, fetcherId=0] Based on follower's leader epoch, leader replied with an offset 345 >= the follower's log end offset 345 in cls1obo-cls2obo-0. No truncation needed. (kafka.server.ReplicaFetcherThread)
[2019-08-08 18:27:52,759] INFO Truncating cls1obo-cls2obo-0 to 345 has no effect as the largest offset in the log is 344. (kafka.log.Log)
[2019-08-08 18:27:52,807] FATAL [ReplicaFetcher replicaId=2, leaderId=0, fetcherId=0] Exiting because log truncation is not allowed for partition cls2obo-cls4obo-0, current leader's latest offset 245 is less than replica's latest offset 253 (kafka.server.ReplicaFetcherThread)
[2019-08-08 18:27:52,808] INFO [ReplicaFetcher replicaId=2, leaderId=0, fetcherId=0] Stopped (kafka.server.ReplicaFetcherThread)
[2019-08-08 18:27:52,810] INFO [KafkaServer id=2] shutting down (kafka.server.KafkaServer)
[2019-08-08 18:27:52,811] INFO [KafkaServer id=2] Starting controlled shutdown (kafka.server.KafkaServer)
```
In the Raft setup, the channel configuration holds the raft node's TLS certificate. Even though this certificate may be issued by a CA, since TLS pinning is used (via these TLS certificates), there is no need for the Raft node to be aware of the CA certificate. Please confirm my understanding.
For the `ORDERER_KAFKA_TOPIC_REPLICATIONFACTOR` variable, in v1.0.6, the replication factor for your channels was set by configuring global defaults in your Kafka cluster. Now, you may configure the requested replication factor via Fabric. If you do not specify a replication factor, it should fall back to whatever global defaults you configured in your Kafka cluster.
For the `KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR`, this is the global default I referred to. In our samples, we use a single node Kafka cluster, so cannot use the replication factor default and must override this to be 1.
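(As a concrete illustration, a hedged docker-compose sketch for a single-broker sample; the service names are illustrative:)
```
orderer.example.com:
  environment:
    # Requested replication factor for Fabric channel topics
    - ORDERER_KAFKA_TOPIC_REPLICATIONFACTOR=1

kafka0:
  environment:
    # Global default; must be 1 on a single-node Kafka cluster
    - KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR=1
```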
The Raft cluster communications themselves should be fine with no TLS CAs defined, as the TLS handshake cert verification is done purely by comparing the literal cert in the cluster config to the cert received.
However, Raft replicates blocks via the standard Orderer Deliver interface, and as this may not necessarily operate on the same port or address, authentication to this service is done via TLS CA, so you will in all likelihood need your TLS certs to be validly issued by a CA.
If you wish to run the cluster service on a separate port from the standard ordering services (Broadcast/Deliver), then those certs need not be issued by a TLS CA. @yacovm can you confirm?
~The Raft cluster communications themselves should be fine with no TLS CAs defined, as~ the TLS handshake cert verification is done by comparing the literal cert in the cluster config to the cert received, but then golang performs the cert chain verification.
@jyellick nope, you need a TLS CA for intra-cluster comm
We piggyback on the updateCARoots callback thingy
also keep in mind that the intra-cluster listener is often co-located with the regular orderer listener
@rahulhegde Looks like ~after~ before the Fabric checks, golang also checks the TLS trust chain using the CAs, so see my update above.
no it's *before* the checks
let me show the quote from the TLS code:
```
// VerifyPeerCertificate, if not nil, is called after normal
// certificate verification by either a TLS client or server.
// If it returns a non-nil error, the handshake is aborted and that error results.
VerifyPeerCertificate func(rawCerts [][]byte, verifiedChains [][]*x509.Certificate) error
```
Questions:
1. As per documentation, in which scenario is RAFT using TLS pinning then applicable, if we still require a CA?
2. How do we make the CA certificate available for RAFT consensus validation - as there is no RAFT CA metadata holder for it?
I think we actually can't accomplish verification of the certificate without also verifying them using the chain of trust:
```
if !c.config.InsecureSkipVerify {
	opts := x509.VerifyOptions{
		Roots:         c.config.RootCAs,
		CurrentTime:   c.config.time(),
		DNSName:       c.config.ServerName,
		Intermediates: x509.NewCertPool(),
	}

	for i, cert := range certs {
		if i == 0 {
			continue
		}
		opts.Intermediates.AddCert(cert)
	}

	var err error
	c.verifiedChains, err = certs[0].Verify(opts)
	if err != nil {
		c.sendAlert(alertBadCertificate)
		return err
	}
}

if c.config.VerifyPeerCertificate != nil {
	if err := c.config.VerifyPeerCertificate(certificates, c.verifiedChains); err != nil {
		c.sendAlert(alertBadCertificate)
		return err
	}
}
```
notice the `c.verifiedChains` passed into `VerifyPeerCertificate` is initialized in the block above that does the x509 verification
so it's all or nothing i think
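A minimal sketch of how the pinning half could sit on top of that hook - a hypothetical helper for illustration, not Fabric's actual code; Go runs the callback only after its normal chain verification, matching the "all or nothing" observation above:

```go
package main

import (
	"bytes"
	"crypto/tls"
	"crypto/x509"
	"errors"
	"fmt"
)

// newPinnedConfig returns a tls.Config whose extra verification step
// compares the peer's leaf certificate byte-for-byte against a pinned
// DER-encoded copy (e.g. the cert taken from the channel config).
func newPinnedConfig(pinnedDER []byte) *tls.Config {
	return &tls.Config{
		VerifyPeerCertificate: func(rawCerts [][]byte, _ [][]*x509.Certificate) error {
			if len(rawCerts) == 0 {
				return errors.New("no certificate presented")
			}
			if !bytes.Equal(rawCerts[0], pinnedDER) {
				return errors.New("leaf certificate does not match pinned certificate")
			}
			return nil
		},
	}
}

func main() {
	cfg := newPinnedConfig([]byte{0x01, 0x02})
	ok := cfg.VerifyPeerCertificate([][]byte{{0x01, 0x02}}, nil) == nil
	bad := cfg.VerifyPeerCertificate([][]byte{{0x09}}, nil) != nil
	fmt.Println(ok, bad) // prints "true true"
}
```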
> 1. As per documentation, in which scenario RAFT using TLS Pinning then applicable, if we still require CA?
Raft *always* uses TLS pinning.
> 2. How do we make the CA certificate available for RAFT consensus validation - as there is no CA metadata holder for it?
what do you mean no metadata holder?
you just need a TLS CA to issue the certificate for Raft.... why is it special or complex?
Clarification: does TLS pinning still use the chain of trust for validation, or would it use only the end-entity issued certificate?
yes
it first validates the x509 chain
and afterwards checks byte by byte comparison
And this is the additional validation for TLS pinning.
yes
why are you asking all this?
is something not working?
We are in process of migrating our Production System from Kafka to Raft and hence clarifying what changes are required.
oh i see
2. How do we make the CA certificate available for RAFT consensus validation - as there is no CA metadata holder for it?
what do you mean no metadata holder?
As per documentation, Raft consenter nodes can have a new Root CA issuing TLS certificates. Since a CA is used for RAFT TLS validation, where would the CA certificate be placed?
Looking at the Raft Configuration (local/channel), I don't find a placeholder.
```
Consenters:
- Host: raft0.example.com
Port: 7050
ClientTLSCert: path/to/ClientTLSCert0
ServerTLSCert: path/to/ServerTLSCert0
...
```
you don't use a separate CA for Raft
you use the same CA as for the orderer CA
Can you clarify the statement from documentation - my interpretation is a new CA can be provisioned
https://hyperledger-fabric.readthedocs.io/en/release-1.4/raft_configuration.html#local-configuration Cluster Configuration
```
This is useful for cases where you want TLS certificates issued by the organizational CAs, but used only by the cluster nodes to communicate among each other, and TLS certificates issued by a public TLS CA for the client facing API.
```
ah yes - you can run the TLS stuff on a separate listener, ok ?
then you can just use custom TLS CAs
but only for Raft
Right, so where would the RAFT TLS CA be added? It should not be in the channel configuration MSP, for security reasons.
Question 3 - Clarification on the configuration
```
ReplicationBufferSize: the maximum number of bytes that can be allocated for each in-memory buffer used for block replication from other cluster nodes. Each channel has its own memory buffer. Defaults to 20971520 which is 20MB.
```
So would each Raft node (only followers, or the leader too) require 20MB per channel during process initialization?
@rahulhegde we have a `General.Cluster.RootCAs` key that is not shown in https://hyperledger-fabric.readthedocs.io/en/release-1.4/raft_configuration.html for some reason
:(
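(For reference, a hedged sketch of where that key sits in `orderer.yaml` - the paths are illustrative, and the commented listener keys are only needed when cluster traffic is split from the client-facing listener:)
```
General:
  Cluster:
    RootCAs:
      - /var/hyperledger/orderer/cluster-tls-ca.pem
    ClientCertificate: /var/hyperledger/orderer/tls/client.crt
    ClientPrivateKey: /var/hyperledger/orderer/tls/client.key
    # Only for a dedicated intra-cluster listener:
    # ListenAddress: 0.0.0.0
    # ListenPort: 7051
    # ServerCertificate: /var/hyperledger/orderer/tls/cluster-server.crt
    # ServerPrivateKey: /var/hyperledger/orderer/tls/cluster-server.key
```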
> So would each raft node (only follower or even the leader) node require 20MB per channel during process initialization?
yes exactly.
but when the replication finishes, the buffer is emptied
Thanks for the clarification, this indicates the chain-of-trust for RAFT would be validated using the local MSP and there is no need to update the channel configuration.
Should the local MSP only be used for the client side of TLS validation, while the server validates the client identity using the channel configuration?
it doesn't work with the local MSP
not sure why you're saying that
Hello - Sorry, probably a repetitive question here..
I am trying to set up with Raft. Orderers look good! However my "--tls" channel create command is failing.
I have mounted the orderer cert file into the peer container (tlsca.company-cert.pem).
I am using this file as the --cafile option in the peer channel create command from within the container...
When I make the call, the orderer rejects with "signed by unknown authority". (I can see this in the orderer0 container logs)
A few things online have pointed me to find how to add orderer hostnames in configtx.yaml.. however I do not think this is the likely solution
TLS has nothing to do with MSP
these root TLS CA certs are just added to the cert pool
meaning - both these *and* the channel ones are in use
this is an extension, not a replacement
My bad - i wanted to say CA certificate that is exposed through the local file-system for that raft node.
yeah
```
var serverRootCAs [][]byte
for _, serverRoot := range conf.General.Cluster.RootCAs {
	rootCACert, err := ioutil.ReadFile(serverRoot)
	if err != nil {
		logger.Fatalf("Failed to load ServerRootCAs file '%s' (%s)",
			err, serverRoot)
	}
	serverRootCAs = append(serverRootCAs, rootCACert)
}
```
yes what about it?
`meaning - both these and the channel ones are in use`
there is no provisioning I see for channel ones - is this not supported yet? If supported, sorry, I am not able to locate the param in the code that would be used to update the cluster CA in the channel configuration.
I only see this code - that updates the cluster CA list used for trust validation in RAFT.
the channel ones are also in use
and the same logic, using the Root CA from the Orderer MSP TLS CA + ICA list and the file-system Root CA list, is used on both sides for mutual TLS validation.
Hello, I have a question regarding Raft Orderer.
Like the Kafka Orderer, does the Raft orderer also save the `entire ledger in File`?
And if the Raft orderer takes a `Snapshot`, does that mean it is not saving the entire ledger data?
My requirement is to `save disk space in the ordering node`, so it is better for me if the Raft orderer does not save the ledger info.
Can someone kindly give me any info on whether Raft saves the ledger info or not? Thank you.
The orderer needs to be able to provide blocks to peers which request them, so it does store all the blocks for a channel on disk. However, the Raft ordering service will likely save you disk space over using the Kafka based one because there is no need to store other copies of the data on the Kafka brokers.
If the Raft orderer is storing the blocks, what is the purpose of the snapshot function in the Raft orderer?
Snapshots are an implementation detail of the Raft protocol. It helps a replica get back in sync with the rest of the network quickly. Snapshots I believe default to 20 MB and are limited to have 5 retained.
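(For reference, the snapshot interval is tunable per channel under the etcdraft Options in configtx.yaml; a hedged sketch based on the 1.4 sample config, values illustrative:)
```
Orderer:
  OrdererType: etcdraft
  EtcdRaft:
    Options:
      TickInterval: 500ms
      ElectionTick: 10
      HeartbeatTick: 1
      MaxInflightBlocks: 5
      SnapshotIntervalSize: 20 MB
```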
I see. Is there any way to remove the ledger data of a specific channel from the Raft orderer?
You do not need to have every raft node participate in every channel
If a node is not a part of a channel, it will stop replicating blocks for that channel
So is it possible to revoke/remove the raft node from participating in some channel?
Yes
You may grow and shrink the Raft consenter set over time
This can be achieved by changing the channel config correct?
Correct
So if I wanted to delete the data regarding a particular channel from the Raft node, can I change channel config to make the raft node stop participating in the channel, and then delete the 'ledger file' associated with the channel from the raft node ?
Yes, you will also want to delete the 'indexes' directory to rebuild them as you have manually deleted files
Of course have the node stopped when you do this
You should do some testing around this, typically, we do not delete ledger data, only disconnect the node and prevent it from getting new ledger data
But I see no obvious reason it should not work
I see. I have a requirement to 'delete' a channel after a specific time. So my options are either stop and remove all the raft nodes participating in the channel, or do something like we discussed above. Do you see any other/better way of achieving this?
I think the approach described above makes the most sense for your use case
But as I said, do some testing.
You might have to leave the first block file, for instance. Though this should hopefully be fairly negligible in the scheme of things. If you find problems, feel free to report them, it would be a nice feature to support.
Got it, thank you very much! I will test it and report here if some problems :)
Also, just fyi, the link to the wiki page for the channel "Please Read Before Posting" seems broken
Ah, thanks, I'll try to fix that
[Please Read Before Posting](https://wiki-archive.hyperledger.org/community/chat_channels/fabric-orderer)
Hi, regarding Kafka to Raft migration: the directory structure for the two consensus types differs in the orderer. What operation is involved on the file-system for this activity - is it like rebuilding the ledger or renaming the directories?
which dir are you referring to?
The Data Persistence directory of Kafka and Raft
In this case, I think other than `WALDir` and `SnapDir` in `Consensus` section of `orderer.yaml`, you don't need to change other configs.
@guoger my question is specific to the activities that happen during the Kafka to Raft migration on the file-system, not the compose changes/procedure for migration, as that is already documented. Question: since there is a change in directory structure in the data persistence folder, does the orderer rebuild the ledger or just execute folder/file move commands?
Reason: this information will let us know the estimated time for migration.
> since there is a change in directory structure to data persistence folder, does orderer rebuild the ledger or just executes folder/files move commands.
dir structure is *not* changed and it does *not* rebuild ledger
I am reading about raft consensus, it says if there are no heart beats from leader, a node will promote itself as candidate. But if the current leader always sends heart beats (changes to ledger) with in specified timeout, will it always stay as leader?
```
Raft nodes are always in one of three states: follower, candidate, or leader. All nodes initially start out as a follower. In this state, they can accept log entries from a leader (if one has been elected), or cast votes for leader. If no log entries or heartbeats are received for a set amount of time (for example, five seconds), nodes self-promote to the candidate state. In the candidate state, nodes request votes from other nodes. If a candidate receives a quorum of votes, then it is promoted to a leader. The leader must accept new log entries and replicate them to the followers.
```
yes it will
Hi, @dave.enyeart, @jyellick Seeing an issue with starting orderers with couchDB as the state database in kubernetes cluster - "panic: Error opening leveldb: resource temporarily unavailable". Any idea why it's trying to open levelDB when we have couchDB as the state database? Here is the complete stack trace -
```
panic: Error opening leveldb: resource temporarily unavailable
goroutine 1 [running]:
github.com/hyperledger/fabric/common/ledger/util/leveldbhelper.(*DB).Open(0xc000078ec0)
	/opt/gopath/src/github.com/hyperledger/fabric/common/ledger/util/leveldbhelper/leveldb_helper.go:80 +0x271
github.com/hyperledger/fabric/common/ledger/util/leveldbhelper.NewProvider(0xc0003f23b0, 0xc0003f23b0)
	/opt/gopath/src/github.com/hyperledger/fabric/common/ledger/util/leveldbhelper/leveldb_provider.go:40 +0xda
github.com/hyperledger/fabric/common/ledger/blkstorage/fsblkstorage.NewProvider(0xc0003ee260, 0xc0003ee280, 0x114b6a0, 0x1b89950, 0x7d23d5, 0xc00011e4b8)
	/opt/gopath/src/github.com/hyperledger/fabric/common/ledger/blkstorage/fsblkstorage/fs_blockstore_provider.go:36 +0x7f
github.com/hyperledger/fabric/common/ledger/blockledger/file.New(0xc000447d70, 0x23, 0x114b6a0, 0x1b89950, 0xc0003ae810, 0x43735e)
	/opt/gopath/src/github.com/hyperledger/fabric/common/ledger/blockledger/file/factory.go:72 +0x101
github.com/hyperledger/fabric/orderer/common/server.createLedgerFactory(0xc000208480, 0x114b6a0, 0x1b89950, 0x0, 0x0, 0x0, 0x0)
	/opt/gopath/src/github.com/hyperledger/fabric/orderer/common/server/util.go:33 +0x1fa
github.com/hyperledger/fabric/orderer/common/server.Start(0x1015083, 0x5, 0xc000208480)
	/opt/gopath/src/github.com/hyperledger/fabric/orderer/common/server/main.go:109 +0x255
github.com/hyperledger/fabric/orderer/common/server.Main()
	/opt/gopath/src/github.com/hyperledger/fabric/orderer/common/server/main.go:91 +0x1ce
main.main()
	/opt/gopath/src/github.com/hyperledger/fabric/orderer/main.go:15 +0x20
```
Isn't that a problem where an organization can always produce transactions and always stay as leader?
@guoger
producing blocks in turn makes sense in the context of BFT, to mitigate censorship. Although raft-based OSN is CFT
Hi All, why are we providing the same TLS cert twice in the RAFT sample configuration? ``` ClientTLSCert: crypto-config/ordererOrganizations/example.com/orderers/orderer.example.com/tls/server.crt
ServerTLSCert: crypto-config/ordererOrganizations/example.com/orderers/orderer.example.com/tls/server.crt```
any idea or suggestion please
two certs are for client auth and server auth respectively, and for the sake of simplicity, we are using the same in the sample. you can use different ones
how can I create different TLS certs? Is there any documentation for that if I am using cryptogen
@guoger thanks for your quick response .
to my knowledge, i don't think it's documented and i don't think cryptogen actually does that
any other suggestion from your side for this @guoger ?
it's not really different from the cert used by client to communicate with orderer, you can use that given current structure. In practice, you should always have a proper tls ca to issue certificates
ok thanks @guoger
i will try
hello, considering Raft consensus: if I add a new orderer, and a vote for a new leader triggers after this orderer joins, is there a chance that this new orderer gets elected before it receives the ledger blocks?
in current implementation, new node participates in consensus *after* it receives all missing blocks, so it wouldn't be elected as new leader in the case you described.
is it the same for orderers that were in the network but were stopped and started with a ledger that isn't current?
The orderer does not have/use a state database. It only maintains a leveldb for maintaining offsets into its blockstore.
the lagged node will catch up with other nodes through Raft protocol - either through log replication or snapshots - so in short, the Raft instance on this particular orderer is *started*, but behind. So this is different from a newly started node, whose Raft instance is *not* started before it gets missing blocks.
HI All , I am getting an error when trying dev mode `* 'General' has invalid keys: LogFormat, LogLevel`
any idea please
this is not an orderer-related question, better ask in #fabric-chaincode-dev , also which version of fabric are you using and what did you try exactly? you should be able to use `FABRIC_LOGGING_SPEC` to set log level for peer/orderer
@guoger I am trying this https://hyperledger-fabric.readthedocs.io/en/release-1.4/peer-chaincode-devmode.html?highlight=dev%20mode
i fixed it earlier somehow
but i forgot now
what was the solution
running this command `ORDERER_GENERAL_GENESISPROFILE=SampleDevModeSolo orderer`
basically @guoger I think this is related to the orderer. The orderer is not able to parse orderer.yaml correctly
I am using orderer binary 1.4.2
and my source code is on 1.4 branch
logs please?
@guoger Right now I am in this directory `go/src/github.com/hyperledger/fabric/sampleconfig/`
and exported it as FABRIC_CFG_PATH so that orderer can take this as input
https://pastebin.com/cWD9NDT6
these are the logs
`orderer version`?
```
orderer:
Version: 1.4.2
Commit SHA: c6cc550
Go version: go1.11.5
OS/Arch: linux/amd64
```
git branch output ``` master
release-1.2
* release-1.4
```
I have created the channel near about 20 days ago
and faced the same issue
and fixed it
but now no luck
works for me
```
orderer:
Version: 1.4.2
Commit SHA: c6cc550
Go version: go1.12.5
OS/Arch: darwin/amd64
```
try clean your ledger file location
no luck erased
from which location you run this command
?
@guoger
sampleconfig
try git branch
@pankajcheema let's chat in thread w/o polluting main chat room
what is the output
orderer process simply starts fine
please try `git branch ` command
git is on hash c6cc550
so that i can identify on which branch your source code is
like release
git is being checked out to hash c6cc550
same error on the head provided by you
try it on another machine or vm, see if it's reproducible
also, this should not affect you but fabric requires Go 1.12.x
go version go1.10 linux/amd64
this is mine go version
should i try to update
not necessarily, but first try in another environment pls
or container, or vm, whatever that is "cleaner"
I am using native ubuntu
physical machine
16.04
it doesn't matter...
I hope so
fwiw, i'm using 16.04 ubuntu as well
so i think there are some weird configs on your env, therefore pls see if this problem persists on another env
let me try
@guoger now getting this ```2019-08-23 15:39:27.155 IST [common.tools.configtxgen.localconfig] completeInitialization -> INFO 003 Orderer.Addresses unset, setting to [127.0.0.1:7050]
2019-08-23 15:39:27.155 IST [common.tools.configtxgen.localconfig] completeInitialization -> INFO 004 orderer type: solo
2019-08-23 15:39:27.155 IST [common.tools.configtxgen.localconfig] Load -> INFO 005 Loaded configuration: /home/pankaj/go/src/github.com/hyperledger/fabric/sampleconfig/configtx.yaml
2019-08-23 15:39:27.158 IST [orderer.common.server] Start -> PANI 006 Failed validating bootstrap block: initializing channelconfig failed: could not create channel Orderer sub-group config: Orderer Org SampleOrg cannot contain endpoints value until V2_0+ capabilities have been enabled
panic: Failed validating bootstrap block: initializing channelconfig failed: could not create channel Orderer sub-group config: Orderer Org SampleOrg cannot contain endpoints value until V2_0+ capabilities have been enabled
```
any idea
i have deleted the fabric folder and pull it again
after that reached here
Anyone please suggest that why I am getting this error ```2019-08-23 15:46:04.163 IST [orderer.common.server] Start -> PANI 006 Failed validating bootstrap block: initializing channelconfig failed: could not create channel Orderer sub-group config: Orderer Org SampleOrg cannot contain endpoints value until V2_0+ capabilities have been enabled
panic: Failed validating bootstrap block: initializing channelconfig failed: could not create channel Orderer sub-group config: Orderer Org SampleOrg cannot contain endpoints value until V2_0+ capabilities have been enabled
```
@akshay.sood
orderer version 1.4.2
trying to run in dev mode using command `ORDERER_GENERAL_GENESISPROFILE=SampleDevModeSolo orderer`
clean the ledger files and try again
i guess you have some block files produced by 2.0 orderer, and then switched to use 1.4 orderer binary, which attempts to load leftover blocks
was the previous problem solved?
@guoger yes
No luck
I am removing /var/hyperledger/production
@guoger am I right?
In fabric, `log.cleanup.policy` is not set in Kafka and by default it is `delete`. I was thinking of changing this, as in my case Kafka is deleting data even after `log.retention.ms=-1`. Could anyone help me with this?
we could set that to `compact`
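(Side note: the Fabric Kafka guidance is to disable retention on the brokers rather than rely on compaction; a hedged `server.properties` sketch - verify against your Kafka version and any per-topic overrides:)
```
# Disable time-based and size-based retention (broker restart required)
log.retention.ms=-1
log.retention.bytes=-1
```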
On changing the machine IP address of peer1 in the configtx file, I tried to regenerate the .tx file using the command: peer channel update -c legaldescriptionchannel -f legaldescription-channel.tx -o 123.32.14.122:7050
but getting an issue --
Error: got unexpected status: BAD_REQUEST -- error applying config update to existing channel 'aaachannel': error authorizing update: error validating ReadSet: proposed update requires that key [Group] /Channel/Application be at version 0, but it is currently at version 1
You need to follow a process similar to https://hyperledger-fabric.readthedocs.io/en/release-1.4/config_update.html in order to create a proper/valid config update tx
i had first generated the transaction file updating the configtx.yaml using configtxgen.
following the doc i ran the peer channel update and the error what i got above
actually i am not understanding what is meant by that error
Did you try setting the flag - log.cleaner.enable=false
It's true by default as of v0.9.0.1
Any update on this, previously network was up and running but on IP change an issue came.
You can't use configtxgen to generate the update transaction.
In the link I posted above, it shows that you actually need to get the latest config block from the channel ( `peer channel fetch ....` ), use it as the base for your changes, then use configtxlator to create the delta update transaction and then submit it.
The error you are getting is a version mismatch as you are trying to submit a change at the same version rather than submitting an updated version
Got it, Thanks
no problem ... it's not easy :(
For future reference, I highly recommend using DNS names rather than IP addresses ;)
We were thinking to use DNS only; we tried placing DNS names but it was not working. How do we achieve that?
What was not working? All of the samples we provide use DNS names for the orderer (and peer) address
we tried using hostnames
instead of IP
Maybe I need to learn how to set up DNS and then do that.
On one machine we have the orderer and kafka-zookeeper; how do we differentiate DNS names for the orderer and the kafka broker on the same machine? Can the DNS name be the same as the docker container name?
Hi Team,
From the link -- https://github.com/Shopify/sarama , the sarama library is supporting the latest versions of kafka 2.3.x and zookeeper.
So can we upgrade the versions of kafka from 1.0.2 to 2.3.0 and zookeeper from 3.4.14 to 3.5.5 using and modifying the files and generating the docker images from the path --> https://github.com/hyperledger/fabric-baseimage/tree/master/images
Can we use the generated images with v1.4.3 or v2.0.0 fabric components?
You should perform your own testing for compatible versions of Kafka/Zookeeper, but there is no reason to believe it should not work
@jyellick i had tried earlier with native version of kafka 2.2 and it was working fine would create the docker images for the latest versions and try doing it
@jyellick Also is there any plan of updating the docker hub with the hyperledger-zookeeper and hyperledger-kafka images with latest versions?
I think we would like to get out of the game of supplying kafka/zk images via Fabric. There are lots of other good sources of these images, so we'd rather not duplicate this work.
@jyellick can you suggest some links or sources from where we can pick up these images
any idea what this means:
```
2019-08-27 19:48:55.443 UTC [orderer.common.broadcast] ProcessMessage -> WARN 23d [channel: testchainid] Rejecting broadcast of config message from 172.31.0.1:35238 because of error: error applying config update to existing channel 'mychainid': error authorizing update: error validating ReadSet: existing config does not contain element for [Policy] /Channel/Consortiums/MyConsortium/org1/Admins but was in the read set
```
i'm submitting a channel config update to the system channel
When you get 'but was in the read set' errors, this generally indicates that the config you generated the update from is no longer current.
You should always pull the latest copy of the channel config before modifying it and computing your update.
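In case it helps, the usual fetch-and-recompute flow looks roughly like this (the channel name `mychannel`, the orderer address, and `$ORDERER_CA` are placeholders -- substitute your own):

```shell
# Placeholders throughout: mychannel, orderer.example.com:7050, $ORDERER_CA

# 1. Fetch the latest config block for the channel
peer channel fetch config config_block.pb -o orderer.example.com:7050 \
  -c mychannel --tls --cafile "$ORDERER_CA"

# 2. Decode the block and strip it down to the config itself
configtxlator proto_decode --input config_block.pb --type common.Block \
  | jq .data.data[0].payload.data.config > config.json

# 3. Copy it, edit the copy, then compute the delta between the two
cp config.json modified_config.json
# ... make your changes to modified_config.json ...
configtxlator proto_encode --input config.json --type common.Config --output config.pb
configtxlator proto_encode --input modified_config.json --type common.Config --output modified_config.pb
configtxlator compute_update --channel_id mychannel \
  --original config.pb --updated modified_config.pb --output update.pb
```

The `update.pb` produced this way contains only the delta, so it applies cleanly as long as no one else has updated the config in between.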
Hello, I'm facing errors while doing peer channel create.
https://stackoverflow.com/questions/57291538/generating-new-channel-configtx/57690480#57690480
reference details are above.
Is there anything I need to change in configtx.yml?
thanks @jyellick , i fetched the latest. my network is blank and i'm the only one making changes to it
```
2019-08-28 13:54:41.641 UTC [orderer.common.broadcast] ProcessMessage -> WARN 252 [channel: testchainid] Rejecting broadcast of config message from 172.31.0.1:35780 because of error: error applying config update to existing channel 'testchainid': error authorizing update: error validating DeltaSet: delta set was empty -- update would have no effect
```
I get an error while updating the orderer channel with one more org added to it.
```[grpc] HandleSubConnStateChange -> DEBU 04a pickfirstBalancer: HandleSubConnStateChange: 0xc0004a7930, READY Error: got unexpected status: BAD_REQUEST -- error applying config update to existing channel 'ordererchannel': error authorizing update: error validating DeltaSet: policy for [Group] /Channel/Consortiums/SampleConsortium not satisfied: implicit policy evaluation failed - 0 sub-policies were satisfied, but this policy requires 1 of the 'Admins' sub-policies to be satisfied ```
I signed the .pb file with the orderer admin, org1 admin, and org1 peer2 admin, and I still get the above error. I don't know how to verify the policy from the block. The structure looks kind of okay, but I still find it difficult to extract exact values from the genesis block. There are two kinds of admins: the admin for the org, and the admin of individual peers. Who should sign the block?
[this](https://stackoverflow.com/questions/57662562/when-i-try-to-create-a-channel-using-hyperledger-fabric-the-request-fails/57662645#57662645) may help you
@guoger hi, I found that the client side (Java SDK) has an API to revoke a user, and the document shows that the CA would generate a CRL (Certificate Revocation List), but which part of Fabric would use this CRL to verify transactions? Would the orderer get the CRL from the CA and verify the transaction?
The CRL must be updated in the channel config
Then perhaps you are specifying the wrong channel id in the update? You can follow the channel config update documentation found here: https://hyperledger-fabric.readthedocs.io/en/release-1.4/config_update.html
is there any example or document shows how to update the CRL to the channel config?
Based on the API provided by the Java SDK, it seems you can only send a revoke request to the CA and generate a CRL, but there is no sign of updating the CRL in the channel. Is it that the CA would update the CRL in the channel automatically?
Hi, the CA has no notion of what channels exist or whom to contact. You need to manually update all channels.
Documentation on how to do this is also lacking, but basically:
1. Produce a CRL
2. Retrieve a channel config block
3. Decode the config block to json
4. Locate "revocation_list" for your MSP
5. Convert your CRL to base64, and place it in the list
6. Proceed with the channel update by generating the block, getting signatures, etc.
7. Repeat with any other channel
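A rough sketch of steps 3-5 with jq, assuming the decoded channel config is in `config.json` and your MSP is `Org1MSP` under `Application` (both names are placeholders -- the `revocation_list` field sits inside `FabricMSPConfig`, i.e. under the MSP's `value.config`):

```shell
# Hypothetical file/MSP names: crl.pem, config.json, Org1MSP -- adjust for your network.
# Base64-encode the CRL as a single line
CRL_B64=$(base64 crl.pem | tr -d '\n')

# Append it to the MSP's revocation_list in the decoded channel config
jq --arg crl "$CRL_B64" \
  '.channel_group.groups.Application.groups.Org1MSP.values.MSP.value.config.revocation_list += [$crl]' \
  config.json > modified_config.json
```

From there you re-encode both JSON files, compute the update with configtxlator, collect signatures, and submit it as with any other channel config update.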
@Coada thx for the detail! I found a channel config JSON example at https://hyperledger-fabric.readthedocs.io/en/latest/config_update.html
but there is no example of where to set the "revocation_list"; my guess is to put it under the MSP/value/config tag, am I right?
i found the FabricMSPConfig struct in fabric, thx again
Has joined the channel.
[channel: byfn-sys-channel] Rejecting broadcast of config message from 172.30.0.7:35084 because of error: implicit policy evaluation failed - 0 sub-policies were satisfied, but this policy requires 1 of the 'Writers' sub-policies to be satisfied: permission denied
2019-08-29 09:16:45.248 UTC [comm.grpc.server] 1 -> INFO 021 streaming call completed grpc.service=orderer.AtomicBroadcast grpc.method=Broadcast grpc.peer_address=172.30.0.7:35084 grpc.code=OK grpc.call_duration=709.682µs
trying to update the system channel with a new org... facing the above error... please help!
Thank you very much @guoger, I understand a bit more from the link. But I still receive an error and have a doubt. As we provide a user, in this case the admin of the orderer org (as I am trying to change the orderer system channel), the users/admin/msp/ directory doesn't have admincerts, as this is the MSP of the admin. So should I create a directory named 'admincerts' and copy its signcerts into admincerts? How does it work? ``` ERRO 022 Cannot run peer because error when setting up MSP of type bccsp from directory /tmp/hyperledger/org1/peer1/assets/org0/msp/users/admin/msp: could not load a valid admin certificate from directory /tmp/hyperledger/org1/peer1/assets/org0/msp/users/admin/msp/admincerts: stat /tmp/hyperledger/org1/peer1/assets/org0/msp/users/admin/msp/admincerts: no such file or directory
```
use an admin for the organization that owns the peer when doing the update
Hi friends,
I am getting `tls: bad certificate server=Orderer remoteaddress=172.25.0.12:52966` when I am using
orderer type = etcdraft.
Any idea how to solve the problem?
I solved this issue by changing the channel name to "mychannel1" in my test lab.
Change the channel name and try. this helped me lot
ok sure....
I have overcome this issue by just creating admincerts directory and copying the signcerts into it. Now it works. Thank you guoger!
Has joined the channel.
I need to plan our system's migration to Raft. When moving to Raft, does the orderer container use more storage than with Zookeeper/Kafka? E.g. does the orderer persist any Raft logs in storage, and if so, how should I calculate their size?
No... Raft truncates its WALs
Raft truncates the WAL, which I believe defaults to 20MB in size; it also retains 5 snapshots of the WAL by default, so I believe you should be able to expect about 120MB of Raft storage per channel in addition to your blockchain. This size will not change over time as your blockchain grows.
@jyellick If an org leader peer later wants to update its ledger from scratch directly from the orderer, will it be able to do so for the whole blockchain, given that the WAL is getting truncated in Raft?
yes, it will be able to. The orderer truncates the WAL, but *not* the ledger
think of the WAL as a facilitator of the consensus mechanism, which is independent of the orderer delivery service
ok thanks for the info @guoger
Any link where i can get more information about the WAL and snapshot thing
we are using etcd/raft lib, and they have some docs around wal [here](https://github.com/etcd-io/etcd/blob/master/wal/doc.go)
Can you please guide me in detail how to achieve it as I am facing the same issue again.
Has joined the channel.
What are the requirements to self-bootstrap an orderer?
I'm not sure I understand the question. You bootstrap your orderer with the genesis block of the ordering system channel you desire.
batch timeout
Is it possible to dynamically change the configuration of BatchTimeout or BatchSize after starting a network?
yes, you'll need to submit a config update tx to update those values, see [this tutorial](https://hyperledger-fabric.readthedocs.io/en/latest/config_update.html)
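As a concrete sketch, once the channel config is decoded to `config.json` (per the tutorial), the orderer batch values can be edited with jq before computing the update. The `1s`/`100` values below are just examples:

```shell
# Example values only -- pick a timeout and batch size appropriate for your workload.
jq '.channel_group.groups.Orderer.values.BatchTimeout.value.timeout = "1s"
  | .channel_group.groups.Orderer.values.BatchSize.value.max_message_count = 100' \
  config.json > modified_config.json
```

You then re-encode both files, run `configtxlator compute_update`, wrap the result in an envelope, sign it with an orderer org admin, and submit it to the channel.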
`2019-09-04 13:02:11.489 UTC [orderer.consensus.etcdraft] createOrReadWAL -> INFO 07a Found WAL data at path '/var/hyperledger/production/orderer/etcdraft/wal/nath41channel', replaying it channel=nath41channel node=2
2019-09-04 13:02:11.489 UTC [orderer.commmon.multichannel] newChainSupport -> PANI 07b [channel: nath41channel] Error creating consenter: failed to restore persisted raft data: failed to create or read WAL: failed to open WAL: fileutil: file already locked
panic: [channel: nath41channel] Error creating consenter: failed to restore persisted raft data: failed to create or read WAL: failed to open WAL: fileutil: file already locked`
Orderers are failing (Raft, Kubernetes, 1.4.3) when trying to create a channel
leader election has completed and channel created
another process has locked wal files.
was orderer being restarted?
deepak shinde
Hello @guoger, there are three orderers in total; when I try to create a channel, two orderers went down with the above error and restarted
did you point them to the same directory
?
you mean all orderers to same directory ?
No each orderer has its own PVC
they should be using *different* dir
yes diff dir
hmm.. sounds like a pv problem.. if it's a newly created channel, it shouldn't say `Found WAL data`
all of them should create new wal
yes, that is the case; each channel has new WAL data
I am debugging the issue with my current setup
Each orderer has individual PVC's
as we chatted, please create a Jira with detailed reproduction steps, and send the Jira number here
thank you
Hello, the Raft Migration Document (https://hyperledger-fabric.readthedocs.io/en/release-1.4/kafka_raft_migration.html) indicates the Raft metadata state to be set as `NORMAL`. However, parsing the application channel config block from byfn shows the value to be set as `STATE_NORMAL`. Do we need an update to this procedure?
Please help with this. I also tried copying the peer's blockfile to the orderer, but after doing that the orderer can't restart and fails with the following error: https://pastebin.com/AAWgMmYD
Has joined the channel.
hi there, does anyone know if the smart-bft ordering service is compatible with a v1.4 network? https://github.com/bft-smart/fabric-orderingservice
Hello, @jyellick
In our blockchain setup, Organization A is the administrator for all channel configuration updates. However, once I move the system channel into Raft maintenance mode (STATE_MAINTENANCE), the only way to fetch the block is by using the orderer organization, judging by the orderer log below.
```
2019-09-05 14:28:42.441 UTC [orderer.common.server] Deliver -> DEBU 37070 Starting new Deliver handler
2019-09-05 14:28:42.441 UTC [common.deliver] Handle -> DEBU 37071 Starting new deliver loop for 172.27.0.14:60410
2019-09-05 14:28:42.441 UTC [common.deliver] Handle -> DEBU 37072 Attempting to read seek info message from 172.27.0.14:60410
2019-09-05 14:28:42.441 UTC [policies] Evaluate -> DEBU 37073 == Evaluating *policies.implicitMetaPolicy Policy /Channel/Orderer/Readers ==
2019-09-05 14:28:42.442 UTC [policies] Evaluate -> DEBU 37074 This is an implicit meta policy, it will trigger other policy evaluations, whose failures may be benign
2019-09-05 14:28:42.442 UTC [policies] Evaluate -> DEBU 37075 == Evaluating *cauthdsl.policy Policy /Channel/Orderer/clsorder/Readers ==
2019-09-05 14:28:42.442 UTC [cauthdsl] func1 -> DEBU 37076 0xc0008cab80 gate 1567693722442050139 evaluation starts
2019-09-05 14:28:42.442 UTC [cauthdsl] func2 -> DEBU 37077 0xc0008cab80 signed by 0 principal evaluation starts (used [false])
2019-09-05 14:28:42.442 UTC [cauthdsl] func2 -> DEBU 37078 0xc0008cab80 processing identity 0 with bytes of f3dfc0
2019-09-05 14:28:42.442 UTC [cauthdsl] func2 -> DEBU 37079 0xc0008cab80 identity 0 does not satisfy principal: the identity is a member of a different MSP (expected clsorder, got clsbgb65)
2019-09-05 14:28:42.442 UTC [cauthdsl] func2 -> DEBU 3707a 0xc0008cab80 principal evaluation fails
```
This is not correct, as Organization A is the reader, writer and *admin*, and hence all these migration steps must be allowed using Organization A.
Please share your thoughts; this looks like a defect to me, as the check is specifically scoped to `/Channel/Orderer/Readers`.
Hey @rahulhegde
Yes, maintenance mode does explicitly switch from using `/Channel/Readers` and `/Channel/Writers` to instead use `/Channel/Orderer/Readers` and `/Channel/Orderer/Writers`
The easiest fix in your case, is to add your admin to the `/Channel/Orderer/OrdererOrg/Readers` and `/Channel/Orderer/OrdererOrg/Writers` policies.
(This is because those `/Channel/Orderer/{Readers,Writers}` policies implicitly recurse into the org level ones)
(Or you could modify the `/Channel/Orderer/{Readers,Writers}` policies themselves, but I thought we'd discussed this in the past and decided not to)
I see this as workaround to move forward. Is there a reason for switch and can it be controlled?
Right, this is correct, as there are many configuration updates that need to be done.
This workaround would be required every time there is a Raft configuration update, say a TLS certificate rotation. I do see this as a defect, as every channel is knowingly configured to have Organization A as admin for all updates.
No, 'maintenance mode' is currently only required for migration. You may perform Raft TLS cert rotation, for instance, without any interruption in service.
We chose the more general name of 'maintenance mode' because it puts the orderers into a state where they will no longer accept transactions or disseminate blocks.
The only task which requires it, though, is migration; otherwise it would be optional.
Suggestion - you mentioned adding the admin certificate of Organization A (CLS); is it to the MSP of /Channel/Orderer?
Adding it to the MSP could be done, but you would also need to add the root certs, and specify a different MSP ID
[2] Could a second approach be to change the admin of /Channel/Orderer to be the orderer organization admin rather than the Organization A admin? Do you see an issue?
Far easier to modify the orderer org policies to also allow the other MSPID
Do you mean updating the following /Channel/Orderer/groups/clsorder policies?
```
"Admins": {
"mod_policy": "Admins",
"policy": {
"type": 1,
"value": {
"identities": [
{
"principal": {
"msp_identifier": "clsbgb65",
"role": "ADMIN"
},
"principal_classification": "ROLE"
}
],
"rule": {
"n_out_of": {
"n": 1,
"rules": [
{
"signed_by": 0
}
]
}
},
"version": 0
}
},
"version": "0"
},
```
I am not sure if this is acceptable - but could you review it?
```
"Admins": {
"mod_policy": "Admins",
"policy": {
"type": 1,
"value": {
"identities": [
{
"principal": {
"msp_identifier": "clsbgb65",
"role": "ADMIN"
},
"principal_classification": "ROLE"
},
{
"principal": {
"msp_identifier": "clsorder",
"role": "ADMIN"
},
"principal_classification": "ROLE"
}
],
"rule": {
"n_out_of": {
"n": 1,
"rules": [
{
"signed_by": 0
},
{
"signed_by": 1
}
]
}
},
"version": 0
}
},
"version": "0"
},
```
I would be more inclined to add a second principal than to modify the first. So instead you get
```
"identities": [
  { "principal": { "msp_identifier": "clsordorg", "role": "ADMIN" } },
  { "principal": { "msp_identifier": "clsbgb65", "role": "ADMIN" } }
],
"rule": {
  "n_out_of": {
    "n": 1,
    "rules": [
      { "signed_by": 0 },
      { "signed_by": 1 }
    ]
  }
}
```
I would do this to each of the org's Reader/Writer/Admins policies
Our first principal is `clsbgb65` in all of /Channel/Orderer, which is the CLS organization. It does not have any reference to the `clsorder` organization.
Ah, so that is what you did for the admins policy
But what do the readers/writers policies look like?
`clsorder` as the first and only principal
```
"value": {
"identities": [
{
"principal": {
"msp_identifier": "clsorder",
"role": "MEMBER"
},
"principal_classification": "ROLE"
}
],
"rule": {
"n_out_of": {
"n": 1,
"rules": [
{
"signed_by": 0
}
]
}
},
"version": 0
}
```
Yes, so here I am suggesting that you also add the admin of the other cls org here, and add it as one of the options in the rule.
In this way, when maintenance mode turns on, your admin will still have read/write access.
Got it - can u please review it.
```
"Writers": {
"mod_policy": "Admins",
"policy": {
"type": 1,
"value": {
"identities": [
{
"principal": {
"msp_identifier": "clsorder",
"role": "MEMBER"
},
"principal_classification": "ROLE"
},
{
"principal": {
"msp_identifier": "clsbgb65",
"role": "ADMIN"
},
"principal_classification": "ROLE"
}
],
"rule": {
"n_out_of": {
"n": 1,
"rules": [
{
"signed_by": 0
},
{
"signed_by": 1
}
]
}
},
"version": 0
}
},
"version": "0"
}
```
Yes, correct, for both readers and writers
Okay. Can you please also review Approach 2:
1. Change the /Channel/Orderer admin policy to be the admin of the orderer organization before switching into maintenance mode.
2. Use the orderer MSP for all migration operations.
3. Switch the /Channel/Orderer admin policy back to the CLS admin organization after switching out of maintenance mode.
I would expect for this approach also to work, though as with any flow where admin control changes hands, it's important to do good testing to ensure you do not accidentally lock yourself out of the system.
Thanks Jason - i understand the two approaches and will discuss internally and come back for any further queries.
is anyone familiar with this error on the orderer:
```
2019-09-06 13:43:27.436 UTC [orderer.common.broadcast] ProcessMessage -> WARN 246 [channel: testchainid] Rejecting broadcast of config message from 172.19.0.1:33486 because of error: error applying config update to existing channel 'testchainid': error authorizing update: proto: field "common.ConfigUpdate.channel_id" contains invalid UTF-8
```
Depends on how you are creating/submitting your config transaction, but the error is fairly explicit. In your config update structure, the channel_id field contains some bytes which are not UTF-8. The expected content is the encoded string `testchainid`.
Are you using the CLI or one of the SDKs?
sdk-go which uses the new `hyperledger/fabric-protos-go` package
The proto definitions should be the same regardless of platform. I'd look to see where/how you're building the common.ConfigUpdate message, and see what's getting stuck in the `channel_id` field.
it's just a variable being assigned: ` configUpdate.ChannelId = req.ChannelID`
Can you capture the envelope you're sending to broadcast and run it through configtxlator to inspect it?
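Something along these lines should work (`envelope.pb` is a placeholder for wherever you dump the captured bytes):

```shell
# Decode the captured broadcast envelope to JSON for inspection
# (envelope.pb is a placeholder path for the captured message bytes)
configtxlator proto_decode --input envelope.pb --type common.Envelope > envelope.json
# then look for the channel_id under payload.header.channel_header in envelope.json
```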
yup thanks, was a problem with the envelope
Setup aimed for: 2 orgs (buyer, lender) with 2 and 3 orderers respectively, and 1 peer per org. If I try to create a channel from the buyer's peer (the one with fewer orderers), then the error on the peer is:
```
2019-09-09 05:24:54.268 UTC [main] InitCmd -> WARN 001 CORE_LOGGING_LEVEL is no longer supported, please use the FABRIC_LOGGING_SPEC environment variable
2019-09-09 05:24:54.273 UTC [main] SetOrdererEnv -> WARN 002 CORE_LOGGING_LEVEL is no longer supported, please use the FABRIC_LOGGING_SPEC environment variable
2019-09-09 05:24:54.284 UTC [channelCmd] InitCmdFactory -> INFO 003 Endorser and orderer connections initialized
Error: got unexpected status: SERVICE_UNAVAILABLE -- no Raft leader
```
Setup: 2 orgs (buyer, lender), consensus etcdraft, 2 orderers (buyer org), 3 orderers (lender org), 1 peer per org. Peer actions (channel create/join/fetch) are invoked against the orderers of their own org.
Channel create/join works on the peer whose org has more orderers, but fails on the peer of the other org with fewer orderers. If both orgs have the same number of orderers then it fails on both.
Below are logs from setup mentioned.
sandman - Mon Sep 09 2019 12:53:15 GMT+0530 (India Standard Time).txt
i think those 2 orderers in buyer's org probably cannot reach the other 3 in lender's org, so we have a network partition there. Could you provide orderer logs?
sandman - Mon Sep 09 2019 14:20:26 GMT+0530 (India Standard Time).txt
I agree, but what could be the reason for this? Is it because they don't have the TLS CA cert of the lender's CA?
@sandman let's chat in a thread to avoid flooding the main channel. There could be a number of reasons, e.g. DNS resolution, bad certificates, network blocked, etc.
orderer logs normally clearly indicate error reasons
sure
I think the reason in my case is bad certificates, as is evident from the log line which states
ServerHandshake -> ERRO 1a6 TLS handshake failed with error remote error: tls: bad certificate server=Orderer
similar logs are there on the other orderers as well
would you be able to see a leader being elected among the 3 lender orderers? try observing the orderer log and searching for `leader changed X -> Y`
correct, no such logs on 2 buyer orderers
Hello,
I am getting the following error during channel creation:
Principal deserialization failure (the supplied identity is not valid: x509: certificate signed by unknown authority (possibly because of "x509: ECDSA verification failure" while trying to verify candidate authority certificate "ca-eprocure1"))
Here is the peer logs:
Error: got unexpected status: BAD_REQUEST -- error authorizing update: error validating DeltaSet: policy for [Group] /Channel/Application not satisfied: Failed to reach implicit threshold of 1 sub-policies, required 1 remaining
I have double-checked the admin certificate; it is signed by ca-eprocure1 only.
Can anyone please help into this?
this error is recurring:
```
TLS handshake failed with error tls: first record does not look like a TLS handshake server=Orderer remoteaddress=10.244.3.87:45240
```
what could be the reason for this?
i think your orderer is tls enabled, but client is not
grpc: addrConn.createTransport failed to connect to {orderer5.example.com:7050 0
can you expand on what you mean by the client not being tls enabled? the server is tls enabled; isn't enabling client auth only required if we want mutual tls?
Clipboard - September 9, 2019 6:14 PM
> first record does not look like a TLS handshake
for this particular error, normally one end requires TLS but the other end attempts to connect without performing a TLS handshake. So you'll need to enable tls on the client as well
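For the peer CLI, the client side of that handshake is typically enabled like this sketch (the orderer address and file paths are placeholders, not taken from this conversation):
```
export CORE_PEER_TLS_ENABLED=true
# --tls makes the CLI perform a TLS handshake; --cafile points at the
# orderer's TLS CA certificate used to verify the server
peer channel create -o orderer.example.com:7050 -c mychannel -f ./channel.tx \
    --tls --cafile /path/to/orderer-tlsca-cert.pem
```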
thanks, it worked
Has joined the channel.
Has joined the channel.
Hello,
My application is crashing with the below error:
```
....../node_modules/fabric-client/lib/ChannelEventHub.js:849
cbtable.delete(listener_handle);
^
TypeError: Cannot read property 'delete' of undefined
    at ChannelEventHub.unregisterChaincodeEvent (......./node_modules/fabric-client/lib/ChannelEventHub.js:849:12)
    at Timeout.setTimeout (....../src/common/service/invoke.service.ts:177:12)
    at ontimeout (timers.js:475:11)
    at tryOnTimeout (timers.js:310:5)
    at Timer.listOnTimeout (timers.js:270:5)
```
It runs well for a few iterations before crashing with the above error. Below are the peer logs (unable to access the website to upload logs from the office network):
```
2019-09-10 17:44:16.552 UTC [common/deliver] deliverBlocks -> DEBU a64fb [channel: ibmchannel] Delivering block for (0xc432adcd40) for 172.24.0.1:54204
2019-09-10 17:44:16.556 UTC [fsblkstorage] waitForBlock -> DEBU a64fd Going to wait for newer blocks. maxAvailaBlockNumber=[86], waitForBlockNum=[87]
2019-09-10 17:44:16.558 UTC [fsblkstorage] waitForBlock -> DEBU a64fe Going to wait for newer blocks. maxAvailaBlockNumber=[86], waitForBlockNum=[87]
2019-09-10 17:44:16.568 UTC [fsblkstorage] waitForBlock -> DEBU a64ff Going to wait for newer blocks. maxAvailaBlockNumber=[86], waitForBlockNum=[87]
2019-09-10 17:44:16.568 UTC [fsblkstorage] waitForBlock -> DEBU a6500 Going to wait for newer blocks. maxAvailaBlockNumber=[86], waitForBlockNum=[87]
2019-09-10 17:44:16.571 UTC [fsblkstorage] waitForBlock -> DEBU a6501 Going to wait for newer blocks. maxAvailaBlockNumber=[86], waitForBlockNum=[87]
2019-09-10 17:44:16.575 UTC [fsblkstorage] waitForBlock -> DEBU a6502 Going to wait for newer blocks. maxAvailaBlockNumber=[86], waitForBlockNum=[87]
2019-09-10 17:44:20.336 UTC [common/deliver] deliverBlocks -> DEBU a6504 Context canceled, aborting wait for next block
2019-09-10 17:44:20.336 UTC [common/deliverevents] func1 -> DEBU a6505 Closing Deliver stream
2019-09-10 17:44:20.336 UTC [common/deliver] deliverBlocks -> DEBU a6506 Context canceled, aborting wait for next block
2019-09-10 17:44:20.336 UTC [common/deliverevents] func1 -> DEBU a6507 Closing Deliver stream
2019-09-10 17:44:20.336 UTC [common/deliver] deliverBlocks -> DEBU a6503 Context canceled, aborting wait for next block
2019-09-10 17:44:20.336 UTC [common/deliver] deliverBlocks -> DEBU a6508 Context canceled, aborting wait for next block
2019-09-10 17:44:20.336 UTC [common/deliverevents] func1 -> DEBU a650a Closing Deliver stream
2019-09-10 17:44:20.336 UTC [common/deliverevents] func1 -> DEBU a6509 Closing Deliver stream
2019-09-10 17:44:20.336 UTC [fsblkstorage] waitForBlock -> DEBU a650b Came out of wait. maxAvailaBlockNumber=[16]
2019-09-10 17:44:20.337 UTC [fsblkstorage] waitForBlock -> DEBU a650c Came out of wait. maxAvailaBlockNumber=[86]
2019-09-10 17:44:20.337 UTC [fsblkstorage] waitForBlock -> DEBU a650e Came out of wait. maxAvailaBlockNumber=[86]
2019-09-10 17:44:20.337 UTC [common/deliver] deliverBlocks -> DEBU a650d Context canceled, aborting wait for next block
```
Images versions:
hyperledger/fabric-orderer x86_64-1.1.1
hyperledger/fabric-peer x86_64-1.1.1
hyperledger/fabric-baseimage x86_64-0.4.6
Event hub code extract:
```
const eventHubs = channel.getChannelEventHubsForOrg();
eventHubs.forEach(eh => {
  this.logger.debug(`invokeEventPromise - setting up event: ${eh.getPeerAddr()}`);
  const invokeEventPromise = new Promise((resolve, reject) => {
    const eventTimeout = setTimeout(() => {
      const message = `REQUEST_TIMEOUT: ${eh.getPeerAddr()}`;
      this.logger.debug(`message received ${message}`);
      eh.disconnect();
      reject(new Error(message));
    }, 300000);
    eh.registerTxEvent(
      txIdString,
      (tx, code, blockNum) => {
        this.logger.debug(`The chaincode invoke chaincode transaction has been committed on peer ${eh.getPeerAddr()}`);
        clearTimeout(eventTimeout);
        eh.unregisterTxEvent(txIdString);
        if (code !== 'VALID') {
          const message = `The invoke chaincode transaction was invalid, code: ${code}`;
          this.logger.debug(message);
          reject(new Error(message));
        } else {
          const message = 'The invoke chaincode transaction was valid.';
          this.logger.debug(message);
          resolve(message);
        }
      },
      err => {
        clearTimeout(eventTimeout);
        eh.unregisterTxEvent(txIdString);
        const message = `Problem setting up the event hub : ${err.toString()}`;
        this.logger.debug(message);
        reject(new Error(message));
      }
    );
  });
  const chaincodeEventMonitor = new Promise((resolve, reject) => {
    let regid = null;
    const handle = setTimeout(() => {
      if (regid) {
        // might need to do the clean up this listener
        eh.unregisterChaincodeEvent(regid);
        this.logger.debug('Timeout - Failed to receive the chaincode event');
      }
      reject(new Error('Timed out waiting for chaincode event'));
    }, 300000);
    this.logger.debug(`Registering chaincode ${chaincodeName.toString()}`);
    regid = eh.registerChaincodeEvent(
      chaincodeName.toString(),
      'CHAINCODE_EVENT',
      event => {
        this.logger.debug(`regid: ${regid}`);
        this.logger.debug('eventPayload: ' + event.payload);
        const eventPayload = event.payload.toString('utf8');
        responseOutput = eventPayload;
        this.logger.debug(`eventPayload: ${eventPayload}`);
        eh.unregisterChaincodeEvent(regid);
        resolve('RECEIVED');
      },
      error => {
        clearTimeout(handle);
        this.logger.error(`Failed to receive the chaincode event :: ${error}`);
        reject(error);
      }
    );
  });
  promises.push(invokeEventPromise);
  promises.push(chaincodeEventMonitor);
  eh.connect(true);
});
```
event_hub_code.txt
FYI - all the containers are up and running fine. I have run the app multiple times and each time it crashes with the same error after running for a few minutes. While the app runs, there are no errors that I see in any of the container logs.
It seems the app is crashing as per the timer set in the setTimeout function.
```
const handle = setTimeout(() => {
  if (regid) {
    // might need to do the clean up this listener
    eh.unregisterChaincodeEvent(regid);
    this.logger.debug('Timeout - Failed to receive the chaincode event');
  }
  reject(new Error('Timed out waiting for chaincode event'));
}, 300000);
```
Start Time: [2019-09-10T18:14:21.144]
Crash Time: [2019-09-10 18:19:34.236]
```
const handle = setTimeout(() => {
  if (regid) {
    // might need to do the clean up this listener
    eh.unregisterChaincodeEvent(regid);
    this.logger.debug('Timeout - Failed to receive the chaincode event');
  }
  reject(new Error('Timed out waiting for chaincode event'));
}, 600000);
```
Start Time: [2019-09-10T18:27:42.833]
Crash Time: [2019-09-10 18:38:28.151]
Hello everyone, I am facing a little problem and I hope someone here could shed some light for me. I am trying to boot 3 orderers running Raft. Each orderer has its own organization. I just can't create my channel. I will upload my configtx.yaml, docker-compose.yaml and the logs for those orderers; hope someone can help :)
https://pad.riseup.net/p/raftproblems-configtx
https://pad.riseup.net/p/raftproblems-docker
https://pad.riseup.net/p/raftproblems-orderer1
https://pad.riseup.net/p/raftproblems-orderer2
https://pad.riseup.net/p/raftproblems-orderer3
Configtx.yaml https://pad.riseup.net/p/r.b9146bab5d03409fd5632fd71888043e
docker-compose https://pad.riseup.net/p/r.a7ffe46c554abe56dc4803fad63c8c6b
Logs orderer 0 https://pad.riseup.net/p/r.7216f9f3bb6ec49690ebad3ce4c7c6c2
Logs orderer 1 https://pad.riseup.net/p/r.42d34c81325c58cef84019bba23b107c
Logs orderer 2 https://pad.riseup.net/p/r.79f7276d618131271d5fd1035da5718a
If it is helpful: if I only boot 1 orderer, it works
@delao looking at your configtx.yaml and docker-compose, i think you have *2* orderers, instead of *3*.
could you double check that?
is this better asked in #fabric-sdk-node ?
I am trying to spin up raft using the fabric-samples repo and am facing the following. Has anyone come across something similar? Any setting that I missed?
```
2019-09-11 04:37:15.428 UTC [orderer.commmon.multichannel] newChainSupport -> PANI 005 Error retrieving consenter of type: etcdraft
panic: Error retrieving consenter of type: etcdraft
```
i suspect that you are using an old version of orderer. could you double check?
Oh, sorry, this was a test I was running
I'll reupload them all, just a sec
It is the same link above :)
Yes, you are correct, I was using 1.4
@delao you've misconfigured the port mapping in your docker-compose yaml
all orderers are listening on `7050` and you need to map them to corresponding `8050` and `9050`
take a look [here](https://github.com/hyperledger/fabric-samples/blob/0b980ebda7538723a83a59902c3977d7febbb0cc/first-network/docker-compose-etcdraft2.yaml#L32)
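A sketch of that port-mapping fix, following the fabric-samples pattern linked above (service names hypothetical): each orderer container listens on 7050 internally, and only the host-side port differs, so the 7050/8050/9050 consenter addresses can all be reached.
```
orderer0.example.com:
  ports:
    - "7050:7050"
orderer1.example.com:
  ports:
    - "8050:7050"
orderer2.example.com:
  ports:
    - "9050:7050"
```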
Hi all, are there any easy-to-understand materials available for the policies we use in the 'configtx.yaml' file? Because in the examples they have a separate org for orderers. But in reality all participating orgs can have their own orderers; in this case how do the policies we define in the configtx file change? Where can I find the intricacies of policy usage? Thanks in advance!
@indirajith https://hyperledger-fabric.readthedocs.io/en/release-1.4/policies.html is a good place to start.
In general, we recommend that a separate logical organization be set up for each ordering org with its own CAs, etc. Starting with v1.4.3, there is a new orderer role which makes sharing the same MSP easier, but because of the important separation of responsibilities, I'd still recommend a separate logical organization.
Thanks a lot @jyellick !
Hi all, I am trying to spin up a cluster of Raft orderers, but when I try to use the genesis block created by org1 while starting another orderer from org2, I get the following error. I am unable to narrow down the exact policy causing the problem.
``` [cauthdsl] deduplicate -> ERRO 015 Principal deserialization failure (MSP SampleOrg is unknown) for identity 0
2019-09-13 20:24:36.634 UTC [cauthdsl] deduplicate -> ERRO 016 Principal deserialization failure (MSP SampleOrg is unknown) for identity 0
2019-09-13 20:24:36.635 UTC [cauthdsl] deduplicate -> ERRO 017 Principal deserialization failure (MSP SampleOrg is unknown) for identity 0
2019-09-13 20:24:36.635 UTC [cauthdsl] deduplicate -> ERRO 018 Principal deserialization failure (MSP SampleOrg is unknown) for identity 0
2019-09-13 20:24:36.635 UTC [common.deliver] deliverBlocks -> WARN 019 [channel: orderersyschannel] Client authorization revoked for deliver request from 172.18.0.5:45052$
implicit policy evaluation failed - 0 sub-policies were satisfied, but this policy requires 1 of the 'Readers' sub-policies to be satisfied: permission denied
```
Can anyone please help me check which policy is not satisfied? And in my configtx.yaml I don't have an org with the name 'SampleOrg', but I still get MSP SampleOrg is unknown. How to fix this?
@indirajith The fact that your MSPID is `SampleOrg` indicates to me that a default was not overridden somewhere.
In your `configtx.yaml` you should have org definitions for each of your ordering orgs, and they should each have a unique MSPID
I would expect that none of these orgs would be named `SampleOrg`, but instead something meaningful. Similarly in the configuration of each OSN, the `orderer.yaml` specifies the MSPID of the organization it belongs to. It must match the MSPID of the corresponding org in the `configtx.yaml` definition which issued its certificate.
Actually I have orderers and peers in the same org, two orgs with two peers each and two orderers each.
Can you explain what's an OSN? Sorry for the naive question. And orderer.yaml in the orderer?
Although I would suggest that each org be split into two logical orgs, that should be fine. But still, the log indicates the MSPID is `SampleOrg`, which is almost definitely not right.
OSN is an abbreviation for "ordering system node", basically wherever you are running the `orderer` process.
Each OSN has an `orderer.yaml` file, and a set of environment variables starting with `ORDERER_`, which specify the configuration for that OSN.
You likely are setting `ORDERER_GENERAL_GENESISFILE` to override the `orderer.yaml` `General.GenesisFile` config element.
Yeah, I don't have any orgs specified with this name. Oh okay. Thanks, Yeah I get it, we specify them in docker-compose file, which are overwriting the orderer.yaml file.
Yes, so `ORDERER_GENERAL_LOCALMSPID` must not be set appropriately
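That override typically lives in the orderer's compose file; a sketch with hypothetical names:
```
# The value here must match the MSPID of the corresponding org definition in
# configtx.yaml; leaving a sample default like "SampleOrg" in place produces
# the "MSP SampleOrg is unknown" errors above.
orderer0.org1.example.com:
  environment:
    - ORDERER_GENERAL_LOCALMSPID=Org1OrdererMSP
    - ORDERER_GENERAL_GENESISFILE=/var/hyperledger/orderer/genesis.block
```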
thanks a lot @jyellick, I will check all the files and configs and figure it out.
Hi all, I am receiving this error when trying to start an orderer of org2 in a raft setup. Is there any particular reason for this? ```[orderer.common.cluster.replication] fetchLastBlockSeq -> ERRO 030 Failed receiving the latest block from ord1-org1.local:7050: didn't receive a response within 7s
2019-09-13 20:24:50.641 UTC [orderer.common.cluster.replication] func1 -> WARN 031 Received error of type 'didn't receive a response within 7s' from```
Hi All,
Fabric - v1.4.3
When starting the orderer(v1.4.3) i am getting the below issue --
*2019-09-16 09:07:52.448 UTC [orderer.commmon.multichannel] checkResourcesOrPanic -> PANI 37a [channel ordererchannel] config requires unsupported orderer capabilities: Orderer capability V1_4_3 is required but not **supported: Orderer capability V1_4_3 is required but not supported*
*panic: [channel ordererchannel] config requires unsupported orderer capabilities: Orderer capability V1_4_3 is required but not supported: Orderer capability V1_4_3 is required but not supported*
My configtx.yaml section for capabilities
Clipboard - September 16, 2019 2:41 PM
Dear maintainer, is FAB-7559 ready to use or still under construction? Recently some git commits are found related to this.
This is implemented and available to use in Fabric v1.4.2, however, it is currently being forward-ported to master/v2.0
I left a comment in the issue; could you help go through it to judge if it is just a fault in my case?
The JIRA issue is really not the appropriate venue for general questions.
What version of Fabric are you running?
1.4.2
That error message was wrong in v1.4.2, and fixed in v1.4.3. Ensure that you have v1.4.2 capabilities enabled in your channel group when generating your genesis block, and that should resolve your issue.
Cool, I will feedback later after retry.
Thanks @jyellick the actual reason is I used "v1_4_2" instead of "V1_4_2"(case sensitive)
Any update on the above, anybody?? As orderer capability 1.4.3 is set to true, why is it still throwing the error "Orderer capability V1_4_3 is required but not supported"?
What is your fabric-orderer version?
1.4.3
even for 1.4.3 orderer, please use "V1_4_2" for orderer Capabilities section
ok thanks @davidkhala
not V1_4_3, as sample indicated.
In general, you may refer to the `configtx.yaml` of the release to see the maximum supported capability versions https://github.com/hyperledger/fabric/blob/v1.4.3/sampleconfig/configtx.yaml
Now its generated. with V1_4_2 thanks
@jyellick I am getting the below error for RAFT orderers when trying to create Application channel
In RAFT Orderer Set up (v1.4.3) , created 5 orderers
Error: got unexpected status: SERVICE_UNAVAILABLE -- channel ordererchannel is not serviced by me
While creating genesis block i had given the below command -
configtxgen -profile LegalDescriptionGenesis -outputBlock ./legaldescription-genesis.block -channelID ordererchannel
I would check your orderer logs and ensure that Raft was able to form a quorum. It sounds like your initial Raft configuration may not have been valid.
orderer.zip
Attached are the DEBUG logs of one of the 5 orderers
You can see warning messages like:
```WARN 1fac Received error of type 'failed to create new connection: connection error: desc = "transport: error while dialing: dial tcp 172.23.155.82:8050: connect: connection refused"' from {172.23.155.82:8050```
This indicates to me that your networking is not set up correctly.
This error was coming initially when another orderer (172.23.155.82:8050) was not started
initially it was also coming for the remaining orderers which were not started, but after all were up this error was gone
You can also check — this is just at the beginning of the logs
```2019-09-16 13:17:31.482 UTC [orderer.consensus.etcdraft] detectSelfID -> WARN 2fc Could not find ```
This message indicates that the orderer's TLS cert is not in the Raft consenter set.
*In the orderer docker-compose, the signcerts folder certificates and the private key are mapped*
- ORDERER_GENERAL_CLUSTER_CLIENTCERTIFICATE=/var/hyperledger/orderer/tls-msp/signcerts/cert.pem
- ORDERER_GENERAL_CLUSTER_CLIENTPRIVATEKEY=/var/hyperledger/orderer/tls-msp/keystore/key.pem
*and in the configtx.yaml file the tlscacerts folder certificates are mapped*
ClientTLSCert: ./hyperledger/org0/orderer1/tls-msp/tlscacerts/tls-0-0-0-0-7052.pem
ServerTLSCert: ./hyperledger/org0/orderer1/tls-msp/tlscacerts/tls-0-0-0-0-7052.pem
If you look at that error message in detail, it prints all of the certificates. I'd suggest that you compare them to the values you expect and find where your bootstrapping has broken down.
ya, i saw that — it's trying to find the signcerts folder certificate among the tlscacerts certificates
*so in configtx.yaml*
*ClientTLSCert*, *ServerTLSCert* — which of them should be mapped to the signcerts folder or the tlscacerts folder?
serverTLSCert
so @guoger both the ClientTLSCert and ServerTLSCert should be mapped to signcerts folder certificate right?
https://hyperledger-fabric.readthedocs.io/en/latest/raft_configuration.html#local-configuration
these are tls certificates, not signcert
pls read sections around `ClientCertificate` and `ServerCertificate`
i meant the certificate from the signcerts folder inside the TLS-MSP folder
ok
it's from `tls` directory, not `msp`
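Putting those pointers together, a sketch of matching local and channel configuration (paths are hypothetical, following the raft_configuration doc linked above):
```
# orderer.yaml (local configuration) - the certs this node presents for
# intra-cluster TLS; they come from the orderer's tls directory.
General:
  Cluster:
    ClientCertificate: /var/hyperledger/orderer/tls/server.crt
    ClientPrivateKey: /var/hyperledger/orderer/tls/server.key

# configtx.yaml (channel configuration) - the consenter entry must carry
# the same TLS certificates, not the CA certs from tlscacerts:
# EtcdRaft:
#   Consenters:
#     - Host: orderer0.example.com
#       Port: 7050
#       ClientTLSCert: crypto-config/.../orderer0/tls/server.crt
#       ServerTLSCert: crypto-config/.../orderer0/tls/server.crt
```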
yes got it @guoger thanks
in my orderer2 i am getting these error logs
```
2019-09-13 16:26:17.503 UTC [orderer.consensus.kafka] processMessagesToBlocks -> ERRO 08c [channel: mychannel] Error during consumption: kafka: error while consuming channel/0: dial tcp: lookup kafka0 on 127.0.0.11:53: no such host
2019-09-13 16:26:17.503 UTC [orderer.consensus.kafka] processMessagesToBlocks -> WARN 08d [channel: mychannel] Deliver sessions will be dropped if consumption errors continue.
2019-09-13 16:26:17.504 UTC [orderer.consensus.kafka] processMessagesToBlocks -> ERRO 08e [channel: testchainid] Error during consumption: kafka: error while consuming testchainid/0: dial tcp: lookup kafka0 on 127.0.0.11:53: no such host
2019-09-13 16:26:17.504 UTC [orderer.consensus.kafka] processMessagesToBlocks -> WARN 08f [channel: testchainid] Deliver sessions will be dropped if consumption errors continue.
2019-09-13 16:26:19.506 UTC [orderer.consensus.kafka] processMessagesToBlocks -> WARN 090 [channel: testchainid] Consumption will resume.
```
as i replied in your other post:
> depends on how you provision the network, you'll need to configure the system to be able to resolve kafka0 to correct ip (docker-compose does this automatically)
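If the orderer runs outside the compose network that hosts the Kafka brokers, one way to make `kafka0` resolvable is an explicit host entry; a sketch with hypothetical service names and IPs:
```
# docker-compose fragment: pin hostname -> IP resolution for the orderer
# container when Docker's embedded DNS cannot resolve the broker names.
orderer2.example.com:
  extra_hosts:
    - "kafka0:10.0.0.10"
    - "kafka1:10.0.0.11"
    - "kafka2:10.0.0.12"
    - "kafka3:10.0.0.13"
```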
yaa, but my kafka0 is configured same as other kafka 1 2 3 in my repos
these error logs started two days after the HLF network came up
Thanks @guoger that was it, but I've solved it by adding ORDERER_GENERAL_LISTENPORT env var on docker compose, thank you again :)
@Utsav_Solanki is your network functioning fine other than these warnings?
how exactly did you bring up the network?
i'm trying to create a new channel and this is the response i get: `config does not validly parse: initializing channelconfig failed: could not create channel Application sub-group config: ACLs may not be specified without the required capability`. the yaml file i use to create a channel is different from the one i used to create the system channel. i'm not too familiar with ACLs
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=SHKzyCu7EL2guEz4i) yes
for bring-up I have a script that brings the Hyperledger network up with all resources
2019-09-19 04:28:55.414 UTC [orderer.consensus.etcdraft] logSendFailure -> DEBU 3c1 Failed to send StepRequest to 4, because: connection to 4(orderer-service.default.svc.cluster.local:7053) is in state CONNECTING channel=network-sys-channel node=1
We are seeing this error when bringing up orderer with type raft
Any suggestions? thanks
@jyellick Any suggestions? thanks
are you constantly observing this error - does it recover after period of time?
We are trying to deploy Fabric on Kubernetes. We have an orderer pod with multiple containers (orderers - 5 Raft nodes) and when we bring up the pod we are hitting this issue
We see it constantly every time the pod comes up. We tried setting GODEBUG=netdns=go environment variable. It didn't help.
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=MT4v5fhSYJD2iWKPA) executing a script file, it will make all peer, kafka and zookeeper containers start & run, then create and join the channel
V1.4.3 - Fabric
System channel -- ordererchannel
Application channel -- legaldescriptionchannel
While migrating from Kafka to Raft I am getting an error in step 2
Step - 1 --> Updating both the system channel and application channel -- Maintenance Mode (*Completed*)
Step - 2 --> Updating the JSON with RAFT details -- type and metadata and submitting for channel update in application channel is giving me the below error --
*Error: got unexpected status: BAD_REQUEST -- error applying config update to existing channel 'legaldescriptionchannel': attempted to change consensus type from kafka to etcdraft *
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=buuyEo8u6eKDd2EcW)
CLI-Tool_logs.txt
Attached the CLI-tools logs after running the config update command
@guoger @jyellick
Orderer_logs.txt
Tried for Orderer system channel also the same above issue is replicating
hi there. I am configuring my orderer for image 1.4.3
however my orderer hits the error
Orderer capability V1_4_3 is required but not supported: Orderer capability V1_4_3 is required but not supported
is my capabilities section incorrect?
Global: &ChannelCapabilities
V1_4_3: true
Orderer: &OrdererCapabilities
V1_4_3: true
Application: &ApplicationCapabilities
V1_4_3: true
thanks
change the orderer capabilities from V1_4_3 to V1_4_2: true and also do the same for application capabilities
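In other words, for a v1.4.3 orderer the Capabilities section would look like this sketch (matching the release's sampleconfig configtx.yaml):
```
Capabilities:
  Channel: &ChannelCapabilities
    V1_4_3: true        # channel capability V1_4_3 exists in v1.4.3
  Orderer: &OrdererCapabilities
    V1_4_2: true        # max supported orderer capability is V1_4_2
  Application: &ApplicationCapabilities
    V1_4_2: true        # max supported application capability is V1_4_2
```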
ok thanks
https://github.com/hyperledger/fabric/blob/release-1.4/sampleconfig/configtx.yaml
this is the configtx.yaml for 1.4.3
ahaa - was looking for that,. thank you!
Guys anybody any update on this ??
looks great now! - a final question - in the 1.4 sample, EtcdRaft consenters section, I'm trying to figure out what ServerTLSCert and ClientTLSCert would be set as
would ServerTLS be
crypto-config/ordererOrg/company.../tls/server.crt?
does that mean the ClientTLS would = ca.crt
i think i'm slightly mixed up with what cert goes where from the samples. it means i can't deploy a channel via TLS
hello - I am trying to set up etcdraft (from scratch, not migration) using fabric v1.4.2. I am able to run the fabric-samples byfn with etcdraft but am trying to map this setup to our requirement.
I am receiving the following error during channel creation; can you please help with what could be wrong?
Peer CLI
```
2019-09-19 18:22:21.414 UTC [grpc] HandleSubConnStateChange -> DEBU 03e pickfirstBalancer: HandleSubConnStateChange: 0xc000289f40, READY
Error: got unexpected status: SERVICE_UNAVAILABLE -- channel testchainid is not serviced by me
```
Orderer Log
```
2019-09-19 18:22:21.442 UTC [orderer.common.broadcast] ProcessMessage -> WARN 3595 [channel: public] Rejecting broadcast of message from 172.20.0.2:45910 with SERVICE_UNAVAILABLE: rejected by Consenter: channel testchainid is not serviced by me
2019-09-19 18:22:21.442 UTC [orderer.common.server] func1 -> DEBU 3596 Closing Broadcast stream
2019-09-19 18:22:21.442 UTC [comm.grpc.server] 1 -> INFO 3597 streaming call completed grpc.service=orderer.AtomicBroadcast grpc.method=Broadcast grpc.peer_address=172.20.0.2:45910 grpc.code=OK grpc.call_duration=28.058574ms
2019-09-19 18:22:21.447 UTC [grpc] warningf -> DEBU 3598 transport: http2Server.HandleStreams failed to read frame: read tcp 172.20.0.19:7050->172.20.0.2:45910: read: connection reset by peer
2019-09-19 18:22:21.447 UTC [grpc] infof -> DEBU 3599 transport: loopyWriter.run returning. connection error: desc = "transport is closing"
2019-09-19 18:22:21.448 UTC [common.deliver] Handle -> WARN 359a Error reading from 172.20.0.2:45908: rpc error: code = Canceled desc = context canceled
2019-09-19 18:22:21.448 UTC [orderer.common.server] func1 -> DEBU 359b Closing Deliver stream
2019-09-19 18:22:21.448 UTC [comm.grpc.server] 1 -> INFO 359c streaming call completed grpc.service=orderer.AtomicBroadcast grpc.method=Deliver grpc.peer_address=172.20.0.2:45908 error="rpc error: code = Canceled desc = context canceled" grpc.code=Canceled grpc.call_duration=36.44571ms
```
hello - I am trying to set up etcdraft (from scratch, not migration) using Fabric v1.4.2. I am able to run the fabric-samples byfn with etcdraft, but am now mapping that setup to our requirements.
I am receiving the following error during channel creation; can you please help with what could be wrong?
Peer CLI
```
2019-09-19 18:22:21.414 UTC [grpc] HandleSubConnStateChange -> DEBU 03e pickfirstBalancer: HandleSubConnStateChange: 0xc000289f40, READY
Error: got unexpected status: SERVICE_UNAVAILABLE -- channel testchainid is not serviced by me
```
Orderer Log
```
2019-09-19 18:22:21.442 UTC [orderer.common.broadcast] ProcessMessage -> WARN 3595 [channel: public] Rejecting broadcast of message from 172.20.0.2:45910 with SERVICE_UNAVAILABLE: rejected by Consenter: channel testchainid is not serviced by me
2019-09-19 18:22:21.442 UTC [orderer.common.server] func1 -> DEBU 3596 Closing Broadcast stream
2019-09-19 18:22:21.442 UTC [comm.grpc.server] 1 -> INFO 3597 streaming call completed grpc.service=orderer.AtomicBroadcast grpc.method=Broadcast grpc.peer_address=172.20.0.2:45910 grpc.code=OK grpc.call_duration=28.058574ms
2019-09-19 18:22:21.447 UTC [grpc] warningf -> DEBU 3598 transport: http2Server.HandleStreams failed to read frame: read tcp 172.20.0.19:7050->172.20.0.2:45910: read: connection reset by peer
2019-09-19 18:22:21.447 UTC [grpc] infof -> DEBU 3599 transport: loopyWriter.run returning. connection error: desc = "transport is closing"
2019-09-19 18:22:21.448 UTC [common.deliver] Handle -> WARN 359a Error reading from 172.20.0.2:45908: rpc error: code = Canceled desc = context canceled
2019-09-19 18:22:21.448 UTC [orderer.common.server] func1 -> DEBU 359b Closing Deliver stream
2019-09-19 18:22:21.448 UTC [comm.grpc.server] 1 -> INFO 359c streaming call completed grpc.service=orderer.AtomicBroadcast grpc.method=Deliver grpc.peer_address=172.20.0.2:45908 error="rpc error: code = Canceled desc = context canceled" grpc.code=Canceled grpc.call_duration=36.44571ms
```
In the orderer filesystem, I don't see the WAL/snap directories created so far
```
/opt/gopath/src/github.com/hyperledger/fabric $ ls -lrt /var/hyperledger/production/orderer/ -R
/var/hyperledger/production/orderer/:
total 0
drwxr-xr-x 2 10130 1002 85 Sep 19 18:11 index
drwxr-xr-x 3 10130 1002 25 Sep 19 18:11 chains
/opt/gopath/src/github.com/hyperledger/fabric $ ls -alrt /var/hyperledger/production/orderer/chains/testchainid/
total 12
-rw-r----- 1 10130 1002 10652 Sep 19 18:11 blockfile_000000
drwxr-xr-x 3 10130 1002 25 Sep 19 18:11 ..
drwxr-xr-x 2 10130 1002 30 Sep 19 18:11 .
```
Does this mean there is no leader yet and the RAFT communication is broken?
check this first-network fabric sample -- https://github.com/hyperledger/fabric-samples/blob/v1.4.3/first-network/configtx.yaml
It means the Raft quorum has still not formed; there is still some issue. Please check the consenter TLS certificate mappings in configtx.yaml. I experienced the same issue, as I had mapped the wrong TLS certs in the consenter set in configtx.yaml -- correcting it formed the quorum successfully.
could you upload complete orderer log (from beginning of migration), and configtx.yaml
I am trying now with Fabric version 1.4.2; I will update you once done.
so the Server and Client TLS certificate must be the same file? that seems a little strange
no, it can be whatever tls certs, as long as you configure it properly
that sample is just setting them to the same cert for the sake of simplicity
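So a consenter entry in the style of the byfn sample would look roughly like this (hostname and paths are illustrative, not from the original question):
```yaml
# Both fields may point at the node's own TLS server certificate,
# as the byfn sample does for simplicity:
EtcdRaft:
    Consenters:
        - Host: orderer.example.com
          Port: 7050
          ClientTLSCert: crypto-config/ordererOrganizations/example.com/orderers/orderer.example.com/tls/server.crt
          ServerTLSCert: crypto-config/ordererOrganizations/example.com/orderers/orderer.example.com/tls/server.crt
```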
ah, thank you for telling me this, it seemed very odd to me
Can I add an orderer by adding its organisation to an existing channel?
so, that helped a lot! I am now hitting the same error as this
https://stackoverflow.com/questions/57513234/hyperledger-fabric-peer-unable-to-connect-to-raft-orderer-with-mutual-tls
when he says tlscacerts should be in the msp(s) directory(ies) PRIOR to creating the genesis / channel block
does this just mean during generate.sh, we should try to create the channel last?
I also experienced the same issue. Please find attached the complete orderer logs and configtx.yaml
OrdererLogs_AND_Configtx.zip
Yes, I watched your thread's resolution and got it corrected in the Raft metadata. It is moving ahead, hence I deleted my comment. Thanks.
in some examples, i see CORE_PEER_TLS_CLIENTROOTCAS_FILES = tlsca.company-cert.pem and others it is tls/ca.crt
which is the correct one?
When I use 'etcdraft' consensus I cannot spin up a second orderer, even within the same org or a different org. I get the following error
``` [common.deliver] deliverBlocks -> WARN 019 [channel: orderersyschannel] Client authorization revoked for deliver request from 172.18.0.5:45506:
implicit policy evaluation failed - 0 sub-policies were satisfied, but this policy requires 1 of the 'Readers' sub-policies to be satisfied: permission denied
2019-09-23 07:22:52.937 UTC [comm.grpc.server] 1 -> INFO 01a streaming call completed grpc.service=orderer.AtomicBroadcast grpc.method=Deliver grpc.peer_address=172.18.0.5
:45506 grpc.peer_subject="CN=ord1-org2,OU=orderer,O=Hyperledger,ST=North Carolina,C=US" grpc.code=OK grpc.call_duration=1.590709ms
2019-09-23 07:22:52.945 UTC [orderer.common.cluster.replication] fetchLastBlockSeq -> WARN 01b Received status:FORBIDDEN from ord1-org2.local:7050: forbidden pullin
g the channel
2019-09-23 07:22:52.945 UTC [orderer.common.cluster.replication] func1 -> WARN 01c Received error of type 'forbidden pulling the channel' from {ord1-org2.local:7050 ```
I understand there are policy issues, but I don't know how to figure them out.
Just a query - for a Raft setup, is a minimum of 3 orderers mandatory? Currently, in my Kafka setup, I have 2 orderers.
it is required to provide HA - if one of two orderers is down, quorum cannot be reached and you'll lose your service
although you can run the ordering service with any number of orderer nodes,
it's technically viable
what exact steps are you trying there?
I started the first orderer, but then I cannot start any other orderers. They exit with errors, whether from the same org or a different org. I want to have a cluster of orderers up and running using Raft consensus.
When I start an orderer from the same org as the first orderer, I get this error: "orderer.common.cluster.replication] fetchLastBlockSeq -> ERRO 030 Failed receiving the latest block from ord1-org1.local:7050: didn't receive a response within 7s [orderer.common.cluster.replication] func1 -> WARN 031 Received error of type 'didn't receive a response within 7s' from {ord1-org1.local:7050 [-----BEGIN CERTIFICATE-----". When I start an orderer from another org, I get the error mentioned in the first message. Thank you!
so I started the two-orderer cluster, but it was failing.
Orderer_Logs.zip
I have attached both orderer logs
you have connection problem there
it has nothing to do with number of orderers
check your configs
Do we need to back up any data from the orderer (production folder)?
I have overcome this issue; I used a different listening port for the orderer. Thank you.
Hi all, when I try to create a new channel, it just waits after log messages. Has anyone encountered this problem?
log.jpg
Thanks Jay, that is resolved; I was getting a TLS issue because I forgot to set two environment variables in docker-compose - ORDERER_GENERAL_CLUSTER_CLIENTCERTIFICATE
ORDERER_GENERAL_CLUSTER_CLIENTPRIVATEKEY
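For anyone hitting the same thing, a sketch of those settings in a docker-compose service (paths illustrative; reusing the general TLS cert/key for the cluster is what the samples do):
```yaml
environment:
    # Client TLS identity this orderer presents when dialling other Raft nodes:
    - ORDERER_GENERAL_CLUSTER_CLIENTCERTIFICATE=/var/hyperledger/orderer/tls/server.crt
    - ORDERER_GENERAL_CLUSTER_CLIENTPRIVATEKEY=/var/hyperledger/orderer/tls/server.key
    # CAs trusted when verifying the other consenters:
    - ORDERER_GENERAL_CLUSTER_ROOTCAS=[/var/hyperledger/orderer/tls/ca.crt]
```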
can you post your configtx.yaml file
Yes, I have pasted it on : https://pastebin.com/SLaGSrid
Hello - I am getting an orderer panic upon enabling Prometheus with different client + server TLS certificates for a Raft node.
```
2019-09-23 22:59:43.612 UTC [orderer.common.cluster] loadVerifier -> INFO 00b Loaded verifier for channel goldsac from config block at index 2
2019-09-23 22:59:43.634 UTC [orderer.common.cluster] loadVerifier -> INFO 00c Loaded verifier for channel public from config block at index 5
2019-09-23 22:59:43.640 UTC [orderer.common.cluster] loadVerifier -> INFO 00d Loaded verifier for channel testchainid from config block at index 7
2019-09-23 22:59:43.640 UTC [orderer.common.server] initializeServerConfig -> INFO 00e Starting orderer with TLS enabled
panic: duplicate metrics collector registration attempted
goroutine 1 [running]:
github.com/hyperledger/fabric/vendor/github.com/prometheus/client_golang/prometheus.(*Registry).MustRegister(0xc0000d5d10, 0xc00097d9b0, 0x1, 0x1)
/go/src/github.com/hyperledger/fabric/vendor/github.com/prometheus/client_golang/prometheus/registry.go:387 +0xad
github.com/hyperledger/fabric/vendor/github.com/prometheus/client_golang/prometheus.MustRegister(0xc00097d9b0, 0x1, 0x1)
/go/src/github.com/hyperledger/fabric/vendor/github.com/prometheus/client_golang/prometheus/registry.go:172 +0x53
github.com/hyperledger/fabric/vendor/github.com/go-kit/kit/metrics/prometheus.NewCounterFrom(0x148371a, 0x4, 0x148363a, 0x4, 0x1489869, 0xb, 0x14c6470, 0x4f, 0x0, 0x0, ...)
/go/src/github.com/hyperledger/fabric/vendor/github.com/go-kit/kit/metrics/prometheus/prometheus.go:24 +0xe3
github.com/hyperledger/fabric/common/metrics/prometheus.(*Provider).NewCounter(0x1de4b18, 0x148371a, 0x4, 0x148363a, 0x4, 0x1489869, 0xb, 0x14c6470, 0x4f, 0x0, ...)
/go/src/github.com/hyperledger/fabric/common/metrics/prometheus/provider.go:20 +0x138
github.com/hyperledger/fabric/core/comm.NewServerStatsHandler(0x15bde60, 0x1de4b18, 0x2)
/go/src/github.com/hyperledger/fabric/core/comm/metrics.go:29 +0x74
github.com/hyperledger/fabric/core/comm.NewGRPCServerFromListener(0x15c3820, 0xc0000b4958, 0x12a05f200, 0xc0001ca2d0, 0x1d86bc0, 0xc000521ca0, 0x2, 0x2, 0xc000521cb0, 0x2, ...)
/go/src/github.com/hyperledger/fabric/core/comm/server.go:152 +0x87e
github.com/hyperledger/fabric/core/comm.NewGRPCServer(0xc0006d1980, 0xc, 0x0, 0xc0001ca2d0, 0x1d86bc0, 0xc000521ca0, 0x2, 0x2, 0xc000521cb0, 0x2, ...)
/go/src/github.com/hyperledger/fabric/core/comm/server.go:55 +0x14b
github.com/hyperledger/fabric/orderer/common/server.configureClusterListener(0xc000203200, 0x0, 0xc0001ca240, 0x1d86bc0, 0xc000521ca0, 0x2, 0x2, 0xc000521cb0, 0x2, 0x2, ...)
/go/src/github.com/hyperledger/fabric/orderer/common/server/main.go:376 +0x78c
github.com/hyperledger/fabric/orderer/common/server.Start(0x148462a, 0x5, 0xc000203200)
/go/src/github.com/hyperledger/fabric/orderer/common/server/main.go:141 +0xdf9
github.com/hyperledger/fabric/orderer/common/server.Main()
/go/src/github.com/hyperledger/fabric/orderer/common/server/main.go:91 +0x1ce
main.main()
/go/src/github.com/hyperledger/fabric/orderer/main.go:15 +0x20
```
```
- ORDERER_OPERATIONS_LISTENADDRESS=:36001
- ORDERER_METRICS_PROVIDER=prometheus
- ORDERER_OPERATIONS_TLS_ENABLED=false
- ORDERER_GENERAL_CLUSTER_CLIENTCERTIFICATE=/var/hyperledger/orderer/tlsclient/ord03.clsorder.cit.clsnet.pem
- ORDERER_GENERAL_CLUSTER_CLIENTPRIVATEKEY=/var/hyperledger/orderer/tlsclient/ord03.clsorder.cit.clsnet.sk
- ORDERER_GENERAL_CLUSTER_ROOTCAS=[/var/hyperledger/orderer/tlsintermediatecerts/tls.ca01.cit.clsnet.pem]
- ORDERER_GENERAL_CLUSTER_SERVERCERTIFICATE=/var/hyperledger/orderer/tlsserver/ord03.clsorder.cit.clsnet.pem
- ORDERER_GENERAL_CLUSTER_SERVERPRIVATEKEY=/var/hyperledger/orderer/tlsserver/ord03.clsorder.cit.clsnet.sk
- ORDERER_GENERAL_CLUSTER_LISTENPORT=5555
- ORDERER_GENERAL_CLUSTER_LISTENADDRESS=0.0.0.0
```
If I disable the metrics provider, the orderer stands up and passes all our tests.
```
#- ORDERER_OPERATIONS_LISTENADDRESS=:36001
#- ORDERER_METRICS_PROVIDER=prometheus
- ORDERER_OPERATIONS_TLS_ENABLED=false
- ORDERER_GENERAL_CLUSTER_CLIENTCERTIFICATE=/var/hyperledger/orderer/tlsclient/ord03.clsorder.cit.clsnet.pem
- ORDERER_GENERAL_CLUSTER_CLIENTPRIVATEKEY=/var/hyperledger/orderer/tlsclient/ord03.clsorder.cit.clsnet.sk
- ORDERER_GENERAL_CLUSTER_ROOTCAS=[/var/hyperledger/orderer/tlsintermediatecerts/tls.ca01.cit.clsnet.pem]
- ORDERER_GENERAL_CLUSTER_SERVERCERTIFICATE=/var/hyperledger/orderer/tlsserver/ord03.clsorder.cit.clsnet.pem
- ORDERER_GENERAL_CLUSTER_SERVERPRIVATEKEY=/var/hyperledger/orderer/tlsserver/ord03.clsorder.cit.clsnet.sk
- ORDERER_GENERAL_CLUSTER_LISTENPORT=5555
- ORDERER_GENERAL_CLUSTER_LISTENADDRESS=0.0.0.0
```
when you run the channel create -- can you post the complete logs once
What version of Fabric are you running? (so that I can associate the line numbers to code)
Actually, I think I can see the problem even without it. Do you have a JIRA for this, or would you like me to open one?
Sorry - This is Fabric v1.4.2
I can open 1.
@jyellick https://jira.hyperledger.org/browse/FAB-16695
Thanks @rahulhegde it may be a couple days, but I will add this to my list.
No Problem.
Hello, it was said that channel-specific folders inside `etcdraft/snapshot` would require storage equivalent to 5 * 20MB, but I don't see any file created in any of the 3 running Raft nodes.
I have used a default setting
```
"options": {
"election_tick": 10,
"heartbeat_tick": 1,
"max_inflight_blocks": 5,
"snapshot_interval_size": 20971520,
"tick_interval": "500ms"
}
```
Can you please clarify.
@rahulhegde how many blocks are in the chain?
I am pasting this from fabric-samples byfn, but my other workspace definitely has a block height > 10.
```
/var/hyperledger/production/orderer/etcdraft/snapshot/mychannel:
total 8
drwxr-xr-x 4 root root 4096 Sep 24 23:18 ..
drwxr-xr-x 2 root root 4096 Sep 24 23:18 .
```
```
root@faef628d989d:/opt/gopath/src/github.com/hyperledger/fabric/peer# peer channel getinfo -c mychannel
2019-09-25 10:49:22.698 UTC [channelCmd] InitCmdFactory -> INFO 001 Endorser and orderer connections initialized
Blockchain info: {"height":5,"currentBlockHash":"+kJc8C2XDVi/RhgZlML7xEw+z0mErIFrIOFPfBaFERs=","previousBlockHash":"AUCgeJtwJMTAbgiOK4vEqoZif9w3TpnQS6EJK/qvwOk="}
```
Has joined the channel.
@yacovm @jyellick let me know if you need more data from me, to help me understand the sizing for these folders.
So I guess the snapshot will be created once you have enough blocks
@rahulhegde In observations of Raft deployments, the steady state will be 5 20MB Snapshot files, and one 20MB WAL, totalling 120MB per channel.
If you tune the snapshot interval, this size can scale up or down.
Since snapshots occur on block boundaries, if the block sizes are variable or very large, the snapshot sizes could be larger, though this is unusual.
1. So a channel height > 0 should result in the creation of entries in the snapshot directory? Currently I don't see any.
2. Is the 5 multiplier a configurable value for the snapshot?
3. The snapshot interval size is per snapshot file, and is only configurable through a channel update (`snapshot_interval_size`). Is this true?
4. Even the WAL folder contains a 60 MB file
```
/var/hyperledger/production/orderer/etcdraft/wal/public $ du * -h
61.0M 0.tmp
61.0M 0000000000000000-0000000000000000.wal
```
1) A snapshot is created every time the WAL would exceed the configured snapshot size, i.e. every 20MB of data (blocks) written to the WAL. So, when a channel is created, no snapshots should exist for it; they appear only after 20MB of blocks have been created.
2) This is not a configurable value today.
3) Yes, this is true. You may of course set it in your orderer system channel so that other channels inherit it.
4) I'm not sure why this would be the case, perhaps this is simply the default size of the WAL before it has ever reached a point of truncation. I would expect the WAL to move to the 20MB mark.
@guoger can you confirm ^ ?
Regarding 4 - it pre-allocates to:
` SegmentSizeBytes int64 = 64 * 1000 * 1000 // 64MB`
and (1) is what I was trying to say... Rahul. Make some more blocks and you'll see your snapshots appear
For (4) - does it have multiple files of 64MB, or is it one file that grows and shrinks as the WAL is used?
the former
IIRC
For (1)
Snapshot and WAL do not have any correlation.
So reading https://hyperledger-fabric.readthedocs.io/en/release-1.4/orderer/ordering_service.html#snapshots, a single snapshot would be created only once the cumulative size of the blocks so far exceeds 20MB.
Now, this is again controlled through `snapshot_interval_size`, which I can change. If I need to work out the right size for my blockchain, is there a way to get the average block size?
are you asking us for the average block size of *your* blockchain? It's obviously controlled by you... no?
blocks are cut either upon timeout, or upon memory being filled
i don't know how frequent you're sending transactions, and how big they are, or what is your config
I understand Yacov, we have the default ordering configuration. However, if I have to review my blockchain, is it captured by any metrics? I need to understand, as we move to Raft, what additional file-system provisioning is needed for these 2 folders.
yes we have metrics for Raft
I don't really understand though why you're concerned about the size of these 2 folders. Are you allocating a volume for them and concerned about its size?
I would put it in the same volume as the ledger and that's it
Our privacy is scoped between 2 organizations, and hence we have multiple channel definitions. This means we need more space, as the 2 folders are provisioned per channel. However, not all channels are used at the same rate, so we could accept a lower snapshot size for some of them, if I have a way to find the average block size for a channel.
you don't need to find the average block size
Raft writes to WAL and does snapshots by cumulative size
so why does it matter if you have 10000 blocks in these 20MB or 100 blocks?
True - I could resize the snapshot to 5MB
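For a new channel, that tuning lives in configtx.yaml (field names per the release-1.4 sample; 5 MB here is just the illustrative value from the message above, the sample default is 20 MB):
```yaml
Orderer:
    OrdererType: etcdraft
    EtcdRaft:
        Options:
            TickInterval: 500ms
            ElectionTick: 10
            HeartbeatTick: 1
            MaxInflightBlocks: 5
            # Cumulative bytes of blocks between Raft snapshots:
            SnapshotIntervalSize: 5 MB
```
For an existing channel it is the `snapshot_interval_size` field in a config update, as shown earlier in the thread.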
stepping out for a bit, be back later
so 200 channels would mean 20GB, and then multiplied by 3 orderers.
20GB is peanuts
Not really true for everyone!
back to WAL - when does a new file get created in this folder?
The log of creating a channel: https://pastebin.com/ahEU6Fn9
at the end of the log the shell just waits, does nothing.
Can anyone help me fix the problem I encounter during channel creation? The logs are here: https://pastebin.com/ahEU6Fn9
I have a Raft consensus, multiple-host setup. All orderers are up and running. At the end of the log, it does nothing and waits forever.
Also post the complete logs of orderer once
all four orderer logs are here: https://pastebin.com/1SgD1TuD
Checked your orderer logs; it seems the Raft quorum is not properly formed. Please check your TLS certs. The logs show the orderers are not able to connect, and at line no 3318 the TLS cert is not found among the certs provided in the configtx.yaml consenter set.
Correctly configuring the TLS certs would solve the issue
Oh, thanks a lot, will look into it now.
Hi, sorry for the delay. I followed the mailing list thread and your configtx and docker-compose files. I have a doubt: in the configtx.yaml file, in the consenters section, you have provided the same certificate for the server TLS cert and the client TLS cert. How does this work? I have provided the server TLS root CA cert as the server cert and the orderer node's TLS cert as the client TLS cert.
I think that's why my setup doesn't form the Raft quorum. Can you explain this?
https://chat.hyperledger.org/channel/fabric-orderer?msg=irEoqRJJ3mSaMH8Dc
Hi @guoger , can you please explain what is a client and server here? I was using TLS CA's root cert for server TLS cert and the cert issued to the orderer node as Client TLS cert, but it doesn't work. What is the context of server and client? The following is from the documentation "Thus, a Raft node must be referenced in the configuration of each channel it belongs to by adding its server and client TLS certificates (in PEM format) to the channel config. This ensures that when other nodes receive a message from it, they can securely confirm the identity of the node that sent the message." Thank you in advance.
Thanks Soumya, so the cert provided is the TLS cert of the orderer node, am I right?
yes indirajith
Hitting this error on trying to instantiate chaincode: `sendProposal - timed out after:6000`
Works the second time
~@adityanalge Are you on master?~ This is most likely a timeout waiting for your chaincode to build and launch. You may increase the chaincode execution timeout in your config.
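As an illustration, the chaincode execution timeout is the peer's core.yaml `chaincode.executetimeout` setting, which can be overridden via an environment variable. A sketch in docker-compose style (service name and image tag are hypothetical):

```yaml
# Hypothetical peer service; raises the chaincode build/launch timeout to 5 minutes
peer0.org1.example.com:
  image: hyperledger/fabric-peer:1.4.3
  environment:
    - CORE_CHAINCODE_EXECUTETIMEOUT=300s
```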
Also, do all 5 orderers' production folders in Raft have to be backed up, or is backing up one production folder enough?
i am not able to see /var/hyperledger/production/orderer/ directory in orderer container
orderer docker image 1.4.2
Has joined the channel.
In the orderer container, the default FileLedger path ../production/orderer is not available; the same with orderer docker image 1.4.3.
This path gets created as part of the environment variable value that you specify - ORDERER_FILELEDGER_LOCATION
ORDERER_GENERAL_LEDGERTYPE=file
can i write ORDERER_GENERAL_LEDGERTYPE=ram
What is the difference between ram and file in the ORDERER_GENERAL_LEDGERTYPE env variable?
It is always recommended to use the file ledger, so data is persisted to disk. The ram ledger is for test purposes only (and may be deprecated in the future)
For the orderer in the yml file I am mentioning ORDERER_FILELEDGER_LOCATION=/var/hyperledger/production/orderer and ORDERER_GENERAL_LEDGERTYPE=ram, but I am still not able to see the /var/hyperledger/production/orderer/ directory in the orderer container
the production directory is empty rather than containing the orderer directory
ORDERER_FILELEDGER_LOCATION=/var/hyperledger/production/orderer, ORDERER_GENERAL_LEDGERTYPE=file
ok
- ram: An in-memory ledger whose contents are lost on restart.
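To persist the ledger to that directory, the overrides would look roughly like this (a docker-compose style sketch; service name, image tag and volume path are hypothetical). With `ram`, nothing is written to disk, which is why the directory stays empty:

```yaml
# Hypothetical orderer service; the file ledger persists blocks to the mounted volume
orderer.example.com:
  image: hyperledger/fabric-orderer:1.4.3
  environment:
    - ORDERER_GENERAL_LEDGERTYPE=file   # not "ram": ram contents are lost on restart
    - ORDERER_FILELEDGER_LOCATION=/var/hyperledger/production/orderer
  volumes:
    - ./orderer-data:/var/hyperledger/production/orderer
```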
Has joined the channel.
I have a question regarding the 'OrdererEndpoints' in confgtx.yaml (using Fabric 1.4.3):
From the comment here https://github.com/hyperledger/fabric/blob/release-1.4/sampleconfig/configtx.yaml#L253-L254
I was under the impression that OrdererEndpoints in the org definition are a *replacement* for Orderer.Addresses.
However if I omit Orderer.Addresses in my configtx.yaml, configtxgen will even put a default entry of 127.0.0.1:7050 into the genesis block/channel config.
So my question is: is it correct to use both OrdererEndpoints *and* Orderer.Addresses for now?
I have two orderers, orderer0 and orderer1, but when orderer0 went down, orderer1 was not able to pick up the peers' transactions; the error is:
error: [Remote.js]: Error: Failed to connect before the deadline URL:grpc://orderer1.example.com:7050
error: [orderer.js]: Error: Failed to connect before the deadline URL:grpc://orderer1.example.com:7050
Can you post the orderer1 complete logs once
Because the orderer0 container is already removed: I am invoking a transaction from the SDK while orderer0 is removed, and orderer1 should take that transaction, but I am facing the above error in the SDK
If you invoke from CLI tools is it working?
I haven't tried the CLI
Hello, we are going to release a version 1.0 of a solution using Fabric 1.4.3. Our client requests to have just one organization registered, as later on they will add more organisations. Is there any problem with designing the network now with just one organization and Raft ordering
... and later on adding more organizations?
I would like not to go with Solo mode now for that reason and then have to change everything. Will there be any protocol/technical issues when coming to Raft consensus with just one organization?
it should work
While using the SDK we need to provide the second orderer in the network-config file
I did that, still the same error
here ?
```
mychannel:
  # Required. list of orderers designated by the application to use for transactions on this
  # channel. This list can be a result of access control ("org1" can only access "ordererA"), or
  # operational decisions to share loads from applications among the orderers. The values must
  # be "names" of orgs defined under "organizations/peers"
  orderers:
    - orderer.example.com
```
Hi all, when we are generating the genesis block with orderer type etcdraft, in the logs of this action I am able to see orderer type solo. Any suggestions? ```2019-10-17 17:08:12.928 IST [common.tools.configtxgen] main -> INFO 001 Loading configuration
2019-10-17 17:08:12.972 IST [common.tools.configtxgen.localconfig] completeInitialization -> INFO 002 orderer type: etcdraft
2019-10-17 17:08:12.972 IST [common.tools.configtxgen.localconfig] completeInitialization -> INFO 003 Orderer.EtcdRaft.Options unset, setting to tick_interval:"500ms" election_tick:10 heartbeat_tick:1 max_inflight_blocks:5 snapshot_interval_size:20971520
2019-10-17 17:08:12.972 IST [common.tools.configtxgen.localconfig] Load -> INFO 004 Loaded configuration: /home/pankaj/fabric-samples/first-network/configtx.yaml
2019-10-17 17:08:13.015 IST [common.tools.configtxgen.localconfig] completeInitialization -> INFO 005 orderer type: solo
2019-10-17 17:08:13.015 IST [common.tools.configtxgen.localconfig] LoadTopLevel -> INFO 006 Loaded configuration: /home/pankaj/fabric-samples/first-network/configtx.yaml
2019-10-17 17:08:13.017 IST [common.tools.configtxgen] doOutputBlock -> INFO 007 Generating genesis block
2019-10-17 17:08:13.017 IST [common.tools.configtxgen] doOutputBlock -> INFO 008 Writing genesis block```
Hi, I am trying to port a Raft single-orderer, single-organization network running on docker into Kubernetes. It's been a challenge. I have got some success, but I'm stuck when trying to create the channel. I execute the same command, but now I get this error in the orderer. Command:
peer channel create -o blockchain-orderer:31010 -c mrrc -f /shared/mrrc.tx --tls --cafile /shared/crypto-config/ordererOrganizations/gov.org/orderers/blockchain-orderer.gov.org/msp/tlscacerts/tlsca.gov.org-cert.pem
The error I get (and see in the orderer logs):
Error: got unexpected status: BAD_REQUEST -- error validating channel creation transaction for new channel 'mrrc', could not succesfully apply update to template configuration: error authorizing update: error validating DeltaSet: policy for [Group] /Channel/Application not satisfied: implicit policy evaluation failed - 0 sub-policies were satisfied, but this policy requires 1 of the 'Admins' sub-policies to be satisfied
I know it must be a silly configuration error when executing cryptogen or configtxgen. Any suggestion what I should check? Thanks
(When I run everything in local docker it works OK...)
I get this right before the error in orderer logs: 2019-10-17 16:59:30.935 UTC [cauthdsl] deduplicate -> ERRO 029 Principal deserialization failure (MSP SampleOrg is unknown) for identity 0
But I can't find "SampleOrg" anywhere in my config files
Sometimes those debug messages can be a bit misleading. Sometimes parsing is taking place, such as for channel creations, which will end up ignoring the consensus type. Does your network start as a Raft cluster?
Yes
@jyellick
Yes, this is your problem. My guess is, you are invoking this in a container environment with the sample config, which defaults to `SampleOrg` as the MSPID. This is to raise alarm bells just as it has for you. You need to set the MSPID to be correct, often through `CORE_PEER_LOCALMSPID`
Then I'd suggest ignoring that message, it is benign.
Thanks @jyellick for clarification
Thanks @jyellick, I just found the problem. I had a wrong value for CORE_PEER_MSPCONFIGPATH (it was not pointing to an Admin user MSP).
Has joined the channel.
While launching the orderer I am getting the following error; can anyone help?
Clipboard - October 18, 2019 8:15 AM
After that I tried with 0.0.0.0 IP and I am getting another error
Clipboard - October 18, 2019 8:16 AM
When I am launching the orderer 2
Clipboard - October 18, 2019 8:17 AM
Can anyone help me?
@sureshtedla I'd suggest you start from https://hyperledger-fabric.readthedocs.io/en/release-1.4/build_network.html
It has example docker-compose files where you can see network configuration done correctly.
I see error "Attempted to change consensus type from kafka to etcdraft" when I try to update channel config as part of migration on 1.4.0 peer. So is migration only supported on orderer above 1.4.2?
1.4.2+
@jyellick , @yacovm Is it possible (in the raft ordering service context) to have one OSN to participate in two different ordering services/networks? I think I know the answer is negative, but I would like to be sure. Thanks!
No, it's currently impossible because there has to be a single system channel
It should be possible in the future, according to @jyellick 's plans
The system channel makes channel creation and replication an easier task, but as you can imagine - it is a double edged sword and seriously hurts privacy
The privacy-sensitive approach would be to create a genesis block for the channel and give it to each ordering service node so it can join it, and eliminate the system channel
thanks for confirming, @yacovm
Hello everyone, I am trying to set up a network using multiple machines. I am having a problem with the orderers, as they can't detect each other. The logs show this error:
orderer3.autentia-bchain.com | 2019-10-22 16:05:59.706 UTC [orderer.consensus.etcdraft] logSendFailure -> ERRO 023 Failed to send StepRequest to 1, because: rpc error: code = Unavailable desc = all SubConns are in TransientFailure, latest connection error: connection error: desc = "transport: Error while dialing dial tcp: lookup orderer1.autentia-bchain.com on 127.0.0.11:53: no such host" channel=jumpitt-sys-channel node=3
orderer3.autentia-bchain.com | 2019-10-22 16:05:59.706 UTC [orderer.consensus.etcdraft] logSendFailure -> ERRO 024 Failed to send StepRequest to 2, because: rpc error: code = Unavailable desc = all SubConns are in TransientFailure, latest connection error: connection error: desc = "transport: Error while dialing dial tcp: lookup orderer2.autentia-bchain.com on 127.0.0.11:53: no such host" channel=jumpitt-sys-channel node=3
I tried to map the hostnames to the real IPs but it doesn't work
Has joined the channel.
@yacovm We proposed RAFT consensus in one blockchain project, but the client asked whether there is any existing project reference of using RAFT? Do you know any project reference? Thanks.
you mean that uses the Raft protocol?
well, Fabric uses the Raft library of etcd
it is widely used:
- etcd uses it, which means Kubernetes uses it too
https://kubernetes.io/docs/concepts/overview/components/
https://etcd.io/docs/v3.4.0/
also, from google:
Clipboard - October 24, 2019 5:04 PM
@yacovm Thanks. We will explain this to the client.
If you know of some real projects which use Raft consensus in ordering service of Hyperledger Fabric, that would be better, as that is the question the client wanted us to answer explicitly:-) .
Has joined the channel.
maybe @mastersingh24 can answer this one...
Hi all, I want to build a Hyperledger Fabric network for 3 organizations, and I want to develop chaincode for the network. How many chaincodes do I have to develop for a 3-organization network? Is it one chaincode between org1 and org2, and another between org2 and org3? Please help me through this.
The latest version of the IBM Blockchain service actually uses Raft now (we used to use Kafka) and we have a good number of customers. We will also be migrating all existing Kafka deployments to Raft over the next few months. I also know of a few non-IBM deployments which are using Raft (I'm not at liberty to disclose the company names). You might want to send a note to the fabric mailing list to see if anyone is willing to "reveal themselves"
It depends on your business scenario; only one chaincode is also fine for the network. It can be deployed on any number of channels or peers
We at Mediconcen have migrated to Raft for our production network.
Not sure if this is the correct venue, but it is sort of an orderer issue.
Trying to get service discovery working in our internal BC networks. 1.4.1 on K8S. Java SDK.
It all works fine without SD - we manually build the network connection profile and shove in the external urls for everything.
It looks like SD is using the internal container names for the orderers though when it tries to submit a transaction.
Which of course is failing miserably.
I don't get very far with grpcs://raft0-orderer:7050 when it's mapped to a nodeport on 30020.
Is there an external url configuration setting similar to that for the peers?
I don't see it in the latest configuration, or if it's there I'm not recognizing it as such.
> Is there an external url configuration setting similar to that for the peers?
No, the orderer addresses are encoded in the channel configuration. The peers use those addresses to connect to the orderers, and report those orderer addresses to the clients via service discovery.
The assumption is that for a multi-org deployment, the orderer addresses must be externally routable.
So the host/port values in the profiles>{profile}>Orderer>Etcdraft>Consenters, or the Addresses? And if the latter, just the profiles or the ones in orderer>addresses too.
You may configure the orderers to use the same port for Raft consensus and Fabric ordering (the default)
Or you may configure separate listeners for each service.
The consenters would be the Raft internal cluster communication
The orderer addresses are the addresses peers/clients connect to
(Which again, by default, are the same targets, but may be split)
We use the same ports. Enough to take care of with one set.
I'll try changing Orderer>Addresses to the external values and reconfigure then - if I understood what you said that should be sufficient here.
Note, the orderers still connect to each other via those orderer addresses when doing block replication in some recovery scenarios. But, so long as that address is routable network-wide, you should have no problems.
So I'm still having a small issue.
Would I be correct in surmising that the top-level Orderer.Addresses and Orderer.EtcdRaft.Consenters values are defaults, the same values in the genesis profile are overrides that can be discarded if identical, or do I need to replicate them still?
Mainly because if that's the case it would explain why discovery is still pulling in the internal raft orderer addresses under some conditions. I only changed the Orderer... ones and left Profile... alone.
they are not the same at all
and there is no fallback between them
@aatkddny - you can use the config command in the discovery CLI https://hyperledger-fabric.readthedocs.io/en/release-1.4/discovery-cli.html to see exactly what the peer is returning to the SDK
Forgive me for being dense. I just want to be sure I understood this, because it is at odds with what I *thought* I'd been told previously.
In the two pieces of configtx here, are you saying that the set of address values inside the profile is not an override for the address values in the orderer section?
```
Orderer: &OrdererDefaults
    # Orderer Type: The orderer implementation to start.
    # Available types are "solo" and "kafka".
    OrdererType: etcdraft

    # Addresses here is a nonexhaustive list of orderers the peers and clients can
    # connect to. Adding/removing nodes from this list has no impact on their
    # participation in ordering.
    # NOTE: In the solo case, this should be a one-item list.
    Addresses:
        - localhost:30004
        - localhost:30006
        - localhost:30008
        - localhost:30010
        - localhost:30012

===snip===

Profiles:
    Genesis:
        <<: *ChannelDefaults
        Capabilities:
            <<: *ChannelCapabilities
        Orderer:
            <<: *OrdererDefaults
            OrdererType: etcdraft
            EtcdRaft:
                Consenters:
                    ===consenters removed===
            Addresses:
                - localhost:30004
                - localhost:30006
                - localhost:30008
                - localhost:30010
                - localhost:30012
```
Well changing the second one stops it erroring out in the sdk. instead now the peers can't talk to the orderers.
```
2019-10-30 22:47:59.736 UTC [ConnProducer] NewConnection -> ERRO 331 Failed connecting to localhost:30006 , error: context deadline exceeded
2019-10-30 22:47:59.736 UTC [grpc] func1 -> DEBU 332 Failed to dial localhost:30006: context canceled; please retry.
2019-10-30 22:47:59.736 UTC [deliveryClient] connect -> ERRO 333 Failed obtaining connection: Could not connect to any of the endpoints: [localhost:30012 localhost:30010 localhost:30008 localhost:30004 localhost:30006]
2019-10-30 22:47:59.737 UTC [deliveryClient] try -> WARN 334 Got error: Could not connect to any of the endpoints: [localhost:30012 localhost:30010 localhost:30008 localhost:30004 localhost:30006] , at 8 attempt. Retrying in 2m8s
```
This worked when they were set to the service names (raft0-orderer .. raft4-orderer), but then sd doesn't. I'm clearly missing something here.
@aatkddny let me explain
there are 2 types of endpoints of Raft orderers
1) Endpoints defined in the consensus section, along with the certificates.
these are used by the consensus algorithm between nodes and just that
2) Endpoints that are defined elsewhere, now they have 2 places:
- *non org* level config, at the root level of the channel, globally to all orderer nodes
- *per org* level config, right inside each org definition, along with where the MSP resides
the latter of (2) overrides the former of (2)
and is used for peers and service discovery
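The two layers of (2) can be sketched in configtx.yaml like this (hostnames are hypothetical; the per-org OrdererEndpoints key requires Fabric 1.4.2+ and, when present, overrides the global Orderer.Addresses for peers and service discovery):

```yaml
Orderer:
  # (2) global, non-org level endpoints, at the root level of the channel
  Addresses:
    - orderer0.example.com:7050

Organizations:
  - &OrdererOrg
    Name: OrdererOrg
    ID: OrdererMSP
    MSPDir: crypto-config/ordererOrganizations/example.com/msp
    # (2) per-org endpoints: override the global Addresses (1.4.2+)
    OrdererEndpoints:
      - orderer0.example.com:7050
```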
I see the second now in the sample. It changed between 1.4.1 and 1.4.3 to add it.
edit: did the encoding of the genesis block change too? moving to 1.4.3 has stopped my peers from being able to join a channel.
@santmukh ^^ please check this
For (2), are the 'per org' entries a replacement for the 'non org' entries? So far it seems the 'non org' ones can't be omitted?
they are a replacement
the non org ones can be omitted
Thank you for clarifying. However it seems omitting the 'non org' ones causes trouble (using fabric 1.4.3): configtxgen will place default entries (127.0.0.1:7050) (log message `2019-10-31 08:34:27.230 CET [common.tools.configtxgen.localconfig] completeInitialization -> INFO 002 Orderer.Addresses unset, setting to [127.0.0.1:7050]`).
When I then execute a transaction using peer chaincode invoke, that peer receives 'orderer endpoint: 127.0.0.1:7050', where it can't reach the orderer.
Hi guys, when I fetch a config block for a channel from a peer, it's different from the config block on the orderer, even though the block height of the channel is the same on the orderer and the peer. Why this discrepancy, can anyone please explain? Also, what would be the correct source of config blocks?
```
root@fabric0:/opt/gopath/src/github.com/hyperledger/fabric/peer# peer channel fetch config -c channel100 -o orderer0.example.com:7050
2019-10-31 09:09:53.009 UTC [channelCmd] InitCmdFactory -> INFO 001 Endorser and orderer connections initialized
2019-10-31 09:09:53.011 UTC [cli.common] readBlock -> INFO 002 Received block: 65
2019-10-31 09:09:53.012 UTC [cli.common] readBlock -> INFO 003 Received block: 6
root@fabric0:/opt/gopath/src/github.com/hyperledger/fabric/peer# peer channel fetch config -c channel100
2019-10-31 09:09:55.025 UTC [channelCmd] InitCmdFactory -> INFO 001 Endorser and orderer connections initialized
2019-10-31 09:09:55.027 UTC [cli.common] readBlock -> INFO 002 Received block: 65
2019-10-31 09:09:55.028 UTC [cli.common] readBlock -> INFO 003 Received block: 5
```
are blocks 6 and 5 both config blocks? @AshishMishra 1
So back to my little orderer problem with service discovery and my favorite container setup. I'm going to wave my ignorance around a bit in the hope someone will take pity on me.
For simplicity i'll give you two addresses on a local setup.
`external` is `localhost:30004` and internal is `raft0-orderer:7050` - extrapolate from there to a full set.
I'm sticking them into the Address: part of configtx.yaml and building everything from there.
When I use internal I'm good right up to the point that i try to submit a transaction to the orderer.
Then it fails with a can't contact grpcs:internal.
If I add it to my hosts file it still fails and the orderer gives me one of those errors that look like it was sent http instead of grpcs.
When I use external I'm good right up to the point that the peer tries to contact the orderer. Then it fails with a `failed to connect to` connection error.
The peers seem to get round this with an internal and an external address. Not sure how to do the same with the orderers.
Anyone hit this and have a workaround. I'm sure it's something I've overlooked.
The internal/external address on the peer is more for port binding, because the peer launches chaincodes, which need to be told where to connect to. In the orderer case you are presumably listening on all available interfaces on a given port. The other bits of your network just need to know the externally routable address. You should be able to debug this using something like curl or nmap. You just need for the address given in the channel config to be routable from your peers and clients. Since it sounds like your peers are inside your k8s cluster but your clients aren't.
But this is more of a k8s problem than a Fabric problem. If you have a webserver listening on port 80 in a k8s cluster, and you want to curl it from another pod in the cluster, you would use one address. If you wanted to curl it from outside the k8s cluster you could use a different one. In this case, it's that all the peers need one address, and the clients need another, but in general, you could have some peers inside the k8s cluster, and some outside, some clients inside, and some clients outside. So, all Fabric can do is report one address, and let the deployer figure out how to make it routable from all parts of the network.
@jyellick can't we just put all addresses inside the config? both internal and external?
the SDK should be smart enough to figure out who is reachable, who is not
we should have HA anyway and it shouldn't affect the application experience
@aatkddny what if you put all endpoints in the config ?
Raft should know how to handle that
I don't see why it wouldn't work, though it's obviously not ideal. Could you just deploy to two different k8s clusters? That way all connections to the orderers are from outside? [Edit: I guess we still need the orderers to be able to find each other, so maybe two clusters still doesn't solve it]
I know there are successful k8s deployments out there, might need an ingress controller... I can't seem to find anyone who knows at the moment, but I'll check with them next time I see them online.
Sorry was away. Just catching up. We have a working raft k8s setup. Have had for quite a while now.
The way it works is to fully express the network configuration.
All the peers, all the orderers, all the channels, all the cas, everything in one handy dandy chunk of network configuration.
I have a chunk of code that does that and it's been working for quite a long time. It was designed to ape the configuration from v1 of IBP so I could go from one to the other by only inserting this builder in place of the call to the v1 endpoint. Ours was actually reverse engineered from the config from the v1 endpoint.
Now the world is different. Configuration is minimal and for some unknown reason is by channel (although we don't actually say anything about the channel and it doesn't fit well with the java hfclient supporting multiple channels per user) and we are discovering everything about the network by calling a peer or two.
I don't want to have two totally different sets of code to maintain, so I was hoping to find a workaround that allows me to plug in a minimal configuration and get the rest of the discovery and gateway (java sdk remember) goodness. I can keep what currently works and reflect it into the gateway builder (java sdk again), but it's very different in form from the new way of doing things and I'm concerned that whatever comes next will totally break things if I can't maintain parity.
I get it's a K8s issue - isn't everything - but in order to scale this puppy I need K8s.
Which means I need to be able to get to the orderers from both inside and outside.
I can't guarantee my application code is inside the cluster. In some cases it isn't even dockerized.
How are you doing HA across zones? Isn't that the same problem?
Ingress doesn't work for the java SDK. Trust me - I wanted to use it just to get rid of my NodePort proliferation problem. I've had a jira out about it forever... Edit: FABJ-457
Has joined the channel.
Yes @yacovm both are config blocks.
Well after walking away discouraged yesterday I came back to this this morning and almost immediately realized what I'd overlooked.
I'm developing on a Mac. Their K8S setup is a bit different. It has some unique naming requirements.
If I set the `Address` fields to `docker.for.mac.localhost` instead of `localhost` and stick the same in my hosts file, it does exactly what it is supposed to do. My MySQL setup should have tipped me off earlier.
I'll go stand in the corner now quietly and wipe the egg off my face. Thanks everyone for their help and patience on this wild goose chase.
@aatkddny what I don't understand is why you're testing stuff from a mac
the applications anyway always run on servers
you should test them by deploying them there
are you basically running the application inside your IDE and testing against the Fabric installation?
Exactly. I run a local k8s install for dev before moving it to a test one.
This is for code that manages the blockchain and its participants inside the cluster, rather than the applications that add assets. It actually creates the yaml files to allow me to deploy nodes to a cluster, add channels, install chaincode and suchlike. Doing it manually doesn't scale at all.
Has joined the channel.
numerics are my problem? I know hyphens work in a channel name. If so, why?
`22-8gvt-2027-35-1-68rh-2 contains illegal characters`
everything is fine until it gets to the orderer to commit the create.
You must start with a letter
Channel IDs must start with a letter
See https://github.com/hyperledger/fabric/blob/1ac3f05847279f87da5daf81415a02b89048c2e9/common/configtx/validator.go#L24 (`channelAllowedChars = "[a-z][a-z0-9.-]*"`)
As for why it's disallowed, basically, we chose to take the intersection of allowable Kafka topics and CouchDB names, so that it would be easy for admins to look at their brokers and or dbs and understand what data was there and why. The alternative would have been to generate an internal GID type identifier within those constraints and allow more arbitrary channel ids, but that was not the approach taken.
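For illustration, the check behind that error can be reproduced with a small Go snippet; the regexp mirrors `channelAllowedChars` from the linked `validator.go` (this is a sketch of the rule, not Fabric's actual validation code):

```go
package main

import (
	"fmt"
	"regexp"
)

// Mirrors channelAllowedChars = "[a-z][a-z0-9.-]*": a channel ID must
// start with a lowercase letter, then lowercase letters, digits, dots,
// or hyphens are allowed.
var channelIDRe = regexp.MustCompile(`^[a-z][a-z0-9.-]*$`)

func validChannelID(id string) bool {
	return channelIDRe.MatchString(id)
}

func main() {
	fmt.Println(validChannelID("mychannel"))                // true
	fmt.Println(validChannelID("22-8gvt-2027-35-1-68rh-2")) // false: starts with a digit
}
```

So the hyphens in `22-8gvt-2027-35-1-68rh-2` are fine; it is the leading digit that trips the regexp.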
Has joined the channel.
It would be nice if this were caught inside configtxgen rather than producing an invalid configuration that appears to get past the actual create and only fails when the orderer goes to commit it.
Just a thought.
any idea what this means?
/Channel not satisfied: implicit policy evaluation failed - 1 sub-policies were satisfied, but this policy requires 2 of the 'Admins' sub-policies to be satisfied
The command you are trying to perform requires 2 Admin signatures; my guess is that this is a configuration update.
If my guess is correct, you will need to pass the configuration update to another MSP to sign. If it is a transaction, the endorsement policy requires that 2 admins endorse it; your peer could not reach the peer it needs, or that peer is down
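The "N sub-policies were satisfied, but this policy requires M" message comes from an implicit meta policy counting how many org-level sub-policies your signature set satisfies. A simplified Go sketch of that counting (illustrative only, not Fabric's actual `implicitMetaPolicy` code):

```go
package main

import "fmt"

// evaluateImplicitMeta mimics the gist of Fabric's implicit meta policy:
// count how many org-level 'Admins' sub-policies the signature set
// satisfies and compare against the required threshold.
func evaluateImplicitMeta(satisfied map[string]bool, required int) error {
	count := 0
	for _, ok := range satisfied {
		if ok {
			count++
		}
	}
	if count < required {
		return fmt.Errorf("implicit policy evaluation failed - %d sub-policies were satisfied, but this policy requires %d of the 'Admins' sub-policies to be satisfied", count, required)
	}
	return nil
}

func main() {
	// Only Org1's admin signed; with MAJORITY of two orgs (i.e. 2), this fails.
	err := evaluateImplicitMeta(map[string]bool{"Org1MSP": true, "Org2MSP": false}, 2)
	fmt.Println(err)
}
```

Collecting a signature from the second org's admin bumps the count to 2 and the same evaluation passes.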
yes, i'm trying to add an org into the system channel. it is a config update
So you need to run `peer channel signconfigtx` on both your orgs
there's only one org in the system, the orderer org. i'm trying to bring in a new peer org
when i submit with the new org's signatures i get
Principal deserialization failure (MSP newOrgMSP is unknown) for identity 0
You should use only the MSPs that are already on it.
ok i took a different approach. now i have two orgs, orderer and peerorg1. and i want to create a new channel with just peerorg1 in it. heres what i get:
dev_orderer1 | 2019-11-11 20:42:48.963 UTC [orderer.common.broadcast] ProcessMessage -> WARN 5d3 [channel: mychannel] Rejecting broadcast of config message from 192.168.16.1:34000 because of error: error validating channel creation transaction for new channel 'mychannel', could not succesfully apply update to template configuration: error authorizing update: error validating DeltaSet: policy for [Group] /Channel/Application not satisfied: implicit policy evaluation failed - 0 sub-policies were satisfied, but this policy requires 1 of the 'Admins' sub-policies to be satisfied
this is going to be one of those why on earth would you do that questions - one that i'm pretty sure the answer is no.
there's no way to (easily) rename a channel without recreating it is there?
no, and i cannot help but asking, why? :P
need to change the naming convention that we use to generate channel names.
would have been easier to rename than recreate.
@aatkddny No, renaming (or more generally deleting and recreating) channels is prohibited for very good reasons. Essentially, you could think of it as rewinding the channel to genesis, and forking it.
If the orderer signs block N on channel foo, then it cannot sign a different block N' on channel foo without violating the 'no forking' requirements.
I have a feeling he just wants to have an alias
and not really deleting and re-creating it
in theory perhaps you can envision a channel identifier to be the hash of the genesis block
and it has a reference to a string that is its name
and you can change that string
Not sure I'd think of a rename as rewinding to genesis and rolling forward.
So long as the name change is agreed and recorded I see no reason you'd not be able to stick it into the channel as a name change and add a set of range pointers into the db so that blocks x1-x2 are old name which can be aliased in and x2+1-now are this new name
edit - yacovm called it.
then you could have many names that even point to the same chain
Has joined the channel.
Can a chaincode policy be altered post installation? Say the policy requires 1-of (two orgs) to consent? Can it be changed to 0-of or 3-of if a new organization joins the network?
This is not really an ordering related question, but yes, chaincode endorsement policies are mutable. Channel config policies are also mutable, in fact I can't think of any policies which are not.
seems obvious but what exactly is the cause of this message and how do i resolve it:
```
dev_orderer | 2019-11-15 20:22:44.191 UTC [common.configtx] policyForItem -> DEBU 3fa Getting policy for item Application with mod_policy ChannelCreationPolicy
dev_orderer | 2019-11-15 20:22:44.191 UTC [cauthdsl] func1 -> DEBU 3fb 0xc000aa9d10 gate 1573849364191888300 evaluation starts
dev_orderer | 2019-11-15 20:22:44.191 UTC [cauthdsl] func2 -> DEBU 3fc 0xc000aa9d10 signed by 0 principal evaluation starts (used [false])
dev_orderer | 2019-11-15 20:22:44.191 UTC [cauthdsl] func2 -> DEBU 3fd 0xc000aa9d10 processing identity 0 with bytes of a1f390
dev_orderer | 2019-11-15 20:22:44.192 UTC [cauthdsl] func2 -> DEBU 3fe 0xc000aa9d10 identity 0 does not satisfy principal: This identity is not an admin
dev_orderer | 2019-11-15 20:22:44.192 UTC [cauthdsl] func2 -> DEBU 3ff 0xc000aa9d10 principal evaluation fails
dev_orderer | 2019-11-15 20:22:44.192 UTC [cauthdsl] func1 -> DEBU 400 0xc000aa9d10 gate 1573849364191888300 evaluation fails
dev_orderer | 2019-11-15 20:22:44.192 UTC [orderer.common.broadcast] ProcessMessage -> WARN 401 [channel: mychannel] Rejecting broadcast of config message from 172.25.0.5:44250 because of error: error validating channel creation transaction for new channel 'mychannel', could not succesfully apply update to template configuration: error authorizing update: error validating DeltaSet: policy for [Group] /Channel/Application not satisfied: implicit policy evaluation failed - 0 sub-policies were satisfied, but this policy requires 1 of the 'Admins' sub-policies to be satisfied
```
Has joined the channel.
@tommyjay `identity 0 does not satisfy principal: This identity is not an admin`. Ensure that the user submitting the channel creation request is an admin. If you're using cryptogen, you should be using a path like: `peerOrganizations/org1.example.com/users/Admin\@org1.example.com/msp`
Not sure if this is the correct place to ask but I have seen some conflicting answers, and have so far been unable to find a concise answer.
*MUST* the ordering service maintain full ledgers *AND* full transaction data of all channels that it orders for, or can it maintain just the latest block?
If it doesn't need to validate transactions would it work with just some headers to order them or something like this?
Or, if your channel wants ledger privacy from the ordering service, is the best way to get it just to use "private data collections"?
this is the correct place to ask :)
FAB-106 and FAB-1223 describe the need to checkpoint history data. However, it is *not* implemented yet, so orderers keep full blocks for channels they participate in.
yes, private data should be used if you want your data to be hidden from orderers
@guoger Thank you for the answer, much appreciated!
Hello All, I am getting this error while trying to run orderer..."*standard_init_linux.go:211: exec user process caused "no such file or directory"*". I just took the fabric v2.0.0 and build everything using 'make' utility. Please help
I have also started seeing errors starting v2.0.0 docker images I downloaded today. Docker logs show:
```2019-11-21 16:12:48.253 UTC [orderer.common.server] initializeServerConfig -> INFO 003 Starting orderer with mutual TLS enabled
2019-11-21 16:12:48.272 UTC [fsblkstorage] NewProvider -> INFO 004 Creating new file ledger directory at /var/hyperledger/production/orderer/chains
panic: unable to bootstrap orderer. Error reading genesis block file: open /etc/hyperledger/fabric/genesisblock: no such file or directory
goroutine 1 [running]:
github.com/hyperledger/fabric/orderer/common/bootstrap/file.(*fileBootstrapper).GenesisBlock(0xc00031c2d0, 0xc00031c2d0)
/go/src/github.com/hyperledger/fabric/orderer/common/bootstrap/file/bootstrap.go:39 +0x1c0
github.com/hyperledger/fabric/orderer/common/server.extractBootstrapBlock(0xc0003a4400, 0xc0001364e0)
/go/src/github.com/hyperledger/fabric/orderer/common/server/main.go:595 +0x13f
github.com/hyperledger/fabric/orderer/common/server.Main()
/go/src/github.com/hyperledger/fabric/orderer/common/server/main.go:128 +0x1247
main.main()
/go/src/github.com/hyperledger/fabric/cmd/orderer/main.go:15 +0x20
```
My docker-compose file has this environment variable for the orderer `ORDERER_GENERAL_GENESISFILE=/etc/hyperledger/configtx/twoorgs.genesis.block` so I'm not sure why it's looking at `/etc/hyperledger/fabric/genesisblock`.
Any ideas?
I can also see in the docker logs for the orderer: `General.GenesisFile = "/etc/hyperledger/configtx/twoorgs.genesis.block"`
Has joined the channel.
We did do work in this area in the last 2 weeks, let me take a look
We are also seeing problems in the daily tests. I think this may be the problem: https://gerrit.hyperledger.org/r/c/fabric/+/33276/16/orderer/common/localconfig/config.go , part of an update merged yesterday, Nov 20 midday. https://gerrit.hyperledger.org/r/c/fabric/+/33276
Jason is looking into this now
@bestbeforetoday @scottz Jason sees the problem; he is going to fix it
It leaves the GenesisFile in the General structure but does not initialize it anymore in the Defaults.
Great!
https://gerrit.hyperledger.org/r/c/fabric/+/34442
This should be a fix for the above problem
If you want a workaround, point `ORDERER_GENERAL_BOOTSTRAPFILE` to your genesis block instead, or set it to empty.
The change in FAB-16477 was to add a new config value "BootstrapFile" instead of "GenesisFile", because for bootstrapping Raft this file may be a later config block in the chain. It was supposed to fall back to the old "GenesisFile" value if no "BootstrapFile" was specified (and it does). But the sampleconfig/orderer.yaml specifies a default value of `genesisblock` for this field, so it is not showing up as actually uninitialized.
@bestbeforetoday ^
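The intended fallback described above can be sketched in a few lines of Go (illustrative only; the struct and function names here are assumptions, not the actual localconfig code):

```go
package main

import "fmt"

// General mirrors just the two fields relevant to the discussion.
type General struct {
	GenesisFile   string // old, deprecated name
	BootstrapFile string // new name introduced for Raft bootstrap (FAB-16477)
}

// effectiveBootstrapFile returns BootstrapFile when set, otherwise
// falls back to the old GenesisFile value, as the change intended.
func effectiveBootstrapFile(g General) string {
	if g.BootstrapFile != "" {
		return g.BootstrapFile
	}
	return g.GenesisFile
}

func main() {
	g := General{GenesisFile: "/etc/hyperledger/configtx/twoorgs.genesis.block"}
	fmt.Println(effectiveBootstrapFile(g))
}
```

The regression was, in effect, that the sample config pre-populated the new field, so the fallback branch was never taken.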
We created this bug for this issue: https://jira.hyperledger.org/browse/FAB-17124
i've asked the original author of the PR to adapt fabric-samples to the recent change as well https://github.com/hyperledger/fabric-samples/pull/77
The bootstrap method half of that hasn't been merged into Fabric yet
https://gerrit.hyperledger.org/r/c/fabric/+/34437
Note you have an outstanding -1, and it will soon be in merge conflict once the fix for the regression merges
right... and the question is, since we could break compatibility in v2.0, do we actually need to keep old `GenesisFile`/`GenesisMethod` working? i'd hope we break less, but if there are other configs in yaml already being deprecated, we might as well change it all
Well, we can deprecate, but I wouldn't remove
I don't really see the benefit to removing
Especially if we have a more drastic config overhaul in the future
removing
pro: we'd have 2-3 lines of code less in that CR
con: old config option would not work anymore and we'll need to update them all
i personally think we shouldn't remove it, hence the -1...
oh, i didn't see he replied, will follow-up
I am getting this error while creating channel "*implicit policy evaluation failed - 0 sub-policies were satisfied, but this policy requires 1 of the 'Writers' sub-policies to be satisfied: permission denied*". Please help
I am trying to add a new org named org3 to the network. I followed this tutorial end to end https://hyperledger-fabric.readthedocs.io/en/latest/channel_update_tutorial.html. I am able to join a channel that was already pre-created between the orderer, org1 and org2. I am not able to create a new channel on org3, as the orderer reports: Principal deserialization failure, MSP Org3MSP is unknown
Which is understandable. When the genesis.block was created using configtxgen, the configtx.yaml only had OrdererMSP, Org1MSP and Org2MSP. How do I fix this though?
Dear All, I am using Fabric v1.4.4. Everything works fine with Fabric v1.4.2, but with Fabric v1.4.4 I get the error "*Error: got unexpected status: FORBIDDEN -- implicit policy evaluation failed - 0 sub-policies were satisfied, but this policy requires 1 of the 'Writers' sub-policies to be satisfied: permission denied*" at channel creation with the command "*peer channel create -o orderer.example.com:7050 -c first-channel -f ./channel-artifacts/channel.tx --tls --cafile /opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/ordererOrganizations/example.com/orderers/orderer.example.com/msp/tlscacerts/tlsca.example.com-cert.pem*". Is there anything I am missing in Fabric v1.4.4? I am stuck; I am setting the network up from scratch. Please help.
I am using this command to generate orderer genesis "*configtxgen -profile OrdererGenesis -channelID first-channel -outputBlock ./config/genesis.block*"
Is there any difference between "*-channelID $SYS_CHANNEL*" and "*-channelID $CHANNEL_NAME*"? I am providing the same name for both.
ChannelID is for application channels that are defined by the admin.
Are there any guidance on batchSize for improving performance of chaincode invoke operations?
by "performance of chaincode invoke operations", do you mean the time it takes to complete a chaincode invoke? in that case, i don't think batchsize would help...
Has joined the channel.
time to cut a block might. assuming this doesn't have a lot of tx/s
edit: ran into this when we first started - couldn't understand why it took 2s for a tx to complete. until we saw the block cut time in the configuration.
https://medium.com/thinkdecentralized/updating-the-consortium-definition-in-hyperledger-fabric-d1b6a9d079b0 This link helped me solve this question
Hello, As part of orderer startup in RAFT mode, is there a way apart from checking from logs to confirm that every channel has Raft Leader
Hello, As part of orderer startup in RAFT mode, is there a way apart from checking from logs to confirm that every channel has Raft Leader?
tweaking batchTimeout from 2s to 0.5s helped. But doesn't that increase the number of blocks that get written? storage vs performance?
only if there's something in the block.
If you have enabled the operations metrics, there is one: `consensus_etcdraft_is_leader`. If it is 1, the OSN is the leader, and it is 0 otherwise
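If you scrape the operations endpoint, checking leadership per channel amounts to reading that gauge. A minimal Go sketch of parsing a Prometheus-format metrics dump (the sample text below is made up; the real metric may carry additional labels):

```go
package main

import (
	"fmt"
	"strings"
)

// isLeader scans Prometheus-format metrics text for the
// consensus_etcdraft_is_leader gauge with the given channel label
// and reports whether its value is 1.
func isLeader(metrics, channel string) bool {
	prefix := fmt.Sprintf(`consensus_etcdraft_is_leader{channel=%q}`, channel)
	for _, line := range strings.Split(metrics, "\n") {
		if strings.HasPrefix(line, prefix) {
			fields := strings.Fields(line)
			return fields[len(fields)-1] == "1"
		}
	}
	return false
}

func main() {
	sample := `consensus_etcdraft_is_leader{channel="mychannel"} 1
consensus_etcdraft_is_leader{channel="syschannel"} 0`
	fmt.Println(isLeader(sample, "mychannel"))  // true
	fmt.Println(isLeader(sample, "syschannel")) // false
}
```

Running this against each channel label in the dump tells you whether this OSN leads every channel it serves.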
i misunderstood your question. Fabric tx goes through 3 phases - execute, order, commit, and i was assuming you were talking about execute only, which is not affected by batchsize. But apparently you were talking about the whole lifecycle of tx. Yes, you need to tune those parameters to fit your use case and deployment: in a busy network, you may want to squeeze more tx into one block to minimize the overhead of block cutting, and on the other hand, reduce the latency so a tx would not need to wait for a block to be full
as @aatkddny said, if there's no tx at all, timeout would not take effect
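The interplay described above (BatchSize vs BatchTimeout, and the timeout only mattering when transactions are pending) can be modeled with a toy block cutter; this is a sketch of the behavior, not Fabric's actual blockcutter code:

```go
package main

import "fmt"

// cutter is a toy model of the orderer's block cutter: it cuts a
// block when MaxMessageCount is reached; otherwise the caller cuts
// on BatchTimeout, but only if something is pending.
type cutter struct {
	maxMessageCount int
	pending         []string
}

// ordered enqueues a tx and returns a cut batch when the size limit is hit.
func (c *cutter) ordered(tx string) []string {
	c.pending = append(c.pending, tx)
	if len(c.pending) >= c.maxMessageCount {
		batch := c.pending
		c.pending = nil
		return batch
	}
	return nil
}

// timeout fires on BatchTimeout: it cuts whatever is pending, or
// nothing at all if the queue is empty (no empty blocks are produced).
func (c *cutter) timeout() []string {
	if len(c.pending) == 0 {
		return nil
	}
	batch := c.pending
	c.pending = nil
	return batch
}

func main() {
	c := &cutter{maxMessageCount: 10}
	c.ordered("tx1")
	// A lone tx waits for the timeout, then gets its own block.
	fmt.Println(len(c.timeout()))
	// With an empty queue, the timeout cuts nothing.
	fmt.Println(len(c.timeout()))
}
```

This is why lowering BatchTimeout from 2s to 0.5s reduces latency on a quiet network, and why it does not inflate storage when there is no traffic: a timer tick with an empty queue produces no block.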
Has joined the channel.
Getting an error when updating the orderer BatchTimeout:
*Error: got unexpected status: BAD_REQUEST -- error applying config update to existing channel 'mychannel': error authorizing update: error validating DeltaSet: policy for [Value] /Channel/Orderer/BatchTimeout not satisfied: implicit policy evaluation failed - 0 sub-policies were satisfied, but this policy requires 1 of the 'Admins' sub-policies to be satisfied.*
I have two orgs and I have signed the envelope pb file with both orgs.
with default settings, you'll need to sign update tx with Orderer Org Admin. If you could provide orderer logs, we could probably further diagnose what's going on
I tried even with OrdererMSP
Same orderer logs
[orderer.comm.broadcast]
Error: got unexpected status: BAD_REQUEST -- error applying config update to existing channel 'mychannel': error authorizing update: error validating DeltaSet: policy for [Value] /Channel/Orderer/BatchTimeout not satisfied: implicit policy evaluation failed - 0 sub-policies were satisfied, but this policy requires 1 of the 'Admins' sub-policies to be satisfied.
These are some debug logs
2019-11-28 08:58:25.472 UTC [cauthdsl] deduplicate -> WARN d9c De-duplicating identity [Org1MSP7ca7364a9cca69cf28ea7fad30f2fe4de53f2729a3c337c353fc1b584025e1ba] at index 3 in signature set
2019-11-28 08:58:25.472 UTC [cauthdsl] func1 -> DEBU d9d 0xc000b0e880 gate 1574931505472263661 evaluation starts
2019-11-28 08:58:25.472 UTC [cauthdsl] func2 -> DEBU d9e 0xc000b0e880 signed by 0 principal evaluation starts (used [false false false false])
2019-11-28 08:58:25.472 UTC [cauthdsl] func2 -> DEBU d9f 0xc000b0e880 processing identity 0 with bytes of a1f390
2019-11-28 08:58:25.472 UTC [cauthdsl] func2 -> DEBU da0 0xc000b0e880 identity 0 does not satisfy principal: the identity is a member of a different MSP (expected OrdererMSP, got Org1MSP)
2019-11-28 08:58:25.472 UTC [cauthdsl] func2 -> DEBU da1 0xc000b0e880 processing identity 1 with bytes of a1f390
2019-11-28 08:58:25.472 UTC [cauthdsl] func2 -> DEBU da2 0xc000b0e880 identity 1 does not satisfy principal: the identity is a member of a different MSP (expected OrdererMSP, got Org2MSP)
2019-11-28 08:58:25.472 UTC [cauthdsl] func2 -> DEBU da3 0xc000b0e880 processing identity 2 with bytes of a1f390
2019-11-28 08:58:25.472 UTC [cauthdsl] func2 -> DEBU da4 0xc000b0e880 identity 2 does not satisfy principal: The identity is not an admin under this MSP [OrdererMSP]: The identity does not contain OU [ADMIN], MSP: [OrdererMSP]
2019-11-28 08:58:25.472 UTC [cauthdsl] func2 -> DEBU da5 0xc000b0e880 principal evaluation fails
2019-11-28 08:58:25.472 UTC [cauthdsl] func1 -> DEBU da6 0xc000b0e880 gate 1574931505472263661 evaluation fails
2019-11-28 08:58:25.472 UTC [policies] Evaluate -> DEBU da7 Signature set did not satisfy policy /Channel/Orderer/OrdererOrg/Admins
2019-11-28 08:58:25.472 UTC [policies] Evaluate -> DEBU da8 == Done Evaluating *cauthdsl.policy Policy /Channel/Orderer/OrdererOrg/Admins
2019-11-28 08:58:25.472 UTC [policies] func1 -> DEBU da9 Evaluation Failed: Only 0 policies were satisfied, but needed 1 of [ OrdererOrg/Admins ]
2019-11-28 08:58:25.472 UTC [policies] Evaluate -> DEBU daa Signature set did not satisfy policy /Channel/Orderer/Admins
2019-11-28 08:58:25.472 UTC [policies] Evaluate -> DEBU dab == Done Evaluating *policies.implicitMetaPolicy Policy /Channel/Orderer/Admins
Has joined the channel.
I experience an issue when running orderer can't seem to get it started, it only prints INFO log of config values...from that point it becomes unresponsive
keep getting "connection refused" errors
Any one?
pls turn on debug and send logs here
Has joined the channel.
Hi All, I'm getting this Error: got unexpected status: BAD_REQUEST -- error validating channel creation transaction for new channel 'mychannel', could not succesfully apply update to template configuration: error authorizing update: error validating DeltaSet: policy for [Group] /Channel/Application not satisfied: implicit policy evaluation failed - 0 sub-policies were satisfied, but this policy requires 1 of the 'Admins' sub-policies to be satisfied
Here's the output
Debug is turned on (both on orderer and peers).
Orderer prints out only the config:
==================================================================
2019-11-28 15:36:06.864 UTC [orderer.common.server] prettyPrintStruct -> INFO 001 Orderer config values:
General.LedgerType = "file"
General.ListenAddress = "0.0.0.0"
General.ListenPort = 7050
...
...
...
...
...
...
...
...
Metrics.Statsd.WriteInterval = 30s
Metrics.Statsd.Prefix = ""
==================================================================
Nothing more gets printed, which is why I suspect something with the orderer isn't right.
Peers are getting an error when trying to create a channel:
==================================================================
...
...
2019-11-28 15:38:08.291 UTC [grpc] HandleSubConnStateChange -> DEBU 03d pickfirstBalancer: HandleSubConnStateChange: 0xc000580130, TRANSIENT_FAILURE
Error: failed to create deliver client: orderer client failed to connect to blockchain-orderer1:7050: failed to create new connection: connection error: desc = "transport: error while dialing: dial tcp 10.16.2.246:7050: connect: connection refused"
==================================================================
That's all I get from the log output.
Fabric version is 1.4.1
can you do a kill -6 to get a dump of orderer process?
It just closes without any message
no dump? and you saw that the orderer process is killed?
that's weird...
are you sure you are using -6?
or try `-ABRT`
Yes, -6, I'll post the picture
is it running in container?
yup, GKE
Has joined the channel.
hmm...never used it but i suspect gke disabled coredump there... are you able to reproduce the problem locally?
No, locally on my minikube everything runs smooth
https://imgur.com/a/UO5ROPm
oh, i guess either coredump is not enabled, or not directed to stdout/stderr
you probably need to figure that out first
i'm not gke expert unfortunately
okay, kill -6 from another terminal dumped this:
========================================
SIGABRT: abort
PC=0x45f8b1 m=0 sigcode=0
goroutine 0 [idle]:
runtime.futex(0x1b66140, 0x80, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x7fff3a0b8000, 0x40b712, ...)
/opt/go/src/runtime/sys_linux_amd64.s:531 +0x21
runtime.futexsleep(0x1b66140, 0x7fff00000000, 0xffffffffffffffff)
/opt/go/src/runtime/os_linux.go:46 +0x4b
runtime.notesleep(0x1b66140)
/opt/go/src/runtime/lock_futex.go:151 +0xa2
runtime.stopm()
/opt/go/src/runtime/proc.go:2016 +0xe3
runtime.findrunnable(0xc000048000, 0x0)
/opt/go/src/runtime/proc.go:2487 +0x4dc
runtime.schedule()
/opt/go/src/runtime/proc.go:2613 +0x13a
runtime.park_m(0xc0000aec00)
/opt/go/src/runtime/proc.go:2676 +0xae
runtime.mcall(0x1b09f78)
/opt/go/src/runtime/asm_amd64.s:299 +0x5b
goroutine 1 [syscall]:
syscall.Syscall(0x49, 0x3, 0x6, 0x0, 0x0, 0x0, 0x0)
/opt/go/src/syscall/asm_linux_amd64.s:18 +0x5
syscall.Flock(0x3, 0x6, 0x0, 0x0)
/opt/go/src/syscall/zsyscall_linux_amd64.go:439 +0x4b
github.com/hyperledger/fabric/vendor/github.com/syndtr/goleveldb/leveldb/storage.setFileLock(0xc0000dacf0, 0x100, 0x0, 0x0)
/opt/gopath/src/github.com/hyperledger/fabric/vendor/github.com/syndtr/goleveldb/leveldb/storage/file_storage_unix.go:59 +0x6f
github.com/hyperledger/fabric/vendor/github.com/syndtr/goleveldb/leveldb/storage.newFileLock(0xc000179880, 0x31, 0x0, 0xc000179880, 0x31, 0x0, 0x0)
/opt/gopath/src/github.com/hyperledger/fabric/vendor/github.com/syndtr/goleveldb/leveldb/storage/file_storage_unix.go:41 +0xd2
github.com/hyperledger/fabric/vendor/github.com/syndtr/goleveldb/leveldb/storage.OpenFile(0xc0004017a0, 0x2c, 0xc000401700, 0x0, 0x0, 0x0, 0x0)
/opt/gopath/src/github.com/hyperledger/fabric/vendor/github.com/syndtr/goleveldb/leveldb/storage/file_storage.go:107 +0x1b7
github.com/hyperledger/fabric/vendor/github.com/syndtr/goleveldb/leveldb.OpenFile(0xc0004017a0, 0x2c, 0xc0003e1800, 0x0, 0x0, 0xc0003e17d8)
/opt/gopath/src/github.com/hyperledger/fabric/vendor/github.com/syndtr/goleveldb/leveldb/db.go:215 +0x55
github.com/hyperledger/fabric/common/ledger/util/leveldbhelper.(*DB).Open(0xc0004524c0)
/opt/gopath/src/github.com/hyperledger/fabric/common/ledger/util/leveldbhelper/leveldb_helper.go:78 +0x146
github.com/hyperledger/fabric/common/ledger/util/leveldbhelper.NewProvider(0xc0000c5b60, 0xc0000c5b60)
/opt/gopath/src/github.com/hyperledger/fabric/common/ledger/util/leveldbhelper/leveldb_provider.go:40 +0xda
github.com/hyperledger/fabric/common/ledger/blkstorage/fsblkstorage.NewProvider(0xc0002b3200, 0xc0002b3220, 0x7d2325, 0xc00000e1f8)
/opt/gopath/src/github.com/hyperledger/fabric/common/ledger/blkstorage/fsblkstorage/fs_blockstore_provider.go:34 +0x7f
github.com/hyperledger/fabric/common/ledger/blockledger/file.New(0xc00004406c, 0x26, 0x2, 0x2)
/opt/gopath/src/github.com/hyperledger/fabric/common/ledger/blockledger/file/factory.go:71 +0xea
github.com/hyperledger/fabric/orderer/common/server.createLedgerFactory(0xc0004f7200, 0x1b89730, 0x0, 0x7ce857, 0xc0001841e0)
/opt/gopath/src/github.com/hyperledger/fabric/orderer/common/server/util.go:32 +0x1dd
github.com/hyperledger/fabric/orderer/common/server.Start(0x1013e09, 0x5, 0xc0004f7200)
/opt/gopath/src/github.com/hyperledger/fabric/orderer/common/server/main.go:104 +0x118
github.com/hyperledger/fabric/orderer/common/server.Main()
/opt/gopath/src/github.com/hyperledger/fabric/orderer/common/server/main.go:91 +0x1ce
main.main()
/opt/gopath/src/github.com/hyperledger/fabric/orderer/main.go:15 +0x20
goroutine 20 [syscall, 1 minutes]:
os/signal.signal_recv(0x0)
/opt/go/src/runtime/sigqueue.go:139 +0x9c
os/signal.loop()
/opt/go/src/os/signal/signal_unix.go:23 +0x22
created by os/signal.init.0
/opt/go/src/os/signal/signal_unix.go:29 +0x41
goroutine 42 [chan receive, 1 minutes]:
github.com/hyperledger/fabric/orderer/consensus/kafka.init.1.func1(0xc0000dd500)
/opt/gopath/src/github.com/hyperledger/fabric/orderer/consensus/kafka/logger.go:41 +0x31
created by github.com/hyperledger/fabric/orderer/consensus/kafka.init.1
/opt/gopath/src/github.com/hyperledger/fabric/orderer/consensus/kafka/logger.go:38 +0x6c
rax 0xca
rbx 0x1b66000
rcx 0x45f8b3
rdx 0x0
rdi 0x1b66140
rsi 0x80
rbp 0x7fff3a0b7fc8
rsp 0x7fff3a0b7f80
r8 0x0
r9 0x0
r10 0x0
r11 0x286
r12 0x55
r13 0xc00051ff20
r14 0x200
r15 0x0
rip 0x45f8b1
rflags 0x286
cs 0x33
fs 0x0
gs 0x0
=============================================
Helpful or garbage?
I've also upgraded to version 1.4.3 same issue, orderer stuck and never started...
sorry, got pulled into a meeting. so this is helpful: the orderer never succeeds in opening the leveldb file
flock may not work well with nfs
could you try a different type of fs on gke?
We are using following NFS
image: gcr.io/google_containers/volume-nfs:0.8
that binds gce disk
Hi All, Fabric is taking more than 10 seconds to commit a transaction in our lower environment (while in past tests the response time was less than a few milliseconds). We are using the fabric golang client. I am sharing the relevant logs from the fabric client. Please suggest if you have encountered an issue like this
[fabsdk/fab] 2019/11/27 07:57:46 UTC - dispatcher.(*Dispatcher).Start.func1 -> DEBU Listening for events...
[fabsdk/fab] 2019/11/27 07:57:52 UTC - dispatcher.(*Dispatcher).disconnected -> DEBU Checking if event client should disconnect from peer [peer1.sample.com:7051] on channel [entity]...
[fabsdk/fab] 2019/11/27 07:57:52 UTC - endpoint.(*DiscoveryWrapper).GetPeers -> DEBU Channel peer config for [peer1.sample.com:7051]: &fab.ChannelPeer{PeerChannelConfig:fab.PeerChannelConfig{EndorsingPeer:true, ChaincodeQuery:true, LedgerQuery:true, EventSource:true}, NetworkPeer:fab.NetworkPeer{PeerConfig:fab.PeerConfig{URL:"peer1.sample.com:7051", GRPCOptions:map[string]interface {}{"allow-insecure":false, "fail-fast":false, "keep-alive-permit":false, "keep-alive-time":"20s", "keep-alive-timeout":"100s", "ssl-target-name-override":"peer1.sample.com"}, TLSCACert:(*x509.Certificate)(0xc000439080)}, MSPID:"SampleMSP"}}
[fabsdk/fab] 2019/11/27 07:57:52 UTC - endpoint.(*DiscoveryWrapper).GetPeers -> DEBU Channel peer config for [peer0.sample.com:7051]: &fab.ChannelPeer{PeerChannelConfig:fab.PeerChannelConfig{EndorsingPeer:true, ChaincodeQuery:true, LedgerQuery:true, EventSource:true}, NetworkPeer:fab.NetworkPeer{PeerConfig:fab.PeerConfig{URL:"peer0.sample.com:7051", GRPCOptions:map[string]interface {}{"allow-insecure":false, "fail-fast":false, "keep-alive-permit":false, "keep-alive-time":"20s", "keep-alive-timeout":"100s", "ssl-target-name-override":"peer0.sample.com"}, TLSCACert:(*x509.Certificate)(0xc000439600)}, MSPID:"SampleMSP"}}
[fabsdk/fab] 2019/11/27 07:57:52 UTC - preferorg.(*PeerResolver).ShouldDisconnect -> DEBU Using the min block height resolver to determine whether peer [peer1.sample.com:7051] should be disconnected
[fabsdk/fab] 2019/11/27 07:57:52 UTC - minblockheight.(*PeerResolver).ShouldDisconnect -> DEBU Block height of connected peer [peer1.sample.com:7051] from Discovery: 0, Last block received: 292300, Max block height from Discovery: 0
[fabsdk/fab] 2019/11/27 07:57:52 UTC - minblockheight.(*PeerResolver).ShouldDisconnect -> DEBU Max block height of peers is 0 and reconnect lag threshold is 8 so event client will not be disconnected from peer
[fabsdk/fab] 2019/11/27 07:57:52 UTC - dispatcher.(*Dispatcher).disconnected -> DEBU Event client will not disconnect from peer [peer1.sample.com:7051] on channel [entity]...
[fabsdk/fab] 2019/11/27 07:57:56 UTC - comm.(*CachingConnector).openConn -> DEBU connection was opened [orderer0.ucccpr.com:7050]
[fabsdk/fab] 2019/11/27 07:57:56 UTC - comm.(*CachingConnector).ReleaseConn -> DEBU ReleaseConn [orderer0.ucccpr.com:7050]
[fabsdk/fab] 2019/11/27 07:57:56 UTC - txn.sendBroadcast -> DEBU Receive Success Response from orderer
[fabsdk/fab] 2019/11/27 07:57:58 UTC - dispatcher.(*Dispatcher).disconnected -> DEBU Checking if event client should disconnect from peer [peer1.sample.com:7051] on channel [entity]...
[fabsdk/fab] 2019/11/27 07:57:58 UTC - endpoint.(*DiscoveryWrapper).GetPeers -> DEBU Channel peer config for [peer1.sample.com:7051]: &fab.ChannelPeer{PeerChannelConfig:fab.PeerChannelConfig{EndorsingPeer:true, ChaincodeQuery:true, LedgerQuery:true, EventSource:true}, NetworkPeer:fab.NetworkPeer{PeerConfig:fab.PeerConfig{URL:"peer1.sample.com:7051", GRPCOptions:map[string]interface {}{"allow-insecure":false, "fail-fast":false, "keep-alive-permit":false, "keep-alive-time":"20s", "keep-alive-timeout":"100s", "ssl-target-name-override":"peer1.sample.com"}, TLSCACert:(*x509.Certificate)(0xc000439080)}, MSPID:"SampleMSP"}}
[fabsdk/fab] 2019/11/27 07:57:58 UTC - endpoint.(*DiscoveryWrapper).GetPeers -> DEBU Channel peer config for [peer0.sample.com:7051]: &fab.ChannelPeer{PeerChannelConfig:fab.PeerChannelConfig{EndorsingPeer:true, ChaincodeQuery:true, LedgerQuery:true, EventSource:true}, NetworkPeer:fab.NetworkPeer{PeerConfig:fab.PeerConfig{URL:"peer0.sample.com:7051", GRPCOptions:map[string]interface {}{"allow-insecure":false, "fail-fast":false, "keep-alive-permit":false, "keep-alive-time":"20s", "keep-alive-timeout":"100s", "ssl-target-name-override":"peer0.sample.com"}, TLSCACert:(*x509.Certificate)(0xc000439600)}, MSPID:"SampleMSP"}}
sending logs again
[fabsdk/fab] 2019/11/27 07:57:46 UTC - dispatcher.(*Dispatcher).Start.func1 -> DEBU Listening for events...
[fabsdk/fab] 2019/11/27 07:57:52 UTC - dispatcher.(*Dispatcher).disconnected -> DEBU Checking if event client should disconnect from peer [peer1.sample.com:7051] on channel [entity]...
[fabsdk/fab] 2019/11/27 07:57:52 UTC - endpoint.(*DiscoveryWrapper).GetPeers -> DEBU Channel peer config for [peer1.sample.com:7051]: &fab.ChannelPeer{PeerChannelConfig:fab.PeerChannelConfig{EndorsingPeer:true, ChaincodeQuery:true, LedgerQuery:true, EventSource:true}, NetworkPeer:fab.NetworkPeer{PeerConfig:fab.PeerConfig{URL:"peer1.sample.com:7051", GRPCOptions:map[string]interface {}{"allow-insecure":false, "fail-fast":false, "keep-alive-permit":false, "keep-alive-time":"20s", "keep-alive-timeout":"100s", "ssl-target-name-override":"peer1.sample.com"}, TLSCACert:(*x509.Certificate)(0xc000439080)}, MSPID:"SampleMSP"}}
[fabsdk/fab] 2019/11/27 07:57:52 UTC - endpoint.(*DiscoveryWrapper).GetPeers -> DEBU Channel peer config for [peer0.sample.com:7051]: &fab.ChannelPeer{PeerChannelConfig:fab.PeerChannelConfig{EndorsingPeer:true, ChaincodeQuery:true, LedgerQuery:true, EventSource:true}, NetworkPeer:fab.NetworkPeer{PeerConfig:fab.PeerConfig{URL:"peer0.sample.com:7051", GRPCOptions:map[string]interface {}{"allow-insecure":false, "fail-fast":false, "keep-alive-permit":false, "keep-alive-time":"20s", "keep-alive-timeout":"100s", "ssl-target-name-override":"peer0.sample.com"}, TLSCACert:(*x509.Certificate)(0xc000439600)}, MSPID:"SampleMSP"}}
[fabsdk/fab] 2019/11/27 07:57:52 UTC - preferorg.(*PeerResolver).ShouldDisconnect -> DEBU Using the min block height resolver to determine whether peer [peer1.sample.com:7051] should be disconnected
[fabsdk/fab] 2019/11/27 07:57:52 UTC - minblockheight.(*PeerResolver).ShouldDisconnect -> DEBU Block height of connected peer [peer1.sample.com:7051] from Discovery: 0, Last block received: 292300, Max block height from Discovery: 0
[fabsdk/fab] 2019/11/27 07:57:52 UTC - minblockheight.(*PeerResolver).ShouldDisconnect -> DEBU Max block height of peers is 0 and reconnect lag threshold is 8 so event client will not be disconnected from peer
[fabsdk/fab] 2019/11/27 07:57:52 UTC - dispatcher.(*Dispatcher).disconnected -> DEBU Event client will not disconnect from peer [peer1.sample.com:7051] on channel [entity]...
[fabsdk/fab] 2019/11/27 07:57:56 UTC - comm.(*CachingConnector).openConn -> DEBU connection was opened [orderer0.Samplecpr.com:7050]
[fabsdk/fab] 2019/11/27 07:57:56 UTC - comm.(*CachingConnector).ReleaseConn -> DEBU ReleaseConn [orderer0.Samplecpr.com:7050]
[fabsdk/fab] 2019/11/27 07:57:56 UTC - txn.sendBroadcast -> DEBU Receive Success Response from orderer
[fabsdk/fab] 2019/11/27 07:57:58 UTC - dispatcher.(*Dispatcher).disconnected -> DEBU Checking if event client should disconnect from peer [peer1.sample.com:7051] on channel [entity]...
[fabsdk/fab] 2019/11/27 07:57:58 UTC - endpoint.(*DiscoveryWrapper).GetPeers -> DEBU Channel peer config for [peer1.sample.com:7051]: &fab.ChannelPeer{PeerChannelConfig:fab.PeerChannelConfig{EndorsingPeer:true, ChaincodeQuery:true, LedgerQuery:true, EventSource:true}, NetworkPeer:fab.NetworkPeer{PeerConfig:fab.PeerConfig{URL:"peer1.sample.com:7051", GRPCOptions:map[string]interface {}{"allow-insecure":false, "fail-fast":false, "keep-alive-permit":false, "keep-alive-time":"20s", "keep-alive-timeout":"100s", "ssl-target-name-override":"peer1.sample.com"}, TLSCACert:(*x509.Certificate)(0xc000439080)}, MSPID:"SampleMSP"}}
[fabsdk/fab] 2019/11/27 07:57:58 UTC - endpoint.(*DiscoveryWrapper).GetPeers -> DEBU Channel peer config for [peer0.sample.com:7051]: &fab.ChannelPeer{PeerChannelConfig:fab.PeerChannelConfig{EndorsingPeer:true, ChaincodeQuery:true, LedgerQuery:true, EventSource:true}, NetworkPeer:fab.NetworkPeer{PeerConfig:fab.PeerConfig{URL:"peer0.sample.com:7051", GRPCOptions:map[string]interface {}{"allow-insecure":false, "fail-fast":false, "keep-alive-permit":false, "keep-alive-time":"20s", "keep-alive-timeout":"100s", "ssl-target-name-override":"peer0.sample.com"}, TLSCACert:(*x509.Certificate)(0xc000439600)}, MSPID:"SampleMSP"}}
any context?
sorry,
The time taken to commit is in the range of 8-10 seconds
while we have tested the same application where response time was few milliseconds
The number of blocks is more than 250K
I am getting count of total data as well
How do I retrieve the latest channel configuration for the orderer channel? So far I have been exec-ing into the cli and using the command `peer channel fetch config sys_config_block.pb -o orderer.example.com:7050 -c $ORDERER_CHANNEL --tls --cafile $ORDERER_CA`
And it works fine, I want to do the same using fabric-node-sdk
Hi all, I am getting this error when trying to start the first orderer. Does anyone know how to troubleshoot this?
``` 2019-12-02 22:55:04.312 UTC [core.comm] ServerHandshake -> ERRO 015 TLS handshake failed with error remote error: tls: bad certificate server=Orderer remoteaddress=172.27.0.4:60238
2019-12-02 22:55:05.314 UTC [core.comm] ServerHandshake -> ERRO 016 TLS handshake failed with error remote error: tls: bad certificate server=Orderer remoteaddress=172.27.0.4:60242
2019-12-02 22:55:06.504 UTC [core.comm] ServerHandshake -> ERRO 017 TLS handshake failed with error remote error: tls: bad certificate server=Orderer remoteaddress=172.27.0.5:46584
2019-12-02 22:55:06.936 UTC [core.comm] ServerHandshake -> ERRO 018 TLS handshake failed with error remote error: tls: bad certificate server=Orderer remoteaddress=172.27.0.4:60254
2019-12-02 22:55:07.504 UTC [core.comm] ServerHandshake -> ERRO 019 TLS handshake failed with error remote error: tls: bad certificate server=Orderer remoteaddress=172.27.0.5:46592
2019-12-02 22:55:09.081 UTC [core.comm] ServerHandshake -> ERRO 01a TLS handshake failed with error remote error: tls: bad certificate server=Orderer remoteaddress=172.27.0.4:60262
2019-12-02 22:55:09.171 UTC [core.comm] ServerHandshake -> ERRO 01b TLS handshake failed with error remote error: tls: bad certificate server=Orderer remoteaddress=172.27.0.5:46598
2019-12-02 22:55:11.308 UTC [orderer.common.cluster.replication] probeEndpoint -> WARN 01c Failed connecting to {ord2-org2.inuit.local:8050 [-----BEGIN CERTIFICATE---- ```
``` 2019-12-02 22:55:18.314 UTC [orderer.common.cluster.replication] HeightsByEndpoints -> INFO 035 Returning the heights of OSNs mapped by endpoints map[]
2019-12-02 22:55:18.314 UTC [orderer.common.cluster] ReplicateChains -> PANI 036 Failed pulling system channel: failed obtaining the latest block for channel orderersyschannel
panic: Failed pulling system channel: failed obtaining the latest block for channel orderersyschannel
goroutine 32 [running]:
github.com/hyperledger/fabric/vendor/go.uber.org/zap/zapcore.(*CheckedEntry).Write(0xc0000da2c0, 0x0, 0x0, 0x0)
/opt/gopath/src/github.com/hyperledger/fabric/vendor/go.uber.org/zap/zapcore/entry.go:229 +0x515
github.com/hyperledger/fabric/vendor/go.uber.org/zap.(*SugaredLogger).log(0xc000128350, 0x1135104, 0x1031336, 0x21, 0xc000b63c00, 0x1, 0x1, 0x0, 0x0, 0x0)
/opt/gopath/src/github.com/hyperledger/fabric/vendor/go.uber.org/zap/sugar.go:234 +0xf6
github.com/hyperledger/fabric/vendor/go.uber.org/zap.(*SugaredLogger).Panicf(0xc000128350, 0x1031336, 0x21, 0xc000b63c00, 0x1, 0x1)
/opt/gopath/src/github.com/hyperledger/fabric/vendor/go.uber.org/zap/sugar.go:159 +0x79
github.com/hyperledger/fabric/common/flogging.(*FabricLogger).Panicf(0xc000128358, 0x1031336, 0x21, 0xc000b63c00, 0x1, 0x1)
/opt/gopath/src/github.com/hyperledger/fabric/common/flogging/zap.go:74 +0x60
github.com/hyperledger/fabric/orderer/common/cluster.(*Replicator).ReplicateChains(0xc00007cf00, 0x1062cb0, 0xc00035a1c0, 0xc00007cf00)
/opt/gopath/src/github.com/hyperledger/fabric/orderer/common/cluster/replication.go:155 +0x4c2
github.com/hyperledger/fabric/orderer/common/server.(*replicationInitiator).ReplicateChains(0xc000120d80, 0xc000420580, 0xc0003d7a00, 0x1, 0x1, 0x0, 0x0, 0x0)
/opt/gopath/src/github.com/hyperledger/fabric/orderer/common/server/onboarding.go:120 +0x20a
github.com/hyperledger/fabric/orderer/common/server.(*inactiveChainReplicator).replicateDisabledChains(0xc00012a9c0)
/opt/gopath/src/github.com/hyperledger/fabric/orderer/common/server/onboarding.go:224 +0x1f5
github.com/hyperledger/fabric/orderer/common/server.(*inactiveChainReplicator).run(0xc00012a9c0)
/opt/gopath/src/github.com/hyperledger/fabric/orderer/common/server/onboarding.go:202 +0x42
created by github.com/hyperledger/fabric/orderer/common/server.initializeEtcdraftConsenter
/opt/gopath/src/github.com/hyperledger/fabric/orderer/common/server/main.go:692 +0x3fc
```
You might check #fabric-sdk-node but `peer channel fetch` is essentially invoking the `Deliver` API on the orderer, it's the same operation used to retrieve the genesis block from the orderer prior to joining a channel.
None of the orderers in the orderer system channel (according to the bootstrap block you supplied) can be reached. Their TLS certs are not valid according to the config block you are bootstrapping with.
@guoger any suggestion/pointers based on above logs?
are you saying that it takes a long time to complete a tx? could you grep the logs for
```
[%s] Committed block [%d] with %d transaction(s) in %dms (state_validation=%dms block_and_pvtdata_commit=%dms state_commit=%dms)
```
to see the actual commit/validate time?
Let me grep
okay
@guoger sorry, I couldn't find the above in the orderer container logs.
Please let me know if I am looking in the right place
@guoger found in peer logs
sending
2019-12-03 09:56:09.361 UTC [kvledger] CommitWithPvtData -> INFO 98862ff [XXXXXX] Committed block [314218] with 1 transaction(s) in 42ms (state_validation=8ms block_and_pvtdata_commit=5ms state_commit=27ms) commitHash=[d9fb61a0cc7b31d36b599f4b15c4208ea4eeae7cf46751f8dbfaf123973e47aa]
2019-12-03 09:57:07.911 UTC [kvledger] CommitWithPvtData -> INFO 988b278 [XXXXXX] Committed block [314219] with 1 transaction(s) in 38ms (state_validation=8ms block_and_pvtdata_commit=4ms state_commit=25ms) commitHash=[668e07d72e990ca205af795ce5385651c7d73d38b3b771240c7935c986161e20]
@guoger Thanks for helping with this grep. The logs tell a different story though: it is committing in tens of milliseconds,
but the end-to-end time until the client receives a response is still far too high
@jyellick Thank you. I checked the certificates and corrected them. Now I get a new error
``` [orderer.consensus.etcdraft] logSendFailure -> ERRO 0f8 Failed to send StepRequest to 1, because: aborted channel=orderersyschannel node=2
2019-12-03 11:27:30.818 UTC [orderer.common.cluster] func1 -> WARN 0f9 Certificate of unidentified node from 192.168.176.103:48702 for channel orderersyschannel expires in less than -2562047h47m16.854775808s
2019-12-03 11:27:30.818 UTC [comm.grpc.server] 1 -> INFO 0fa streaming call completed grpc.service=orderer.Cluster grpc.method=Step grpc.peer_address=192.168.176.103:48702 error="no TLS certificate sent" grpc.code=Unknown grpc.call_duration=354.906µs
``` Is there any intuitive way to troubleshoot the errors in TLS connection?
I still get the tls handshake error :(
Should I also include the IP in the SAN / csr.hosts section?
That log message would seem to indicate that the expiration time on your certificate is very wrong, or that the clock on your orderer is very wrong. I would correct these things first.
If you are referencing the orderers by IP in the config, rather than by name, then you need to include the IP in the SANs
The certificates are valid for a year. Is that very short? And the orderers' clocks are just one hour off of the actual time (daylight saving time). Would these create problems? I still could not figure out the bad certificate problem though.
A 1 year validity for a certificate should not be a problem. The 1 hour clock shift could be problematic, but does not account for the error message indicating that your certificate expired almost 300 years ago.
Oh! Okay I will try spinning up the entire network again from scratch. Thank you @jyellick
Hi @jyellick I have restarted everything, I get one error and the old warning for the certificate expiry.
``` 2019-12-04 13:19:53.765 UTC [orderer.consensus.etcdraft] logSendFailure -> ERRO 0e2 Failed to send StepRequest to 1, because: aborted channel=orderersyschannel node=2
2019-12-04 13:20:00.764 UTC [orderer.consensus.etcdraft] Step -> INFO 0e3 2 is starting a new election at term 1 channel=orderersyschannel node=2
2019-12-04 13:20:00.764 UTC [orderer.consensus.etcdraft] becomePreCandidate -> INFO 0e4 2 became pre-candidate at term 1 channel=orderersyschannel node=2
2019-12-04 13:20:00.764 UTC [orderer.consensus.etcdraft] poll -> INFO 0e5 2 received MsgPreVoteResp from 2 at term 1 channel=orderersyschannel node=2
2019-12-04 13:20:00.765 UTC [orderer.consensus.etcdraft] campaign -> INFO 0e6 2 [logterm: 1, index: 4] sent MsgPreVote request to 1 at term 1 channel=orderersyschannel node=2
2019-12-04 13:20:00.765 UTC [orderer.consensus.etcdraft] campaign -> INFO 0e7 2 [logterm: 1, index: 4] sent MsgPreVote request to 3 at term 1 channel=orderersyschannel node=2
2019-12-04 13:20:00.765 UTC [orderer.consensus.etcdraft] campaign -> INFO 0e8 2 [logterm: 1, index: 4] sent MsgPreVote request to 4 at term 1 channel=orderersyschannel node=2
2019-12-04 13:20:00.765 UTC [orderer.consensus.etcdraft] send -> INFO 0e9 Successfully sent StepRequest to 1 after failed attempt(s) channel=orderersyschannel node=2
2019-12-04 13:20:06.274 UTC [orderer.common.cluster] func1 -> WARN 0ea Certificate of unidentified node from 192.168.176.103:55792 for channel orderersyschannel expires in less than -2562047h47m16.854775808s
2019-12-04 13:20:06.274 UTC [comm.grpc.server] 1 -> INFO 0eb streaming call completed grpc.service=orderer.Cluster grpc.method=Step grpc.peer_address=192.168.176.103:55792 error="no TLS certificate sent" grpc.code=Unknown grpc.call_duration=300.289µs
```
Hi all, my etcdraft orderers keep staying in election, and seem to abort the channel. Any ideas on this? There is a completed JIRA task for the same issue, but I still get this error.
If you are still seeing errors, I suggest you start from https://hyperledger-fabric.readthedocs.io/en/release-1.4/build_network.html and try to identify the difference in your configuration.
is it possible to run the orderer from the code and not through docker? i'd like to debug the data i'm sending and the debug logs are not enough
Yes, you can use the release binaries and run it locally, or simply build and run your own version
Almost all of our development is done using native binaries outside of docker
i mean, i'd like to debug through an IDE with breakpoints that i can step through
I've not had great experiences using a go debugger in an IDE, but you could certainly try it
There's nothing specific about Fabric which should be any harder than any other go application
ok, how do i get the orderer to run
`FABRIC_CFG_PATH=
tommyjay (Wed, 04 Dec 2019 21:10:00 GMT):
the thing is i don't want to use binaries. the idea is that if i have the code for a server or any app, i can run the server from my ide and put breakpoints and step through it. i want to do the same with a solo orderer
tommyjay (Wed, 04 Dec 2019 21:11:11 GMT):
like `go run orderermain.go`
tommyjay (Wed, 04 Dec 2019 21:12:32 GMT):
the reason is that there are errors that are not always bubbled up and will not be sent in the response or logged. so it's hard to know where the error actually is
tommyjay (Wed, 04 Dec 2019 21:12:51 GMT):
@jyellick does that make sense
jyellick (Wed, 04 Dec 2019 21:13:19 GMT):
What branch are you building?
tommyjay (Wed, 04 Dec 2019 21:14:17 GMT):
the image i'm using is 1.4.2. i haven't built the fabric repo yet
jyellick (Wed, 04 Dec 2019 21:25:53 GMT):
```FABRIC_CFG_PATH=sampleconfig/ go run ./orderer/
``` works for me
jyellick (Wed, 04 Dec 2019 21:27:39 GMT):
If you are simply trying to get more debug info, it's likely easiest just to add additional logging statements and reproduce your problem, rather than trying to use a debugger.
indirajith (Thu, 05 Dec 2019 13:18:59 GMT):
Hi, I was looking at it for some time and tried it. There is not much difference.
jyellick (Thu, 05 Dec 2019 14:46:55 GMT):
That should make finding your error much easier then :slight_smile:
ahmedsajid (Thu, 05 Dec 2019 15:32:38 GMT):
Hi All. I'm facing an issue during Kafka to Raft migration. 2 of 3 orderers panic when one of the channels is removed from maintenance mode during migration.
jyellick (Thu, 05 Dec 2019 15:33:10 GMT):
What is the panic?
ahmedsajid (Thu, 05 Dec 2019 15:33:29 GMT):
the orderers are complaining about previous block hash being different
```
Could not append block: unexpected Previous block hash. Expected PreviousHash = [8572c0b7b464e9d03404eb0930f3f16397e4c2e3a4ed9d9207f2ad71cec0f054], PreviousHash referred in the latest block= [b3932e6365128107b8841cc4c9798c3d7fc278e621a981ec00771631c2817645]
```
jyellick (Thu, 05 Dec 2019 15:34:01 GMT):
How did you originally bootstrap your Kafka network?
jyellick (Thu, 05 Dec 2019 15:34:28 GMT):
Using `configtxgen`, or using the `provisional` genesis method? (I suspect the latter)
ahmedsajid (Thu, 05 Dec 2019 15:36:08 GMT):
file method
ahmedsajid (Thu, 05 Dec 2019 15:36:31 GMT):
so we generate the channel config using configtxgen and provide it to the orderer.
ahmedsajid (Thu, 05 Dec 2019 15:37:29 GMT):
This was previously a v1.2 network, migrated to v1.4.
ahmedsajid (Thu, 05 Dec 2019 15:37:35 GMT):
not sure if that matters.
jyellick (Thu, 05 Dec 2019 15:43:56 GMT):
In Kafka, the orderers consent on the transaction order, not on the blocks, so it's possible for the orderer system channel to diverge in hash, while not diverging in state.
jyellick (Thu, 05 Dec 2019 15:44:24 GMT):
If you accidentally generated 3 genesis blocks for bootstrap, instead of 1, you could see this behavior.
jyellick (Thu, 05 Dec 2019 15:44:36 GMT):
Fortunately, in all cases, the recovery is fairly straightforward.
ahmedsajid (Thu, 05 Dec 2019 15:44:41 GMT):
So the channel it's complaining about is an Application channel.
jyellick (Thu, 05 Dec 2019 15:44:55 GMT):
An application channel? That is very odd indeed.
ahmedsajid (Thu, 05 Dec 2019 15:45:00 GMT):
right.
jyellick (Thu, 05 Dec 2019 15:45:59 GMT):
If I were you, I would take some time to investigate the root cause of why these hashes have diverged. However, if you are interested in recovery, simply revert to your pre-migration backup, then pick one orderer whose ledger will be authoritative, and replace the other orderers' ledgers with a copy of the first's.
jyellick (Thu, 05 Dec 2019 15:46:53 GMT):
This way, you know the previous block hash is consistent across all of your orderers, and when you exit maintenance mode, you will now be in a consistent place to begin consenting on blocks as Raft does.
ahmedsajid (Thu, 05 Dec 2019 15:50:25 GMT):
I was able to revert to the most recent backup, so the network is still up and running :)
ahmedsajid (Thu, 05 Dec 2019 15:51:26 GMT):
I was looking at the block height (by looking at the last block) across each orderer, which looked alright.
But if I understood correctly, the hash could have been different at some point?
jyellick (Thu, 05 Dec 2019 15:51:34 GMT):
Yes, I expect if you pull the latest block from this application channel, you will see that its `previous_hash` is mismatched across orderers.
ahmedsajid (Thu, 05 Dec 2019 15:52:12 GMT):
So I should expect the hash to be different if I pulled the latest Application channel block from each orderer, with the current network running in Kafka mode?
ahmedsajid (Thu, 05 Dec 2019 15:53:12 GMT):
Thanks @jyellick
ahmedsajid (Thu, 05 Dec 2019 18:45:05 GMT):
@jyellick is just copying the `data` directory from the good orderer enough?
jyellick (Thu, 05 Dec 2019 18:59:07 GMT):
You should copy everything under the fileledger location, wherever you have configured it
jyellick (Thu, 05 Dec 2019 18:59:14 GMT):
These should be blockfiles and indices
ahmedsajid (Thu, 05 Dec 2019 19:01:07 GMT):
Got it. thanks
RahulEth (Fri, 06 Dec 2019 06:43:51 GMT):
Has joined the channel.
RahulEth (Fri, 06 Dec 2019 06:43:53 GMT):
hi all i am facing an error please have a look``
RahulEth (Fri, 06 Dec 2019 06:43:53 GMT):
hi all i am facing an error please have a look```
2019-12-05 10:00:41.232 UTC [orderer.common.broadcast] ProcessMessage -> WARN 959c89 [channel: qlqlchannel] Rejecting broadcast of normal message from 10.55.56.23:39324 with SERVICE_UNAVAILABLE: rejected by Order: aborted
```
RahulEth (Fri, 06 Dec 2019 06:44:46 GMT):
i have raised the same issue on stack overflow:
https://stackoverflow.com/questions/59194979/rejecting-broadcast-of-normal-message-from-10-55-56-2339324-with-service-unavai/59195525?noredirect=1#comment104632671_59195525
tommyjay (Fri, 06 Dec 2019 20:32:17 GMT):
thanks, i've tried that and keep getting this:
```
2019-12-06 15:31:46.375 EST [orderer.common.server] Main -> ERRO 001 failed to parse config: Error reading configuration: Unsupported Config Type ""
```
tommyjay (Fri, 06 Dec 2019 20:32:42 GMT):
i even ran `$ export FABRIC_CFG_PATH=sampleconfig/; export FABRIC_CFG_PATH=$PWD; sudo go run ./orderer/`
Rajatsharma (Sat, 07 Dec 2019 06:53:16 GMT):
I'm currently facing an issue. I was using a network (3 O, 5K, 4Z) and it was working perfectly.
But there was a need, so I had to shift a few orderers to another server situated in another AWS region. After this, the request-processing latency has increased so much that we get a timeout at the orderer. Is there any specific config I can look at to increase the timeout?
yacovm (Sat, 07 Dec 2019 08:24:40 GMT):
@Rajatsharma I tested Raft on AWS when the cluster was deployed in California, London and Tokyo and I still got thousands of transactions per second without any problem...
Rajatsharma (Sat, 07 Dec 2019 08:40:33 GMT):
Thanks!! But it will be very difficult to shift without testing.
So I'll need to deploy this in production using Kafka, and then we were planning on shifting to Raft.
Rajatsharma (Sat, 07 Dec 2019 08:42:11 GMT):
So just for now, I wanted to know if anyone has deployed a network like this without facing this issue; then it's probably a network issue on our side.
Otherwise we'll need to fix this by tuning some parameters.
tommyjay (Mon, 09 Dec 2019 15:42:11 GMT):
any idea why this could happen? if i run the same code twice, the orderer rejects it once and accepts it the next time without me changing the code or the data. the action is to add an org into the MyConsortium group i.e. add them into my system channel
```
error applying config update to existing channel 'testchainid': error authorizing update: error validating DeltaSet: policy for [Group] /Channel/Consortiums/MyConsortium not satisfied: implicit policy evaluation failed - 0 sub-policies were satisfied, but this policy requires 1 of the 'Admins' sub-policies to be satisfied
```
jyellick (Mon, 09 Dec 2019 15:50:55 GMT):
What constitutes 'the same' code? Are you constructing, signing, and submitting the transaction each time?
tommyjay (Mon, 09 Dec 2019 15:53:16 GMT):
i'm signing a config update and the result is a config signature. then i submit the config signature along with the config update in an envelope to the orderer. the signing is done by an org admin identity
jyellick (Mon, 09 Dec 2019 15:53:16 GMT):
You can try turning debugging up in the orderer, particularly:
```FABRIC_LOGGING_SPEC=info:cauthdsl=debug:policy=debug:common.configtx=debug```
to get some more detail as to why exactly it's failing. Is `[Group] /Channel/Consortiums/MyConsortium` the element you're trying to modify?
tommyjay (Mon, 09 Dec 2019 15:57:06 GMT):
thanks i'll try that. yes that's the group i want to update. add a new org into the network and "MyConsortium" is the list of all orgs in my test network
ahmedsajid (Mon, 09 Dec 2019 16:31:20 GMT):
probably need to get multiple signatures from Orgs under MyConsortium depending on Mod policy.
tommyjay (Mon, 09 Dec 2019 21:10:26 GMT):
@jyellick under what circumstances would the orderer evaluate a signature and believe it's not from an org admin? the identity i'm using is the orderer org admin and i'm operating on the system channel. i verified the cert is valid with openssl:
```
$ openssl verify -CAfile /msp/cacerts/cert0.pem /msp/admincerts/cert0.pem
/msp/admincerts/cert0.pem: OK
```
jyellick (Mon, 09 Dec 2019 21:49:03 GMT):
@tommyjay Typically, it's either:
1) The MSPID is incorrect, i.e., it's an admin cert for one org, but your MSPID claims to be from another
2) The certificate is valid, but it's not in the MSP definition as an admin. Prior to v1.4.3 (I think, might have been +/- 0.0.1) admin certs had to be explicitly included in the MSP configuration.
soumyanayak (Wed, 11 Dec 2019 03:23:37 GMT):
Hi All,
Fabric v1.4.3.
I am running a cluster of three orderers, of which two are running without any issues. When the third orderer was started, I got the below *panic* error --
```2019-12-10 13:06:22.537 UTC [orderer.consensus.etcdraft] commitTo -> PANI 020 tocommit(8) is out of range [lastIndex(3)]. Was the raft log corrupted, truncated, or lost? channel=ordererchannel node=3
panic: tocommit(8) is out of range [lastIndex(3)]. Was the raft log corrupted, truncated, or lost?
goroutine 38 [running]:
github.com/hyperledger/fabric/vendor/go.uber.org/zap/zapcore.(*CheckedEntry).Write(0xc0001773f0, 0x0, 0x0, 0x0)
/opt/gopath/src/github.com/hyperledger/fabric/vendor/go.uber.org/zap/zapcore/entry.go:229 +0x515
github.com/hyperledger/fabric/vendor/go.uber.org/zap.(*SugaredLogger).log(0xc00000e0f8, 0x4, 0x105c6a2, 0x5d, 0xc0003a9800, 0x2, 0x2, 0x0, 0x0, 0x0)
/opt/gopath/src/github.com/hyperledger/fabric/vendor/go.uber.org/zap/sugar.go:234 +0xf6
github.com/hyperledger/fabric/vendor/go.uber.org/zap.(*SugaredLogger).Panicf(0xc00000e0f8, 0x105c6a2, 0x5d, 0xc0003a9800, 0x2, 0x2)
/opt/gopath/src/github.com/hyperledger/fabric/vendor/go.uber.org/zap/sugar.go:159 +0x79
github.com/hyperledger/fabric/common/flogging.(*FabricLogger).Panicf(0xc00000e100, 0x105c6a2, 0x5d, 0xc0003a9800, 0x2, 0x2)
/opt/gopath/src/github.com/hyperledger/fabric/common/flogging/zap.go:74 +0x60
github.com/hyperledger/fabric/vendor/go.etcd.io/etcd/raft.(*raftLog).commitTo(0xc00014dab0, 0x8)
/opt/gopath/src/github.com/hyperledger/fabric/vendor/go.etcd.io/etcd/raft/log.go:203 +0x14d
github.com/hyperledger/fabric/vendor/go.etcd.io/etcd/raft.(*raft).handleHeartbeat(0xc0002ac140, 0x8, 0x3, 0x2, 0x5, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
/opt/gopath/src/github.com/hyperledger/fabric/vendor/go.etcd.io/etcd/raft/raft.go:1324 +0x54
github.com/hyperledger/fabric/vendor/go.etcd.io/etcd/raft.stepFollower(0xc0002ac140, 0x8, 0x3, 0x2, 0x5, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
/opt/gopath/src/github.com/hyperledger/fabric/vendor/go.etcd.io/etcd/raft/raft.go:1269 +0x450
github.com/hyperledger/fabric/vendor/go.etcd.io/etcd/raft.(*raft).Step(0xc0002ac140, 0x8, 0x3, 0x2, 0x5, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
/opt/gopath/src/github.com/hyperledger/fabric/vendor/go.etcd.io/etcd/raft/raft.go:971 +0x12db
github.com/hyperledger/fabric/vendor/go.etcd.io/etcd/raft.(*node).run(0xc0002200c0, 0xc0002ac140)
/opt/gopath/src/github.com/hyperledger/fabric/vendor/go.etcd.io/etcd/raft/node.go:357 +0x1101
created by github.com/hyperledger/fabric/vendor/go.etcd.io/etcd/raft.RestartNode
/opt/gopath/src/github.com/hyperledger/fabric/vendor/go.etcd.io/etcd/raft/node.go:246 +0x31b
```
soumyanayak (Wed, 11 Dec 2019 03:24:29 GMT):
Orderer3.log
guoger (Wed, 11 Dec 2019 04:32:00 GMT):
@soumyanayak as the log indicated, i think the wal files are corrupted. what you could do is to remove and re-add the third node
soumyanayak (Wed, 11 Dec 2019 06:17:06 GMT):
Ok @guoger - will do that and update here
RahulHundet (Wed, 11 Dec 2019 06:42:01 GMT):
@guoger we did further analysis and are stuck at this point. Please let me know if you can point out what we are missing here.
RahulHundet (Wed, 11 Dec 2019 06:42:11 GMT):
I am sharing logs here one by one from each hop.
RahulHundet (Wed, 11 Dec 2019 06:43:48 GMT):
The following is sendBroadcast from my client layer
RahulHundet (Wed, 11 Dec 2019 06:43:55 GMT):
{"log":" [fabsdk/fab] 2019/12/11 04:14:40 UTC - txn.sendBroadcast -\u003e DEBU Broadcasting envelope to orderer :orderer0.sample.com:7050\n","stream":"stdout","time":"2019-12-11T04:14:40.391820538Z"}
RahulHundet (Wed, 11 Dec 2019 06:44:08 GMT):
This is at 04:14:40
RahulHundet (Wed, 11 Dec 2019 06:45:32 GMT):
The client received a response at the following time
RahulHundet (Wed, 11 Dec 2019 06:45:34 GMT):
{"log":" [fabsdk/fab] 2019/12/11 04:14:50 UTC - txn.sendBroadcast -\u003e DEBU Receive Success Response from orderer\n","stream":"stdout","time":"2019-12-11T04:14:50.404065873Z"}
RahulHundet (Wed, 11 Dec 2019 06:45:46 GMT):
that is, almost 10s later
RahulHundet (Wed, 11 Dec 2019 06:46:32 GMT):
The handler in orderer started at this time itself
RahulHundet (Wed, 11 Dec 2019 06:46:34 GMT):
{"log":"\u001b[36m2019-12-11 04:14:50.398 UTC [orderer.common.server] Broadcast -\u003e DEBU 30a55c\u001b[0m Starting new Broadcast handler\n","stream":"stderr","time":"2019-12-11T04:14:50.398864541Z"}
RahulHundet (Wed, 11 Dec 2019 06:47:14 GMT):
but I couldn't tell what was happening during these 10 seconds, or where the message was between sendBroadcast on the client and the start of the Broadcast handler on the orderer
soumyanayak (Wed, 11 Dec 2019 10:52:31 GMT):
I deleted the orderer ledger folder and wal folder and restarted the orderer service, but it's still giving the same issue @guoger
soumyanayak (Wed, 11 Dec 2019 15:40:19 GMT):
@guoger - any idea what to do
soumyanayak (Thu, 12 Dec 2019 02:50:28 GMT):
@jyellick @yacovm Any idea about this orderer error?
ahmedsajid (Thu, 12 Dec 2019 02:54:10 GMT):
@jyellick Not sure if it's related to the orderer problem I described a few days ago.
Now I see peer panics with similar errors here https://jira.hyperledger.org/browse/FAB-13470
ahmedsajid (Thu, 12 Dec 2019 02:54:49 GMT):
related to this https://chat.hyperledger.org/channel/fabric-orderer?msg=YCA3y9J7QqqDP5AyL
jyellick (Thu, 12 Dec 2019 03:02:35 GMT):
@ahmedsajid Yes, peers will similarly panic if they encounter a block that has been validly signed by ordering, but does not fit in their hash chain.
guoger (Thu, 12 Dec 2019 03:04:01 GMT):
Not restart, but re-add
guoger (Thu, 12 Dec 2019 03:04:52 GMT):
Once the WAL is corrupted, you need to remove the node from the consenter set and re-add it
soumyanayak (Thu, 12 Dec 2019 03:06:18 GMT):
Do you mean we need to update the channel configuration? Or how do we do the re-addition?
soumyanayak (Thu, 12 Dec 2019 03:08:16 GMT):
or is it like stopping and removing the docker container and then again starting the node?
guoger (Thu, 12 Dec 2019 03:08:55 GMT):
Channel config
guoger (Thu, 12 Dec 2019 03:09:14 GMT):
Just like how you introduce a brand new node into cluster
soumyanayak (Thu, 12 Dec 2019 03:09:21 GMT):
ohk got it
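For later readers, an untested outline of that remove/re-add flow (channel name, orderer address, file names, and jq paths are illustrative; it needs the `peer`, `configtxlator`, and `jq` tools on hand, so treat it as a sketch of the documented Raft reconfiguration steps rather than a drop-in script):

```
# 1. Fetch and decode the current channel config
peer channel fetch config config_block.pb -o orderer0:7050 -c mychannel --tls --cafile $ORDERER_CA
configtxlator proto_decode --input config_block.pb --type common.Block \
  | jq .data.data[0].payload.data.config > config.json

# 2. Copy config.json to modified.json and delete the broken node's entry from
#    .channel_group.groups.Orderer.values.ConsensusType.value.metadata.consenters
#    (and its endpoint from the orderer addresses, as applicable)

# 3. Encode both versions, compute the delta, wrap it in an envelope,
#    then sign and submit it with `peer channel update`
configtxlator proto_encode --input config.json   --type common.Config --output original.pb
configtxlator proto_encode --input modified.json --type common.Config --output modified.pb
configtxlator compute_update --channel_id mychannel \
  --original original.pb --updated modified.pb --output update.pb

# 4. Once the removal commits, re-add the node the same way with fresh certs,
#    wipe its data directory, and start it from the latest config block.
```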
ahmedsajid (Thu, 12 Dec 2019 03:50:25 GMT):
What's the fix in that case?
ahmedsajid (Thu, 12 Dec 2019 03:50:25 GMT):
@jyellick What's the fix in that case?
indirajith (Thu, 12 Dec 2019 14:36:01 GMT):
Hi all. I get the following error when instantiating chaincode. Any help to rectify this problem?
```
Error: could not assemble transaction, err proposal response was not successful, error code 500, msg chaincode registration failed: container exited with 2
```
On orderer it says
```
Error reading from 172.27.0.6:59144: rpc error: code = Canceled desc = context canceled
2019-12-12 14:33:45.472 UTC [comm.grpc.server] 1 -> INFO 168 streaming call completed grpc.service=orderer.AtomicBroadcast grpc.method=Broadcast grpc.peer_address=172.27.0.6:59144 error="rpc error: code = Canceled desc = context canceled" grpc.code=Canceled grpc.call_duration=2.672121844s
```
Any help? Thank you!
guoger (Thu, 12 Dec 2019 14:50:52 GMT):
this is not an orderer problem. your chaincode has unexpectedly exited. Please check logs there (for example peer log and chaincode container log)
indirajith (Thu, 12 Dec 2019 15:12:17 GMT):
Thank you very much. I cannot check the chaincode logs even with CORE_VM_DOCKER_ATTACHSTDOUT. Is there any other method to deduce the problem?
indirajith (Thu, 12 Dec 2019 15:17:01 GMT):
I got the following error:
```
INFO 8135 [twoorgschannel][9a8011c1] Exit chaincode: name:"lscc" (2810ms)
2019-12-12 15:15:44.057 UTC [endorser] SimulateProposal -> ERRO 8136 [twoorgschannel][9a8011c1] failed to invoke chaincode name:"lscc" , error: container exited with 2
github.com/hyperledger/fabric/core/chaincode.(*RuntimeLauncher).Launch.func1
/opt/gopath/src/github.com/hyperledger/fabric/core/chaincode/runtime_launcher.go:63
runtime.goexit
/opt/go/src/runtime/asm_amd64.s:1337
chaincode registration failed
```
indirajith (Thu, 12 Dec 2019 15:26:38 GMT):
```2019-12-12 15:03:29.991 UTC [endorser] callChaincode -> INFO 0a0 [][5c46dee6] Entry chaincode: name:"lscc"
2019-12-12 15:03:30.006 UTC [endorser] callChaincode -> INFO 0a1 [][5c46dee6] Exit chaincode: name:"lscc" (15ms)
2019-12-12 15:03:30.006 UTC [endorser] ProcessProposal -> ERRO 0a2 [][5c46dee6] simulateProposal() resulted in chaincode name:"lscc" response status 500 for txid: 5c46dee6fe1ba5a5b8b0412244d049c99a66a5cdcf3686f1a58cc8f45f228214
2019-12-12 15:03:30.007 UTC [comm.grpc.server] 1 -> INFO 0a3 unary call completed grpc.service=protos.Endorser grpc.method=ProcessProposal grpc.peer_address=192.168.176.103:37144 grpc.code=OK grpc.call_duration=18.153515ms
2019-12-12 15:03:30.010 UTC [grpc] infof -> DEBU 0a4 transport: loopyWriter.run returning. connection error: desc = "transport is closing"
2019-12-12 15:03:30.009 UTC [grpc] warningf -> DEBU 0a5 transport: http2Server.HandleStreams failed to read frame: read tcp 172.23.0.3:7051->192.168.176.103:37144: read: connection reset by peer
2019-12-12 15:03:30.010 UTC [grpc] infof -> DEBU 0a6 transport: loopyWriter.run returning. connection error: desc = "transport is closing"
2019-12-12 15:21:49.561 UTC [endorser] callChaincode -> INFO 0a7 [twoorgschannel][5926b81a] Entry chaincode: name:"lscc"
2019-12-12 15:21:49.722 UTC [chaincode.platform.golang] GenerateDockerBuild -> INFO 0a8 building chaincode with ldflagsOpt: '-ldflags "-linkmode external -extldflags '-static'"'
2019-12-12 15:22:55.954 UTC [endorser] callChaincode -> INFO 0a9 [twoorgschannel][5926b81a] Exit chaincode: name:"lscc" (66393ms)
2019-12-12 15:22:55.954 UTC [endorser] SimulateProposal -> ERRO 0aa [twoorgschannel][5926b81a] failed to invoke chaincode name:"lscc" , error: container exited with 2
github.com/hyperledger/fabric/core/chaincode.(*RuntimeLauncher).Launch.func1
/opt/gopath/src/github.com/hyperledger/fabric/core/chaincode/runtime_launcher.go:63
runtime.goexit
/opt/go/src/runtime/asm_amd64.s:1337
chaincode registration failed
2019-12-12 15:22:55.954 UTC [comm.grpc.server] 1 -> INFO 0ab unary call completed grpc.service=protos.Endorser grpc.method=ProcessProposal grpc.peer_address=192.168.176.103:37182 grpc.code=OK grpc.call_duration=1m6.395494006s
2019-12-12 15:22:55.960 UTC [grpc] infof -> DEBU 0ac transport: loopyWriter.run returning. connection error: desc = "transport is closing"
2019-12-12 15:22:55.960 UTC [grpc] infof -> DEBU 0ad transport: loopyWriter.run returning. connection error: desc = "transport is closing"
```
RahulHundet (Mon, 16 Dec 2019 08:27:21 GMT):
Hi All, any suggestions/recommendations on the issue I posted related to orderer performance?
robmurgai (Mon, 16 Dec 2019 15:41:25 GMT):
Has joined the channel.
robmurgai (Mon, 16 Dec 2019 15:41:30 GMT):
@RahulHundet Could you post your issue again, with context and question, please?
RahulHundet (Mon, 16 Dec 2019 16:00:13 GMT):
@robmurgai The fabric client application submits transaction to orderer. For some of the transactions it is taking more than 10 seconds (most of the transactions completes within 2 seconds). The block creation time is 2 seconds. This performance result is based on one transaction at a time, so the system isn't under heavy load. I have posted the logs on following link, https://pastebin.com/KxtNBt47
RahulHundet (Mon, 16 Dec 2019 16:00:54 GMT):
The system has been tested in the past with better response times (under 500 milliseconds)
Rajatsharma (Tue, 17 Dec 2019 07:20:09 GMT):
Hi Everyone,
I was trying to set up a Kafka cluster using Azure servers. Not all the servers are in the same region. While creating a channel, I'm getting this error in the orderer:
```
2019-12-14 00:26:20.544 UTC [orderer.consensus.kafka] enqueue -> ERRO 013 [channel: testchainid] cannot enqueue envelope because = read tcp 10.0.0.31:38792->10.0.0.20:9092: i/o timeout
2019-12-14 00:26:20.545 UTC [orderer.common.broadcast] ProcessMessage -> WARN 014 [channel: mychannel] Rejecting broadcast of config message from 10.0.0.51:57680 with SERVICE_UNAVAILABLE: rejected by Configure: cannot enqueue
2019-12-14 00:26:20.545 UTC [comm.grpc.server] 1 -> INFO 015 streaming call completed grpc.service=orderer.AtomicBroadcast grpc.method=Broadcast grpc.peer_address=10.0.0.51:57680 grpc.code=OK grpc.call_duration=30.489835644s
2019-12-14 00:26:20.548 UTC [common.deliver] Handle -> WARN 016 Error reading from 10.0.0.51:57678: rpc error: code = Canceled desc = context canceled
2019-12-14 00:26:20.548 UTC [comm.grpc.server] 1 -> INFO 017 streaming call completed grpc.service=orderer.AtomicBroadcast grpc.method=Deliver grpc.peer_address=10.0.0.51:57678 error="rpc error: code = Canceled desc = context canceled" grpc.code=Canceled grpc.call_duration=30.494925071s
```
Rajatsharma (Tue, 17 Dec 2019 07:30:31 GMT):
@jyellick @guoger I found this ticket in sarama https://github.com/Shopify/sarama/issues/1192. Is it possible, this is not handled in fabric's code ?
guoger (Tue, 17 Dec 2019 08:13:32 GMT):
could you turn on verbose via [this](https://github.com/hyperledger/fabric/blob/c25924aaa43abaea861ddc65af77e247426d7339/sampleconfig/orderer.yaml#L222)? so we have a better idea about why it fails
Rajatsharma (Tue, 17 Dec 2019 11:24:11 GMT):
It started working, though we don't know why. But I've set verbose; will let you know when we encounter this next time.
Rajatsharma (Tue, 17 Dec 2019 17:17:52 GMT):
@guoger https://hastebin.com/odopocudem.bash, Now I'm getting this error.
jamohtp (Wed, 18 Dec 2019 07:48:34 GMT):
Has joined the channel.
jamohtp (Wed, 18 Dec 2019 07:48:36 GMT):
G'day all, could I ask for some clarification regarding the etcdraft ordering service?
Specifically, [the docs](https://hyperledger-fabric.readthedocs.io/en/latest/raft_configuration.html#configuration) say that a consenter:
> ...must be referenced in the configuration of each channel it belongs to by adding its ... certificates ... to the channel config.
However the Raft ConfigMetadata are listed in the Orderer section of configtx.yaml, which is only used when generating an orderer genesis block. If consenters are indeed specified separately on each application channel, how is that configured?
toddinpal (Wed, 18 Dec 2019 18:52:35 GMT):
Is there a way to get a list of channels from the ordering service or discovery service?
rahulhegde (Wed, 18 Dec 2019 20:22:12 GMT):
Hello - Question on the Raft certificate rotate procedure,
1. We have a 3-node Raft orderer Fabric setup running v1.4.2. Is it OK to perform Raft certificate renewal with downtime to the system?
2. The documented Raft TLS certificate renewal process renews one node's TLS certificate at a time.
What prevents renewing all 3 Raft nodes' TLS certificates in a single channel configuration update per channel, and then restarting all three Raft nodes?
yacovm (Wed, 18 Dec 2019 23:02:01 GMT):
@rahulhegde
1. Why do you need downtime? Just do it online.
2. Because suppose you have 3 nodes, and you rotate the TLS certificate of everyone of them by a single transaction.
If the block is somehow only committed to 2 out of 3 nodes, then the third node now cannot communicate with the other 2 because it expects the old certificates
yacovm (Wed, 18 Dec 2019 23:02:27 GMT):
but these 2 nodes have to use either the old certificates - but then they can't communicate with one another
yacovm (Wed, 18 Dec 2019 23:02:42 GMT):
or they need to use the new certificates but then they can't communicate with the third node
yacovm (Wed, 18 Dec 2019 23:02:56 GMT):
any way you look at it - you will be in trouble
AbhijeetSamanta (Thu, 19 Dec 2019 05:12:49 GMT):
Has joined the channel.
rahulhegde (Thu, 19 Dec 2019 20:58:09 GMT):
@yacovm Is this the only reason? Do you see a cause why the third Raft node would not commit the block, apart from resource constraints (which can happen even if we rotate certificates one Raft node at a time)?
I did try, and it looks like there is code protection in Fabric v1.4.2 to prevent it.
iramiller (Thu, 19 Dec 2019 22:31:20 GMT):
Has anyone else seen data loss/corruption in an orderer node when an unexpected termination occurs? Running 1.4.3 ...
iramiller (Thu, 19 Dec 2019 22:32:31 GMT):
(also Raft ... in this case the orderer that went down was the leader...)
yacovm (Fri, 20 Dec 2019 00:19:48 GMT):
@iramiller - if it's only one node you can always just make it replicate all the blocks by "cleaning" its ledger
yacovm (Fri, 20 Dec 2019 00:20:26 GMT):
I think @guoger is working on a future change set that will make it recover on its own
yacovm (Fri, 20 Dec 2019 00:21:31 GMT):
@rahulhegde - it may not get the block due to network communication error or because when you commit a block that changes certificate, the Raft communication will close the communication with certificates that are no longer in the new config
iramiller (Fri, 20 Dec 2019 00:29:48 GMT):
@yacovm ... yeah ... we replicated things back and brought it up to date... it is just concerning that through some sort of strange failure/restart it managed to wedge itself...
toddinpal (Mon, 23 Dec 2019 19:22:21 GMT):
@yacovm Is there a way to get a list of channels from the ordering service or discovery service?
yacovm (Mon, 23 Dec 2019 22:34:33 GMT):
@toddinpal no, by design.
yacovm (Mon, 23 Dec 2019 22:34:56 GMT):
only if you have permission to access the system channel can you do it
yacovm (Mon, 23 Dec 2019 22:35:11 GMT):
but just having permission to access application channels is not enough
yacovm (Mon, 23 Dec 2019 22:35:33 GMT):
this is so that you, as a simple client, won't know who is doing business with whom
Rajatsharma (Wed, 25 Dec 2019 03:49:57 GMT):
@guoger @jyellick I have set up a Hyperledger Fabric network on AWS across two regions (Mumbai and Canada):
```
Mumbai         Canada
Peer/Couch-0   Peer/Couch-1
Orderer-0      Orderer-1
kafka-0        kafka-4
kafka-1        kafka-5
kafka-2        Zookeeper-3
kafka-4        Zookeeper-4
Zookeeper-0
Zookeeper-1
Zookeeper-2
```
The orderer and Kafka-ZooKeeper ensemble is running with these settings:
Number of replicas: 6
Number of in-sync replicas: 2
I have installed a few channels and chaincodes on them; all channel creation and chaincode instantiation has been successful.
I can also see creation and sync of the topic across the Kafka set for most channels, but for a few we are getting the error below -
https://justpaste.me/jxe7
Rajatsharma (Wed, 25 Dec 2019 03:50:23 GMT):
This is really important, could you help me debug this.
Rajatsharma (Wed, 25 Dec 2019 03:53:17 GMT):
I'm getting different errors with Azure and AWS. But I need to set up a multi-host network.
Rajatsharma (Wed, 25 Dec 2019 03:55:17 GMT):
Earlier I was getting this error https://justpaste.me/jxkJ on Azure servers, with almost the same configuration.
gravity (Wed, 25 Dec 2019 15:07:18 GMT):
Hello.
Fabric 1.4.0, Java SDK 1.4.0.
I'm getting this error message from an orderer when I try to update channel's configuration:
```
ERRO 3f6 [channel: channel11563528397444] cannot enqueue envelope because = kafka server: Message was too large, server rejected it to avoid allocation error.
WARN 3f7 [channel: channel11563528397444] Rejecting broadcast of config message from 10.130.101.84:47478 with SERVICE_UNAVAILABLE: rejected by Configure: cannot enqueue
```
Looks like this issue is connected to the message size (the updated channel config), but we must send an entire config to update the channel. Or is it possible to send just some delta rather than the entire config?
gravity (Wed, 25 Dec 2019 15:07:50 GMT):
or is there any way to increase message size limit?
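For what it's worth, the Kafka-based ordering documentation ties these limits together: the brokers must accept messages at least as large as the orderer's `AbsoluteMaxBytes`, so raising the limit means touching both sides. An illustrative fragment (values are examples, not recommendations):

```
# configtx.yaml, Orderer section
BatchSize:
    AbsoluteMaxBytes: 10 MB

# server.properties on every Kafka broker: both must exceed
# AbsoluteMaxBytes, with some headroom for envelope overhead
message.max.bytes=11000000
replica.fetch.max.bytes=11000000
```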
root5533 (Thu, 26 Dec 2019 04:33:15 GMT):
Has joined the channel.
root5533 (Thu, 26 Dec 2019 04:37:53 GMT):
Hello everyone,
I'm doing a small research on Hyperledger Fabric orderer and I have a requirement to develop a new orderer of cluster type. I would like to know a small implementation detail regarding the cluster orderer.
Looking at Raft, we create 5 orderers in the sample project. I would like to know how and when the node submitting a transaction knows which orderer to send the payload to. It should be sent to the Raft leader, but does the client node store this data, or is it handled by some other service? Appreciate any help regarding this. Thanks.
konda.kalyan (Thu, 26 Dec 2019 06:00:33 GMT):
Has joined the channel.
knagware9 (Thu, 26 Dec 2019 07:40:45 GMT):
This is handled by the client SDK; as a developer you don't need to specify which orderer to send transactions to
yacovm (Thu, 26 Dec 2019 08:45:14 GMT):
@root5533 - any follower just forwards the transaction to the leader
RahulEth (Thu, 26 Dec 2019 13:19:36 GMT):
Has anyone faced this issue?
```
2019-12-26 11:31:24.271 UTC [orderer.consensus.etcdraft] logSendFailure -> ERRO 1257 Failed to send StepRequest to 7, because: aborted channel=chpreferences node=2
2019-12-26 11:31:24.273 UTC [orderer.consensus.etcdraft] logSendFailure -> ERRO 1258 Failed to send StepRequest to 7, because: aborted channel=byfn-sys-channel node=2
2019-12-26 11:31:25.172 UTC [orderer.consensus.etcdraft] Step -> INFO 1259 2 [logterm: 153, index: 152, vote: 2] ignored MsgPreVote from 7 [logterm: 153, index: 152] at term 153: lease is not expired (remaining ticks: 6) channel=byfn-sys-channel node=2
2019-12-26 11:31:25.226 UTC [orderer.consensus.etcdraft] Step -> INFO 125a 2 [logterm: 153, index: 152, vote: 2] ignored MsgPreVote from 5 [logterm: 153, index: 152] at term 153: lease is not expired (remaining ticks: 6) channel=byfn-sys-channel node=2
2019-12-26 11:31:25.771 UTC [orderer.consensus.etcdraft] logSendFailure -> ERRO 125b Failed to send StepRequest to 5, because: aborted channel=chpreferences node=2
2019-12-26 11:31:25.771 UTC [orderer.consensus.etcdraft] logSendFailure -> ERRO 125c Failed to send StepRequest to 6, because: aborted channel=chpreferences node=2
2019-12-26 11:31:25.773 UTC [orderer.consensus.etcdraft] logSendFailure -> ERRO 125d Failed to send StepRequest to 5, because: aborted channel=byfn-sys-channel node=2
2019-12-26 11:31:25.773 UTC [orderer.consensus.etcdraft] logSendFailure -> ERRO 125e Failed to send StepRequest to 6, because: aborted channel=byfn-sys-channel node=2```
RahulEth (Thu, 26 Dec 2019 13:19:50 GMT):
any help would be highly appreciated
RahulEth (Fri, 27 Dec 2019 07:54:11 GMT):
any Raft expert here? @Rajatsharma
Rajatsharma (Sat, 28 Dec 2019 09:05:13 GMT):
No, I've not worked on raft yet.
Rajatsharma (Sat, 28 Dec 2019 09:08:12 GMT):
@guoger I've found out there are some inherent communication problems in Docker Swarm, due to which we're encountering this issue.
We've found many articles about the same. Can you help me figure out how I should approach this?
https://codeblog.dotsandbrackets.com/multi-host-docker-network-without-swarm/
https://devopscube.com/open-source-service-discovery/
https://www.reddit.com/r/docker/comments/7lk701/multiregion_swarm_rollout_considerations/
https://forums.docker.com/t/docker-swarm-on-multi-regions/28661
RahulEth (Sat, 28 Dec 2019 09:16:58 GMT):
No problem. Is there somebody here who could help?
root5533 (Sat, 28 Dec 2019 14:03:51 GMT):
Thank you for the reply @yacovm @knagware9 . So if there are multiple orderers, will the peer send the payload randomly to one of these orderers, or send it to all of them?
guoger (Mon, 30 Dec 2019 03:21:51 GMT):
was there some communication problem between nodes?
RahulEth (Tue, 31 Dec 2019 12:11:38 GMT):
hi @guoger, I am doing bulk transactions on chpreferences from 7 different clients simultaneously. From client 2 (using orderer2), some of my transactions are not getting committed on the peer, nor am I getting any error message on the client end.
RahulEth (Tue, 31 Dec 2019 12:16:43 GMT):
There should be some error message on the client end for the randomly missing transactions that don't get committed on the peer.
RahulEth (Tue, 31 Dec 2019 12:18:55 GMT):
The only error I am getting on the peer is ```Failed to send StepRequest to 5, because: aborted channel=chpreferences node=2```
knagware9 (Fri, 03 Jan 2020 06:13:38 GMT):
One orderer
rahulhegde (Fri, 03 Jan 2020 16:56:58 GMT):
Can you please clarify the second reason, 'when you commit a block that changes certificate, the Raft communication will close the communication with certificates that are no longer in the new config'?
Which one is correct?
[1] Considering the network connectivity has no issue, all the orderers would process the configuration block eventually and move to the same connection-lost state on a per-channel basis. However, upon changing the on-disk Raft certificates before restarting the Raft nodes, we should achieve quorum for every channel.
OR
[2] Did you mean that once the committed block is processed by a Raft node, there is a further acknowledgement sent by/with that Raft node that lost its connection, thus impacting block acceptance on the channel?
yacovm (Fri, 03 Jan 2020 17:06:39 GMT):
I meant something else entirely... I meant that if you have 3 nodes - A, B, C - and now you change the certificate of node C to D, then the Raft nodes will immediately stop communicating with C until it restarts and changes its certificate to D
Flyyellow (Tue, 07 Jan 2020 16:46:13 GMT):
Has joined the channel.
guptasndp10 (Wed, 08 Jan 2020 10:57:55 GMT):
Hello everyone, I am running an HLF network with 5 Raft-based orderers. Now I want to add a new orderer to the running network. I followed all the steps, which include updating the system channel with the new consenter's certs and address section, and after that I ran the new orderer with the latest system channel config block, which I fetched from one of the existing orderers after doing the update.
But now when I am trying to fetch the latest config block from the newly spun-up orderer, I am getting the error `readBlock -> INFO 047 Got status: &{SERVICE_UNAVAILABLE}` in the CLI environment.
When I checked the new orderer's logs, I found `[common.deliver] deliverBlocks -> WARN bd8f [channel: mib] Rejecting deliver request for 10.64.37.220:43232 because of consenter error` and also some logs saying
`confirmSuspicion -> INFO ece0 Last config block was found to be block [9] channel=mib node=6
2020-01-08 10:54:26.183 UTC [orderer.consensus.etcdraft] confirmSuspicion -> INFO ece1 Our height is higher or equal than the height of the orderer we pulled the last block from, aborting. channel=mib node=6`
guoger (Wed, 08 Jan 2020 12:10:05 GMT):
could you post full log of new orderer somewhere and link here?
guptasndp10 (Wed, 08 Jan 2020 12:20:44 GMT):
https://pastebin.com/kk1sPTkD
guptasndp10 (Wed, 08 Jan 2020 12:26:32 GMT):
Sorry, the log size was too much, so I'm pasting as much of the logs as I can, which includes the consenter error
https://pastebin.com/9J8pdBB0
guoger (Wed, 08 Jan 2020 14:29:49 GMT):
I looked at the log and couldn't find the `confirmSuspicion` log message you mentioned
guptasndp10 (Thu, 09 Jan 2020 08:07:34 GMT):
Yes, that confirmSuspicion message went away after we ran the orderer node again with the latest genesis block, but this time we removed the entry "IPAddress orderer6-mib" from /etc/hosts. My question is why it tries to connect to its own node and gets the warning message "Failed to connect to orderer6-mib". I am not getting this message in any of the previously spun-up orderer nodes, and they are running fine. Only this one, orderer6-mib, which I spun up as the new orderer, is throwing this warning message, and when I tried to fetch the latest config of the system channel, it fails saying "Failed deliver due to consenter error".
Retonator (Thu, 09 Jan 2020 10:13:05 GMT):
Has joined the channel.
Retonator (Thu, 09 Jan 2020 10:13:05 GMT):
Just a small question. When creating a new channel, I send a config transaction which already contains an (initial) AnchorPeers value, but it seems that the channel gets created with only an MSP value in the Org group. Can anyone confirm that, and what is the reasoning behind this behaviour? This way I always have to send a version update with the AnchorPeers value. (I am on 1.4.2)
kelvinzhong (Tue, 14 Jan 2020 03:08:51 GMT):
@guoger hi, it seems that a config update block cannot be queried by its transaction ID, and the SDK API can only retrieve the newest channel config. How can I get the full config update history from the ledger?
jyellick (Tue, 14 Jan 2020 03:13:04 GMT):
Why do you need the history? Each config block contains the totality of the channel config. There is no need to incrementally build it. If you want history, you can pull the block before the latest config block, and look at its last-config field in the metadata. Pull the next-to-last config block with this data, and repeat the process. You will have to pull 2*N blocks, assuming there have been N config updates, but this should not be too bad.
kelvinzhong (Tue, 14 Jan 2020 03:20:32 GMT):
I'm trying to build a blockchain explorer, where every transaction can be queried by its txId, and I found that the txid generated by a config update cannot be queried. That seems weird; I supposed that the whole ledger could be queried and verified
kelvinzhong (Tue, 14 Jan 2020 03:26:31 GMT):
"If you want history, you can pull the block before the latest config block, and look at its last-config field in the metadata. Pull the next-to-last config block with this data, and repeat the process. "
the process is confusing to me, could you please describe it in more detail?
guoger (Tue, 14 Jan 2020 04:01:19 GMT):
- each block has a field `lastconfigblock uint64` that stores the block number of last config block
- this field in a config block *points to itself*
therefore, we have following chain ( square brackets `[]` denote normal block, angle brackets `<>` denote config block, the number in it denotes `lastconfigblock`, number before it denotes block number )
```
0<0> - 1[0] - 2[0] - 3<3> - 4[3] - 5[3] - 6[3] - 7<7> - 8[7]
```
guoger (Tue, 14 Jan 2020 04:06:48 GMT):
to get all the config blocks, you look at the last block, 8, learn that the last config block is 7, retrieve 7, then the block before it, 6, learn that the last config block is 3, and so on and so forth @kelvinzhong
kelvinzhong (Tue, 14 Jan 2020 04:51:13 GMT):
got it, many thx!
AbhijeetSamanta (Tue, 14 Jan 2020 05:50:24 GMT):
Hi All, I am trying to set up HLF on AWS EKS, however I am getting the error "unexpected status: SERVICE_UNAVAILABLE — backing Kafka cluster has not completed booting; try again later". I tried to find more detail in the logs, however they give the same error. Please find attached the orderer logs
AbhijeetSamanta (Tue, 14 Jan 2020 05:50:43 GMT):
orderer logs
AbhijeetSamanta (Tue, 14 Jan 2020 05:51:14 GMT):
Could anybody help me to find out the issue with orderer?
kelvinzhong (Tue, 14 Jan 2020 06:35:29 GMT):
Clipboard - January 14, 2020 2:34 PM
kelvinzhong (Tue, 14 Jan 2020 06:35:34 GMT):
Clipboard - January 14, 2020 2:35 PM
kelvinzhong (Tue, 14 Jan 2020 06:36:29 GMT):
@guoger I can see how to retrieve all the config blocks now, but I still don't understand why the tx_id of the config block is unset
jyellick (Tue, 14 Jan 2020 17:24:12 GMT):
@kelvinzhong There is a long JIRA discussion around this: https://jira.hyperledger.org/browse/FAB-15411
ahmedsajid (Tue, 14 Jan 2020 17:24:58 GMT):
Hi, is it possible to use an external etcd cluster instead of the embedded orderer one?
jyellick (Tue, 14 Jan 2020 20:07:28 GMT):
No
jyellick (Tue, 14 Jan 2020 20:08:01 GMT):
The orderer is not using etcd, it is using the etcdraft consensus library. These are easily confused, but distinct.
ahmedsajid (Tue, 14 Jan 2020 20:08:15 GMT):
Thanks.
kelvinzhong (Wed, 15 Jan 2020 03:59:05 GMT):
thx for the reply. I went through the discussion and it seems the final conclusion is still no tx_id in the config block, but I found a subtask which says the orderer computes a tx_id for the config block: https://jira.hyperledger.org/browse/FAB-15655 . Does that mean the tx_id of the config block will be set in v2.0 and won't be set in v1.4.x?
guoger (Wed, 15 Jan 2020 07:00:24 GMT):
i believe that subtask is to compute the txid to be stored in the *index db*. The block itself still comes with no txid
tommyjay (Wed, 15 Jan 2020 18:28:14 GMT):
any idea how to resolve this:
```
2020-01-15 18:20:22.658 UTC [orderer.common.broadcast] ProcessMessage -> WARN 1021 [channel: mychannel] Rejecting broadcast of config message from 192.168.224.5:49500 because of error: error applying config update to existing channel 'mychannel': initializing channelconfig failed: could not create channel Application sub-group config: setting up the MSP manager failed: expected at least one CA certificate
```
jyellick (Wed, 15 Jan 2020 18:29:16 GMT):
The error message is fairly explicit? You are attempting to modify or add an MSP to your channel, but the new MSP does not contain any CA certs.
tommyjay (Wed, 15 Jan 2020 18:31:42 GMT):
it's an existing MSP under the Application group. Trying to modify the config and this is the existing "values.MSP" section in the channel's config:
```javascript
"MSP": {
"mod_policy": "Admins",
"value": {
"config": {
"admins": [
"LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUNqekNDQWphZ0F3SUJBZ0lVV2lhdXk1K2dDRjdmRXMya0FYRkx6eEFRTDJ3d0NnWUlLb1pJemowRUF3SXcKZnpFTE1Ba0dBMVVFQmhNQ1ZWTXhFekFSQmdOVkJBZ1RDa05oYkdsbWIzSnVhV0V4RmpBVUJnTlZCQWNURFZOaApiaUJHY21GdVkybHpZMjh4SHpBZEJnTlZCQW9URmtsdWRHVnlibVYwSUZkcFpHZGxkSE1zSUVsdVl5NHhEREFLCkJnTlZCQXNUQTFkWFZ6RVVNQklHQTFVRUF4TUxaWGhoYlhCc1pTNWpiMjB3SGhjTk1qQXdNVEUxTVRZMU5EQXcKV2hjTk1qRXdNVEUwTVRZMU9UQXdXakF5TVIwd0N3WURWUVFMRXdSMWMyVnlNQTRHQTFVRUN4TUhiM0puTVcxegpjREVSTUE4R0ExVUVBeE1JYjNKblFXUnRhVzR3V1RBVEJnY3Foa2pPUFFJQkJnZ3Foa2pPUFFNQkJ3TkNBQVNhCmdoaXQrK2hyYXJPQkFKMlZhWkFXMklSTk1qTkpCaS95SklIa01aMHBEcTZuSnBaazVkakVXdCtWam1kcGhJN1AKZ1g4ajBMcFJsWTVPM3MwTXdHb05vNEhjTUlIWk1BNEdBMVVkRHdFQi93UUVBd0lIZ0RBTUJnTlZIUk1CQWY4RQpBakFBTUIwR0ExVWREZ1FXQkJRZ3VGbUFiS3pKaW1lZHpFc1NvOXZ3MWFJK05qQWZCZ05WSFNNRUdEQVdnQlFYClowSTlxcDZDUDhURkhaOWJ3NW5SdFp4SUVEQVhCZ05WSFJFRUVEQU9nZ3c1WmpJMlpqZzNPV1EyWW1Nd1lBWUkKS2dNRUJRWUhDQUVFVkhzaVlYUjBjbk1pT25zaWFHWXVRV1ptYVd4cFlYUnBiMjRpT2lKdmNtY3hiWE53SWl3aQphR1l1Ulc1eWIyeHNiV1Z1ZEVsRUlqb2liM0puUVdSdGFXNGlMQ0pvWmk1VWVYQmxJam9pZFhObGNpSjlmVEFLCkJnZ3Foa2pPUFFRREFnTkhBREJFQWlCbk5lYVN2Z050N1NZOVJQSXpFdm5iSFNpZG8ybjFVRXQrQTI1UHJhNGUKR2dJZ1Y2QXdHMXp3YnpTR040MnVjdWdsa2c0azh0OTZZcHc3RDJWNHVtQUd3ZUU9Ci0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K"
],
"crypto_config": {
"identity_identifier_hash_function": "SHA256",
"signature_hash_family": "SHA2"
},
"fabric_node_ous": null,
"intermediate_certs": [],
"name": "org1msp",
"organizational_unit_identifiers": [],
"revocation_list": [],
"root_certs": [
"LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUNZakNDQWdtZ0F3SUJBZ0lVQjNDVERPVTQ3c1VDNUs0a24vQ2FxbmgxMTRZd0NnWUlLb1pJemowRUF3SXcKZnpFTE1Ba0dBMVVFQmhNQ1ZWTXhFekFSQmdOVkJBZ1RDa05oYkdsbWIzSnVhV0V4RmpBVUJnTlZCQWNURFZOaApiaUJHY21GdVkybHpZMjh4SHpBZEJnTlZCQW9URmtsdWRHVnlibVYwSUZkcFpHZGxkSE1zSUVsdVl5NHhEREFLCkJnTlZCQXNUQTFkWFZ6RVVNQklHQTFVRUF4TUxaWGhoYlhCc1pTNWpiMjB3SGhjTk1UWXhNREV5TVRrek1UQXcKV2hjTk1qRXhNREV4TVRrek1UQXdXakIvTVFzd0NRWURWUVFHRXdKVlV6RVRNQkVHQTFVRUNCTUtRMkZzYVdadgpjbTVwWVRFV01CUUdBMVVFQnhNTlUyRnVJRVp5WVc1amFYTmpiekVmTUIwR0ExVUVDaE1XU1c1MFpYSnVaWFFnClYybGtaMlYwY3l3Z1NXNWpMakVNTUFvR0ExVUVDeE1EVjFkWE1SUXdFZ1lEVlFRREV3dGxlR0Z0Y0d4bExtTnYKYlRCWk1CTUdCeXFHU000OUFnRUdDQ3FHU000OUF3RUhBMElBQktJSDViMkphU21xaVFYSHlxQytjbWtuSUNjRgppNUFkZFZqc1FpekRWNnVaNHY2cytQV2lKeXpmQS9yVHRNdllBUHEveWVFSHBCVUIxajA1M214bnBNdWpZekJoCk1BNEdBMVVkRHdFQi93UUVBd0lCQmpBUEJnTlZIUk1CQWY4RUJUQURBUUgvTUIwR0ExVWREZ1FXQkJRWFowSTkKcXA2Q1A4VEZIWjlidzVuUnRaeElFREFmQmdOVkhTTUVHREFXZ0JRWFowSTlxcDZDUDhURkhaOWJ3NW5SdFp4SQpFREFLQmdncWhrak9QUVFEQWdOSEFEQkVBaUFIcDVSYnA5RW0xRy9VbUtuOFdzQ2JxRGZXZWNWYlpQUWozUks0Cm9HNWtRUUlnUUFlNE9PS1loSmRoM2Y3VVJhS2ZHVGY0OTIvbm1SbXRLK3lTS2pwSFNyVT0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo="
],
"signing_identity": null,
"tls_intermediate_certs": [],
"tls_root_certs": []
},
"type": 0
},
"version": "0"
}
```
tommyjay (Wed, 15 Jan 2020 18:32:04 GMT):
where would "ca certs" go?
jyellick (Wed, 15 Jan 2020 18:32:36 GMT):
`root_certs`, which in what you pasted is not empty
tommyjay (Wed, 15 Jan 2020 18:33:02 GMT):
that's why i'm confused
jyellick (Wed, 15 Jan 2020 18:33:55 GMT):
Are you certain this is the only MSP value you are modifying?
tommyjay (Wed, 15 Jan 2020 18:34:30 GMT):
yeah, the path is `/channel_group/groups/Application/groups/{{org1msp}}/values`
jyellick (Wed, 15 Jan 2020 18:35:58 GMT):
What about the MSP definition are you attempting to modify?
tommyjay (Wed, 15 Jan 2020 18:39:17 GMT):
i'm actually trying to add anchor peers. so i want my values tree to have "AnchorPeers" and "MSP" as children nodes
jyellick (Wed, 15 Jan 2020 18:40:58 GMT):
If you are trying to add anchor peers, you should not be modifying the MSP definition
jyellick (Wed, 15 Jan 2020 18:41:25 GMT):
Are you using `configtxlator` or attempting to create your own update manually?
tommyjay (Wed, 15 Jan 2020 18:41:45 GMT):
manually. i'm not touching anything with MSP actually. not sure why that came up
jyellick (Wed, 15 Jan 2020 18:43:20 GMT):
Then you are not computing your update correctly. The MSP value should be at the same version in the read-set and write-set, and should have no actual values set in either place.
jyellick (Wed, 15 Jan 2020 18:44:13 GMT):
https://github.com/hyperledger/fabric/blob/8aab447be5350778fd037afa91723e33b50a233d/internal/configtxlator/update/update.go#L217
jyellick (Wed, 15 Jan 2020 18:44:42 GMT):
You might find looking at this logic/file interesting. It will correctly compute an update based on the difference between an original and modified config.
jyellick (Wed, 15 Jan 2020 18:51:17 GMT):
You could also simply use `configtxlator` once to see what the 'correct' update looks like for your modification, and see how the one you are computing differs.
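The read-set/write-set rule described above can be illustrated with a simplified Go model. This is *not* the exact configtxlator algorithm from update.go, just the shape of it: unchanged values (like MSP here) stay at the same version with no value in either set, while added values (like AnchorPeers) land in the write-set only.

```go
package main

import "fmt"

// val is a simplified stand-in for a config value: a version counter
// plus its payload.
type val struct {
	Version int
	Value   string
}

// computeUpdate derives a read-set and write-set from an original and
// a modified config (simplified illustration):
//   - unchanged keys appear in both sets at the same version, value stripped
//   - changed keys appear in the write-set with the version bumped
//   - newly added keys appear in the write-set only, at version 0
func computeUpdate(original, modified map[string]val) (readSet, writeSet map[string]val) {
	readSet = map[string]val{}
	writeSet = map[string]val{}
	for k, orig := range original {
		mod, ok := modified[k]
		switch {
		case ok && mod.Value == orig.Value:
			// untouched, e.g. the MSP definition
			readSet[k] = val{Version: orig.Version}
			writeSet[k] = val{Version: orig.Version}
		case ok:
			readSet[k] = val{Version: orig.Version}
			writeSet[k] = val{Version: orig.Version + 1, Value: mod.Value}
		}
	}
	for k, mod := range modified {
		if _, ok := original[k]; !ok {
			// newly added, e.g. AnchorPeers
			writeSet[k] = val{Version: 0, Value: mod.Value}
		}
	}
	return readSet, writeSet
}

func main() {
	original := map[string]val{"MSP": {Version: 0, Value: "msp-config"}}
	modified := map[string]val{
		"MSP":         {Version: 0, Value: "msp-config"},
		"AnchorPeers": {Version: 0, Value: "peer0.org1:7051"},
	}
	rs, ws := computeUpdate(original, modified)
	fmt.Println(rs["MSP"], ws["MSP"], ws["AnchorPeers"])
}
```

The bug tommyjay hit corresponds to putting the untouched MSP value, with content, into the write-set, which the orderer then treats as an attempted MSP modification.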
tommyjay (Wed, 15 Jan 2020 21:35:01 GMT):
problem was my writeset thanks
AbhijeetSamanta (Thu, 16 Jan 2020 07:39:26 GMT):
Hi All, I am trying to implement the Raft orderer system in my HLF network on a k8s cluster, but when I try to create the channel I get an error like `"Failed pulling the last config block: retry attempts exhausted channel=testchainid node=1"` on all 3 orderers. Can anybody help me figure out what the issue is and how to resolve it?
AbhijeetSamanta (Thu, 16 Jan 2020 07:40:13 GMT):
orderer issue
kelvinzhong (Thu, 16 Jan 2020 08:30:10 GMT):
thx for the reply, so the sdk can query the config block by txid in a higher version. I will give it a try to see if 1.4.2 supports this feature
kelvinzhong (Thu, 16 Jan 2020 09:06:45 GMT):
I have tried image 1.4.4 and still could not get the config block by txid, so only the 2.0 version supports this feature then
AbhijeetSamanta (Thu, 16 Jan 2020 13:08:02 GMT):
has anybody faced the same issue?
guoger (Thu, 16 Jan 2020 14:28:21 GMT):
do you observe any connection issue? something like "failed to send xxx to xxx due to some error"
tommyjay (Thu, 16 Jan 2020 15:03:41 GMT):
i'm curious why my anchor peers were empty when creating my new channel, even with a recently added org that has anchor peers in the configtx.yaml file
jyellick (Thu, 16 Jan 2020 16:17:06 GMT):
By default when creating channels, org level elements are ignored, since a single admin creates the channel. Allowing other orgs to modify your anchor peers would be undesirable.
AbhijeetSamanta (Thu, 16 Jan 2020 16:44:58 GMT):
yes, I think this issue is due to DNS host resolution?
guoger (Fri, 17 Jan 2020 02:33:48 GMT):
the root cause then is that the cluster cannot be successfully formed. The exhausted retry attempts are just one of its side effects.
guoger (Fri, 17 Jan 2020 02:34:07 GMT):
you need to check that you have proper network setup so nodes can reach each other
AbhijeetSamanta (Tue, 21 Jan 2020 07:26:07 GMT):
ok thanks for the valuable advice
AbhijeetSamanta (Tue, 21 Jan 2020 07:27:13 GMT):
I want to know which architecture is suitable for HLF on k8s
iramiller (Tue, 21 Jan 2020 18:50:54 GMT):
We have run both Kafka and Raft in K8s ... I much prefer the Raft approach personally. For new clusters in K8s I would recommend Raft.
dineshthemacho1 (Wed, 22 Jan 2020 06:05:13 GMT):
Has joined the channel.
tommyjay (Wed, 22 Jan 2020 17:17:38 GMT):
is there a way to create a channel without using configtxgen @jyellick
jyellick (Wed, 22 Jan 2020 17:19:51 GMT):
`configtxgen` creates a channel creation transaction. There are no other tools which create one, though you could of course create your own, the structure is not terribly complicated. It is just a config update, which assumes a certain structure already exists as the template config.
tommyjay (Wed, 22 Jan 2020 17:21:42 GMT):
yeah, looking at this https://sourcegraph.com/github.com/hyperledger/fabric@v1.4.4/-/blob/common/tools/configtxgen/encoder/encoder.go#L530:6
dsanchezseco (Thu, 23 Jan 2020 08:27:35 GMT):
Hi all! I've seen that the PBFT issue (FAB-33) has been marked as `Won't do`. Has the development of PBFT been abandoned??
yacovm (Thu, 23 Jan 2020 11:22:59 GMT):
You can take a look at https://github.com/SmartBFT-Go/
yacovm (Thu, 23 Jan 2020 11:23:55 GMT):
keep in mind, it doesn't have dynamic reconfiguration yet
yacovm (Thu, 23 Jan 2020 11:30:22 GMT):
We'll implement it in the upcoming 1-2 months though
jyellick (Thu, 23 Jan 2020 14:19:52 GMT):
No, we are just doing an overall JIRA cleanup, closing issues which have not had activity in quite a while. BFT is still in the roadmap
braduf (Thu, 23 Jan 2020 15:12:47 GMT):
Hi all, if in the orderer configurations General.TLS.ClientRootCAs is not set, will the same value as General.TLS.RootCAs be used automatically?
Thanks in advance!
yacovm (Thu, 23 Jan 2020 15:13:57 GMT):
@braduf yeah
yacovm (Thu, 23 Jan 2020 15:14:14 GMT):
but let me check the code
yacovm (Thu, 23 Jan 2020 15:16:11 GMT):
@braduf so it only uses them if mutual TLS is on
yacovm (Thu, 23 Jan 2020 15:16:20 GMT):
and if they are not defined, it doesn't use the RootCAs
braduf (Thu, 23 Jan 2020 15:28:30 GMT):
Ok, thank you, to see if I understand it correctly: when mutual TLS is on (clientAuthRequired=true), then I do always have to specify ClientRootCAs? Because it doesn't use the RootCAs?
yacovm (Thu, 23 Jan 2020 15:35:51 GMT):
Yes
braduf (Thu, 23 Jan 2020 15:36:14 GMT):
Good to know, thank you!
lzaouche (Tue, 28 Jan 2020 09:17:44 GMT):
Has joined the channel.
sanket1211 (Tue, 28 Jan 2020 09:57:53 GMT):
[orderer.consensus.etcdraft] logSendFailure -> ERRO 001 Failed to send StepRequest to 4, because: rpc error: code = Unavailable desc = all SubConns are in TransientFailure, latest connection error: connection error: desc = "transport: Error while dialing dial tcp: lookup orderer4.example.com on 127.0.0.11:53: no such host" channel=testchainid node=2
knagware9 (Tue, 28 Jan 2020 10:56:30 GMT):
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=3nM9Gpk2jCbHqo6Xr)
Clipboard - January 28, 2020 4:26 PM
knagware9 (Tue, 28 Jan 2020 10:57:40 GMT):
@yacovm Could you please help
BrettLogan (Tue, 28 Jan 2020 14:08:41 GMT):
You're running testInvoke.sh from outside of a docker container, so none of the hostnames will resolve. Two possible solutions: add the hostnames to your /etc/hosts file, or run the commands from a container you've put on the same docker network as the rest of the containers `docker network ls`
Antimttr (Thu, 30 Jan 2020 23:24:46 GMT):
Has joined the channel.
mbanerjee (Mon, 03 Feb 2020 22:15:35 GMT):
Has any one migrated to fabric 2.0 from 1.4? Any issues seen with the migration? thanks
BrettLogan (Mon, 03 Feb 2020 22:45:29 GMT):
Prior to all of our releases we validate the upgrade process manually. This involves a team of testers actually running through the upgrade documentation line by line and validating that the instructions are correct. However, after the release we did find a bug: if you're going to upgrade, you need to do an offline upgrade; a live upgrade will fail at the moment. We are working on a fix and expect to deliver that patch soon.
narendranathreddy (Sat, 08 Feb 2020 08:29:25 GMT):
Hello all, I am proud to announce that my book `Mastering Hyperledger Fabric` is now available for pre-order https://amzn.to/2UI38ok
yacovm (Sat, 08 Feb 2020 11:20:35 GMT):
> and written with three years of hyperledger fabric production experience.
Well, but Fabric 1.0 was released in the second half of 2017, so it hasn't even existed for 3 years.
mauricio (Sat, 08 Feb 2020 17:54:14 GMT):
https://chat.hyperledger.org/channel/fabric-kubernetes?msg=kRSDGCP3bLjMQXCDE
What do you think about it @narendranathreddy ?
Antimttr (Sat, 08 Feb 2020 22:11:49 GMT):
braduf: couldn't agree more
mauricio (Sat, 08 Feb 2020 22:14:53 GMT):
@braduf
BranimirMalesevic (Mon, 10 Feb 2020 10:21:06 GMT):
Hello everyone!
I've set up a Fabric network on a Kubernetes cluster and it works fine. When I want to fetch a block from outside the cluster, by its IP address, it gives me *tls: bad certificate*. I suppose it's due to the TLS certificates being issued for the local pod names (blockchain-orderer1). I've tried the *--ordererTLSHostnameOverride* parameter but the error still persists. Anyone had the same issue?
braduf (Mon, 10 Feb 2020 14:05:10 GMT):
You agree that Kubernetes is not the right tool for Hyperledger Fabric?
BranimirMalesevic (Mon, 10 Feb 2020 15:24:25 GMT):
I think it's irrelevant whether it's bare-metal, docker-compose or K8s, as long as it does the job, which in my case solved 1001 headaches :smile:
We also had it running in docker-compose with a million other errors, but for a production-grade system with HA and easy scale-up, K8s is really a good option (no overhype ofc)
narendranathreddy (Tue, 11 Feb 2020 14:46:28 GMT):
@mauricio Forget about Hyperledger Fabric for a moment: Kubernetes is a container orchestrator, and if you deal with containers at a large scale you do not want the headache of orchestrating them yourself. Suppose there are 50 organizations in a consortium; Kubernetes, Docker Swarm, or some other orchestrator will make sure all replicas are available. This is just for high availability
LWIH (Tue, 11 Feb 2020 15:08:32 GMT):
Has joined the channel.
yacovm (Tue, 11 Feb 2020 15:10:36 GMT):
50 organizations and all in the same K8S cluster?
narendranathreddy (Tue, 11 Feb 2020 15:11:21 GMT):
depends on the use case
yacovm (Tue, 11 Feb 2020 15:11:35 GMT):
it sure does :)
narendranathreddy (Tue, 11 Feb 2020 15:12:21 GMT):
we are dealing with 40 organizations with 40 different kubernetes clusters
yacovm (Tue, 11 Feb 2020 15:12:41 GMT):
so how does k8s help you if you run, say - 2 peers in each org?
narendranathreddy (Tue, 11 Feb 2020 15:12:43 GMT):
if not kubernetes then it will be a headache with lots of pitfalls
yacovm (Tue, 11 Feb 2020 15:13:09 GMT):
wait you're saying that you manage all of the 40 clusters???
narendranathreddy (Tue, 11 Feb 2020 15:13:22 GMT):
you can use a k8s operator with a replica set of 2 peers per organization
narendranathreddy (Tue, 11 Feb 2020 15:13:43 GMT):
the operator ensures that, at any cost, each organization has 2 peers with 1 replicaset
xhens (Wed, 12 Feb 2020 09:56:39 GMT):
Has joined the channel.
guoger (Tue, 18 Feb 2020 01:47:28 GMT):
but the way k8s restarts a pod is not that different from restarting a failed process, as HA was done traditionally, right? IMHO you shouldn't be introducing k8s *just for* Fabric, which seems to be an overkill. Of course if a company is already relying on k8s, fabric is just an extra app deployed.
gravity (Tue, 18 Feb 2020 15:06:10 GMT):
Hello
I have a question about grpc options for orderers: does adding `grpc.NettyChannelBuilderOption.maxInboundMessageSize` to orderer properties make any sense?
I've added this property with a custom value (100 MB) for peers (also changed the properties for Kafka itself), but I'm still getting this error:
`Reason: RESOURCE_EXHAUSTED: gRPC message exceeds maximum size 4194304: 4252712`
Fabric v1.4.4
thanks in advance
jyellick (Wed, 19 Feb 2020 04:33:51 GMT):
The orderer already sets the maximum message size to 100 MB (as does the peer). That maximum size of 4MB is the default for most clients though, which can be increased. The maximum message size is the lower of the two configurations for server and client.
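To illustrate the point above, a tiny Go sketch of why the error still fires even with a 100 MB server-side limit (the message size 4252712 comes from the error in the log; `effectiveLimit` is simply the minimum of the two sides):

```go
package main

import "fmt"

// The effective gRPC message limit on a call is the lower of the
// server-side and client-side configured maximums, so raising it on
// the orderer/peer alone is not enough: the client (SDK) must raise
// its own max inbound message size too.
func effectiveLimit(serverMax, clientMax int) int {
	if clientMax < serverMax {
		return clientMax
	}
	return serverMax
}

func main() {
	const (
		fabricServerMax = 100 * 1024 * 1024 // 100 MB, orderer/peer side
		grpcClientMax   = 4 * 1024 * 1024   // 4194304, typical client default
		observedMsgSize = 4252712           // size from the error message above
	)
	limit := effectiveLimit(fabricServerMax, grpcClientMax)
	// the 4 MB client default is what rejects the 4252712-byte message
	fmt.Println(limit, observedMsgSize > limit)
}
```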
carlosalca (Wed, 19 Feb 2020 11:31:44 GMT):
Has joined the channel.
BranimirMalesevic (Wed, 19 Feb 2020 13:02:21 GMT):
Hello!
How can I test if the orderer service is up & running?
Peers have the *peer* CLI command, but for orderers it's not the same.
BrettLogan (Wed, 19 Feb 2020 13:10:51 GMT):
Both the peer and the Orderer have a setting to enable their operations server. The server, when enabled, by default provides a /healthz endpoint, you can then simply curl the endpoint and you will receive a json object back that tells you if all services related to the peer or Orderer are running and healthy
BrettLogan (Wed, 19 Feb 2020 13:11:34 GMT):
https://hyperledger-fabric.readthedocs.io/en/release-1.4/operations_service.html#peer
carlosalca (Wed, 19 Feb 2020 13:14:03 GMT):
Hi everyone! I have a network deployed with 3 raft orderers. After a few weeks the orderers decided to go anarchist! Therefore, now there is no leader in my network. In the peer's log only this appears:
2020-02-19 13:09:33.970 UTC [blocksProvider] DeliverBlocks -> WARN 2e10c [channel] Got error &{SERVICE_UNAVAILABLE}
And in the orderer there is a warning saying: Rejecting deliver request for IP:PORT because of consenter error
Also, if I try to invoke a chaincode, I get the following message:
SERVICE_UNAVAILABLE -- no Raft leader
It seems there is no Raft leader and I don't know what I can do to fix it. Does anybody know how to fix this?
Thanks!
BrettLogan (Wed, 19 Feb 2020 13:19:01 GMT):
For Raft, 5 nodes is an HA deployment, not 3 nodes; this is due to how they handle leader election. I'm trying to find our doc that explains why
carlosalca (Wed, 19 Feb 2020 13:26:05 GMT):
Thanks! I'm looking forward to reading that doc, I'm really interested in that part of the network configuration!
BranimirMalesevic (Wed, 19 Feb 2020 13:50:40 GMT):
How should orderer restarts be handled? When I deploy the network and restart it, the orderers get out of sync and I get *SERVICE_UNAVAILABLE: rejected by Orderer: no Raft leader*
BrettLogan (Wed, 19 Feb 2020 13:55:51 GMT):
How many raft nodes do you have
carlosalca (Wed, 19 Feb 2020 14:00:40 GMT):
Same problem here, I have 3 Raft orderers and they don't have any leader
BranimirMalesevic (Wed, 19 Feb 2020 14:04:15 GMT):
Also 3
BranimirMalesevic (Wed, 19 Feb 2020 14:04:32 GMT):
When I restart them one by one, they can't find a leader and every call gets rejected
barney2k7 (Wed, 19 Feb 2020 14:22:41 GMT):
When restarting, are you preserving the msp and tls dirs? And /var/hyperledger/production ?
carlosalca (Wed, 19 Feb 2020 14:24:19 GMT):
Yes, all the crypto material is preserved and also the ledger data
BranimirMalesevic (Wed, 19 Feb 2020 14:24:39 GMT):
What is /var/hyperledger/production used for?
barney2k7 (Wed, 19 Feb 2020 14:25:09 GMT):
That's where the orderer stores its state
BranimirMalesevic (Wed, 19 Feb 2020 14:26:09 GMT):
I didn't mount that volume at all... Will try to see if it works
carlosalca (Wed, 19 Feb 2020 14:26:18 GMT):
I changed this: ORDERER_FILELEDGER_LOCATION: /shared/ledger/orderer1
barney2k7 (Wed, 19 Feb 2020 14:28:13 GMT):
I'd recommend to start from https://github.com/hyperledger/fabric-samples/tree/release-1.4/first-network
barney2k7 (Wed, 19 Feb 2020 14:28:36 GMT):
There you see all the relevant volumes for peers/orderers
barney2k7 (Wed, 19 Feb 2020 14:29:16 GMT):
Plus, a working set of ENV variables
BranimirMalesevic (Wed, 19 Feb 2020 14:30:05 GMT):
But I've changed the path of ORDERER_FILELEDGER_LOCATION and the ledger data is stored persistently
BranimirMalesevic (Wed, 19 Feb 2020 14:30:20 GMT):
And mounted on the local drive
barney2k7 (Wed, 19 Feb 2020 14:31:40 GMT):
That should do the trick, haven't done it that way, though
BranimirMalesevic (Wed, 19 Feb 2020 14:31:52 GMT):
But again, it fails
barney2k7 (Wed, 19 Feb 2020 14:32:29 GMT):
Anyway, orderer logs after the restart should tell you what's wrong. You should see a leader election
barney2k7 (Wed, 19 Feb 2020 15:06:28 GMT):
ORDERER_FILELEDGER_LOCATION is not the only config referring to a directory
barney2k7 (Wed, 19 Feb 2020 15:06:44 GMT):
waldir and snapdir are too
barney2k7 (Wed, 19 Feb 2020 15:06:50 GMT):
I guess you're missing those
barney2k7 (Wed, 19 Feb 2020 15:07:04 GMT):
since they by default are also under /var/hyperledger/production
carlosalca (Wed, 19 Feb 2020 15:07:26 GMT):
Oh yeah, I was missing those. I realized that before, but I've already lost them :(
BranimirMalesevic (Wed, 19 Feb 2020 15:08:07 GMT):
Working :thumbsup:
BranimirMalesevic (Wed, 19 Feb 2020 15:08:09 GMT):
https://github.com/hyperledger/fabric/blob/master/orderer/common/server/etcdraft_test.go
BranimirMalesevic (Wed, 19 Feb 2020 15:08:29 GMT):
ORDERER_CONSENSUS_WALDIR & ORDERER_CONSENSUS_SNAPDIR need to be persistent
BranimirMalesevic (Wed, 19 Feb 2020 15:08:36 GMT):
Full recovery is possible :D
carlosalca (Wed, 19 Feb 2020 15:08:49 GMT):
Ole!
carlosalca (Wed, 19 Feb 2020 15:20:02 GMT):
Sorry, but how can this help with this error? I don't get it (the github link)
BranimirMalesevic (Wed, 19 Feb 2020 15:20:32 GMT):
I can back up the wal and snapshot files that help recreate the whole orderer state
BranimirMalesevic (Wed, 19 Feb 2020 15:20:54 GMT):
The github link just lists all the environment variables down there (sry for not explaining)
barney2k7 (Wed, 19 Feb 2020 15:21:23 GMT):
Or, keep it simple: just make sure /var/hyperledger/production is on a persistent volume, instead of fiddling with all those ENVs
BranimirMalesevic (Wed, 19 Feb 2020 15:21:43 GMT):
Or that
BranimirMalesevic (Wed, 19 Feb 2020 15:22:05 GMT):
Easier to do
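Putting the persistence advice above into one place, a docker-compose sketch (service and volume names are illustrative; the explicit env vars are shown for clarity, and since they all live under /var/hyperledger/production by default, mounting that single path is enough):

```yaml
orderer1.example.com:
  image: hyperledger/fabric-orderer:1.4.4
  environment:
    # file ledger, Raft WAL and snapshots all under one persistent root
    - ORDERER_FILELEDGER_LOCATION=/var/hyperledger/production/orderer
    - ORDERER_CONSENSUS_WALDIR=/var/hyperledger/production/orderer/etcdraft/wal
    - ORDERER_CONSENSUS_SNAPDIR=/var/hyperledger/production/orderer/etcdraft/snapshot
  volumes:
    - orderer1-data:/var/hyperledger/production
```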
gravity (Wed, 19 Feb 2020 21:23:29 GMT):
so in this case I should set the 100 MB property for orderers and peers on the client (SDK) side, right?
Javi (Wed, 26 Feb 2020 17:12:31 GMT):
hi all! I have a raft cluster, the orderers have the /var/hyperledger/production/orderer volume mapped, and I want to test HA in the cluster. I removed orderer 1 and nothing works....
Javi (Wed, 26 Feb 2020 17:12:37 GMT):
cluster size is 5 orderers
Javi (Wed, 26 Feb 2020 17:13:11 GMT):
no new leader election, what am I missing?
Javi (Wed, 26 Feb 2020 17:13:55 GMT):
using version 1.4.4
Javi (Wed, 26 Feb 2020 17:43:51 GMT):
from the other orderer nodes I see this entry in the log (when I shut down the first orderer node of the cluster): ` [core.comm] ServerHandshake -> ERRO 057 TLS handshake failed with error remote error: tls: bad certificate server=Orderer remoteaddress=127.0.0.1:44408`. Also I'm mapping `ORDERER_CONSENSUS_WALDIR`, `ORDERER_CONSENSUS_SNAPDIR` and `ORDERER_FILELEDGER_LOCATION` to a persistent volume....
barney2k7 (Thu, 27 Feb 2020 06:57:41 GMT):
From what I've observed, I assume a new leader election is only initiated by a raft node if it loses the connection to the leader, or if it loses the connections to a majority of the raft nodes (in your case, with a total of 5, that would be 3).
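The arithmetic behind this is standard Raft majority math; a quick Go sketch of quorum size and tolerated failures for common cluster sizes:

```go
package main

import "fmt"

// quorum is the majority needed for a Raft cluster of n nodes to
// elect a leader and commit entries.
func quorum(n int) int { return n/2 + 1 }

// tolerated is how many node failures the cluster survives while
// still retaining a quorum.
func tolerated(n int) int { return n - quorum(n) }

func main() {
	for _, n := range []int{1, 3, 5, 7} {
		fmt.Printf("n=%d quorum=%d tolerated-failures=%d\n", n, quorum(n), tolerated(n))
	}
}
```

So a 3-node cluster needs 2 nodes up, and a 5-node cluster needs 3 up, which matches the "majority of 5 would be 3" observation above.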
Javi (Thu, 27 Feb 2020 09:25:26 GMT):
but this behaviour only occurs when I kill the first orderer, if I kill the orderer2,3,4, or 5 then it works...
barney2k7 (Thu, 27 Feb 2020 09:48:24 GMT):
Is the config of orderer1 in configtx.yaml somehow wrong? Wrong certificate? Just guessing...
Javi (Thu, 27 Feb 2020 09:55:42 GMT):
no its ok, for example, when I send anchor peer channel update, I use the orderer.example.com:7050 orderer
barney2k7 (Thu, 27 Feb 2020 09:58:29 GMT):
...or do you have a port clash of some kind? Like any other container listening on your orderer1's port as soon as it's gone? That would explain the handshake failure
barney2k7 (Thu, 27 Feb 2020 09:58:35 GMT):
(again, just guessing...)
Javi (Thu, 27 Feb 2020 10:10:29 GMT):
it is possible, in my configtx.yaml I'm using the container ports, for orderers, using 7050
Javi (Thu, 27 Feb 2020 12:38:30 GMT):
I changed the ports of all orderers and now it works
Antimttr (Fri, 28 Feb 2020 16:53:46 GMT):
is there a way to output the orderer's current channel policies?
Antimttr (Fri, 28 Feb 2020 16:54:07 GMT):
i dont have any explicit policies set in my configtx.yaml file when i created the genesis block
Antimttr (Fri, 28 Feb 2020 16:54:11 GMT):
so i assume its using some defaults?
Antimttr (Fri, 28 Feb 2020 17:13:29 GMT):
i wonder, when i created the system channel genesis block i used the name mychannel
Antimttr (Fri, 28 Feb 2020 17:13:38 GMT):
but thats the name of the consortium channel im attempting to create
Antimttr (Fri, 28 Feb 2020 17:13:42 GMT):
perhaps they need to be different?
Antimttr (Fri, 28 Feb 2020 17:18:02 GMT):
this is from my orderer log when it starts: ```
General.GenesisMethod = "file"
General.GenesisProfile = "SampleInsecureSolo"
General.SystemChannel = "test-system-channel-name"
General.GenesisFile = "/etc/hyperledger/configtx/genesis.block"
```
Antimttr (Fri, 28 Feb 2020 17:18:18 GMT):
so i guess system channel isnt called mychannel
Antimttr (Fri, 28 Feb 2020 17:18:56 GMT):
but then later in the log i get this: ```
2020-02-27 21:39:44.954 UTC [orderer.common.multichannel] Initialize -> INFO 006 Starting system channel 'mychannel' with genesis block hash a1f437b4b62feaaae470565c05c47c00a737acd9b58c5673fc618ba543e7218d and orderer type solo
```
Antimttr (Fri, 28 Feb 2020 17:19:05 GMT):
so that makes me think it is named mychannel!
Antimttr (Fri, 28 Feb 2020 17:19:06 GMT):
lol
tommyjay (Fri, 28 Feb 2020 21:44:17 GMT):
what are all possible values for `grpcOptions`?
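For reference, a connection-profile fragment showing commonly used `grpcOptions` keys (names and values illustrative; the exact set depends on the SDK version, and options the SDK does not recognize are generally passed straight through as gRPC channel arguments):

```yaml
# Illustrative connection-profile fragment with common grpcOptions keys.
peers:
  peer0.org1.example.com:
    url: grpcs://localhost:7051
    grpcOptions:
      ssl-target-name-override: peer0.org1.example.com
      grpc.keepalive_time_ms: 120000
      grpc.max_receive_message_length: 104857600   # 100 MB
      grpc.max_send_message_length: 104857600
```

Beyond these, the full list is effectively the gRPC channel-argument list for the underlying gRPC implementation.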
venzi (Mon, 02 Mar 2020 15:31:13 GMT):
Hello guys,
How should the CRL be encoded when adding it to the config block? I am getting this error, when checking the orderer logs:
```
2020-03-02 15:26:31.923 UTC [orderer.common.broadcast] ProcessMessage -> WARN 11e [channel: mainchannel] Rejecting broadcast of config message from 54.146.14.23:44810 because of error: error applying config update to existing channel 'mainchannel': initializing channelconfig failed: could not create channel Application sub-group config: setting up the MSP manager failed: could not parse RevocationList: asn1: structure error: tags don't match (16 vs {class:1 tag:12 length:83 isCompound:false}) {optional:false explicit:false application:false private:false defaultValue:
```
aatkddny (Thu, 05 Mar 2020 15:07:07 GMT):
Not sure if this is orderer or peer.
Is there anywhere that the size of the GRPC message is limited inside the communication between these two? And if so is there an override?
Have a problem with my raft cluster. We have messages that are fairly large and they've started throwing this.
`Caused by: io.grpc.StatusRuntimeException: RESOURCE_EXHAUSTED: gRPC message exceeds maximum size 4194304: 4464408`
The max message size in the channel config is 100M. Which is bigger than this...
We originally thought it was the SDK (java) since we changed from HFClient to gateway, but the override there *is* being correctly propagated through to netty, so now it's looking like it's inside the cluster. So my question is: is there something inside either the peer or the orderer that also needs to be overridden?
BrettLogan (Thu, 05 Mar 2020 15:29:09 GMT):
This is a known issue in the Java SDK https://jira.hyperledger.org/browse/FABCJ-187
BrettLogan (Thu, 05 Mar 2020 15:31:37 GMT):
Other chaincodes do not suffer from this problem
aatkddny (Thu, 05 Mar 2020 15:33:37 GMT):
That's good because our chaincode is all written in go. We are using the SDK to connect our app to the fabric
BrettLogan (Thu, 05 Mar 2020 15:36:19 GMT):
I'll assume the SDK has the same shortcoming
aatkddny (Thu, 05 Mar 2020 15:37:02 GMT):
why are you assuming that?
BrettLogan (Thu, 05 Mar 2020 15:37:14 GMT):
https://stackoverflow.com/questions/60522516/hyperledger-fabric-2-0-grpc-message-exceeds-maximum-size-4194304-5947481
BrettLogan (Thu, 05 Mar 2020 15:37:23 GMT):
Gari answers how to override it here
aatkddny (Thu, 05 Mar 2020 15:38:05 GMT):
we've been using that override for a very long time. as I already said it's going through to the netty instantiated by the sdk. this otoh we haven't seen before.
yacovm (Thu, 05 Mar 2020 17:51:49 GMT):
@aatkddny there is no way to change the orderer gRPC message size
aatkddny (Thu, 05 Mar 2020 17:54:16 GMT):
@yacovm what's the max? this is likely something to do with our move to fabric-gateway, but the request is reaching the peer, so it's a bit confusing.
yacovm (Thu, 05 Mar 2020 17:54:56 GMT):
Clipboard - March 5, 2020 7:54 PM
aatkddny (Thu, 05 Mar 2020 17:55:57 GMT):
that's what i thought. so it has to be something to do with netty and the move to fabric-gateway.
yacovm (Thu, 05 Mar 2020 17:57:02 GMT):
how is this related to netty?
yacovm (Thu, 05 Mar 2020 17:57:19 GMT):
I'm saying that Raft orderers can't send each other messages bigger than 100MB
aatkddny (Thu, 05 Mar 2020 17:58:31 GMT):
we connect through netty. eventually. the java sdk uses it to handle the heavy lifting for grpc. the default length in netty is about 4M. there's an override - that we are setting - that allows one to make it bigger. but for some reason it's failing. the confusion lies because it's failing after it gets to the peer.
yacovm (Thu, 05 Mar 2020 18:06:27 GMT):
ah
Antimttr (Fri, 06 Mar 2020 19:48:35 GMT):
When you have, let's say, 7 Raft orderers in an ordering org, and one of them goes down so now you have 6, will Raft handle having a non-odd number of nodes gracefully?
Antimttr (Fri, 06 Mar 2020 19:48:52 GMT):
or do you have to take additional action to take another orderer out of the group?
jyellick (Fri, 06 Mar 2020 20:40:42 GMT):
In a 7 Raft orderer setup, you may lose up to 3 orderers (1, 2, or 3) with no problems. You should definitely not take any nodes down deliberately; a configured even number of nodes is not recommended, but the more online nodes, the better.
Antimttr (Fri, 06 Mar 2020 20:44:12 GMT):
ahh ok so I thought you always wanted an odd number of orderers to make sure there will always be a tie breaker
jyellick (Fri, 06 Mar 2020 20:44:37 GMT):
No, if you have configured 7 nodes, you must always have 4 agree to make progress.
jyellick (Fri, 06 Mar 2020 20:44:48 GMT):
So long as at least 4 are online, they can agree.
Antimttr (Fri, 06 Mar 2020 20:45:05 GMT):
so for 8 you'd always want atleast 5
jyellick (Fri, 06 Mar 2020 20:45:26 GMT):
Yes. And for 9 you'd need at least 5 as well.
jyellick (Fri, 06 Mar 2020 20:45:36 GMT):
That is why we recommend against even numbers of configured nodes.
jyellick (Fri, 06 Mar 2020 20:45:43 GMT):
It requires an additional node, with no additional fault tolerance.
jyellick (Fri, 06 Mar 2020 20:45:57 GMT):
With 8 nodes, you may have 3 crash. 7 nodes, you may have 3 crash.
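The arithmetic above can be checked with a few lines. This is just the standard Raft majority rule, not Fabric-specific code:

```python
# Raft quorum math: a cluster of n nodes makes progress while a
# majority of the configured nodes are online.
def quorum(n):
    # smallest majority of n configured nodes
    return n // 2 + 1

def crash_tolerance(n):
    # how many nodes may fail while a quorum still survives
    return n - quorum(n)

for n in (7, 8, 9):
    # 7 -> quorum 4, tolerates 3; 8 -> quorum 5, tolerates 3; 9 -> quorum 5, tolerates 4
    print(n, quorum(n), crash_tolerance(n))
```

Note that 8 configured nodes tolerate no more crashes than 7, which is why an even cluster size buys nothing.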
Antimttr (Fri, 06 Mar 2020 20:47:33 GMT):
above you said a configured odd number of nodes is not recommended, but here you said a configured even number of nodes is not recommended
jyellick (Fri, 06 Mar 2020 20:47:50 GMT):
Ah, a typo, let me fix
Antimttr (Fri, 06 Mar 2020 20:47:54 GMT):
ahh ok
braduf (Sat, 07 Mar 2020 21:11:53 GMT):
Ok, found it. https://jira.hyperledger.org/browse/FAB-16722
I understand that the provisional method and creating the genesis block shouldn't be a function of the orderer itself, but I think at least different orgs should be able to create the exact same genesis block from the same configtx.yaml file, instead of needing only one org to create the genesis block. In consortiums it always generates problems if one org can or should do something and others can't or shouldn't...
What are the opinions here?
I am thinking about adding a channel creation timestamp as a configurable field in the configtx.yaml, and when it is present, the configtxgen should not take the current time, but the time in the configtx.yaml to create the block. Like this all orgs can generate the genesis block themselves when a release of the configtx.yaml file is done in the shared repo of the consortium...
braduf (Sat, 07 Mar 2020 21:24:38 GMT):
Hi all, I have noticed that the `provisional` bootstrap method got removed, and I understand that creating the block shouldn't be a task of the orderer and that it was just for development purposes. But I think depending on the configtx.yaml instead of on an already created block could have some advantages too, or at least take away the dependency on, or the need to choose, one single org to create the genesis block.
I think different orgs should be able to create the exact same genesis block from the same configtx.yaml file.
In consortiums it always generates problems if one org can or should do something and others can't or shouldn't...
What are the opinions here?
I am thinking about adding a channel creation timestamp as a configurable field in the configtx.yaml, and when it is present, the configtxgen should not take the current time, but the time in the configtx.yaml to create the block. Like this all orgs can generate the genesis block themselves when a release of the configtx.yaml file is done in the shared repo of the consortium...
obelix (Sun, 08 Mar 2020 14:53:32 GMT):
Has joined the channel.
jyellick (Mon, 09 Mar 2020 13:55:27 GMT):
@braduf It's not that simple. Fabric depends on protobuf serialization for encoding many structures, including blocks, and channel configuration. Protobuf serialization is not guaranteed to be deterministic, and, in the case of channel configuration because of the heavy use of maps, it usually isn't.
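A stdlib analogy of the point jyellick is making, with JSON standing in for protobuf (this is illustrative, not Fabric code): two serializations of the same map content can differ byte-for-byte, so any hash-based comparison of independently generated blocks breaks.

```python
import hashlib
import json

# Two semantically identical "config" maps, serialized with different
# key order -- an analogy for protobuf map serialization, which is not
# guaranteed to be deterministic across encoders.
config = {"Org1MSP": {"mod_policy": "Admins"}, "OrdererMSP": {"mod_policy": "Admins"}}

a = json.dumps(config)                  # insertion order
b = json.dumps(config, sort_keys=True)  # sorted order

print(json.loads(a) == json.loads(b))   # True: same content
print(hashlib.sha256(a.encode()).hexdigest() ==
      hashlib.sha256(b.encode()).hexdigest())  # False: different bytes
```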
braduf (Tue, 10 Mar 2020 13:53:36 GMT):
Ok, I was hoping it was simple, but I didn't think about those things. Thank you!
AbhijeetSamanta (Tue, 10 Mar 2020 20:16:36 GMT):
I have set up Hyperledger Fabric on an AWS EC2 instance, but I am facing a memory issue while upgrading the chaincode, so I am thinking of upgrading the EC2 instance to t2.medium. I want all the blocks in the ledger to be recovered in the new setup. How can I do it? Please suggest.
BrettLogan (Tue, 10 Mar 2020 20:18:15 GMT):
You can change the size of a VM in AWS without destroying it, just shut down and in the UI change the size
AbhijeetSamanta (Tue, 10 Mar 2020 20:20:21 GMT):
I think if I do it then it will destroy the peer and orderer docker containers which are running on the EC2 instance
BrettLogan (Tue, 10 Mar 2020 20:21:16 GMT):
Did you set the `--rm` flag when you created the containers? Otherwise, why would they be removed when they were stopped?
BrettLogan (Tue, 10 Mar 2020 20:21:45 GMT):
And did you use docker volumes to back up the data?
AbhijeetSamanta (Tue, 10 Mar 2020 20:22:34 GMT):
No, I didn't use docker volumes to back up the data
AbhijeetSamanta (Tue, 10 Mar 2020 20:23:06 GMT):
actually I did it for a dev env
AbhijeetSamanta (Tue, 10 Mar 2020 20:24:20 GMT):
is there any way to back up while the blockchain is running?
BrettLogan (Tue, 10 Mar 2020 20:25:14 GMT):
So the "simple" way is to launch a new peer on the new VM, add it to the network, and let it replicate the blocks to the new VM
AbhijeetSamanta (Tue, 10 Mar 2020 20:26:07 GMT):
Yeah, I also think the same, however it has lots of steps, so I was thinking of another way
AbhijeetSamanta (Tue, 10 Mar 2020 20:26:43 GMT):
if there is no other way then I think to do it
Antimttr (Wed, 11 Mar 2020 15:52:34 GMT):
Goodmorning
Antimttr (Wed, 11 Mar 2020 15:53:05 GMT):
so after adding some of the default policies found in first-network to my configtx.yaml and completely rebooting my network, I'm unable to join peers to channels
Antimttr (Wed, 11 Mar 2020 15:53:25 GMT):
from my orderer im getting some logs like this: `2020-03-10 22:32:58.225 UTC [cauthdsl] func2 -> DEBU 6e91 0xc000baa5c0 identity 0 does not satisfy principal: the identity is a member of a different MSP (expected OrdererMSP, got Org1MSP)`
Antimttr (Wed, 11 Mar 2020 15:53:56 GMT):
so im not really sure why its expecting the OrdererMSP when my peer is in Org1MSP
Antimttr (Wed, 11 Mar 2020 15:54:55 GMT):
here's my policies from my configtx.yaml: ```
Channel: &ChannelDefaults
# Policies defines the set of policies at this level of the config tree
# For Channel policies, their canonical path is
# /Channel/
```
Antimttr (Wed, 11 Mar 2020 15:55:43 GMT):
And the policies defined in my ordererorg: ```
Policies:
Readers:
Type: Signature
Rule: "OR('OrdererMSP.member')"
Writers:
Type: Signature
Rule: "OR('OrdererMSP.member')"
Admins:
Type: Signature
Rule: "OR('OrdererMSP.admin')"
```
Abhishekkishor (Wed, 11 Mar 2020 15:59:18 GMT):
Has joined the channel.
jyellick (Thu, 12 Mar 2020 11:21:37 GMT):
This error is usually benign, policy evaluation will check each org's policy for the action, and some will necessarily fail, so long as one succeeds, things are fine. If you are able to successfully create the channel, then your orderer logs are not the place to look. If I were to guess, you regenerated your crypto material but did not re-deploy your peers.
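A toy sketch of the evaluation jyellick describes (illustrative names, not Fabric's actual code): each org's principal is checked in turn, mismatches are logged like the DEBUG line above, and one success is enough for an OR policy.

```python
# Toy sketch of OR signature-policy evaluation. Names are illustrative.
def satisfies(identity_msp, principal_msp):
    if identity_msp != principal_msp:
        # the kind of benign message that shows up in the orderer log
        print("identity does not satisfy principal: "
              "expected %s, got %s" % (principal_msp, identity_msp))
        return False
    return True

def evaluate_or_policy(identity_msp, principal_msps):
    # OR semantics: a single satisfied principal makes the policy pass,
    # no matter how many mismatches were logged along the way
    return any(satisfies(identity_msp, p) for p in principal_msps)

print(evaluate_or_policy("Org1MSP", ["OrdererMSP", "Org1MSP"]))  # True
```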
Antimttr (Thu, 12 Mar 2020 15:19:28 GMT):
hey jyellick, thanks for the response
Antimttr (Thu, 12 Mar 2020 15:20:02 GMT):
so i did not regenerate any crypto material
Antimttr (Thu, 12 Mar 2020 15:20:30 GMT):
the system was working ok as far as the channel went but i was using an old style configtx.yaml that omitted all the policy sections
Antimttr (Thu, 12 Mar 2020 15:21:02 GMT):
so when i put the policy sections into the configtx.yaml and deleted all my nodes then restarted them, I started getting all these errors when i rejoined my peers from org1 to the channel
Antimttr (Thu, 12 Mar 2020 15:21:47 GMT):
so i was doing some research and determined that i had no NodeOUs specified, but the policies I put in did use the NodeOUs
Antimttr (Thu, 12 Mar 2020 15:21:53 GMT):
so im thinking this might be the issue
Antimttr (Thu, 12 Mar 2020 15:22:31 GMT):
so my plan is to regenerate crypto, this time with NodeOUs specified for the identities (there's really only one identity, it's the org admin)
Antimttr (Thu, 12 Mar 2020 15:23:02 GMT):
but im thinking i might not have correctly formulated my changes in the configtx.yaml when regenerating the genesis block and channel.tx
Antimttr (Thu, 12 Mar 2020 15:23:14 GMT):
I was wondering if maybe you could take a quick look at it and see if it looks OK to you?
jyellick (Thu, 12 Mar 2020 16:08:12 GMT):
There are sample configtx.yaml in our documentation that I suggest you start from https://hyperledger-fabric.readthedocs.io/en/release-2.0/test_network.html
Antimttr (Thu, 12 Mar 2020 16:17:14 GMT):
im using 1.4
Antimttr (Thu, 12 Mar 2020 16:17:55 GMT):
and my configtx.yaml changes were from first-network fabric-samples
Antimttr (Thu, 12 Mar 2020 16:18:10 GMT):
but that file had a bunch of extra stuff about raft and kafka im not using
Antimttr (Thu, 12 Mar 2020 16:18:23 GMT):
so im just hoping that omitting that stuff didnt mess it up
jyellick (Thu, 12 Mar 2020 16:19:23 GMT):
I'd recommend going back to the first-network example and working incrementally to identify where things are breaking. Obviously if you are using Raft or Kafka consensus, then you will need the respective section
Antimttr (Thu, 12 Mar 2020 16:20:46 GMT):
well i started with balance-transfer not first-network
Antimttr (Thu, 12 Mar 2020 16:20:53 GMT):
so the balance transfer configtx.yaml is much simpler
Antimttr (Thu, 12 Mar 2020 16:21:08 GMT):
but its giving me all kinds of warnings about policyless configtx.yaml files being deprecated
Antimttr (Thu, 12 Mar 2020 16:21:12 GMT):
so i wanted to integrate the policies
Antimttr (Thu, 12 Mar 2020 16:21:19 GMT):
now without the policies i dont have any of these issues
Antimttr (Thu, 12 Mar 2020 16:21:23 GMT):
but then i dont have any policies defined
Antimttr (Thu, 12 Mar 2020 16:22:00 GMT):
right now im modifying my artifact generation scripts to include NodeOU's for administrator
jyellick (Thu, 12 Mar 2020 16:22:09 GMT):
You have policies defined, but they're simply the defaults. If you've not worked through first-network, this would be a good place to start, since it has all of the policies defined explicitly, as well as NodeOUs enabled.
Antimttr (Thu, 12 Mar 2020 16:23:14 GMT):
right that's what i'm attempting to work through now is assigning the policies
Antimttr (Thu, 12 Mar 2020 16:23:48 GMT):
i was using fabric-ca ops guide as a guide to generating everything by hand so I can write my crypto generation scripts
Antimttr (Thu, 12 Mar 2020 16:24:06 GMT):
but the guide seems outdated so I'm running into these issues
Antimttr (Thu, 12 Mar 2020 16:24:32 GMT):
for instance, it specifies for orderer org admin all of these properties
Antimttr (Thu, 12 Mar 2020 16:24:39 GMT):
but for peer admin it just assigns OU=client
Antimttr (Thu, 12 Mar 2020 16:24:47 GMT):
and no extra properties for being a registrar
Antimttr (Thu, 12 Mar 2020 16:25:04 GMT):
also, first-network doesn't have any of these properties on its peer org admins
Antimttr (Thu, 12 Mar 2020 16:25:30 GMT):
so im trying to figure out what peer admins actually need reading this guide: https://hyperledger-fabric-ca.readthedocs.io/en/latest/users-guide.html?highlight=delegate#registering-a-new-identity
Antimttr (Thu, 12 Mar 2020 16:26:06 GMT):
you can see in the fabric-ca ops guide here: https://hyperledger-fabric-ca.readthedocs.io/en/latest/operations_guide.html#enroll-orderer-org-s-ca-admin
Antimttr (Thu, 12 Mar 2020 16:26:25 GMT):
that they have this set of attrs specified for the orderer admin but not for any of the peer admins that are being registered
Antimttr (Thu, 12 Mar 2020 16:26:45 GMT):
im wondering if this is an oversight or if this is intentional, since those properties aren't needed for peer org admins?
Antimttr (Thu, 12 Mar 2020 16:27:01 GMT):
`"hf.Registrar.Roles=client,hf.Registrar.Attributes=*,hf.Revoker=true,hf.GenCRL=true,admin=true:ecert,abac.init=true:ecert"`
Antimttr (Thu, 12 Mar 2020 16:27:16 GMT):
i would posit that they are required for peerorg admins as well
jyellick (Thu, 12 Mar 2020 16:27:27 GMT):
This is why I linked you to the new 'test-network' documentation which was delivered in v2.0, these concepts are more clearly explained there.
Antimttr (Thu, 12 Mar 2020 16:27:43 GMT):
i see, so will they also hold for 1.4.5?
jyellick (Thu, 12 Mar 2020 16:28:13 GMT):
Most of the concepts with the CA and channel configuration should be very similar. The notable serious divergence is with chaincode lifecycle.
Antimttr (Thu, 12 Mar 2020 16:28:43 GMT):
ok ill read through that guide and see if i can figure out how to properly register peerorg admins
Antimttr (Thu, 12 Mar 2020 16:28:44 GMT):
thanks
Antimttr (Thu, 12 Mar 2020 16:39:21 GMT):
i read through the guide but its using cryptogen, and not registering identities manually
Antimttr (Thu, 12 Mar 2020 16:40:10 GMT):
i have in my test networks already successfully used networks generated by cryptogen, but that is not a production ready scenario
Antimttr (Thu, 12 Mar 2020 16:40:38 GMT):
this network was intended to create a production grade network (step by step i know solo ordering is not production grade)
Antimttr (Thu, 12 Mar 2020 16:41:58 GMT):
using fabric-ca-server to perform registration of all identities
Antimttr (Thu, 12 Mar 2020 16:43:16 GMT):
ill check out network.sh though
Antimttr (Thu, 12 Mar 2020 16:43:24 GMT):
perhaps that has what im looking for
Antimttr (Thu, 12 Mar 2020 16:44:23 GMT):
i guess that script doesnt exist in 1.4.x
Antimttr (Thu, 12 Mar 2020 16:49:55 GMT):
ok so i found registerEnroll.sh that seems to be it
Antimttr (Thu, 12 Mar 2020 16:50:08 GMT):
and in it they register a peer admin: `fabric-ca-client register --caname ca-org1 --id.name org1admin --id.secret org1adminpw --id.type admin --tls.certfiles ${PWD}/organizations/fabric-ca/org1/tls-cert.pem`
Antimttr (Thu, 12 Mar 2020 16:50:28 GMT):
so no specific attrs set there
Antimttr (Thu, 12 Mar 2020 16:52:29 GMT):
and in orderer admin registration: `fabric-ca-client register --caname ca-orderer --id.name ordererAdmin --id.secret ordererAdminpw --id.type admin --tls.certfiles ${PWD}/organizations/fabric-ca/ordererOrg/tls-cert.pem`
Antimttr (Thu, 12 Mar 2020 16:52:35 GMT):
they also dont specify any attrs
Antimttr (Thu, 12 Mar 2020 16:52:52 GMT):
so maybe thats just another area that the fabric-ca-server ops guide is wrong?
Antimttr (Thu, 12 Mar 2020 16:53:19 GMT):
or perhaps outdated
jyellick (Thu, 12 Mar 2020 16:55:49 GMT):
If any documentation is wrong or outdated, I'd encourage you to open a JIRA, especially if it exists in v2.0. Even in v1.4 which is still being maintained, if the bug is something like wrong or missing parameters, it can certainly be fixed.
Antimttr (Thu, 12 Mar 2020 16:57:08 GMT):
well, if anything it's parameters that are extra, not missing. but i still am unsure whether they're intentionally included because of a difference in network design, or whether they're there because that doc is from 2019, when perhaps these attrs were specifically needed and now they aren't anymore
Antimttr (Thu, 12 Mar 2020 16:57:55 GMT):
the registrar params are specifically discussed here: https://hyperledger-fabric-ca.readthedocs.io/en/latest/users-guide.html?highlight=delegate#registering-a-new-identity
Antimttr (Thu, 12 Mar 2020 16:58:57 GMT):
but i dont see them being used in any of the fabric-samples, only in the fabric-ops guide
Antimttr (Thu, 12 Mar 2020 16:59:43 GMT):
for now im going to leave them out since they seem to be left out in the most modern fabric-samples
Antimttr (Thu, 12 Mar 2020 17:00:47 GMT):
perhaps the NodeOU's supersede the need for these attributes to be explicitly set in the identity certs
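For reference, NodeOUs are enabled in the MSP's `config.yaml`; a typical fragment looks like this (CA cert path illustrative; the admin and orderer classifications require Fabric 1.4.3 or later):

```yaml
# Illustrative MSP config.yaml fragment enabling NodeOU classification.
NodeOUs:
  Enable: true
  ClientOUIdentifier:
    Certificate: cacerts/ca.crt
    OrganizationalUnitIdentifier: client
  PeerOUIdentifier:
    Certificate: cacerts/ca.crt
    OrganizationalUnitIdentifier: peer
  AdminOUIdentifier:
    Certificate: cacerts/ca.crt
    OrganizationalUnitIdentifier: admin
  OrdererOUIdentifier:
    Certificate: cacerts/ca.crt
    OrganizationalUnitIdentifier: orderer
```

With this in place, an identity's role comes from the OU in its certificate rather than from attributes baked in at registration time.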
Antimttr (Thu, 12 Mar 2020 17:29:43 GMT):
interesting so i found the fabric-ca-server-config.yaml in the test-network orderer config and that's where attrs are set it looks like: ```
  # Contains identity information which is used when LDAP is disabled
  identities:
    - name: admin
      pass: adminpw
      type: client
      affiliation: ""
      attrs:
        hf.Registrar.Roles: "*"
        hf.Registrar.DelegateRoles: "*"
        hf.Revoker: true
        hf.IntermediateCA: true
        hf.GenCRL: true
        hf.Registrar.Attributes: "*"
        hf.AffiliationMgr: true
```
Antimttr (Thu, 12 Mar 2020 17:47:17 GMT):
although looking at my existing fabric-ca-server-config.yaml files this is already in there
Antimttr (Thu, 12 Mar 2020 17:48:02 GMT):
im thinking thats for the Fabric-ca admin but not for the org admins
rahulhegde (Mon, 16 Mar 2020 20:20:11 GMT):
Hello @jyellick - Question: does fabric 1.4.x honor the [Next Update](https://tools.ietf.org/html/rfc5280#section-5.1.2.5) field present in the CRL?
jyellick (Mon, 16 Mar 2020 20:25:52 GMT):
@rahulhegde Fabric uses golang's X.509 implementation. As best as I can tell https://golang.org/src/crypto/x509/pkix/pkix.go parses that field of the certificate, and makes it queryable via the `HasExpired` method. But, I don't see any invocations of it, either in the golang codebase, nor in the Fabric codebase, so I'm fairly certain the answer is 'no'.
rahulhegde (Mon, 16 Mar 2020 20:30:32 GMT):
thanks.
krabradosty (Tue, 17 Mar 2020 16:02:21 GMT):
Hello! I'm trying to add anchor peers to channel config during its creation. I'm using fabric-node-sdk to do it programmatically. Fabric version is 1.4.6. Orderer successfully accepts channel creation transactions but just ignores anchor peers' information. Then I retrieve the new channel config from orderer and don't see anchor peers records.
FYI:
- I'm able to add anchor peers information with further transactions right after channel creation, but I want to do everything in one step
- If I try to include in the channel creation transaction some not-allowed values (I tried a typo under section `groups.Application.groups.mspid.values`: "AnchooooorPeers" instead of "AnchorPeers"), I got "Bad Request" from the orderer. That means the orderer expects anchor peer records, but ignores them for some reason.
Is such a channel creation transaction allowed? Or am I doing something wrong?
jyellick (Tue, 17 Mar 2020 16:15:48 GMT):
@krabradosty It's possible to specify anchor peers at channel creation time. I'd recommend you use `configtxgen` and its `-baseProfile` flag to generate a channel creation tx which sets anchor peers. Then inspect the tx to see how it's being accomplished and replicate that in your code.
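For reference, anchor peers are declared per organization in configtx.yaml; a fragment like this (names and paths illustrative) is what `configtxgen` folds into the channel creation tx:

```yaml
# Illustrative configtx.yaml fragment declaring an anchor peer for an org.
Organizations:
  - &Org1
    Name: Org1MSP
    ID: Org1MSP
    MSPDir: crypto-config/peerOrganizations/org1.example.com/msp
    AnchorPeers:
      - Host: peer0.org1.example.com
        Port: 7051
```

In the generated config this ends up as an `AnchorPeers` value under `groups.Application.groups.Org1MSP.values`.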
krabradosty (Wed, 18 Mar 2020 09:21:02 GMT):
I tried again, but no luck. I am able to redefine, for instance, BatchTimeout value during channel creation. But values for anchor peers are still ignored. Is it possible that there is a bug in Orderer?
krabradosty (Wed, 18 Mar 2020 09:23:18 GMT):
BTW, there is no such option in `configtxgen` to generate a channel creation tx with anchor peers. Only a transaction for updating the channel config.
jyellick (Wed, 18 Mar 2020 11:25:05 GMT):
From an orderer perspective, channel creation is just a channel config update -- no different than the update you send to set the anchor peers. I can't claim to have done it personally, but I'm fairly certain others have set anchor peers at channel creation time.
There is definitely `-baseProfile` flag, but it exists only in v2.0 of `configtxgen`, not v1.4.x if that's what you're trying to do.
aberwag (Tue, 24 Mar 2020 08:08:40 GMT):
Has joined the channel.
rahulhegde (Thu, 02 Apr 2020 17:31:02 GMT):
hello @jyellick - Trying to confirm whether we have an enrolment & TLS certificate expiry check in the fabric-1.4.2 release.
[1] [Expiry Check Absent](https://hyperledger-fabric.readthedocs.io/en/latest/msp.html#msp-configuration)
`It is important to note that MSP identities never expire; they can only be revoked by adding them to the appropriate CRLs. Additionally, there is currently no support for enforcing revocation of TLS certificates.`
[2] Expiry check is implemented at the Orderer and is enabled with v1.4.2 capability. This is performed only on the signing entity i.e. client sending the broadcast to orderer service.
```
// ExpirationCheck specifies whether the orderer checks for identity expiration checks
// when validating messages
func (cp *OrdererProvider) ExpirationCheck() bool {
return cp.v11BugFixes || cp.v142
}
```
a) Can you confirm that there is no TLS expiry check, and that the enrolment certificate expiry check is limited to the Orderer?
b) Does this change in versions higher than Fabric 1.4.2, or in the Fabric 2.0 release?
c) Is there a reason/Jira explaining why this check is absent, or any planned future implementation?
jyellick (Thu, 02 Apr 2020 17:38:54 GMT):
Since v1.1, the orderer has blocked broadcast for signing identities which have an expiration date set, and are currently expired (as determined by the local time of the orderer).
jyellick (Thu, 02 Apr 2020 17:40:06 GMT):
The general reason for not attempting to invalidate certificates based on time is that, in an asynchronous system, it's important for all nodes to validate deterministically, and we do not want some nodes to invalidate a transaction while others validate it, based on the time they receive it.
jyellick (Thu, 02 Apr 2020 17:41:19 GMT):
This is why the ingress control on broadcast was thought to be adequate -- it prevents the transaction from ever making it onto the blockchain so as to avoid these asynchronous validation problems.
jyellick (Thu, 02 Apr 2020 17:41:29 GMT):
That doc should probably be updated to reflect the current state of affairs.
jyellick (Thu, 02 Apr 2020 17:41:55 GMT):
There are some other places, notably in the Deliver API which checks for sign-cert expiration.
jyellick (Thu, 02 Apr 2020 17:44:09 GMT):
As far as TLS expiry checks, I would need to double check, but my feeling is that these are enforced (since this is mostly golang's doing).
jyellick (Thu, 02 Apr 2020 17:44:38 GMT):
Nothing has changed in v2.0 with respect to expiration checks or revoking TLS identities.
rahulhegde (Thu, 02 Apr 2020 18:12:38 GMT):
Is this the scenario where the orderer node will not deliver the block if the client's MSP identity has expired?
To confirm: is it still the case in Fabric that there is no support for enforcing a revocation list for TLS certificates, and that adding it requires implementation effort only?
jyellick (Thu, 02 Apr 2020 18:15:50 GMT):
The orderer will not deliver blocks to peers after their sign-cert identity has expired. The peer will not deliver blocks/events to clients after their sign-cert identity has expired.
No support for revoking TLS certs, but probably a valid feature which requires implementation effort only.
rahulhegde (Thu, 02 Apr 2020 18:17:36 GMT):
Thanks Jason.
pritam_01 (Mon, 06 Apr 2020 13:25:42 GMT):
Has joined the channel.
narendranathreddy (Wed, 08 Apr 2020 10:59:06 GMT):
2020-04-08 10:55:03.533 UTC [orderer.consensus.etcdraft] Step -> INFO 03a 3 [term: 1] received a MsgHeartbeat message with higher term from 5 [term: 6] channel=dedsyschannel node=3
2020-04-08 10:55:03.533 UTC [orderer.consensus.etcdraft] becomeFollower -> INFO 03b 3 became follower at term 5 channel=ucrnetchannel node=3
2020-04-08 10:55:03.533 UTC [orderer.consensus.etcdraft] becomeFollower -> INFO 03c 3 became follower at term 6 channel=dedsyschannel node=3
2020-04-08 10:55:03.533 UTC [orderer.consensus.etcdraft] commitTo -> PANI 03d tocommit(76492) is out of range [lastIndex(0)]. Was the raft log corrupted, truncated, or lost? channel=ucrnetchannel node=3
panic: tocommit(76492) is out of range [lastIndex(0)]. Was the raft log corrupted, truncated, or lost?
goroutine 136 [running]:
github.com/hyperledger/fabric/vendor/go.uber.org/zap/zapcore.(*CheckedEntry).Write(0xc000e782c0, 0x0, 0x0, 0x0)
/opt/gopath/src/github.com/hyperledger/fabric/vendor/go.uber.org/zap/zapcore/entry.go:229 +0x546
github.com/hyperledger/fabric/vendor/go.uber.org/zap.(*SugaredLogger).log(0xc000152e70, 0xc0001b7404, 0x1577809, 0x5d, 0xc000eb9680, 0x2, 0x2, 0x0, 0x0, 0x0)
/opt/gopath/src/github.com/hyperledger/fabric/vendor/go.uber.org/zap/sugar.go:234 +0x101
github.com/hyperledger/fabric/vendor/go.uber.org/zap.(*SugaredLogger).Panicf(...)
/opt/gopath/src/github.com/hyperledger/fabric/vendor/go.uber.org/zap/sugar.go:159
github.com/hyperledger/fabric/common/flogging.(*FabricLogger).Panicf(0xc000152e78, 0x1577809, 0x5d, 0xc000eb9680, 0x2, 0x2)
/opt/gopath/src/github.com/hyperledger/fabric/common/flogging/zap.go:74 +0x7d
github.com/hyperledger/fabric/vendor/go.etcd.io/etcd/raft.(*raftLog).commitTo(0xc00022bea0, 0x12acc)
/opt/gopath/src/github.com/hyperledger/fabric/vendor/go.etcd.io/etcd/raft/log.go:203 +0x131
github.com/hyperledger/fabric/vendor/go.etcd.io/etcd/raft.(*raft).handleHeartbeat(0xc000667180, 0x8, 0x3, 0x5, 0x5, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
/opt/gopath/src/github.com/hyperledger/fabric/vendor/go.etcd.io/etcd/raft/raft.go:1324 +0x54
github.com/hyperledger/fabric/vendor/go.etcd.io/etcd/raft.stepFollower(0xc000667180, 0x8, 0x3, 0x5, 0x5, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
/opt/gopath/src/github.com/hyperledger/fabric/vendor/go.etcd.io/etcd/raft/raft.go:1269 +0x459
github.com/hyperledger/fabric/vendor/go.etcd.io/etcd/raft.(*raft).Step(0xc000667180, 0x8, 0x3, 0x5, 0x5, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
/opt/gopath/src/github.com/hyperledger/fabric/vendor/go.etcd.io/etcd/raft/raft.go:971 +0x139a
github.com/hyperledger/fabric/vendor/go.etcd.io/etcd/raft.(*node).run(0xc00014f3e0, 0xc000667180)
/opt/gopath/src/github.com/hyperledger/fabric/vendor/go.etcd.io/etcd/raft/node.go:357 +0x10e0
created by github.com/hyperledger/fabric/vendor/go.etcd.io/etcd/raft.StartNode
/opt/gopath/src/github.com/hyperledger/fabric/vendor/go.etcd.io/etcd/raft/node.go:233 +0x408
narendranathreddy (Wed, 08 Apr 2020 10:59:59 GMT):
There are 5 orderers total in the cluster. All 5 ran fine for 11 of the past 13 days; suddenly, on the 13th day, orderer3 started giving this error
narendranathreddy (Wed, 08 Apr 2020 11:00:21 GMT):
Note: I have not deleted any WAL file, nor even touched the environment
jyellick (Wed, 08 Apr 2020 13:01:28 GMT):
Did this orderer run out of disk space? Did it crash? That error message can be read literally. Either the WAL was deleted (possibly because it was not properly persisted), or was corrupted/truncated in some other way.
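The panic above can be read as etcd/raft's consistency guard, sketched here in Go (an illustrative simplification, not the actual library code; only the error message format is borrowed from the log): a follower that lost its WAL restarts at lastIndex 0, and when the leader's heartbeat carries a commit index the follower never appended, refusing to proceed is safer than silently diverging.

```go
package main

import "fmt"

// commitTo mimics the guard that produced the panic above: a node may
// only advance its commit index to an entry it actually has in its log.
// If the WAL was deleted or truncated, lastIndex regresses (to 0 on a
// fresh start) and the leader's commit index becomes unreachable.
func commitTo(lastIndex, tocommit uint64) error {
	if tocommit > lastIndex {
		return fmt.Errorf("tocommit(%d) is out of range [lastIndex(%d)]. Was the raft log corrupted, truncated, or lost?", tocommit, lastIndex)
	}
	return nil
}

func main() {
	fmt.Println(commitTo(76500, 76492)) // healthy follower: nil error
	fmt.Println(commitTo(0, 76492))     // WAL lost: this is where the orderer panics
}
```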
narendranathreddy (Wed, 08 Apr 2020 13:02:36 GMT):
kubernetes node restarted so pods rescheduled
narendranathreddy (Wed, 08 Apr 2020 13:02:55 GMT):
why are the other 4 orderers running perfectly?
narendranathreddy (Wed, 08 Apr 2020 13:03:19 GMT):
i used same mechanism to store orderer data onto the pvc
jyellick (Wed, 08 Apr 2020 13:03:37 GMT):
Were they all restarted? Are they all configured identically?
narendranathreddy (Wed, 08 Apr 2020 13:04:09 GMT):
yes all restarted and all configured identically
narendranathreddy (Wed, 08 Apr 2020 13:04:31 GMT):
we are automating with client-go, so all orderers are created with the same configuration
narendranathreddy (Wed, 08 Apr 2020 13:04:40 GMT):
```
{
    Name:  "ORDERER_FILELEDGER_LOCATION",
    Value: "/mnt/" + ordererName + "/production/data",
}
```
jyellick (Wed, 08 Apr 2020 13:05:04 GMT):
Since it only happened on one node, I would try to identify what is different about this node. Something must be, or otherwise as you point out, you should see the problem consistently. What about the restart order? Were they all shut down at once? Was this the last node to start?
narendranathreddy (Wed, 08 Apr 2020 13:05:29 GMT):
i created a pvc with mount path /mnt/" + ordererName + "/production/data
narendranathreddy (Wed, 08 Apr 2020 13:06:19 GMT):
actually, our security team has added some monitoring to Kubernetes cluster and restarted all nodes
jyellick (Wed, 08 Apr 2020 13:06:30 GMT):
What about `ORDERER_CONSENSUS_WALDIR`?
narendranathreddy (Wed, 08 Apr 2020 13:07:03 GMT):
do we need to mount this ?
jyellick (Wed, 08 Apr 2020 13:07:09 GMT):
This is where the WAL is persisted
narendranathreddy (Wed, 08 Apr 2020 13:07:09 GMT):
fuck i forgot this
narendranathreddy (Wed, 08 Apr 2020 13:07:22 GMT):
what to do now ?
narendranathreddy (Wed, 08 Apr 2020 13:07:48 GMT):
i know we need to update the channel
narendranathreddy (Wed, 08 Apr 2020 13:07:56 GMT):
to remove orderer and add new orderer
narendranathreddy (Wed, 08 Apr 2020 13:08:15 GMT):
other than this, can we do anything here to make this work?
jyellick (Wed, 08 Apr 2020 13:08:50 GMT):
The correct, and recommended way to fix this in production would be to remove the node, create a new node with correctly configured persistence, and add it back, via channel config txes.
narendranathreddy (Wed, 08 Apr 2020 13:08:52 GMT):
but how come other 4 orderers are working ?
jyellick (Wed, 08 Apr 2020 13:09:42 GMT):
That being said, you should be able to simply shut the bad node down, copy a good ledger and WAL to it, and start it back up, and have things be okay. The hard part you may find in your env is shutting down another node so as to safely copy these directories without losing them.
narendranathreddy (Wed, 08 Apr 2020 13:09:50 GMT):
i have not added ORDERER_CONSENSUS_WALDIR at all to all 5 orderers
jyellick (Wed, 08 Apr 2020 13:11:15 GMT):
Were I to guess, the orderers were cycled in a rolling fashion. If the leader fails, then we cannot detect WAL regressions, so the WAL likely silently reverted to index 0 everywhere, and your one node was the last to start. But just a guess.
narendranathreddy (Wed, 08 Apr 2020 13:11:18 GMT):
what else do we need to persist in the future?
jyellick (Wed, 08 Apr 2020 13:12:44 GMT):
The WAL, the ledger, the msp dir, the config. These are the four things I would expect to be mounted into the container. Of course the MSP dir and the config will not be written to by the orderer, only read from. The WAL and ledger are the only things written to.
jyellick (Wed, 08 Apr 2020 13:13:15 GMT):
Yes, if you do not persist the WAL, behavior is unspecified, in this case you were lucky/unlucky that four nodes came back up after the restart.
narendranathreddy (Wed, 08 Apr 2020 13:13:34 GMT):
do we need to persist snapshot also ?
narendranathreddy (Wed, 08 Apr 2020 13:14:32 GMT):
gotcha completely
narendranathreddy (Wed, 08 Apr 2020 13:14:55 GMT):
one last question
narendranathreddy (Wed, 08 Apr 2020 13:15:15 GMT):
shall I copy another orderer's WAL file to the one that is in error mode?
narendranathreddy (Wed, 08 Apr 2020 13:15:24 GMT):
will it work
jyellick (Wed, 08 Apr 2020 13:15:43 GMT):
Ah, yes, snapshot should also be persisted, typically this is in the same PV as the WAL, though need not be.
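To make that list concrete, here is a Go sketch in the same spirit as the env-var struct earlier in the thread (the env var names are Fabric's; the paths are its documented defaults, but treat the exact values as assumptions to adapt to your own PVC mounts):

```go
package main

import "fmt"

// persisted describes a directory an orderer container should have
// mounted on a persistent volume, per the discussion above: the ledger,
// WAL, and snapshots are written by the orderer; the MSP dir (and the
// config) are only read.
type persisted struct {
	EnvVar  string
	Path    string // Fabric default locations; override to your PVC mounts
	Written bool
}

func ordererMounts() []persisted {
	return []persisted{
		{"ORDERER_FILELEDGER_LOCATION", "/var/hyperledger/production/orderer", true},
		{"ORDERER_CONSENSUS_WALDIR", "/var/hyperledger/production/orderer/etcdraft/wal", true},
		{"ORDERER_CONSENSUS_SNAPDIR", "/var/hyperledger/production/orderer/etcdraft/snapshot", true},
		{"ORDERER_GENERAL_LOCALMSPDIR", "/var/hyperledger/orderer/msp", false},
	}
}

func main() {
	for _, p := range ordererMounts() {
		fmt.Printf("%-30s %-50s written=%v\n", p.EnvVar, p.Path, p.Written)
	}
}
```

If any of the written directories lands on ephemeral container storage, a pod reschedule silently resets the Raft log, which is exactly the failure mode in this thread.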
jyellick (Wed, 08 Apr 2020 13:16:45 GMT):
You need to copy an intact ledger _and_ WAL together for this to work. You are traveling down an unsupported path where because the protocol guarantees of Raft are being violated, it's possible, though unlikely that bad things could happen. If you want to be totally safe, remove the node via a configtx, and add a new one.
narendranathreddy (Wed, 08 Apr 2020 13:17:32 GMT):
perfect thank you jason
narendranathreddy (Wed, 08 Apr 2020 14:32:10 GMT):
I have a couple of questions in my mind. Question 1: if a consortium has one organization and it has 3 orderers,
when we try to add a 4th orderer, do we need to update the system channel?
Or can we simply use the existing genesis block, update the application channel, and bring the 4th orderer up and running?
narendranathreddy (Wed, 08 Apr 2020 14:36:20 GMT):
Question 2: we have two consortiums.
Consortium-1 has one organization and one channel.
Consortium-2 has one organization and one channel.
An org from consortium-2 has joined consortium-1's channel.
Consortium-2's org orderer also wants to join consortium-1's channel.
Do we need to update consortium-1's system channel?
jyellick (Wed, 08 Apr 2020 14:37:37 GMT):
As it stands today, all orderers must be members of the orderer system channel. You bootstrap an orderer by supplying it with the genesis block of the orderer system channel.
jyellick (Wed, 08 Apr 2020 14:38:49 GMT):
There is an RFC to allow independent channel membership for orderers and to eliminate the orderer system channel -- https://github.com/hyperledger/fabric-rfcs/pull/24 -- but this has not been approved yet.
lehors (Wed, 08 Apr 2020 17:22:01 GMT):
is it by design that configtxgen now requires EtcdRaft Consenters to be specified in configtx.yaml?
lehors (Wed, 08 Apr 2020 17:23:20 GMT):
I just found out that BYFN no longer works against master, and what's needed to fix it is to add the EtcdRaft Consenters section to configtx.yaml
lehors (Wed, 08 Apr 2020 17:23:52 GMT):
it's an easy fix but I'm curious as to what changed and whether it was intentional
tommyjay (Wed, 08 Apr 2020 17:35:42 GMT):
@jyellick when a peer joins a channel, what does it need from tls perspective to trust the orderer
jyellick (Thu, 09 Apr 2020 20:22:33 GMT):
The peer uses the TLS information and orderer addresses supplied in the genesis block. If orderer locations or TLS CAs have changed since genesis, you may override them using the orderer overrides section of `core.yaml`. Eventually, we hope to allow joining a peer to a channel by supplying the latest config block (which would have current TLS CAs and orderer addresses), but this is not available yet.
kopaygorodsky (Thu, 09 Apr 2020 21:32:35 GMT):
Has joined the channel.
kopaygorodsky (Thu, 09 Apr 2020 21:46:38 GMT):
that's why we sometimes see a TLS error right after a peer joins a channel: because the genesis block didn't contain the TLS configs, right? After all blocks are pulled, the error disappears
metadata (Fri, 10 Apr 2020 09:12:43 GMT):
Has joined the channel.
tommyjay (Fri, 10 Apr 2020 13:05:22 GMT):
yes, i tried that second option to send new certs in a config block. i have to add everything into the genesis block
tommyjay (Fri, 10 Apr 2020 13:06:06 GMT):
but then for brand new orgs how do they then talk to the orderer if they don't have the tlscacert
jyellick (Fri, 10 Apr 2020 13:08:24 GMT):
If none of your original orderers are at the same addresses, or if all of the TLS CAs for those orderers have rotated, then you will need to map the old orderer addresses to new ones, and reference the new TLS CAs in those overrides.
As I mentioned, we realize this is not a great story, so we're working to make this better. The overrides are essentially a workaround for the problem until it can be more elegantly addressed.
Adhavpavan (Fri, 10 Apr 2020 16:36:51 GMT):
Hello Everyone,
I am using fabric 2.0 and I have 3 orderers.
When I run these services (Raft orderers), they are not electing any leader among them.
Error log from orderer 1:
[orderer.consensus.etcdraft] logSendFailure -> ERRO 020 Failed to send StepRequest to 3, because: rpc error: code = Unavailable desc = all SubConns are in TransientFailure, latest connection error: connection error: desc = "transport: Error while dialing dial tcp 172.21.0.4:9050: connect: connection refused" channel=sys-channel node=1
The weird thing is that when I run the same services with orderer image 1.4.4, it works fine.
Find the docker services here- https://www.codepile.net/raw/d2lN4zk3.js
kopaygorodsky (Fri, 10 Apr 2020 17:45:53 GMT):
your orderers can't communicate to each other
Adhavpavan (Sat, 11 Apr 2020 03:01:53 GMT):
What could be the possible reason?
kopaygorodsky (Sat, 11 Apr 2020 05:57:41 GMT):
Connection refused, any reason here
Taffies (Mon, 13 Apr 2020 09:10:27 GMT):
Hello,
I am trying to upgrade to Fabric 2.0, but am having trouble with channel creation. It has something to do with policies, but I can't quite figure out what the error is. I've attached a screenshot of the orderer logs in the next message.
As far as I know, the policies in configtx is the exact same as the one in fabric-samples test-network, but it keeps throwing an error that the policies were not satisfied.
Any help is greatly appreciated, thank you!
Taffies (Mon, 13 Apr 2020 09:11:05 GMT):
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=H4KR4eJP9BSp4MtxR)
Screen Shot 2020-04-13 at 5.07.01 PM.png
Rajatsharma (Tue, 14 Apr 2020 20:18:54 GMT):
Hey @guoger ,
I was using CouchDB and I tried to spawn a network with more than 2 peers in an organization (I had 5 peers). Very unusually, the data in all the CouchDB instances was not the same. I don't get this: if a peer belongs to an authorized organization, then ideally it should pull data from another peer. Should this be the expected behavior?
jyellick (Wed, 15 Apr 2020 20:19:56 GMT):
Looks to me like your orderers are not able to write to the channel. Were I to guess, you did not bootstrap your orderers with NodeOU support, but your `configtx.yaml` requires them.
MHBauer (Thu, 16 Apr 2020 21:17:10 GMT):
Has left the channel.
Ashish (Fri, 24 Apr 2020 10:32:50 GMT):
Hi
Regardless of which consensus mechanism we choose, is the (YAML-based) endorsement policy always going to be there in Fabric?
I mean, can't we have PBFT at the transaction endorsement level?
ShobhitSrivastava (Fri, 24 Apr 2020 13:15:54 GMT):
hi @yacovm can you take a look here. I have been working with the 1.4 version of fabric, but in 2.0 I am not able to create the channel block with the same procedure I used in 1.4. I always get the below error: "Error: got unexpected status: FORBIDDEN -- config update for existing channel did not pass initial checks: implicit policy evaluation failed - 0 sub-policies were satisfied, but this policy requires 1 of the 'Writers' sub-policies to be satisfied: permission denied"
Using the below command:
```
docker exec -e "CORE_PEER_LOCALMSPID=tfmMSP" -e "CORE_PEER_MSPCONFIGPATH=/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/peerOrganizations/telecom-network.com/users/Admin@telecom-network.com/msp" peer0.telecom-network.com peer channel create -o orderer.telecom-network.com:7050 -c commonchannel -f /etc/hyperledger/configtx/channel1.tx --tls true --cafile "/opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/ordererOrganizations/telecom-network.com/orderers/orderer.telecom-network.com/msp/tlscacerts/tlsca.telecom-network.com-cert.pem"
```
Can anyone please check it?
jyellick (Fri, 24 Apr 2020 13:27:08 GMT):
I don't really understand this question? Fabric is an execute-order-validate model. Validation is done by endorsement policy; ordering is done by the consensus mechanism, which has nothing to do with endorsements or endorsement policy. Unlike in some other blockchains, ordering does not execute or validate transactions (it only validates that the submitter is authorized to transact).
Ashish (Fri, 24 Apr 2020 13:29:31 GMT):
Maybe my understanding of consensus is wrong, but what I understood is that in PBFT, 2N+1 nodes can vouch for a transaction result's validity, which prevents a malicious node from pushing a bad result into the ledger
Ashish (Fri, 24 Apr 2020 13:30:17 GMT):
in fabric, the transaction can be pushed into the fabric if the endorsement policy can be met
Ashish (Fri, 24 Apr 2020 13:30:52 GMT):
so is this execute-order-validate model what prevents fabric from implementing transaction-level consensus?
jyellick (Fri, 24 Apr 2020 13:31:08 GMT):
In PBFT, it's traditionally 3f+1, but as I said, ordering nodes do not validate transactions. Invalid transactions may be present on the blockchain, peers validate transactions, according to the endorsement policy after they have been ordered. Because endorsement policy checking is deterministic, all nodes end up with the same validation result and the same resulting state when applied.
jyellick (Fri, 24 Apr 2020 13:31:54 GMT):
If you want to be technical, endorsement is consensus on execution, and ordering is consensus on order. Some systems choose to combine the concepts, Fabric does not.
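The fault-tolerance arithmetic behind the figures above can be sketched quickly (these are the standard textbook formulas, not Fabric code): a PBFT-style BFT protocol needs n = 3f+1 nodes to tolerate f Byzantine faults, while a crash-fault-tolerant protocol like Raft needs n = 2f+1 to tolerate f crashed nodes, since a majority quorum must survive.

```go
package main

import "fmt"

// bftNodes returns the cluster size needed to tolerate f Byzantine
// faults in PBFT-style protocols (n = 3f + 1).
func bftNodes(f int) int { return 3*f + 1 }

// cftNodes returns the cluster size needed to tolerate f crash faults
// in Raft-style protocols (n = 2f + 1).
func cftNodes(f int) int { return 2*f + 1 }

// raftQuorum returns the majority quorum of an n-node Raft cluster.
func raftQuorum(n int) int { return n/2 + 1 }

func main() {
	fmt.Println(bftNodes(1), cftNodes(1)) // tolerate one fault: 4 vs 3 nodes
	fmt.Println(raftQuorum(5))            // a 5-node Raft cluster needs 3 for quorum
}
```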
jyellick (Fri, 24 Apr 2020 13:33:34 GMT):
Transactions may be pushed onto the blockchain, so long as they are submitted by an authorized user. An authorized user may submit invalid, not appropriately endorsed transactions, and they will have no effect on the state. Because Fabric is permissioned, and because transactions are attributable, if a user chose to spam the blockchain, their access can simply be revoked. No harm is actually done to the state, as their transactions are marked invalid.
Ashish (Fri, 24 Apr 2020 13:41:57 GMT):
-- Agreed,
And on this point, "endorsement is consensus on execution": my question is, when Fabric says it has pluggable consensus, it is always talking about pluggability at the "ordering level", isn't it?
Ashish (Fri, 24 Apr 2020 13:42:30 GMT):
and the pluggable consensus concept is not applied to execution, right?
jyellick (Fri, 24 Apr 2020 13:42:42 GMT):
Actually, there is pluggable consensus at ordering, and at endorsement -- though yes, when we refer to pluggable consensus, it is usually about pluggable ordering.
jyellick (Fri, 24 Apr 2020 13:43:30 GMT):
Fabric ships with a standard endorsement plugin, and validation plugin. But, Fabric can be enhanced with custom endorsement and validation plugins.
Ashish (Fri, 24 Apr 2020 13:44:37 GMT):
so when we switch from solo to kafka to raft, the consensus mode in the ordering service is what gets changed, right?
jyellick (Fri, 24 Apr 2020 13:44:50 GMT):
Yes
Ashish (Fri, 24 Apr 2020 13:45:28 GMT):
Thank you Jason, Thank you very much. this cleared my confusion.
jyellick (Fri, 24 Apr 2020 13:46:06 GMT):
You're quite welcome
Ashish (Fri, 24 Apr 2020 13:46:10 GMT):
:)
yehuofirst (Sun, 26 Apr 2020 09:32:10 GMT):
Has joined the channel.
yehuofirst (Sun, 26 Apr 2020 09:32:11 GMT):
on every invoke, the orderer logs: 2020-04-26 08:37:38.347 UTC [orderer.common.broadcast] Handle -> WARN 065 Error reading from 172.18.0.11:53966: rpc error: code = Canceled desc = context canceled
BrettLogan (Mon, 27 Apr 2020 02:08:17 GMT):
Without more granular logs, the command you ran, and the environment variables you have set, it's hard to be certain, but generally this error means you've presented invalid TLS certs
vieiramanoel (Thu, 07 May 2020 13:48:18 GMT):
Hi, is there a way for orderer to fetch its tls private key from hsm?
vieiramanoel (Thu, 07 May 2020 13:49:04 GMT):
or do I necessarily need to generate the key pair outside the HSM? I didn't find anything related in the docs
yacovm (Thu, 07 May 2020 14:52:55 GMT):
no way
vieiramanoel (Thu, 07 May 2020 14:55:44 GMT):
thanks
nvxtien (Sat, 09 May 2020 19:17:48 GMT):
Hi,
When I started the orderer on my machine, I got this error:
"
[orderer.common.server] initializeServerConfig -> INFO 004 Starting orderer with TLS enabled
[orderer.common.server] Main -> PANI 005 Failed validating bootstrap block: initializing channelconfig failed: could not create channel Application sub-group config: setting up the MSP manager failed: admin 0 is invalid: could not obtain certification chain: An X509 certificate with Basic Constraint: Certificate Authority equals true cannot be used as an identity
orderer1-org0 orderer[14086]: panic: Failed validating bootstrap block: initializing channelconfig failed: could not create channel Application sub-group config: setting up the MSP manager failed: admin 0 is invalid: could not obtain certification chain: An X509 certificate with Basic Constraint: Certificate Authority equals true cannot be used as an identity
"
Could you please give me any idea about the problem?
Thanks.
yacovm (Sat, 09 May 2020 21:15:53 GMT):
@nvxtien I guess you have an admin certificate that is actually a CA
kopaygorodsky (Sun, 10 May 2020 01:34:56 GMT):
@yacovm I've had a bug with ordering service: when I have multiple orgs added to system channel Orderer group in genesis block I'm not able to connect to my ordering nodes (mutual tls enabled because of raft). In ordering logs I see `TLS handshake failed with error tls: failed to verify client's certificate: x509: certificate signed by unknown authority (possibly because of "x509: ECDSA verification failure" while trying to verify candidate authority certificate "fabric-ca-server") `.
When I start with 1 org and then add orgs with channel update - it works, but after a few mins it throws same error again.
I figured out the problem by reading orderer source code. The problem was in CRS subjects of all 3 CAs, they have same CN + OU which means subjects are equal.
https://github.com/hyperledger/fabric/blob/2c2274d0519a1ce1eba596ff0a43636dce64d926/internal/pkg/comm/server.go#L271 it happens because key in this map.
Is it a bug or feature? yes, it's stupid to have same subjects in all orgs, but for dev environment why not?
I can create jira issue if you say this is bug.
Also, I was able to add consenters with empty port and host.
kopaygorodsky (Sun, 10 May 2020 01:34:56 GMT):
@yacovm I've had a bug with ordering service: when I have multiple orgs added to system channel Orderer group in genesis block I'm not able to connect to my ordering nodes (mutual tls enabled because of raft). In ordering logs I see `TLS handshake failed with error tls: failed to verify client's certificate: x509: certificate signed by unknown authority (possibly because of "x509: ECDSA verification failure" while trying to verify candidate authority certificate "fabric-ca-server") `.
When I start with 1 org and then add orgs with channel update - it works, but after a few mins it throws same error again.
I figured out the problem by reading orderer source code. The problem was in CRS subjects of all 3 CAs, they have same CN + OU which means subjects are equal.
https://github.com/hyperledger/fabric/blob/2c2274d0519a1ce1eba596ff0a43636dce64d926/internal/pkg/comm/server.go#L271 it happens because key in this map.
Is it a bug or feature? yes, it's stupid to have same subjects in all orgs, but for dev environment why not?
I can create jira issue if you say this is bug.
Btw, I was able to add consenters with empty port and host.
kopaygorodsky (Sun, 10 May 2020 01:34:56 GMT):
@yacovm @jyellick I've had a bug with ordering service: when I have multiple orgs added to system channel Orderer group in genesis block I'm not able to connect to my ordering nodes (mutual tls enabled because of raft). In ordering logs I see `TLS handshake failed with error tls: failed to verify client's certificate: x509: certificate signed by unknown authority (possibly because of "x509: ECDSA verification failure" while trying to verify candidate authority certificate "fabric-ca-server") `.
When I start with 1 org and then add orgs with channel update - it works, but after a few mins it throws same error again.
I figured out the problem by reading orderer source code. The problem was in CRS subjects of all 3 CAs, they have same CN + OU which means subjects are equal.
https://github.com/hyperledger/fabric/blob/2c2274d0519a1ce1eba596ff0a43636dce64d926/internal/pkg/comm/server.go#L271 it happens because key in this map.
Is it a bug or feature? yes, it's stupid to have same subjects in all orgs, but for dev environment why not?
I can create jira issue if you say this is bug.
Btw, I was able to add consenters with empty port and host.
kopaygorodsky (Sun, 10 May 2020 01:34:56 GMT):
@yacovm @jyellick I've had a bug with ordering service: when I have multiple orgs added to system channel Orderer group in genesis block I'm not able to connect to my ordering nodes (mutual tls enabled because of raft). In ordering logs I see `TLS handshake failed with error tls: failed to verify client's certificate: x509: certificate signed by unknown authority (possibly because of "x509: ECDSA verification failure" while trying to verify candidate authority certificate "fabric-ca-server") `.
When I start with 1 org and then add orgs with channel update - it works, but after a few mins it throws same error again.
I figured out the problem by reading orderer source code. The problem was in CRS subjects of all 3 CAs, they have same CN + OU which means subjects are equal.
https://github.com/hyperledger/fabric/blob/2c2274d0519a1ce1eba596ff0a43636dce64d926/internal/pkg/comm/server.go#L271 it happens because key in this map.
Is it a bug or feature? yes, it's stupid to have same subjects in all orgs, but for dev environment why not?
I can create jira issue if you say this is bug.
Btw, I was able to add consenters with empty port and host, is it a bug too?
kopaygorodsky (Sun, 10 May 2020 01:34:56 GMT):
@yacovm @jyellick I've had a bug with ordering service: when I have multiple orgs added to system channel Orderer group in genesis block I'm not able to connect to my ordering nodes (mutual tls enabled because of raft). In ordering logs I see `TLS handshake failed with error tls: failed to verify client's certificate: x509: certificate signed by unknown authority (possibly because of "x509: ECDSA verification failure" while trying to verify candidate authority certificate "fabric-ca-server") `.
When I start with 1 org and then add orgs with channel update - it works, but after a few mins it throws same error again.
I figured out the problem by reading orderer source code. The problem was in CRS subjects of all 3 CAs, they have same CN + OU which means subjects are equal.
https://github.com/hyperledger/fabric/blob/2c2274d0519a1ce1eba596ff0a43636dce64d926/internal/pkg/comm/server.go#L271 it happens because of key in this map.
Is it a bug or feature? yes, it's stupid to have same subjects in all orgs, but for dev environment why not?
I can create jira issue if you say this is bug.
Btw, I was able to add consenters with empty port and host, is it a bug too?
kopaygorodsky (Sun, 10 May 2020 01:34:56 GMT):
@yacovm @jyellick I've had a bug with ordering service: when I have multiple orgs added to system channel Orderer group in genesis block I'm not able to connect to my ordering nodes (mutual tls enabled because of raft). In ordering logs I see `TLS handshake failed with error tls: failed to verify client's certificate: x509: certificate signed by unknown authority (possibly because of "x509: ECDSA verification failure" while trying to verify candidate authority certificate "fabric-ca-server") `.
When I start with 1 org and then add orgs with channel update - it works, but after a few mins it throws same error again.
I figured out the problem by reading orderer source code. The problem was in CRS subjects of all CAs, they have same CN + OU which means subjects are equal.
https://github.com/hyperledger/fabric/blob/2c2274d0519a1ce1eba596ff0a43636dce64d926/internal/pkg/comm/server.go#L271 it happens because of key in this map.
Is it a bug or feature? yes, it's stupid to have same subjects in all orgs, but for dev environment why not?
I can create jira issue if you say this is bug.
Btw, I was able to add consenters with empty port and host, is it a bug too?
kopaygorodsky (Sun, 10 May 2020 01:34:56 GMT):
@yacovm @jyellick I've hit a bug in the ordering service: when I have multiple orgs added to the system channel's Orderer group in the genesis block, I'm not able to connect to my ordering nodes (mutual TLS is enabled because of Raft). In the orderer logs I see `TLS handshake failed with error tls: failed to verify client's certificate: x509: certificate signed by unknown authority (possibly because of "x509: ECDSA verification failure" while trying to verify candidate authority certificate "fabric-ca-server")`.
When I start with 1 org and then add orgs via channel update it works, but after a few minutes it throws the same error again.
I figured out the problem by reading the orderer source code. The problem was in the CSR subjects of all the CAs: they have the same CN + OU, which means the subjects are equal.
https://github.com/hyperledger/fabric/blob/2c2274d0519a1ce1eba596ff0a43636dce64d926/internal/pkg/comm/server.go#L271 it happens because of the key in this map.
Is it a bug or a feature? Yes, it's unwise to have the same subjects in all orgs, but why not for a dev environment? It's the default value; not many users change it.
I can create a JIRA issue if you say this is a bug.
Btw, I was able to add consenters with an empty port and host, is that a bug too?
yacovm (Sun, 10 May 2020 07:23:22 GMT):
@kopaygorodsky I stumbled across this piece of code once and also didn't understand why it was there. I discussed removing it with someone and then said I'd do it some later time... please open a JIRA and I'll remove it.
kopaygorodsky (Sun, 10 May 2020 08:52:19 GMT):
Perfect, https://jira.hyperledger.org/browse/FAB-17869
kopaygorodsky (Sun, 10 May 2020 13:04:47 GMT):
What do you think about the orderer accepting an empty host and port for a consenter? I can show a config block with this case. @yacovm @jyellick
kopaygorodsky (Sun, 10 May 2020 13:16:58 GMT):
Another problem I have is very similar to the use case described here https://github.com/rupeshtr78/raftleader#leader-election-after-node-failure, but it repeats every 10 seconds.
In the genesis block org1 has 2 consenters, org2 has 1 consenter.
I start org1 first with the genesis block described above, then download the latest config block from org1 and supply it to org2.
The orderer of org2 becomes a pre-candidate and its votes are always rejected; this repeats every few seconds.
The issue is that the votes are at a different term.
org1-consenter1 logs https://gist.github.com/kopaygorodsky/1d7af4272560098f762f0e3261eb1e9f
org1-consenter2 logs https://gist.github.com/kopaygorodsky/6a3cfca9c5e308e4fcf4d5b6d829b454
org2-consenter1 logs https://gist.github.com/kopaygorodsky/28e158eed3d9e46cf476c305ded622bd
Not sure I'd get a quorum if I added a 4th consenter in that case, because a leader is never elected...
yacovm (Sun, 10 May 2020 13:26:07 GMT):
> What do you think about orderer accepts empty host and port for a consenter? I can show config block with this case.
Well, what if you put a port that is incorrect or a host that is incorrect?
yacovm (Sun, 10 May 2020 13:26:37 GMT):
I think users need to be responsible and ensure the config updates are done with correct data
yacovm (Sun, 10 May 2020 13:29:12 GMT):
looking at your logs, this looks like a connection problem
kopaygorodsky (Sun, 10 May 2020 13:39:20 GMT):
ok, I've faced this case because of a bug in my code, fixed it already, just wanted to report.
kopaygorodsky (Sun, 10 May 2020 13:41:52 GMT):
```
Failed connecting to {"CAs":[{"Expired":false,"Issuer":"self","Subject":"CN=DST Root CA X3,O=Digital Signature Trust Co."},{"Expired":false,"Issuer":"self","Subject":"CN=org3.com,OU=Fabric,O=Hyperledger,ST=North Carolina,C=US"}],"Endpoint":"ordering-heh-haha-heh.proxy.org3.catalyst.dev.intellecteu.com:443"}: failed to create new connection: context deadline exceeded channel=testchainid
```
The orderer tries to connect to itself: ordering-heh-haha-heh.proxy.org3.catalyst.dev.intellecteu.com:443 is its own OSN.
kopaygorodsky (Sun, 10 May 2020 13:53:19 GMT):
hm, I recreated the whole setup again, same thing, but sometimes I see
```
2020-05-10 13:49:54.747 UTC [grpc] Warningf -> DEBU 1a5 grpc: addrConn.createTransport failed to connect to {ordering-heh-haha-heh.proxy.org3.catalyst.dev.intellecteu.com:443
```
kopaygorodsky (Sun, 10 May 2020 13:55:30 GMT):
I'm sure the TLS certs are fine; I see the orderer added them to its own pool. I fetched the config block and verified with openssl that org2-consenter1 has good certs signed by org3.comMSP. And I can connect to all orderers from the SDK (from org1 and from org2).
I don't need to add org2's certs to org1's orderer via the ORDERER_GENERAL_TLS_CLIENTROOTCAS variable because they are shared through the config block.
`updateTrustedRoots` shows that 2 TLS certs are added from org2. Exactly how it should be.
kopaygorodsky (Sun, 10 May 2020 14:04:56 GMT):
Will debug more
yacovm (Sun, 10 May 2020 14:05:42 GMT):
@kopaygorodsky you can turn on gRPC logging
yacovm (Sun, 10 May 2020 14:05:51 GMT):
it will print more information about why the TLS handshake fails
kopaygorodsky (Sun, 10 May 2020 14:49:20 GMT):
```
2020-05-10 14:45:32.368 UTC [grpc] infof -> DEBU 149 transport: loopyWriter.run returning. connection error: desc = "transport is closing"
2020-05-10 14:45:32.385 UTC [grpc] createTransport -> DEBU 14a grpc: addrConn.createTransport failed to connect to {ordering-petro-ord2-petro.proxy.org1.catalyst.dev.intellecteu.com:443 0
```
yacovm (Sun, 10 May 2020 15:07:23 GMT):
@kopaygorodsky https://github.com/hyperledger/fabric/pull/1229
kopaygorodsky (Sun, 10 May 2020 19:55:52 GMT):
saw it, many thanks. anyway I learned that OU is important :)
kopaygorodsky (Mon, 11 May 2020 16:58:47 GMT):
I'm stuck upon weird case in RAFT and have no idea what to do, I checked integration tests, but it's still not clear for me, so:
I have system channel with 3 orgs on it (in consortium and orderer blocks). (2 consetners from org1 +1 from org2 + 1 from org3)
I create an application channel and add only org1 and it's consenters(2), nothing more. Consenters of two others orgs say " do not belong to channel common or am forbidden pulling it" which is fine. Then I add org2 to ths application channel via config update (to Application, Orderer + consenter). Everything is right, but consenter of org2 is never able to pull blocks. First it says `Discovered 1 channels: [common], evaluating` and then `implicit policy evaluation failed - 0 sub-policies were satisfied, but this policy requires 1 of the 'Readers' sub-policies to be satisfied: permission denied` and `do not belong to channel common or am forbidden pulling it`. In current state of the channel I see 3 consenters (2 from org1 are serving channel and 1 from org2), 2 orgs in Application and Orderer blocks. State fetched from org1's consenter.
I see 3 forbidden responses from 3 other consenters, 2 of them from org1 should said OK at least. They deny his permissions by saying `[channel: common] Client authorization revoked for deliver request from 10.56.4.82:55346: implicit policy evaluation failed - 0 sub-policies were satisfied, but this policy requires 1 of the 'Readers' sub-policies to be satisfied: permission denied` which is weird.
Am I correct?
Leader's logs from org1: https://gist.github.com/kopaygorodsky/5e45df27d2dfa509fefd96a641c5e2e3
Consenter's logs from org2: https://gist.github.com/kopaygorodsky/65e24dae309cb40993b90b935e87d766
https://github.com/hyperledger/fabric/blob/master/integration/raft/config_test.go#L792 says it should work just fine, no?
kopaygorodsky (Mon, 11 May 2020 16:58:47 GMT):
I'm stuck upon weird case in RAFT and have no idea what to do, I checked integration tests, but it's still not clear for me, so:
I have system channel with 3 orgs on it (in consortium and orderer blocks). (2 consetners from org1 +1 from org2 + 1 from org3)
I create an application channel and add only org1 and it's consenters(2), nothing more. Consenters of two others orgs say " do not belong to channel common or am forbidden pulling it" which is fine. Then I add org2 to ths application channel via config update (to Application, Orderer + consenter). Everything is right, but consenter of org2 is never able to pull blocks. First it says `Discovered 1 channels: [common], evaluating` and then `implicit policy evaluation failed - 0 sub-policies were satisfied, but this policy requires 1 of the 'Readers' sub-policies to be satisfied: permission denied` and `do not belong to channel common or am forbidden pulling it`. In current state of the channel I see 3 consenters (2 from org1 are serving channel and 1 from org2), 2 orgs in Application and Orderer blocks. State fetched from org1's consenter, so when they respond to org2 - it should be satisfied.
I see 3 forbidden responses from 3 other consenters, 2 of them from org1 should said OK at least. They deny his permissions by saying `[channel: common] Client authorization revoked for deliver request from 10.56.4.82:55346: implicit policy evaluation failed - 0 sub-policies were satisfied, but this policy requires 1 of the 'Readers' sub-policies to be satisfied: permission denied` which is weird.
Am I correct?
Leader's logs from org1: https://gist.github.com/kopaygorodsky/5e45df27d2dfa509fefd96a641c5e2e3
Consenter's logs from org2: https://gist.github.com/kopaygorodsky/65e24dae309cb40993b90b935e87d766
https://github.com/hyperledger/fabric/blob/master/integration/raft/config_test.go#L792 says it should work just fine, no?
kopaygorodsky (Mon, 11 May 2020 16:58:47 GMT):
I'm stuck upon weird case in RAFT and have no idea what to do, I checked integration tests, but it's still not clear for me, so:
I have system channel with 3 orgs on it (in consortium and orderer blocks). (2 consetners from org1 +1 from org2 + 1 from org3)
I create an application channel and add only org1 and it's consenters(2), nothing more. Consenters of two others orgs say " do not belong to channel common or am forbidden pulling it" which is fine. Then I add org2 to ths application channel via config update (to Application, Orderer + consenter). Everything is right, but consenter of org2 is never able to pull blocks. First it says `Discovered 1 channels: [common], evaluating` and then `implicit policy evaluation failed - 0 sub-policies were satisfied, but this policy requires 1 of the 'Readers' sub-policies to be satisfied: permission denied` and `do not belong to channel common or am forbidden pulling it`. In current state of the channel I see 3 consenters (2 from org1 are serving channel and 1 from org2), 2 orgs in Application and Orderer blocks. State fetched from org1's consenter, so when they respond to org2 about policy evaluation - it should be satisfied. Read is (org1.member or org2.member) and ANY on Channel level
I see 3 forbidden responses from 3 other consenters, 2 of them from org1 should said OK at least. They deny his permissions by saying `[channel: common] Client authorization revoked for deliver request from 10.56.4.82:55346: implicit policy evaluation failed - 0 sub-policies were satisfied, but this policy requires 1 of the 'Readers' sub-policies to be satisfied: permission denied` which is weird.
Am I correct?
Leader's logs from org1: https://gist.github.com/kopaygorodsky/5e45df27d2dfa509fefd96a641c5e2e3
Consenter's logs from org2: https://gist.github.com/kopaygorodsky/65e24dae309cb40993b90b935e87d766
https://github.com/hyperledger/fabric/blob/master/integration/raft/config_test.go#L792 says it should work just fine, no?
kopaygorodsky (Mon, 11 May 2020 16:58:47 GMT):
I'm stuck upon weird case in RAFT and have no idea what to do, I checked integration tests, but it's still not clear for me, so:
I have system channel with 3 orgs on it (in consortium and orderer blocks). (2 consetners from org1 +1 from org2 + 1 from org3)
I create an application channel and add only org1 and it's consenters(2), nothing more. Consenters of two others orgs say " do not belong to channel common or am forbidden pulling it" which is fine. Then I add org2 to ths application channel via config update (to Application, Orderer + consenter). Everything is right, but consenter of org2 is never able to pull blocks. First it says `Discovered 1 channels: [common], evaluating` and then `implicit policy evaluation failed - 0 sub-policies were satisfied, but this policy requires 1 of the 'Readers' sub-policies to be satisfied: permission denied` and `do not belong to channel common or am forbidden pulling it`. In current state of the channel I see 3 consenters (2 from org1 are serving channel and 1 from org2), 2 orgs in Application and Orderer blocks. State fetched from org1's consenter, so when they respond to org2 about policy evaluation - it should be satisfied. Read is (org1.member or org2.member) and ANY on Channel level
@yacovm @jyellick if it does not bother you, could you check, please. Maybe it's more related to policies...
I see 3 forbidden responses from 3 other consenters, 2 of them from org1 should said OK at least. They deny his permissions by saying `[channel: common] Client authorization revoked for deliver request from 10.56.4.82:55346: implicit policy evaluation failed - 0 sub-policies were satisfied, but this policy requires 1 of the 'Readers' sub-policies to be satisfied: permission denied` which is weird.
Am I correct?
Leader's logs from org1: https://gist.github.com/kopaygorodsky/5e45df27d2dfa509fefd96a641c5e2e3
Consenter's logs from org2: https://gist.github.com/kopaygorodsky/65e24dae309cb40993b90b935e87d766
https://github.com/hyperledger/fabric/blob/master/integration/raft/config_test.go#L792 says it should work just fine, no?
kopaygorodsky (Mon, 11 May 2020 16:58:47 GMT):
I'm stuck upon weird case in RAFT and have no idea what to do, I checked integration tests, but it's still not clear for me, so:
I have system channel with 3 orgs on it (in consortium and orderer blocks). (2 consetners from org1 +1 from org2 + 1 from org3)
I create an application channel and add only org1 and it's consenters(2), nothing more. Consenters of two others orgs say " do not belong to channel common or am forbidden pulling it" which is fine. Then I add org2 to ths application channel via config update (to Application, Orderer + consenter). Everything is right, but consenter of org2 is never able to pull blocks. First it says `Discovered 1 channels: [common], evaluating` and then `implicit policy evaluation failed - 0 sub-policies were satisfied, but this policy requires 1 of the 'Readers' sub-policies to be satisfied: permission denied` and `do not belong to channel common or am forbidden pulling it`. In current state of the channel I see 3 consenters (2 from org1 are serving channel and 1 from org2), 2 orgs in Application and Orderer blocks. State fetched from org1's consenter, so when they respond to org2 about policy evaluation - it should be satisfied. Read is (org1.member or org2.member) and ANY on Channel level
I see 3 forbidden responses from 3 other consenters, 2 of them from org1 should said OK at least. They deny his permissions by saying `[channel: common] Client authorization revoked for deliver request from 10.56.4.82:55346: implicit policy evaluation failed - 0 sub-policies were satisfied, but this policy requires 1 of the 'Readers' sub-policies to be satisfied: permission denied` which is weird.
Leader's logs from org1: https://gist.github.com/kopaygorodsky/5e45df27d2dfa509fefd96a641c5e2e3
Consenter's logs from org2: https://gist.github.com/kopaygorodsky/65e24dae309cb40993b90b935e87d766
https://github.com/hyperledger/fabric/blob/master/integration/raft/config_test.go#L792 says it should work just fine, no?
kopaygorodsky (Mon, 11 May 2020 16:58:47 GMT):
I'm stuck upon weird case in RAFT and have no idea what to do, I checked integration tests, but it's still not clear for me, so:
I have system channel with 3 orgs on it (in consortium and orderer blocks). (2 consetners from org1 +1 from org2 + 1 from org3)
I create an application channel and add only org1 and it's consenters(2), nothing more. Consenters of two others orgs say " do not belong to channel common or am forbidden pulling it" which is fine. Then I add org2 to ths application channel via config update (to Application, Orderer + consenter). Everything is right, but consenter of org2 is never able to pull blocks. First it says `Discovered 1 channels: [common], evaluating` and then `implicit policy evaluation failed - 0 sub-policies were satisfied, but this policy requires 1 of the 'Readers' sub-policies to be satisfied: permission denied` and `do not belong to channel common or am forbidden pulling it`. In current state of the channel I see 3 consenters (2 from org1 are serving channel and 1 from org2), 2 orgs in Application and Orderer blocks. State fetched from org1's consenter, so when they respond to org2 about policy evaluation - it should be satisfied. Read is (org1.member or org2.member) and ANY on Channel level
I see 3 forbidden responses from 3 other consenters, 2 of them from org1 should said OK at least. They deny his permissions by saying `[channel: common] Client authorization revoked for deliver request from 10.56.4.82:55346: implicit policy evaluation failed - 0 sub-policies were satisfied, but this policy requires 1 of the 'Readers' sub-policies to be satisfied: permission denied` which is weird.
Leader's logs from org1: https://gist.github.com/kopaygorodsky/5e45df27d2dfa509fefd96a641c5e2e3
Consenter's logs from org2: https://gist.github.com/kopaygorodsky/65e24dae309cb40993b90b935e87d766
@yacovm @jyellick if it does not bother you, could you check, please. Maybe it's more related to policies...
https://github.com/hyperledger/fabric/blob/master/integration/raft/config_test.go#L792 says it should work just fine
kopaygorodsky (Mon, 11 May 2020 16:58:47 GMT):
I'm stuck upon weird case in RAFT and have no idea what to do, I checked integration tests, but it's still not clear for me, so:
I have system channel with 3 orgs on it (in consortium and orderer blocks). (2 consetners from org1 +1 from org2 + 1 from org3)
I create an application channel and add only org1 and it's consenters(2), nothing more. Consenters of two others orgs say " do not belong to channel common or am forbidden pulling it" which is fine. Then I add org2 to ths application channel via config update (to Application, Orderer + consenter). Everything is right, but consenter of org2 is never able to pull blocks. First it says `Discovered 1 channels: [common], evaluating` and then `implicit policy evaluation failed - 0 sub-policies were satisfied, but this policy requires 1 of the 'Readers' sub-policies to be satisfied: permission denied` and `do not belong to channel common or am forbidden pulling it`. In current state of the channel I see 3 consenters (2 from org1 are serving channel and 1 from org2), 2 orgs in Application and Orderer blocks. State fetched from org1's consenter, so when they respond to org2 about policy evaluation - it should be satisfied. Read is (org1.member or org2.member) and ANY on Channel level.
I see 3 forbidden responses from 3 other consenters, 2 of them from org1 should said OK at least. They deny his permissions by saying `[channel: common] Client authorization revoked for deliver request from 10.56.4.82:55346: implicit policy evaluation failed - 0 sub-policies were satisfied, but this policy requires 1 of the 'Readers' sub-policies to be satisfied: permission denied` which is weird.
Leader's logs from org1: https://gist.github.com/kopaygorodsky/5e45df27d2dfa509fefd96a641c5e2e3
Consenter's logs from org2: https://gist.github.com/kopaygorodsky/65e24dae309cb40993b90b935e87d766
@yacovm @jyellick if it does not bother you, could you check, please. Maybe it's more related to policies...
https://github.com/hyperledger/fabric/blob/master/integration/raft/config_test.go#L792 says it should work just fine
kopaygorodsky (Mon, 11 May 2020 16:58:47 GMT):
I'm stuck upon weird case in RAFT and have no idea what to do, I checked integration tests, but it's still not clear for me, so:
I have system channel with 3 orgs on it (in consortium and orderer blocks). (2 consetners from org1 +1 from org2 + 1 from org3)
I create an application channel and add only org1 and it's consenters(2), nothing more. Consenters of two others orgs say " do not belong to channel common or am forbidden pulling it" which is fine. Then I add org2 to ths application channel via config update (to Application, Orderer + consenter). Everything is right, but consenter of org2 is never able to pull blocks. First it says `Discovered 1 channels: [common], evaluating` and then `implicit policy evaluation failed - 0 sub-policies were satisfied, but this policy requires 1 of the 'Readers' sub-policies to be satisfied: permission denied` and `do not belong to channel common or am forbidden pulling it`. In current state of the channel I see 3 consenters (2 from org1 are serving channel and 1 from org2), 2 orgs in Application and Orderer blocks. State fetched from org1's consenter, so when they respond to org2 about policy evaluation - it should be satisfied. Read is (org1.member or org2.member) and ANY on Channel level.
I see 3 forbidden responses from 3 other consenters, 2 of them from org1 should said OK at least. They deny his permissions by saying `[channel: common] Client authorization revoked for deliver request from 10.56.4.82:55346: implicit policy evaluation failed - 0 sub-policies were satisfied, but this policy requires 1 of the 'Readers' sub-policies to be satisfied: permission denied` which is weird.
Leader's logs from org1: https://gist.github.com/kopaygorodsky/5e45df27d2dfa509fefd96a641c5e2e3
Consenter's logs from org2: https://gist.github.com/kopaygorodsky/65e24dae309cb40993b90b935e87d766
@yacovm @jyellick if it does not bother you, could you check, please. Maybe it's more related to policies, but why system channel with 4 consenters is in quorum and returns data. Policies satisfied then and identities of orderer are ok.
https://github.com/hyperledger/fabric/blob/master/integration/raft/config_test.go#L792 says it should work just fine
kopaygorodsky (Mon, 11 May 2020 16:58:47 GMT):
I'm stuck upon weird case in RAFT and have no idea what to do, I checked integration tests, but it's still not clear for me, so:
I have system channel with 3 orgs on it (in consortium and orderer blocks). (2 consetners from org1 +1 from org2 + 1 from org3)
I create an application channel and add only org1 and it's consenters(2), nothing more. Consenters of two others orgs say " do not belong to channel common or am forbidden pulling it" which is fine. Then I add org2 to ths application channel via config update (to Application, Orderer + consenter). Everything is right, but consenter of org2 is never able to pull blocks. First it says `Discovered 1 channels: [common], evaluating` and then `implicit policy evaluation failed - 0 sub-policies were satisfied, but this policy requires 1 of the 'Readers' sub-policies to be satisfied: permission denied` and `do not belong to channel common or am forbidden pulling it`. In current state of the channel I see 3 consenters (2 from org1 are serving channel and 1 from org2), 2 orgs in Application and Orderer blocks. State fetched from org1's consenter, so when they respond to org2 about policy evaluation - it should be satisfied. Read is (org1.member or org2.member) and ANY on Channel level. Even I put ANY on all levels (any readers, writers) in Application and Orderer - same error.
I see 3 forbidden responses from 3 other consenters, 2 of them from org1 should said OK at least. They deny his permissions by saying `[channel: common] Client authorization revoked for deliver request from 10.56.4.82:55346: implicit policy evaluation failed - 0 sub-policies were satisfied, but this policy requires 1 of the 'Readers' sub-policies to be satisfied: permission denied` which is weird.
Leader's logs from org1: https://gist.github.com/kopaygorodsky/5e45df27d2dfa509fefd96a641c5e2e3
Consenter's logs from org2: https://gist.github.com/kopaygorodsky/65e24dae309cb40993b90b935e87d766
@yacovm @jyellick if it does not bother you, could you check, please. Maybe it's more related to policies, but why system channel with 4 consenters is in quorum and returns data. Policies satisfied then and identities of orderer are ok.
https://github.com/hyperledger/fabric/blob/master/integration/raft/config_test.go#L792 says it should work just fine
kopaygorodsky (Mon, 11 May 2020 16:58:47 GMT):
I'm stuck upon weird case in RAFT and have no idea what to do, I checked integration tests, but it's still not clear for me, so:
I have system channel with 3 orgs on it (in consortium and orderer blocks). (2 consetners from org1 +1 from org2 + 1 from org3)
I create an application channel and add only org1 and it's consenters(2), nothing more. Consenters of two others orgs say " do not belong to channel common or am forbidden pulling it" which is fine. Then I add org2 to ths application channel via config update (to Application, Orderer + consenter). Everything is right, but consenter of org2 is never able to pull blocks. First it says `Discovered 1 channels: [common], evaluating` and then `implicit policy evaluation failed - 0 sub-policies were satisfied, but this policy requires 1 of the 'Readers' sub-policies to be satisfied: permission denied` and `do not belong to channel common or am forbidden pulling it`. In current state of the channel I see 3 consenters (2 from org1 are serving channel and 1 from org2), 2 orgs in Application and Orderer blocks. State fetched from org1's consenter, so when they respond to org2 about policy evaluation - it should be satisfied. Read is (org1.member or org2.member) and ANY on Channel level.
I see 3 forbidden responses from 3 other consenters, 2 of them from org1 should said OK at least. They deny his permissions by saying `[channel: common] Client authorization revoked for deliver request from 10.56.4.82:55346: implicit policy evaluation failed - 0 sub-policies were satisfied, but this policy requires 1 of the 'Readers' sub-policies to be satisfied: permission denied` which is weird.
Leader's logs from org1: https://gist.github.com/kopaygorodsky/5e45df27d2dfa509fefd96a641c5e2e3
Consenter's logs from org2: https://gist.github.com/kopaygorodsky/65e24dae309cb40993b90b935e87d766
@yacovm @jyellick if it does not bother you, could you check, please. Maybe it's more related to policies, but why system channel with 4 consenters is in quorum and returns data. Policies satisfied then and identities of orderer are ok.
https://github.com/hyperledger/fabric/blob/master/integration/raft/config_test.go#L792 says it should work just fine
update: I've put for Orderer, Application policies ANY Reader, ANY Writer. It's working after 6 mins of Successfully sent StepRequest to 3 after failed attempt(s) channel=common node=2
2020-05-11 18:06:24.428 UTC [orderer.consensus.etcdraft] logSendFailure -> ERRO 8a4 Failed to send StepRequest to 3, because: aborted channel=common node=2
I changed all policies on Orderer, Application to ANY reader, ANY writer, the behaviour of the system was exactly the same, but after 6 mins org2's consenter elected leader and pulled blocks
kopaygorodsky (Mon, 11 May 2020 16:58:47 GMT):
I'm stuck upon weird case in RAFT and have no idea what to do, I checked integration tests, but it's still not clear for me, so:
I have system channel with 3 orgs on it (in consortium and orderer blocks). (2 consetners from org1 +1 from org2 + 1 from org3)
I create an application channel and add only org1 and it's consenters(2), nothing more. Consenters of two others orgs say " do not belong to channel common or am forbidden pulling it" which is fine. Then I add org2 to ths application channel via config update (to Application, Orderer + consenter). Everything is right, but consenter of org2 is never able to pull blocks. First it says `Discovered 1 channels: [common], evaluating` and then `implicit policy evaluation failed - 0 sub-policies were satisfied, but this policy requires 1 of the 'Readers' sub-policies to be satisfied: permission denied` and `do not belong to channel common or am forbidden pulling it`. In current state of the channel I see 3 consenters (2 from org1 are serving channel and 1 from org2), 2 orgs in Application and Orderer blocks. State fetched from org1's consenter, so when they respond to org2 about policy evaluation - it should be satisfied. Read is (org1.member or org2.member) and ANY on Channel level.
I see 3 forbidden responses from 3 other consenters, 2 of them from org1 should said OK at least. They deny his permissions by saying `[channel: common] Client authorization revoked for deliver request from 10.56.4.82:55346: implicit policy evaluation failed - 0 sub-policies were satisfied, but this policy requires 1 of the 'Readers' sub-policies to be satisfied: permission denied` which is weird.
Leader's logs from org1: https://gist.github.com/kopaygorodsky/5e45df27d2dfa509fefd96a641c5e2e3
Consenter's logs from org2: https://gist.github.com/kopaygorodsky/65e24dae309cb40993b90b935e87d766
@yacovm @jyellick if it does not bother you, could you check, please. Maybe it's more related to policies, but why system channel with 4 consenters is in quorum and returns data. Policies satisfied then and identities of orderer are ok.
https://github.com/hyperledger/fabric/blob/master/integration/raft/config_test.go#L792 says it should work just fine
update: I've put for Orderer, Application policies ANY Reader, ANY Writer. It's working after 6 mins of Successfully sent StepRequest to 3 after failed attempt(s) -> Failed to send StepRequest to 3.
I see when org1's consenter validates policy with org2.comMSP identity - it's valid.
kopaygorodsky (Mon, 11 May 2020 16:58:47 GMT):
I'm stuck upon weird case in RAFT and have no idea what to do, I checked integration tests, but it's still not clear for me, so:
I have system channel with 3 orgs on it (in consortium and orderer blocks). (2 consetners from org1 +1 from org2 + 1 from org3)
I create an application channel and add only org1 and it's consenters(2), nothing more. Consenters of two others orgs say " do not belong to channel common or am forbidden pulling it" which is fine. Then I add org2 to ths application channel via config update (to Application, Orderer + consenter). Everything is right, but consenter of org2 is never able to pull blocks. First it says `Discovered 1 channels: [common], evaluating` and then `implicit policy evaluation failed - 0 sub-policies were satisfied, but this policy requires 1 of the 'Readers' sub-policies to be satisfied: permission denied` and `do not belong to channel common or am forbidden pulling it`. In current state of the channel I see 3 consenters (2 from org1 are serving channel and 1 from org2), 2 orgs in Application and Orderer blocks. State fetched from org1's consenter, so when they respond to org2 about policy evaluation - it should be satisfied. Read is (org1.member or org2.member) and ANY on Channel level.
I see 3 forbidden responses from 3 other consenters, 2 of them from org1 should said OK at least. They deny his permissions by saying `[channel: common] Client authorization revoked for deliver request from 10.56.4.82:55346: implicit policy evaluation failed - 0 sub-policies were satisfied, but this policy requires 1 of the 'Readers' sub-policies to be satisfied: permission denied` which is weird.
Leader's logs from org1: https://gist.github.com/kopaygorodsky/5e45df27d2dfa509fefd96a641c5e2e3
Consenter's logs from org2: https://gist.github.com/kopaygorodsky/65e24dae309cb40993b90b935e87d766
@yacovm @jyellick if it does not bother you, could you check, please. Maybe it's more related to policies, but why system channel with 4 consenters is in quorum and returns data. Policies satisfied then and identities of orderer are ok.
https://github.com/hyperledger/fabric/blob/master/integration/raft/config_test.go#L792 says it should work just fine
update: I've put for Orderer, Application policies ANY Reader, ANY Writer. It's working after 6 mins of Successfully sent StepRequest to 3 after failed attempt(s) -> Failed to send StepRequest to 3.
I see when org1's consenter validates policy with org2.comMSP identity - it's valid. Why in previous case it org1 was expecting identity from own msp? (https://gist.github.com/kopaygorodsky/5e45df27d2dfa509fefd96a641c5e2e3#file-org1-consenter1-txt-L224)
kopaygorodsky (Mon, 11 May 2020 16:58:47 GMT):
I'm stuck upon weird case in RAFT and have no idea what to do, I checked integration tests, but it's still not clear for me, so:
I have system channel with 3 orgs on it (in consortium and orderer blocks). (2 consetners from org1 +1 from org2 + 1 from org3)
I create an application channel and add only org1 and it's consenters(2), nothing more. Consenters of two others orgs say " do not belong to channel common or am forbidden pulling it" which is fine. Then I add org2 to ths application channel via config update (to Application, Orderer + consenter). Everything is right, but consenter of org2 is never able to pull blocks. First it says `Discovered 1 channels: [common], evaluating` and then `implicit policy evaluation failed - 0 sub-policies were satisfied, but this policy requires 1 of the 'Readers' sub-policies to be satisfied: permission denied` and `do not belong to channel common or am forbidden pulling it`. In current state of the channel I see 3 consenters (2 from org1 are serving channel and 1 from org2), 2 orgs in Application and Orderer blocks. State fetched from org1's consenter, so when they respond to org2 about policy evaluation - it should be satisfied. Read is (org1.member or org2.member) and ANY on Channel level.
I see 3 forbidden responses from 3 other consenters, 2 of them from org1 should said OK at least. They deny his permissions by saying `[channel: common] Client authorization revoked for deliver request from 10.56.4.82:55346: implicit policy evaluation failed - 0 sub-policies were satisfied, but this policy requires 1 of the 'Readers' sub-policies to be satisfied: permission denied` which is weird.
Leader's logs from org1: https://gist.github.com/kopaygorodsky/5e45df27d2dfa509fefd96a641c5e2e3
Consenter's logs from org2: https://gist.github.com/kopaygorodsky/65e24dae309cb40993b90b935e87d766
@yacovm @jyellick if it does not bother you, could you check, please. Maybe it's more related to policies, but why system channel with 4 consenters is in quorum and returns data. Policies satisfied then and identities of orderer are ok.
https://github.com/hyperledger/fabric/blob/master/integration/raft/config_test.go#L792 says it should work just fine
update1: it works when I add specify organizations and their consenters at the creation step (no channel update later). No issues means policies are satisfied. Every participant reads,writes into the channel.
update2: I've put for Orderer, Application policies ANY Reader, ANY Writer. It's working after 6 mins of Successfully sent StepRequest to 3 after failed attempt(s) -> Failed to send StepRequest to 3.
I see when org1's consenter validates policy with org2.comMSP identity - it's valid. Why in previous case it org1 was expecting identity from own msp? (https://gist.github.com/kopaygorodsky/5e45df27d2dfa509fefd96a641c5e2e3#file-org1-consenter1-txt-L224)
kopaygorodsky (Mon, 11 May 2020 16:58:47 GMT):
I'm stuck on a weird case with Raft and have no idea what to do. I checked the integration tests, but it's still not clear to me, so:
I have a system channel with 3 orgs on it (in the consortium and orderer blocks): 2 consenters from org1 + 1 from org2 + 1 from org3.
I create an application channel with only org1 and its consenters (2), nothing more. The consenters of the two other orgs say `do not belong to channel common or am forbidden pulling it`, which is fine. Then I add org2 to this application channel via a config update (to Application, Orderer + consenter). Everything looks right, but org2's consenter is never able to pull blocks. First it says `Discovered 1 channels: [common], evaluating`, then `implicit policy evaluation failed - 0 sub-policies were satisfied, but this policy requires 1 of the 'Readers' sub-policies to be satisfied: permission denied` and `do not belong to channel common or am forbidden pulling it`. In the current state of the channel I see 3 consenters (2 from org1 serving the channel and 1 from org2) and 2 orgs in the Application and Orderer blocks. That state was fetched from org1's consenter, so when they respond to org2 about policy evaluation it should be satisfied. Readers is (org1.member OR org2.member) and ANY at the Channel level.
I see 3 forbidden responses from the 3 other consenters; at least the 2 from org1 should have said OK. They deny its permission with `[channel: common] Client authorization revoked for deliver request from 10.56.4.82:55346: implicit policy evaluation failed - 0 sub-policies were satisfied, but this policy requires 1 of the 'Readers' sub-policies to be satisfied: permission denied`, which is weird.
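(For anyone hitting the same error: the Readers setup described above corresponds roughly to the following configtx.yaml fragment. This is a sketch, not the poster's actual config; the MSP IDs `Org1MSP`/`Org2MSP` are placeholders. The "requires 1 of the 'Readers' sub-policies" message is exactly an ImplicitMeta `ANY Readers` policy finding no satisfied org-level Readers sub-policy.)

```yaml
# Hypothetical configtx.yaml fragment matching the setup described above.
# MSP IDs (Org1MSP, Org2MSP) are placeholders -- adjust to your own.
Organizations:
  - &Org1
    Name: Org1
    ID: Org1MSP
    Policies:
      Readers:
        Type: Signature
        Rule: "OR('Org1MSP.member')"
  - &Org2
    Name: Org2
    ID: Org2MSP
    Policies:
      Readers:
        Type: Signature
        Rule: "OR('Org2MSP.member')"

Channel: &ChannelDefaults
  Policies:
    # ImplicitMeta policy: satisfied if ANY org-level Readers
    # sub-policy in the current channel config passes. This is the
    # policy that produces "0 sub-policies were satisfied, but this
    # policy requires 1 of the 'Readers' sub-policies".
    Readers:
      Type: ImplicitMeta
      Rule: "ANY Readers"
```

So for org2's consenter to pass Deliver authorization, the channel config the *serving* orderer evaluates against must already contain org2's MSP with a Readers policy that its certificate satisfies.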
Leader's logs from org1: https://gist.github.com/kopaygorodsky/5e45df27d2dfa509fefd96a641c5e2e3
Consenter's logs from org2: https://gist.github.com/kopaygorodsky/65e24dae309cb40993b90b935e87d766
@yacovm @jyellick if it's no bother, could you take a look, please? Maybe it's more related to policies, but then why does the system channel with 4 consenters have quorum and return data? Policies are satisfied there, so the orderer identities are OK.
https://github.com/hyperledger/fabric/blob/master/integration/raft/config_test.go#L792 says it should work just fine
update1: it works when I add specify organizations and their consenters at the creation step (no channel update later). No issues means policies are satisfied. Every participant reads,writes into the channel.
update2: I've put for Orderer, Application policies ANY Reader, ANY Writer. It's working after 6 mins of Successfully sent StepRequest to 3 after failed attempt(s) -> Failed to send StepRequest to 3.
I see when org1's consenter validates policy with org2.comMSP identity - it's valid. Why in previous case it org1 was expecting identity from own msp? (https://gist.github.com/kopaygorodsky/5e45df27d2dfa509fefd96a641c5e2e3#file-org1-consenter1-txt-L224)
kopaygorodsky (Mon, 11 May 2020 16:58:47 GMT):
I'm stuck upon weird case in RAFT and have no idea what to do, I checked integration tests, but it's still not clear for me, so:
I have system channel with 3 orgs on it (in consortium and orderer blocks). (2 consetners from org1 +1 from org2 + 1 from org3)
I create an application channel and with org1 and it's consenters(2), nothing more. Consenters of two others orgs say " do not belong to channel common or am forbidden pulling it" which is fine. Then I add org2 to ths application channel via config update (to Application, Orderer + consenter). Everything is right, but consenter of org2 is never able to pull blocks. First it says `Discovered 1 channels: [common], evaluating` and `do not belong to channel common or am forbidden pulling it`. In current state of the channel I see 3 consenters (2 from org1 are serving channel and 1 from org2), 2 orgs in Application and Orderer blocks. State fetched from org1's consenter, so when they respond to org2 about policy evaluation - it should be satisfied. Read is (org1.member or org2.member) and ANY on Channel level.
I see 3 forbidden responses from 3 other consenters, 2 of them from org1 should said OK at least. They deny his permissions by saying `[channel: common] Client authorization revoked for deliver request from 10.56.4.82:55346: implicit policy evaluation failed - 0 sub-policies were satisfied, but this policy requires 1 of the 'Readers' sub-policies to be satisfied: permission denied` which is weird.
Leader's logs from org1: https://gist.github.com/kopaygorodsky/5e45df27d2dfa509fefd96a641c5e2e3
Consenter's logs from org2: https://gist.github.com/kopaygorodsky/65e24dae309cb40993b90b935e87d766
@yacovm @jyellick if it does not bother you, could you check, please. Maybe it's more related to policies, but why system channel with 4 consenters is in quorum and returns data. Policies satisfied then and identities of orderer are ok.
https://github.com/hyperledger/fabric/blob/master/integration/raft/config_test.go#L792 says it should work just fine
update1: it works when I add specify organizations and their consenters at the creation step (no channel update later). No issues means policies are satisfied. Every participant reads,writes into the channel.
update2: I've put for Orderer, Application policies ANY Reader, ANY Writer. It's working after 6 mins of Successfully sent StepRequest to 3 after failed attempt(s) -> Failed to send StepRequest to 3.
I see when org1's consenter validates policy with org2.comMSP identity - it's valid. Why in previous case it org1 was expecting identity from own msp? (https://gist.github.com/kopaygorodsky/5e45df27d2dfa509fefd96a641c5e2e3#file-org1-consenter1-txt-L224)
kopaygorodsky (Mon, 11 May 2020 16:58:47 GMT):
I'm stuck upon weird case in RAFT and have no idea what to do, I checked integration tests, but it's still not clear for me, so:
I have system channel with 3 orgs on it (in consortium and orderer blocks). (2 consetners from org1 +1 from org2 + 1 from org3)
I create an application channel and with org1 and it's consenters(2), nothing more. Consenters of two others orgs say " do not belong to channel common or am forbidden pulling it" which is fine. Then I add org2 to ths application channel via config update (to Application, Orderer + consenter). Everything is right, but consenter of org2 is never able to pull blocks. First it says `Discovered 1 channels: [common], evaluating` and `do not belong to channel common or am forbidden pulling it`. In current state of the channel I see 3 consenters (2 from org1 and recently added1 from org2), 2 orgs in Application and Orderer blocks. State fetched from org1's consenter, so when they respond to org2 about policy evaluation - it should be satisfied. Read is (org1.member or org2.member) and ANY on Channel level.
I see 3 forbidden responses from 3 other consenters, 2 of them from org1 should said OK at least. They deny his permissions by saying `[channel: common] Client authorization revoked for deliver request from 10.56.4.82:55346: implicit policy evaluation failed - 0 sub-policies were satisfied, but this policy requires 1 of the 'Readers' sub-policies to be satisfied: permission denied` which is weird.
Leader's logs from org1: https://gist.github.com/kopaygorodsky/5e45df27d2dfa509fefd96a641c5e2e3
Consenter's logs from org2: https://gist.github.com/kopaygorodsky/65e24dae309cb40993b90b935e87d766
@yacovm @jyellick if it does not bother you, could you check, please. Maybe it's more related to policies, but why system channel with 4 consenters is in quorum and returns data. Policies satisfied then and identities of orderer are ok.
https://github.com/hyperledger/fabric/blob/master/integration/raft/config_test.go#L792 says it should work just fine
update1: it works when I add specify organizations and their consenters at the creation step (no channel update later). No issues means policies are satisfied. Every participant reads,writes into the channel.
update2: I've put for Orderer, Application policies ANY Reader, ANY Writer. It's working after 6 mins of Successfully sent StepRequest to 3 after failed attempt(s) -> Failed to send StepRequest to 3.
I see when org1's consenter validates policy with org2.comMSP identity - it's valid. Why in previous case it org1 was expecting identity from own msp? (https://gist.github.com/kopaygorodsky/5e45df27d2dfa509fefd96a641c5e2e3#file-org1-consenter1-txt-L224)
kopaygorodsky (Mon, 11 May 2020 16:58:47 GMT):
I'm stuck upon weird case in RAFT and have no idea what to do, I checked integration tests, but it's still not clear for me, so:
I have system channel with 3 orgs on it (in consortium and orderer blocks). (2 consetners from org1 +1 from org2 + 1 from org3)
I create an application channel and with org1 and it's consenters(2), nothing more. Consenters of two others orgs say " do not belong to channel common or am forbidden pulling it" which is fine. Then I add org2 to ths application channel via config update (to Application, Orderer + consenter). Everything is right, but consenter of org2 is never able to pull blocks. First it says `Discovered 1 channels: [common], evaluating` and `do not belong to channel common or am forbidden pulling it`. In current state of the channel I see 3 consenters (2 from org1 and recently added1 from org2), 2 orgs in Application and Orderer blocks. State fetched from org1's consenter, so when they respond to org2 about policy evaluation - it should be satisfied. Read is (org1.member or org2.member) and ANY on Channel level.
I see 3 forbidden responses from 3 other consenters, 2 of them from org1 should have said OK, at least, I think. They deny his permissions by saying `[channel: common] Client authorization revoked for deliver request from 10.56.4.82:55346: implicit policy evaluation failed - 0 sub-policies were satisfied, but this policy requires 1 of the 'Readers' sub-policies to be satisfied: permission denied` which is weird.
Leader's logs from org1: https://gist.github.com/kopaygorodsky/5e45df27d2dfa509fefd96a641c5e2e3
Consenter's logs from org2: https://gist.github.com/kopaygorodsky/65e24dae309cb40993b90b935e87d766
@yacovm @jyellick if it does not bother you, could you check, please. Maybe it's more related to policies, but why system channel with 4 consenters is in quorum and returns data. Policies satisfied then and identities of orderer are ok.
https://github.com/hyperledger/fabric/blob/master/integration/raft/config_test.go#L792 says it should work just fine
update1: it works when I add specify organizations and their consenters at the creation step (no channel update later). No issues means policies are satisfied. Every participant reads,writes into the channel.
update2: I've put for Orderer, Application policies ANY Reader, ANY Writer. It's working after 6 mins of Successfully sent StepRequest to 3 after failed attempt(s) -> Failed to send StepRequest to 3.
I see when org1's consenter validates policy with org2.comMSP identity - it's valid. Why in previous case it org1 was expecting identity from own msp? (https://gist.github.com/kopaygorodsky/5e45df27d2dfa509fefd96a641c5e2e3#file-org1-consenter1-txt-L224)
kopaygorodsky (Mon, 11 May 2020 16:58:47 GMT):
I'm stuck upon weird case in RAFT and have no idea what to do, I checked integration tests, but it's still not clear for me, so:
I have system channel with 3 orgs on it (in consortium and orderer blocks). (2 consetners from org1 +1 from org2 + 1 from org3)
I create an application channel and with org1 and it's consenters(2), nothing more. Consenters of two others orgs say " do not belong to channel common or am forbidden pulling it" which is fine. Then I add org2 to ths application channel via config update (to Application, Orderer + consenter). Everything is right, but consenter of org2 is never able to pull blocks. First it says `Discovered 1 channels: [common], evaluating` and `do not belong to channel common or am forbidden pulling it`. In current state of the channel I see 3 consenters (2 from org1 and recently added1 from org2), 2 orgs in Application and Orderer blocks. State fetched from org1's consenter, so when they respond to org2 about policy evaluation - it should be satisfied. Read is (org1.member or org2.member) and ANY on Channel level.
I see 3 forbidden responses from 3 other consenters, 2 of them from org1 should have said OK, at least, I think. They deny his permissions by saying `[channel: common] Client authorization revoked for deliver request from 10.56.4.82:55346: implicit policy evaluation failed - 0 sub-policies were satisfied, but this policy requires 1 of the 'Readers' sub-policies to be satisfied: permission denied` which is weird.
Leader's logs from org1: https://gist.github.com/kopaygorodsky/5e45df27d2dfa509fefd96a641c5e2e3
Consenter's logs from org2: https://gist.github.com/kopaygorodsky/65e24dae309cb40993b90b935e87d766
@yacovm @jyellick if it does not bother you, could you check, please. Maybe it's more related to policies, but why system channel with 4 consenters is in quorum and returns data. Policies satisfied then and identities of orderer are ok.
https://github.com/hyperledger/fabric/blob/master/integration/raft/config_test.go#L792 says it should work just fine
update1: it works when I specify organizations and their consenters at the creation step (no channel update later). No issues means policies are satisfied. Every participant reads,writes into the channel.
update2: I've put for Orderer, Application policies ANY Reader, ANY Writer. It's working after 6 mins of Successfully sent StepRequest to 3 after failed attempt(s) -> Failed to send StepRequest to 3.
I see when org1's consenter validates policy with org2.comMSP identity - it's valid. Why in previous case it org1 was expecting identity from own msp? (https://gist.github.com/kopaygorodsky/5e45df27d2dfa509fefd96a641c5e2e3#file-org1-consenter1-txt-L224)
yacovm (Mon, 11 May 2020 22:32:47 GMT):
@kopaygorodsky it does not bother me, but I honestly don't understand what you're asking; your questions are spread like a sparse matrix and it's very hard to follow
yacovm (Mon, 11 May 2020 22:33:08 GMT):
if you can narrow down the questions a bit that would help
kopaygorodsky (Mon, 11 May 2020 22:34:06 GMT):
yes, I see it too; I've edited this question a few times and it's getting more complex. I'll try to provide a doc with a clean description of the problem.
yacovm (Mon, 11 May 2020 22:34:20 GMT):
or open a JIRA
yacovm (Mon, 11 May 2020 22:34:26 GMT):
and tag me and Jason
kopaygorodsky (Mon, 11 May 2020 22:35:10 GMT):
yes, thank you
kopaygorodsky (Tue, 12 May 2020 05:10:49 GMT):
I fixed this issue, sorry for the misleading report. The problem was in my business logic: I had configured `SignaturePolicy_NOutOf_` the wrong way when adding an org to the channel; I wasn't familiar with it.
After many hours of debugging it works :)
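For anyone else hitting this: the `n_out_of` rule that caused the trouble has a simple shape in the decoded config JSON. Below is a minimal sketch of how such a rule is evaluated; the field names mirror the `common.SignaturePolicy` proto as configtxlator decodes it, and the example policy and principal indices are made up for illustration.

```python
# Hypothetical evaluator for the n_out_of signature-policy shape that
# configtxlator produces when decoding a channel config. Field names
# mirror the common.SignaturePolicy proto; the example rule is invented.

def evaluate(rule, signers):
    """True if `signers` (indices into the policy's identities list that
    produced valid signatures) satisfies `rule`."""
    if "signed_by" in rule:
        return rule["signed_by"] in signers
    n_out_of = rule["n_out_of"]
    satisfied = sum(1 for sub in n_out_of["rules"] if evaluate(sub, signers))
    return satisfied >= n_out_of["n"]

# A 1-of-2 Readers rule, i.e. OR(Org1MSP.member, Org2MSP.member), where
# identities[0] = Org1MSP.member and identities[1] = Org2MSP.member:
readers = {"n_out_of": {"n": 1, "rules": [{"signed_by": 0}, {"signed_by": 1}]}}

print(evaluate(readers, {1}))    # True: org2's signature alone satisfies 1-of-2
print(evaluate(readers, set()))  # False: no valid signatures at all
```

Setting `n` to 2 here would require both orgs to sign, which is the kind of misconfiguration that silently denies a single org's deliver requests.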
chintanr11 (Thu, 14 May 2020 09:18:06 GMT):
Has joined the channel.
chintanr11 (Thu, 14 May 2020 09:18:07 GMT):
Hi, I would like to know how to find out whether newly added orderer nodes have synced the blocks in a dynamic network. I have explained my situation here:
https://lists.hyperledger.org/g/fabric/message/8304 if anyone could help!
AbdullahJoyia (Sat, 16 May 2020 09:22:23 GMT):
Has joined the channel.
kopaygorodsky (Sat, 16 May 2020 15:52:59 GMT):
Just try to get the channel info from the new orderer; you won't be able to if the orderer is out of sync and quorum is not reached in the ordering service. You will get 'consenter error'. If you get a successful response, the ordering service is fine and you can push transactions; the new orderer will catch up soon.
kopaygorodsky (Sat, 16 May 2020 15:54:16 GMT):
Basically the new node will reroute your request to the leader; in the meantime the leader checks the log on every follower to see if quorum is reached.
kopaygorodsky (Sat, 16 May 2020 15:55:36 GMT):
Not sure though; I've just started learning the Raft protocol and how it works.
kopaygorodsky (Sat, 16 May 2020 15:57:41 GMT):
Also, you will be able to commit new transactions while your orderer is out of sync, as long as the ordering service is in quorum.
chintanr11 (Mon, 18 May 2020 10:53:05 GMT):
In a production environment we cannot try to fetch the configuration blocks if we are providing HLF as a service to others.
kopaygorodsky (Mon, 18 May 2020 12:32:27 GMT):
btw, what is the name of your service? I would like to try it for deployment
kopaygorodsky (Mon, 18 May 2020 12:36:21 GMT):
You can use the metrics API to see if the ordering service is in quorum: ask all your orderers for a leader; if you get all 0s, it means no leader is elected, so a consensus problem exists.
chintanr11 (Mon, 18 May 2020 12:44:44 GMT):
The metrics API returns 0 in every case where an orderer is not the leader; that does not guarantee whether it is in sync or not. Yes, there are other metrics I can use. I just want an idea of which one would be the optimal choice (or rather the correct one too!)
kopaygorodsky (Mon, 18 May 2020 12:50:35 GMT):
Ask all your orderers if there is a leader; if you get all 0s in response, no leader is present and consensus is not reached.
chintanr11 (Mon, 18 May 2020 12:58:07 GMT):
I am adding a new orderer node. The original network will always have N orderers, with one leader there.
kopaygorodsky (Mon, 18 May 2020 13:26:12 GMT):
What do you mean by adding an orderer node? Just running the container, or are you actually doing a channel update with a new consenter?
kopaygorodsky (Mon, 18 May 2020 13:28:40 GMT):
Because if you managed to do a channel update, it means the orderers were in consensus; but on the next term (logical time tick) they may or may not be, depending on the number of working consenters.
rahulhegde (Mon, 18 May 2020 15:44:52 GMT):
Hello @jyellick, `- ORDERER_GENERAL_AUTHENTICATION_NOEXPIRATIONCHECKS=true` looks like a capability-controlled feature that would be active if the orderer capability is set to 1.4.2. The channel where the expired admin certificate is present was created on Fabric v1.0.6. Would it still work to renew the admin certificate on the channel?
jyellick (Mon, 18 May 2020 19:58:19 GMT):
Expiration checks will not be performed unless you have channel capabilities set to enable them. So, you may not need `ORDERER_GENERAL_AUTHENTICATION_NOEXPIRATIONCHECKS=true`. However, enabling this variable will not hurt, and will ensure expiration checks are skipped regardless of channel capabilities.
ShobhitSrivastava (Tue, 19 May 2020 07:27:12 GMT):
Hi @yacovm, is there a way to find out the time between a transaction reaching the orderer and it coming out of the orderer as part of a block?
ShobhitSrivastava (Tue, 19 May 2020 12:00:17 GMT):
Hi, I am getting the log below: "Could not append block: error appending block to file: write /var/hyperledger/production/orderer/chains/mainchannel/blockfile_000007: no space left on device"
ShobhitSrivastava (Tue, 19 May 2020 12:00:35 GMT):
has anyone encountered this?
jyellick (Tue, 19 May 2020 13:46:10 GMT):
`no space left on device` -- I think this is fairly clear? You've run out of room on your ledger volume
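A small watchdog sketch for this failure mode, using only the standard library. The ledger path is an assumption copied from the error message above; adjust it to your own mount point.

```python
# Warn before the orderer's ledger volume fills up, instead of finding out
# from a "no space left on device" panic. Path is assumed, not canonical.
import shutil

LEDGER_PATH = "/var/hyperledger/production/orderer"  # assumed mount point

def free_fraction(path: str) -> float:
    """Fraction of the volume holding `path` that is still free."""
    usage = shutil.disk_usage(path)
    return usage.free / usage.total

def has_headroom(path: str, threshold: float = 0.10) -> bool:
    """True while more than `threshold` of the volume is still free."""
    return free_fraction(path) > threshold
```

Run something like this from cron or a liveness probe and alert when `has_headroom` goes false, well before the orderer starts failing block appends.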
samwood (Thu, 21 May 2020 22:48:32 GMT):
Is it possible to have online orderer nodes on a channel that are not in the consenter set and not actively participating in raft leader election? This would primarily be to store an active replica of the orderer state, say in another region
kopaygorodsky (Thu, 21 May 2020 22:52:27 GMT):
nope
samwood (Thu, 21 May 2020 22:54:51 GMT):
Thanks. If we have 6 orderer nodes (3 in each of 2 regions), a consenter set of 3 (2 in region 1, 1 in region 2), and the others turned off: in the event of a region failure, could we switch on the other orderers and change the consenter set to the 3 live nodes? Or does changing the set take a quorum of the existing consenters?
kopaygorodsky (Fri, 22 May 2020 09:52:04 GMT):
https://hyperledger-fabric.readthedocs.io/en/release-2.0/raft_configuration.html#reconfiguration
kopaygorodsky (Fri, 22 May 2020 09:52:27 GMT):
just add nodes from other region to consenters list
ShobhitSrivastava (Fri, 22 May 2020 11:59:18 GMT):
thanks mate!! increasing the space fixed the issue.
jyellick (Fri, 22 May 2020 13:03:19 GMT):
For DR resistance, you must use 3 regions. The general recommendation would be 2 nodes in geo1, 2 in geo2, and 1 in geo3.
jyellick (Fri, 22 May 2020 13:04:40 GMT):
The notion of a hot replica is something that was proposed recently in an RFC and will likely be implemented at some point. However, even with a hot replica, you need quorum in order to swap in the replica. Otherwise you would risk forking.
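The 2+2+1 recommendation can be checked in a few lines: a placement survives a single-region outage only if the consenters left in the other regions still reach the Raft quorum of the full cluster. A sketch (plain arithmetic, not Fabric code):

```python
# Check whether a consenter placement across regions survives the loss of
# any single region, i.e. the survivors still reach quorum of the full cluster.

def survives_region_loss(regions):
    """`regions` lists the consenter count per region."""
    total = sum(regions)
    quorum = total // 2 + 1
    return all(total - lost >= quorum for lost in regions)

print(survives_region_loss([2, 2, 1]))  # True: losing any region leaves >= 3 of 5
print(survives_region_loss([3, 3]))     # False: losing a region leaves 3 of 6, quorum is 4
```

This is why two regions are never enough for DR, however many nodes you put in each: losing the larger (or either equal) region always drops you below quorum.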
samwood (Fri, 22 May 2020 15:25:55 GMT):
Thank you.
AbdullahJoyia (Wed, 27 May 2020 19:07:37 GMT):
Has left the channel.
xhens (Thu, 28 May 2020 18:20:13 GMT):
Hey guys, I bring up a network and everything runs fine (channel creation, deploy, etc). After a minute, the orderer container goes down. Any idea where the issue might be?
xhens (Thu, 28 May 2020 18:22:55 GMT):
[ ](https://chat.hyperledger.org/channel/fabric-orderer?msg=hC6PZsCmZBAec5Ekv)
Clipboard - May 28, 2020 8:22 PM
xhens (Thu, 28 May 2020 18:23:24 GMT):
Any idea how I can fix this?
BrettLogan (Thu, 28 May 2020 22:20:24 GMT):
Can you put your logs in Debug mode and upload the full log once you have? My assumption is that you have either loaded the wrong TLS cert, or you're using an IP or hostname not mentioned in the certificate's Common Name or SANs fields.
BharathiSundar (Fri, 29 May 2020 07:54:52 GMT):
Has joined the channel.
RahulEth (Thu, 04 Jun 2020 05:28:24 GMT):
Is it possible to remove the solo ordering service and attach a Raft ordering service in the current network?
kelvinzhong (Thu, 04 Jun 2020 09:34:47 GMT):
@jyellick hi, can the Raft orderer somehow achieve load balancing across multiple channels in the same network? Like, we have 4 orderers: two orderers work for channel one and the other two work for channel two, so orderers can be added dynamically to support plenty of channels and transactions at the same time.
kelvinzhong (Thu, 04 Jun 2020 09:34:53 GMT):
Am I right?
kopaygorodsky (Thu, 04 Jun 2020 11:56:18 GMT):
Yes, but it's not load balancing; it's just a subset of orderers on a channel.
kopaygorodsky (Thu, 04 Jun 2020 11:56:34 GMT):
yes
kelvinzhong (Fri, 05 Jun 2020 03:09:36 GMT):
got it, many thx!
chintanr11 (Wed, 10 Jun 2020 12:33:36 GMT):
Hi, what is the best way to identify the number of active nodes in a Raft cluster in HLF v1.4? I can identify whether there is a leader or not, but I need the number of active nodes to check whether, let's say, changes to the Raft cluster will make me lose the current quorum.
jyellick (Wed, 10 Jun 2020 15:38:04 GMT):
I'd suggest moving to HLF v2.0+ where there is the metric `consensus_etcdraft_active_nodes`
https://hyperledger-fabric.readthedocs.io/en/release-2.0/metrics_reference.html
chintanr11 (Thu, 11 Jun 2020 04:15:32 GMT):
Yes. I just wanted an equivalent to that metric in v1.4. I did go through the JIRA that created that feature in v2.0, but I need some more light on whether it can be implemented in v1.4 without changing the orderers' configuration. Currently I can only identify a leader and conclude whether Raft is healthy or not; but say 2 of 3 nodes are alive and I attempt to add a new node, then the OS will fail. So for this I need the number of active nodes in the Raft cluster.
jyellick (Thu, 11 Jun 2020 17:32:17 GMT):
We're aware of the limitation, which is why it was addressed in v2.0. It was deliberately not backported to v1.4.x because it was considered too invasive of a change for an LTS. The easiest thing to do is simply to update your orderer binaries to v2.x -- you do not need to update other binaries in your network, and this will provide you with the metrics you need.
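A sketch of pulling that metric out of the Prometheus text served by the orderer's operations endpoint. The sample exposition below is invented for illustration; actually fetching it (a plain HTTP GET against the operations port's /metrics path) is left out.

```python
# Parse consensus_etcdraft_active_nodes per channel from Prometheus-style
# text exposition. SAMPLE is a made-up excerpt of what the orderer serves.

SAMPLE = """\
# HELP consensus_etcdraft_active_nodes Number of active nodes in this channel.
# TYPE consensus_etcdraft_active_nodes gauge
consensus_etcdraft_active_nodes{channel="common"} 3
consensus_etcdraft_active_nodes{channel="syschannel"} 4
"""

def active_nodes(metrics_text: str) -> dict:
    """Map channel name -> active node count for the etcdraft gauge."""
    out = {}
    for line in metrics_text.splitlines():
        if line.startswith("consensus_etcdraft_active_nodes{"):
            labels, value = line.rsplit(" ", 1)
            channel = labels.split('channel="')[1].split('"')[0]
            out[channel] = int(float(value))
    return out

print(active_nodes(SAMPLE))  # {'common': 3, 'syschannel': 4}
```

Comparing each count against the channel's quorum requirement tells you whether a reconfiguration is safe to attempt.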
chintanr11 (Fri, 12 Jun 2020 11:28:35 GMT):
Thanks a lot, @jyellick I will check that!
jworthington (Thu, 18 Jun 2020 09:40:11 GMT):
Starting orderer error: Failed to setup local msp with config: administrators must be declared when no admin ou classification is set.
jworthington (Thu, 18 Jun 2020 09:43:29 GMT):
Custom CA (not cryptogen) with a CA Admin, Org1 Admin, and Orderer1 Admin. Used Org1Admin to create the genesis block. Org1Admin has OU=admin in the subject. What do I not understand?
jworthington (Thu, 18 Jun 2020 10:04:24 GMT):
Well, I found a note on the MSP config file and enabling NodeOUs. At least I'm getting a different error now.
jworthington (Thu, 18 Jun 2020 10:21:42 GMT):
Well, it was PANI 003 and is now PANI 007. Same basic error, though ('administrators must be declared when no admin ou classification is set'), but now on 'Failed validating bootstrap block: initializing channelconfig failed'. What am I missing now?
jworthington (Thu, 18 Jun 2020 12:44:18 GMT):
I tried setting NodeOUs enabled to false and copied the OrgAdmin signcerts to admincerts for the ordererAdmin. Starting the orderer now errors with 'admin 0 is invalid: could not validate identity's OUs: none of the identity's organizational units [admin(ECFB0B468543D424)] are in MSP'. But the OrgAdmin cert has OU=admin in the subject.
jworthington (Thu, 18 Jun 2020 12:45:15 GMT):
Really giving me a headache. Can anyone give me any pointers?
jyellick (Thu, 25 Jun 2020 20:14:24 GMT):
I would go through the test network tutorial and look at the MSP configuration there.
jyellick (Thu, 25 Jun 2020 20:14:51 GMT):
It sounds to me like you've specified a required OU for your MSP definition in its config.yaml, but not included this OU in the certs as issued.
jyellick (Thu, 25 Jun 2020 20:15:16 GMT):
If you are bootstrapping new crypto, I'd highly recommend enabling node OUs and including an OU for all roles (Admin, Client, Peer, and Orderer).
jyellick (Thu, 25 Jun 2020 20:15:35 GMT):
And ensure that your admin certs have the admin OU set (and generally, the client OU as well).
ViokingTung (Mon, 29 Jun 2020 08:39:35 GMT):
Has joined the channel.
julian (Fri, 03 Jul 2020 13:00:06 GMT):
julian - Fri Jul 03 2020 14:00:03 GMT+0100 (British Summer Time).txt
julian (Fri, 03 Jul 2020 13:01:38 GMT):
Hello. I have been following the guide https://hyperledger-fabric-ca.readthedocs.io/en/latest/operations_guide.html but using a later version of Fabric, 2.1.1. I have got to the point of starting the orderer, but it fails with the following message:
orderer1-org0 | panic: runtime error: index out of range [1] with length 1
orderer1-org0 |
orderer1-org0 | goroutine 1 [running]:
orderer1-org0 | github.com/hyperledger/fabric/msp.(*bccspmsp).sanitizeCert(0xc00015aa00, 0xc00015f080, 0x119c500, 0xe8fde0, 0xc00094cb60)
orderer1-org0 | /go/src/github.com/hyperledger/fabric/msp/mspimpl.go:812 +0x1f8
orderer1-org0 | github.com/hyperledger/fabric/msp.newIdentity(0xc00015f080, 0x11873e0, 0xc000136640, 0xc00015aa00, 0xc00004d708, 0x11873e0, 0xc000136640, 0x0)
julian (Fri, 03 Jul 2020 13:03:10 GMT):
Can anyone help with the above error?
yacovm (Fri, 03 Jul 2020 23:52:24 GMT):
@julian can you open a JIRA without your network configuration (certificates, etc.) ?
julian (Fri, 03 Jul 2020 23:57:43 GMT):
Yes, can you point me in the right direction?
julian (Sat, 04 Jul 2020 09:23:21 GMT):
@yacovm I managed to fix the issue. When creating the genesis block, I had the wrong CA certs for both cacerts and tlscacerts as part of the org MSP for each org in my configtx.yaml.
ever-upwards (Tue, 07 Jul 2020 11:35:14 GMT):
Has joined the channel.
jworthington (Sun, 12 Jul 2020 15:36:17 GMT):
Thx. It's basically that, but still unclear. Even if I set NodeOUs to disabled and add admincerts, it doesn't seem to find anything. (Not that I want to do that, but I'm trying to isolate the problem.) So I think it may be the OU itself, as you suspect. Too many MSP IDs, paths, OUs, and other names in too many files that all have to agree. I'll eventually figure it out, but it hurts my head. ;)
FarhanShafiq (Mon, 13 Jul 2020 08:15:15 GMT):
Has joined the channel.
FarhanShafiq (Mon, 13 Jul 2020 08:56:48 GMT):
Hi, is there any way to check whether all orderers are in sync? I'm getting TLS handshake errors in the log file, and I want to confirm that everything is working fine.
FarhanShafiq (Mon, 13 Jul 2020 11:57:23 GMT):
I'm trying to connect orderer node1 with node2, but TLS authentication failed:
Failed to send StepRequest to 1, because: rpc error: code = Unavailable desc = connection error: desc = "transport: authentication handshake failed: x509: certificate is valid for orderer2.liquidus.com, not orderer1.liquidus.com" channel=syschannel node=2
BrettLogan (Mon, 13 Jul 2020 17:27:38 GMT):
The error is right there in the message: you are sending the request to `orderer1` but using `orderer2` TLS certs.
FarhanShafiq (Tue, 14 Jul 2020 07:22:10 GMT):
The problem was solved by adding the cluster client Root CA cert.
FarhanShafiq (Tue, 14 Jul 2020 07:22:50 GMT):
Thank you for your response. I really appreciate it. :)
rahulhegde (Tue, 14 Jul 2020 15:57:47 GMT):
Hello @jyellick, @dave.enyeart
I am looking at slowness in the endorsement processing time. The client receives the response in 500-1000ms, while the chaincode takes around 50ms. Looking at the peer log, I found:
2020-07-13 09:34:14.365 CEST [endorser] callChaincode -> INFO 731 [cls1obo][4fa59cdb] Entry chaincode: path:"bsl/clsnet-cls-chaincode/p2cls" name:"p2cls" version:"1.2"
2020-07-13 09:34:14.412 CEST [endorser] callChaincode -> INFO 732 [cls1obo][4fa59cdb] Exit chaincode: path:"bsl/clsnet-cls-chaincode/p2cls" name:"p2cls" version:"1.2" (47ms)
2020-07-13 09:34:15.686 CEST [comm.grpc.server] 1 -> INFO 733 unary call completed grpc.service=protos.Endorser grpc.method=ProcessProposal grpc.peer_address=10.32.230.14:57030 grpc.code=OK grpc.call_duration=1.325757909s
The gRPC duration is approximately the endorsement time seen by the client. Is the peer holding up the response, or do we need to tweak some value?
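From the two timestamps quoted above, the gap between the chaincode `Exit` line and the gRPC completion line can be computed directly; roughly 1.27s of the 1.33s call is spent outside chaincode execution:

```python
# Compare the chaincode execution window against the overall gRPC call
# duration using the timestamps from the quoted peer log lines.
from datetime import datetime

FMT = "%Y-%m-%d %H:%M:%S.%f"
chaincode_exit = datetime.strptime("2020-07-13 09:34:14.412", FMT)
grpc_complete = datetime.strptime("2020-07-13 09:34:15.686", FMT)

gap = (grpc_complete - chaincode_exit).total_seconds()
print(gap)  # 1.274 seconds after chaincode Exit before the call completed
```

That post-chaincode gap (proposal response assembly, signing, network) is where the investigation should focus, not the 47ms chaincode run itself.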
FarhanShafiq (Thu, 16 Jul 2020 09:15:22 GMT):
Is there any way to modify the channel_group of one organization in already running network?
I know below command can be used to append new channel group in channel block.
`jq -s '.[0] * {"channel_group":{"groups":{"Application":{"groups": {"RFMSP":.[1]}}}}}' config.json newChannelGroup.json > modified_config.json`
jyellick (Thu, 16 Jul 2020 15:18:54 GMT):
@FarhanShafiq sure: that `jq` command is inserting some JSON, but you can instead simply modify or replace the JSON. The key is that `modified_config.json` contains your new desired configuration.
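The `jq` multiply operator used in that command is a recursive object merge; a Python equivalent makes it easy to see what the grafting actually does (the org names and nesting here are illustrative, not taken from a real config):

```python
# Python equivalent of jq's `.[0] * .[1]` recursive object merge, as used
# to graft a new org into the Application groups of a decoded channel config.
import json

def deep_merge(base, overlay):
    """jq `*` semantics: objects merge recursively, anything else is
    replaced by the overlay value."""
    if isinstance(base, dict) and isinstance(overlay, dict):
        merged = dict(base)
        for key, value in overlay.items():
            merged[key] = deep_merge(base[key], value) if key in base else value
        return merged
    return overlay

config = {"channel_group": {"groups": {"Application": {"groups": {"Org1MSP": {}}}}}}
new_org = {"channel_group": {"groups": {"Application": {"groups": {"RFMSP": {"policies": {}}}}}}}
print(json.dumps(deep_merge(config, new_org)))  # Org1MSP kept, RFMSP added alongside
```

Because the merge is recursive, supplying an existing key (e.g. an org's `policies`) replaces that subtree, which is why the same command works for both inserts and updates.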
FarhanShafiq (Thu, 16 Jul 2020 15:21:43 GMT):
Thanks @jyellick, I realize this command also works for updating a key's value.
FarhanShafiq (Thu, 16 Jul 2020 15:27:46 GMT):
I'm trying to invoke the chaincode but am getting the error below:
```
Error: endorsement failure during invoke. response: status:500 message:"make sure the chaincode fabcar has been successfully defined on channel mychannel and try again: chaincode definition for 'fabcar' exists, but chaincode is not installed"
```
Invoking also requires endorsement from a majority of peers, so do I also need to install the chaincode on those endorsing peers?
lucidprogrammer (Sun, 19 Jul 2020 02:07:44 GMT):
Has joined the channel.
FarhanShafiq (Mon, 20 Jul 2020 14:05:06 GMT):
Hello, I'm trying to approve a chaincode definition from endorsers with the default LifecycleEndorsement "majority" policy. I used two endorsers out of four to endorse the definition, and both of those endorsers can successfully fetch private data, but the other peers cannot.
1. To my knowledge, approving a chaincode definition should give me an error if I don't provide enough endorsements, but in my case it gives no error even if the definition is endorsed by only 1 out of 4 peers.
2. I don't understand why the other peers cannot fetch private data if they didn't take part in endorsing the definition. Is this normal behavior?
arjones (Thu, 23 Jul 2020 00:40:57 GMT):
Has joined the channel.
arjones (Thu, 23 Jul 2020 00:40:57 GMT):
I am reading the fabric ordering service FAQ, and I don't fully understand the answer to the question "Can I have an organization act both in an ordering and application role?"
Why should block signers be restricted to a subset of orderer certificates? Apologies if the answer is obvious. The architecture my organization is planning on using uses a node in both the peer role and the orderer role, and I would like to understand the risks so that we can change our architecture now if need be.
FarhanShafiq (Thu, 23 Jul 2020 09:25:41 GMT):
Although this is possible, it is a highly discouraged configuration. By default the /Channel/Orderer/BlockValidation policy allows any valid certificate of the ordering organizations to sign blocks. If an organization is acting both in an ordering and application role, then this policy should be updated to restrict block signers to the subset of certificates authorized for ordering.
FarhanShafiq (Thu, 23 Jul 2020 09:43:53 GMT):
I'm also learning Fabric, but I will try my best to answer your question with what I know. Imo, an ordering organization taking part in an application role can disturb the blockchain consortium's governance, since it can influence the result. Imo, it's best to keep ordering-service membership distributed across organizations, and to change the block-signing policy from "any signer" to a subset of the membership.
jyellick (Thu, 23 Jul 2020 17:52:07 GMT):
@rahulhegde Sorry I missed this. There's not a lot that happens after the chaincode exits. The two likely bottlenecks would be either a) Private data dissemination, if there's private data, or b) Generating the signature for the endorsement.
My guess would be that it's (b), and it might be exacerbated by load on the HSM if you are using one.
arjones (Thu, 23 Jul 2020 19:42:18 GMT):
Thank you
guoger (Fri, 24 Jul 2020 01:39:17 GMT):
you probably already solved this, but yes, chaincode should be installed on those peers whose endorsements are required
arjones (Fri, 24 Jul 2020 02:26:56 GMT):
This article
https://developer.ibm.com/articles/blockchain-hyperledger-fabric-ordering-decentralization/
confirms what you told me.
FarhanShafiq (Fri, 24 Jul 2020 09:11:07 GMT):
Thanks
rahulhegde (Tue, 04 Aug 2020 16:09:00 GMT):
@jyellick - Right, we don't have private data. It is certainly the HSM signing operation.
The other thing we noticed in the peer log is gossip. Every peer does an average of 50 HSM signing operations per minute, and we have about 20 peers running.
We don't use gossip in our deployment, as we have 1 peer as leader per organization which pulls blocks from the orderer directly.
I am trying to quiet down or disable gossip completely.
Not knowing much about gossip internals, I tried to set these two variables and I see that performance has improved:
https://github.com/hyperledger/fabric/blob/v1.4.2/sampleconfig/core.yaml#L120-L121
https://github.com/hyperledger/fabric/blob/v1.4.2/sampleconfig/core.yaml#L150-L151
Can you please help with what the right values to set are?
I have initiated an email with @mastersingh24 on the same; will copy you too.
jyellick (Tue, 04 Aug 2020 20:46:57 GMT):
There's no direct way to disable gossip -- I'd note that it's useful outside of block pulling and private data, e.g. for service discovery. However, I'll try to track down how we can minimize these signing requests.
rahulhegde (Tue, 04 Aug 2020 21:51:36 GMT):
Okay @jyellick
braduf (Fri, 07 Aug 2020 14:55:38 GMT):
Hi all, we have a Raft ordering service consisting of 3 entities. One entity stopped participating and we want to take it out of the cluster. Since their nodes are down, we only have 2/3 working. I thought that if we take out the one that is not working there would be no problem, because we would have a 2/2 quorum, but trying to remove it from the consenters gives the following error:
```
consensus metadata update for channel config update is invalid: 2 out of 3 nodes are alive, configuration will result in quorum loss
```
Can somebody explain why, if we would still have a 2/2 quorum? And can I force it in some way, or what else can I do?
Thanks for your help!
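For reference, the quorum arithmetic behind errors like this can be sketched as follows (illustrative, not Fabric's actual validation code). Note that a 2-node cluster needs both nodes to make progress, so shrinking from 3 to 2 members gains no failure tolerance:

```go
package main

import "fmt"

// quorum returns the number of votes a Raft cluster of n nodes
// needs to make progress: floor(n/2) + 1.
func quorum(n int) int { return n/2 + 1 }

func main() {
	for _, n := range []int{1, 2, 3, 5} {
		fmt.Printf("cluster of %d: quorum %d, tolerates %d failure(s)\n",
			n, quorum(n), n-quorum(n))
	}
}
```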
braduf (Fri, 07 Aug 2020 15:11:01 GMT):
The orderer is already taken out of the orderer addresses and the org is taken out of the orderer orgs; the only thing I can't remove it from is the consenters...
jkalwar (Sat, 08 Aug 2020 11:08:23 GMT):
Has joined the channel.
jkalwar (Sat, 08 Aug 2020 11:08:24 GMT):
Hi All, in the documentation for the transaction flow of Hyperledger Fabric it is mentioned that "The ordering service does not need to inspect the entire content of a transaction in order to perform its operation, it simply receives transactions from all channels in the network, orders them chronologically by channel, and creates blocks of transactions per channel." I have a couple of questions here: 1. What does "chronological ordering" mean? Does it mean that the transactions for a channel are ordered by the time they are received at the OSN?
PJHaga (Mon, 10 Aug 2020 14:22:58 GMT):
Has joined the channel.
PJHaga (Mon, 10 Aug 2020 14:22:59 GMT):
Hi all, we are trying to add a second orderer to a running raft cluster (currently consisting of 1 orderer) we are following the steps as mentioned in the answer on https://stackoverflow.com/questions/57571629/how-to-add-a-new-orderer-in-a-running-hyperledger-fabric-network-using-raft. We are stuck at step 16 because the new orderer logs the following: `I do not belong to channel testchainid or am forbidden pulling it (not in the channel), skipping chain retrieval` Would any of you have an idea why we are getting this error, or more in general if there is an 'official' hyperledger fabric guide on how to add orderers to a running raft cluster?
robert.beerta (Mon, 10 Aug 2020 14:24:17 GMT):
Has joined the channel.
PJHaga (Wed, 12 Aug 2020 07:35:02 GMT):
seems like we had an error in the encoding/decoding of our certificates. Some whitespace threw us off
rahulhegde (Thu, 13 Aug 2020 12:13:00 GMT):
Hello @jyellick - Can you confirm or add to this note? The requirement is to convince a stakeholder: is it OK to keep the Kafka configuration in the orderer yaml even though the setup runs Raft?
Our setup runs the orderer (all channels) in Raft consensus mode. Having a Kafka section defined in the orderer yaml file has no functional impact on the working of the orderer. The Kafka configuration is only used for initializing the configuration object, and since none of the channels run on Kafka, this configuration object will never be used in a Raft-enabled setup.
guoger (Thu, 13 Aug 2020 13:21:31 GMT):
this is correct (if something is still depending on Kafka configs in this case, i think that should be fixed). And we should also remove the requirement for kafka section during config init
rahulhegde (Thu, 13 Aug 2020 14:50:20 GMT):
Considering Kafka is still a supported consensus type on v1.4.2, this should be OK to be present?
guoger (Thu, 13 Aug 2020 14:52:37 GMT):
oh, i meant `master`, not really `release-1.4` branch, since this is not really a bug. But anyway, if that section is not used, i don't think we should mandate it anymore
rahulhegde (Thu, 13 Aug 2020 14:58:00 GMT):
Even if it is present in the orderer.yaml for `release-1.4`, do you agree there is no impact (also through a security lens)? "This Kafka configuration is only used for initializing the configuration object and since none of the channels are running on Kafka, this configuration object will not be used anytime in the Raft enabled setup."
guoger (Thu, 13 Aug 2020 14:58:47 GMT):
yes
guoger (Thu, 13 Aug 2020 14:59:23 GMT):
obviously only if you've migrated all your nodes to Raft (nothing is running on kafka anymore)
jyellick (Thu, 13 Aug 2020 17:50:04 GMT):
+1 @rahulhegde I agree with @guoger there should be no security implications associated with leaving the Kafka config in place. The kafka configuration is only ever referenced if a channel's consensus type is kafka, so, so long as all of your channels are migrated, it should have no impact on actual execution.
PulkitSarraf (Wed, 19 Aug 2020 04:08:55 GMT):
Hi
I was trying to deploy the test network of Hyperledger Fabric 2.0 and am not able to run the orderer.
Getting this issue
" config requires unsupported channel capabilities: Channel capability V2_0 is required but not supported: Channel capability V2_0 is required but not supported "
krabradosty (Fri, 04 Sep 2020 13:46:47 GMT):
Hi. Recently I was going through orderer system channel config, the default one generated by `configtxgen` CLI. Under channel_groups, it contains a definition of all consortiums:
```
{
  "Consortiums": {
    "groups": {...},
    "mod_policy": "/Channel/Orderer/Admins",
    "policies": {
      "Admins": {
        "mod_policy": "/Channel/Orderer/Admins",
        "policy": {
          "type": 1,
          "value": {
            "identities": [],
            "rule": {
              "n_out_of": {
                "n": 0,
                "rules": []
              }
            },
            "version": 0
          }
        },
        "version": "0"
      }
    },
    "values": {},
    "version": "0"
  }
}
```
I'm confused by the `policies` section. The Admins policy definition looks totally wrong to me. Could anyone explain how this works?
knagware9 (Fri, 04 Sep 2020 15:21:44 GMT):
Here, `"mod_policy": "/Channel/Orderer/Admins"` indicates that only orderer admins can update the orderer system channel, and how many orderer admins are required is defined in the `n_out_of` rule: `"n_out_of": {"n": 0, "rules": []}`
NRaj 2 (Sun, 06 Sep 2020 05:00:18 GMT):
Has joined the channel.
NRaj 2 (Sun, 06 Sep 2020 05:00:19 GMT):
Hi all, I was trying to upgrade my orderer using this tutorial *https://hyperledger-fabric.readthedocs.io/en/release-2.0/upgrading_your_components.html* and I do not understand the meaning of `--env-file ./env`
NRaj 2 (Sun, 06 Sep 2020 05:02:09 GMT):
Please suggest what the name in `--env-file ./env` should be.
krabradosty (Mon, 07 Sep 2020 08:09:41 GMT):
But this rule
```
"n_out_of": {
  "n": 0,
  "rules": []
}
```
is meaningless. I thought `n` stands for the number of signatures. Also, identities are not defined.
knagware9 (Mon, 07 Sep 2020 09:53:22 GMT):
Yes, there should be a correct policy. You can fetch your config block and convert it into JSON format to see its contents.
krabradosty (Tue, 08 Sep 2020 08:07:30 GMT):
But this is already my default config, generated by the `configtxgen` CLI.
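For background, an `n_out_of` rule is satisfied when at least `n` of its sub-rules are satisfied, so a 0-out-of-nothing rule is trivially satisfiable. A minimal sketch of that arithmetic (illustrative, not Fabric's actual policy evaluator):

```go
package main

import "fmt"

// nOutOf reports whether at least n of the sub-rule results are true.
// With n == 0 and no rules, the policy is trivially satisfied.
func nOutOf(n int, results []bool) bool {
	satisfied := 0
	for _, ok := range results {
		if ok {
			satisfied++
		}
	}
	return satisfied >= n
}

func main() {
	fmt.Println(nOutOf(0, nil))                 // 0-of-nothing: satisfied
	fmt.Println(nOutOf(2, []bool{true, false})) // only 1 of 2: not satisfied
	fmt.Println(nOutOf(2, []bool{true, true}))  // 2 of 2: satisfied
}
```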
jyellick (Tue, 08 Sep 2020 19:22:50 GMT):
Sounds like your orderer binary version is not new enough, perhaps you are using v1.4.x images?
PJHaga (Wed, 09 Sep 2020 12:39:31 GMT):
Has left the channel.
MinatoReturns (Thu, 17 Sep 2020 07:53:12 GMT):
Has joined the channel.
MinatoReturns (Thu, 17 Sep 2020 07:58:01 GMT):
Hi All,
Can we use "raft" as consensus type while using HSM(BCCSP provider PKCS11)?
I understand that storing TLS keys in HSM is currently not supported by Fabric. I wanted to understand the impact on using raft based orderer with HSM.
guptasndp10 (Fri, 18 Sep 2020 16:58:56 GMT):
Hello All,
rahulhegde (Tue, 22 Sep 2020 02:48:49 GMT):
Hello @jyellick - In the following print statement, `c.logger.Infof("Raft leader changed: %d -> %d", soft.Lead, newLeader)` , does `soft.Lead` indicate who was the previous RAFT leader and zero value means no leader?
jyellick (Tue, 22 Sep 2020 04:51:14 GMT):
@rahulhegde Yes, this is correct.
jyellick (Tue, 22 Sep 2020 04:52:41 GMT):
Of course, this is only this node's understanding of the leadership; since it is an asynchronous system, the node only knows who it believes the active leader was before (or that there was none). If, for instance, it were subject to a network partition, the node could believe there was no leader even though there was.
jyellick (Tue, 22 Sep 2020 04:54:22 GMT):
In Raft, node ID 0 is reserved to indicate no leader, while Raft IDs above 0 correspond to particular nodes and are never reused (so even if node 1 is removed and a new node is added, the ID 1 will never be referred to again).
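That convention can be sketched with a hypothetical helper (not Fabric code) that interprets a transition like the "Raft leader changed: %d -> %d" log line:

```go
package main

import "fmt"

// Node ID 0 is reserved in Raft to mean "no leader" (etcd/raft's raft.None).
const noLeader uint64 = 0

// describeLeaderChange renders a leadership transition in words.
func describeLeaderChange(prev, next uint64) string {
	switch {
	case prev == noLeader && next != noLeader:
		return fmt.Sprintf("leader elected: %d", next)
	case prev != noLeader && next == noLeader:
		return fmt.Sprintf("leader %d lost (no current leader)", prev)
	default:
		return fmt.Sprintf("leadership moved: %d -> %d", prev, next)
	}
}

func main() {
	fmt.Println(describeLeaderChange(0, 1))
	fmt.Println(describeLeaderChange(1, 0))
	fmt.Println(describeLeaderChange(1, 3))
}
```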
bh4rtp (Thu, 24 Sep 2020 03:33:16 GMT):
Hi, I get an orderer TLS handshake error using the Java SDK client from another host:
```2020-09-24 11:11:33.603 CST [core.comm] ServerHandshake -> ERRO 059 TLS handshake failed with error remote error: tls: internal error server=Orderer remoteaddress=192.1.102.252:49644```
bh4rtp (Thu, 24 Sep 2020 03:33:37 GMT):
The etcdraft orderer is used.
kopaygorodsky (Thu, 24 Sep 2020 19:39:09 GMT):
hey, I'm working on the FAB-18192 issue, here is the PR: https://github.com/hyperledger/fabric/pull/1888. I moved TLS verification from computing membership changes to the raft metadata validation function, and now I need to fix the tests in the consenter module. To validate a consenter I need to add the TLS CA root cert to an MSP config of SampleOrg; I explained it here -> https://github.com/kopaygorodsky/fabric/blob/FAB-18192/orderer/consensus/etcdraft/consenter_test.go#L162. Locally the tests pass, but on CI it fails with this error, `subjectKeyIdentifier not found in certificate`, which I can't reproduce locally. The TLS CA cert is taken from tlsgen.CA, which is used in tests across the whole project.
Any idea what it could be? Why does it behave differently on my env vs CI?
jyellick (Thu, 24 Sep 2020 19:52:13 GMT):
Double checked the tlsgen.NewCA, it does look like it should be generating certs with a subject key identifier to me. Perhaps add some debugging into your test for CI that will dump out the PEM encoded version of the cert so you can take a look at it locally using something like `openssl x509 -noout -text`?
jyellick (Thu, 24 Sep 2020 19:53:19 GMT):
There's no obvious reason things should behave differently. Sometimes there is a difference between local and CI because local dev environments are using volume mounts, which can mess with case sensitivity, fsync, permissions, etc. though I don't see how that would be the case here.
jyellick (Thu, 24 Sep 2020 20:00:44 GMT):
FWIW, I see the same error when running the tests locally against your PR on my laptop:
```Summarizing 2 Failures:
[Fail] Consenter when the consenter is asked about join-block membership [It] identifies a member block
/home/yellickj/go/src/github.com/hyperledger/fabric/orderer/consensus/etcdraft/consenter_test.go:190
[Fail] Consenter when the consenter is asked about join-block membership [It] identifies a non-member block
/home/yellickj/go/src/github.com/hyperledger/fabric/orderer/consensus/etcdraft/consenter_test.go:200
Ran 111 of 111 Specs in 24.515 seconds
FAIL! -- 109 Passed | 2 Failed | 0 Pending | 0 Skipped
--- FAIL: TestEtcdraft (24.52s)
```
kopaygorodsky (Thu, 24 Sep 2020 20:08:45 GMT):
hm, weird. I'm debugging it more, thx.
kopaygorodsky (Fri, 25 Sep 2020 15:22:02 GMT):
@jyellick I added debug statement https://github.com/hyperledger/fabric/blob/749bdfa248b30786b595e788b525a937c17de3b0/orderer/consensus/etcdraft/consenter_test.go#L181, it prints TLS root cert from tlsgen.
kopaygorodsky (Fri, 25 Sep 2020 15:23:39 GMT):
`-----BEGIN CERTIFICATE-----
MIIBpDCCAUugAwIBAgIRAI808gvPd+5mZWzQUi1CPkowCgYIKoZIzj0EAwIwMjEw
MC4GA1UEBRMnMTkwMzU0NTEyMTEyNzI5MDM3MjQwNzMwMTM4MTMxNDM4OTExMDUw
MB4XDTIwMDkyNDEzMzYxMloXDTMwMDkyMzEzMzYxMlowMjEwMC4GA1UEBRMnMTkw
MzU0NTEyMTEyNzI5MDM3MjQwNzMwMTM4MTMxNDM4OTExMDUwMFkwEwYHKoZIzj0C
AQYIKoZIzj0DAQcDQgAEZnlvCz+0nYFFwElyYa+NQ2136dIMRYPl5geoWXdWi1OR
maZIhKjwi51NL0bhDGELETpU8BMCh6EtaCZWy7Pq5aNCMEAwDgYDVR0PAQH/BAQD
AgGmMB0GA1UdJQQWMBQGCCsGAQUFBwMCBggrBgEFBQcDATAPBgNVHRMBAf8EBTAD
AQH/MAoGCCqGSM49BAMCA0cAMEQCIEuJ4v071Xp7SmhdnPfQXvRigg5ZAQt8OkTS
9E298Co+AiA+dTWeCiqh7hrGN/8YWDU9BvIM85XJlJuPGvNZ8t3a7A==
-----END CERTIFICATE-----`
kopaygorodsky (Fri, 25 Sep 2020 15:24:07 GMT):
it does not have subject key id(X509v3 Subject Key Identifier), https://gist.github.com/kopaygorodsky/fbdd70b2a9f52c12737e686866e50f87#file-cert-test-L23
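The same check can be done programmatically. A minimal Go sketch that parses the CA cert pasted above (reproduced verbatim) and reports whether an SKI extension is present:

```go
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
)

// The CA certificate pasted above, reproduced verbatim.
const caPEM = `-----BEGIN CERTIFICATE-----
MIIBpDCCAUugAwIBAgIRAI808gvPd+5mZWzQUi1CPkowCgYIKoZIzj0EAwIwMjEw
MC4GA1UEBRMnMTkwMzU0NTEyMTEyNzI5MDM3MjQwNzMwMTM4MTMxNDM4OTExMDUw
MB4XDTIwMDkyNDEzMzYxMloXDTMwMDkyMzEzMzYxMlowMjEwMC4GA1UEBRMnMTkw
MzU0NTEyMTEyNzI5MDM3MjQwNzMwMTM4MTMxNDM4OTExMDUwMFkwEwYHKoZIzj0C
AQYIKoZIzj0DAQcDQgAEZnlvCz+0nYFFwElyYa+NQ2136dIMRYPl5geoWXdWi1OR
maZIhKjwi51NL0bhDGELETpU8BMCh6EtaCZWy7Pq5aNCMEAwDgYDVR0PAQH/BAQD
AgGmMB0GA1UdJQQWMBQGCCsGAQUFBwMCBggrBgEFBQcDATAPBgNVHRMBAf8EBTAD
AQH/MAoGCCqGSM49BAMCA0cAMEQCIEuJ4v071Xp7SmhdnPfQXvRigg5ZAQt8OkTS
9E298Co+AiA+dTWeCiqh7hrGN/8YWDU9BvIM85XJlJuPGvNZ8t3a7A==
-----END CERTIFICATE-----`

// parseCert decodes a single PEM block and parses the certificate.
func parseCert(pemText string) (*x509.Certificate, error) {
	block, _ := pem.Decode([]byte(pemText))
	return x509.ParseCertificate(block.Bytes)
}

func main() {
	cert, err := parseCert(caPEM)
	if err != nil {
		panic(err)
	}
	fmt.Printf("IsCA=%v SubjectKeyId bytes=%d\n", cert.IsCA, len(cert.SubjectKeyId))
}
```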
kopaygorodsky (Fri, 25 Sep 2020 15:25:23 GMT):
when I check certs generated on my local machine, they do have it.
kopaygorodsky (Mon, 28 Sep 2020 10:10:16 GMT):
it works on my machine because I have go 1.15 https://github.com/golang/go/blob/dev.boringcrypto.go1.15/src/crypto/x509/x509.go#L2129
on 1.14 https://github.com/golang/go/blob/dev.boringcrypto.go1.14/src/crypto/x509/x509.go#L2126 it works differently
kopaygorodsky (Mon, 28 Sep 2020 10:10:51 GMT):
Do I need to fix x509 template for tlsgen.CA?
kopaygorodsky (Tue, 29 Sep 2020 12:42:49 GMT):
@jyellick Do I need to fix x509 template for tlsgen.CA?
jyellick (Tue, 29 Sep 2020 19:08:30 GMT):
Hey @kopaygorodsky sorry for the delay, I'll try to have you a response within the next 8 hours.
yacovm (Tue, 29 Sep 2020 19:50:37 GMT):
@kopaygorodsky maybe just use your own certificate and not use tlsgen
yacovm (Tue, 29 Sep 2020 19:50:54 GMT):
that's what i do when tlsgen can't create something that i need
yacovm (Tue, 29 Sep 2020 19:51:14 GMT):
i change it, generate the certificates, and then just stick them into the test :joy:
yacovm (Tue, 29 Sep 2020 19:51:23 GMT):
and then revert the tlsgen test
yacovm (Tue, 29 Sep 2020 19:51:27 GMT):
it's dirty but it works
kopaygorodsky (Tue, 29 Sep 2020 19:55:16 GMT):
that's what I did in the first place, but then I saw tlsgen is used across the whole test env and decided that I needed to make it work.
kopaygorodsky (Tue, 29 Sep 2020 19:55:27 GMT):
are there any plans to move to go1.15?
yacovm (Tue, 29 Sep 2020 19:57:42 GMT):
yeah: https://github.com/hyperledger/fabric/pull/1716
yacovm (Tue, 29 Sep 2020 19:57:58 GMT):
but it won't be easy. There is plenty of work to do to make it work safely, and we are all lazy
kopaygorodsky (Tue, 29 Sep 2020 19:59:27 GMT):
heh, I saw that, I mean prev message :)
kopaygorodsky (Tue, 29 Sep 2020 20:00:17 GMT):
what I see is that you are trying to move away from certs as fixtures and generate them all at runtime
yacovm (Tue, 29 Sep 2020 20:00:39 GMT):
not me, but other people
yacovm (Tue, 29 Sep 2020 20:00:57 GMT):
i am not scared of technical debt as i am very good at ignoring it
yacovm (Tue, 29 Sep 2020 20:04:58 GMT):
@kopaygorodsky the problem with your case, is that tlsgen is used in tests but it's also used in production
yacovm (Tue, 29 Sep 2020 20:05:36 GMT):
If you want to change how it generates certificates you can push a PR and we will review it
yacovm (Tue, 29 Sep 2020 20:05:51 GMT):
but if you only want to overcome the test, then you can just make a custom certificate
yacovm (Tue, 29 Sep 2020 20:06:13 GMT):
in the `orderer/consensus/etcdraft` package there are certificates that are statically defined in the test
yacovm (Tue, 29 Sep 2020 20:06:23 GMT):
so i think that adding another one will not kill anyone
yacovm (Tue, 29 Sep 2020 20:06:32 GMT):
but if you want you can change tlsgen in a separate PR
jyellick (Wed, 30 Sep 2020 02:54:11 GMT):
@kopaygorodsky Did Yacov's reply help you? If tlsgen is in fact not putting in an SKI, that seems like something it should probably be doing. My cursory look at the code says that it should, so I'm a little surprised that it's not. I would say open up another PR which changes the behavior of the tlsgen to include the SKI, and then your test will be fixed. I am a little confused why the behavior is different across your machine vs. mine and CI, you seemed to think that it was go version related?
kopaygorodsky (Wed, 30 Sep 2020 07:40:08 GMT):
@jyellick yes, I sent you two links above to compare both versions. 1.14 and 1.15 have different implementations of the x509.CreateCertificate method.
In 1.15 the SubjectKeyId is generated if it's empty and the cert is self-signed.
Could you show the place where you think the SubjectKeyId is generated right now? I see Subject, but not SubjectKeyId.
kopaygorodsky (Wed, 30 Sep 2020 07:41:14 GMT):
btw Yacov's suggestion works fine, but I would like to fix tlsgen as well
kopaygorodsky (Wed, 30 Sep 2020 09:05:16 GMT):
Could you show the place where you think SubjectKeyId is generated? I see Subject, but not SubjectKeyId.
jyellick (Wed, 30 Sep 2020 12:19:04 GMT):
https://golang.org/pkg/crypto/x509/#CreateCertificate has a `SubjectKeyId` field you may specify.
jyellick (Wed, 30 Sep 2020 12:19:53 GMT):
I expect that Fabric will move to go v1.15 soon, usually we upgrade go versions shortly after they are released unless some explicit compatibility problem is found.
kopaygorodsky (Thu, 01 Oct 2020 14:07:27 GMT):
Have you created a ticket in Jira, so everyone can be aware of the status? @rahulhegde
jyellick (Fri, 02 Oct 2020 02:03:06 GMT):
There are a number of gossip and pkcs11 related fixes that are in the releases published yesterday -- 1.4.9 and 2.2.1 https://github.com/hyperledger/fabric/releases
kopaygorodsky (Fri, 02 Oct 2020 15:00:28 GMT):
@jyellick @yacovm I generated fixtures that satisfy msp validator. All tests passed, PR is ready to be reviewed. After moving to go1.15 I can revert it back to tlsgen if needed.
rahulhegde (Mon, 05 Oct 2020 15:35:04 GMT):
Hello @jyellick - we are getting the following error during RAFT consensus. Does it hint at anything in our configuration?
```
[36m2020-10-05 13:58:28.504 UTC [orderer.consensus.etcdraft] logSendFailure -> DEBU 94ebb9[0m Failed to send StepRequest to 2, because: rpc error: code = Unavailable desc = all SubConns are in TransientFailure, latest connection error: connection error: desc = "transport: authentication handshake failed: tls: first record does not look like a TLS handshake" channel=xxx-yyy node=1
```
jyellick (Mon, 05 Oct 2020 16:14:44 GMT):
@rahulhegde This looks like a network configuration problem -- perhaps a proxy or firewall doing something odd to the connection?
rahulhegde (Mon, 05 Oct 2020 16:18:31 GMT):
We had a similar setup running a week back with a lot fewer channels (150); the environment was then down for a week. This time it was brought up with more channels (380). Is there a way we can add logging to detect the problem (gRPC?)
rahulhegde (Mon, 05 Oct 2020 16:41:10 GMT):
Looking at Orderer 01, we did find this output indicating there is communication on consensus(?)
```
[36m2020-10-05 13:58:28.601 UTC [orderer.common.cluster.step] handleMessage -> DEBU 94ee0d[0m Received message from ord03clsorder.jas.clsnet(10.98.16.143:43270): ConsensusRequest for channel cl11obo-cls7obo with payload of size 28
[36m2020-10-05 13:58:28.601 UTC [common.deliver] Handle -> DEBU 94ee0e[0m Starting new deliver loop for 10.98.16.187:43404
[36m2020-10-05 13:58:28.601 UTC [cauthdsl] func1 -> DEBU 94ee0f[0m 0xc1354b56f0 gate 1601906308601872476 evaluation starts
[36m2020-10-05 13:58:28.601 UTC [cauthdsl] func1 -> DEBU 94ee10[0m 0xc1354b57c0 gate 1601906308601906061 evaluation starts
```
At some point of time - gate evaluation succeeds/fails
```
[36m2020-10-05 13:58:29.426 UTC [cauthdsl] func1 -> DEBU 951694[0m 0xc1354b56f0 gate 1601906308601872476 evaluation succeeds
...
[36m2020-10-05 13:58:29.100 UTC [cauthdsl] func1 -> DEBU 95019f[0m 0xc1354b57c0 gate 1601906308601906061 evaluation fails
```
Is there a relation? Can this impact the RAFT leader election?
yacovm (Tue, 06 Oct 2020 17:30:13 GMT):
@kopaygorodsky Thanks for your fix :)
kopaygorodsky (Tue, 06 Oct 2020 17:45:54 GMT):
Np :) Finally started contributing to Fabric.
Nick (Fri, 09 Oct 2020 08:20:21 GMT):
Has joined the channel.
Nick (Fri, 09 Oct 2020 08:22:32 GMT):
Hi team, when I backup/restore raft-based orderer nodes to another set of virtual machines, I need to start the orderers and let them know the IPs/ports have changed. How can I override the endpoint info for each orderer node so that orderer1 knows where to find orderer2 and the other orderer nodes?
guoger (Sat, 10 Oct 2020 15:27:26 GMT):
they are embedded in channel config: https://github.com/hyperledger/fabric/blob/9adde2e7d30e1cb8778f8dbb564e08bc49afb70d/sampleconfig/configtx.yaml#L322-L334
guoger (Sat, 10 Oct 2020 15:29:25 GMT):
in order to alter values there, you'll need to submit a `channel config update` transaction, which requires an up-and-running orderer cluster
guoger (Sat, 10 Oct 2020 15:29:59 GMT):
an easier solution would be to use the same host and port for your new cluster so it can still function at start, and then update the desired host:port entries one by one
guoger (Sat, 10 Oct 2020 15:35:03 GMT):
if there's a communication problem among orderers, it can indeed impact leader election (vote messages cannot be passed around)
kopaygorodsky (Sat, 10 Oct 2020 20:27:38 GMT):
use domain names, not IPs; that way you don't need to do a config update.
Nick (Sun, 11 Oct 2020 03:57:28 GMT):
It seems the authorization and security demands make disaster recovery/restore to another set of servers hard. We'll have to create a bunch of steps for the "channel config update". Thank you guys.
roxhens (Mon, 12 Oct 2020 15:40:01 GMT):
Has joined the channel.
braduf (Mon, 12 Oct 2020 19:12:36 GMT):
Hi all, I was just trying to add a new orderer org to an existing network. In previous versions, I remember I could add the organization to the orderer orgs and add the new orderer as a raft consenter in the same update; then, after starting the new orderer with the latest config block, a second update was needed to add the orderer address.
I now noticed, using v2.2.0, that I should first do an update adding the organization's definition before I can add the consenter in another update. If I try to add the organization and the consenter in the same update, I get an error that the TLS cert of the consenter is issued by an unknown authority.
This error sounds logical, and it makes sense to me that adding the org and adding the consenter should be done in different steps, but the docs still have adding the consenter as the first step. So I want to know if the docs are not up to date, or if I am doing something wrong that doesn't let me add the new org and their consenter in the same channel update...
Thanks in advance!
yacovm (Mon, 12 Oct 2020 20:05:37 GMT):
FAB-18192
yacovm (Mon, 12 Oct 2020 20:05:52 GMT):
https://jira.hyperledger.org/browse/FAB-18192
kopaygorodsky (Mon, 12 Oct 2020 21:55:11 GMT):
Why do you need that? Do you know about the problem when a peer joins a channel from the 1st block?
braduf (Wed, 14 Oct 2020 15:49:00 GMT):
@yacovm , thanks a lot, I have seen it was closed today. I am not familiar with the release process; will this fix be included only from the 2.2.2 release onward?
yacovm (Wed, 14 Oct 2020 16:01:34 GMT):
dunno, ask @jyellick or @dave.enyeart
BrettLogan (Thu, 15 Oct 2020 01:12:41 GMT):
You can see in the Jira it will be in 2.2.2 and starting 2.3.0
BrettLogan (Thu, 15 Oct 2020 01:13:50 GMT):
You can pull non-release images from our Artifactory, which are built every night. They will be available in a few hours: https://hyperledger.jfrog.io/ui/repos/tree/General/fabric
braduf (Thu, 15 Oct 2020 02:18:55 GMT):
Good to know, thanks @BrettLogan !
RahulEth (Fri, 16 Oct 2020 07:36:32 GMT):
@kopaygorodsky how can we achieve it? Can you explain?
As per my understanding, we need to upgrade from solo to raft in our genesis.block. How can we do it?
sanket1211 (Wed, 21 Oct 2020 10:49:03 GMT):
```
2020-10-21T10:39:52.855Z - error: [ServiceEndpoint]: Error: Failed to connect before the deadline on Discoverer- name: peer0.org1.example.com, url:grpcs://localhost:7051
2020-10-21T10:39:52.855Z - error: [ServiceEndpoint]: waitForReady - Failed to connect to remote gRPC server peer0.org1.example.com url:grpcs://localhost:7051 timeout:3000
```
Getting error: Error: Failed to connect before the deadline on Discoverer- name: peer0.org1.example.com, url:grpcs://localhost:7051
troyronda (Wed, 28 Oct 2020 17:44:20 GMT):
Has left the channel.
kopaygorodsky (Fri, 30 Oct 2020 23:35:42 GMT):
@jyellick I have an issue with a system chaincode which is installed as builtinCc. I use it for the governance model in my system, and it's good to have it installed by default on every peer.
Previously it was working fine on 1.4, but after I enabled the 2.0 capability, the tx committer changed and now the peer can't validate a transaction from the orderer on commit. https://gist.github.com/kopaygorodsky/8ccbf53db927fa5abe7133e23e3b60e7
Querying still works. I'm trying to figure it out in the source code, but I'm not familiar with that part of the code at all.
My guess is that it goes in the wrong direction somewhere when validating a transaction. In the logs I see `lscc`, so I assume it thinks that an application chaincode was called, not a system one, and looks in the wrong namespace?
The chaincode is built using the shim package with shim.GetState and shim.PutState functions.
Could you please point me in the right direction of where to look?
husnain (Tue, 03 Nov 2020 11:44:42 GMT):
Has joined the channel.
usamaarshad (Mon, 09 Nov 2020 09:33:11 GMT):
Has joined the channel.
usamaarshad (Mon, 09 Nov 2020 09:33:12 GMT):
Hi everyone, good morning.
I am trying to set up an orderer and it shows me the error "[orderer.common.server] Main -> PANI 005 Failed validating bootstrap block: initializing channelconfig failed: could not create channel Orderer sub-group config: setting up the MSP manager failed: CA Certificate did not have the CA attribute, (SN: 4ec1f8f95b44dba2d653a934755254291760a191)".
But when I decode my certificates, the CA attribute is true.
bmatsuo (Mon, 09 Nov 2020 18:15:09 GMT):
Hi everyone. I added an org to a v1.4.3 network, and now, having switched to Fabric v2.2.1, I am having issues with the org signing transactions to enable the v2_0 channel capabilities. The orderer logs say `[36m2020-11-05 16:55:20.320 UTC [policies] EvaluateSignedData -> DEBU 11a9e5[0m Signature set did not satisfy policy /Channel/Application/AcreMSP/Admins` despite the proposal being signed by the org's admin cert. The org has NodeOUs enabled (empty admincerts directory), but I noticed its Admins policy has a "ROLE" principal_classification instead of "ORGANIZATION_UNIT" (https://hastebin.com/aradamebok.lua). Is that the reason why the org's signature gets rejected? It looks like there is no cert which can satisfy the Admins policy. Is it possible to fix the policy if that is the case?
robert.beerta (Thu, 12 Nov 2020 08:41:04 GMT):
Hello all, I am currently in the process of adding multiple orderers, but I'm running into a challenge and I hope someone here can give me some assistance:
Situation: We have a running HLF 1.4.3 network with 1 raft ordering node. We want to add 2 orderers to this running network, without the need to bring the network down.
We’re following the HLF docs on that: https://hyperledger-fabric.readthedocs.io/en/release-1.4/raft_configuration.html#reconfiguration
Issue: We see that when we add the 2nd orderer to an application channel, all transactions on that channel fail. We notice that a new election is started once we add the second ordering node to the application cluster.
This election will fail, because the 2nd orderer is not “up” until its synchronization process has completed, so there will never be a quorum. Only after the 2nd orderer has synced all blocks (somewhere between 200,000 and 10 million blocks) and is “up and running” can a leader be elected and transactions succeed again. This process takes longer than our maintenance window allows, and therefore we’ve been trying to find workarounds to minimize impact to our platform.
Question: Is there a way to “set” this “leader” on the first orderer until the 2nd orderer is up and running (and has synced all blocks) and quorum can be reached again? Also, this issue does not seem to occur when adding the 3rd orderer, because with the 2 existing orderers a quorum is still reached.
Btw, the HLF docs suggest that this is possible, but don’t explain how: “The fourth node won’t be able to onboard because nodes can only onboard to functioning clusters (unless the total size of the cluster is one or two).”
guoger (Fri, 13 Nov 2020 04:54:35 GMT):
one workaround would be to copy the ledger from orderer1 to the new orderer, so it doesn't need to start from block 0.
i believe there is ongoing work to add an orderer as an observer, which does not participate in consensus but only pulls blocks from the network. once the observer node has caught up, you could promote it to a member to participate in consensus. although i'm not sure about the exact status of this work
robert.beerta (Fri, 13 Nov 2020 08:41:42 GMT):
Thanks for helping @guoger! It sounds a little bit risky (especially in a prod environment), but we can do some tests to see the behaviour.
kopaygorodsky (Sun, 15 Nov 2020 20:22:45 GMT):
then at the beginning create more than 1 node, so that adding one won't change leadership
adanacs (Tue, 24 Nov 2020 18:26:56 GMT):
Has joined the channel.
adanacs (Tue, 24 Nov 2020 18:27:29 GMT):
Hi all - I've inherited a Hyperledger setup. All was running well until some certs expired 😞 Specifically the TLS certs. I have 2 orgs, each with an orderer that belongs to a channel.
I have been able to update the TLS certs so that the TLS connection between orderers is successful, but then the orderers are not recognized in the channels.
I believe I can bypass the expiry check temporarily for the channels, but I'm not sure which certs are being sent/used for channel identification by the orderers, or if it's even possible for an orderer to be identified to the channel by a different cert than the one used for TLS. (If so, I can't find the setting/ENV_VAR to point to the correct one.)
Is this possible? I'm hoping I can get the systems talking so I can add the new certs to the channels and then remove the old ones. Any help or pointers are appreciated.
yacovm (Wed, 25 Nov 2020 01:18:48 GMT):
upgrade the nodes to the latest Fabric version and it will be solved
yacovm (Wed, 25 Nov 2020 01:19:20 GMT):
In the latest Fabric version, the orderers only care about the public key, not the rest of the certificate
yacovm (Wed, 25 Nov 2020 01:19:48 GMT):
so you need to replace the certificates with new certificates with the same public key but relevant expiry times
adanacs (Wed, 25 Nov 2020 05:32:01 GMT):
Thanks, I'll give that a go
adanacs (Fri, 27 Nov 2020 20:59:55 GMT):
Unfortunately it doesn't look like I can upgrade atm :( (currently on 1.4.1)
I've gotten to the point where the orderers start and recognize that they are part of a channel, but they cannot send messages to each other because the TLS certificate has expired.
I've tried changing the ORDERER_GENERAL_TLS_PRIVATEKEY and ORDERER_GENERAL_TLS_CERTIFICATE to point to the new certs, but that makes the orderer not recognize that it's part of the channel.
Which settings / certs are used for the TLS handshaking between orderers?
yacovm (Fri, 27 Nov 2020 21:07:36 GMT):
why can't you upgrade?
yacovm (Fri, 27 Nov 2020 21:08:03 GMT):
you can also try to use a time-shift
adanacs (Fri, 27 Nov 2020 21:09:02 GMT):
We have a few other systems talking to HL and I'm unsure what effects upgrading will have
yacovm (Fri, 27 Nov 2020 21:09:19 GMT):
are you serious? Your Fabric network is currently dead, isn't it?
yacovm (Fri, 27 Nov 2020 21:10:17 GMT):
Anyway take a look at https://jira.hyperledger.org/browse/FAB-15700 it might help you
yacovm (Fri, 27 Nov 2020 21:12:06 GMT):
also look at https://hyperledger-fabric.readthedocs.io/en/latest/raft_configuration.html?highlight=expired#local-configuration - specifically `TLSHandshakeTimeShift`
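For reference, a sketch of what that setting looks like in `orderer.yaml` (the value and placement under the Cluster section are illustrative; check the orderer.yaml shipped with your Fabric version, and note the option only exists from v1.4.3 onward):

```yaml
General:
  Cluster:
    # Tolerate intra-cluster TLS handshakes with certificates that
    # expired up to this long ago (duration string, e.g. 720h = 30 days).
    TLSHandshakeTimeShift: 720h
```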
adanacs (Fri, 27 Nov 2020 21:12:09 GMT):
thanks - because of the dependencies, upgrading is the next option. Since I'm new to Fabric, I want to rule out having put a cert in the wrong place or upgraded the wrong cert, and also get a better understanding of how the certs stored in various places are used
yacovm (Fri, 27 Nov 2020 21:13:33 GMT):
if you upgrade from 1.4.1 to 1.4.x then you are still running in compatibility mode with 1.4.1, until you make Fabric upgrade to the new version at the channel level
yacovm (Fri, 27 Nov 2020 21:13:38 GMT):
so there shouldn't be a problem
yacovm (Fri, 27 Nov 2020 21:14:06 GMT):
if you try the timeshift option to make your cluster alive
adanacs (Fri, 27 Nov 2020 21:14:13 GMT):
ok - I'll take a look at the links you sent and also try the upgrade - which version do you recommend
yacovm (Fri, 27 Nov 2020 21:14:15 GMT):
but i don't remember at what version i added it
yacovm (Fri, 27 Nov 2020 21:14:22 GMT):
always the latest, obviously
yacovm (Fri, 27 Nov 2020 21:14:36 GMT):
ohh it seems timeshift was introduced in 1.4.3 only
yacovm (Fri, 27 Nov 2020 21:14:53 GMT):
if you have a time machine, go back in time and ask me to implement it sooner
adanacs (Fri, 27 Nov 2020 21:15:33 GMT):
lol - unfortunately my time machine is dependent on the fabric being up and running
yacovm (Fri, 27 Nov 2020 21:15:42 GMT):
chicken and egg
adanacs (Fri, 27 Nov 2020 21:15:47 GMT):
thanks for you time - much appreciated!
yacovm (Fri, 27 Nov 2020 21:15:49 GMT):
np
adanacs (Sat, 28 Nov 2020 00:22:09 GMT):
I upgraded to 1.4.3 and had to do a few more config changes (separate cluster/orderer ports) but they're communicating now! thanks again
yacovm (Sat, 28 Nov 2020 01:04:23 GMT):
cool
ohryan (Wed, 16 Dec 2020 15:56:50 GMT):
Has joined the channel.
ohryan (Wed, 16 Dec 2020 16:00:27 GMT):
Hey there, looking through the `orderer` code, it seems like this binary does not accept a custom `configPath` to the `orderer.yaml`, and instead only looks at `.`, some `/env/*` path, and the `FABRIC_CFG_PATH` env var if you set it. Does anyone know why it was built this way, not allowing me to pass a path? Or does anyone know a way to add a custom path to the `orderer.yaml` on binary start that doesn't rely on FABRIC_CFG_PATH? Thank you.
alacambra (Wed, 23 Dec 2020 22:09:11 GMT):
Has joined the channel.
xujiaming (Sat, 02 Jan 2021 06:13:57 GMT):
Has joined the channel.
Roger (Thu, 07 Jan 2021 07:26:46 GMT):
Has left the channel.
akshay.sood (Tue, 02 Feb 2021 05:36:17 GMT):
Hey folks,
I am getting an error on my ordering nodes: `WAL: file already locked`
Can someone check this out, please?
https://stackoverflow.com/questions/65987424/hyperledger-fabric-orderers-throwing-wal-file-already-locked-error?noredirect=1#comment116681127_65987424
dvitas (Tue, 16 Feb 2021 15:10:42 GMT):
Has joined the channel.
dvitas (Tue, 16 Feb 2021 15:13:40 GMT):
Hi channel!
I wonder if there is an ongoing effort to create snapshots for the ledger DB in orderers, similar to the snapshot feature for peers which was introduced in v2.3?
That snapshot feature was a blessing for us, as it allows us to control the amount of disk space consumed. But the orderers are the problem now, as their disk space consumption grows uncontrollably.
I did not find a corresponding ticket in JIRA
dvitas (Tue, 16 Feb 2021 15:46:32 GMT):
It looks like we are going to develop the ledger snapshot feature for orderers ourselves, but I would like to know if some work has already been done on it.
jyellick (Thu, 18 Feb 2021 04:54:42 GMT):
Strictly speaking, there's very little work on the orderer side to be done for 'snapshotting'. The only state the orderers carry is the channel configuration, so it's really just a matter of block replication.
jyellick (Thu, 18 Feb 2021 04:55:16 GMT):
The bigger open question would be 'pruning', which should be made easier by the recent peer snapshot work, but, still needs to be well thought out and designed.
dvitas (Thu, 18 Feb 2021 08:45:46 GMT):
Thank you, Jason!
Yes, what we are actually looking for is pruning, that is, starting an orderer from a snapshot, as happens with peers, to manage the disk space consumption.
We will start researching the problem then. Of course the functionality of the peers is the natural starting point.
Also, it is acceptable for our use case to start all the orderers and peers from a snapshot of the same block, so not having historical blocks is not a problem in our use case.
I have a question regarding the formal procedure. I would like to indicate to the community that we are working on this. What is the best way? It looks like creating an epic in JIRA would be a good indicator that some work is being done, and the epic can be a central place to coordinate the effort if someone else is interested or has already done some research/development.
Is it OK for me to create an epic?
BrettLogan (Fri, 19 Feb 2021 01:06:16 GMT):
Major changes like this are submitted through the RFC process found here: https://github.com/hyperledger/fabric-rfcs
BrettLogan (Fri, 19 Feb 2021 01:07:04 GMT):
As an example, here is the RFC for snapshotting: https://github.com/hyperledger/fabric-rfcs/blob/master/text/0000-ledger-checkpointing.md
BrettLogan (Fri, 19 Feb 2021 01:08:16 GMT):
This will be especially helpful, as another company (I believe Fujitsu) indicated last year that they were interested in contributing pruning capabilities to Fabric, though as far as I'm aware there hasn't been any follow-up on that.
BrettLogan (Fri, 19 Feb 2021 01:08:47 GMT):
But this will give other technical contributors a chance to join the discussion and coordinate to make sure there is no duplication of effort (if there was any at all)
dvitas (Fri, 19 Feb 2021 11:09:28 GMT):
Thank you Brett!
The truth is that we are not ready to submit an RFC proposal at the moment; it might take some time (4-8 weeks).
What would you recommend to indicate our interest in the "orderer ledger pruning" feature, avoid duplication of effort, start collecting ideas, and maybe find people already doing some work on this?
The "RFC process" document mentions Fabric mailing list. Is it a good idea to declare our intent to work on this RFC there?
BrettLogan (Fri, 19 Feb 2021 23:00:42 GMT):
I would definitely take it to the mailing list, getting Dave, Manish and Jason involved in the conversation would be a great start as well as they would be the maintainers with a technical stake in the conversation. You can find their contact info here: https://github.com/hyperledger/fabric/blob/master/MAINTAINERS.md
dvitas (Sat, 20 Feb 2021 17:19:00 GMT):
ok, great, all clear, thank you!
kartheekgottipati (Sat, 27 Feb 2021 07:48:05 GMT):
Has joined the channel.
abhishekktpl (Fri, 05 Mar 2021 08:18:15 GMT):
Has joined the channel.
abhishekktpl (Fri, 05 Mar 2021 08:18:16 GMT):
Hi all, I happened to be testing some config changes. In the process I was able to change the orderer Admin policy to one of the application orgs, but now when I try to change the BatchTimeout or revert the policy back to Admins, I get this error:
[channel: testone] Rejecting broadcast of config message from 192.168.144.1:42158 because of error: error applying config update to existing channel 'testone': error authorizing update: error validating DeltaSet: policy for [Value] /Channel/Orderer/BatchTimeout not satisfied: implicit policy evaluation failed - 0 sub-policies were satisfied, but this policy requires 1 of the '/Channel/Application/testoneMSP/Admins' sub-policies to be satisfied
Is there any way the orderer Admin policy can be changed back?
jyellick (Tue, 09 Mar 2021 18:36:19 GMT):
Unfortunately not, the channel config is enforced by all nodes in the network, and can only be changed using the rules encoded there. You should revert to a backup if needed.
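The "implicit policy evaluation failed - 0 sub-policies were satisfied, but this policy requires 1" wording in errors like the one above comes from implicit meta-policy evaluation. A simplified model of that rule (not Fabric's actual implementation, just a sketch of the counting logic):

```go
package main

import "fmt"

// Simplified sketch of how an ImplicitMetaPolicy ("ANY", "ALL",
// "MAJORITY" over sub-policies) is evaluated. This models why the
// error reads "0 sub-policies were satisfied, but this policy
// requires 1 of ..." -- it is not Fabric's actual code.
func evaluateImplicitMeta(rule string, subPolicyResults []bool) (bool, int, int) {
	satisfied := 0
	for _, ok := range subPolicyResults {
		if ok {
			satisfied++
		}
	}
	n := len(subPolicyResults)
	var required int
	switch rule {
	case "ANY":
		required = 1
	case "ALL":
		required = n
	case "MAJORITY":
		required = n/2 + 1
	}
	return satisfied >= required, satisfied, required
}

func main() {
	// A config update carrying no valid signature from the required
	// org's admins: 0 sub-policies satisfied, 1 required -> rejected.
	ok, got, need := evaluateImplicitMeta("ANY", []bool{false})
	fmt.Printf("ok=%v satisfied=%d required=%d\n", ok, got, need)
}
```

So the error above means that none of the signatures on the update were recognized as satisfying any `/Channel/Application/testoneMSP/Admins` sub-policy.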
abhishekktpl (Wed, 10 Mar 2021 04:33:49 GMT):
The encoded rules refer to one of the Application org admins' signatures, so logically, if that particular org's signature is collected on the revert change, the change should be allowed?
jyellick (Thu, 11 Mar 2021 14:13:36 GMT):
If you can supply a signature set satisfying the policy, you may change it back. I'm unsure exactly how you modified your channel config, but if you modified it in such a way that you can no longer produce that signature set (perhaps by referencing a key that you have lost) then there is no easy answer.
abhishekktpl (Fri, 12 Mar 2021 11:47:11 GMT):
Actually, I set three Admin policies to this configuration. The other two Admin policies work fine, but not the orderer Admin one. The policy is /Channel/Application/testoneMSP/Admins, which I assume means a signature from a testoneMSP Admin should satisfy the condition, but that's not happening.
lukeledet (Thu, 08 Apr 2021 20:01:23 GMT):
Has joined the channel.
Dainius 2 (Tue, 08 Jun 2021 11:49:43 GMT):
Has joined the channel.
Param-S (Thu, 10 Jun 2021 18:13:38 GMT):
Has joined the channel.
knagware9 (Thu, 17 Jun 2021 10:52:37 GMT):
Hyperledger Fabric Upgrade : Issue while Upgrading fabric from 1.4.4 to 2.2
I am trying to upgrade a Fabric 1.4 network to the latest stable version, 2.2. In the migration steps, I am able to update the peer and orderer components to the latest version, but while updating the channel capabilities, when fetching the config block the peers are not able to connect to the orderer:
Error: could not not connect to ordering service:could not dial endpoint:dial tcp:lookup orderer.example.com on 192.168.x.xxxx :no such host channel=mychannel
@mastersingh24 @yacovm
yacovm (Thu, 17 Jun 2021 13:21:19 GMT):
look at the logs of the orderer
yacovm (Thu, 17 Jun 2021 13:21:24 GMT):
did it crash?
saurabhsharmabg (Tue, 22 Jun 2021 08:00:37 GMT):
Has joined the channel.
saurabhsharmabg (Tue, 22 Jun 2021 08:00:38 GMT):
How do I rotate the Raft orderer certificates? I am following the documentation here: https://hyperledger-fabric.readthedocs.io/en/release-1.4/raft_configuration.html#tls-certificate-rotation-for-an-orderer-node
It is not clear which values to update and which certificates to use for the updates.
saurabhsharmabg (Tue, 22 Jun 2021 11:23:34 GMT):
So when trying to update raft orderer certificates I am getting this error:
Error: got unexpected status: BAD_REQUEST -- error applying config update to existing channel 'mychannel': error authorizing update: error validating DeltaSet: policy for [Value] /Channel/Orderer/ConsensusType not satisfied: implicit policy evaluation failed - 0 sub-policies were satisfied, but this policy requires 1 of the 'Admins' sub-policies to be satisfied.
The issue is that I did not sign with the orderer's admin. How do I sign it as an admin of the orderer?
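A consenter-set (ConsensusType) update has to satisfy the orderer group's Admins policy, so the envelope needs an orderer-org admin signature before submission. A sketch of one way to do that with the `peer` CLI (the MSP ID and paths below are illustrative placeholders; adjust them to your own crypto material layout):

```shell
# Act as an orderer-org admin identity (hypothetical paths/MSP ID):
export CORE_PEER_LOCALMSPID="OrdererMSP"
export CORE_PEER_MSPCONFIGPATH=/path/to/ordererOrganizations/example.com/users/Admin@example.com/msp

# Add the orderer admin's signature to the config update envelope:
peer channel signconfigtx -f config_update_in_envelope.pb

# Then submit it; the submitter's own signature also counts:
peer channel update -f config_update_in_envelope.pb -c mychannel \
  -o orderer.example.com:7050 --tls --cafile /path/to/orderer/tls-ca.pem
```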
jmaric (Mon, 12 Jul 2021 16:38:33 GMT):
Has joined the channel.
bh4rtp (Wed, 18 Aug 2021 02:54:26 GMT):
Hi, for production, how should one back up the ledgers? Is it necessary to back up the ledger for every orderer and peer node?
wangxiaobo1216 (Mon, 30 Aug 2021 08:02:22 GMT):
Has joined the channel.
nikhil550 (Fri, 03 Sep 2021 20:09:43 GMT):
Has joined the channel.
nikhil550 (Fri, 03 Sep 2021 20:09:43 GMT):
Hello. I migrated from a Solo ordering service to a single-node Raft. When I added another ordering node to the system channel/channel, I received the following error:
nikhil550 (Fri, 03 Sep 2021 20:09:48 GMT):
```2021-09-03 19:58:43.904 UTC [orderer.common.cluster.replication] obtainStream -> INFO 048 Sending request for block [1] to orderer.example.com:7050 channel=byfn-sys-channel
2021-09-03 19:58:43.910 UTC [orderer.common.cluster.replication] pullBlocks -> INFO 049 Got block [1] of size 21 KB from orderer.example.com:7050 channel=byfn-sys-channel
2021-09-03 19:58:43.910 UTC [orderer.common.cluster.replication] pullBlocks -> INFO 04a Got block [2] of size 22 KB from orderer.example.com:7050 channel=byfn-sys-channel
2021-09-03 19:58:43.911 UTC [orderer.common.cluster.replication] pullBlocks -> INFO 04b Got block [3] of size 19 KB from orderer.example.com:7050 channel=byfn-sys-channel
2021-09-03 19:58:43.912 UTC [orderer.common.cluster.replication] pullBlocks -> INFO 04c Got block [4] of size 23 KB from orderer.example.com:7050 channel=byfn-sys-channel
2021-09-03 19:58:43.913 UTC [orderer.common.cluster.replication] pullBlocks -> INFO 04d Got block [5] of size 23 KB from orderer.example.com:7050 channel=byfn-sys-channel
2021-09-03 19:58:43.913 UTC [orderer.common.cluster.replication] pullBlocks -> INFO 04e Got block [6] of size 23 KB from orderer.example.com:7050 channel=byfn-sys-channel
2021-09-03 19:58:43.914 UTC [orderer.common.cluster.replication] pullBlocks -> INFO 04f Got block [7] of size 28 KB from orderer.example.com:7050 channel=byfn-sys-channel
2021-09-03 19:58:43.918 UTC [orderer.common.cluster] appendBlock -> PANI 050 Failed to write block [1]: unexpected Previous block hash. Expected PreviousHash = [e9a45f600b4e4e2b97256f8277635878188afdc00d1aa1656d15f65bd3459707], PreviousHash referred in the latest block= [3204b681d2022522b12630c65cd534b152e6d25f93d4f50a0c3994150b0b315d]
panic: Failed to write block [1]: unexpected Previous block hash. Expected PreviousHash = [e9a45f600b4e4e2b97256f8277635878188afdc00d1aa1656d15f65bd3459707], PreviousHash referred in the latest block= [3204b681d2022522b12630c65cd534b152e6d25f93d4f50a0c3994150b0b315d]
```
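The panic above means the block pulled from the other orderer does not chain onto the local latest block, i.e. the two nodes have diverging ledgers (for example, the new node was bootstrapped from a different genesis block). A simplified model of the hash-chain check behind that panic (Fabric actually hashes an ASN.1 encoding of the block header; this sketch just hashes raw bytes):

```go
package main

import (
	"bytes"
	"crypto/sha256"
	"encoding/binary"
	"fmt"
)

// Simplified model of Fabric's block hash chain: each block header
// carries the hash of the previous block's header, so a block pulled
// from another orderer only appends cleanly if both nodes share the
// same history.
type block struct {
	number       uint64
	previousHash []byte
	data         []byte
}

func headerHash(b block) []byte {
	h := sha256.New()
	num := make([]byte, 8)
	binary.BigEndian.PutUint64(num, b.number)
	h.Write(num)
	h.Write(b.previousHash)
	dataHash := sha256.Sum256(b.data)
	h.Write(dataHash[:])
	return h.Sum(nil)
}

// appendCheck mirrors the check behind the orderer's
// "unexpected Previous block hash" panic.
func appendCheck(chain []block, b block) ([]block, error) {
	expected := headerHash(chain[len(chain)-1])
	if !bytes.Equal(b.previousHash, expected) {
		return chain, fmt.Errorf("unexpected Previous block hash: expected %x, referred %x",
			expected, b.previousHash)
	}
	return append(chain, b), nil
}

func main() {
	genesisA := block{0, make([]byte, 32), []byte("genesis-A")}
	chain := []block{genesisA}

	// A block built on a *different* genesis fails the check:
	genesisB := block{0, make([]byte, 32), []byte("genesis-B")}
	bad := block{1, headerHash(genesisB), []byte("tx-data")}
	_, err := appendCheck(chain, bad)
	fmt.Println("rejected:", err != nil)

	// A block built on the local tip appends fine:
	good := block{1, headerHash(genesisA), []byte("tx-data")}
	chain, err = appendCheck(chain, good)
	fmt.Println("appended:", err == nil, "height:", len(chain))
}
```

In this situation the usual remedy is to make sure the new orderer is bootstrapped from the same genesis/config block as the existing one, rather than from a freshly generated one.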
robinrob (Thu, 09 Sep 2021 21:50:16 GMT):
Has left the channel.
akshay.sood (Thu, 16 Sep 2021 18:30:58 GMT):
Hey guys,
after rotating the TLS certificate of one of the orderers, it started throwing the following warning:
```
2021-09-16 18:28:33.791 UTC [orderer.common.cluster.puller] fetchLastBlockSeq -> WARN 294 Received status:NOT_FOUND from orderer0.org1.com:7050: faulty node, received: status:NOT_FOUND channel=assetschannel
2021-09-16 18:28:33.791 UTC [orderer.common.cluster.puller] func1 -> WARN 295 Received error of type 'faulty node, received: status:NOT_FOUND ' from orderer0.org1.com:7050 channel=assetschannel
```
Has anyone faced similar issue before?
I rotated the TLS certificate in the config block, then changed the certificates in the orderer's volume and restarted the node. After the restart, it started throwing these warnings.
akshay.sood (Thu, 16 Sep 2021 18:31:47 GMT):
Besides this, this node is no longer available for consensus
akshay.sood (Thu, 16 Sep 2021 18:33:23 GMT):
I also rotated the enrollment certificate along with the tls one
kopaygorodsky (Thu, 16 Sep 2021 19:51:31 GMT):
Hey, I've just stumbled on the same error. The network was up and running, then one orderer crashed because it ran out of disk space. I resized the disk, and now the orderer fails to start. I replicated the same behaviour on other orderers.
Did you find the problem?
kopaygorodsky (Thu, 16 Sep 2021 19:53:39 GMT):
Each orderer has its own disk, and there is no way some other orderer is holding a lock. What I found is that the orderer tries to replicate the same channel twice, and on the second try it panics.
kopaygorodsky (Thu, 16 Sep 2021 21:50:32 GMT):
According to the logs, it tries to create chain support twice for the same channel.
kopaygorodsky (Thu, 16 Sep 2021 22:04:02 GMT):
```2021-09-17 00:59:50.345 EEST 013b INFO [orderer.consensus.etcdraft] createOrReadWAL -> Found WAL data at path '/Users/kopaihorodskyi/ord1/orderer/etcdraft/wal/channel', replaying it channel=channel node=1
```
kopaygorodsky (Thu, 16 Sep 2021 22:05:24 GMT):
@yacovm could you take a look, please? Any hints?
kopaygorodsky (Thu, 16 Sep 2021 22:15:36 GMT):
Adding a retry here helps:
https://github.com/hyperledger/fabric/blob/main/orderer/consensus/etcdraft/storage.go#L90
Is it some kind of race condition where the raft node is restarting and the storage lock wasn't released?
I found something similar here https://github.com/docker/swarmkit/issues/1421, and they fixed it with a retry too.
yacovm (Thu, 16 Sep 2021 23:08:27 GMT):
@kopaygorodsky the ledger creation in the orderer is not atomic :(
yacovm (Thu, 16 Sep 2021 23:08:41 GMT):
it's a well-known problem that no one is fixing, for some reason
yacovm (Thu, 16 Sep 2021 23:09:03 GMT):
I think you should open a github issue
yacovm (Thu, 16 Sep 2021 23:09:21 GMT):
https://github.com/hyperledger/fabric/issues
yacovm (Thu, 16 Sep 2021 23:10:54 GMT):
> What I found that orderer tries to replicate the same channel twice, and on second try it panics.
why is it doing that? :thinking:
yacovm (Thu, 16 Sep 2021 23:11:05 GMT):
can you PM me the logs?
kopaygorodsky (Thu, 16 Sep 2021 23:34:28 GMT):
https://github.com/hyperledger/fabric/issues/2931
akshay.sood (Fri, 17 Sep 2021 03:02:59 GMT):
@yacovm can you check this one
yacovm (Fri, 17 Sep 2021 08:42:39 GMT):
Probably you rotated the certificate wrong or something
action-sj (Tue, 12 Oct 2021 01:29:25 GMT):
Has joined the channel.
Tipuch (Fri, 12 Nov 2021 21:16:50 GMT):
Has joined the channel.
Tipuch (Fri, 12 Nov 2021 21:23:44 GMT):
Hi, I'm getting an error when creating a channel from the peer. The failing certificate is my peer node's signcert (I identified it from the certificate serial number). I have validated the peer's signcert against the orderer's cacert with openssl, and I've also tried regenerating the architecture's certificates from the CA (I'm using fabric-ca) and restarting the orderer and peer processes.
Here is the error log from the orderer's side:
Here is the error log from the orderer's side:
```2021-11-12 20:55:15.674 UTC [policies] SignatureSetToValidIdentities -> WARN 016 invalid identity: certificate subject=CN=peer1,OU=peer,O=******,L=******,ST=*****,C=****** serialnumber=344915063405395599426328589903861193567263965445 error="the supplied identity is not valid: x509: certificate signed by unknown authority"
2021-11-12 20:55:15.674 UTC [orderer.common.broadcast] ProcessMessage -> WARN 017 [channel: gdbchannel] Rejecting broadcast of config message from 100.64.0.161:39258 because of error: config update for existing channel did not pass initial checks: implicit policy evaluation failed - 0 sub-policies were satisfied, but this policy requires 1 of the 'Writers' sub-policies to be satisfied: permission denied
2021-11-12 20:55:15.674 UTC [comm.grpc.server] 1 -> INFO 018 streaming call completed grpc.service=orderer.AtomicBroadcast grpc.method=Broadcast grpc.peer_address=100.64.0.161:39258 grpc.code=OK grpc.call_duration=821.152µs
2021-11-12 20:55:15.677 UTC [common.deliver] Handle -> WARN 019 Error reading from 100.64.0.161:39256: rpc error: code = Canceled desc = context canceled```
Would anyone have ideas of what to check / the next steps to diagnose and/or fix the issue I'm facing please?
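For anyone debugging a "certificate signed by unknown authority" error like the one above, the openssl check mentioned can be reproduced end-to-end with throwaway material. This self-contained sketch builds a test CA and a leaf cert and verifies the chain, which is essentially the check the orderer performs:

```shell
# Build a throwaway CA and a CA-signed leaf cert, then verify the
# leaf against the CA (all files here are temporary test artifacts):
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout ca.key -out ca.pem -subj "/CN=test-ca"
openssl req -newkey rsa:2048 -nodes \
  -keyout peer.key -out peer.csr -subj "/CN=peer1"
openssl x509 -req -in peer.csr -CA ca.pem -CAkey ca.key \
  -CAcreateserial -out peer.pem -days 1
openssl verify -CAfile ca.pem peer.pem   # prints "peer.pem: OK"
```

The caveat is that the orderer validates identities against the root certs embedded in the channel config's MSP definitions, not against files on disk, so a local openssl check can pass while the channel config still holds an older CA cert. One way to confirm (workflow is an assumption, adjust to your setup) is to fetch the config block with `peer channel fetch config`, decode it with `configtxlator proto_decode`, and compare the base64-decoded root certs in your org's MSP section with the CA cert you verified against.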
s.vahidi (Sun, 23 Jan 2022 12:02:12 GMT):
Has joined the channel.
rjones (Wed, 23 Mar 2022 17:25:02 GMT):