/var/log/DMIT-NOC.log
DMIT has increased the granularity of the transfer statistics, and during this work we found some problems in the traffic accounting.

The goal is to apply the rate limit quickly after the transfer allowance runs out or is reset.

When we updated the code, we used >= instead of > in the last-sync-time comparison.

This caused the traffic at the last time point to be counted again on every subsequent sync.
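A minimal sketch of this off-by-one, with assumed sample data and a hypothetical helper (not DMIT's actual accounting code):

```python
# Hypothetical reconstruction of the bug: each sync sums traffic
# samples newer than the last sync time.
SAMPLES = [(1, 100), (2, 200), (3, 300)]  # (timestamp, bytes)

def traffic_since(samples, last_sync, inclusive):
    """Sum bytes for samples after last_sync.

    inclusive=True models the buggy '>=' comparison: the sample taken
    exactly at last_sync is counted again on the next sync cycle.
    """
    if inclusive:
        return sum(b for ts, b in samples if ts >= last_sync)
    return sum(b for ts, b in samples if ts > last_sync)

buggy = traffic_since(SAMPLES, 2, inclusive=True)   # 200 + 300 = 500
fixed = traffic_since(SAMPLES, 2, inclusive=False)  # 300
```

With `>=`, the 200-byte sample at timestamp 2 is billed both in the sync that ends at 2 and in the one that starts at 2; the strict `>` counts it exactly once.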

Solution: we have reset all transfer statistics and lifted all suspensions caused by exhausted transfer. Please contact us if your VM is still suspended.
We are already working on the network issue in Tokyo Pro.

CTGnet responded that they had already fixed it, but we found that it is not fixed.

There is not much detail we can offer right now.
For TYO.Pro:

We are still working hard to communicate with China Telecom; we will update here as soon as there is progress.
China Telecom is still working on this.

This network fault is caused by China Telecom. There is a network issue on the CTGnet device.

Fault 1: The return route should go via CN2 instead of AS4134.

Fault 2: TCP packet loss.
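As a rough illustration of Fault 1, here is a simplified, hypothetical check over an AS-annotated trace (CN2 is AS4809, the regular China Telecom backbone is AS4134; real CN2 paths can be more nuanced than this):

```python
CN2_ASN = 4809    # China Telecom Next Carrier Network (CN2)
CT163_ASN = 4134  # China Telecom backbone (163 network)

def return_path_ok(as_path):
    """Simplified check: the return path should transit CN2 (AS4809)
    rather than falling back onto the AS4134 backbone."""
    return CN2_ASN in as_path and CT163_ASN not in as_path

# AS paths as they might appear hop by hop in a trace:
assert not return_path_ok([906, 4134, 4134])  # faulty: via AS4134
assert return_path_ok([906, 4809, 4809])      # expected: via CN2
```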

===
DMIT is currently working with China Telecom to restore this service.

Once the service has been restored, we will offer SLA compensation.
Please test the TYO Pro.

China Telecom reports that the circuit is back to normal. We have confirmed this based on our general diagnostics.

Submit / reply to the ticket ASAP if you still have problems.
We are still following up with CTGnet on the incident investigation and report;
SLA compensation details will be posted here within 2 weeks.
TYO Pro RFO (Reason for Outage) from CTGnet:

Quote:
> hardware fault, and the circuit was restored after manufacturer urgent fixed it.


DMIT will issue SLA compensation for all active TYO Pro orders placed before Feb 8, 2023. Please wait for more details.
DMIT.io maintenance; ETA 2hr
Extended by 2 hours;
The maintenance was completed 2 hours ago;
We've improved the performance of the control panel.
Please let us know if you find any errors or hard-to-use cases in the new control panel.

Thank you for your assistance.
DMIT now supports PTR record updates;

For IPv6, we also support /64 wildcard PTR records;
However, more-specific IPv6 PTR records will not be supported.

PTR updates can be made by clicking the change-hostname icon to the right of your hostname.

The AUP also applies to PTR records.
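As a sketch of what a /64 wildcard PTR covers, the reverse-DNS name for an IPv6 address and its /64 wildcard zone can be derived with the Python standard library (illustrative only; the helper names are ours):

```python
import ipaddress

def ptr_name(addr):
    # Full nibble-format reverse-DNS name for a single IPv6 address.
    return ipaddress.ip_address(addr).reverse_pointer

def wildcard_ptr_zone(addr):
    # A /64 is the last 64 bits of host, i.e. the first 16 nibble
    # labels of the reverse name; drop them and wildcard the rest.
    labels = ptr_name(addr).split(".")
    return "*." + ".".join(labels[16:])

print(ptr_name("2001:db8::1"))
# 1.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.8.b.d.0.1.0.0.2.ip6.arpa
print(wildcard_ptr_zone("2001:db8::1"))
# *.0.0.0.0.0.0.0.0.8.b.d.0.1.0.0.2.ip6.arpa
```

A single wildcard record at the /64 zone answers for every address in that prefix, which is why per-address ("more specific") IPv6 records are a separate feature.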
DMIT does not support outbound access to destination TCP port 25.
The snapshot feature has become available in LAX.
Every LAX Pro instance has 1 snapshot quota; sPro has 2.
Extra snapshot upgrades and snapshot selection at order time will be available soon.

This feature will come to SJC next, then HKG, and finally TYO.
SJC is out of hardware resources.

The new SSD, RAM, and servers are on the way.

1. Two nodes will be rebooted next weekend ( March 17 or 19) for a RAM upgrade.

2. The block storage will be doubled, and the snapshot will be available this weekend (March 10 or 12).

3. Your VM might be cold-migrated (one reboot) to a new node without notice due to tight hardware resources.

4. New instance purchases will be available the day after that work (March 11 or 13), if item 2 is completed on time.
We have to perform an emergency reboot on some SJC nodes. It will be done very soon.
We've noticed I/O errors in SJC; investigating. We'll keep you posted.
The network component we use configured the layer3+4 transmit hash policy on the bond interface, which is not supported by InfiniBand.

This caused a disconnection dead-loop across the entire SJC Ceph cluster.

We've removed the configuration applied by the component and locked it.
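For context, on Linux the bonding transmit hash policy is an Ethernet-bond setting; this is an illustrative sysfs fragment only (the interface name `bond0` is an assumption), not DMIT's actual configuration:

```shell
# The layer3+4 xmit hash policy only applies to Ethernet bond modes
# (balance-xor / 802.3ad). IPoIB (InfiniBand) bonds support only
# active-backup, so this policy must not be applied there.
cat /sys/class/net/bond0/bonding/xmit_hash_policy
# layer2 0   <- kernel default

# Setting the policy on an Ethernet bond (invalid for IPoIB):
echo layer3+4 > /sys/class/net/bond0/bonding/xmit_hash_policy
```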
We are experiencing extremely high load in SJC; new hardware is on the way.
The new NVMe block storage hardware will be installed tomorrow.
We are working on restoring the Ceph OSDs; the problem has been found, but recovery will still take more time.