Friday, August 27, 2010
Yes, I'm a computer guy and not supposed to get viruses, but I got one recently. I went to a website and my antivirus software (Symantec) popped up a warning about a virus. It appeared to have cleared the virus, but then I kept getting more warnings about a Trojan.Gen in files named DWH*.tmp. I dug a little further and found that my temp folder was filling up with these .tmp files. I was too busy to deal with it, so it kept going on and on and I just kept closing the alert. I figured I was going to have to rebuild my computer or create a new profile to get rid of it. I finally found an article online explaining that Symantec Antivirus was actually causing the problem. Apparently, Windows was indexing the computer, including the Symantec Quarantine folder (c:\Users\All users\Symantec\Symantec Endpoint Protection\Quarantine), and every time it did, it triggered a new virus alert. I had to log in as the local administrator and delete the quarantine and log files so the warnings would stop.
So much for computer guys not getting viruses. And so much for Symantec doing its job.
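If you hit the same thing, clearing the quarantine from an elevated command prompt looks roughly like this. This is only a sketch based on my machine; the quarantine path and the location of smc.exe vary between Symantec Endpoint Protection versions, so adjust them for your install:
smc -stop [stops the Symantec Endpoint Protection client so the quarantine files aren't locked]
del /q "C:\Users\All Users\Symantec\Symantec Endpoint Protection\Quarantine\*.*" [deletes the quarantined DWH*.tmp source files]
smc -start [starts the client back up]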
Friday, July 9, 2010
Command Line Saves the Day
Had an interesting issue this morning. A computer had lost its trust relationship with the domain controller. To make matters worse, the local administrator password was not working and the account had been disabled. I did have command line access to the machine through the Kaseya K2 agent, so I connected to the computer via the command line and ran the following commands:
net user administrator password [sets the administrator password to "password"]
net user administrator /active:yes [enables the administrator account]
Once the administrator password was reset and the account was enabled, I was able to log in. I then disjoined and rejoined the domain.
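If you would rather handle the domain repair from that same command line session, netdom (from the Windows Support Tools, or RSAT on newer versions of Windows) can do it. This is just a sketch; MYDOMAIN and DCNAME are placeholders for your domain and domain controller:
netdom remove %COMPUTERNAME% /domain:MYDOMAIN /userd:MYDOMAIN\administrator /passwordd:* [removes the computer from the domain]
netdom join %COMPUTERNAME% /domain:MYDOMAIN /userd:MYDOMAIN\administrator /passwordd:* /reboot:10 [rejoins the domain and reboots after 10 seconds]
In many cases netdom resetpwd /server:DCNAME /userd:MYDOMAIN\administrator /passwordd:* will reset the machine account password and repair the broken trust without a full disjoin/rejoin.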
Thursday, June 10, 2010
Windows 2003 SBS and Terminal Server Licensing:
This has been a long-known and troubling issue. If a client has Windows 2003 Small Business Server, Terminal Services can only be installed in Remote Desktop for Administration mode, which allows a maximum of two simultaneous remote connections. The only supported way to resolve the problem is to upgrade from 2003 SBS to 2003 Server. After being tormented by this problem more than once, I did a search for "2003 SBS Terminal Server Hack" and found some nice instructions and a patch file.
Here are the instructions:
1. Install VPatch
2. From the VPatch directory, launch vpatchprompt.exe
3. VPatchPrompt will ask you for the following files:
- patch file (the .pat file)
- source file (the original termsrv.dll)
- destination file (the patched termsrv.dll)
4. Reboot the computer in Safe Mode and replace c:\windows\system32\termsrv.dll with the patched file (see the copy commands below)
5. Reboot in normal mode.
The patch file can be downloaded from http://www.remkoweijnen.nl/blog/download/2003tspatch.zip
Vpatch can be downloaded from http://www.tibed.net/vpatch
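When it comes time to swap the file in Safe Mode, the copy itself is just a couple of commands. This is only a sketch; c:\patched\termsrv.dll is a placeholder for wherever you saved VPatch's output, and on Server 2003 Windows File Protection may quietly restore the original unless the cached copy in dllcache is replaced as well:
copy c:\windows\system32\termsrv.dll c:\windows\system32\termsrv.dll.bak [back up the original first]
copy /y c:\patched\termsrv.dll c:\windows\system32\termsrv.dll [overwrite with the patched file]
copy /y c:\patched\termsrv.dll c:\windows\system32\dllcache\termsrv.dll [keep Windows File Protection from putting the old version back]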
Wednesday, April 21, 2010
Intermedia Outage
Intermedia had another outage last week. Finally received their RFO written by Jonathan McCormick, Intermedia Chief Operating Officer:
I personally and on behalf of Intermedia apologize for the April 16 and 17, 2010 service outage you experienced. This letter is a follow-up to the information you received from Intermedia CEO Serguei Sofinski. As part of our commitment to transparency, it addresses the following items:
• Detailed Reason for Outage (RFO)
• Service Credit
• Corrective Action Plans
• Client Communication
Detailed Reason for Outage (RFO):
At approximately 6:15 a.m. PT on Thursday 4/16, a hardware failure occurred on one of the EMC storage area networks (SANs) located in Intermedia’s New Jersey data center. The service processor for one of the controller nodes had a failure. This failure caused the entire load for that SAN to be shifted to the service processor on the redundant controller node.
The spare capacity on the single service processor was not enough to handle the entire load of all systems connected to the SAN, which caused a degradation of performance for reading and writing data to the SAN. The degraded SAN performance in turn impacted the overall system’s ability to process email messages, creating a queue of several hundred thousand messages within the system. The backlog was large enough that it took 32 hours to clear after the original event. At approximately 2 p.m. PT on Friday 4/17, all systems were functioning normally and mail delivery was considered to be “real-time.”
Service Credit:
In accordance with the terms of your SLA, a service credit for the above time period will be proactively applied to your account balance by the close of business on Friday 4/23.
Corrective Actions:
• Our SAN vendor analyzed the system logs for the event. The vendor determined that the service processor failure occurred due to a unique bug in the specific version of firmware on the system. This bug caused the service processor to “panic” and automatically take itself off line. As the first corrective action, on Friday 4/17 at 11 p.m. PT, our vendor performed an emergency upgrade to the version of firmware running on the SAN. This newer version of firmware has a fix for the bug that caused the failure we experienced.
• Since the outage, as the second corrective action, we have added additional processing capacity to the SMTP hub farm in this domain. We have also performed performance tuning on the SMTP hubs to guarantee that they are able to more rapidly process a larger than normal queue of messages.
• Over the next several weeks, we will be taking additional corrective actions to make certain that there is enough spare capacity on the SAN to guarantee that it performs without performance degradation in the case of a single hardware failure. An additional SAN is being installed this week and starting as early as this weekend we will begin to migrate a portion of the existing systems to the new SAN. Additionally, we have engaged our SAN vendor to review the performance tuning of our SAN and implement adjustments to increase its overall performance capabilities. These events in tandem will guarantee that the SAN will be able to perform without an impact to the service in the event we experience another individual hardware error.
Client Communication:
We have received significant constructive feedback regarding our communication throughout the outage. We recognize the importance of proactive communication of timely, detailed information that clearly explains the current impact on your service.
Intermedia recognizes the fact that our current client notification tools and processes are more reactive than proactive and that they do not function well in an outage situation. For this reason, we have developed a new client notification tool that will be used by the Technical Support organization to proactively notify and communicate with clients during a service interruption. The new notification tool will be released at the end of April and will be put into operation during the month of May.
This new notification tool will equip the Technical Support organization with the ability to rapidly create a list of affected accounts and instantly generate an appropriate message to be sent to the account contacts of an affected account via both email and SMS (text messaging).
We will notify you when the notification tool has been implemented, as your account contacts will need to update their information with an SMS address to receive notifications.
I want to assure you that we recognize the importance of business communications and the negative effect on your business when the service is not available. Your feedback is always appreciated. We welcome your feedback regarding our service at Feedback@intermedia.net. This distribution list is monitored by the entire Intermedia management team.
Sincerely,
Jonathan McCormick
Intermedia Chief Operating Officer
Friday, March 12, 2010
Certificates now requiring 2048 bit encryption
Attempted to renew the certificate on my mail server yesterday, but found out that my 1024-bit certificate could not be renewed by GoDaddy. The reason is that Microsoft will no longer accept root certificates with an RSA 1024-bit modulus, regardless of expiration date. I had to revoke my 1024-bit certificate and issue a new certificate request with a 2048-bit key. This took webmail access to the server offline during the upgrade, but only for a short time while I removed the old certificate, created a new certificate request, uploaded the request to GoDaddy, and then received and installed the new certificate.
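For anyone doing the same thing from the command line rather than the IIS certificate wizard, certreq can generate the 2048-bit request. This is only a sketch; the subject name below is a placeholder for your mail server's public name. Save the following as request.inf:
[NewRequest]
Subject = "CN=mail.example.com"
KeyLength = 2048
Exportable = TRUE
MachineKeySet = TRUE
RequestType = PKCS10
Then run:
certreq -new request.inf request.csr [creates the CSR to paste into GoDaddy's request form]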
For more information, please see our website at www.24hourtek.com.
Intermedia Outage
Intermedia finally sent an RFO (Reason for Outage) for the major outage on March 5th. It sounds like the real problem was that they weren't keeping the firmware on their EMC boxes up to date. I wonder if they knew that new firmware was available and were aware that there was a problem with the old version. I always recommend monitoring the firmware versions on all hardware, reading the release notes whenever a new version comes out, and applying the update if it fixes a bug or prevents a failure. I've seen this a lot with Dell RAID firmware, which seems to get updated every six months or so to fix newly discovered problems that could otherwise lead to a RAID failure. Hopefully this will be the last outage for a while.
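As a quick example of the kind of monitoring I mean, if Dell OpenManage Server Administrator happens to be installed on the server, a couple of omreport commands will show the current controller and disk firmware so it can be compared against Dell's release notes (controller 0 below is a placeholder for whichever controller ID omreport lists):
omreport storage controller [lists each RAID controller along with its firmware and driver versions]
omreport storage pdisk controller=0 [lists the physical disks on controller 0, including their firmware revisions]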
Dear David Moss,
Regarding your account: Tek24hour
As a follow-up to Intermedia’s CEO, Serguei Sofinski’s letter regarding the March 5, 2010 outage with your service, and in our continuing commitment to complete transparency, this letter addresses the following items:
• Detailed Reason for Outage (RFO)
• Corrective action plans
• Timeline of events surrounding the outage
RFO – Client Infrastructure
On March 5 at approximately 6 a.m. PST Intermedia’s monitoring system began to display alerts for high RPC (remote procedure call) latency on several of our Exchange database servers in multiple domains. Distributed applications and services within the Exchange domain communicate via RPC. The high RPC latency in-turn began to affect the front-end services within each domain that process mail flow and manage client connectivity. The RPC latency continued to increase and eventually hit a critical point that effectively prevented the processing of commands by the Exchange database servers which caused front-end services to back-up resulting in the queuing of mail and disconnecting of clients. Minutes later all cluster services on the Exchange databases began to fail. By 6:30 a.m. PST, all senior Intermedia engineers were engaged in resolving the issue.
The cause of the RPC Latency on the Exchange database servers was due to poor I/O (input/output) performance on one of our EMC CX3-80 SANs. This resulted in long wait times for reads and writes to the databases. The root cause of the poor I/O performance on the SAN was determined to be faulty hardware; specifically disk 14 in enclosure five (5) was in a partially failed state. By design this should not have affected performance of the EMC CX3-80 SAN or the Exchange database servers connected to the SAN.
The EMC CX3-80 is an enterprise SAN designed with redundant components. Each EMC CX3-80 SAN contains 32 enclosures that are serviced by redundant controllers, each with live service processors. Data stored on the SAN is striped across multiple disks within multiple enclosures. Each enclosure has 14 active disks plus a hot standby disk available within it to take over for failed disks. All Exchange database servers are clustered and each server within the cluster is multi-pathed via separate fiber connections and fiber switches to each service processor. The databases reside as single copies of data on the SAN.
Under normal operation, if the service processor on a SAN recognizes a faulty disk it will automatically bypass it and replace it with the hot standby. The hot standby then becomes part of the raid group and data is automatically redistributed to it as a background process. Because data is striped across multiple disks using bit parity, this action happens automatically without impacting performance of the SAN.
The failure that occurred on March 5 was unique in the aspect that the SAN did not perform as expected. The faulty disk was generating large amounts of soft SCSI errors, but the service processor failed to remove it from the raid group. Service Processor A continued to process large amounts of errors being created by the faulty disk, which in-turn made it unable to deliver data to the servers at an acceptable rate.
At approximately 9:30 a.m. PST the faulty hard drive was recognized as the root cause of the issue and was manually bypassed, and performance of the SAN returned to normal. Based on experience, the time between 6:30 a.m. PST and 9:30 a.m. PST was spent troubleshooting more common causes of performance degradation within the SAN and associated fiber network.
Due to their extreme sensitivity to high RPC latency, all Microsoft Clustering Services within the affected domains had failed. A cold restart of all nodes was required to return services to normal. A cold restart requires shutting down all servers and bringing them up one at a time until everything is back on-line. Although there were a dozen Intermedia system administrators focused on this task as their sole priority, the restoration of Exchange services took several hours and was completed by approximately 11:30 a.m. PST.
During the event, incoming mail was queued within the hubs and/or mail filters and then subsequently delivered throughout the afternoon to the Exchange database servers.
RFO – Corporate Communications and Account Administration tools
During the event, our ability to communicate status effectively was hindered by an outage of our corporate communication tools until 9:50 a.m. PST. The databases for www.Intermedia.net, Intermedia’s client control panel and Intermedia’s trouble ticket system were located on the affected SAN and therefore were not available during the SAN event. These systems were restored as soon as the SAN performance issue was resolved. All available personnel were directed to answer incoming customer calls. Intermedia logged over 2,000 incoming calls to our PBX and effectively answered more than 1,000 of those calls.
Corrective Action Plans
At the time of the event, Intermedia escalated the performance issue with the EMC CX3-80 SAN to both Dell and EMC senior support engineers. Both Dell and EMC have continued to evaluate the root cause of the event since the outage and have recommended an upgrade to the version of flare code (firmware) running on the affected EMC CX3-80. The newer version of flare code has improvements in the way the system processes different types of disk failures. It is the belief of both Dell and EMC that this newer version of flare code will prevent a recurrence of a similar issue. We are planning the upgrade at this time and expect to have it completed within the next 30 days. You will receive a maintenance notification when the upgrade is scheduled.
As a high priority for completion, no later than Q2, Intermedia will also be isolating corporate communication infrastructure from the same infrastructure that provides our Exchange services, guaranteeing that we will be able to communicate effectively with clients at all times during a service interruption. Additionally, we will be rolling out a new, internally developed, client communication tool in late Q2 that enables more efficient and proactive communication with our clients via SMS as well as email.
Timeline of Event (PST)
• 6:00 a.m. – RPC latency threshold monitors begin to alert
• 6:30 a.m. – Client services begin to be impacted
• 6:30 a.m. – Intermedia VP of Operations and COO are notified and the event is classified as a Severity 1 outage, critical response team is deployed
• 6:30 a.m. – 7:30 a.m. – SAN processing priorities are adjusted in an attempt to improve performance of the SAN without success
• 7:30 a.m. – 8:30 a.m. – Indications of fiber path errors lead the team to troubleshoot potential fiber network issues
• 8:00 a.m. – Dell engineers are engaged to help identify root cause of the SAN performance issues
• 9:30 a.m. – Faulty hard drive is determined to be the root cause of the issue and is manually bypassed returning SAN performance to normal
• 9:50 a.m. – Control panel, www.intermedia.net and the ticketing system are back on-line
• 9:30 a.m. – 11:30 a.m. – All Exchange services are brought back on-line
• 11:30 a.m. – 5:00 a.m. – BlackBerry services catch up and mail queues clear
We recognize the importance of business communications and understand the great responsibility we have accepted by being your chosen provider. I want to assure you that from the moment the outage was classified as a Severity 1 event, Intermedia’s most senior engineers were engaged and focused on resolving the issue as their sole priority. After any service impacting outage, we invest significant resources in analyzing the event in an attempt to continually improve the service levels we deliver. Your feedback is always appreciated and helps Intermedia better serve you. We welcome your feedback regarding our services at Feedback@intermedia.net. This distribution list is monitored by the entire Intermedia management team.
Sincerely,
Jonathan McCormick
Intermedia Chief Operating Officer
Wednesday, March 10, 2010
Blackberry - Duplicate Email Issue
Had an interesting issue with a BlackBerry today. The user was receiving duplicate emails and calendar invites on the BlackBerry, and accepting a calendar invite also sent a second .ics file. We contacted Intermedia for support and they recommended wiping and reloading the BlackBerry, but this did not resolve the problem. We finally determined that the user had a delegate set up; all of the messages were being forwarded to the delegate, which was causing the duplication. Removing the delegate resolved the problem.
For more information, please visit our website at www.24hourtek.com.