ISC2 has finally given formal approval to my CISSP certification. One major goal for the year has been completed…now for all the rest!
It looks like big telco is trying to break up wholesale subsidies, according to an article in Ars Technica.
I’m split in my opinion on this one. On one hand, we need competition and smaller providers. Smaller providers often offer a variety of services that SMBs can leverage as they expand and grow, at a better price than their competitors. They will often work directly with customers who have big-customer needs but not the ability to pay big-customer prices. These lower wholesale prices help small providers punch well above their weight class, while still compensating large providers for their infrastructure investment.
On the other hand, I’ve seen what happens when regulated monopolies are forced to open their infrastructure to their competitors while barely being able to recover costs. Eventually, the infrastructure lags as the incumbent is unable or unwilling to reinvest in upgrades. This leaves the smaller of the “big” incumbents ripe for takeover, which in turn leads to the new owner simply milking the investment and ditching it later. This is bad for all customers, but worst for residential customers, who bear the brunt of the costs for ‘service of last resort’ in rural areas.
FairPoint (ME/NH/VT) is a great example of this. When Verizon made the choice to get out of the wires business, they stopped investing in their people and infrastructure in preparation for the sale. Along came FairPoint, a small conglomerate of municipal-level telcos who somehow scraped together the money for the purchase. The three PUCs stepped in and strong-armed terms into the deal requiring investment in broadband for rural areas, adding to the already debt-heavy deal. Somewhere along the line, a VC firm stepped in to bail out FairPoint, which had been bleeding money. The writing was on the wall at that point – FairPoint was up for sale. Eventually, Consolidated Communications picked up FairPoint, likely for less than Verizon sold it for. Who lost out in the end? Most of rural New England, who during FairPoint’s reign lost their provider of last resort, only recently received even moderately high-speed bandwidth, and now suffer through service calls bungled by low-cost contract workers.
At the end of the day, the best solution is to maintain the wholesale structure but negotiate a fair price for both sides. The incumbent should be able to offer service across their own network at the same price the small guy charges when reselling it. This would force the smaller company to be creative and offer value-adds not found at the incumbent. Even with this approach, I’m not sure what the end result would be, or whether it would even be ‘fair’ in today’s environment. Maybe we should all just get fiber to the home and transition to IP services?
I’ve made a great deal of progress with my personal goals over the last few months. My CISSP is currently in review waiting for final approval, and my GPEN is in progress. I’ve even managed to post semi-regular blog posts.
We are steadily making progress on our family goals as well. A child enrolled in college, one property sold, another on the market, and an offer placed on our new property. If things keep moving at this pace, 2018 is going to be a great year.
More to come!
One of my biggest annoyances with my regular Nessus scans is the continuous stream of medium risks related to weak SSL ciphers. Nartac Software created a simple tool to help admins fix these issues: IIS Crypto. Simply download the tool, then run it as an administrator on your Windows box. I recommend you take the “Best Practices” template and apply those settings first. Always back up your current settings before changing anything!
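If you want a quick feel for what those flagged cipher names actually look like, Python’s ssl module can enumerate the suites a client context offers. This is just a rough local sketch – the marker list below is my own heuristic for names scanners commonly complain about, not the actual Nessus plugin criteria:

```python
import ssl

# Enumerate the cipher suites a default TLS client context will offer,
# and flag names that scanners commonly report as weak. The marker list
# is a heuristic, not the actual Nessus plugin logic.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
weak_markers = ("RC4", "3DES", "DES-CBC", "MD5", "NULL", "EXPORT")

ciphers = [c["name"] for c in ctx.get_ciphers()]
weak = [name for name in ciphers
        if any(marker in name for marker in weak_markers)]

print(f"{len(ciphers)} ciphers offered, {len(weak)} flagged as weak")
for name in weak:
    print("  weak:", name)
```

On a reasonably current OpenSSL build the weak list should come back empty, which is exactly the state you want your Windows boxes in after applying the template.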
I became aware yesterday that several sources are reporting Energy Services Group was “hacked” or “attacked.” There’s been a little saber rattling about hackers getting control of the US energy markets. Being that I’ve had some dealings with ESG over the years, I thought I might speak to this.
Here’s what we know at this point: ESG suffered a massive outage, but the cause is not known. ESG appears to have gotten some systems related to competitive energy providers back online, however that is all I know at this time.
What does ESG do? According to their website, they provide various services to the energy industry, ranging from data management and retail billing to pipeline, storage, and market management. My experience with them to date has been as an EDI provider handling the data communication between competitive energy providers and utility companies. They process the enrollments, billing, payment, and usage data sent by the utility to the CEP. CEPs operate in a low-profit-margin market, making outsourcing of backend functions almost mandatory.
The services ESG provides do not equate to them having any direct control over the energy grid (gas or electric) – to my knowledge. ESG does have access to a treasure trove of PII such as names, addresses, metering and billing information, and gas and electric wholesale orders and pricing information. I do not believe ESG has any direct influence over the ICS systems in use by their customers. At this time, I think we need to keep the FUD to a minimum, but ESG needs to inform their customers of the possible risks.
The electric utility companies who serve customers directly impacted by the ESG breach are also victims here – they will undoubtedly have to deal with an influx of customer and regulatory inquiries over this matter. However, they have absolutely no control over who signs up for competitive supply, nor over who the supplier uses for their backend systems. All of these expenses will be passed directly on to the rate payers in the end – both by ESG and the various regulated entities affected by this.
There are a couple of possible scenarios here. Whatever happened knocked ESG totally offline – even requiring them to use a Gmail account for communication. My suspicion is this was a ransomware attack that got out of hand, as they appear to have gotten back up and running in a relatively short time. But the company has yet to release any public information.
A few months ago, I moved almost all of my storage into Google Drive, OneDrive, or iCloud, depending on the usage. This allowed me to turn down my old Dell FreeNAS server in an attempt to save on my electric bill. I’ve never been completely on board with this model, even though I know I’m keeping some physical backups for emergencies. Maybe I spend too much time listening to Michael Bazzell and Justin Carroll, or maybe it’s the control freak in me, but not having control of my data really bugs me.
The revelations from the Cambridge Analytica debacle stirred up information on just what Facebook, Google, and Apple store. I won’t go into detail here, as The Guardian and TechDirt have two great articles on this. This all left me wondering what Google, Microsoft, and Apple are really doing with all of my files, photos, and email. All of these companies could hand my information over to the government without warning, or could be breached and I would never know. It’s definitely time to bring everything back in house.
My initial plan is to bring all of my files back down from the cloud and simply store them on my FreeNAS server. Once that is done, a NextCloud server should provide me a solid way to sync files across devices as well as online collaboration.
What I’m lacking is a plan to privatize my email. Do I ramp up my ProtonMail account? Or do I build my own email server? Both have their pros and cons, but what is really worrisome is this: what happens if ProtonMail simply disappears? What if our government decides to block access or make it illegal to store your email in another country? On the other hand, do I really want to take on managing my own email infrastructure? In the end, I think I will in-house the majority of my email and rely on ProtonMail as a secure backup mail service.
I expect the whole process to take several weeks due to current time constraints. I have FreeNAS back up and running, however I need to get a solid backup strategy in place before moving forward. Ideally, I would have an encrypted cloud-based backup like rsync.net or Amazon S3 combined with an offline physical copy. I have some details to iron out yet.
Folks – it’s time to tick everyone off with network maintenance windows! Cisco PSIRT released advisories for 30 vulnerabilities in their router firmware across multiple versions of IOS and IOS XE. The three critical vulnerabilities include one hard-coded credential affecting all routers running IOS XE v16, and two which affect v15 under certain conditions. Fifteen high-risk vulnerabilities run the gamut from denial of service to buffer overflow and privilege escalation.
A complete list follows, and I will update it as more come in today.
The draft for this project has changed three times since starting – mostly due to resource constraints on my end. I’ve bounced between hardware, hypervisors, and focus, but I’ve settled on an approach. My immediate needs outweighed the need for a full VMware stack. What I really needed was a FreeNAS replacement, and after trying a few different options I’ve ended up right back on FreeNAS 11. This platform will support most of my storage, media, and VM needs for a year or so. It will also support several options for backing up and securing my data, allowing me to get off the cloud as much as possible.
The entire system is set up on a Dell Precision T7500 currently running two Intel Xeon E5520 processors, 12 GB RAM, and almost 2 TB of storage across 6 SATA hard drives. This hardware comes with some downsides, however: the server is so physically large it has its own gravity; the processors lack the unrestricted guest (UG) functionality required by bhyve to support virtualization; the hardware draws a fair amount of power under the current configuration; and the hardware is old. I’m correcting the UG support issue by ordering a couple of E5620 processors, and may add additional RAM at a later date. I might be able to cut power usage by configuring FreeNAS power management and migrating to smaller drives or SSDs as well.
As a side note, I had considered going with an online lab or picking up an Intel NUC. Both would have saved me a fair amount on my electric bill; however, at the end of the day I have two drivers. The first is to minimize the impact to my checking account right now; the second is to ensure a solid platform for the privacy and security of my family’s files. Since everything in this lab has been cobbled together from discarded hardware, I have little investment other than time. Additionally, there is more than enough capacity to support bringing all of my information back down from the cloud. Once things get situated, I’ll likely migrate to an Intel NUC platform and a Synology NAS.
My current plan is to migrate everything hosted on other machines into FreeNAS jails until the CPU upgrade is complete. This should be fairly straightforward, as I am only migrating a Plex server and a VPN gateway. I also want to set up a Splunk jail for log collection, but I am debating setting up ELK instead as a learning experience. Once these are done and operational, I will configure a few different lab machines depending on what skills I am trying to hone.
After the consolidation is complete, I’ll have just the pfSense and FreeNAS physical boxes instead of four different machines. I am assuming my security posture will change somewhat, as some functions will be running in VMs or jails on one box instead of on dedicated hardware. The setup won’t help my power bills much, but I can fix that by moving to newer hardware later. I’ll document the process as I go.
TCP and UDP are two very different protocols. I’ve spent a fair amount of time over the years explaining the differences between them to our power engineers and technicians. What better topic to post here?
TCP is more reliable but has more overhead.
Probably the most important thing to realize is only TCP has a true connection, where UDP simply streams packets. TCP connections begin with a three-way handshake (SYN, SYN-ACK, ACK) which ensures that both ends of the connection are alive. TCP also ensures that all packets are passed to the next layer in the proper order, and if any packets are missing they are resent. UDP is a packet or stream of packets depending on the application. The protocol itself does not care if the packets arrive out of order or at all. TCP connections come with the additional overhead required for the reliability, making UDP seem like the ideal choice for low-bandwidth connections.
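To make the contrast concrete, here’s a minimal Python sketch using loopback sockets (the ports and payloads are arbitrary): the TCP client can’t move a byte until connect() completes the handshake, while the UDP socket happily fires a datagram at a port where nobody is listening.

```python
import socket
import threading

def tcp_echo_server(srv):
    conn, _ = srv.accept()          # blocks until the 3-way handshake completes
    conn.sendall(conn.recv(1024))   # echo whatever arrived, in order
    conn.close()

# TCP: connect() performs SYN / SYN-ACK / ACK before any data moves.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))          # let the OS pick a free port
srv.listen(1)
port = srv.getsockname()[1]
threading.Thread(target=tcp_echo_server, args=(srv,), daemon=True).start()

tcp = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
tcp.connect(("127.0.0.1", port))    # the handshake happens here
tcp.sendall(b"reliable")
print("TCP echo:", tcp.recv(1024))  # delivery and ordering are guaranteed
tcp.close()

# UDP: no handshake -- sendto() fires a datagram whether or not anyone
# is listening, and nothing confirms arrival.
udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
udp.sendto(b"best effort", ("127.0.0.1", 9))  # discard port; no error raised
udp.close()
```

On a real network that UDP datagram may simply vanish, and nothing in the API tells the sender either way – which is exactly the gap TCP’s acknowledgments and retransmissions exist to close.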
Before choosing protocols, consider the communications medium and purpose. A remote ICS/IIoT device connected via a wireless or cellular connection should be configured to use TCP, whereas the same device connected to a leased line could utilize UDP. My experience is that all cellular data connectivity, including 4G, experiences enough variation to cause problems for UDP-based devices, while TCP-based devices barely notice. Additionally, I always recommend TCP unless you are bandwidth-constrained on something like an old 56k digital circuit.
Voice, video or other data streams which can withstand missing and out-of-order packets should always be run over UDP for maximum quality.
TCP and UDP ports can exist at the same number
Since TCP and UDP are two different protocols, their port numbers are independent of each other: UDP/443 is different from TCP/443. Take care when configuring ACL and NAT rules in your network, especially if the device does not differentiate between the two.
DNS is the most common example of this. UDP/53 is used for the vast majority of domain name lookups, while TCP/53 is used primarily for zone transfers between servers. If you need to block zone transfers, simply blocking TCP/53 might be enough (I’ve never tried it myself).
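A quick way to convince yourself the two port spaces really are independent is to bind both protocols to the same number at the same time (the port here is just whatever the OS hands out):

```python
import socket

# TCP and UDP port numbers live in separate namespaces: both sockets
# can bind the same number simultaneously without a conflict.
tcp = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
tcp.bind(("127.0.0.1", 0))        # let the OS pick a free TCP port
port = tcp.getsockname()[1]

udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
udp.bind(("127.0.0.1", port))     # same number, no "address in use" error
print(f"TCP/{port} and UDP/{port} bound at the same time")

tcp.close()
udp.close()
```

This is also why an ACL that only matches TCP on a given port does nothing for UDP traffic arriving at the same number.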
Disagree with me or I missed something? Please let me know with a comment!
Here’s a Splunk query to list any changes to privileged Active Directory groups:
sourcetype=WinEventLog:Security
(EventCode=4728 OR EventCode=4729 OR EventCode=4732 OR EventCode=4733 OR EventCode=4756 OR EventCode=4757)
(user_group="Domain Admins" OR user_group="Enterprise Admins" OR user_group="Administrators" OR user_group="Schema Admins" OR user_group="Account Operators" OR user_group="Backup Operators" OR user_group="Cert Publishers" OR user_group="Cryptographic Operators" OR user_group="DHCP Administrators" OR user_group="DnsAdmins" OR user_group="Domain Controllers" OR user_group="Read-only Domain Controllers" OR user_group="Network Configuration Operators")
| table EventCode, EventCodeDescription, user_group, user, src_user
| rename EventCodeDescription as "Description", user_group as "Group Changed", user as "User Added/Removed", src_user as "Changed By"
I have this set up as both an alert and a monthly report to catch any undocumented changes to these groups. You may also want to consider a monthly listing of the members of these groups as well.