• WSgadget

    Viewing 15 replies - 31 through 45 (of 57 total)
    • in reply to: An introduction to Linux for Windows users #1493927

      “rather, the most certain way to judge the effectiveness of security features in software is to let a bunch of software engineers poke through the code.”

      I would have thought that after the Heartbleed fiasco in open source software that people would stop peddling this argument.

      The “Heartbleed” problem wasn’t about the effectiveness of a particular security feature, but rather a common programming mistake (input validation and bounds checking) that affects all types of software. Had the source code for OpenSSL been closed-source, the bug may have taken even longer for the public to learn about.

      Consider the millions of home network routers with unpatched firmware, plus the growing list of connected smart thermostats, light bulbs and other “Internet of Things” devices. Most of them run closed-source software, and we’re all relying on the manufacturers’ assurances that it’s secure. Most people (myself included) don’t have the skills to properly determine whether a piece of software and/or hardware is secure. We hope (and assume) that it is unless told otherwise.

      Regardless of whether it’s open-source or closed-source, the argument made by the article’s author that having more people peer review code ultimately leads to fewer security holes is valid — security through obscurity is only a temporary speed bump for hackers. It’s for the same reasons that Microsoft, Apple, Google, Mozilla and many others participate in hacking contests, provide bug bounties, work with security researchers, and share information with one another.

      More eyeballs won’t root out every security bug, but it definitely helps reduce the number that slip through.

      Chung

    • Hi JWThau,

      Did you get the networking sorted out?

      Chung

    • in reply to: Should large hard drive be partitioned? #1486611

      On previous Desktop systems, I never partitioned my Hard Drives.

      On my current 8 year old Dell Dimension 9200 with a 250GB Hard Drive, I did partition the Hard Drive into 3 Partitions (1 25GB Partition for just 1 piece of Industry specific software & its data files, 1 100+GB Partition for Installation Files only, & a 3rd 100GB+ Partition for the OS & everything else)

      All of my backups were separate image backups of each entire Partition – so I never realized any real benefit of having Partitions.

      I do make extensive use of folders to organize My Documents (including Pictures), Favorites, Etc.

      I am about to pull the trigger and purchase a Dell XPS 8700 (Special Edition)

      It will have an enormous 3TB 7200 rpm Hard Drive and a 256GB mSATA Solid State Drive.

      I could have gotten only a 1TB Hard Drive, but because of my Dell Premier account as part of a very large Industry group, the cost difference was not that much.

      I expect to follow the guidance in Fred Langa’s 1-15-15 story entitled “Mastering Windows 8’s backup/restore system”

      Additionally, I expect to use my existing 3 External Hard Drives (250GB, & two 1TB) to continue to do separate image backups of the entire drives.

      QUESTION IS:
      What is the current thinking as to whether or not large drives should be partitioned?

      Hi StevenXXXX,

      As expected, when it comes to drive partitioning (and backups), there’s definitely no one-size-fits-all. 🙂

      There have been a lot of great suggestions and recommendations, along with good arguments to support them. Your choice of backup method(s) will ultimately determine whether or not partitioning the drives is ideal. Personally, the only reason I can see it being worthwhile to partition the 3TB hard drive is if you opt to install applications on it. Keeping it separate from user data will make it easier to create backup images of the applications partition.

      As far as image backups go, I’ve rarely seen any benefit to using drive image tools on user data. There are plenty of file-based backup tools that are much better suited for the task (e.g. better overall compression, easier restoration of individual files, and less risk than having an entire backup image corrupted).

      Since it sounds like the Dell XPS 8700 you’re considering will be your first computer with a solid state drive, I’d like to offer a few recommendations:

        [*]It’s normally not necessary to defragment an SSD. Unlike on spinning media, file fragmentation is not a major problem because access times remain the same for every block (there are exceptions, but we’ll leave that as a separate topic). Frequently defragmenting an SSD will cause excessive wear.

        [*]Consider disabling NTFS file indexing (at least on the SSD). The indexing is intended to speed up searching for files but it comes with a few disadvantages including increasing CPU load, reducing available RAM, and in the case of SSDs, excessive wear from the disk writes to the file index.

        [*]Relocate the pagefile to the hard drive to reduce the wear on the SSD (don’t forget to disable the pagefile on the SSD). Although the hard drive will be slower than the SSD, with RAM prices hovering at around $10/GB, it’s a better investment to buy more RAM if there’s frequent swapping regardless of the type of disk being used.

        [*]Because the SSD is your C: drive, consider turning off hibernation. The Windows hibernation file (C:\hiberfil.sys) can’t be moved and its size is equal to the amount of system RAM + video RAM; if the computer has 16GB of RAM, that’s >16GB of space eaten up on the SSD. Also, the time to write/read the contents of RAM to/from the hibernation file can easily exceed the typical boot time: about 32 seconds to resume from hibernation vs. maybe 20 seconds to boot from a cold start if the SSD is capable of 500MB/s read speeds. Then there’s the wear on the SSD from writing 16GB of data every time the computer hibernates. As a bonus, rebooting instead of hibernating keeps memory leaks from piling up over time; otherwise they reduce the amount of free RAM and, ironically, force the system to swap to disk, causing excessive wear on the SSD and/or even slower performance from swapping to the hard drive.

        [*]Try to leave enough free space on the SSD for more efficient wear leveling. The amount will vary from user to user so it isn’t a fixed percentage. Good SSDs will even out wear on each block by relocating data from blocks that don’t change as often in order to spread out the writes. The more free space there is, the longer it takes before the SSD has to shuffle data around, keeping the overall performance up.
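      A quick back-of-the-envelope check of the hibernation timing above. The 16GB RAM size and 500MB/s read speed are the same illustrative figures used in the list, not measurements:

```shell
# Rough resume-from-hibernation time: hibernation file size / SSD read speed.
ram_mb=16000      # hibernation file roughly the size of RAM (illustrative)
ssd_mbps=500      # sequential read speed of a fast SATA SSD (illustrative)
awk -v r="$ram_mb" -v s="$ssd_mbps" \
    'BEGIN { printf "resume from hibernation: ~%d s (vs. ~20 s cold boot)\n", r / s }'
```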

      One thing that’s odd about the specs for the Dell XPS 8700 Special Edition is the 256GB mSATA SSD. If it really is mSATA, I would see if there’s an option to switch to a full size 2.5″ SSD. It will make it much easier to migrate to another computer, do data recovery, and/or reuse in an external enclosure at some point in the future.

      Chung

    • in reply to: Static IP: Router or Windows #1486599

      Chung – Definitely not too confusing, in fact, enlightening. It’s nice to understand those subtle differences.

      FWIW, I use a static IP on my NAS device (mapped from the NAS). Why? Because the documentation said so. Everywhere else I use DHCP. First Ethernet-networked printer is sitting in a box waiting to be set up, and I expect to use DHCP and print to it by name unless I find a reason not to.

      Hi ed,

      Oh yes, definitely. I’ve run across more than a few network devices (and server software) that insist on static IP addresses only, without DHCP. In most cases it’s because the software can’t gracefully handle the network connection going offline even for just a moment. The worst offenders I’ve seen are desktop database programs that pretend to be in the same league as real client/server database engines. Network file sharing over NFS and SMB/CIFS isn’t a problem because clients use local caching to ride out network hiccups.

      Thanks everyone for a fun topic.
      ed

      Agreed. 🙂

      It’s these kinds of thoughtful discussions that make the forums so interesting and a learning experience.

      Chung

    • in reply to: Static IP: Router or Windows #1486596

      My computers are always set up with a static IP address. This is the only way to tweak your router to control computers as needed. An example would be: the bedroom desktop needs a port opened for P2P. This is done with Port Forwarding in your router, and the bedroom desktop has to have the same IP address every time it boots. Or your 15 year old is hogging all your bandwidth with videos and music downloads (you’re stuck with DSL). Your router can limit the bandwidth your teen monster uses, but only if you know his IP address, i.e. it has to be a static IP. If these are not issues, then let Windows handle the IP addresses for you. That’s the way Windows comes out of the box with default settings, and for the average user, it’s fine.

      Hi mb96001,

      Most home routers default to assigning IP addresses that are “sticky” — each network device gets the same IP address as before when it renews its lease. The DHCP server does this by automatically keeping track of the most recent IP address used by a particular MAC address. In a way, it’s the best of both worlds because you get pseudo-static IP addresses without having to manually set them on every device and/or maintain a MAC address table. The more often a device uses the network, the more likely it is to continue getting the same IP address. Devices like servers and network printers can be explicitly reserved in the router for good measure.
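      As an illustration of how a DHCP server tracks those MAC-to-IP mappings, here’s a sketch that parses a dnsmasq-style lease table. Many home routers run dnsmasq, but the file path (typically /var/lib/misc/dnsmasq.leases) and format vary by firmware, so the entries below are made up and written to a temp file:

```shell
# Columns in a dnsmasq lease file: expiry-epoch, MAC, IP, hostname, client-id.
leases=$(mktemp)
cat > "$leases" <<'EOF'
1731455000 a4:5e:60:aa:bb:cc 192.168.1.23 laptop *
1731455100 00:11:32:dd:ee:ff 192.168.1.50 nas *
EOF
# Print which MAC address is currently holding which IP address.
awk '{ printf "%s -> %s (%s)\n", $2, $3, $4 }' "$leases"
rm -f "$leases"
```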

      As far as filtering, a good router offers the option of filtering based on MAC addresses because it’s more reliable and secure than IP addresses. It requires more computer skills to spoof a MAC address than to change an IP address to get around network controls. (On a related note, not every network card supports MAC spoofing so all of those utilities found on the Internet aren’t guaranteed to work every time.)

      Although generally not an issue on home networks, MAC-based filtering also makes it easier to support more devices. Over time devices will come and go so MAC-based filtering avoids the need to track which IP addresses are still in use and which can be reused.

      Chung

    • in reply to: Offsite backups without high-speed Internet possible? #1486261

      My “high speed” DSL line is 700-something Kbps and has been that way since they first offered DSL, when that was considered high speed. In reality it’s a lot, lot slower. I’ve got off-site unlimited backup thru the company that built my desktop. It’s at the other end of the city and on private property for $90/year. It takes more than a week to do a complete backup, and that’s with the computer on 24/7. And that’s backing up less than 1/4 terabyte of data (the stuff I feel has to be backed up off site). The daily incrementals can take an hour.

      I can get higher speeds with Comcast but I won’t go that route because their cheap internet is dependent on you getting TV, phone, etc.

      Is satellite worth it?

      Suggestions? Options? Legal recourse for a phone company that advertises double or triple that speed to new users and then can’t give it to folks well inside the city limits (I’m about 5 miles from downtown) who’ve been with them for years? They say the lines can’t handle more than that and they will upgrade the whole city to fiberoptics–eventually. (They’ve been saying that for 5 years.)

      Hi areader,

      What backup software does the company use/require?

      Based on the times required for the backups, it sounds like the backup software does use compression, since the effective transfer rate works out to close to 3 Mbit/s — several times your DSL line speed. The daily incremental backups are likely resending the entire contents of changed files rather than sending just the deltas, so they’re taking a lot longer than they could. If it’s an option, switching to a better backup program will help speed things up a lot.
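      The effective-rate estimate comes straight from the numbers in your post: a bit under 250 GB in about a week (using exactly one week as the optimistic case):

```shell
gb=250      # data backed up, from the post ("less than 1/4 terabyte")
days=7      # "more than a week", so this is the best case
awk -v g="$gb" -v d="$days" \
    'BEGIN { printf "effective rate: ~%.1f Mbit/s\n", g * 1e9 * 8 / (d * 86400) / 1e6 }'
# Anything above the ~0.7 Mbit/s line speed implies compression (or deltas).
```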

      As others have already mentioned, satellite is going to be slow because of the distance the network packets have to travel (and the additional network router hops). Depending on your monthly data usage, Internet service over a cellular network might be worth looking into. With LTE networks, it could easily exceed what you’re currently getting with your DSL connection. For example, T-Mobile has a $30 no-contract, pay-as-you-go monthly service plan that offers unlimited data (first 5GB at 4G LTE then slows down afterwards for the rest of the month with no overage charges). The plan also includes unlimited text and 100 minutes of talk. It’s geared toward users who need more data than talk. Although it’s a mobile plan, you don’t have to be mobile. Check the coverage maps and then try it out for just one month to see how well it works for your house. All of the mobile carriers offer dedicated Wi-Fi hotspots, but it’s almost always a better deal to go with a cell plan even if you never make a phone call with it. On a 4G LTE connection, I average 4 to 6 Mbit/s where I use it the most and have seen upwards of 24 Mbit/s in other locations.

      If enough customers leave for other services, it might prod CenturyLink into paying attention and speeding up their network upgrades (I know someone who had service with CenturyLink and eventually jumped ship).

      Chung

    • in reply to: Static IP: Router or Windows #1486231

      Thanks for the replies. So it sounds like I’ll be ok having static IP’s set in my router for a home network (without the server running a DHCP server).

      Just to get a little more info, why would someone want to assign a static IP in Windows vs. the router? Does it accomplish the same thing just software vs. hardware?

      For small networks it can be easier to simply assign static IP addresses on each device rather than depend on a DHCP server running on a router or a computer. With DHCP, there’s the additional complication of dealing with not only the DHCP server but DHCP clients, IP address leases, MAC addresses and so on.

      For larger networks that span multiple IP address ranges it can also be desirable to assign static IP addresses on each device to make it easier to identify each computer on the network (e.g. not all networks run Microsoft Active Directory and not all computers/devices support it).

      It wasn’t mentioned in any of the earlier posts so I just wanted to clarify that there’s a difference in terminology: An IP address that is manually entered onto a computer/device is a static IP address. An IP address that is mapped to a specific MAC address by a DHCP server running either on a router or a computer is technically a fixed lease. It’s a subtle, but very important difference.

      Most home networks use dynamic IP addressing — a device broadcasts a message onto the local network that it’s in need of an IP address. A DHCP server sees the request and responds with an offer. The IP address that’s offered comes with an expiration date. At what point a renewal request happens varies, but in Windows it’s typically at the halfway mark. For example, if the lease time is 3 days, Windows will try to renew its IP address at about 36 hours into the current lease.
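      The renewal timing from the example above, sketched out in shell arithmetic (the 3-day lease is just the illustrative figure; actual lease times are set by the DHCP server):

```shell
lease_s=$((3 * 24 * 3600))    # 3-day lease, in seconds
renew_s=$((lease_s / 2))      # Windows typically first tries to renew at the halfway mark (T1)
echo "lease: $((lease_s / 3600)) h, first renewal attempt: ~$((renew_s / 3600)) h in"
```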

      If a device doesn’t get confirmation from a DHCP server that its lease has been renewed, it will continue to use the assigned IP address until the end of the current lease, at which point a properly designed client will release its IP address back into the local IP address pool. When the lease expires, the DHCP server assumes that the client is no longer using the IP address and may reassign it to a different device. (This is the situation that starvinmarvin ran into when one of his Windows PCs lost its lease while it was in sleep mode. When it woke up, it didn’t realize its lease had expired and caused a conflict with another computer that held a newer lease on the same address.)

      Although a static IP address and a fixed lease both have the effect of giving a particular device the same IP address every time it connects, a fixed lease is essentially a temporary static IP address that must be renewed on a regular basis. But unlike a dynamic IP address, the DHCP server knows the fixed lease maps that IP address to a specific MAC address, so it won’t reassign the IP address to a device with a different one. The IP address is reserved in the IP address pool for only that device.

      As a network gets larger and/or more complex, it’s easier to manage IP addresses in one place instead of individually on each device. This is where assigning fixed leases using a DHCP server on a router helps avoid potential IP address conflicts. It’s usually less work and more reliable compared to manually configuring each device with a static IP address.

      On a related note, the router doesn’t necessarily have to also be the DHCP server. A computer or other device can also be used. It could be that the DHCP server in the router stinks, is buggy, doesn’t support fixed leases, etc.

      Hope I wasn’t too confusing,

      Chung

    • in reply to: Running XP in VirtualBox #1486209

      I usually work on a Linux box but there is a CAD program I like that runs only in Windows. So, I’ve installed Virtualbox and WinXP SP3 and my CAD program. Everything is working well. The down side is that XP is running without all the updates that were issued up to “end of support”. That may not be a real problem because it works for my purposes the way it is. However, I thought I read somewhere that you would be able to download and install all of the existing updates. You just can’t get new ones. Is this true?

      Second question: To what extent, if any, will XP be protected from malware when running in VirtualBox? (Let’s assume for this question that XP has an internet connection)

      Hi Yobil,

      There’s already great advice from others so I thought I’d toss in an additional idea that might be of interest. Given that the CAD program was written to support WinXP, it might run just fine using Wine (http://winehq.org/). Besides the official package, there’s a commercially supported package called “CrossOver Linux” from CodeWeavers (http://codeweavers.com/) and also an open-source front-end called PlayOnLinux (http://playonlinux.com/) that’s written in Python. I’ve been using Wine to run Google’s SketchUp and several other Windows programs.

      As far as protecting XP from malware, if you’re not surfing the web from within XP (especially using Internet Explorer), I don’t think you have anything to worry about. WinXP SP3 turns on the built-in Windows firewall by default and VirtualBox’s default setting is to put virtual machines behind a virtual router in NAT rather than bridged mode, so if your Linux box also has its firewall enabled, there are 3 security walls a hacker has to punch through. VirtualBox also provides the option of virtually disconnecting the network cable in case you want to leave open the option of using the network for network time sync, downloading updates, etc.
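      If you’d rather script those VirtualBox settings than click through the GUI, the command-line equivalents look like this (the VM name “WinXP” is an assumption; substitute whatever your machine is actually called):

```shell
# With the VM powered off: keep its network adapter behind VirtualBox's NAT router.
VBoxManage modifyvm "WinXP" --nic1 nat
# With the VM running: virtually unplug/replug the network cable on adapter 1.
VBoxManage controlvm "WinXP" setlinkstate1 off
VBoxManage controlvm "WinXP" setlinkstate1 on
```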

      You’re probably already aware of the shared folders feature in VirtualBox so I’ll skip the details. As long as you set up a separate directory for the shared area between Linux and the WinXP virtual machine, any malware won’t be able to easily pass over into your Linux system.

      As far as malware that’s made for Windows, it’s unlikely that any of it will run on Linux because of the difference in libraries, OS architecture, etc. Unless something bad was specifically made for Linux, deliberately downloaded via a Windows virtual machine, and then run from the Linux host as a user with root privileges, the real risk is near zero.

      Depending on which Linux distro you’re using, SELinux (Fedora and related spins) and AppArmor (Ubuntu and related spins) will also help to keep malware at bay.

      Chung

    • in reply to: Best way to install six months of MS monthly updates? #1486194

      I was unavailable to do the June updates, and let them slide, since then I haven’t done any updates. I’m scared that a bad update could cause problems. After a bad update, does MS eventually correct the problem so I don’t need to worry about them causing problems and could therefore let MS download all the old updates and maybe just watch Dec/Jan following the patch update?

      Thanks for any help you may provide.

      Norm

      Hi Norm,

      A very useful open-source tool to use is WSUS Offline Update (http://wsusoffline.net/). For XP updates, go to the download page (http://download.wsusoffline.net/) and grab version 9.2.1. It’s the last version that includes support for XP.

      The download is a zip file. The files can be extracted and run from any location so it’s fine, and a good idea, to keep it on a removable drive. You’ll need a little under 700 MB of free space to hold all of the available updates from Service Pack 3 forward. After extracting the program files, I recommend renaming the default folder name “wsusoffline” to something like “wsusoffline921” to make it easy to preserve the XP updates for future use. That way you can also use the latest version of WSUS Offline Update for updating any newer versions of Windows.
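      The extract-and-rename step might look like this at the command line (the zip filename is an assumption; check what the download page actually serves):

```shell
unzip wsusoffline921.zip          # extracts into a folder named "wsusoffline"
mv wsusoffline wsusoffline921     # rename so the XP-capable version is preserved
```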

      Unlike the normal Windows Update and other alternative update downloaders, WSUS Offline Update doesn’t require you to pick and choose the updates to install, yet it doesn’t blindly install every update that’s available either: it uses install scripts and an exclusion list to avoid known-bad updates.

      Another nice feature is the option to auto-reboot, login and continue the update process if an update requires it (e.g. service packs, IE). For anyone with a mix of operating systems, it also supports downloading the Windows updates from a Linux machine (the shell scripts might even work in OS X but I haven’t tried it).

      Since there aren’t likely to ever be any future updates for XP, I would turn off auto-update (to avoid installing updates that might break your system) and use WSUS Offline Update whenever you feel the need to.

      Chung

    • in reply to: Wi-Fi extenders #1476283

      I’ve been looking at extending the range of my Wi-Fi network; I cannot move the router easily. The current signal is weak in some rooms and drops out, and it would also be good if it reached the garden. There seem to be two main routes: either Wi-Fi repeaters (e.g. TP-Link) or mains-borne powerline repeaters (e.g. D-Lan).

      I’d be grateful for any advice. I’ve read somewhere that repeaters like TP-Link slow the data rate.

      TIA

      Peter

      Hi Peter,

      Yes, devices connected to a Wi-Fi repeater get about half the data rate of devices connected directly to the Wi-Fi router, because the repeater’s single radio has to both receive from the router and retransmit to the client on the same channel.

      When locating a repeater, ideally it should be about half way between the Wi-Fi router and where you need the signal.
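      The halving compounds if repeaters are chained, which is worth keeping in mind when planning placement. A rough sketch (the 54 Mbit/s base rate is illustrative):

```shell
rate=54     # link rate at the router, Mbit/s (illustrative)
for hops in 0 1 2; do
  awk -v r="$rate" -v h="$hops" \
      'BEGIN { printf "%d repeater hop(s): ~%.1f Mbit/s\n", h, r / (2 ^ h) }'
done
```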

      Chung

    • Hi Steve,

      My laptop has 4 devices listed in the Network sub-directory of the Device Manager. 2 of the devices are Bluetooth, both of the remaining devices are Qualcomm network adapters (1) Atheros AR956x wireless network adapter showing Ad Hoc 11n BUT its value is Disabled;[…]

      Unfortunately, it doesn’t look like the Atheros AR956x supports dual-band (2.4 GHz and 5 GHz).

      Here’s a quick list of the radio frequencies used by 802.11 a/b/g/n/ac:

      [INDENT]a – 5 GHz
      b – 2.4 GHz
      g – 2.4 GHz
      n – 2.4 and 5 GHz
      ac – 5 GHz
      [/INDENT]

      So, although the 802.11n standard includes both 2.4 and 5 GHz, a wireless network adapter is not required to support both frequency bands.

      If any of that info I provided above makes sense to you please let me know if it is good or bad and if I should do something about it. I can live with the slow connection speed though I have been thinking about changing ISP’s for a faster one.

      Try running a few tests to gauge the quality of your current Internet connection. It’s best to turn off any other network devices (e.g. tablets, phones, media players, etc.) that could skew the results:

      First, get a raw speed rating (run it 2 or 3 times and average the speeds)…

      [INDENT]http://speedtest.net/[/INDENT]

      Next, run a network quality test…

      [INDENT]http://pingtest.net/[/INDENT]

      (In order to run a more complete network quality test you’ll need to have a Java runtime installed. If you don’t already have the Java plugin installed, and/or aren’t familiar with using it, it’s best to install it, run the network test and then uninstall the plugin to avoid any security issues.)
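      If the Java-based test is a hassle, a plain ping already gives a usable latency number. This sketch parses a sample Linux-style summary line so it runs offline; in practice you’d feed it the real output of something like `ping -c 10 speedtest.net` (the RTT values below are made up):

```shell
# Sample summary line from the end of a Linux ping run (values made up):
summary='rtt min/avg/max/mdev = 14.2/18.6/25.1/3.9 ms'
avg=$(printf '%s\n' "$summary" | awk -F'/' '/rtt/ { print $5 }')
echo "average round-trip latency: ${avg} ms"
```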

      A few tips regarding streaming video:

        [*]Network latency is often more important than network speed. Data over networks is sent in packets (chunks of data). Each packet takes time to assemble, be transmitted by the sender (e.g. YouTube) and be received by the recipient (your computer). In between all of this is a little extra overhead of each side saying it’s ready for the next packet, it got the packet, and so on. The time spent between packets is the latency. The shorter the latency, the smoother a video plays back. Video buffering helps smooth out the rough edges. The video buffer is like a kitchen sink… if the faucet (your network line) can’t fill the sink fast enough to compensate for the drain (your computer) taking water away, eventually the sink will run dry, temporarily cutting off the steady flow of water down the drain pipe.
        [*]You may need to manually adjust the video resolution because the auto-detection isn’t always able to compensate for all network situations. People generally notice frequent buffering more than lower video resolution. It’s all about perception — a lower resolution video that plays smoothly is less annoying than a video that pauses a lot to refill the video buffer.
        [*]Depending on the video resolution, your laptop may not be able to keep up with the network data transfer and playback. Bring up the Windows Task Manager and see what the CPU and network loads are like when streaming a video.

      Wireless network adapters can connect via “infrastructure” or “ad-hoc” mode. Infrastructure mode is used when connecting to wireless access points like those provided by wireless routers. Ad-hoc mode allows one computer to connect directly to another computer without the need for a dedicated access point. It’s fine to leave ad-hoc mode disabled.

      Chung

    • in reply to: Firefox 33 and 33.0.1 not displaying site correctly #1475099

      Chung, thank you so much for your super helpful post. I think you may have it figured out – and thanks for taking the time to do so thorough a look at our site.

      Hi Linda,

      You’re welcome. 😎

      I had recently installed 3 new WP plugins, one was W3 Total Cache to speed up our site. And now I believe all this weirdness started after those installations.

      After reading your post, I disabled W3 Total Cache…and the site displays perfectly in all 3 browsers viewed as wp-admin or as a user.

      I couldn’t find a Minify plugin listed, but a WP forum post https://wordpress.org/support/topic/can-use-wp-minify-plugin-with-w3-total-cache leads me to believe that it is a part of W3 TC??

      I compared the source code for W3 Total Cache against Minify and although they share the same name for the compression library, they aren’t from the same authors. It turns out that the Minify included in W3 Total Cache is a library (http://code.google.com/p/minify/) with a similar purpose to the original Minify plugin for WordPress. There’s also a second WordPress plugin called “WP Minify” that uses the same library.

      Question: would you suggest I totally uninstall the plugin or just leave it disabled?

      If you won’t be using it, I recommend uninstalling it. It’s one less plugin to maintain updates for and one less chance of getting bitten by a software bug. Since it’s so easy to install plugins in WordPress, it’s better to install it again when you do need it.

      Our site was originally designed in HTML and later re-created in WordPress with the designer’s own theme. It has often created issues for us since it is not a “pure” WP theme. Guess this is another one.

      Ah… if there are coding/syntax errors in the original theme, it might be tripping up the parser that’s trying to compact the code. The various rendering engines (e.g. Gecko, WebKit, Blink) handle CSS errors differently so that can cause mixed results between web browsers.

      You could try a few of the other Minify-type plugins (some with and without caching) to see if another one might work better for your website.

      Another worthwhile tweak (depends on the web hosting service) is to enable compression (http://en.wikipedia.org/wiki/HTTP_compression). HTTP compression will reduce the size of all the HTML, JavaScript, CSS and other readily compressible content. For websites that are more text than images, it can easily shave 50% or more off of the data transferred, making the website snappier on the client side.
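      To check whether compression is already enabled, one quick probe is to request a page with gzip accepted and inspect the response headers (example.com stands in for your own site; a “Content-Encoding: gzip” line in the output means compression is on):

```shell
# -o /dev/null discards the body; -D - dumps the response headers to stdout.
curl -s -o /dev/null -D - -H 'Accept-Encoding: gzip' http://example.com/ \
  | grep -i '^content-encoding'
```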

      Thanks so much again! I’m off to check the code now to see if the css looks any better.

      CSS UPDATE – sorry, couldn’t figure this out at all…a bit above my “tech knowledge level”!!

      Linda:)

      I can definitely relate. :^_^:

      HTML and CSS have gone through so many changes over the past few years that it’s difficult to keep up. If it weren’t for WordPress, Drupal and many of the other content management systems, we’d most likely have a lot fewer websites — or at the very least a lot more plain looking ones. 🙂

      Chung

    • in reply to: Firefox 33 and 33.0.1 not displaying site correctly #1475081

      Well, more information…

      Looking into it further today, discovered issue is not exactly what I had thought it was…

      In Firefox, display is incorrect in both “normal” and WordPress Admin mode.

      In Chrome, display is perfect in normal but not in WordPress Admin mode.

      As for IE…normal is fine; first two times I tried to open it in Admin mode, browser crashed!! Finally got it to open admin log in, but when I tried to view the page, it reverted to non-admin mode (i.e., no Edit capability), and displayed fine. Weird. I did manage to “open” a page from the menu in admin mode… and it was not displaying correctly.

      So, does this mean I have a WordPress issue instead of a browser one?

      If so, any ideas…or on what to try next?

      Thanks,

      Linda

      Hi Linda,

      It does seem like it’s a WordPress issue. From the sample screenshot you posted, it looked like it was a stylesheet problem because the content was there but the formatting was all off. The website looked fine for me on Firefox 33 so as a second test I saved a local copy of the homepage via File -> Save Page As. I then deleted the .css files and opened the .html file and the result matched your screenshot.

      I took a closer look at the HTML and CSS. In the header block, the lines that link to the stylesheets have random-looking filenames in their URLs. Based on those URLs, your installation of WordPress has the Minify plugin installed. Minify compresses and caches the JavaScript and cascading stylesheets to speed up page generation, ultimately improving response times and saving some network bandwidth for both the server and clients. The cache files can be shared by multiple users, but whether or not a particular web browser gets a unique set depends on a number of conditions.
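      For reference, here’s a quick way to list the stylesheet links a page is actually serving (run it against a saved copy of the homepage; the cache filenames below are made-up stand-ins for the generated ones):

```shell
page=$(mktemp)
cat > "$page" <<'EOF'
<link rel="stylesheet" href="/wp-content/cache/minify/000000/abc123.css" media="all" />
<link rel="stylesheet" href="/wp-content/cache/minify/000000/def456.css" media="all" />
EOF
# Pull out every href that ends in .css
grep -o 'href="[^"]*\.css"' "$page"
rm -f "$page"
```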

      Chances are that some of the cached files are corrupted. If you have Administrator privileges, try clearing Minify’s cache, or better yet, temporarily disable Minify. Then in Firefox, go to your website and press the keyboard combination [Ctrl] + [Shift] + [R] (that tells Firefox to force a complete reload of the webpage, ignoring the local cache).

      As for how/why it happened, my best guess is that Minify hasn’t been updated in more than 2 years so it might not be 100% compatible with WordPress 4.0.

      Chung

    • in reply to: Win8.1 PC can’t see Linux PC on same network #1471036

      Gadget
Dumb question: are Linux firewall rules processed sequentially? If I remember correctly, with a Cisco router, if a rule is encountered that calls for an action on a packet (either allow or block), that action is performed and no further processing of rules is done for that packet. When I boot up a Linux box I will try some of the commands to try to understand what I have been content to ignore. Thanks again for all the good info.

      Hi wavy,

      Thanks for catching my mistake! 😮

Yes, you’re right. In the example I gave, the deny rule would have stopped the processing (I was thinking about the flow through the chains at the time). That’s what I get for editing and re-editing, trying to spot mistakes, only to miss the proverbial “forest for the trees”.

I had an elementary school teacher who liked to say that there’s no such thing as a dumb question — except for the ones that you don’t ask. 😎

    • in reply to: Win8.1 PC can’t see Linux PC on same network #1470938

      I don’t just want to say Thank You to Gadget – I want to say THANK YOU, THANK YOU, THANK YOU:)

      You’re welcome. 🙂

      This post was fabulous.

      LINUX PC
      ======
      smbclient -L localhost showed that I did have a share set up, and
smbclient ‘\\localhost\michael’ showed that I could connect to it
      sudo iptables-save showed that I did have rules (and that my recent attempt to remove some had failed, or only worked for that session)
      (sudo) ufw status (I needed sudo) showed that I was not accepting requests, and
      (sudo) ufw disable (I needed sudo) killed the firewall (which I had never enabled, that must have happened on installation)

Yes, I checked one of my Ubuntu 14.04 installs, and unlike previous versions of Ubuntu, a default set of firewall rules is included during installation. Interestingly, this is only true for the desktop installs; in Ubuntu Server 14.04 the firewall is disabled by default.

      WINDOWS PC
      =========
net use u: \\ip-address-of-linux-pc\michael then connected and I was able to see my files on the Linux PC.

      This state remained after both PCs were rebooted and I was able to read and write to a file on the Linux PC using software on the Windows PC.

      Windows records the details for network shares in the Windows registry so that the connections can be re-established between reboots. There’s a parameter “/persistent” that defaults to “yes”. If it had been “no”, the connection would only last as long as the current login session:

[INDENT]net use u: \\192.168.1.10\michael /persistent:no[/INDENT]

      I can’t thank Gadget enough for this clear, concise and practical guidance.

      All I need now is guidance on how to close the firewall on Linux so that it is open to requests from ‘michael’, but not from anyone else.

      Michael Barraclough

      At this rate, you might soon end up knowing more about Linux than Windows. 😉

Chances are that your router is already providing a firewall shielding your home network from the rest of the Internet, so enabling the firewall on the Linux PC isn’t absolutely necessary, but it doesn’t hurt either. If you’re curious about what others outside your network can see, run a remote scan using the Shields Up scanner from Steve Gibson (a well-respected security researcher and software developer):

      [INDENT]https://www.grc.com/x/ne.dll?bh0bkyd2[/INDENT]

      Assuming that your Ubuntu Linux PC has the default set of firewall rules, here’s a quick primer on configuring the firewall…

      (Prefix each command below with “sudo”, or to save some typing, do “sudo -i” to switch to an interactive root shell — just watch out for typos. :p)

      See if there are any custom rules defined. Dump a numbered list:

      [INDENT]ufw status numbered[/INDENT]

      If it just shows “Status: active”, it will make it a lot easier to add your own rules. Here’s a very basic template for inserting a rule:

      [INDENT]ufw insert 123 allow proto tcp from 192.168.1.10 to any port 12345[/INDENT]

      In the template above, translated to non-geek speak…

      [INDENT]Insert as rule #123, allowing TCP traffic from the network host with IP address 192.168.1.10 to connect to any network interface listening on port 12345.[/INDENT]

The three highlighted values (the rule number 123, the source IP address 192.168.1.10, and the port number 12345) need to be customized to suit the particular need. Keep in mind that we’re making a lot of assumptions here, which is why we’re ignoring the choice of “tcp” or “udp”, using “to any”, etc.

      The first highlighted number, 123, is the line number in the list of rules. So, if there are already two rules defined, you can insert a new rule between them by specifying “insert 2”. This would push the existing rule 2 down to rule 3.

      Before…

      Code:
      Status: active
      
           To                         Action      From
     --                         ------      ----
      [ 1] 12345/tcp                  ALLOW IN    192.168.1.9
      [ 2] 12345/tcp                  ALLOW IN    192.168.1.11

After “ufw insert 2 allow proto tcp from 192.168.1.10 to any port 12345”…

      Code:
      Status: active
      
           To                         Action      From
     --                         ------      ----
      [ 1] 12345/tcp                  ALLOW IN    192.168.1.9
      [ 2] 12345/tcp                  ALLOW IN    192.168.1.10
      [ 3] 12345/tcp                  ALLOW IN    192.168.1.11

If you don’t need the rule at a specific position, use “ufw allow …” instead of “ufw insert” (which requires a line number); the new rule is then appended to the bottom of the list.

      To delete a rule, use the delete command plus the line number:

      [INDENT]ufw delete 2[/INDENT]

      The second highlighted number, 192.168.1.10, is the IP address pattern you want to match against. It can be a single IP address or it can be a range. For home users, probably the two most common ones are to allow a single IP address…

      [INDENT]192.168.1.10[/INDENT]

      Or the entire home network…

      [INDENT]192.168.1.0/24[/INDENT]

      The “/24” masks the first 3 octets (3 octets x 8 bits = 24 bits). Every computer on the local network is assumed to have the same prefix “192.168.1” followed by a unique host number from 1 to 254 (0 and 255 are reserved, while 1 is typically the network gateway/router).

(In the examples I’m using addresses from the private 192.168.0.0/16 block, 192.168.0.0 to 192.168.255.255, often loosely called the “Class C” range, because it seems to be the default for most consumer routers.)
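To make the /24 idea concrete, here’s a small shell sketch (the addresses are just examples, not taken from this thread) that checks which hosts fall inside 192.168.1.0/24 by comparing the first three octets:

```shell
# A /24 mask means the first three octets identify the network;
# only the last octet varies per host.
for ip in 192.168.1.10 192.168.1.200 192.168.2.10; do
  prefix=${ip%.*}    # strip the final (host) octet
  if [ "$prefix" = "192.168.1" ]; then
    echo "$ip is inside 192.168.1.0/24"
  else
    echo "$ip is outside 192.168.1.0/24"
  fi
done
```

The first two addresses share the 192.168.1 prefix and so would match a rule written with “from 192.168.1.0/24”; the third would not.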

      The third highlighted number, 12345, is the network port. In Linux, the text file located at /etc/services contains a list of the assigned network ports for various services. You can also find similar lists on Wikipedia or Google for the appropriate port number(s) to use.
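For example, to pull the Samba-related entries out of that file (assuming a typical Linux system where /etc/services is populated):

```shell
# Look up the ports registered for the NetBIOS/SMB services.
# On Debian/Ubuntu this typically prints lines like:
#   netbios-ssn   139/tcp
#   microsoft-ds  445/tcp
grep -E '^(netbios-ssn|microsoft-ds)[[:space:]]' /etc/services
```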

      For a simple configuration, the order of the rules generally isn’t important. If you add more rules for additional services and/or want a more complicated mix, then the order becomes more important. Rules are processed in order of appearance. Here’s a quick example:

      Code:
      Status: active
      
           To                         Action      From
     --                         ------      ----
[ 1] 5900/tcp                   ALLOW IN    192.168.1.9
[ 2] 5900/tcp                   ALLOW IN    192.168.1.0/24

In the example above, rule #1 is made redundant by rule #2, because the range in rule #2 already covers every host number from 1 to 254 (including 192.168.1.9). The ordering really matters when you mix allow and deny rules: the first rule that matches a packet wins.

      So, in a nutshell, to re-enable the firewall in Ubuntu with custom rules to allow SMB/CIFS network traffic through (substituting in the appropriate IP address assigned to your Windows PC):

      First, turn the firewall back on…

      [INDENT]ufw enable[/INDENT]

      Then add two custom rules to open up TCP ports 139 and 445 used by Microsoft’s SMB/CIFS protocol…

[INDENT]ufw allow proto tcp from 192.168.1.10 to any port 139
ufw allow proto tcp from 192.168.1.10 to any port 445[/INDENT]

      The changes are effective immediately. If there are no errors, you’ll see the new rules with “ufw status” and you’ll still be able to access the network share from your Windows PC.

      If you want to allow all computers within your local network, specify a range like this:

[INDENT]ufw allow proto tcp from 192.168.1.0/24 to any port 139
ufw allow proto tcp from 192.168.1.0/24 to any port 445[/INDENT]

      Depending on the features needed and how old the connecting clients are, you might also need to open up TCP ports 137 and 138 (“grep -i netbios /etc/services” for a complete list).

      There are also all kinds of configuration options for Samba that can help with additional security, but given that the manual just for the “smb.conf” file is more than 141 pages, everyone reading this might doze off before we got through it. 😎
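As a single taste of those options (a hypothetical fragment, not something from this thread), Samba itself can restrict which hosts may connect, independent of the firewall, via the “hosts allow” setting in smb.conf:

```ini
[global]
    # Hypothetical example: only accept connections from loopback and
    # the 192.168.1.x subnet; any host not listed is refused.
    hosts allow = 127. 192.168.1.
```

Because Samba checks this list itself, it works as a second layer of defence even if the firewall rules are ever disabled.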

      Chung

    Viewing 15 replies - 31 through 45 (of 57 total)