Active Directory Administrator - Zurka Interactive        
Washington, DC - Join a sharp, fun team doing challenging work supporting world-class science and technology. At the Center for Computational Science at the US Naval Research Laboratory, you'll be part of a team responsible for providing Windows platform administration.
          Side-Talk: Halal Omiyage (Souvenir) from Japan        
As mentioned in my previous post, my eldest sister went to Japan in December 2014 and brought back lots of Japanese face products. While she was there, she WhatsApped us to help her search online for halal mochi that she could bring as souvenirs for her Muslim colleagues. I looked it up on Google and did not find any halal mochi already on the Japanese market. However, I did find out that there are halal omiyage out there that are halal-certified by the Nippon Asia Halal Association (NAHA). Interesting, huh? Below is the halal logo of NAHA.

One of the products is "Arare", which is a type of rice cracker. It is produced by Osama Rice Cracker Co. Pte Ltd. Do note that not all of their products are halal. So what is "Arare"? According to Wikipedia, it is a Japanese confectionery made from glutinous rice; it differs from senbei in its size and shape. In other words, it is fried mochi.

Okay, great, there are halal souvenirs, but where do we buy them? We saw online that they can be found in a shop called "Hyobando" along the row of shops at Asakusa Temple. But my sister said it was too out of the way for her.

Photo credit: NAHA Facebook page

Then, browsing through the NAHA Facebook page, I found out that it is also sold at Narita Airport Terminal 1! I thought: great, she can buy it before she boards the plane. NAHA posted the photo below but did not state the exact location…like where in the terminal it can be bought.

Photo credit: NAHA Facebook page
So, using my detective skills, I went to the Narita Airport website to check the directory and looked for shops with a similar display. The shop is called KONNICHIWA and its location is:

Narita Airport Terminal 1
Level 4 of Central Building

So back to the story about my eldest sister. She said her departure hall was at Terminal 2 instead, and she would most likely not be able to go to Terminal 1 first due to a shortage of time. So I looked for a way for her to buy the halal "Arare" online. Sakura Gate (link) lets you buy online and provides a delivery service to the airport, except that you need to order in advance…and this was not possible for my sister. But I think the idea is genius! Tourists can now save the time spent looking for souvenirs while they are on holiday. I always find looking for souvenirs to buy for others a waste of time HAHA. In the end, I told my sister that I had no other solution for her.

So she finally returned to Singapore after her trip and reached home at 2 am. I saw her carrying an ANA paper bag with 8 packets of halal "Arare" in it. I was like, where did you get those? Apparently, it is sold at Narita Airport Terminal 2 as well. So another place to buy the halal "Arare" is the ANA FESTA lobby gift shop located at:

Narita Airport Terminal 2
Level 4 of the Main Building

My sister bought 2 flavours of halal "Arare": seaweed and red pepper. I only have a photo of the seaweed one since we had already eaten the red pepper one. I heard there is a new flavour (wasabi) coming out soon.

For more information and updates on halal products in Japan, do visit the NAHA Facebook page.

          HTTP Verb Tampering Demo/Example/Tutorial         

What is an HTTP Verb?

  •  According to Wikipedia, "The Hypertext Transfer Protocol (HTTP) is an application protocol for distributed, collaborative, hypermedia information systems. HTTP is the foundation of data communication for the World Wide Web."

  • A verb is simply an HTTP method, used to indicate the desired action to be performed on the identified resource.

-  List of some basic HTTP verbs or methods:
  • GET
  • HEAD
  • POST 
  • PUT

What is HTTP Verb Tampering? 

It's a method of bypassing a defense technique by tampering with the HTTP verb. Some secret directories have access restricted by basic authentication. These directories are protected by an .htaccess file, which can be easily exploited. This attack is the result of an Apache .htaccess misconfiguration.

An administrator limits access to a private resource or directory via the POST request method only. See the vulnerable code below.
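A minimal sketch of the kind of vulnerable .htaccess configuration being described; the file path and realm name here are placeholders, not taken from the original demo:

```apache
AuthType Basic
AuthName "Restricted Area"
AuthUserFile /var/www/private/.htpasswd

# Only POST requests are subject to authentication; GET, HEAD and
# every other method sail straight past the check.
<Limit POST>
require valid-user
</Limit>
</Limit>
```

Because <Limit> applies the require directive only to the listed methods, any request using a different verb bypasses the authentication entirely.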

Here AuthUserFile is the path to the .htpasswd file, which contains the usernames and passwords in hashed form.

require valid-user

It limits only the POST method and matches the credentials against those saved in the .htpasswd file; if they are wrong, an error page shows up.

Here the administrator has limited the POST method but has not blacklisted the other methods. This means any request via another method leads to the attacker having access to the protected private resources or directory. Below I have provided a video demo of a successful exploitation of an HTTP verb tampering vulnerability via Live HTTP Headers (a Firefox add-on) on an AT&T subdomain (reported & fixed). In the next post I will show you various ways to fix or apply a patch for this vulnerability.

          SCSI support and a big surprise        
Last week I added SCSI disk support for the CD-i 60x extension board to CD-i Emulator. It took somewhat longer than I expected, though. This was mostly because the DP5380 SCSI controller chip exposes most low-level details of the SCSI protocol to the driver, which means that all of those details have to be emulated.

The emulation ended up being a more-or-less complete software implementation of the parallel SCSI-2 protocol, including most of the low-level signaling on the BSY, SEL, ATN, MSG, C/D, I/O, REQ and ACK lines. This is all implemented by the new CScsiBus class representing the SCSI bus, which connects up to 16 instances of the CScsiPort class, each representing a single SCSI-2 bus interface. I was able to mostly avoid per-byte signaling of REQ and ACK when the target device implementation supports block transfers, a big performance win.

The new CCdiScsiDevice class emulates the DP5380 controller chip, working in conjunction with the CCdiScsiRamDevice and CCdiScsiDmaDevice classes that emulate the 32 KB of local extension SRAM and the discrete DMA logic around it that are included on the CD-i 60x extension board.

The CD-i 182 extension uses a compatible SCSI controller chip but a different DMA controller, and it has no local extension SRAM. I have not yet emulated these because I have almost no software to test them with.

The new CScsiDevice class implements a generic SCSI device emulating minimal versions of the four SCSI commands that are mandatory for all SCSI device types: TEST UNIT READY, REQUEST SENSE, INQUIRY and SEND DIAGNOSTIC. It implements most of the boiler-plate of low-level SCSI signaling for target devices and the full command and status phases of SCSI command processing, allowing subclasses to focus on implementing the content aspects of the data transfer phase.

The CScsiFile class emulates a SCSI device backed by a file on the host PC; it includes facilities for managing the SCSI block size and the transfer of block-sized data to and from the backing file.

The CScsiDisk and CScsiTape classes emulate a SCSI disk and tape device, respectively, currently supporting a block size of 512 bytes only. Instances of these classes are connected to the SCSI bus by using the new
-s[csi]d[isk][0-7] FILE and -s[csi]t[ape][0-7] FILE options of CD-i Emulator.

The CD-i 60x extension board normally uses SCSI id 5; the built-in ROM device descriptors for SCSI disks use SCSI ids starting at zero (/h0 /h1 /h2) while the built-in device descriptor for a SCSI tape uses SCSI id 4 (/mt0). This means that the useful options with the 60x are -scsidisk0, -scsidisk1, -scsidisk2 and -scsitape4.

I've added the new dsk subdirectory to contain disk images; tape images have no standard location as they are mostly intended for bulk-transfer purposes (see below).

Inside the CD-i player this leads to the following response to the built-in inquire command:
$ inquire -i=0
vendor identification:"CDIFAN CDIEMU SCSIDISK "

$ inquire -i=4
vendor identification:"CDIFAN CDIEMU SCSITAPE "
where the "CDIFAN " part is the vendor name and the "CDIEMU SCSIXXXX " part is the product name.

In the previous post I described a 450 MB OS-9 hard disk image that I found on the Internet. After mounting it with
-scsidisk0 mw.dsk I got the following output:
$ free /h0
"MediaWorkshop" created on: Feb 17, 1994
Capacity: 1015812 sectors (512-byte sectors, 32-sector clusters)
674144 free sectors, largest block 655552 sectors
345161728 of 520095744 bytes (329.17 of 496.00 Mb) free on media (66%)
335642624 bytes (320.09 Mb) in largest free block

$ dir -d /h0

Directory of /h0 23:49:36
ETC/ FDRAW/ FONTS/ FontExample/ ISP/
TEST/ USR/ VIDEO/ abstract.txt bibliographic.txt
bkgd.c8 bkgd.d cdb cdb1 cdb2
cdi_opt_install chris_test cin copyright.mws copyright.txt
csd_605 custominits_cin delme dos/ file
font8x8 get globs.mod go go.mkfont
inetdb ipstat kick1a_f.c8 kick2a_f.c8 mtitle
mws net new_shell new_shell.stb scratch
screen startup_cin thelist
You can see why I thought it was a MediaWorkshop disc, but on closer inspection this turned out to be something quite different. Some basic scrutiny led to the hypothesis that this is probably a disk backup of someone from Microware working on early development of the DAVID (Digital Audio Video Interactive Decoder) platform. There are various surprises on the disk, which I will describe below.

Anyway, I wanted to transfer the contents to the PC as a tar archive, similar to the procedure I used for my CD-i floppy collection. After starting CD-i Emulator with a -scsitape4 mw.tar option this was simply a matter of typing the following into the terminal window:
tar cb 1 /h0
This command runs the "tape archiver" program to create a tape with the contents of the /h0 directory, using a tape blocking size of 1 (necessary because my SCSI tape emulation doesn't yet support larger block sizes). The resulting mw.tar file on the PC is only 130 MB, not 450 MB which indicates that the disk is mostly empty. At some point I might use an OS-9 "undelete" program to find out if there are additional surprises.

Extracting the mw.tar file was now a simple matter of running the PC command
tar xvf mw.tar
This produced an exact copy of the OS-9 directory structure and files on the PC.

Many of the directories on the hard disk are clearly copies of various distribution media (e.g. CDI_BASECASE, CINERGY, CURSORS, ENET, FONTS, ISP, MWOS, NFS). The contents of the ENET, ISP and NFS directories at first appear to match some of my floppies, including version numbers, but on closer inspection the binaries are different. Running some of them produces "Illegal instruction" errors so I suspect that these are 68020 binaries.

The SHIP directory contains some prerelease RTNFM software; the readme talks about PES which is a type of MPEG-2 stream (Packetized Elementary Stream). Various asset directories contain versions of a "DAVID" logo.

The CMDS directory contains working versions of the Microware C compiler, identical to the ones I already had and also many other programs. It also contains some "cdb" files (configuration database?) that mention the 68340 processor.

The contents of the CMDS/BOOTOBJS directory produced a first surprise: it contains a subdirectory JNMS containing, among others, files named "rb1793" and "scsijnms". Could these be floppy and SCSI drivers for the CD-i 182 extension, as it contains a 1793 floppy drive controller (the CD-i 60x uses a different one) and the player has a "JNMS" serial number?

Well, yes and no. Disassembly of the scsijnms file proved it to be compiled C code using an interface different from OS-9 2.4 drivers, so I suspect this is an OS-9 3.x driver. In any case, I cannot use it with the stock CD-i 180 player ROMs. Bummer...

And now for the big surprise: deeply hidden in a directory structure inside the innocently named COPY directory is the complete assembly source for the VMPEG video driver module "fmvdrv". At first glance it looked very familiar from my disassembly exercises on the identically-named Gate Array 2 MPEG driver module "fmvdrv", which is as expected because I had already noticed the large similarity between these two hardware generations.

The source calls the VMPEG hardware the "IC3" implementation, which matches CD-i digital video history as I know it. The Gate Array MPEG hardware would be "IC2" and the original prototype hardware would be "IC1". Furthermore, the sources contain three source files named fmvbugs1.a to fmvbugs3.a whose source file titles are "FMV first silicon bugs routines" to "FMV third silicon bugs routines". The supplied makefile currently uses only fmvbugs3.a as is to be expected for a VMPEG driver.

The fmvbugs1.a source contains some of the picture buffer manipulation logic that I've so far carefully avoided triggering because I couldn't understand it from my disassemblies, and this is now perfectly understandable: they are workarounds for hardware bugs!

As of two hours ago, I have verified that with a little tweaking and reconstruction of a single missing constants library file these sources produce the exact "fmvdrv" driver module contained in the vmpega.rom file directly obtained from my VMPEG cartridge.

In general these sources are very heavily commented, including numerous change management comments. They also include a full set of hardware register and bit names, although no comments directly describing the hardware. This should be of great help in finally getting the digital video emulation completely working.

All of the comments are English, although a few stray words and developer initials lead me to believe that the programmers were either Dutch or Belgian.

Disassembly comparisons lead me to the conclusion that careful undoing of numerous changes should result in exact sources for the GMPEGA2 driver module "fmvdrv" as well. I might even do it at some point, although this is not high priority for me.

The disk image containing all of these surprises is publicly available on the Internet since at least 2009, which is probably someone's mistake but one for which I'm very grateful at this point!
          CD-i floppy inventory        
Last weekend I future-proofed my CD-i floppy collection. A bit to my surprise, all floppies except one turned out to be perfectly readable (nearly twenty years after they were last written!). Luckily, the one exception was a backup copy so I didn’t lose any contents.

I had originally intended to use the borrowed CDI 182 unit for this (it has two floppy drives). The primary motivation for this was that my unstowed CDI 605 could not read beyond track zero of any floppy, but after giving the matter some thought I decided to try my other CDI 605 first, the primary motivation for this being speed (see below). It turned out that this 605 could read the floppies perfectly, including the three 38U0 format ones that gave problems on the 182 unit. Microware has defined a number of OS-9 disk formats for floppies, the 38U0 one supposedly being the “universal” 3.5" format (there is also a 58U0 “universal” 5¼" format).

The problem with the “universal” formats is that track zero can be (and on my floppies, is) in a different density which makes it a bad fit for most tools, both on CD-i and PC. It also means that only 79 tracks are used for data storage, giving a raw capacity of 79 × 2 × 16 × 256 = 632 KB. The 3803 format used by all my other CD-i floppies uses all 80 tracks and consequently has 8 KB more of raw storage for a total of 640 KB (these are both double-density, double-side formats (DS, DD) with 16 sectors of 256 bytes per track like nearly all OS-9 disk formats).

Before unstowing my other CDI 605 (it was nearly at the bottom of a 150 cm stowed equipment stack) I tried reading the floppies with my trusty old Windows 98 machine which still has floppy drives. I could not quickly find a DOS tool that handled the 256 byte sectors (not even raread and friends), although I suspect that Sydex’s TELEDISK product would have handled it just fine. I also tried Reischke’s OS9MAX which should handle all OS-9 formats under the sun according to its documentation. The demo version ran under MS-DOS and gave me working directory listings, even for the 38U0 floppies, but it does not support actually reading the files and I am somewhat doubtful about the current availability of the paid-for full version (even apart from cost concerns).

Why did I decide to use the 605? It was not a question of reading the disks (the 182 did this mostly fine) but of handling the data thus read. The 182 unit has a SCSI connector but I have no drivers for it (yet) and dumping my full floppy collection over the serial port did not really appeal to me for speed and reliability reasons (it could have been done, of course).

The 605 player has a SCSI connector and includes drivers for it so I could have just connected it to the SCSI disk in my E1 emulator and copied the floppies to hard disk (I would still have needed to transfer them to my laptop which would have been a two-step process via the Windows 98 PC as I have no SCSI connection on my laptop).

Instead I used the BNC network connector of the 605 to directly transfer floppy images to my laptop (it needs a network switch supporting both a BNC connector and the modern RJ45 connectors, but luckily I have two of those, even if they are only 10 Mbit/s). Starting up the network environment of the 605 took only two OS-9 commands at the command shell prompt:
ispmode /le0 addr=
After this I could just ftp into my laptop, where I ran ftpdmin, a very minimal ftp server program, and transfer floppy disk images directly:
put /d0@ floppy.dsk
(where /d0@ is the raw floppy device, for 38U0 I used /d0uv@, both are built-in for the 605).

The transfers ran at the maximum speed of the floppy drive (way below the 10 Mbit/s network speed), and the resulting .dsk files are perfectly readable using the -v option (virtual disk) of Carey Bloodworth’s os9.exe program, even though that program was originally written for Tandy Color Computer OS9/6809 floppies (the floppy disk format was not changed for OS-9/68000, which is at the core of CD-i’s CD-RTOS operating system).

For easy access I also created a “tar” format archive of each floppy on a RAM disk:
chd /d0
tar cvf /r768/floppy.tar .
and ftp’d those to my laptop as well (the /r768 device is a 768 KB variation of the /r512 built-in 512 KB RAM disk device of the 605 player).

I ended up with the following collection of unique floppy disk images:
  • 605h3 - 605 H3 Driver Update (1 floppy)
  • 605upd - 605 Driver Update (1 floppy)
  • bcase - Basecase Tests (1 floppy)
  • eboot41 - Emulation Boot Diskette (1 floppy)
  • eburn41 - Emulation and CDD 521 Boot Diskette (1 floppy)
  • inet - CD-I Internet Installation Disk - V1.3 (1 floppy)
  • nfs - OS-9/68000 Network File System V.1.0 (1 floppy)
  • os9sys - OS-9 System Diskette (1 floppy)
  • pubsoft - OptImage Public Domain Software (2 floppies)
  • pvpak - OptImage Preview Pak Installation Disk (1 floppy)
  • ubridge - OS-9 UniBridge Resident Utilities (3 floppies)

The 605* and eb* floppies are mostly interesting for CD-i 605 or E1 emulator owners, but the bcase floppy contains a set of CD-i standard conformance test programs.

The inet and nfs floppies contain a full set of Internet software including Telnet and FTP servers and clients and an NFS client (all except the latter are also in the 605 ROMs).

The os9sys floppy contains a full set of Professional OS-9 programs and is my original source for most of the OS-9 CD-i disc that I described earlier (most of these are not in ROM on any CD-i player that I’ve seen so far).

The pubsoft floppies contain miscellaneous utilities such as bfed, du, kermit, umacs and vi, most of which can be obtained elsewhere; some CD-i specific utilities such as da (CD-i disk analyzer) and iffinfo (CD-i IFF file dumper); as well as library source files for the CD-i IFF file library.

The pvpak floppy contains preview software for CD-i images that will preview CD-i IFF files from an NFS-mounted host file system directory.

The ubridge floppies are the goldmine (and also the 38U0 format ones) as they contain a full set of native Microware C compiler/assembler/linker/debugger software for OS-9 complete with CD-i header files and libraries and C runtime startup sources. Both the srcdbg and sysdbg debuggers are included as well as the rdump utility for dumping ROFF (Relocatable Object File Format) files.

Unfortunately, most of the above software except for the pubsoft contents is copyrighted property of Microware (now Radisys) or OptImage (a former Philips/Microware joint venture) which means that I cannot distribute it, even though they could be very useful to CD-i homebrew developers. For that the hopefully soon-to-be available GCC cross-port will have to be enough...

While investigating all of the above I also stumbled upon a 450 MB OS-9 hard disk image for MediaWorkshop. The os9.exe program recognizes it just enough to say that it does not support it so I have no real idea about its contents except the obvious.

To remedy that problem I’m in the process of adding SCSI disk support to CD-i emulator so that I can use the SCSI support in the CD-i 605 ROMs to mount the disk image and look at it. This should also allow the CD-i 180 to boot from a SCSI disk if I ever find drivers for it (a possible path to that has just appeared, we’ll see...).
          ROM-less emulation progress        
Over the last two weeks I have implemented most of the high-level emulation framework that I alluded to in my last post here as well as a large number of tracing wrappers for the original ROM calls. In the next stage I will start replacing some of those wrappers with re-implementations, starting with some easy ones.

It turns out I was somewhat optimistic; so far I have wrapped over 450 distinct ROM entry points (the actual current number of wrappers is 513, but there are some error catchers and possible duplicates). Creating the wrappers and writing and debugging the framework took more effort than I expected, but it was worth it: every call to a ROM entry point described or implied by the Green Book or the OS-9 documentation is now wrapped with a high-level emulation function that so far does nothing except call the original ROM routine and trace its input/output register values.

Surely there aren't that many application-callable API functions, I can hear you think? Well actually there are, for sufficiently loose definitions of "application-callable". You see, the Green Book specifies CD-RTOS as being OS-9 and every "trick" normally allowed under OS-9 is theoretically legal in a CD-i title. That includes bypassing the OS-supplied file managers and directly calling device drivers; there are many CD-i titles that do some of this (the driver interfaces are specified by the Green Book). In particular, all titles using the Balboa library do this.

I wanted an emulation framework that could handle this so my framework is built around the idea of replacing the OS-9 module internals but retaining their interfaces, including all the documented (and possibly some undocumented) data structures. One of the nice features of this approach is that native ROM code can be replaced by high-level emulation on a routine-by-routine basis.

How does it really work? As a start, I've enhanced the 68000 emulation to possibly invoke emulation modules whenever an emulated instruction generates one of the following processor exceptions: trap, illegal instruction, line-A, line-F.

The emulation modules can operate in two modes: either copy an existing ROM module and wrap its entry points, or generate an entirely new memory module. In both cases, the emulation module will emit line-A instructions at the appropriate points. The emitted modules go into a memory area appropriately called "emurom" that the OS-9 kernel scans for modules. Giving the emitted modules identical names but higher revision numbers than the ROM modules causes the OS-9 kernel to use the emitted modules.

This approach works for every module except the kernel itself, because it is entered by the boot code before the memory scan for modules is even performed. The kernel emulation module will actually patch the ROM kernel entry point so that it jumps to the emitted kernel module.

The emitted line-A instructions are recognized by the emulator disassembler; they are called "modcall" instructions (module call). Each such instruction corresponds to a single emulation function; entry points into the function (described below) are indicated by the word immediately following it in memory. For example, the ROM routine that handles the F$CRC system call now disassembles like this:

modcall kernel:CRC:0
jsr XXX.l
modcall kernel:CRC:$
rts

Here the XXX is the absolute address of the original ROM routine for this system call; the two modcall instructions trace the input and output registers of this handler. If the system call were purely emulated (no fallback to the original ROM routine) it would look like this:

modcall kernel:CRC:0
modcall kernel:CRC:$
rts

Both modcall instructions remain, although technically the latter is now unnecessary, but the jsr instruction has disappeared. Technically, the rts instruction could also be eliminated but it looks more comprehensible this way.

One could view the approach as adding a very powerful "OS-9 coprocessor" to the system.

If an emulation function has to make inter-module calls, complications arise. High-level emulation context cannot cross module boundaries, because the called module may be native (and in many cases even intra-module calls can raise this issue). For this reason, emulation functions need additional entry points where the emulation can resume after making such a call. The machine language would look like this, e.g. for the F$Open system call:

modcall kernel:Open:0
modcall kernel:Open:25
modcall kernel:Open:83
modcall kernel:Open:145
modcall kernel:Open:$

The numbers following the colon are relative line numbers in the emulation function. When the emulation function needs to make a native call, it pushes the address of one such modcall instruction on the native stack, sets the PC register to the address it wants to call and resumes instruction emulation. When the native routine returns, it will return to the modcall instruction which will re-enter the emulation function at the appropriate point.

One would expect that emulation functions making native calls need to be coded very strangely: a big switch statement on the entry code (relative line number), followed by the appropriate code. However, a little feature of the C and C++ languages allows the switch statement to be mostly hidden: they allow the case labels of a switch statement to be nested arbitrarily deep in the statements inside the switch.

The entire contents of emulation functions are encapsulated inside a switch statement on the entry number (hidden by macros):

switch (entrynumber) {
case 0:
...

On the initial call, zero is passed for entrynumber so the function body starts executing normally. Where a native call needs to be made, the processor registers are set up (more on this below) and the MOD_CALL macro is invoked.
This macro expands to something like this:

return eMOD_CALL;
case __LINE__:

Because this is a macro expansion, both invocations of the __LINE__ macro expand to the line number of the MOD_CALL macro invocation.

What this does is save the target address and return line inside MOD_PARAMS and then return from the emulation function with the value eMOD_CALL. This value causes the wrapper code to push the address of the appropriate modcall instruction and jump to the specified address. When that modcall instruction executes after the native call returns, it passes the return line to the emulation function as the entry number, which will dutifully switch on it, and control will resume directly after the MOD_CALL macro.

In reality, the code uses not __LINE__ but __LINE__ - MOD_BASELINE, which yields relative line numbers instead of absolute ones; MOD_BASELINE is a constant defined as the value of __LINE__ at the start of the emulation function.

The procedure described above has one serious drawback: emulation functions cannot have "active" local variables at the point where native calls are made (the compiler will generate errors complaining that variable initialisations are being skipped). However, the emulated processor registers are available as temporaries (properly saved and restored on entry and exit of the emulation function if necessary) which should be good enough. Macros are defined to make accessing these registers easy.

When native calls need to be made, the registers must be set up properly. This would lead to constant "register juggling" before and after each call, which is error-prone and tedious. To avoid it, it is possible to use two new sets of registers: the parameter set and the results set. Before a call, the parameter registers must be set up properly; the call will then use these register values as inputs and the outputs will be stored in the results registers (register juggling will be done by the wrapper code). The parameter registers are initially set to the values of the emulated processor registers and also set from the results registers after each call.

The following OS-9 modules are currently wrapped:

kernel nrf nvdrv cdfm cddrv ucm vddrv ptdrv kbdrv pipe scf scdrv

The *drv modules are device drivers; their names must be set to match the ones used in the current system ROM in order to properly override those. The *.brd files in the sys directory have been extended to include this information like this:

** Driver names for ROM emulation.

The kernel emulation module avoids knowledge of system call handler addresses inside the kernel by trapping the first "system call" so that it can hook all the handler addresses in the system and user mode dispatch tables to their proper emulation stubs. This first system call is normally the I$Open call for the console device.

File manager and driver emulation routines hook all the entry points by simply emitting a new entry point table and putting the offset to it in the module header. The offsets in the new table point to the entry point stubs (the addresses of the original ROM routines are obtained from the original entry point table).

The above works fine for most modules, but there was a problem with the video driver because it is larger than 64 KB (the offsets in the entry point table are 16-bit values relative to the start of the module). Luckily there is a text area near the beginning of the original module (it is actually just after the original entry point table) that can be used for a "jump table" so that all entry point offsets fit into 16 bits. After this it should have worked, but it didn't, because it turns out that UCM has a bug that requires the entry point table to *also* be in the first 64 KB of the module (it ignores the upper 16 bits of the 32-bit offset to this table in the module header). This was fixed by simply reusing the original entry point table in this case.

One further complication arose because UCM requires the initialisation routines of drivers to also store the absolute addresses of their entry points in UCM variables. These addresses were "hooked" by adding code to the initialisation emulation routine that changes these addresses to point to the appropriate modcall instructions.

All file managers and drivers contain further dispatching for the SetStat and GetStat routines, based on the contents of one or two registers. Different values in these registers will invoke entirely separate functions with different register conventions; they really must be redirected to different emulation functions. This is achieved by lifting the dispatching to the emulation wrapper code (it is all table-driven).

Most of the above has been implemented, and CD-i emulator now traces all calls to ROM routines (when emurom is being used). A simple call to get pointing device coordinates would previously trace as follows (when trap tracing was turned on with the "et trp" command):

@00DF87E4(cdi_app) TRAP[5812] #0 I$GetStt <= d0.w=7 d1.w=SS_PT d2.w=PT_Coord
@00DF87E8(cdi_app) TRAP[5812] #0 I$GetStt => d0.w=$8000 d1.l=$1EF00FD

Here the input value d0.w=7 is the path number of the pointing device; the resulting mouse coordinates are in d1.l and correspond to (253,495).

When modcall tracing is turned on, this "simple" call will trace as follows:

@00DF87E4(cdi_app) TRAP[5812] #0 I$GetStt <= d0.w=7 d1.w=SS_PT d2.w=PT_Coord
@00F86EE0(kernel) MODCALL[16383] kernel:GetStt:0 <= d0.w=7 d1.w=$59 [Sys]
@00F86D10(kernel) MODCALL[16384] kernel:CCtl:0 <= d0.l=2 [NoTrap]
@00F86D1A(kernel) MODCALL[16384] kernel:CCtl:$ =>
@00F8A460(ucm) MODCALL[16385] ucm:GetPointer:0 <= u_d0.w=7 u_d2.w=0
@00FA10A4(pointer) MODCALL[16386] pointer:PtCoord:0 <= d0.w=7
@00FA10AE(pointer) MODCALL[16386] pointer:PtCoord:$ => d0.w=$8000 d1.l=$1EF00FD
@00F8A46A(ucm) MODCALL[16385] ucm:GetPointer:$ =>
@00F86D10(kernel) MODCALL[16387] kernel:CCtl:0 <= d0.l=5 [NoTrap]
@00F86D1A(kernel) MODCALL[16387] kernel:CCtl:$ =>
@00F86EEA(kernel) MODCALL[16383] kernel:GetStt:$ =>
@00DF87E8(cdi_app) TRAP[5812] #0 I$GetStt => d0.w=$8000 d1.l=$1EF00FD

You can see that the kernel dispatches this system call to kernel:GetStt, the handler for the I$GetStt system call. It starts by doing some cache control and then calls the GetStat entry point of the ucm module, which dispatches it to its GetPointer routine. This routine in turn calls the GetStat routine of the pointer driver, which dispatches it to its PtCoord routine. This final routine performs the actual work and returns the results, which are then ultimately returned by the system call, after another bit of cache control.

The calls to ucm:GetStat and pointer:GetStat are no longer visible in the above trace as the emulation wrapper code directly dispatches them to ucm:GetPointer and pointer:PtCoord, respectively; it doesn't even trace the dispatching because this would result in another four lines of tracing output.

As a sidenote, all of the meticulous cache and address space control done by the kernel is really wasted, as CD-i systems do not need it. But the calls are still being made, which makes the kernel needlessly slow; this is one major reason why titles often call device drivers directly. Newer versions of OS-9 eliminate these calls by using different kernel flavors for different processors and hardware configurations.

The massive amount of tracing needs to be curtailed somewhat before further work can productively be done; this is what I will start with next.

I have already generated fully documented stub functions for the OS-9 kernel from the OS-9 technical documentation; I will also need to generate stubs for all file manager and driver calls, based on the digital Green Book.

It is perhaps noteworthy that some kernel calls are not described in any of the OS-9 version 2.4 documentation that I was able to find, but they *are* described in the online OS-9/68000 version 3.0 documentation.

Some calls made by the native ROMs remain undocumented, but those mostly seem to be CD-i system control calls (for example, one of them sets the front display text). Of the OS-9 kernel calls, only the following ones are currently undocumented:


Their existence was inferred by the appropriate constants existing in the compiler library files, but I have not seen any calls to them (yet).
          CD-i Emulator Cookbook        
Just a quick note that work on CD-i Emulator hasn't stopped.

I have some wild ideas about ROM-less emulation; this would basically mean re-implementing the CD-RTOS operating system. Somewhat daunting: it contains over 350 separate explicit APIs and callable entry points, and many system data structures would need to be closely emulated. But it can be done; CD-ice proved it (although it took a number of shortcuts that I want to avoid).

I'm not going to tackle that by myself; my current thinking is to make a start by implementing a high-level emulation framework, tracing stubs for all the calls (luckily these can mostly be generated automatically from the digital Green Book and OS-9 manuals) and some scaffolding and samples.

One of the pieces of scaffolding would be a really simple CD-i player shell; one that just shows a big "Play CD-i" button and then starts the CD-i title :-)

For samples I'm thinking about a few easy system calls like F$CRC, F$SetCRC, F$SetSys, F$CmpNam, F$PrsNam, F$ID, F$SUser, F$Icpt, F$SigMask, F$STrap, F$Trans, F$Move, F$SSvc (I may not get through the entire list) and a new NVRAM File Manager (NRF).
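As a sample, F$CRC lends itself to a compact sketch. OS-9 module CRCs are commonly described as a 24-bit CRC with polynomial $800063 and an accumulator seeded with $FFFFFF; treat those parameters as assumptions to check against the real OS-9 documentation before relying on them:

```python
# Minimal sketch of an F$CRC-style 24-bit CRC accumulation.
# Polynomial and seed are assumed from commonly cited OS-9 values.

POLY = 0x800063

def os9_crc(data, accum=0xFFFFFF):
    """Accumulate a 24-bit CRC over `data` (bytes), MSB first."""
    for byte in data:
        accum ^= byte << 16
        for _ in range(8):
            accum <<= 1
            if accum & 0x1000000:   # bit shifted out of the 24-bit range
                accum ^= POLY
            accum &= 0xFFFFFF       # keep the accumulator at 24 bits
    return accum
```

The real F$SetCRC-style call would finish by storing the (possibly complemented) accumulator into the module; that final step is omitted here.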

It would be nice to do a minimal UCM with Video and Pointer driver so that the simple CD-i player shell would run, but that might be too much. We'll see.

However, it's the new NRF that would be the most immediately interesting for CD-i Emulator users. It would intercept NVRAM access at the file level and redirect it to the PC file system (probably to files in the nvr directory). This would allow easy sharing of CD-i NVRAM files (e.g. game saves) across player types or between CD-i Emulator users.

To allow all of the above and clean up some dirty tricks that were needed for input playback and handling Quizard, I've done some internal restructuring of CD-i Emulator. In particular, I introduced a new "handler" class beneath the existing "device" and "memory" classes (which are now no longer derived from each other but from a common "component" base class). This restructuring isn't finished yet, but it will allow the input and Quizard stuff to become handlers instead of devices (the latter is improper because they shouldn't be visible on the CD-i system bus).
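The restructured hierarchy can be summarized in a few lines of sketch code (Python used purely as a diagram; the emulator itself is C++, and the docstrings paraphrase the roles described above):

```python
# Diagram-as-code of the restructured class hierarchy.

class Component:
    """Common base: anything the emulator core manages."""

class Memory(Component):
    """Visible on the CD-i system bus as addressable memory."""

class Device(Component):
    """Visible on the CD-i system bus as a device."""

class Handler(Component):
    """Participates in emulation without appearing on the system bus,
    e.g. input playback or Quizard support."""

class Module(Handler):
    """High-level emulation of an OS-9 / CD-RTOS ROM module."""
```

The key design point is that Device and Memory are no longer derived from each other; bus visibility and emulation participation are now separate concerns under the common Component base.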

The new "module" class (a subclass of handler) will be used to add high-level emulation of OS-9 and CD-RTOS rom modules. I want to preserve the interfaces between the modules and the public data structures as much as possible, because it will allow a gradual transition from "real" to "emulated" modules.

To prepare for all of the above I had to do some fairly heavy design, which caused me to properly write down some of the design information and tradeoffs for the first time. This will be invaluable information for co-developers (if they ever materialize), hence the title "CD-i Emulator Cookbook". Well, at present it's more like a leaflet but I hope to expand it over time and also add some history.

Pieces of the cookbook will be added to the CD-i Emulator website if I feel they're ready.

I've also been giving some thought on a collaboration model for the ROM-less emulation. If there is interest I could do a partial source release that would allow other developers to work on the ROM-less emulation. This release would *not* contain any non-video chipset emulation but it would contain "generic" CD-i audio and video decoding. You would still need to use (part of) a system ROM (in particular the OS-9 kernel and file managers) until enough of the emulation is finished.

I'm still considering all this, but I wanted to get the word out to see if there is interest and to show that I haven't abandoned the project.

Potential co-developers should start boning up on OS-9 and CD-RTOS. All of the technical OS-9 documentation is online at ICDIA and links to the digital Green Book can also be found.
          CD-i Emulator 0.5.3-beta1 released!        
I have just released the new version of CD-i Emulator. It has received very limited pre-beta testing, but that's what public betas are for.

I've changed relatively little since the prebeta1 version.

The most important change is the addition of the Help | Report menu option, which contains a link to the new Report section of the website and will automatically fill in most of the information fields on that form when clicked.

Unfortunately, to make this work properly I had to touch most files in the sys directory to add correct player model, extension and digital video cartridge identification.

I've also added checksum reporting of DV cartridges; let's find out how complete my current collection is. These checksums will not (yet) trigger an "Unknown ROM" dialog if unknown, but they are posted with compatibility reports.

The -writepng option also had to be fixed to support macros in its filename, otherwise each video frame would overwrite the previous one! The macros that I've decided to support for now are $seq$, which produces a 6-digit sequence number and $time$ which produces the 6-digit frame time (mmssff).
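The macro expansion could work roughly like this sketch; the 6-digit widths and the mmssff layout come from the text, while the function name and argument shape are assumptions:

```python
# Sketch of -writepng filename macro expansion. $seq$ expands to a
# 6-digit sequence number, $time$ to the 6-digit frame time (mmssff).

def expand_macros(template, seq, minutes, seconds, frames):
    time_str = f"{minutes:02d}{seconds:02d}{frames:02d}"
    return (template.replace("$seq$", f"{seq:06d}")
                    .replace("$time$", time_str))
```

Without such macros every frame would be written to the same filename, which is exactly the overwrite problem described above.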

The Release Notes have been modified to include these changes.

Use of the input recording/playback functions revealed a number of bugs; I've fixed those and also added more player information recording so that recorded input files contain enough information for compatibility reports (this isn't used yet).
          Finalizing input record/playback        
Today I resumed working on the input record/playback code to get it towards its final specification for this beta.

I want the record/playback code to be generic, by which I mean:

1. Recorded input files must be dumpable (e.g. by cdifile) without having to know intimate details of the recorded devices and messages. This can be achieved by labeling all input channels with name and type and recording the formats of all input message types, which is now mostly done.

2. Recorded input files must contain all the information required for faithful playback, which includes things like CD-i player model, DVC cartridge type, extension roms, PAL/NTSC and special startup options and emulated disc insertions.

The ideal is that "wcdiemu -playback" will just work without the need to fiddle with emulator options to reproduce the recording environment. This is now also mostly done except for the disc insertions (they are recorded but cannot yet be played back).

3. It must in principle be possible to play back on a different CD-i player model or with a different DVC cartridge. This means that the input channel matching must be somewhat "intelligent". The groundwork for this is in place but the actual channel matching is not yet there.
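The channel-matching groundwork might be sketched as a two-pass match, by name first and then by type; the tuple representation of a channel is an assumption:

```python
# Sketch of "intelligent" input-channel matching between a recording
# made on one player model and the channels available at playback.
# Channels are modeled as (name, type) tuples for illustration.

def match_channels(recorded, available):
    """Return a dict mapping recorded channel names to available ones."""
    mapping = {}
    free = list(available)
    # Pass 1: exact name matches.
    for name, ctype in recorded:
        for cand in free:
            if cand[0] == name:
                mapping[name] = cand[0]
                free.remove(cand)
                break
    # Pass 2: fall back to the first free channel of the same type.
    for name, ctype in recorded:
        if name in mapping:
            continue
        for cand in free:
            if cand[1] == ctype:
                mapping[name] = cand[0]
                free.remove(cand)
                break
    return mapping
```

Matching by type is what would let a recording from one player model play back on another whose channels carry different names.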

All of the above are needed to get the feature usable for extensive compatibility testing, which is a goal of this beta: it should be possible to exchange input recordings that reproduce crashes or rendering bugs, even when they require some time to reproduce. Emulator state snapshots would make this even better, but that is too much work for now.

Recorded input files are in IFF format with the following generic structure:
- each file contains a single FORM chunk of type INPT (recorded input)
- the first chunk inside the FORM is an IVER chunk (version information)
- the next chunk inside the FORM is an ICHN chunk (channel information)
- the final chunk inside the FORM is an IMSG chunk (input messages)

Input messages are recorded in binary format to keep their size small; the first time that a message type is used, the message format string is prepended.
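The chunk layout above can be sketched with a generic IFF chunk writer; the chunk IDs (FORM, INPT, IVER, ICHN, IMSG) come from the text, while the payload contents are placeholders:

```python
# Sketch of the recorded-input IFF file layout. Payloads are opaque
# placeholders here; the real chunks carry version, channel and
# message data in the emulator's own binary formats.

import struct

def chunk(ckid, payload):
    """A generic IFF chunk: 4-byte ID, big-endian 32-bit size, payload,
    padded to an even length."""
    data = ckid.encode("ascii") + struct.pack(">I", len(payload)) + payload
    if len(payload) % 2:
        data += b"\x00"
    return data

def input_file(version, channels, messages):
    """FORM chunk of type INPT containing IVER, ICHN and IMSG."""
    body = (b"INPT"
            + chunk("IVER", version)
            + chunk("ICHN", channels)
            + chunk("IMSG", messages))
    return chunk("FORM", body)
```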

The cdifile dumping code will not be released with this beta, there's too much unfinished code in there (i.e. the -dir[ectory] option to display the directory of a CD-i disc image file).

I will attempt to finish this tomorrow, but it's just barely achievable...

Until then, you'll have to do with this picture:

          Soundmap cleanup, file writing, beta preparation        
Today I started with the planned cleanup of the revised soundmap playback code. This took some time but I had it finished by mid-afternoon (please note that I have small children around, which can sometimes make progress quite slow).

By way of final test I did a 10 minute Burn:Cycle game session; there's still an audio decoding issue there because some parts of the music crackle heavily.

Then I continued working on the WAV / AVI writing front. I have this essentially working now; both types of files are correctly written (the AVI files now include both audio and video). There are new options -writewav and -writeavi to invoke these functions from the command line.

I used the WAV writing feature to check out the Burn:Cycle audio decoding issue. The audio samples go way out of range at the point of the crackles (they actually clamp against the minimum/maximum values); something is probably wrong in the scaling.

One generic issue with WAV file writing is that silent periods (where the CD-i application doesn't play any audio) do not show up in the recorded audio. For the moment, I've decided that this is a feature :-)

There is also some kind of audio timing issue in AVI file writing; the recorded files sometimes sound "skippy". However, this may also be related to the speed of my PC; these are uncompressed AVI files which take 61.5 MB per second for the CD-i video data alone (50 x 3 x 768 x 560 bytes per second). Compared to this, audio is only a measly 172 KB per second :-)
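The quoted data rates check out; here is the arithmetic, assuming CD-quality audio (44.1 kHz, 16-bit stereo) for the 172 KB/s figure:

```python
# Verifying the uncompressed AVI data-rate figures from the text:
# 50 frames/s of 768x560 video at 3 bytes per pixel.
video_bytes_per_sec = 50 * 3 * 768 * 560
video_mb_per_sec = video_bytes_per_sec / (1024 * 1024)  # ~61.5 MB/s

# CD-quality stereo audio: 44100 Hz * 2 channels * 2 bytes per sample.
audio_bytes_per_sec = 44100 * 2 * 2
audio_kb_per_sec = audio_bytes_per_sec / 1024            # ~172 KB/s
```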

I've found that real-time AVI writing is nearly impossible on my hardware; the framerate sometimes drops below 2/50 (I've lowered the previous 10/50 limit), which makes the titles unplayable. Tomorrow I'll try my work PC, which is much faster. That should also give me an opportunity to add frame rate throttling; on fast PCs the emulator currently runs too fast.

However, non-realtime AVI writing also gives the "skippy" sound, so it may be unrelated. Sometimes the audio also gets way out of sync...

One worrying generic issue is that CD-i Emulator seems to have become slower as of late, for no good reason that I can think of. It may be that some debugging code somewhere is slowing things down, but I haven't found it yet. Or it could be that the titles I'm currently running are simply more demanding...

During testing, I also fixed a recent bug where mouse input and keyboard input interfere with each other. I also took out the use of the Shift key for button 2, as it is prone to generate false button presses when using Alt-Tab / Shift-Alt-Tab to switch between windows. You can still use the numeric "+" key for button 2, but that is too far away for generic use: I need an alternative closer to the space bar. Perhaps Backspace for button 2, and Esc for button 3 (buttons 1 and 2 together)?

Finally, I've started putting together the first v0.5.3 beta distribution. It will be mostly identical to a v0.5.2 one, with some updates in the sys directory, updated cdiroms.ini and cditypes.rul files and of course an updated executable which will be named wcdiemu-v053b1.exe (for beta 1) to avoid accidentally overwriting an existing v0.5.2 executable. I've also taken a first crack at a release notes document (very descriptively named BETA1).
          The End of Yahoo Is Now Sealed, and in Its Future Is the New Name Altaba        

"Do you Yahoo?" è stato il claim che ha accompagnato i primi spot pubblicitari dello storico motore di ricerca, nato ben prima di Google e basato su un'ampia directory di siti web compilata manualmente. Solo successivamente fu implementato l'algoritmo di ricerca che, per un...

          This is what privatisation did to Australia's household electricity bills        

When three eastern and one southern state formed the National Electricity Market in December 1998, Australia had the lowest retail prices in the world, along with the United States and Canada.

The rules which underpin this National Electricity Market are created by the Australian Energy Market Commission (AEMC) set up by the Council of Australian Governments (COAG) - through the COAG Energy Council - for that purpose and to advise federal & state governments on how best to develop energy markets over time.

The Australian Energy Regulator (AER) sets the amount of revenue that network businesses can recover from customers for using networks (electricity poles and wires and gas pipelines) that transport energy.

So far so good. There's a defined market and there are rules.

Then the privatisation of electricity supply and infrastructure began in earnest.

It should come as no surprise that this push towards full privatisation, with its downhill spiral in service delivery and uphill climb in cost to retail customers, began and was progressed during the term of Liberal Prime Minister John Howard.

By 2017 the NSW Berejiklian Coalition Government had almost completed its three-stage privatisation of state power infrastructure by selling off poles and wires and, it goes without saying, the retail cost of electricity is expected to rise again next year.

This is where we stand today.

[Graphs in Financial Review, 4 August 2017]
The Financial Review, 4 August 2017:

The annual cost to households of accepting a standing offer from one of the big three retailers instead of the best offer in the market has been estimated at $830 in Victoria, $900 in Queensland and $1400-$1500 in NSW and SA by the St Vincent de Paul Society.

Mr Mountain said power bills are constructed in such a complex way that ordinary customers without sophisticated spreadsheet and analytical skills have little hope of analysing competing offers to work out which offers them the best deal.

Private comparison websites do not include all market offers and charge retailers for switching customers, while the websites offered by the Australian Energy Regulator and the Victorian government do not provide the tools customers need to discriminate among offers.

Prime Minister Malcolm Turnbull has ordered the Australian Competition and Consumer Commission (ACCC) to conduct an inquiry into electricity supply, costs and pricing, including retail pricing.

The Treasurer should have a preliminary report from the ACCC in his hands by the end of September this year, however this body does not submit a final report until 30 June 2018 with no guarantee that any recommendations will be adopted by government and industry.

Quite frankly, it appears the privatisation train left the platform some time ago and there is no way to halt or divert it in order to genuinely benefit household consumers.

          Dirsync Database keeps on Increasing…        
  Hi. Have you noticed the Office 365 directory sync database just getting larger and larger? If so, try the following to clear the sync runs:
1. Open MIISClient.exe and click the Operations tab.
2. From the Actions menu, select "Clear Runs".
3. On the Clear Runs dialog:
   a. Uncheck the option "save runs...
          Blog directory - Blogdire        
Directory of blogs, ordered by category. This is a blog about anything and everything. Here you can find what you are looking for. List your blog for free and get more traffic.
          Driving Under the Influence of Marijuana: Can You Prove It?        
In every state in the union it is unlawful to operate a motor vehicle while under the influence of marijuana. In driving under the influence of marijuana cases, the prosecution must prove beyond a reasonable doubt that the driver was so impaired at the time of driving that the driver could not operate a motor vehicle safely. Unlike alcohol, there is no accurate way to measure the level of THC (the active ingredient in marijuana), since it is difficult to know when the intake of marijuana actually occurred and because a person can test positive for marijuana for up to thirty days after using it. Measuring alcohol is a different story. Most jurors understand that a blood alcohol level of over 0.08 usually means the driver was "per se" too impaired to drive a motor vehicle safely. Marijuana is a different story.

Without a viable and reliable way to measure the level of THC in a driver, the police must resort to using other forms of testing such as conducting field sobriety tests on the driver. These tests are by nature very subjective and problematic to administer fairly. Field sobriety tests are used by the officer to determine if, in his or her professional police opinion, the driver was so impaired he or she could not operate a motor vehicle safely and therefore should be arrested for driving under the influence of marijuana.

The driver is usually ordered out of the vehicle to perform field sobriety tests. While standing on the side of the road, an officer puts the driver through a series of tests. While each jurisdiction uses its own field sobriety tests, there are some common tests used by practically all police agencies.

Balance tests. Although marijuana is not generally associated with a loss of balance, this is still a commonly used test.

Coordination tests. These tests are often difficult even when sober, so they may not be fair in certain conditions.

Mental tests. External factors such as fatigue or nervousness can affect these tests, which are often given at night or under stressful circumstances.

Marijuana impairment is difficult to prove but carries with it heavy punishment. If you are arrested for driving under the influence of marijuana based on the results of field sobriety tests, you should hire a qualified attorney at once. is a popular online directory that has a ton of information about medical marijuana, including lists of dispensaries, doctors who can prescribe it, and criminal defense attorneys who specialize in defending patients who have been arrested in relation to marijuana.


          status changed; resolution set        
  • status changed from assigned to closed
  • resolution set to fixed

(In [8497]) Look for template files in the stylesheet folder as well as the template folder for inheriting themes. Fixes #7086 props ionfish.

          status changed; resolution deleted        
  • status changed from closed to reopened
  • resolution fixed deleted

A fix is needed for get_themes(), which will now set a theme's 'Template Dir' to the name of the directory which happens to contain the first-listed file.


Not sure exactly what the patch achieves here.

Doesn't it just pick a different arbitrary file to get the directory from?

          status changed; resolution set        
  • status changed from reopened to closed
  • resolution set to fixed

Fixed in [8497]; the directory that is checked in r9179 is the template directory of that file. Therefore it doesn't matter which file is chosen; it will point to the same base directory.

I haven't done much testing on it, but it appears to be the case.

You can extend your Directory website by using a wide range of add-ons that we offer; see the list of add-ons: Events Turn your Directory into an... Read more »
          Manage a global website with Directory        
Directory is our brand new platform that encompasses a parent theme, various plugins and a wide selection of child themes. It is the most advanced theme we’ve... Read more »
          How to speed up your Directory website?        
Note: You must take a backup of your site and database before following these steps. Better safe than sorry. Here are some tips on how to speed... Read more »
          Taking the Fun Out of Fungus Getting Rid of Fungal Infections        
Did you hear about the woman who dated a mushroom? She heard he was a real "fun guy" to be with. If you are spending too much time around any fungi of your own, it may be time to see your dermatologist. Fungal infections are nothing to laugh about (much like the aforementioned joke), and in some cases they can be either debilitating or disgusting - or often a combination of the two.

If the thought of having a fungus growing on your body makes you feel a bit queasy, there's probably a reason. The bad news is that fungal infections aren't any fun; the good news is that they can be relatively easily treated. So if you have an infection on your body caused by a fungus, do something about it - quick!

What is a Fungus?

A fungus is a certain type of organism belonging to the fungus kingdom, which has more than 80,000 species (but sadly, no king or queen). Fungi are notoriously difficult to characterize, especially as they share traits with both plants and animals, although they lack both chlorophyll and vascular tissue. They can multiply both sexually and asexually (by cloning themselves), and they feed on many different types of organic material, ones that are both living and dead.

Many things fall under the classification of fungi, including mushrooms, toadstools, spores, smuts, yeasts, molds and lichens, to name but a few. Some people choose to call fungi "primitive vegetables", and as such they are able to live in air, soil, on plants or in water. Often, they live on our skin.

Fungal infections are caused by a harmful fungus (about half of all fungi fall into this category) which has infected your skin or has been breathed in by you and invaded your lungs, and they can appear in many different forms. Often it's difficult to ascertain whether a specific health complaint is caused by a fungal infection or not; this is where a dermatologist can be helpful in making an accurate diagnosis.

General Fungal Infections

Fungal infections are incredibly common and can happen to anyone, regardless of personal hygiene - although poor hygiene can definitely contribute to burgeoning infections. Here are a few you are most likely to encounter...

Athlete's Foot. Also known as tinea pedis. Perhaps the most common fungal infection of all. Makes the foot red, itchy, scaly and often smelly. Occurs as the ringworm fungus loves feet, because they are so often encased in warm sweaty socks. This nice moist environment is a prime place for the fungus to thrive, and if not treated properly can even allow transfer to other parts of the body, such as the groin, nails, trunk etc. For some reason athlete's foot is more common among men (boys, take note) and usually affects the area between the fourth and fifth toes (foot fetishists, take note).

Nail Infections. Can begin as a small yellow or white spot underneath a fingernail, but then spreads. As it burrows deeper and deeper into your toenail or fingernail it can cause discoloration, thickening or crumbling, and can be incredibly painful. In some cases the nail will separate itself from the nail bed and an unpleasant odor can occur. Usually caused by a group of fungi called dermatophytes, sometimes caused by yeasts or molds. Can easily be picked up in swimming pools or other moist, warm places where fungi thrive, much like athlete's foot.

Scalp Ringworm. If your scalp is turning red, crusting and becoming incredibly itchy, there's a chance you have scalp ringworm. This is fairly common among young children, and it's estimated that 50 percent of all child hair loss is caused by this nasty fungus. It occurs primarily in one of three different guises: gray patch ringworm, black dot ringworm and inflammatory ringworm.

Body Ringworm. Also known as tinea corporis. Usually occurs on parts of the body not covered by clothes, such as hands and face. Not as revolting as it seems as it not actually caused by a worm but by - surprise! - a fungus. Gets its name as it can cause a ring-shaped rash with scaly center. Can sometimes be passed on by cats although usually is passed on through human contact. So always wash your hands after stroking a feline... or a human.

Lung fungal infection. Also known as aspergillosis, this fungus thrives in places such as air ducts and compost heaps, then it attacks your lungs. Can be most dangerous to people who have had lung disease in the past and therefore have cavities in their lungs which can become infected. However, this infection can be treated and does not usually spread outside the lung area.

Personal Fungal Infections

Also incredibly common, these fungal infections are the most unpleasant because they infect our most personal areas. They are not to be confused with sexually transmitted diseases, but they can be just as irritating - and sometimes even more so! And as the symptoms so often mimic those of STDs, sometimes it's hard to tell the two apart. That's where a proper diagnosis by a doctor or dermatologist can be so important.

Jock Itch. If you spend too much time flaunting a tight, wet Speedo on the beach in hot weather, chances are you'll develop jock itch. This fungal infection of the groin can attack both men and women, but it more common among the boys. Heat and humidity are the biggest factors contributing to this irritating itch, although wearing tight clothing or being very overweight can play a role as well. Results in nasty red pustules that are uncomfortable and unsightly and require medical treatment. Also called Ringworm of the Groin.

Vaginal Yeast Infections. Caused by an overgrowth of the Candida albicans fungus, this unpleasant infection can cause extreme itchiness around the vagina, as well as an unpleasant discharge and smell as well as occasional pain and burning. It's estimated that ?? of all women have at least one yeast infection in their lives, which can easily spread to sexual partners. Easily treated in the vast majority of cases.

Treatment of Fungal Infections

Most fungal infections are treated with anti-fungal medications, but along with the correct meds you should also wash regularly and keep the affected areas clean and dry. Following a strict skin-care regime is important to avoid re-infection, or infecting others. That means, depending on which type of infection you have, not sharing towels or combs, wearing flip-flops in changing areas or poolside, using an anti-fungal foot spray, wearing clean cotton socks and underwear and changing into clean cotton clothes regularly.

Here, in alphabetical order, are a few of the anti-fungal drugs you may be prescribed; follow the doctor's or dermatologist's instructions, and let them know beforehand if you are taking any other medications:






Terbinafine Hydrochloride

Fungal infections can affect anyone, and if you have a busy, active lifestyle chances are you'll come down with at least one - if not more - at some stage of your life. While fungal infections are never fun, there's no need to suffer in silence, so if you have any of the above symptoms, get thee to a doctor pronto. You'll have a new, fungal-free you in no time!

The information in this article is not intended to substitute for the medical expertise and advice of your health care provider. We encourage you to discuss any decisions about treatment or care with an appropriate health care provider.

Sarah Matthews is a writer for Yodle, a business directory and online advertising company. Find a dermatologist or more personal care articles at Yodle Consumer Guide.


          Download the Latest DotA 6.81d Map - 2014        
The DotA 6.81d map was publicly released in early August, after a very long wait. This release focuses on balancing some key heroes that had an extra edge in the previous map. Interestingly, no new hero or item has been included in the new map; instead, numerous hero, cosmetic, item and gameplay balance changes have been made.

DotA 6.81d
DotA 6.81d Map Download:  
DotA 6.81c Map Download:
DotA 6.81b Map Download:
Download the map file (with .w3x extension) and place it in (Warcraft III\Maps\Download) directory. You must have Warcraft 3 TFT v1.24e or above to play this map.

Last Updated on August 10, 2014

DotA 6.81b/c/d Map Changelogs (Official) 
Judging by the past record, it's clear that fans may well see one more map within a few days. Further progress on the current series is underway, and fans will be able to see the new version in a short period of time. So stay patient for the next update.
          Download the Latest DotA 6.78 AI Map - 2014        
DotA 6.78c AI Download
The DotA v6.78c AI map has been released by the Defense of the Ancients development team. The DotA AI map adds computer-controlled players called bots. With this map, you can play DotA offline without needing a working internet/network connection. DotA 6.78c AI v1.4e contains improved AI item builds, skills and optimizations. If you are a beginner and want to learn the basics of DotA or Dota 2, AI maps are perfect for this purpose.

DotA 6.78c AI v1.4e File Download
Download the map file with the [.w3x] extension. Open your Warcraft III Frozen Throne directory, go to the maps folder and drop the map file there. Just make sure you have the v1.24e or v1.26a patch installed before playing.
DotA v6.78c AI Map Notes
1. All of 6.78c content has been ported.
2. All AI heroes have their skills updated, they should work properly (more or less).
3. Some tweaks to item builds, including item builds for Oracle and Kaolin. More changes were intended but sadly were put down due to lack of time.
4. Name roster for -cn mode has been updated. Now it contains the names of TI3 contestants, instead of TI2.
5. Numerous behavior and logic changes, including a fix for the old and annoying stuck bug that forced all AI heroes to group at one point (now they should get unstuck after 10 seconds or so).
6. AI now uses 2 couriers instead of 1. Originally the number was intended to be 3, but it was decided to use 2 since 3 would be a little unfair.
7. Additional small tweaks like price checks and mana checks for the AI have been applied. These should help the AI manage its gold and mana slightly more efficiently.
8. Now, when an AI hero is level 25 and the XP/Gold bonus mode is active, it will get an extra boost to gold (since bonus XP no longer matters for a level-25 hero).
9. Fixed some minor bugs that were reported. Most of them shouldn’t happen anymore.
10. Some code optimizations were applied. This should make the map slightly faster and more stable.
  • This is the initial release, so expect minor bugs or glitches.
  • DotA 6.79e AI will be released after this map is deemed stable.
  • Post a comment if you encounter error(s) or anything unusual.

          Download Map 6.77c LOD        
DotA 6.77c LoD
The DotA v6.77c LoD map is now released. Legends of DotA is a modified version of IceFrog's DotA in which you play any hero with your desired skill combination, or optionally go random. Currently, this map is only compatible with the AP, AR, SD and MD modes. However, you can enter additional modes for more restrictions/balance. The current version is v6.77c LoD v1b, which brings tons of improvements and balances skills to avoid misuse.

DotA v6.77c LoD v1b Map Download:  

Download the map file (.w3x) and put it in the 'Warcraft 3\Maps\Download' sub-directory.
Tip: You can use AucT Hotkey Tools to avoid hotkey conflicts with other skills during the game. It allows you to change the default hotkeys of spells.

DotA LoD Game Modes:

Enter any of the game modes listed below to start Legends of DotA
-AP (No hero and skill limitation)
-AR (Random hero/skills)
-SD (Few heroes/limited skills)
-MD (Mirrored heroes/spells)
Additional Modes:
-bo Balance Off
-d2 Provides a choice of 20 heroes.
-d3 Provides a choice of 30 heroes.
-d4 Provides a choice of 40 heroes.
-d5 Provides a choice of 50 heroes.
-s5 allows you to pick 5 skills.
-s6 allows you to pick 6 skills (1 extra ultimate and 1 extra normal ability).
-ra Random Abilities, the extra abilities from s5 & s6 are chosen randomly.
-fn Fast Neutrals, the first neutrals spawn right after drafting, the next 30 seconds later, then on the normal 1-minute spawn timer
-ss See Skills, allows you to see the enemy skills while drafting as well
-ab Anti-backdoor
-ul Unlimited levels
-os One Skill, skills can't be picked twice on each side
-ls Limit Skills, you cannot have more than 2 passive skills and more than 2 skills from a single hero.
Screenshots: Dota LoD Map Test, Dota LoD Game Mode, Naix in LoD Dota

In-game commands:

Here's a list of in-game commands that can be used.

-AI displays your team's spells in the scoreboard
-FF only works if mode -ff is entered
-WFF to see who voted to finish fast
-SP # Toggle passive skill display, # is the skill's place number while drafting
-SDDON/-SDDOFF Toggle the system damage display
-ADDTIME Adds 1 minute to the clock when picking skills; can be entered once by each team, for a maximum of 2 extra minutes
-READY During the skill picking phase, chooses your remaining skills randomly
-RANDOM MELEE / -RANDOM RANGE During the hero pick phase, chooses a random melee/ranged hero from your pool

Thanks to ResQ, PEW_PEW and Lordshinjo for modding this map!
          FIFA 13 INTERNAL-RELOADED        

Release Date: 7 Oct 2012
Mirrors: PutLocker | UPaFile | Cyberlocker | BillionUploads
Uploaded | Rapidgator | Turbobit

Free Download PC Game FIFA 2013 Full Version - captures all the drama and unpredictability of real-world football. This year, the game creates a true battle for possession across the entire pitch, and delivers freedom and creativity in attack. Driven by five game-changing innovations that revolutionize artificial intelligence, dribbling, ball control and physical play, FIFA 2013 represents the largest and deepest feature set in the history of the franchise.

  • All-new positioning intelligence infuses attacking players with the ability to analyze plays, and to better position themselves to create new attacking opportunities.
  • Make every touch matter with complete control of the ball. Take on defenders with the freedom to be more creative in attack.
  • A new system eliminates near-perfect control for every player by creating uncertainty when receiving difficult balls.
  • The second generation of the physics engine expands physical play from just collisions to off-the-ball battles, giving defenders more tools to win back possession.
  • Create dangerous and unpredictable free kicks. Position up to three attacking players over the ball and confuse opponents with dummy runs, more passing options, and more elaborate free kicks.
  • Compete for club and country in an expanded Career Mode that now includes internationals. Play for or manage your favorite national team, competing in friendlies, qualifiers and major international tournaments.
  • Learn or master the fundamental skills necessary to compete at FIFA 13 in a competitive new mode. Become a better player, faster, no matter what your skill level. Compete against yourself or friends in 32 mini-games perfecting skills such as passing, dribbling, shooting, crossing and more.
  • Earn rewards, level up, enjoy live Challenges based on real-world soccer events, and connect with friends. Everything within FIFA 13, and against friends, is measured in a meaningful way.
  • Access your Football Club identity and friends, manage your FIFA Ultimate Team, search the live auctions and bid to win new players.
  • 500 officially licensed clubs and more than 15,000 players.

Release NOTE: It is internal because the DRM is bypassed using a loader. The game works, but it's not how we would usually release a crack.

Minimum System Requirements
  • OS: Windows XP/Vista/7
  • Processor: Intel Core 2 Duo @ 2.4 Ghz / AMD Athlon 64 X2 5000+
  • Memory: 2 Gb
  • Video Memory: 512 Mb
  • Video Card: nVidia GeForce 8800 / ATI Radeon HD 2900
  • Sound Card: DirectX Compatible
  • DirectX: 9.0c
  • Keyboard
  • Mouse
  • DVD Rom Drive

Update Link download (05-05-2013)
Mirror via PutLocker
Mirror via UPaFile
Mirror via Cyberlocker
Mirror via BillionUploads
Mirror via Uploaded, Rapidgator, Turbobit
1. Unrar.
2. Burn or mount the image.
3. Install the game.
4. Copy the cracked files from the \Crack directory on the disc to the \Game directory, overwriting the existing exe.
5. Before you start the game, use your firewall to block all exe files in the game's install directory from going online. Run the game setup before starting as well; it can be found in the following directory: \Game\fifasetup
6. Play the game. While in game, avoid all of the online options. If you have Origin installed, it may start up. If that happens, ignore the prompt, play offline, and don't log in.
7. Enjoy!

1. PL, UPa, CL, BU Interchangeable Links
2. Total part: 10 / 700 MB
3. Total file : 6.4 GB

1. UL, RG, TB Interchangeable Links
2. Total part: 7 / 1.00 GB
3. Total file : 6.4 GB

          Mars: War Logs-COGENT        

Mars: War Logs - COGENT
Release Date: 26-04-2013
Language: English
Mirrors: PutLocker | UPaFile | Cyberlocker | BillionUploads

Free download PC game 2013 Mars: War Logs Full Version - In the destroyed world of Mars, two destinies mingle together. Two beings searching for their identity travel across a broken planet, constantly facing bloody political conflicts which tear the old colonies apart. Often divided, sometimes fighting the same enemies, both are the source of the advent of a new era…

Mars: War Logs is a sci-fi action RPG that innovatively merges character development with light, rhythmic combat. It takes you on a journey deep into an original futuristic universe and presents you with scenarios dealing with difference, racism and the environment.

  • Take on the role of Roy Temperance, a multi-talented renegade, and surround yourself with companions with real personalities.
  • Choose from the numerous dialog possibilities and influence the destiny of your people.
  • Personalize your fighting style through a dynamic and developed combat system, for entirely different approaches depending on the choices you make.
  • Personalize your development by choosing from dozens of skills and numerous additional perks!
  • Modify and create your own equipment with our craft system.


Minimum System Requirements
  • OS: Windows XP/Vista/7/8
  • Processor: Intel Core 2 Duo @ 2.2 Ghz / AMD Athlon 64 X2 4600+
  • Memory: 2 Gb
  • Hard Drive: 3 Gb free
  • Video Memory: 512 Mb
  • Video Card: nVidia GeForce 8800 / ATI Radeon HD 3870
  • Sound Card: DirectX Compatible
  • Network: Broadband Internet Connection
  • DirectX: 9.0c
  • Keyboard
  • Mouse

Link download
Mirror via PutLocker
Mirror via UPaFile
Mirror via Cyberlocker
Mirror via BillionUploads

1. Unrar
2. Mount or burn
3. Install
4. Copy contents of Crack Directory to install directory
5. Play the game
6. Support the software developers. If you like this game, BUY IT!

1. PL, UPa, CL, BU Interchangeable Links
2. Total part: 8 / 350 MB
3. Total file : 2.54 GB
          God Mode Update 1-RELOADED        

God Mode Update 1-RELOADED
Release Date: 04-2013
Language: -
Mirrors: PutLocker | UPaFile | Cyberlocker | BillionUploads

The RELOADED group has released the first update for the PC game God Mode. The changelog is below; if you have already downloaded God Mode-RELOADED, go ahead and patch the game for a smoother experience.

  • Desynchronization: Network optimizations were made to stop the desync issue during games which also led to non-progressions.
  • FPS Cap: The FPS cap can now be removed via the game.cfg file. Add the option "LockFps = 0" under the Video section. Note that removing the FPS cap may cause issues depending on system configuration and network quality.
  • Disable voice chat option: You can now disable all incoming sounds via the options while still having the ability to chat via Push to talk.
  • Push to talk voice chat: To enable chat while in-game, parties or a menu press the ` (tilde) button (can be reconfigured).
  • Minigun ammo clip upgrade: Fixed the issue where users were not seeing the clip properly upgraded.
  • Graceful error handling: Better message boxes and debug text appear when issues occur to give developers better visibility.
  • Crash at Voting Screen: A fix was made to the crash that occurred while voting for a map.
  • Crash when connection to Steam is lost: Launching the game when Steam is offline will launch into LAN mode instead of crashing.
  • Alt+Tab Crash: Fix to the crash when users Alt+Tab out of the game.
  • Overall game stability: Optimizations were made to increase game stability.
  • God Mode Executable compatibility: Steam DRM has been altered from Standard to Compatibility. This change will help users experiencing crashes while booting the game.
  • Memory Allocation Fix: Optimizations were made to game memory allocation.
  • Note: We recommend players install the patch before launching God Mode. Not installing the patch could lead to connection compatibility issues. 

Link download
Mirror via PutLocker
Mirror via UPaFile
Mirror via Cyberlocker
Mirror via BillionUploads

1. Unrar God Mode Update 1-RELOADED.rar > Next Unrar rld-godupd1.rar
2. Install the update.
3. Copy over the cracked content from the /Crack directory to your game install directory.
4. We recommend not allowing the game to go online; block it with your firewall. Select LAN from the party game settings.
5. Play the game.
6. Support the software developers. If you like this game, BUY IT!
          Resident Evil 6 Update 4-RELOADED        

Resident Evil 6 Update 4-RELOADED
Release Date: 04-2013
Mirrors: PutLocker | UPaFile | Cyberlocker | BillionUploads

The RELOADED group is back with the latest update for the PC game Resident Evil 6. This fourth update adds new options to the search function and improves SLI performance for NVIDIA graphics cards. If you already have Resident Evil 6, go ahead and grab the files.

New Features
  • Added new options to the session search function
  • Improved SLI performance with NVIDIA graphics cards

Link download
Mirror via PutLocker
Mirror via UPaFile
Mirror via Cyberlocker
Mirror via BillionUploads

1. Unrar.
2. Install the Update. No other updates are needed for this release.
3. Copy over the cracked content from the /Crack directory to your game install directory.
4. Play the game.
5. Support the software developers. If you like this game, BUY IT!
          God Mode-RELOADED        
Release Date: 04-2013
Language: English | German | French | Italian | Spanish | Russian
Mirrors: PutLocker | UPaFile | BillionUploads 

Free download PC game God Mode Full Version - Set in a twisted version of Greek mythology and the afterlife. The player is a descendant of an ancient God whose family has been banished by Hades from Mt. Olympus and turned into mere mortals. To avoid an afterlife of eternal damnation, the player must battle through this purgatory, known as the Maze of Hades, against an army of the underworld.

God Mode combines non-linear gameplay, fast and frantic shooting, hordes of on-screen enemies, and a fully functional online co-op mode. Matches rarely play out the same way, as dozens of in-game modifiers can significantly alter the gameplay on the fly. Characters are fully customizable, both in appearance and equipment, which continually evolve. Gold and experience are constantly accrued and used to unlock satisfying new weaponry and unique, powerful abilities, both of which can be further upgraded.


Minimum System Requirements
  • OS: Windows XP/Vista/7
  • Processor: Intel Core 2 Duo @ 2.0 Ghz / AMD Athlon 64 X2 4200+
  • Memory: 2 Gb
  • Hard Drive: 5 Gb free
  • Video Memory: 512 Mb
  • Video Card: nVidia GeForce 8800 / ATI Radeon HD 2900
  • Sound Card: DirectX Compatible
  • DirectX: 9.0c
  • Keyboard
  • Mouse

Recommended System Requirements
  • OS: Windows XP/Vista/7
  • Processor: Intel Core i5 @ 2.4 GHz / AMD Phenom II X4 @ 2.6 GHz
  • Memory: 3 Gb
  • Hard Drive: 5 Gb free
  • Video Memory: 1 Gb
  • Video Card: nVidia GeForce GTX 460 / ATI Radeon HD 5850
  • Sound Card: DirectX Compatible
  • Network: Broadband Internet Connection
  • DirectX: 9.0c
  • Keyboard
  • Mouse

Link download
Mirror via PutLocker
Mirror via UPaFile
Mirror via BillionUploads

1. Unrar.
2. Burn or mount the image.
3. Install the game.
4. Copy over the cracked content from the /Crack directory on the image to your game install directory.
5. Before you start the game, use your firewall to block the game exe file
from going online.
6. Play the game, change party settings to LAN.
7. Support the software developers. If you like this game, BUY IT!

1. PL, UPa, BU Interchangeable Links
2. Total part: 6 / 300 MB
3. Total file : 1.77 GB


mattn/memo is written in Go and is only about 800 lines, so it is quick to read through, and if something behaves oddly you can fix it right away (I have sent a few small pull requests myself that got merged). I particularly like the following points:

  • Written in Go, so it starts up fast
  • Easy to grep your notes (memo g) or pick a file with peco (memo e)
  • An HTTP server can be started (memo s) that renders the markdown nicely
    • You can customize it to your liking by editing the template file (described below)



You can create a new text file with memo n, but I write everything for a given date into a single text file as a daily report. That makes it important that the day's report file opens within three seconds, so I make it openable instantly from Emacs with M-x memo.

(defun memo ()
  (interactive)
  (find-file
   (concat "~/Dropbox/_posts/" (format-time-string "%Y-%m-%d") "-日報.md")))









<!DOCTYPE html>
    <meta charset="UTF-8">

    <link rel="stylesheet/less" type="text/css" href="/assets/style.less">
    <script src="/assets/less.js" type="text/javascript"></script>

    <link rel="stylesheet" type="text/css" href="/assets/atelier-dune-light.min.css">
    <script src="/assets/highlight.min.js" type="text/javascript"></script>

    <link rel="stylesheet" type="text/css" href="/assets/katex.min.css">
    <script src="/assets/katex.min.js" type="text/javascript"></script>

    <script src="/assets/auto-render.min.js" type="text/javascript"></script>

    <script type="text/javascript">
      document.addEventListener("DOMContentLoaded", () => {
        var katex_opts = {
          displayMode: true
        };
        // Render fenced math blocks ("highlight highlight-math" divs) with KaTeX
        Array.from(document.querySelectorAll("div"), elem => {
          if (elem.className === "highlight highlight-math") {
            var e = elem.querySelector("pre");
            katex.render(e.textContent, e, katex_opts);
          }
        });
        // Render inline $...$ math everywhere else
        renderMathInElement(document.body, {
          delimiters: [
            {left: "$", right: "$", display: false}
          ]
        });
      });
    </script>



package main

import (
    "fmt"
    "io/ioutil"
    "os"
    "path/filepath"
    "regexp"
    "strings"

    "github.com/fatih/color"
    tty "github.com/mattn/go-tty"
)

const (
    memoDir            = "/Users/yasuhisa/Dropbox/_posts"
    emptyContentRegexp = `(?s)^# 日報[\n\s]*$`
)

type File struct {
    Path    string
    Content string
}

func filterMarkdown(files []string) []string {
    var newfiles []string
    for _, file := range files {
        if strings.HasSuffix(file, ".md") {
            newfiles = append(newfiles, file)
        }
    }
    return newfiles
}

func isEmptyContent(content string) bool {
    return regexp.MustCompile(emptyContentRegexp).MatchString(content)
}

func ask(prompt string) (bool, error) {
    fmt.Print(prompt + ": ")
    t, err := tty.Open()
    if err != nil {
        return false, err
    }
    defer t.Close()
    var r rune
    for r == 0 {
        r, err = t.ReadRune()
        if err != nil {
            return false, err
        }
    }
    return r == 'y' || r == 'Y', nil
}

func deleteFile(f File) error {
    color.Red("%s", "Will delete the following entry. Are you sure?")
    fmt.Println("File: " + f.Path)
    answer, err := ask("Are you sure? (y/N)")
    if answer == false || err != nil {
        return err
    }
    answer, err = ask("Really? (y/N)")
    if answer == false || err != nil {
        return err
    }
    err = os.Remove(f.Path)
    if err != nil {
        return err
    }
    color.Yellow("Deleted: %v", f.Path)
    return nil
}

func main() {
    f, _ := os.Open(memoDir)
    defer f.Close()
    files, _ := f.Readdirnames(-1)
    files = filterMarkdown(files)
    for _, file := range files {
        path := filepath.Join(memoDir, file)
        b, _ := ioutil.ReadFile(path)
        content := string(b)
        if isEmptyContent(content) {
            deleteFile(File{path, content})
        }
    }
}

Using memo as a simple markdown previewer


% ln -s ${PWD}/ ~/Dropbox/_posts/


% unlink ~/Dropbox/_posts/


memo s --addr :3456 starts an HTTP server instantly, but after using it for a while even launching it felt like a chore, and I found myself wishing it were just running whenever the PC booted. So I set up supervisor to start memo as a server at boot. Write something like this in /usr/local/share/supervisor/conf.d/memo.conf:


[program:memo]
command=zsh -c 'direnv exec . memo s --addr :3456'


          Can't open books' PDFs by clicking PDF or Path        
Calibre 2.85.1, Linux openSUSE 42.3. Usually I open a document by clicking the "PDF" or "Path" link in the details panel on the right. This does not work and results in an error: File or directory not found: Fehler beim Holen der Informationen für Datei »/home/zzz_servers/thot03/x_database/calibre/Marion%20Wendland/Kommunikation%20-%20Seminar%20(3586)«: Datei oder Verzeichnis nicht gefunden (error getting information for the file: file or directory not found). However, copying the underlying link into Firefox or the file browser does the job. Have the " ... " quotes been forgotten for the path? Has anybody experienced similar behavior? I am grateful for tips! Thanks
          Automated Publishing Pipeline with Org Mode        

I have some files written in Org that I want to publish & upload to a web server every week at a set time. Why? I'd like to be able to view my notes from anywhere, and others may stumble upon them and find them useful. However, I don't want to have to remember to publish & upload every time I make a change to these files, and I want this to occur with as little user interaction as possible.

To accomplish this, I’m going to make use of Emacs Batch Mode. Batch Mode will run Emacs in a non-interactive fashion; you feed it a file or an Elisp function to run, Emacs does its work and then exits without displaying a window. ox-twbs is an Emacs package I’ll be using to export my Org files to Bootstrap-themed HTML.

The project I want to set up will be a couple of files that will act as a log of the things I have accomplished each week, goals for the next week and any associated notes or links I'd like to record. I'm going to create a directory for this project at ~/org/log and create a couple of files: the first will just contain a link to the second, which will contain a descending-order list of the weeks of the year with associated notes, using the following structure:

* Week 11 (March 13 2017)
** Recap
** Next Week
* Week 10 (March 6 2017)
** Recap
** Next Week

By default, Org mode will create a full table of contents for the page, complete with the headlines being numbered. I’d like to avoid this: I only want to see up to the second level headlines in the table of contents and nothing below. Additionally, I don’t want the headlines to be numbered.

Thankfully, I can easily specify all of these settings in one place by defining a project in Org mode. By defining a project, we can easily generate HTML for our project with one command: (org-publish-project "PROJECT_NAME"). A variable called org-publish-project-alist is used to define any number of Org projects one may have.

The first thing I want to do is create a new Elisp file that will be used to import needed packages, define any required variables, and run the necessary function to build the site. Since I use Spacemacs (which has a fairly high amount of modules to load), the only modules I want to load are the bare minimum required to get Org running so Emacs can do its business. Inside ~/.emacs.d/, I create a new file called org-export.el with the following contents:

(require 'org)
(require 'ox)
(require 'ox-publish)
(require 'ox-twbs)

(setq user-full-name "Dale Karp")
(setq user-mail-address "")

(setq org-publish-project-alist
      '(("my_log"
         :base-directory "~/org/log/"
         :publishing-directory "~/projects/site/log"
         :publishing-function org-twbs-publish-to-html
         :section-numbers nil
         :table-of-contents 2)))

(org-publish-project "my_log")

First, I import some packages and set some variables, such as who the author will be (me!). Next, the Org project gets defined within org-publish-project-alist.

I define a single project called “my_log” with some options. Some of these are self-explanatory: :base-directory tells Org where to look for the files - any .org file in this folder will be processed. Since I’ll be publishing this to a subsection of my website, I’ve set the :publishing-directory to the right location. Because I’m using ox-twbs to generate our HTML, I need to tell Org to use it when publishing instead of the built-in HTML exporter. Finally, I let Org know that I don’t want any sections to have numbers, and that I only want our table of contents to have links for level 2 headlines and above.

Now that our publishing pipeline is ready, I need to set up the 'automation' part. I host my site on GitHub Pages, so all I need to do to build my site is create a new Git commit in my site repository after publishing, and push that commit to GitHub. To do all of this, I'll write a shell script! Because I'll be using a systemd timer to run this script at a set interval once a week, I need to set up a few things, like retrieving SSH keys for the session so I can push to GitHub without being prompted for a password.


#!/bin/bash

# load ssh keys from keychain for this session
eval $(keychain --eval --quiet --noask id_rsa)

# call & run our exporter elisp script
emacs --batch -l ~/.emacs.d/org-export.el

# change directory to our repo
cd ~/projects/site

# commit latest changes
git commit -am "Log for: $(date)"

# push commit to get built on gh pages
git push

With the script written, the last thing to do is to use a scheduler to run it every week. I'm using systemd timers to do this, but cron should work as well. I created two files in ~/.config/systemd/user with the following contents:

# mylog-org-export.service
[Unit]
Description=Exports Org project "My_Log" and pushes Git to GH Pages

[Service]
Type=oneshot
# path to the publish script from above; adjust to wherever you saved it
ExecStart=%h/bin/mylog-org-export.sh

# mylog-org-export.timer
[Unit]
Description=Run mylog-org-export.service every Monday at midnight.

[Timer]
OnCalendar=Mon *-*-* 00:00:00
Persistent=true

[Install]
WantedBy=timers.target



Now I can test to make sure our timer works and once verified, enable it:

systemctl --user start mylog-org-export.service
systemctl --user enable mylog-org-export.timer

Verify that the timer is active with systemctl --user list-timers, and we’re good to go!

At this point, I have an automated Org publishing pipeline that will convert my Org files to HTML once a week and then push the new data to GitHub Pages to be built. This approach can be customized fairly easily to publish anything from project documentation to a personal blog. If you have any suggestions on how to improve this or have any alternative methods, please let me know!

          A Boilerplate for WebGL Projects        
  • 2015/12/12: Added source map support

Between Grunt, Gulp, Webpack and Broccoli, it seems like a new build tool/asset pipeline/module bundler/etc arrives in the land of JavaScript every few months. While learning WebGL, I started to look into ways to make starting a new project and development a bit easier. I’ve been using Webpack for work-related purposes over the past few months and I’ve really grown to like its easy-to-read config files, async code splitting abilities and wide plugin ecosystem. “Since it helped make my React-based development smooth as butter,” I thought to myself, “perhaps using Webpack would help to make WebGL development go just as smoothly.”

This post follows the steps and rationale used to build a Webpack-based boilerplate for WebGL. If you just want to look at the code, you can grab the source here.

Getting Started

For this build system, all you need to start with is Node.js + npm. We will use Webpack as our module loader/build tool combo. Another invaluable tool in our arsenal will be the Babel library, which allows the use of ES6+ features. This guide assumes you have basic familiarity with JavaScript and using npm to install modules.

We will start off by installing Webpack globally. I like doing this so I can just use webpack on the command line from anywhere:

$ npm install -g webpack

Webpack is a tool used to bundle files into modules. It has a vast variety of plugins, created by both the Webpack team and the community, that allow modules to be created from both JavaScript and non-JavaScript files. You can even use it in conjunction with tools like Grunt or Gulp! We will use Webpack to transpile our ES6 code into something modern browsers can read using Babel, to load our GLSL shader code from separate files like you would for a desktop application, and finally to bundle all our code into a single JavaScript file (or several).

Let’s create a new directory called webgl_project:

$ mkdir webgl_project
$ cd webgl_project

We will be saving a list of libraries and tools we need to package.json so let’s create one first:

$ npm init -y

Using the -y flag uses default values; we will edit these shortly. Personally, I’d rather edit these fields in a text editor rather than input them into a prompt.

Next, we will install our dependencies required for development, including:

  • webpack: The webpack documentation recommends saving webpack as a local dev dependency
  • webpack-dev-server: Used to watch files & recompile on changes; serves files
  • babel-core: Compiler used to make our ES6 code usable in modern browsers
  • babel-preset-es2015: Since version 6, Babel comes with no settings out of the box. This is a preset to compile ES6.
  • babel-loader: Webpack plugin to transpile JavaScript files before creating modules out of them
  • html-webpack-plugin: A plugin used to auto-generate an HTML file with all webpack modules included.
  • webpack-glsl-loader: Webpack plugin to load our shaders from files into strings

Let’s install these modules all at once:

$ npm install webpack webpack-dev-server babel-core babel-preset-es2015 babel-loader html-webpack-plugin webpack-glsl-loader --save-dev

At this point, I usually like to open package.json and start making some edits. This is what I end up with:

  "name": "webgl-webpack-boilerplate",
  "version": "1.0.0",
  "private": true,
  "description": "a boilerplate webpack project for getting started with WebGL.",
  "scripts": {},
  "author": "Dale Karp <> (",
  "license": "CC-BY-4.0",
  "repository": {
    "type": "git",
    "url": ""
  "devDependencies": {
    "babel-core": "^6.1.19",
    "babel-loader": "^6.1.0",
    "babel-preset-es2015": "^6.1.18",
    "html-webpack-plugin": "^1.7.0",
    "webpack": "^1.12.4",
    "webpack-dev-server": "^1.12.1",
    "webpack-glsl-loader": "^1.0.1"

We’ll come back to this file to add some commands to use to the scripts key. For now, let’s hop back to the terminal.
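As a sketch of where that usually ends up (the exact commands here are my assumption, not something the post specifies), the scripts key might eventually look like:

```json
"scripts": {
  "build": "webpack",
  "start": "webpack-dev-server --inline --hot"
}
```

With that in place, `npm run build` would produce the bundle and `npm start` would serve it with live reloading.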

Project Structure

I wanted the file structure for an initial project to be simple:

├── package.json # lists dependencies for easy re-install
├── src
│   ├── index.html # html to use as template for generated output html
│   ├── js
│   │   └── main.js # entry point for our application
│   └── shaders
│       └── ... # glsl files go here
└── node_modules/ # folder containing our dependencies

This structure can be created with mkdir & touch commands.
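For instance, assuming you are inside the webgl_project directory created earlier, the whole tree can be laid down in two commands:

```shell
# Create the source folders, then the empty starter files.
mkdir -p src/js src/shaders
touch src/index.html src/js/main.js
```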

Now that we have a clear idea of what files are going to be a part of the project, let’s get a Git repository going and set up a .gitignore file:

$ git init
$ touch .gitignore

Open up .gitignore and enter the following:

node_modules/
dist/
Since the contents of these two folders are generated, we shouldn’t commit them to version control.

With that out the way, let us continue by setting up Webpack.

Webpack Configuration

Webpack can be configured a few different ways: with its own configuration files, via the command line with flags, or as a module in Gulp/Grunt. We’ll be using the config file route. Create a file in the root of your project directory called webpack.config.js. Input the following:

var path = require('path');
var HtmlWebpackPlugin = require('html-webpack-plugin');

var ROOT_PATH = path.resolve(__dirname);
var ENTRY_PATH = path.resolve(ROOT_PATH, 'src/js/main.js');
var SRC_PATH = path.resolve(ROOT_PATH, 'src');
var JS_PATH = path.resolve(ROOT_PATH, 'src/js');
var TEMPLATE_PATH = path.resolve(ROOT_PATH, 'src/index.html');
var SHADER_PATH = path.resolve(ROOT_PATH, 'src/shaders');
var BUILD_PATH = path.resolve(ROOT_PATH, 'dist');

var debug = process.env.NODE_ENV !== 'production';

module.exports = {
    entry: ENTRY_PATH,
    plugins: [
        new HtmlWebpackPlugin({
            title: 'WebGL Project Boilerplate',
            template: TEMPLATE_PATH,
            inject: 'body'
        })
    ],
    output: {
        path: BUILD_PATH,
        filename: 'bundle.js'
    },
    resolve: {
        root: [JS_PATH, SRC_PATH]
    },
    module: {
        loaders: [
            {
                test: /\.js$/,
                include: JS_PATH,
                exclude: /(node_modules|bower_components)/,
                loader: 'babel',
                query: {
                    cacheDirectory: true,
                    presets: ['es2015']
                }
            },
            {
                test: /\.glsl$/,
                include: SHADER_PATH,
                loader: 'webpack-glsl'
            }
        ]
    },
    debug: debug,
    devtool: debug ? 'eval-source-map' : 'source-map'
};

Let’s take a look at what’s going on here:

File header

Firstly, we import some modules such as path and our html-webpack-plugin. We also define some constants containing the absolute paths to the folders we will keep various types of files in, along with the entry and output paths.

We also check the environment variable NODE_ENV to determine whether to build production or development bundles.


The JavaScript entry point of our application.


The html-webpack-plugin generates HTML files that already have appropriate script and link tags to bundled JS and CSS bundles. We’ll make use of that plugin with a few options. title is the value of the title tag. template is a path to the HTML file we want to base our index.html off of. Finally, inject allows us to control where our script tags are being created. Valid values are body and head. We will set up our HTML template after configuring Webpack.


Here we define where we want Webpack to place the bundles it creates.


resolve’s root key lets you tell Webpack which folders to search in when importing one file into another. For example, imagine the following file structure:

├── js
│   ├── Component
│   │   └── Dude.js
│   ├── main.js
│   └── Utility
│       └── VectorUtils.js
└── shaders

The contents of js/Utility/VectorUtils.js look something like this:

export function getDotProduct(v1, v2) {
    // gets & returns dot product
}

Without setting the resolve.root property in our Webpack settings, the way to include VectorUtils.js in Dude.js would look something like this:

import { getDotProduct } from '../Utility/VectorUtils.js';

export default class Dude extends Person {
    // use getDotProduct somewhere in here
}

After setting resolve.root to include our src/js path, we can treat that as a root directory that Webpack will search through to find other modules. This lets us write import paths like this:

import { getDotProduct } from 'Utility/VectorUtils.js';

export default class Dude extends Person {
    // use getDotProduct somewhere in here
}

Since we’ve also added src as a root path, we’ll be able to include shaders just by writing something like this:

import boxShader from 'shaders/boxShader_v.glsl';

This makes importing files easy, no matter what directory you happen to be working in.


The loaders key on module tells Webpack which files to load into modules, and how to load them. We’ve defined two loaders: one for JavaScript files, and one for our GLSL shaders. test is a regular expression that selects the files a loader applies to; for a JavaScript loader, we want Webpack to find modules ending with .js. We don’t want Webpack to do anything with our dependencies unless we explicitly import one, so we tell Webpack to exclude them. The loader key is the name of the Webpack plugin used to process the matched files. Since we want to be able to write using ES6 syntax, Babel will handle that responsibility. Because Babel 6 ships with no transforms enabled out of the box, we need to tell it to use the es2015 presets package we installed earlier. Enabling directory caching gives us faster compile times, so of course we enable that too.

Our last loader is for our shader files, ending with .glsl. Here, we simply tell Webpack to use the webpack-glsl loader.


Some loaders will perform optimizations when building bundles depending on whether the debug key is set to true or false.


At the moment, devtool tells Webpack which style of source maps to use. source-map is slower to build and recommended for production use only, so during development we use eval-source-map, which is faster and produces cache-able source maps.

With that, our configuration of Webpack is complete. Not too bad, eh?

Let’s re-open package.json and add some commands to scripts that will make interacting with webpack a bit easier:

    // ...
    "scripts": {
        "build": "NODE_ENV=production webpack --progress --colors",
        "watch": "webpack --progress --colors --watch",
        "dev-server": "webpack-dev-server --progress --colors --inline --hot"
    },
    // ...

Webpack will check for webpack.config.js and use it if found. I’m using flags to show build progress and to give the output a bit more colour. Any key added to the scripts object can be used by running npm run <keyname>. This gives us three easily accessible commands:

  • npm run build: Builds our project into /dist/. Sets NODE_ENV=production so our config tells loaders to build production-mode modules.
  • npm run watch: Builds our project and watches files for changes, re-builds on change.
  • npm run dev-server: Builds our project and watches files for changes, also serves files from a web server. The --inline --hot flags enable Hot Module replacement, which will update your page without any user input. You can read more about HMR in webpack-dev-server’s documentation.

Now we have one thing left to do, and that is to create our index.html template in /src. This is what the contents look like:

<!doctype html>
<html lang="en">
    <head>
        <meta http-equiv="Content-type" content="text/html; charset=utf-8"/>
        <title>{%= o.htmlWebpackPlugin.options.title %}</title>
    </head>
    <body>
        <canvas id="canvas"></canvas>
    </body>
</html>

That weird value used in the title tag allows Webpack to change the page’s title depending on the value of the title key passed to html-webpack-plugin. As you can see, no script tags are needed - Webpack will handle that for us.

And with that, the boilerplate is complete. “But Dale,” you exclaim, “I don’t see the point of this. You made a simple Webpack config, so what?”


My main motivations behind this came from my own experiences trying to learn WebGL. I’ve noticed that many WebGL tutorials will demonstrate shader usage in WebGL in one of two ways:

  1. Write a JavaScript string containing the shader code.
  2. Have a script element with a type like x-shader/x-vertex and grab the element’s content from JavaScript.

Personally, I don’t like either of these solutions. The first one is just ugly, unreadable and difficult to maintain yet most of the WebGL resources I see introduce shaders using this method. The second is slightly better, but I prefer to keep shaders in their own files, where I can use an editor with syntax highlighting to make readability a bit easier.

I completely understand why tutorial writers do this. WebGL is complex, and wrapping your head around everything needed to render a simple triangle in shader-based GL can be a lot to take in at once. Once you get your head around those concepts, though, then what? What’s the best way to deal with shaders moving forward as a project scales? I’m not claiming this boilerplate is anywhere close to the best, but for me it works. Hopefully it does for you too (and if it doesn’t, please open an issue/PR on the GitHub repo and let me know).

Mini-example: Hello Triangle

Let’s go through a short example of how one would potentially use this boilerplate to easily load GLSL shaders. Imagine we have two shaders in our src/shaders directory, called box_vert.glsl and box_frag.glsl, containing our vertex and fragment shaders, respectively. No need for <script> tags or multi-line strings! Just import them when you need them, where you need them.

// import our shaders and store them in variables
import boxVertShaderSource from 'shaders/box_vert.glsl';
import boxFragShaderSource from 'shaders/box_frag.glsl';

const canvas = document.getElementById('canvas');
const gl = canvas.getContext('experimental-webgl');

// ...

// time to create our shader program with a handy function!
// it expects the gl context and strings containing shaders.
let program = createProgramFromGLSL(gl, boxVertShaderSource, boxFragShaderSource);

That’s all!
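createProgramFromGLSL is our own helper, not part of the WebGL API. A minimal sketch of such a helper, assuming we simply want a thrown error on compile or link failure, might look like this:

```javascript
// Hypothetical helper, not part of WebGL itself: compiles one shader
// of the given type and throws with the info log on failure.
function compileShader(gl, type, source) {
    var shader = gl.createShader(type);
    gl.shaderSource(shader, source);
    gl.compileShader(shader);
    if (!gl.getShaderParameter(shader, gl.COMPILE_STATUS)) {
        var log = gl.getShaderInfoLog(shader);
        gl.deleteShader(shader);
        throw new Error('Shader compile failed: ' + log);
    }
    return shader;
}

// Links a vertex + fragment shader pair into a program object.
function createProgramFromGLSL(gl, vertSource, fragSource) {
    var program = gl.createProgram();
    gl.attachShader(program, compileShader(gl, gl.VERTEX_SHADER, vertSource));
    gl.attachShader(program, compileShader(gl, gl.FRAGMENT_SHADER, fragSource));
    gl.linkProgram(program);
    if (!gl.getProgramParameter(program, gl.LINK_STATUS)) {
        throw new Error('Program link failed: ' + gl.getProgramInfoLog(program));
    }
    return program;
}
```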

I hope you found this useful in some way. If you have any suggestions on how to improve the boilerplate, please open an issue on the GitHub repo.


I recently decided that my personal site needed a tune up. No longer happy with the current design, I decided to throw it away completely and to start from scratch. Here are a few things I thought about during the process of re-writing the website.

Choosing a backend

Right off the bat, I knew that I wanted to continue using a static site generator for my homepage. Tools like WordPress and Ghost are interesting but for my use case I found both to be overkill. After taking a look at some of the most popular static site generators1 around and playing around with them, I had a better idea of the current tool landscape at the time.

There are a bunch of static site generators these days written in a variety of languages. I took a look at a few JavaScript-based ones, such as Hexo and Metalsmith - both on opposite sides of the feature list spectrum. Metalsmith is completely barebones to the point where all functionality is added via plugins. This is a pretty neat idea but I was looking for something that was more balanced between features and do-it-yourself mentality. When I was initially researching static site generators, Metalsmith had a lot of issues with plugin version rot as well. The idea of everything being a plugin is pretty neat - as long as these individual plugins are maintained! On the other end of the feature spectrum, Hexo reminds me a lot of Jekyll, but written in JavaScript and with more of an emphasis on blogs. This wasn’t enough to pull me away from Jekyll, a tool I already knew.

On the Ruby end, I took a look at Middleman and Jekyll. Middleman is a great competitor for Jekyll with some neat features. It allows for incremental building of your site, a feature that Jekyll will be lacking until its v3 release. I have not had much of an issue with build times with Jekyll but I can see how a huge site with thousands of pages would not scale well without incremental builds. Middleman also offers more in the way of a default template and such. However, in the end I stuck with Jekyll. Recent additions to Jekyll such as collections make building non-blog and hybrid websites with Jekyll much easier.

Choosing a colour palette

I am not a designer - I think the previous designs of my site show as much. As much as I wish I did, I just don’t have the eye for colour. Working with designers in the past always blew my mind with how easily it came to them. I know I could take the time to learn colour theory (and perhaps I will someday) but at the time I was more focused on redesigning my site quickly, not leisurely learning something new. This made the simple task of choosing a colour palette a long and tedious process. There are a few tools available to help one come up with a colour palette, such as Paletton or Adobe Color CC, formerly Kuler. After fiddling with them for a few days on and off and having no success, I decided just to choose a colour I liked: blue. Once I had a colour chosen for the header, I was able to build the rest of the site off of that.

While I’m pleased with how the header turned out, I still feel as if the rest of the site could use a splash of colour here or there just to make things a bit more appealing. I couldn’t really see any place to fit these accent colours in a way that felt natural. This will probably be something that will need to be iterated on over time.

The Holy Grail that is Flexbox

Flexbox is the best thing to happen to CSS. I’ll scream it from the mountain tops if needed because it’s true. As long as the only browsers your site needs to support are IE11+ or an evergreen browser, you can (mostly2) say goodbye to hacky CSS rules to position elements on a web page.

The great thing about Flexbox is that it’s flexible (yet another surprise!) and will adjust its size across different screen sizes if the developer desires it. For example, the page header makes use of flexbox to make sure it looks great on both large and small screens. All I need to do to get my header to scale the way I wish is use the following CSS rules on the wrapper for my header block:

.nav-wrapper {
    display: flex;
    justify-content: center;
    align-items: center;
}
This sets the wrapper rendering box style to flexbox and centers all the elements along both axes. When the screen width is below a certain size, we can use the following rules to change the direction of the main axis from a row to a reversed column so each header element has room to breathe and appears in the proper order:

@media only screen and (max-width: 600px) {
    .nav-wrapper {
        flex-direction: column-reverse;
        align-items: stretch;
    }
}
I’m using a few more rules to align individual items, but that is the basics of it. Flexbox has enabled me to re-arrange the entire site with just a few CSS rules and I think that is awesome. I’m using it for any part of my page that requires even a tiny bit of element positioning and it’s been very easy to both learn and use. Offloading a lot of the grunt work to the browser while writing a smaller set of CSS rules was definitely a smart move, and I’m excited to see what further revisions to the spec bring.

Consistency between old and new

In the redesign, I decided to make better use of header tags to denote subheadings rather than bolded text. Unfortunately, a few of my older blog posts made liberal use of the old and bold way. I took this opportunity to flex my bash and regex muscles and see if I could come up with an elegant way to fix this in one pass.

My first step was to construct a regular expression that would find all the old headers. The old headers took the form of bold text which, in Markdown, meant the text was surrounded by two asterisks on each end. The header would be the only string of text on the line. With this information, I came up with the following regular expression and used it to find all headers in my _posts folder:

git grep '^\*\*.*\*\*$'

This regular expression looks for any length of characters in between two literal asterisks that bookend the entire line. After testing that the regex worked, I used a combination of the find and sed commands to do a massive search and replace with the following command:

find _posts -type f -exec sed -i '' 's/^\*\*\(.*\)\*\*$/\#\#\# \1/g' {} \;

The above command looks for files in the _posts directory, and for each file runs a sed command asking it to edit files in-place without a backup3. The regular expression now creates a group around the header text so we can use it in the replacement with the proper symbol denoting an h3 tag: three literal pound characters. Now all blog posts with headers will conform to the new style.
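If you want to sanity-check the substitution before unleashing find on the whole _posts folder, you can run sed without -i over a throwaway file:

```shell
# make a sample file with one old-style bold header
printf '**Old Header**\nBody text stays put.\n' > sample.md

# run the same substitution, printing to stdout instead of editing in place
sed 's/^\*\*\(.*\)\*\*$/### \1/' sample.md
# first line becomes: ### Old Header
```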

Design & organization choices

Aside from choosing site colours, I spent the most time thinking about how I wanted to lay out my website. My previous v3 design was bland and didn’t make the best use of available space. I spent a lot of time looking at other Jekyll blogs for some inspiration but I found many of them to be overly busy or overly minimalistic. I would draw layout after layout on paper until I came to the one you see now. Moving the persistent navigation block from the side to the top allows me to take advantage of all the space below: this comes in handy when scaling the site to work on multiple resolutions.

Since the beginning of 2015, I’ve been writing reviews of the games I have finished, just for fun, and I decided to add these reviews to the site. As of the launch of the redesign, only one of the eight reviews I have written is published, but I’ll be editing the others and getting them online shortly. Right now, they are a separate entity from the other blog posts and I’m not sure if this was the best idea. This home page is a place for me to talk about what I love doing. For the past few years, this has solely been about programming. I’d like to expand that to writing about other subjects, but dividing posts up into ‘blogs’ and ‘reviews’ may limit me in the future. I could (and probably will) just treat everything as a blog and use tags or categories to group the related info. Design choices like this are what I spent the most time on during this redesign. Implementing the ideas I come up with is no problem, but deciding whether I’m making the right choice on certain decisions is an area where working on a team definitely benefits.

There are a bunch of other small but useful additions I’ve made. If you haven’t noticed, I tend to go off on tangents mid-paragraph. Making use of footnotes allows me to do this without breaking up the flow of content, and the Kramdown markdown renderer used by Jekyll makes usage of footnotes very easy. Some posts can have a header image, such as this one. If you’re reading this on a small device, you may not see it - I’m currently thinking of a good way to display the header image without making all of the above-the-fold content on a mobile device nothing but headers. Most of the small additions are there to make writing content easier for me and for said content to display correctly, no matter the screen scale.
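For the curious, a Kramdown footnote is just a reference marker plus a definition anywhere in the file:

```markdown
I tend to go off on tangents.[^tangent]

[^tangent]: The tangent lives down here, out of the main flow.
```

Kramdown collects the definitions and renders them at the bottom of the page with back-links, which is exactly what you see in the footnotes below.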

In the end

I’m pretty happy with the layout I ended up with. I find it easier on the eyes than previous designs. Standing here with the project finished, I already have a list of things I’m no longer happy with and will work on for the next iteration. Personal projects like home page redesigns give me an opportunity to experiment and play but if I don’t limit myself and consider the project ‘finished’, I’ll never release it. Do you have any suggestions about the design? Love it? Hate it? Let me know in the comments!

  1. According to StaticGen which ranks generators based off of GitHub stars and forks. 

  2. While much better than the old way of doing things, Flexbox itself is not perfect. Check out Flexbugs for more info. 

  3. Not recommended. 

          Call of Duty Black Ops III Zombies Chronicles RELOADED-3DMGAME Torrent Free Download        

Call of Duty Black Ops III Zombies Chronicles (c) Activision

07/2017 :..... RELEASE.DATE .. PROTECTION .......: Arxan + Steam
1 :.......... DISC(S) .. GAME.TYPE ........: Action, Adventure

Includes 8 remastered classic Zombies maps from Call of Duty: World at War,
Call of Duty: Black Ops and Call of Duty: Black Ops II.  Complete maps from
the original saga in fully remastered HD playable in Call of Duty: Black
Ops III.

Also included: Base game (Update 23), season pass content & The Giant.

Some content will not appear when playing offline (MP, some GobbleGum, etc).

1. Unrar.
2. Mount the image.
3. Install the game.
4. Copy over the cracked content from the /Crack directory on the image to
your game install directory.
5. Play the game.

Torrent Free Download Here

          The Long Dark RELOADED-3DMGAME Torrent Free Download        

The Long Dark (c) Hinterland Studio Inc.

08/2017 :..... RELEASE.DATE .. PROTECTION .......: Steam
1 :.......... DISC(S) .. GAME.TYPE ........: Adventure, Indie, Simulation, Strategy, Early Access

Launching August 1st, 2017

1. Unrar.
2. Burn or mount the image.
3. Install the game.
4. Copy over the cracked content from the /Crack directory on the image to
your game install directory.
5. Play the game.
6. Support the software developers. If you like this game, BUY IT!

OS: Windows XP
Processor: Dual-Core Intel i5 CPU @ 2GHz+
Memory: 4 GB RAM
Graphics: Intel 4xxx Series w/ 512MB VRAM or better
Storage: 1 GB available space
Sound Card: Any on-board chip will work.
Additional Notes: The game is in an Alpha state and is constantly being expanded and optimized. System requirements are subject to change until the game ships.

Torrent Free Download Here
          Sundered RELOADED-3DMGAME Torrent Free Download        

Sundered (c) Thunder Lotus Games

07/2017 :..... RELEASE.DATE .. PROTECTION .......: Steam
1 :.......... DISC(S) .. GAME.TYPE ........: Action, Adventure, Indie

Resist or Embrace

1. Unrar.
2. Burn or mount the image.
3. Install the game.
4. Copy over the cracked content from the /Crack directory on the image to
your game install directory.
5. Play the game.

Torrent Free Download Here

Crackfix Torrent Free Download Here

          Car Mechanic Simulator 2018 RELOADED-3DMGAME Torrent Free Download        

Car Mechanic Simulator 2018 (c) PlayWay S.A.

07/2017 :..... RELEASE.DATE .. PROTECTION .......: Steam
1 :.......... DISC(S) .. GAME.TYPE ........: Racing, Simulation

Build and expand your repair service empire in this incredibly detailed and
highly realistic simulation game, where attention to car detail is
astonishing. Find classic, unique cars in the new Barn Find module and
Junkyard module. You can even add your self-made car in the Car Editor.

1. Unrar.
2. Burn or mount the image.
3. Install the game.
4. Copy over the cracked content from the /Crack directory on the image to
your game install directory.
5. Play the game.

Torrent Free Download Here

          Dying Light The Following Enhanced Edition Reinforcements RELOADED-3DMGAME Torrent Free Download        

Dying Light The Following Enhanced Edition: Reinforcements
(c) Techland

07/2017 :..... RELEASE.DATE .. PROTECTION .......: Steam
1 :.......... DISC(S) .. GAME.TYPE ........: Action, RPG

Experience the untold chapter of Kyle Crane's story set in a vast region
outside the city of Harran. Leave the urban area behind and explore a
dangerous countryside packed with mysterious characters, deadly new
weapons, and unexpected quests. Gain the trust of the locals and infiltrate
a centuries-old cult that hides a dangerous secret. Take the wheel of a
fully customizable dirt buggy, smear your tires with zombie blood, and
experience Dying Light's creative brutality in high gear.

Dying Light is a first-person, action survival game set in a vast open
world. Roam a city devastated by a mysterious epidemic, scavenging for
supplies and crafting weapons to help defeat the hordes of flesh-hungry
enemies the plague has created. At night, beware the Infected as they grow
in strength and even more lethal nocturnal predators leave their nests to
feed on their prey.


Reinforcements is the name of the DLC included in the Content Drop #0
update. This standalone pack includes the main game and all current DLC.
The upcoming content drops will come in separate Update/DLC packs.

1. Unrar.
2. Burn or mount the image.
3. Install the game.
4. Copy over the cracked content from the /Crack directory on the image to
your game install directory.
5. Play the game.

Torrent Free Download Here

          NBA Playgrounds v1.3 RELOADED-3DMGAME Torrent Free Download        

NBA Playgrounds v1.3 (c) Mad Dog Games, LLC

07/2017 :..... RELEASE.DATE .. PROTECTION .......: Steam
1 :.......... DISC(S) .. GAME.TYPE ........: Sports

Classic NBA arcade action is back! Take your 'A' game to the playground and
beat the best in high-flying 2-on-2 basketball action. Practice your skills
offline, play with up to three others and take your talents online to
posterize your opponents with acrobatic jams and ridiculous displays of skill.


They again added loads of new content, like new players ( > 30), a new play mode
(3 point contest) and much more. Several changes and fixes as well.
For a complete list check:

1. Unrar.
2. Burn or mount the image.
3. Install the game.
4. Copy over the cracked content from the /Crack directory on the image to
your game install directory.
5. Play the game.

Torrent Free Download Here

          Mafia III Sign of the Times RELOADED-3DMGAME Torrent Free Download        

Mafia III: Sign of the Times (c) 2K

07/2017 :..... RELEASE.DATE .. PROTECTION .......: Arxan and Steam
1 :.......... DISC(S) .. GAME.TYPE ........: Action, Adventure

It's 1968 and after years of combat in Vietnam, Lincoln Clay knows this
truth: family isn't who you're born with, it's who you die for.

A string of ritualistic killings has New Bordeaux on the edge of terror.
At Father James' request, Lincoln agrees to hunt down the cult responsible,
a quest that will take him from the dark heart of the old bayou to the drug
ridden counterculture of the inner city.

1. Unrar.
2. Mount the image.
3. Install the game. Crank it the new RLD tune until your dad threatens you
for listening to ethnic music.
4. Copy over the cracked content from the /Crack directory on the image to
your game install directory.
5. Play the game.

Torrent Free Download Here

          Tank Warfare Tunisia 1943 Longstop Hill RELOADED-3DMGAME Torrent Free Download        

Tank Warfare Tunisia 1943 Longstop Hill (c) Strategy First Inc.

07/2017 :..... RELEASE.DATE .. PROTECTION .......: Steam
1 :.......... DISC(S) .. GAME.TYPE ........: Violent, Gore, Simulation, Strategy

Tank Warfare: Tunisia 1943 - tactical battalion level combat simulation.
Continuation of Graviteam Tactics series on the Western Front.

The British spring offensive in Tunisia was brought to a halt on the
approaches to Djebel el Ahmera (Longstop Hill) by the German 5th Panzer Army.
German forces took defensive positions on key heights along the roadway
leading to Tunisia's capital. Despite the fierce resistance of the enemy,
British 78th "Battleaxe" Division units supported by the Churchill tanks of
the North Irish Horse regiment are advancing along the roadway.

The game is made standalone and includes all previous updates!

1. Unrar.
2. Burn or mount the image.
3. Install the game.
4. Copy over the cracked content from the /Crack directory on the image to
your game install directory.
5. Play the game.

Torrent Free Download Here

          MSBuild Multi-Targeting in SharpDevelop        

SharpDevelop has had multi-targeting support for a long time - for example, SharpDevelop 2.0 supported targeting .NET 1.0, 1.1 and 2.0. Our original multi-targeting implementation would not only change the target framework, but also use the matching C# compiler version*.

When Visual Studio 2008 and MSBuild 3.5 came along and introduced official multi-targeting support, we separated the 'target framework' and 'compiler version' settings. The 'target framework' setting uses the <TargetFrameworkVersion> MSBuild property, which is the official multi-targeting support as in Visual Studio 2008. The 'compiler version' setting determines the MSBuild ToolsVersion, which controls the version of the C# compiler to use - Visual Studio does not have this feature.

I'll call the latter feature MSBuild Multi-Targeting, as this allows us to pick the MSBuild version to use, and thus enables SharpDevelop to open and edit VS 2005 or 2008 projects without having to upgrade them to the VS 2010 project format.
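Concretely, the two settings end up in different places in the project file: TargetFrameworkVersion is a regular MSBuild property, while the ToolsVersion is an attribute on the root Project element. A trimmed-down sketch of a VS 2008-era project that targets .NET 2.0:

```xml
<!-- ToolsVersion selects the MSBuild/compiler toolchain;
     TargetFrameworkVersion selects the framework to compile against. -->
<Project ToolsVersion="3.5" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <PropertyGroup>
    <TargetFrameworkVersion>v2.0</TargetFrameworkVersion>
  </PropertyGroup>
</Project>
```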

Unfortunately, life isn't as simple as that. It turns out that MSBuild 4.0 is unable to compile projects with a ToolsVersion lower than 4.0 if the Windows SDK 7.1 is not installed. To allow users to use SharpDevelop without downloading the Windows SDK, we implemented a simple fix: we use MSBuild 3.5 to compile projects with a ToolsVersion of 2.0 or 3.5. This is why SharpDevelop ships with both "ICSharp­Code.Sharp­Develop.Build­Worker40.exe" and "ICSharp­Code.Sharp­Develop.Build­Worker35.exe".

Now what happens if SharpDevelop is run on a machine without .NET 3.5? If the framework specified by the ToolsVersion was missing, SharpDevelop crashed with an MSBuild error when opening the project. There were also crashes when creating/upgrading projects to missing ToolsVersions. Moreover, in the rare scenario where .NET 2.0 and .NET 4.0 are installed but .NET 3.5 is missing, SharpDevelop was able to open the project, but the build worker would crash when trying to compile.

For this reason, the SharpDevelop 4.0 and 4.1 setups require both .NET 3.5 and .NET 4.0 to be installed. This wasn't an issue when we made that decision - .NET 3.5 was likely to be already installed on most machines. However, Windows 8 will change that - .NET 4.5 is installed by default, but .NET 3.5 is missing. So we added the necessary error handling to SharpDevelop 4.2. The SharpDevelop 4.2 setup no longer requires .NET 3.5 - you'll need it only when targeting .NET 2.0/3.0 or 3.5.

Another issue is that .NET 4.0 does not ship with the Reference Assemblies - you need to install the Windows SDK to get those. This causes MSBuild to reference the assemblies in the GAC instead, which might be a later version (due to installed service packs or in-place upgrades like .NET 4.5), and also emit massive amounts of warnings (one warning per reference). Moreover, it caused the 'Copy Local' flag to default to true for references to .NET assemblies, causing System.dll etc. to be copied into the output directory.

At the time, the reference assemblies were only available as part of Visual Studio 2010 - the free Windows SDK 7.1 was released later. So it was a high priority for us to work around this problem. For this reason, SharpDevelop injects a custom MSBuild .targets file into the project being built: SharpDevelop.TargetingPack.targets. This file runs a simple custom MSBuild task that detects references to default .NET assemblies and sets the 'Copy Local' flag to false. (we also inject several other custom .targets files; for example for running FxCop or StyleCop as part of a build)

We used Microsoft.Build.Utilities.dll when implementing this custom task. However, that library ships only with .NET 2.0, not with .NET 4.0, so we had to switch to Microsoft.Build.Utilities.v4.0.dll to get the C# 4.0 build working without .NET 2.0. This should not be a problem, as the copy-local workaround is only included when targeting .NET 4.0 or higher, so we won't try to load it in the 3.5 build worker process.


To summarize, the SharpDevelop 4.2 setup requires:

  • Windows XP SP2 or higher
  • .NET 4.0 Full (.NET 4.5 Full will also work)
  • VC++ 2008 runtime (part of .NET 3.5 so most people have it already)
  • In the minimal configuration, you can only compile for .NET 4.0 using MSBuild 4.0.


  • If .NET 4.5 is installed, the C# 5 compiler will replace the C# 4 compiler; and .NET 4.5 will appear as an additional target framework.
  • If .NET 3.5 SP1 is installed, you will be able to use .NET 2.0/3.0/3.5 as target framework, and C# 2 and C# 3 as compiler versions.
  • Installing the Windows SDK 7.1 is highly recommended (provides reference assemblies and documentation for code completion).
  • Some SharpDevelop features might require installation of additional tools such as FxCop, StyleCop, F#, TortoiseSVN, SHFB.

* Everything said about the C# compiler in this post also applies to the VB compiler.

          Where to put the file on Jenkins        
TL;DR: Put your file in the /var/lib/jenkins/.gradle/ directory on a standard Ubuntu install. Sometimes you need to have a file that is not included in your git repository. Many times this is because you need to have an API key that you would not like to be made public. Because your file …
          Commented Unassigned: Two critical data communication bugs [408]        
Hi, I use `System.Net.FtpClient` assembly to download/upload a bunch of files to FTP from *single* thread (all timeout are default). I shortly reached the server connection limit (20 pcs) if `EnableThreadSafeDataConnections=true`. I investigated the assembly code and found that `OpenRead`/`OpenWrite` don't close duplicated connections and they alive for timeout (see `ftpClient` variable):
public virtual Stream OpenWrite(string path, FtpDataType type)
{
    FtpDataStream ftpDataStream = (FtpDataStream) null;
    lock (this.m_lock)
    {
        System.Net.FtpClient.FtpClient ftpClient;
        if (this.m_threadSafeDataChannels)
            ftpClient = this.CloneConnection();
        else
            ftpClient = this;
        long fileSize = ftpClient.GetFileSize(path);
        ftpDataStream = ftpClient.OpenDataStream(string.Format("STOR {0}", (object) path.GetFtpPath()), 0L);
        if (fileSize > 0L)
        {
            // (body elided in this decompiled excerpt)
        }
        if (ftpDataStream != null)
        {
            // (body elided in this decompiled excerpt)
        }
        return (Stream) ftpDataStream;
    }
}
So, I set `EnableThreadSafeDataConnections=false` and ... got strange errors. My investigation shows that the replies from the FTP server get mixed up: `DirectoryExists`/`CreateDirectory` sometimes throw the error "550 Could not get file size", `TYPE I` is not applied for binary uploads (I got corrupted binary files), and so on. The problem is the double reply to data transfer commands (for example `STOR`/`RETR`): "150 Ok to send data" followed by "226 Transfer complete". Your code doesn't expect the second reply, so subsequent commands read the previous replies! I expect the bug affects any other FTP commands where data connections are used.
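To see why the second reply matters, here is a minimal Python sketch of RFC 959 reply parsing (`read_ftp_reply` is a hypothetical helper for illustration, not the library's code):

```python
import io

def read_ftp_reply(ctrl):
    """Read one complete FTP reply from a control channel and return its
    3-digit code. Handles multi-line replies ("123-..." until "123 ...")."""
    line = ctrl.readline()
    code = line[:3]
    if len(line) > 3 and line[3] == "-":  # multi-line reply
        while not (line[:3] == code and line[3:4] == " "):
            line = ctrl.readline()
    return code

# A data-transfer command such as STOR produces TWO replies on the control
# channel: a preliminary 150, then a final 226 once the transfer completes.
ctrl = io.StringIO("150 Ok to send data\r\n226 Transfer complete\r\n")
print(read_ftp_reply(ctrl))  # 150
print(read_ftp_reply(ctrl))  # 226
```

If a client reads only the 150 and issues its next command, that command will consume the stale 226 as its own reply, and every reply after that is off by one.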

Could you please fix both critical bugs?

Comments: The idiot is the person who says that word to another, unknown person. Thanks for the redirect. Google can't match `FluentFTP` and `System.Net.FtpClient`...
          Commented Unassigned: FileExists performance issue [409]        
Hi, could you please use the `SIZE` FTP command when the `SIZE` feature is supported by the FTP server? Currently you use `GetListing`, and it works very slowly because a new data connection is created every time:
public bool FileExists(string path, FtpListOption options)
{
    string ftpDirectoryName = path.GetFtpDirectoryName();
    lock (this.m_lock)
    {
        if (!this.DirectoryExists(ftpDirectoryName))
            return false;
        foreach (FtpListItem ftpListItem in this.GetListing(ftpDirectoryName, options))
        {
            if (ftpListItem.Type == FtpFileSystemObjectType.File && ftpListItem.Name == path.GetFtpFileName())
                return true;
        }
        return false;
    }
}
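As an illustration of the suggested approach, here is a Python sketch against an ftplib-style `size()` method (`file_exists_via_size` and `FakeConn` are hypothetical names; the real fix would live in the C# client):

```python
def file_exists_via_size(conn, path):
    """Probe existence with the SIZE command: one control-channel round
    trip, no data connection, no full directory listing."""
    try:
        return conn.size(path) is not None
    except Exception:  # e.g. "550 Could not get file size" for a missing file
        return False

# Stand-in connection object for demonstration (hypothetical):
class FakeConn:
    files = {"/pub/readme.txt": 1024}
    def size(self, path):
        try:
            return self.files[path]
        except KeyError:
            raise OSError("550 Could not get file size")

conn = FakeConn()
print(file_exists_via_size(conn, "/pub/readme.txt"))   # True
print(file_exists_via_size(conn, "/pub/missing.txt"))  # False
```

Compared with listing the whole directory, this does no data-connection setup at all, which is exactly the cost the report complains about.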

Comments: The idiot is the person who says that word to another, unknown person. Thanks for the redirect. Google can't match `FluentFTP` and `System.Net.FtpClient`...
          Thing #9: Useful Library-Related Blogs and News Feeds        
Now that you have an RSS reader (your Google Reader or Bloglines account), you can begin adding other blog feeds that interest you. Technorati, a blog tracking site, reports that they are currently tracking 133 million blogs. Out of the millions of blogs available, how do you find the ones that are of most value to you? There are several resources that you can use.

First, read this post from The Cool Cat Teacher blog for some great suggestions on how to select good RSS feeds: How to Create Your Circle of the Wise.
Next, explore some other options for locating appropriate RSS feeds.

Discovery Resources:
When visiting your favorite websites, look for RSS feed icons that indicate the website provides a feed. Often a feed icon will be displayed somewhere in the navigation bar of the site.

Google Blog Search - See what appears when you search "Library2Play" or "Spring Branch ISD".

Use Bloglines' Search tool - Bloglines' recently expanded search tool lets you search for news feeds in addition to posts, citations and the web. Use the Search for Feeds option to locate RSS feeds you might be interested in.

Consider Edublogs' award winners. Each of the winners and the other nominees in each category have blogrolls containing useful, helpful and often highly respected representative blogs that could meet your needs. Click a category and go see what is there!

Other Search tools that can help you find feeds:

  • School Library Blogs on Suprglu - this site offers a selection of postings from lots of different blogs by school librarians. Click on the link under each post to visit the actual blog.
  • This search tool allows you to locate recent newsfeed items based upon keyword or phrase searching. The tool focuses specifically on news and media outlet RSS feeds for information, not weblogs.
  • Syndic8 - an open directory of RSS feeds that contains thousands of feeds that users have submitted.

Technorati - Technorati is a popular blog-finding tool that lets you search for blogs. Since RSS feeds are inherent to all blogging tools, Technorati Blog Search can help you find RSS feeds for topic-specific blogs you may be interested in. Additional resource: Technorati tutorial on finding and adding your blog.

Atomic Learning has lots of video clips on RSS feeds. Type "feeds" into the search box for the basics. If you want more, type "RSS" into the search box. (requires SBISD password info.)

Discovery Exercise:

Explore some of the search tools noted above that can help you locate some RSS feeds.

Add any pertinent feeds to your RSS reader.

Create a blog post about your experience that answers these questions:

  • Which Search tool was the easiest for you?

  • Which was more confusing?

  • What kind of useful feeds did you find in your travels? Or what kind of unusual ones did you find?

EXTRA STUFF -- Feed icon information:

In February of 2006, the adoption of a standard feed icon among websites and browsers finally began to stop the madness and confusion caused by so many variations. So far this icon has been adopted by many websites and browsers, including Opera and Firefox, where it displays in the address bar:

Internet Explorer 7 has something like this as well. For more information about this emerging new standard, see

          Thing #14: Technorati and How Tags Work        
Now that you have been working with blogs for a while, it is time to look at a search engine that is specifically for searching blogs for their content: Technorati, a portmanteau (or morphing) of technology and literati, or intellectuals. As of August 2007, it had indexed over 84 million blogs.

There are a lot of features in Technorati including the capability to search for keywords in blog posts, search for entire blog posts that have been tagged with a certain keyword, limit a search by language, or search for blogs that have been registered and tagged as whole blogs about a certain subject (like book reviews or libraries).

Background information:
1. View the blog from Technorati. View the tags used to categorize the information included in the posts.
2. Watch this video of the leadership at Technorati talking about their "product."

3. Read this blog post that discusses tags and tagging in tools like Technorati, and the effect it is having on advertisers.

Discovery activities:
1. Take a look at Technorati and try doing a keyword search for “School Library Learning 2.0” in Blog posts, in tags and in the Blog Directory. Are the results different?
2. Explore popular blogs, searches and tags. Is anything interesting or surprising in your results?
3. Create a blog post for Thing #14 and express your thoughts regarding how Technorati and its features could assist you. Since you have now looked at several tools that use tagging (Technorati & Flickr), add your thoughts about the value of tagging information.

1. Register and claim your blog. It will increase the traffic that visits your blog.
2. Explore the various Technorati widgets that you could add to your blog.

Tag was a fun childhood game...hope tagging has now become a fun "learning" tool!

P.S. Did you realize you are 2/3 of the way through the 23 Things? Yippee!!
Have you taken a look at some of the other Players' blogs, read some of their posts, and commented? Please be sure you comment to some of the thoughts expressed by your fellow Players...commenting is an important part of the interactive web world!

          New Post: FileExists sporadically fails        
I am trying to move to using netftp in my Keepass2Android app but things are not working as expected.

Running the code below, it seems like even though "IsConnected" returns true, a directly following call to FileExists() calls Connect (which means the connection is lost exactly between the calls?). However, as Connect() can fail every now and then, this also results in a failing FileExists() (where failing means it throws Connection refused).

Is there anything wrong with my code? Is this something to be expected, i.e. should I be prepared to retry everything I do with an FtpClient? Is there any flag to make it retry automatically? I have created my own retry in my GetClient method (which calls Connect() in a retry loop).

Thanks for any help or suggestion!

private static T DoInRetryLoop<T>(Func<T> func)
{
    double timeout = 30.0;
    double timePerRequest = 1.0;
    var startTime = DateTime.Now;
    while (true)
    {
        var attemptStartTime = DateTime.Now;
        try
        {
            return func();
        }
        catch (System.Net.Sockets.SocketException e)
        {
            // give up if the error is not "connection refused" (10061)
            // or the overall timeout has elapsed
            if ((e.ErrorCode != 10061) || (DateTime.Now > startTime.AddSeconds(timeout)))
                throw;
            double secondsSinceAttemptStart = (DateTime.Now - attemptStartTime).TotalSeconds;
            if (secondsSinceAttemptStart < timePerRequest)
                Thread.Sleep(TimeSpan.FromSeconds(timePerRequest - secondsSinceAttemptStart));
        }
    }
}
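The retry-with-timeout pattern in DoInRetryLoop can be sketched generically; here is a Python illustration (names and defaults are mine, not from the library):

```python
import time

def retry(func, timeout=30.0, min_interval=1.0,
          retriable=(ConnectionRefusedError,)):
    """Keep calling func until it succeeds or the timeout elapses,
    pacing attempts at least min_interval seconds apart.
    Non-retriable exceptions propagate immediately."""
    deadline = time.monotonic() + timeout
    while True:
        start = time.monotonic()
        try:
            return func()
        except retriable:
            if time.monotonic() > deadline:
                raise
            elapsed = time.monotonic() - start
            if elapsed < min_interval:
                time.sleep(min_interval - elapsed)

# demo: fails twice, then succeeds
attempts = {"n": 0}
def flaky():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise ConnectionRefusedError
    return "ok"

print(retry(flaky, timeout=5.0, min_interval=0.01))  # ok
```

The key design points mirror the C# above: only retry the specific "connection refused" failure, bound the total wall-clock time, and pace attempts rather than hammering the server.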

internal FtpClient GetClient(IOConnectionInfo ioc)
{
    FtpClient client = new FtpClient();
    if ((ioc.UserName.Length > 0) || (ioc.Password.Length > 0))
        client.Credentials = new NetworkCredential(ioc.UserName, ioc.Password);
    else
        client.Credentials = new NetworkCredential("anonymous", ""); //TODO TEST

    Uri uri = IocPathToUri(ioc.Path);
    client.Host = uri.Host;
    if (!uri.IsDefaultPort) //TODO test
        client.Port = uri.Port;
    client.EnableThreadSafeDataConnections = false;

    client.EncryptionMode = ConnectionSettings.FromIoc(ioc).EncryptionMode;

    Func<FtpClient> connect = () =>
    {
        client.Connect();
        return client;
    };
    return DoInRetryLoop(connect);
}


string myPath = ..;
string myTempPath = myPath+".tmp";

_client = GetClient(_ioc, false);
var _stream = _client.OpenWrite(myTempPath);

//write to stream

Android.Util.Log.Debug("NETFTP", "connected: " + _client.IsConnected.ToString()); //always outputs true

if (_client.FileExists(myPath)) //sporadically throws, see below
System.Net.Sockets.SocketException : Connection refused
          at System.Net.Sockets.SocketAsyncResult.CheckIfThrowDelayedException () [0x00017] in /Users/builder/data/lanes/3540/1cf254db/source/mono/mcs/class/System/System.Net.Sockets/SocketAsyncResult.cs:127
          at System.Net.Sockets.Socket.EndConnect (IAsyncResult result) [0x0002f] in /Users/builder/data/lanes/3540/1cf254db/source/mono/mcs/class/System/System.Net.Sockets/Socket.cs:1593
          at System.Net.FtpClient.FtpSocketStream.Connect (System.String host, Int32 port, FtpIpVersion ipVersions) [0x0011a] in [my source folder]src
          at (wrapper remoting-invoke-with-check) System.Net.FtpClient.FtpSocketStream:Connect (string,int,System.Net.FtpClient.FtpIpVersion)
          at System.Net.FtpClient.FtpClient.Connect () [0x000ce] in [my source folder]src
          at System.Net.FtpClient.FtpClient.Execute (System.String command) [0x00136] in [my source folder]src
          at System.Net.FtpClient.FtpClient.Execute (System.String command, System.Object[] args) [0x00001] in [my source folder]src
          at System.Net.FtpClient.FtpClient.DirectoryExists (System.String path) [0x0005d] in [my source folder]src
          at System.Net.FtpClient.FtpClient.FileExists (System.String path, FtpListOption options) [0x0001c] in [my source folder]src
          at System.Net.FtpClient.FtpClient.FileExists (System.String path) [0x00001] in [my source folder]src

          Summer Break        
Beginning mid-April, I removed my books from all venues. I needed a break and am completely enjoying my time.

For literary kicks check out Arts & Letters Daily; it's heavy on brain food.

Can't decide on a good book? NPR has suggestions for your summer read plus 14 book related podcasts

An all-time favorite, Garrison Keillor's The Writer's Almanac is a beautiful daily podcast lasting merely minutes. Treat yourself!

I sincerely hope you are enjoying your days!

New Member Pictorial Directory: 110th Congress. Get your pictures of the new members of Congress. There is also a link for the whole Congress.
          Launching a New Blog        

For more than eight years, Marriage Equality Watch was the blog for the Purple Unions Wedding Directory. But this month, we are launching a brand new blog, celebrating the weddings of our friends in the LGBTQIA community. You can find it here: And we’re posting our own wedding first. We’re thrilled to have marriage […]

The post Launching a New Blog appeared first on Marriage Equality Watch.

          Work from Home Senior Active Directory Engineer in Jacksonville        
A healthcare company is seeking a Work from Home Senior Active Directory Engineer in Jacksonville.

Core responsibilities of this position include:

  • Analyzing complex business and competitive issues
  • Evaluating the applicability of leading edge technologies
  • Designing projects with broad implications for the business and/or the future architecture

Applicants must meet the following qualifications:

  • Bachelor's degree preferred, or equivalent experience
  • 8 years of experience managing large-scale Active Directory environments
  • Experience configuring and managing an O365 environment
          UNI goes mobile!        
Mobile app

University of Northern Iowa students, faculty and staff now have the ability to take UNI with them wherever they go... on their smartphones! The MyUNI mobile app is now available for the iPhone and iPod Touch, Android phones and other phones, such as Windows phones and Blackberry, via mobile Web.

A small team of UNI staff worked before and during winter break to get the app ready for its debut as students return to campus. "This is a purchased framework so we weren't starting at ground zero," said Kevan Forest, associate director of database & application administration. "We had a good starting point from which to work and build upon."

MyUNI, which includes a custom designed user interface, is a great way for students, faculty, staff, alumni and the community to interact with UNI's campus. So far, more than 800 people have downloaded the app.

Features of MyUNI include:

  • An eLearning app. Access your online course materials through the eLearning app from Blackboard.

  • Follow your favorite Panther sports teams with the UNI Athletics app.

  • Use the Directory app to find contact information for students, faculty and staff.

  • Discover what's cookin' at Piazza, Prexy's, 23rd St. Market and other UNI dining centers with the Menus app.

  • Find out what's happening at UNI through the news feed.

  • Send suggestions for MyUNI via a Feedback Form.

MyUNI has something for everyone. "One of the coolest items I think is the augmented reality available in the maps app," said Forest. "For iPhones and iPads, when in maps, if you click on the arrow in the left corner, it shows you where you are with an orange circle that says 'current location.' If you press the orange circle and hold the device perpendicular to the ground, it turns on the camera and includes red pins that show what buildings are in the direction your device is facing and how far away they are."

While MyUNI has a lot to offer, the team is always looking to the future and ways to improve the app – they hope to develop a campus tour app and transit app in the near future.

"We're looking at student feedback for things to include," said Forest. "We really want to have some authenticated content, such as allowing students to complete their registration via the app. We're looking for high value, high impact pieces, so let us know what would help the most."

For more information, FAQs and to download MyUNI, visit

          New! OneOpinion UK        
OneOpinion UK Panel Highlights: Take online surveys in exchange for cash payments Participate in fun product testing opportunities that will allow you to preview great new products Earn points every time you complete a survey The PaidSurveysUK directory now includes OneOpinion UK. Participate in surveys and product testing Voice your opinion by participating in online […]
          New! Univox UK        
Univox Community UK Panel Highlights: Receive 500 points (£2.50) credited to your account just for joining Get paid by PayPal, or choose prepaid Visa cards instead Earn more points the longer you’re a member, through Univox’s loyalty program The PaidSurveysUK directory now includes Univox UK. Join Univox UK to immediately get 500 points […]
          New! OpinionPLUS UK        
OpinionPLUS UK – super straightforward! Panel Highlights: Complete online surveys to receive cash for every survey you complete Join, complete the profile survey and receive a £1 bonus! Automatically receive PayPal payments without having to request them The PaidSurveysUK directory now includes OpinionPLUS. OpinionPLUS pays cash (in the form of points) for each successfully completed […]
          New! Pinecone Research        
Pinecone Research – a solid choice Panel Highlights: Earn £3 for every online survey you complete Get paid to your PayPal account, by cheque, or with a gift card – it’s your choice Partake in occasional product testing opportunities from home Pinecone Research UK has just been added to the PaidSurveysUK directory. Super reputable Pinecone […]
          ROCKER CHICK -FTU        

Rocker Chick-FTU
The tubes are included in the scrapkit, Rock Star, by Kittz Kreationz and
can be found here-

The mask I used is Vix_BigMask005 and you can download it and the font I used-
Heavy Heap- here:

Put the mask in your PSP Masks folder.
Install the font in your font folder.
(There are 2 ways in which this can be accomplished. The first is a simple one:
just copy the font to the Fonts directory located at C:\Windows\Fonts, which will
install it for the system to use. The other method is to click on the font file
at its current location. It will then show a screen with the font preview,
where you will see the button to install the font right at the top of the screen,
toward the left side.)

Filter Used: (Optional-For The Text)
Eye Candy 4-Gradient Glow

This tag was created using PSP9, but can be easily done in any version.
1-File/New 800 by 800, Transparent
(We will resize later and your tag will look slightly different than mine
as I made mine bigger)

2-Open RS-25
Edit/Copy, Close original
Edit/Paste/Paste As New Layer
Image/Resize by 125%, Resize All Layers Unchecked

3-Effects/3D Effects/Drop Shadow
Vertical & Horizontal=3, Opacity=50, Blur=5, Color=Black
Repeat Drop Shadow but change the V & H to (minus) -3

4-Open RS-29
Edit/Copy, Close original
Edit/Paste/Paste As New Layer
Effects/3D Effects/Drop Shadow, using the same settings as in Step #3
With The Move Tool, position the musical notes to the bottom left of tag.

5-Open RS-33
Edit/Copy, Close original
Edit/Paste/Paste As New Layer
Image/Resize by 75%, Resize All layers Unchecked
With the Move Tool, position the drums to the right of the
musical notes, see my tag above for example
Effects/3D Effects/Drop Shadow, same settings as in Step #3.

6-Open RS-17
Edit/Copy, Close original
Edit/Paste/Paste As New Layer
With the Move Tool, position at the top left of tag.
Effects/3D Effects/Drop Shadow, same settings as in Step #3

7-Open RS-35
Edit/Copy, Close original
Edit/Paste/Paste As New Layer
With the Move Tool, position the tube near the center, up some.
See my tag above for an example.
Effects/3D Effects/Drop Shadow, same settings as in Step #3

8-Open RS-27
Edit/Copy, Close original
Edit/Paste/Paste As New Layer
With the Move Tool, position the keyboard not quite to the top of the
right side of tag. See my tag above for an example.
Layers/Arrange/Send To Bottom
Effects/3D Effects/Drop Shadow, same settings as in Step #3

9-Open RS-12
Edit/Copy, Close original
Highlight the top layer in the Layers Palette
Edit/Paste/Paste As New Layer
With the Move Tool, position at the top of your tag
Effects/3D Effects/Drop Shadow, same settings as in Step #3

10-Highlight the bottom layer in the Layers Palette
Layers/New Raster Layers
Layers/Arrange/Send To Bottom

11-In the Materials Palette, change the background color to #35201f
Using the Flood Fill Tool, fill this layer with your background color.
Effects/Texture Effects/Blinds, using these settings-
Width=10, Opacity=25, Color=White Horizontal=Unchecked

12-Layers/Load-Save Mask/Load Mask From Disk and locate the
Vix-BigMask005, and Load
Layers/Merge/Merge Group

13-Layers/Merge/Merge Visible
Image/Resize by 75%, Resize All layers Checked
(If you want your tag smaller, repeat this step again)

14-Layers/New Raster Layer
Click on the Text Tool, find the Heavy Heap Font, settings are:
Vector, Size=72, Stroke Width=4, Miter Limit=10, Warp Text=Checked

15-In the Materials Palette, change the Foreground color to White
Click on your tag and type your name in the box and Click Apply
Layers/Convert To Raster Layer
With the Move Tool, position the text to your liking, you can see
my tag above for where I placed my name.

16-Effects/Plugins/Eye Candy 4000/Gradient Glow
Glow Width=9, Draw Only Outside Selection=Checked,
Thin, Color=#35201f

17-Effects/3D Effects/Drop Shadow, same settings as in Step #3
Add your copyrights and
Layers/Merge Visible
File/Export/PNG Optimizer/OK and save to your designated folder.

          The Weekend Away - my latest article for Australian Fashion Guide        
Click here to read my latest article for Australian Fashion Guide! xoxox

The Weekend Away - Fashion & Beauty, Fashion Designers, News & People - Fashion Directory - Australian Fashion Guide

You get the call last minute. A weekend away with the girls, and you need to be packed by this evening in order to make the flight. Don't panic, AFG has you covered. Simply follow this foolproof guide to packing for that ‘surprise trip out of town’, and you'll never have reason to stress again.

The first item on the agenda is taking the right bag. Forget lugging an oversized suitcase through Tullamarine, that’s so 2005. These days it’s all about taking carry-on luggage, so you can quickly skip off to your destination as soon as you hit the tarmac. The Samsonite Carrylite Wheeled Duffle in black is perfect. It is large enough to fit a weekend of necessities and small enough to fit in the overhead compartment. It has even been approved as carry-on luggage by Samsonite.

Shortly after you check into your hotel, it's off to the latest tapas bar for a light dinner and a few cocktails. Work all dress codes in Camilla & Marc's interpretation of this season's jumpsuit, the "Harmonia" in navy blue silk. Slip on this Charlie's Angels-inspired piece for a little one-step chic. $550 from Camilla and Marc.

The sun is up and the birds are singing! What better way to spend the day than by hitting up the markets and doing a spot of shopping, and we all know this means wearing something comfortable. In the softest cotton chambray, with a cheeky peek-a-boo cut-out at the back, select the “Romantic” jumpsuit by Aussie style favourites Sass & Bide ($350). Complete this outfit with a pair of Tony Bianco ‘Spencer’ flat sandals. As we all know, nothing ruins a shopping expedition faster than a blister!

We all know a weekend away with the girls wouldn’t be complete without hitting up a club for a dance and some mingling with cute boys, so get your flirt on in this super cute Shakuhachi off-the-shoulder wrap dress in the very on-trend blush tone. With black sheer mesh panels for a hint of sauciness, nothing’s sexier than leaving a little something to the imagination!

Exhausted after a night out? Then the only way to spend your last day in town is to go for a splash down the beach and check out the sights. Go for something a little unexpected on the sand in this Seventh Wonderland 7pm Zip one-piece ($265). With bandeau wrapping around the bust and contrasting panels that flatter the hips, this is the sort of piece that could take you from beach to bar with the simple addition of a tube skirt.

As the sun starts to set over the sparkling Pacific Ocean, it’s time to cover up, and who better to assist than the Caftan Queen, Camilla. The Waterfall Off-the-shoulder rouched caftan, in silk crepe with Swarovski crystals is ideal. This loose fitting floaty piece is exactly what you want on your salty skin after swimming all day.

          From Runway to the Streets        
Click here to read my latest article on Australian Fashion Guide! xoxo

From Runway to the Streets - Fashion & Beauty, Fashion Designers, News & People - Fashion Directory - Australian Fashion Guide

Harness Your Style

Right now everyone is talking about harnesses. A little bit bondage and a whole lotta rock'n'roll, this is one versatile look that can go from coffee to cocktails. Sienna Miller shocked the critics back in 2006 but in 2010, every girl is trying to get her hands on one. Blake Lively looked ravishing earlier this month in her straight-off-the-catwalk Lanvin and Roberto Cavalli recently showed off his interpretation for Spring 2011, incorporating the new season trend of nude colors, as seen below.

However, we are more inclined towards Australian label Sass & Bide – who seem to have been showing them for two seasons now – many attached to dresses and tees. ‘The Answer Harness’ is made from a combination of crocheted lace and leather with gold stud detailing, in black and ivory (RRP $290). Wear this regal piece of body jewellery over long, simple maxis and plain scoop-neck tees or tanks for maximum impact. Remember, this item is the statement piece, so keep the hairstyle & extra bling super simple, or risk looking overdone.

Next up on the hit list is independent Sydney-based brand Metallic Dreamer. Exclusive in their form, these intricate works of art are hand-crafted by the very creative Tanya Arlidge. Think mysterious & somewhat tribal; this brand is so on-trend it had clothing label Tallulah use its pieces to style their lookbook. This brand isn't for the meek, with Tanya describing her collection as less jewellery and more styling/layering tools. The pick of the collection? The ‘Black Ceremony’ harness (RRP $120.00). Picture the perfect mix of black leather and crochet appliqué with dirty silver pyramid stud detail & ostrich feather trim. Work them back with bodysuits and form-fitting shifts.

Now you know you’re on to a good thing when the likes of Miranda Kerr, Gwen Stefani, Kate Bosworth, Emma Booth and Megan Gale are wearing their pieces, so look no further than Damselfly. At the helm is Melbourne-based Christianna Heideman, with the ‘Mesh Body Chain’ (RRP $189) from her new range "I hate myself for loving you". Ultra sexy and rock-inspired, this adornment is an intricate web of silver-plated brass chains and will be available from early December, or check out her pop-up store - Shop 20-28 Chatham Street, Prahran.

More Kaleidoscope than Casio (Digital Print Dresses)

Digital print dresses are a breath of fresh air for Fashionistas in a sea of played out Boho and overdone Retro. Finally, something that hasn’t been done to death and it seems that the fashion savvy are embracing the look with gusto.

Josh Goot pioneered this look here in Australia for his Spring/Summer collection in 2008, using a more neutral colour palette, as if testing the buying public. These days he has confidently made a bigger commitment to digi prints in his collection – using brighter, almost fluoro tones, his fabrics are like a dream of acid paint swirls and lightning strikes that make for a bold statement. Of his latest collection, the pick has to be the Mirrored Sleeveless Mini Dress (RRP $945). Keep accessories to a minimum and let this piece speak for itself.

Moving on with my love of this Kaleidoscope-inspired look is design sensation Dion Lee. Bursting onto the Australian fashion scene only last year, the 24-year-old LMFF Design Award finalist presented an impressive and highly anticipated collection showcasing ultraviolet-coloured Rorschach (ink blot) prints, as seen here. Dion is certainly the designer to watch; invest in his pieces now to pass down to future generations.

And finally we have the scandalous Sydney based fashion trio ‘Illionaire’ with their interpretation – the ‘Worshipper Frock’ (RRP $638) available in the very wearable tones of peacock emerald and bronze. This dress has very structured tailoring with its layered and folded hemline – work this back with ankle boots and a blazer for a softer look.

          Authenticating to Azure AD non-interactively        
I want to use Azure AD as a user directory but I do not want to use its native web authentication mechanism which requires users to go via an Active Directory page to login (which can be branded and customized to look like my own). I just want to give a user name & password […]
          Joining an ARM Linux VM to AAD Domain Services        
Active Directory is one of the most popular domain controller / LDAP server around. In Azure we have Azure Active Directory (AAD).  Despite the name, AAD isn’t just a multi-tenant AD.  It is built for the cloud. Sometimes though, it is useful to have a traditional domain controller…  in the cloud.  Typically this is with […]
          Azure Active Directory Labs Series – Multi-Factor Authentication        
Back in June I had the pleasure of delivering a training on Azure Active Directory to two customer crowds.  I say pleasure because not only do I love to share knowledge but also, the preparation of the training forces me to go deep on some aspects of what I’m going to teach. In that training […]
          Turbo C++ for Windows 7 64 bit        

It's a very simple guide to install Turbo C++ on a Windows 7 64-bit operating system.
Follow the steps below to install the Turbo C++ compiler on your Windows 7, 64-bit operating system.

Note - Please let me know whether the process works for you or if you get some kind of error.

Step 1) Install DOSBox version 0.73 (search for it in the search bar).

Step 2) Create a folder on your C drive, e.g. Turbo (c:\Turbo\).

Step 3) Extract TC into the created folder (e.g. c:\Turbo\); you can search for and download it online.

Step 4) Run the DOSBox 0.73 application you downloaded and installed earlier.

Step 5) Type the following command at the DOSBox prompt (Z:\>) that appears:

Z:\> mount d c:\Turbo\

You will then get the message "Drive D is mounted as local directory c:\Turbo\".

Step 6) Type d: to switch to drive D:.

Step 7) Then type the commands below:

cd tc

cd bin

tc or tc.exe

Step 8) In Turbo C++, go to Options > Directories and change the source of TC to the source directory.
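Once this works, the mount-and-launch steps above can be automated so DOSBox runs them every time it starts. A sketch, assuming DOSBox's standard dosbox.conf [autoexec] section and the folder names used in the steps above:

```ini
[autoexec]
# mount c:\Turbo\ as DOSBox drive D and launch Turbo C++ automatically
mount d c:\Turbo\
d:
cd tc\bin
tc.exe
```

With this in place, starting DOSBox drops you straight into the Turbo C++ IDE without retyping the commands.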

Here's a video to help you out:

Show Notes

Thank you to Mark Buelsing, for your very useful comment during the show, with additional Form and List module examples!

Articles, News, Blogs
Extension Releases

          Give your Apache file Index a face lift with h5ai        
Are you running an Apache server? If you are then you would have been greeted by that horrible directory page more than once. Isn’t it time to get rid of the old to make way for the new? Give your … Continue reading
          Integrate BEMS Presence Service into Your Application        
Previous blog posts explained how to use the BEMS Docs service and BEMS Directory service.  Today we… / Read More
          Integrate BEMS Directory Lookup Service into Your Application        
  In an earlier blog post – Does Your App Need Storage Options? Integrate BEMS Docs Service… / Read More
          free dota 6.49ai download        

What is the best deal when it comes to picking a place to download MP3 music online? Do you want to download free PSP games?

Most reputable PSP download sites charge a one-time fee of around $40 for unlimited access to their database. You can still download games for free, but those free games are often illegal, copyrighted copies, and the sites hosting them may infect your computer, either to hijack it for illegal means or just to do some major damage.

For that reason, it is not advisable to download these to play. You can simply copy the music files manually onto your player and, aside from the extra step, it will work just fine; do not worry if synchronization is not working well.

Look around and use these tips to find a good service for your downloads. These are the tips that would come in handy when you go shopping for a download site to download music, movies, videos and games.


          Comment on Oracle 12c RAC on Oracle VirtualBox by Anzo        
Please help: I am getting a failure message during the GI installation, at the root script on node one:

[root@odbrac1 ~]# /u01/app/
Performing root user operation.
The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME= /u01/app/
Enter the full pathname of the local bin directory: [/usr/local/bin]:
Copying dbhome to /usr/local/bin …
Copying oraenv to /usr/local/bin …
Copying coraenv to /usr/local/bin …
Creating /etc/oratab file…
Entries will be added to the /etc/oratab file as needed by Database Configuration Assistant when a database is created
Finished running generic part of root script. Now product-specific root actions will be performed.
Relinking oracle with rac_on option
Using configuration parameter file: /u01/app/
The log of current session can be found at: /u01/app/oracle/crsdata/odbrac1/crsconfig/rootcrs_odbrac1_2017-04-11_04-30-54PM.log
2017/04/11 16:30:58 CLSRSC-594: Executing installation step 1 of 19: ‘SetupTFA’.
2017/04/11 16:30:58 CLSRSC-4001: Installing Oracle Trace File Analyzer (TFA) Collector.
2017/04/11 16:47:26 CLSRSC-614: failed to get the list of configured diskgroups
Died at /u01/app/ line 2044.
The command '/u01/app/ -I/u01/app/ -I/u01/app/ /u01/app/' execution failed
[root@odbrac1 ~]#
          BizTalk Server 2009 reduces the workload on KPN's servers by 70%         
Via the internet, telecom provider KPN receives roughly 15,000 customer changes per day concerning telephony, internet and/or TV. The various steps in this process have been automated with Microsoft technology. Rolf Velthuys, senior IT architect at KPN: "Our Microsoft platform consists of Active Directory for management, Exchange Server for communication, SQL Server for data storage, and several in-house developed .NET and web applications. BizTalk Server sits at the centre of all processes: it handles the intake of orders, the translation of data from SQL Server to Oracle (such as the order data that has to go to administration), and it manages the execution of orders, progress reporting, operational monitoring, etc." Internally, the platform is known as Maranello, referring to the Ferrari Maranello. That name suggests speed, and in the beginning it fit. Rolf Velthuys: "Over the years, the load on the platform kept increasing, until at some point the performance monitor on our servers showed a workload of 80 to 90 percent. Response times grew, time-outs occurred more and more often, and the capacity reserve was zero."
          Forefront Identity Manager as the solution for identity management of thousands of students and teachers at OVO Zaanstad         
OVO Zaanstad has long used a school administration system, or 'SAS', in which all data on students and teachers is registered. External access to the school's IT network was tracked in Active Directory (AD). Jan Zonneveld, head of the ICT department: "The ideal would be for user accounts to be generated automatically and integrally from Active Directory and managed via SAS. That would reduce the work involved in managing user accounts and lower the workload on IT administration."
          With Microsoft Enterprise Project Management, the Oad Groep has continuous, complete insight into all ongoing projects         
Oad's new business model brought a major change of processes on the business side, and IT had to be adapted accordingly. Five years ago, the Oad Groep therefore started a programme involving 20 million euros and consisting of nearly thirty projects. "If you want to manage all those projects properly, Excel sheets no longer suffice. Not only do you lose the overall picture, it is also far too costly to keep all the information in spreadsheets up to date," says Robert Dorenbusch, ICT manager at the Oad Groep. "Professional planning is a first requirement, certainly when an average of 50 employees work on 30 projects simultaneously." As a solution, the Oad Groep chose Microsoft Enterprise Project Management (EPM), consisting of Microsoft Project Server, Project Web Access and Project Professional. Products from other vendors were considered only briefly. Robert Dorenbusch: "The Oad Groep already uses quite a few Microsoft products, and EPM integrates well with them, such as with the portals of Microsoft SharePoint Server, a perfect place to store all the documentation belonging to the various projects." DBS Project from Amersfoort is Oad's Microsoft Certified Partner; over the past five years, DBS Project has delivered Project Server consultancy to some sixty companies and organisations. From the moment EPM was rolled out, the Oad Groep has profited from the many advantages the package offers. Besides more insight, better invoice control and timely steering, Robert Dorenbusch mentions a fourth advantage, accessibility: "We work with a Romanian company that develops applications for us. For them, EPM is readily accessible via a web interface, as it is for other external parties we work with. Through the integration with Active Directory they are authorised, and their work is also registered in EPM.
This makes managing the projects considerably simpler for our project staff and managers." That the project documentation is stored in an orderly way is, according to Robert Dorenbusch, a plus: "As soon as you start a new project in EPM, a SharePoint project environment is created in which all documents belonging to that project are automatically stored." Asked for a rough estimate of the efficiency gain, he expects it has increased by five percent. "On a total of 20 million euros, that is a great deal."
          ConQuaestor backs up 550 laptops remotely for a fraction of the budgeted cost        
Nearly all 550 professionals employed by ConQuaestor work on a laptop on which they store information. Partly because many of our employees visit our head office at most once every six months, a traditional backup solution was of little use to us. We were looking for a solution that makes incremental backups of all our laptops intelligently and fully automatically, and stores them on a central server. A first attempt was based on ComVault. Wouter Baaij: "That attempt failed fairly quickly, for a number of reasons." The solution Premier Support suggested to ConQuaestor is Exchange Server 2010 combined with Data Protection Manager (DPM) 2010, part of the Microsoft System Center family. DPM protects all data stored in Windows environments, such as SQL Server, Exchange, SharePoint, file servers (virtualised or not) and also Windows desktops and laptops. A new feature in the 2010 version is the ability to manage 'roaming laptops' centrally, which is exactly what ConQuaestor needed. Data stored on a laptop is thus protected, whether the laptop is connected to the network or being used in an aeroplane at 10 km altitude. Wouter Baaij, manager IT & Facility at ConQuaestor: ConQuaestor is abandoning the HOME-directory phenomenon. Wouter Baaij: "We used it for backup reasons, but we are now phasing it out. From now on, our users store everything locally, simply in the My Documents folder. As soon as they have a network connection anywhere, backups are made automatically. Multiple backups even, so that older versions of documents are retained as well. None of that was possible with the HOME directory; the new solution is much more convenient. We no longer have to bother our users with separate drive letters and complicated sync workarounds.
Nine months on, we have achieved a very good result, of which not only we, but also the specialists of partner inovativ, the SDM and the Microsoft consultant, are very proud."
          KPN Zakelijke Markt reduces IT incidents six-fold after health checks with Premier Support         
Nevertheless, KPN ZM itself struggled with a Windows IT environment that was not running at 100%. "We had to spend a lot of time on IT matters that were badly configured. The cause lay in the fact that we had once hired a partner who rolled out Windows Server 2003 as if it were Windows NT Server. As a result, we were stuck with a server environment that was regularly down and caused many problems with policies for workstations and employees. At one point we could not prevent our servers from being unavailable for two whole days! That was the last straw; a solution had to be found." KPN ZM shows how Premier Support improved the situation over a period of three years: from 6,000 minutes of reactive problem-solving in 2007 to 1,000 minutes in 2009, an improvement by a factor of 6. Allan de Gier: "In production terms, there are no more problems. The proactive approach with Microsoft Premier has borne fruit! Interruptions for end users have been reduced to a minimum." For Henk Schultz, Allan de Gier and their team, Premier Support above all means peace of mind: "With the knowledge, best practices, white papers and templates available through Premier Support, you know for certain that everything you do is done right the first time. And should things go wrong after all, the 24x7 problem resolution support from Microsoft gives you the hard guarantee that a solution to your problem will be found quickly and adequately, so that you can offer 24x7 continuity of services. That brings great peace of mind." "We are now converting the Windows Server 2003 environment to Windows Server 2008. We are splitting the migration into parts, including file server, print server, Active Directory, policies, etc. Some of those parts we do ourselves, others together with a partner, but everything happens under the direction of Microsoft Premier Support."
Henk Schultz: "That gives you the confidence that, once the production environment has switched over to Windows Server 2008, it will work as planned."
          Stable, fault-free PC workspaces save Zoomvliet 50% in time         
With Hyper-V, the chance of failures has decreased, and the frustration of lugging external hard drives around is a thing of the past. Hélène van der Putten, network administrator at Zoomvliet, explains: "Every PC had access to the server via the network. There, the student had access to a central home directory containing utilities, explanations and reference works, but this environment was not shielded at all and was therefore unsuitable for storing, for example, the exercises students have to complete. For that reason, each student had a personal external hard drive, which had to be physically attached to the workstation each time. That hard drive held the practical exercises a student had to complete for the exam." Cees van Rijsbergen, ICT teacher at Zoomvliet: "The use of external hard drives was rather failure-prone, which is highly frustrating for the mostly impatient students." To change this situation, Zoomvliet College chose server virtualisation with Hyper-V R2. Cees van Rijsbergen: "From the very first day after the introduction of Hyper-V R2, we have experienced an enormous improvement in reliability. Where practical modules and exams used to take 18 weeks, those same tracks are now completed in 9 weeks!" He explains this enormous gain as follows: "Students, but also teachers and ICT administrators, work more efficiently and no longer lose time restoring disrupted systems. We gain an enormous amount of time when preparing the server for exam assignments. The simple but time-consuming installation process can be skipped by the student; they now have direct access to the machine." Not only has the chance of failures decreased, the frustration of lugging external hard drives around is also a thing of the past. According to Van Rijsbergen, the new way of working has led to renewed enthusiasm.
Because failures have decreased, students no longer have to perform routine repair work before starting their exam assignments. The CD with server software is no longer needed either. According to Van Rijsbergen, the extra investment in capacity and memory yields enormous cost savings in maintenance and support hours: "We now spend far less time putting out 'fires' than before. That not only brings peace of mind, it also gives us more time and room for other ICT aspects, such as security in and around our ICT."
          Transparent exchange and recognition of DigiD and other digital identification means at the RDW        
The RDW makes data and services available online to the vehicle industry and to private individuals. For private individuals, DigiD is used as the authentication method. To enable the use of DigiD, and in future also of other identification systems, the RDW switched to Microsoft Active Directory Federation Services (ADFS). With the arrival of DigiD, many government organisations had to adapt their computer systems to this digital identification method. The Tax Administration, the UWV, the CBR and many other government institutions, provinces and municipalities already use DigiD. RDW: "For the vehicle industry, we worked with our own components that understood the digital certificates and granted access to our applications. To connect to DigiD, these components had to be extended. Moreover, we had to dive into the details of the DigiD ticket. Not such a problem in itself, but what if a third or fourth identification method were to come along?" Active Directory Federation Services (ADFS) is a standard component of Microsoft Windows Server 2003 R2. It is a single sign-on technology that, after verification, gives users access to multiple web applications during one online session. This is achieved by securely sharing digital identities and entitlements, or 'claims', between security systems and organisations.
          Grontmij reduces IT costs by 55 percent         
Grontmij, founded in 1915, is a consulting and engineering firm for construction, infrastructure and environment. It is a listed service provider focusing on consultancy, design, engineering, management and turnkey realisation of projects in the construction, infrastructure and environment market segments. Grontmij was plagued almost daily by outages of its e-mail platform, which was based on Lotus Notes. That platform was not only expensive but also organised decentrally, with mail servers at virtually every office. Moreover, employees could not log in to the system from outside. By migrating to Microsoft Exchange Server, Grontmij obtained a more stable platform with more functionality at much lower cost. By replacing the Lotus Notes servers with a central, clustered solution, Grontmij sharply reduced its number of servers and thereby its annual IT costs for communication and collaboration. Erik ten Winkel: "The overhead involved in those 32 Lotus Notes servers has disappeared. Administration at each location was very expensive, if only because of the licence costs." Grontmij managed to reduce its annual hardware costs by $100,000 and its licence costs by $50,000. In total, Grontmij saves 55% on its IT costs annually, which comes to $470,000 per year. Thanks to the move to Exchange Server, Grontmij is no longer hindered by outages, an enormous improvement compared with the old situation. The administrators also save a great deal of time: two administrators can now maintain the e-mail platform. Active Directory, part of Microsoft Windows Server 2003 and a very powerful and cost-effective solution, is used to administer all Exchange Server users.
          How to Generate GPG Public / Private Key Pair (RSA / DSA / ElGamal)?        
Written by Pranshu Bajpai |  | LinkedIn

This post is meant to simplify the procedure for generating GnuPG keys on a Linux machine. In the example below, I am generating a 4096-bit RSA public/private key pair.

Step 1. Initiate the generation process

#gpg --gen-key
This initiates the generation process. You have to answer some questions to configure the desired key size and your details. For example, you select from the several kinds of keys available; if you do not know which one you need, the default (1) will do fine.

I usually select a key size of 4096 bits, which is quite strong. You can do the same or select a smaller size. Next, select an expiration date for your key -- I chose 'never'.
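If you'd rather not answer the prompts interactively, GnuPG also supports unattended generation with --batch and a parameter file. A sketch (the name and address are placeholders; depending on your gpg version you may still be asked for a passphrase unless you add %no-protection):

```
%echo Generating a 4096-bit RSA key pair
Key-Type: RSA
Key-Length: 4096
Name-Real: Jane Doe
Name-Email: jane@example.com
Expire-Date: 0
%commit
%echo Done
```

Save this as, say, params.txt and feed it to gpg with: gpg --batch --gen-key params.txt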

Step 2. Generate entropy

The program needs entropy, also known as randomness, to generate the keys. To provide it, you can type on the keyboard, move the mouse pointer, or generate disk activity. Even so, you may have to wait a while before the keys are generated.

For this reason, I use rng-tools to generate randomness. First install 'rng-tools' by typing:
#apt-get install rng-tools
Run the tool: 
#rngd -r /dev/urandom
The process of finding entropy should now conclude faster. On my system, it was almost instantaneous.

Step 3. Check ~/.gnupg to locate the keys

Once the keys are generated, they are stored in ~/.gnupg, a hidden GnuPG directory in your home folder. You can list your keys by typing:

#gpg -k
The key fingerprint can be obtained by:
#gpg --fingerprint

Step 4. Export the public key to be shared with others

For others to be able to communicate with you, you need to share your public key. So move to the ~/.gnupg folder and export the public key:

#gpg --armor --export > pub_key.asc
'ls' should now show a new file in the folder called 'pub_key.asc'; 'cat' will show you that this is the public key file.

Important!

Needless to say, do not share your private key with anyone.
          /var/log Disk Space Issues | Ubuntu, Kali, Debian Linux | /var/log Fills Up Fast        
Written by Pranshu Bajpai |  | LinkedIn

Recently, I started noticing that my computer kept running out of space for no reason at all. I hadn't downloaded any large files and my root partition should not have had any space issues, and yet my computer kept telling me that I had '0' bytes available or free on my root drive. Finding that hard to believe, I invoked the 'df' command (for disk space usage):
#df -h

So clearly, 100% of the disk partition was in use, and '0' was available to me. Next, I checked whether the system had simply run out of 'inodes' to assign to new files; this can happen if there are a lot of small files of '0' bytes or so on your machine.
#df -i

Only 11% of inodes were in use, so this was clearly not a problem of running out of inodes. This was completely baffling. The first thing to do was to locate the cause of the problem. Computers never lie: if the machine tells me that I am running out of space on the root drive, then there must be some files I do not know about, most likely 'system' files created during routine operations.

To locate the cause of the problem, I executed the following command to find all files of size greater than ~2GB:
# find / -size +2000M

Clearly, the folder '/var/log' needed my attention. It seemed some kernel log files had grown humongous and had not been 'rotated' (explained later). So I listed the contents of this directory in order of decreasing size:
#ls -s -S

That one log file, 'messages.1', was 12 GB in size and the next two were 5.5 GB. So this is what had been eating up my space. The first thing I did was run 'logrotate', forcing a rotation pass:
#logrotate -f /etc/logrotate.conf
It ran for a while as it rotated the logs. logrotate is meant to automate the task of administering log files on systems that generate a heavy amount of logs. It is responsible for compressing, rotating, and delivering log files. Read more about it here.
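Rotation policies live in /etc/logrotate.conf and in per-service files under /etc/logrotate.d/. As a minimal sketch of the format (the path and the limits here are examples, not taken from my system):

```
/var/log/messages {
    rotate 4
    size 100M
    compress
    missingok
    notifempty
}
```

With a rule like this, the file is rotated once it grows past 100 MB, keeping four compressed generations.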

What I hoped by running logrotate was that it would rotate and compress the old log files so I can quickly remove those from my system. Why didn't I just delete that '/var/log' directory directly? Because that would break things. '/var/log' is needed by the system and the system expects to see it. Deleting it is a bad idea. So, I needed to ensure that I don't delete anything of significance.

After a while, logrotate completed execution and I was able to see some '.gz' compressed files in this directory. I quickly removed (or deleted) these.

Still, there were two files of around 5 GB: messages.1 and kern.log.1. Since these had already been rotated, I figured it would be safe to remove them as well. But instead of doing an 'rm' to remove them, I decided to just empty them (in case they were still being used somewhere):
#> messages.1
#> kern.log.1

The size of both of these was reduced to '0' bytes. Great! I freed up a lot of disk space this way, and nothing 'broke' in the process.
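The "empty rather than delete" step can be sketched safely on a scratch file first; the file name below is a stand-in, not a real log:

```shell
# Demonstrate emptying a log file in place instead of deleting it.
# NOTE: this uses a scratch file under mktemp -- not your real /var/log files.
set -e
tmpdir=$(mktemp -d)
log="$tmpdir/messages.1"

# simulate a large log file (1 MiB of zero bytes)
head -c 1048576 /dev/zero > "$log"
echo "before: $(wc -c < "$log") bytes"

# empty the file in place -- the shell equivalent of '> messages.1';
# the file itself stays present for any process that still has it open
: > "$log"
echo "after: $(wc -c < "$log") bytes"

rm -r "$tmpdir"
```

The point of ': > file' over 'rm file' is that a daemon holding the file open keeps logging to the same inode; deleting it would leave the space allocated until the daemon restarts.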

How did the log files become so large over such a small time period?

This is killing me. Normally, log files should not reach such sizes if logrotate is doing its job properly and everything is running right. I am still interested in knowing how the log files got so huge in the first place. Perhaps some service, application or process is producing a lot of errors? Maybe logrotate is not able to execute under 'cron' jobs? I don't know. Before emptying these log files I did take a look inside them to find repetitive patterns, but I quickly gave up on reading 5 GB files as I was short on time.

Since this is my personal laptop, which I shut down at night (as opposed to a server that is up all the time), I have installed 'anacron' and will set 'logrotate' to run under 'anacron' instead of cron. I did this because I suspect cron is not executing logrotate daily. We will see what the results are.

I will update this post when I have discovered the root cause of this problem.
          10 ways we’re making Classroom and Forms easier for teachers this school year        

We’ve seen educators do incredible things with G Suite for Education tools: creatively teach classroom material, collaborate with students, and design innovative assignments to achieve meaningful outcomes. Classroom is a useful tool for teachers, and since it launched three years ago, students have submitted more than 1 billion assignments.

This year, we’re sending teachers back to school with updates designed to help them do what they do best—teach. Today, we’re announcing 10 updates to Google Classroom and Google Forms to help teachers save time and stay organized.


  1. Single view of student work: To help teachers track individual student progress, we’ve created a dedicated page for each student in Classroom that shows all of their work in a class. With this new view, teachers and students can see the status of every assignment, and can use filters to see assigned work, missing work, or returned and graded work. Teachers and students can use this information to make personalized learning decisions that help students set goals and build skills that will serve them in the future.

  2. Reorder classes: Teachers can now order their classes to organize them based on daily schedule, workload priorities or however will help them keep organized throughout the school year. And students can use this feature too. "For teachers and students, organization is important, and being able to reorder class cards allows us to keep our classes organized in a simple and personalized way," notes Ross Berman, a 7th and 8th grade math teacher. "Students can move classes around so that the first thing they see is the class they know they have work for coming up."

  3. Decimal grading: As teachers know, grading is often more complicated than a simple point value. To be as accurate with feedback as possible, educators can now use decimal points when grading assignments in Google Classroom.

  4. Transfer class ownership: Things can change a lot over the summer, including who’s teaching which class. Now, admins and teachers can transfer ownership of Google Classroom classes to other teachers, without the need to recreate the class. The new class owner can get up to speed quickly with a complete view of past student work and resources in Drive.

  5. Add profile picture on mobile: Today’s users log a lot of hours on their phones. Soon, teachers and students will be able to make changes to their Classroom mobile profiles directly from their mobile devices too, including changing their profile picture from the Google Classroom mobile app. Ready the selfies!

  6. Provision classes with School Directory Sync: Google School Directory Sync now supports syncing Google Classroom classes from your student or management information system using IMS OneRoster CSV files. Admins can save teachers and students time by handling class setup before the opening bell.

  7. New Classroom integrations: Apps that integrate with Classroom offer educators a seamless experience, and allow them to easily share information between Classroom and other tools they love. Please welcome the newest A+ apps to the #withClassroom family: Quizizz, Edcite, Kami and coming soon,

  8. Display class code: Joining Google Classroom classes is easier than ever thanks to this new update. Teachers can now display their class code in full screen so students can quickly join new classes.

  9. Sneak Peek! Import Google Forms Quiz scores into Classroom: Using Quizzes in Google Forms allows educators to take real-time assessments of students’ understanding of a topic. Soon, teachers will be able to import grades from Quizzes directly into Google Classroom.

  10. Add feedback in question-by-question grading in Quizzes: More than test grades, meaningful feedback can improve learning. At ISTE this year, we launched question-by-question grading in Quizzes in Google Forms to help teachers save time by batch grading assessments. We’re taking it one step further and now, teachers will have the option to add feedback as well.

As educators head back to school, we want our newest Classroom teachers to get the most out of their experience. In the coming weeks, we’ll be launching a new resource hub to help teachers get set up on their first day of Classroom. If you’re already a Classroom pro, help your fellow teachers by sharing your favorite Classroom tips, tricks, resources and tutorials on social media using the hashtag #FirstDayofClassroom. Stay tuned on Twitter this Back to School season for more.

From all of us here at Google, we wish you a successful start to the school year! We hope these Google Classroom and Forms updates help you save time, stay organized and most importantly, teach effectively during back to school and beyond.

          Dating Sites The Best Dating Safety Tip Around Reverse Phone Lookup        
As I am scouting through the headlines of news and advice articles online, I can't help but zero in on the fact that I am seeing several redundant topics related to dating: flirting success tips, tips to "inspire him to pursue you", tips for recession-proof dating, etc. Where are the safe dating tips, tips to "returning home in one piece", or tips for getting to know your date before you meet? No, I haven't misspoken. Investigating the person you've never met before to assess the potential risk of meeting him alone is key to a successful and safe date.

I never meet anyone I initially met online in person before I get them confirmed and checked out via a reverse phone lookup. After he calls you and his Caller ID is recorded on your phone, use that number. Enter it into the online search form and watch what comes back.

It's almost like reading a book about someone else's life, and it's all there. If it is an honorable man you are about to date, you'll find out. The dishonorable ones are pretty obvious too. The reverse phone lookup is like a mirror of truth: it reflects everyone's good and bad deeds in full. And now that you know he told you the truth about his name, age, address, marital status, career and financial situation, living and family situations, etc., you can silence the little concerned voice in your head and listen to his voice instead.

Normally, when you worry about being out with a person you know nothing about, you are bound to feel tense and be jumpy on a dark street. That's not conducive to a successful date. My suggestion is to find out all about your date long before you meet and make sure he is a wholesome, trustworthy man without criminal history or a wife. An added bonus is to have him checked out before you meet, so you don't feel guilty yet that you are betraying his trust. He may very well be betraying your trust, and don't you forget it. Only a reverse phone lookup can tell.


          Giveaway: Win $100 PayPal Cash        
Welcome to the $100 Cash Giveaway! Hosted by Giveaway Promote. Are you a blogger looking for a very effective, traffic packed, easy to use paid Featured Giveaway Plan? Giveaway Promote, Giveaway Scoop and Blog Giveaway Directory have teamed up to create just that with the Gold Giveaway Plan. All together, the Gold Giveaway Plan allows […]
          Maven dependency packaging plugin        




<!-- Generate launch scripts for both Linux and Windows -->
<!-- Root directory -->
<!-- The packaged jar, and the jars of the Maven dependencies, go into this directory -->
<!-- Directory for the executable scripts -->
<!-- Target directory for configuration files -->
<!-- Copy configuration files into the directory above -->
<!-- Where to copy configuration files from (default src/main/config) -->
<!-- Layout of jars in the lib directory; the default is a ${groupId}/${artifactId} directory structure, "flat" puts the jars directly into lib -->
<!-- Main class -->
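The comments above line up with the configuration elements of the appassembler-maven-plugin; the XML itself appears to have been stripped. A hedged reconstruction sketch — element names are from that plugin, but all values here are hypothetical:

```xml
<plugin>
  <groupId>org.codehaus.mojo</groupId>
  <artifactId>appassembler-maven-plugin</artifactId>
  <configuration>
    <!-- Generate launch scripts for both Linux and Windows -->
    <platforms>
      <platform>unix</platform>
      <platform>windows</platform>
    </platforms>
    <!-- Root directory of the assembled distribution (hypothetical value) -->
    <assembleDirectory>${project.build.directory}/dist</assembleDirectory>
    <!-- The packaged jar and the Maven dependency jars go here -->
    <repositoryName>lib</repositoryName>
    <!-- Directory for the executable scripts -->
    <binFolder>bin</binFolder>
    <!-- Target directory for configuration files -->
    <configurationDirectory>conf</configurationDirectory>
    <!-- Copy configuration files into the directory above -->
    <copyConfigurationDirectory>true</copyConfigurationDirectory>
    <!-- Where to copy configuration files from (default src/main/config) -->
    <configurationSourceDirectory>src/main/config</configurationSourceDirectory>
    <!-- "flat" puts jars directly into lib instead of ${groupId}/${artifactId} -->
    <repositoryLayout>flat</repositoryLayout>
    <programs>
      <program>
        <!-- Main class (hypothetical name) -->
        <mainClass>com.duxiu.demo.app.Main</mainClass>
      </program>
    </programs>
  </configuration>
</plugin>
```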


<move todir="${}/${project.artifactId}-${version}/com/duxiu/demo/app">
    <fileset dir="${}/classes/com/duxiu/demo/app">
        <include name="*.class" />
    </fileset>
</move>

SIMONE 2016-07-20 09:42 发表评论

          Ubuntu Kerberos configuration


Kerberos is a network authentication system based on the principle of a trusted third party, the other two parties being the user and the service the user wishes to authenticate to. Not all services and applications can use Kerberos, but for those that can, it brings the network environment one step closer to being Single Sign On (SSO).

This section covers installation and configuration of a Kerberos server, and some example client configurations.


If you are new to Kerberos there are a few terms that are good to understand before setting up a Kerberos server. Most of the terms will relate to things you may be familiar with in other environments:

  • Principal: any user, computer, or service provided by a server needs to be defined as a Kerberos Principal.

  • Instances: are used for service principals and special administrative principals.

  • Realms: the unique realm of control provided by the Kerberos installation. Usually the DNS domain converted to uppercase (EXAMPLE.COM).

  • Key Distribution Center: (KDC) consists of three parts: a database of all principals, the authentication server, and the ticket granting server. For each realm there must be at least one KDC.

  • Ticket Granting Ticket: issued by the Authentication Server (AS), the Ticket Granting Ticket (TGT) is encrypted using the user's password, which is known only to the user and the KDC.

  • Ticket Granting Server: (TGS) issues service tickets to clients upon request.

  • Tickets: confirm the identity of the two principals. One principal being a user and the other a service requested by the user. Tickets establish an encryption key used for secure communication during the authenticated session.

  • Keytab Files: are files extracted from the KDC principal database and contain the encryption key for a service or host.

To put the pieces together, a Realm has at least one KDC, preferably two for redundancy, which contains a database of Principals. When a user principal logs into a workstation, configured for Kerberos authentication, the KDC issues a Ticket Granting Ticket (TGT). If the user supplied credentials match, the user is authenticated and can then request tickets for Kerberized services from the Ticket Granting Server (TGS). The service tickets allow the user to authenticate to the service without entering another username and password.

Kerberos Server


Before installing the Kerberos server a properly configured DNS server is needed for your domain. Since the Kerberos Realm by convention matches the domain name, this section uses the domain configured in the section called “Primary Master”.

Also, Kerberos is a time sensitive protocol. So if the local system time between a client machine and the server differs by more than five minutes (by default), the workstation will not be able to authenticate. To correct the problem all hosts should have their time synchronized using the Network Time Protocol (NTP). For details on setting up NTP see the section called “Time Synchronisation with NTP”.

The first step in installing a Kerberos Realm is to install the krb5-kdc and krb5-admin-server packages. From a terminal enter:

sudo apt-get install krb5-kdc krb5-admin-server 

You will be asked at the end of the install to supply a name for the Kerberos and Admin servers, which may or may not be the same server, for the realm.

Next, create the new realm with the krb5_newrealm utility:

sudo krb5_newrealm 


The questions asked during installation are used to configure the /etc/krb5.conf file. If you need to adjust the Key Distribution Center (KDC) settings simply edit the file and restart the krb5-kdc daemon.

  1. Now that the KDC is running, an admin user is needed. It is recommended to use a different username from your everyday username. Using the kadmin.local utility, in a terminal prompt enter:

    sudo kadmin.local
    Authenticating as principal root/admin@EXAMPLE.COM with password.
    kadmin.local: addprinc steve/admin
    WARNING: no policy specified for steve/admin@EXAMPLE.COM; defaulting to no policy
    Enter password for principal "steve/admin@EXAMPLE.COM":
    Re-enter password for principal "steve/admin@EXAMPLE.COM":
    Principal "steve/admin@EXAMPLE.COM" created.
    kadmin.local: quit

    In the above example steve is the Principal, /admin is an Instance, and @EXAMPLE.COM signifies the realm. The "every day" Principal would be steve@EXAMPLE.COM, and should have only normal user rights.


    Replace EXAMPLE.COM and steve with your Realm and admin username.

  2. Next, the new admin user needs to have the appropriate Access Control List (ACL) permissions. The permissions are configured in the /etc/krb5kdc/kadm5.acl file:

    steve/admin@EXAMPLE.COM        * 

    This entry grants steve/admin the ability to perform any operation on all principals in the realm.
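    The `*` grants every permission; MIT's kadm5.acl format also supports individual permission letters if you want something narrower. A hypothetical sketch (the helpdesk principal is made up for illustration):

    ```
    # Full admin rights on all principals in the realm:
    steve/admin@EXAMPLE.COM        *
    # Hypothetical helpdesk principal limited to changing passwords (c),
    # inquiring about principals (i), and listing them (l):
    helpdesk@EXAMPLE.COM           cil
    ```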

  3. Now restart the krb5-admin-server for the new ACL to take effect:

    sudo /etc/init.d/krb5-admin-server restart 
  4. The new user principal can be tested using the kinit utility:

    kinit steve/admin
    steve/admin@EXAMPLE.COM's Password:

    After entering the password, use the klist utility to view information about the Ticket Granting Ticket (TGT):

    klist
    Credentials cache: FILE:/tmp/krb5cc_1000
            Principal: steve/admin@EXAMPLE.COM

      Issued           Expires          Principal
    Jul 13 17:53:34  Jul 14 03:53:34  krbtgt/EXAMPLE.COM@EXAMPLE.COM

    You may need to add an entry into the /etc/hosts for the KDC. For example:       kdc01 

    Replacing with the IP address of your KDC.

  5. In order for clients to determine the KDC for the Realm some DNS SRV records are needed. Add the following to /etc/named/

    _kerberos._udp.EXAMPLE.COM.     IN SRV 1  0 88
    _kerberos._tcp.EXAMPLE.COM.     IN SRV 1  0 88
    _kerberos._udp.EXAMPLE.COM.     IN SRV 10 0 88
    _kerberos._tcp.EXAMPLE.COM.     IN SRV 10 0 88
    _kerberos-adm._tcp.EXAMPLE.COM. IN SRV 1  0 749
    _kpasswd._udp.EXAMPLE.COM.      IN SRV 1  0 464

    Replace EXAMPLE.COM, kdc01, and kdc02 with your domain name, primary KDC, and secondary KDC.

    See Chapter 7, Domain Name Service (DNS) for detailed instructions on setting up DNS.

Your new Kerberos Realm is now ready to authenticate clients.

Secondary KDC

Once you have one Key Distribution Center (KDC) on your network, it is good practice to have a Secondary KDC in case the primary becomes unavailable.

  1. First, install the packages, and when asked for the Kerberos and Admin server names enter the name of the Primary KDC:

    sudo apt-get install krb5-kdc krb5-admin-server 
  2. Once you have the packages installed, create the Secondary KDC's host principal. From a terminal prompt, enter:

    kadmin -q "addprinc -randkey host/" 

    After issuing any kadmin commands, you will be prompted for your username/admin@EXAMPLE.COM principal password.

  3. Extract the keytab file:

    kadmin -q "ktadd -k keytab.kdc02 host/" 
  4. There should now be a keytab.kdc02 in the current directory, move the file to /etc/krb5.keytab:

    sudo mv keytab.kdc02 /etc/krb5.keytab 

    If the path to the keytab.kdc02 file is different adjust accordingly.

    Also, you can list the principals in a Keytab file, which can be useful when troubleshooting, using the klist utility:

    sudo klist -k /etc/krb5.keytab 
  5. Next, there needs to be a kpropd.acl file on each KDC that lists all KDCs for the Realm. For example, on both primary and secondary KDC, create /etc/krb5kdc/kpropd.acl:

    host/
    host/
  6. Create an empty database on the Secondary KDC:

    sudo kdb5_util -s create 
  7. Now start the kpropd daemon, which listens for connections from the kprop utility. kprop is used to transfer dump files:

    sudo kpropd -S 
  8. From a terminal on the Primary KDC, create a dump file of the principal database:

    sudo kdb5_util dump /var/lib/krb5kdc/dump 
  9. Extract the Primary KDC's keytab file and copy it to /etc/krb5.keytab:

    kadmin -q "ktadd -k keytab.kdc01 host/"
    sudo mv keytab.kdc01 /etc/krb5.keytab

    Make sure there is a host for before extracting the Keytab.

  10. Using the kprop utility push the database to the Secondary KDC:

    sudo kprop -r EXAMPLE.COM -f /var/lib/krb5kdc/dump 

    There should be a SUCCEEDED message if the propagation worked. If there is an error message check /var/log/syslog on the secondary KDC for more information.

    You may also want to create a cron job to periodically update the database on the Secondary KDC. For example, the following will push the database every hour:

    # m h  dom mon dow   command
    0 * * * * /usr/sbin/kdb5_util dump /var/lib/krb5kdc/dump && /usr/sbin/kprop -r EXAMPLE.COM -f /var/lib/krb5kdc/dump
  11. Back on the Secondary KDC, create a stash file to hold the Kerberos master key:

    sudo kdb5_util stash 
  12. Finally, start the krb5-kdc daemon on the Secondary KDC:

    sudo /etc/init.d/krb5-kdc start 

The Secondary KDC should now be able to issue tickets for the Realm. You can test this by stopping the krb5-kdc daemon on the Primary KDC, then use kinit to request a ticket. If all goes well you should receive a ticket from the Secondary KDC.

Kerberos Linux Client

This section covers configuring a Linux system as a Kerberos client. This will allow access to any kerberized services once a user has successfully logged into the system.


In order to authenticate to a Kerberos Realm, the krb5-user and libpam-krb5 packages are needed, along with a few others that are not strictly necessary but make life easier. To install the packages enter the following in a terminal prompt:

sudo apt-get install krb5-user libpam-krb5 libpam-ccreds auth-client-config 

The auth-client-config package allows simple configuration of PAM for authentication from multiple sources, and libpam-ccreds will cache authentication credentials, allowing you to log in even when the Key Distribution Center (KDC) is unavailable. This is also useful for laptops that authenticate using Kerberos while on the corporate network, but need to remain usable off the network as well.


To configure the client in a terminal enter:

sudo dpkg-reconfigure krb5-config 

You will then be prompted to enter the name of the Kerberos Realm. Also, if you don't have DNS configured with Kerberos SRV records, the menu will prompt you for the hostname of the Key Distribution Center (KDC) and Realm Administration server.

The dpkg-reconfigure adds entries to the /etc/krb5.conf file for your Realm. You should have entries similar to the following:

[libdefaults]
        default_realm = EXAMPLE.COM
...
[realms]
        EXAMPLE.COM = {
                kdc =
                admin_server =
        }

You can test the configuration by requesting a ticket using the kinit utility. For example:

kinit steve@EXAMPLE.COM Password for steve@EXAMPLE.COM: 

When a ticket has been granted, the details can be viewed using klist:

klist
Ticket cache: FILE:/tmp/krb5cc_1000
Default principal: steve@EXAMPLE.COM

Valid starting     Expires            Service principal
07/24/08 05:18:56  07/24/08 15:18:56  krbtgt/EXAMPLE.COM@EXAMPLE.COM
        renew until 07/25/08 05:18:57


Kerberos 4 ticket cache: /tmp/tkt1000
klist: You have no tickets cached

Next, use the auth-client-config to configure the libpam-krb5 module to request a ticket during login:

sudo auth-client-config -a -p kerberos_example 

You should now receive a ticket upon successful login authentication.


SIMONE 2016-07-05 11:37 发表评论

          Spark History Server configuration and usage

Background: why the Spark History Server exists

Take standalone mode as an example: while a Spark Application is running, Spark provides a WEBUI listing the application's runtime information; but that WEBUI closes when the Application finishes (whether it succeeds or fails). In other words, once a Spark Application has completed, its history can no longer be viewed.

The Spark History Server was created to handle exactly this situation: with the right configuration, event log information is recorded while the Application runs, so that after the Application finishes, the WEBUI can re-render the UI and display the Application's runtime information.

When Spark runs on YARN or Mesos, the Spark History Server can still reconstruct the runtime information of a completed Application (provided the Application's event logs were recorded).


Configuring & using the Spark History Server

Start the Spark History Server with the default configuration:

cd $SPARK_HOME/sbin
./start-history-server.sh


starting org.apache.spark.deploy.history.HistoryServer, logging to /home/spark/software/source/compile/deploy_spark/sbin/../logs/spark-spark-org.apache.spark.deploy.history.HistoryServer-1-hadoop000.out
failed to launch org.apache.spark.deploy.history.HistoryServer:
        at org.apache.spark.deploy.history.FsHistoryProvider.<init>(FsHistoryProvider.scala:44)
        ... 6 more

The log directory needs to be specified at startup: hdfs://hadoop000:8020/directory






Description of the history-server configuration parameters

1) spark.history.updateInterval





  Location of the Kerberos keytab file used by the HistoryServer









spark.eventLog.enabled  true
spark.eventLog.dir      hdfs://hadoop000:8020/directory
spark.eventLog.compress true

export SPARK_HISTORY_OPTS="-Dspark.history.ui.port=7777 -Dspark.history.retainedApplications=3 -Dspark.history.fs.logDirectory=hdfs://hadoop000:8020/directory"


spark.history.ui.port=7777  changes the WEBUI access port to 7777

spark.history.fs.logDirectory=hdfs://hadoop000:8020/directory  once this property is configured, there is no need to specify the path explicitly when running start-history-server.sh

spark.history.retainedApplications=3  specifies how many Applications' history to keep; beyond this value, older application information is deleted



Access the WEBUI: http://hadoop000:7777


A few questions that came up while using the Spark History Server:




spark.history.fs.logDirectory: the Spark History Server page only displays information found under this specified path;




Question 2: spark.history.retainedApplications=3 apparently has no effect??

The History Server will list all applications; it will just retain a maximum number of them in memory. That option does not control how many applications are shown, it controls how much memory the History Server will need.




SIMONE 2016-05-26 14:12 发表评论

          Installing Streamripper on Mac OS X via Homebrew        
Here's a quick post on how to install "streamripper" on OS X, with a little help from homebrew.

1. Install homebrew

ruby -e "$(curl -fsSkL"

2. Check installation

brew doctor

3. Fix any issues

I had the message "Warning: /usr/bin occurs before /usr/local/bin"...

To fix this, edit "/etc/paths" and change the order of the directories.

On re-running "brew doctor", you should hopefully now get the message "Your system is ready to brew."

4. Install "streamripper"

brew install streamripper

5. Run streamripper

streamripper http://(your stream URL here)

6. Play files

If you want to listen to your MP3 recordings from the command-line, you can use the built-in OS X "afplay" command.

afplay file.mp3

Homebrew Notes

Here are some additional notes on homebrew.

Software is installed into directories inside "/usr/local/Cellar/" with symbolic links created in "/usr/local/bin".

For example, after installing "streamripper", a directory "/usr/local/Cellar/streamripper" will be created (along with some other dependencies), and a symbolic link "/usr/local/bin/streamripper" will be created which points to "../Cellar/streamripper/1.64.6/bin/streamripper"
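The layout can be illustrated with a throwaway directory. This is a mock-up of the structure, not real brew output; the paths and version number are just for demonstration:

```shell
# Mock up the Cellar layout under /tmp to show how the symlink resolves.
DEMO=/tmp/cellar-demo
mkdir -p "$DEMO/Cellar/streamripper/1.64.6/bin" "$DEMO/bin"
touch "$DEMO/Cellar/streamripper/1.64.6/bin/streamripper"
# Homebrew creates the link relative to the prefix, e.g. ../Cellar/...
ln -sf ../Cellar/streamripper/1.64.6/bin/streamripper "$DEMO/bin/streamripper"
readlink "$DEMO/bin/streamripper"
# prints: ../Cellar/streamripper/1.64.6/bin/streamripper
```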

To get the latest packages and homebrew: -

brew update 

Upgrade the installed software with: -

brew upgrade

These commands can be combined as: -

brew update && brew upgrade

To uninstall software: -

brew uninstall imagemagick

There is a great on-line package browser here:

          Play Framework: including non-project files when packaging with dist

Play uses sbt-native-packager, which supports the inclusion of arbitrary files by adding them to the mappings:

mappings in Universal ++=   (baseDirectory.value / "scripts" * "*" get) map     (x => x -> ("scripts/" + x.getName)) 
The syntax assumes Play 2.2.x

val jdk8 = new File("D:\\JDK\\JDK8\\jre1_8_0_40")
mappings in Universal ++= (jdk8 ** "*" get) map (x => x -> ("jre8/" + jdk8.relativize(x).getOrElse(x.getName)))

SIMONE 2016-02-26 16:46 发表评论

          Enabling mod_rewrite in Apache under Ubuntu Server        
Here's some quick instructions on enabling mod_rewrite in Apache under Ubuntu 10.04 Server.
  1. sudo a2enmod rewrite
  2. sudo vi /etc/apache2/sites-available/default
  3. Change "AllowOverride" from "None" to "all" for the /var/www directory.
  4. sudo /etc/init.d/apache2 restart
That's it!
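With the module enabled and AllowOverride set, rewrite rules in a .htaccess file under /var/www will be honoured. A minimal sketch — the rule itself is a hypothetical example, not part of the original instructions:

```apache
# Hypothetical front-controller rule: send requests for files that
# don't exist on disk to index.php.
RewriteEngine On
RewriteCond %{REQUEST_FILENAME} !-f
RewriteRule ^(.*)$ index.php [QSA,L]
```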
          Creating EFI String for Asus 8400GS Silent        
There are a number of ways of getting your graphics card working within OS X (in order of difficulty) : -
  • Adding "GraphicsEnabler=Yes" to Chameleon /Extra/
  • Adding EFI string to Chameleon /Extra/
  • Using an injector such as NVInject or NVEnabler
  • Patching your DSDT file
The first one didn't work for my Asus Silent EN8400GS, so here's how I generated and added an EFI string.

First, install "gfxutil"
Next, get the location of your graphics card by entering the following: -
gfxutil -f display
You should get something back like this: -
DevicePath = PciRoot(0x1)/Pci(0x1,0x0)/Pci(0x0,0x0)
Next, create a file called "graphics.plist" which is the following (but put your settings in): -

Next you need to generate a hex string to be inserted - run the following command: -
gfxutil -i xml -o hex graphics.plist graphics.hex
This will create a "graphics.hex" file in the current directory.
Lastly, copy and paste this string into your /Extra/ file in the following format: -

Reboot and voila! You should have Quartz Extreme (QE) and Core Image (CI) enabled - open up Front Row, if it works then you're done!
          Repeated WMI DCOM errors        
We have an office of 50+ Dell workstations all running XP SP3 with all current patches. The servers are all Server 2008 (Active directory, Exchange...
          Comment on Configuring Internet Sharing between an iMac running Snow Leopard, a Mac laptop, an Ubuntu netbook, and a Roku by Bubbalou        
Thanks! Still working. Followed instructions with the Mark August 7, 2011 at 3:26 pm amendments and it worked instantly. However, no luck double-checking following the instructions: 'in Terminal, entering more /etc/bootpd.list and looking to see that “reply_threshold_seconds” is still set to “0″.' This resulted in the message: '/etc/bootpd.list: No such file or directory'
          Information For Your Knowledge        
  • Knowledge of CICAG Informations

          Directory Compare        
There is an interesting post on Ray Camden's blog for the "Friday Challenge". I suggested the challenge of doing a directory compare to show file differences.

Ray's Blog Post:

Link to my original solution:

I am going to try out some of the posts listed and report if I plan to change my original solution.
          [3 Hours LEFT] Closed by Midnight, FOREVER!        
It is NOW or NEVER. After Midnight, Today, Friday, June 19, 2017 at 11:59 PM EST/ 8:59 PM PST, Small List, Big Profit 2.0 will be taken off the market FOREVER. — Even if Henry plans to release again in the future, it will be at an extreme high price and who knows when he … Continue reading "[3 Hours LEFT] Closed by Midnight, FOREVER!"
          [8 Hours LEFT] Remove From The Market By Midnight.        
You’ve got exactly 8 hours. I’ve received FINAL confirmation from Henry that he will remove the product from the market by Midnight, Today, Monday, June 19, 2017 at 11:59 PM EST/ 8:59 PM PST. So, if you really want to know how to make big affiliate cash with extreme small list where Henry showed you … Continue reading "[8 Hours LEFT] Remove From The Market By Midnight."
          [14 Hours LEFT] After Midnight TODAY, GONE.        
I have to make this really clear. As I really want you to receive your transformation as possible, I strongly encourage you to grab your seat right now at: (Wild Stuff!) >> (We will remove the secrets from the public in 14 hours!) No Upsells. No Downsells. No more than 100 people allowed. Further.. … Continue reading "[14 Hours LEFT] After Midnight TODAY, GONE."
          17 Spots LEFT. After 1 1/2 hours, $97 a pop.        
I’ve just received a FINAL confirmation from Henry that there are ONLY 17 spots LEFT at this point. It means that you ONLY have less than 1 1/2 hours or so to grab your spot right now at: (3 SPOTS LEFT!) >> (Only 17 SPOT LEFT. 73 Spots GONE in 6 hours!) It also means … Continue reading "17 Spots LEFT. After 1 1/2 hours, $97 a pop."
          37 Slots LEFT in 6 hours. Superb.        
This is just awesome Personally, I couldn’t believe that the ENERGY is so high in which everyone wants to learn how my bud, Henry decided to show you how he helped his student to attract $1,000.28 in 48 hours with extremely small list. In fact, Henry will even show you how to attract hungry buyers … Continue reading "37 Slots LEFT in 6 hours. Superb."
          I FORCED him on your behalf        
Instead of making you feel miserable along with 2,800+ people who didn’t get on earlier, our inner circle partners and I forced Henry to make few more slots available. Some friends have even forwarded those emails we received to his Vice President. At first, Henry rejected the idea to re-open the door to the public. … Continue reading "I FORCED him on your behalf"
          15 Spots left. (235 seats gone in 12 hours)        
I’ve just received a FINAL confirmation from Henry that there are 15 spots LEFT at this point. It also means that you ONLY have less than two hours or so to grab your spot right now at: (15 SPOTS LEFT!) >> (Only 15 SPOT LEFT. 235 Spots GONE in 12 hours!) REMEMBER: Once all … Continue reading "15 Spots left. (235 seats gone in 12 hours)"
          190 out of 250 spots are GONE in less than 10 hours        
At this point, I anticipate that all slots will be GONE in less than five hours and may be less. It also means that once all the remaining slots are taken, there is really nothing I can do for you. Again, like I mentioned many (many) times before in my business that once everything is … Continue reading "190 out of 250 spots are GONE in less than 10 hours"
          WOW — Only 100 LEFT in 5 hours.        
Seriously! This is just bizarre. In fact, I’ve never seen anything like this before where people are just eager to know what Henry did to help one of his students to receive $1,000.28 in 48 hours with extremely small list. Seriously, you really need to grab it right now at: (Wild Stuff!) >> (Only … Continue reading "WOW — Only 100 LEFT in 5 hours."
          LIVE: Small List with Big Profit. 3,000+ Replies. (250 Spots ONLY!)        
Since my partners and I received over 3,000+ replies, I foresee that all 250 spots will be GONE extremely FAST. Reserve your seat right now at: >> I mentioned on the previous emails that the whole thing is ONLY 47 bucks. There is no upsell, no downsell, and no hardsell. In fact, I’ve even … Continue reading "LIVE: Small List with Big Profit. 3,000+ Replies. (250 Spots ONLY!)"
          Comment on What can I do with AIR Native Extensions? by Michael Baisuck        
HELP! After three days of trying to create a damn ANE, I have come to the conclusion that the reason I am having so much trouble is that there is a very ambiguous part of the tutorial. I keep getting tripped up on the following sections: "The library.swf file needs to be placed inside of the native code directory for every platform you target. For example, you'd place this file inside iOS/, android/, x86/, etc., depending on your project's targets....)" So, first of all, where is this "native code directory"? That concept just gets thrown into the mix without an explanation. My new FB project did not create it so where is that supposed to be and what should be in it? Secondly, in the same section, you go on to say "When complete, your HelloANELibrary folder should contain both HelloANELibrary.swc, and HelloANENative should contain library.swf." I think the grammar on that sentence needs some work. It does not make sense. "Both" indicates that there are two things to put in the "HelloANELibrary folder" but only "HelloANELibrary.swc" is mentioned. This is followed by another reference (I assume) to the non-existant "native code directory". Can you please, please, please clarify these points so I can continue trying to implement an ANE? Thank you very much for your time and attention. Sincerely, Mike
          Comment on MAX: Developing iOS Applications with Flash Builder and Adobe AIR by Hamid        
This is very great example, I love the way Adobe handle it. Do you have also example for saving the image on your ios device and uploading from the ios image directory? thanks and regards
          A lazy yet surprisingly effective approach to regression testing        

To regression test, or not to regression test

Building up a proper testing suite can involve a good amount of work, which I'd prefer to avoid because it's boring and I'm lazy.

On the other hand, if I'm not careful, taking shortcuts that save effort in the short run could lead to a massive productivity hit further down the road.

For example, let's say instead of building up a rigorous test suite I test my code manually, and give a lot of careful thought to its correctness.

Right now I think I'm OK. But how long will it stay that way?

Often my code expresses a significant amount of subtle complexity and it would be easy to break the code in the future when the subtleties of the logic are not as fresh in my mind.

That could introduce regressions, nasty regressions, especially nasty if this code goes into production and breaks things for thousands of users. That would be a very public humiliation, which I would like to avoid.

So in the absence of a proper regression test, the most likely outcome is that if I ever need to make changes to the code I will get paralyzed by fear. And that, at the very least, will increase the cost of code maintenance considerably.

So maybe a regression test would be a good idea after all.

Comparing outputs: minimum work, maximum result

With that settled, the only part left to figure out is how to do it in the laziest possible way that still works.

For TKLBAM I implemented a lazy yet effective approach to regression testing which should work elsewhere as well.

In a nutshell the idea is to setup a bit of code (e.g., a shell script) that can run in two modes:

  1. create reference: this saves the output from code that is assumed to work well. These outputs should be revision controlled, just like the code that generates them.

    For example, in tklbam if you pass the --createrefs cli option it runs various internal commands on pre-determined inputs and saves their output to reference files.

  2. compare against reference: this runs the same code on the same inputs and compares the output with previously saved output.

    For example, in tklbam when you run without any options it repeats all the internal command tests on the same pre-determined inputs and compares the output to the previously saved reference output.

Example from TKLBAM's shell script regtest:

internal dirindex --create index testdir
testresult ./index "index creation"

Testresult is just a shell script function with a bit of magic that detects which mode we're running in (e.g., based on an environment variable because this is a lowly shell script).
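A stripped-down sketch of the idea (this is not TKLBAM's actual regtest code; the function body, variable names and sample tests here are made up):

```shell
REFDIR=refs
mkdir -p "$REFDIR"

# Save output as a reference when CREATEREFS=yes, otherwise diff against it.
testresult() {  # $1 = output, $2 = test name
    N=$((N + 1))
    ref="$REFDIR/$N"
    if [ "$CREATEREFS" = "yes" ]; then
        printf '%s\n' "$1" > "$ref"
        echo "CREATED: $N - $2"
    elif printf '%s\n' "$1" | diff -q "$ref" - >/dev/null 2>&1; then
        echo "OK: $N - $2"
    else
        echo "FAILED: $N - $2"
    fi
}

run_tests() {
    N=0
    testresult "$(echo hello)" "echo output"
    testresult "$(seq 3)" "seq output"
}

CREATEREFS=yes
run_tests    # first run: record the reference outputs
CREATEREFS=""
run_tests    # later runs: compare against them
```

The files under refs/ would then be committed to revision control alongside the code, as suggested above.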

Example usage of

$ --createrefs
OK: 1 - dirindex creation
OK: 2 - dirindex creation with limitation
OK: 3 - dirindex comparison
OK: 4 - fixstat simulation
OK: 5 - fixstat simulation with limitation
OK: 6 - fixstat simulation with exclusion
OK: 7 - fixstat with uid and gid mapping
OK: 8 - fixstat repeated - nothing to do
OK: 9 - dirindex comparison with limitation
OK: 10 - dirindex comparison with inverted limitation
OK: 11 - delete simulation
OK: 12 - delete simulation with limitation
OK: 13 - delete
OK: 14 - delete repeated - nothing to do
OK: 15 - merge-userdb passwd
OK: 16 - merge-userdb group
OK: 17 - merge-userdb output maps
OK: 18 - newpkgs
OK: 19 - newpkgs-install simulation
OK: 20 - mysql2fs verbose output
OK: 21 - mysql2fs myfs.tar md5sum
OK: 22 - fs2mysql verbose output
OK: 23 - fs2mysql tofile=sql

I'm using this to test cli-level outputs but you could use the same basic approach for code-level components (e.g., classes, functions, etc.) as well.

Note one gotcha: you have to clean local noise (e.g., timestamps, the current working directory) out of the outputs you're comparing. For example, the testresult function I'm using for tklbam runs the output through sed and sort so that it doesn't include the local path.

Instead of:

/home/liraz/public/tklbam/tests/testdir 41ed    0       0
/home/liraz/public/tklbam/tests/testdir/chgrp   81a4    0 e1f06
/home/liraz/public/tklbam/tests/testdir/chmod   81a4    0 d58c9
/home/liraz/public/tklbam/tests/testdir/chown   81a4    0 d58cc
/home/liraz/public/tklbam/tests/testdir/donttouchme     81a4 0 4b8d58ab
/home/liraz/public/tklbam/tests/testdir/emptydir        41ed 0 4b8d361c
/home/liraz/public/tklbam/tests/testdir/file    81a4    0 d35bd
/home/liraz/public/tklbam/tests/testdir/link    a1ff    0 d362e

We get:

testdir 41ed    0       0
testdir/chgrp   81a4    0 e1f06
testdir/chmod   81a4    0 d58c9
testdir/chown   81a4    0 d58cc
testdir/donttouchme     81a4 0    4b8d58ab
testdir/emptydir        41ed 0    4b8d361c
testdir/file    81a4    0 d35bd
testdir/link    a1ff    0 d362e

That way the tests don't break if we happen to run them from another directory.
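The stripping itself can be a one-liner. A sketch, with the checkout path supplied via a variable (how the real regtest derives its path is an assumption here):

```shell
# Strip the local checkout prefix so reference output is machine-independent.
TESTS_DIR=/home/liraz/public/tklbam/tests   # stand-in for the local path
echo "$TESTS_DIR/testdir/file    81a4    0 d35bd" | sed "s|^$TESTS_DIR/||"
# prints: testdir/file    81a4    0 d35bd
```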

          Comment on Push hilights and msg’s from IRSSI to your iPhone by Torvald Lekvam        
You must have the pushIPhone file inside the prowl directory. If you do not; pushIPhone will not know where to locate the pyrowl module.
          Decompress only zip files present in a directory to a particular folder        
Scenario: There are one or multiple zip files present in a directory. The objective is to select only zip files, decompress those and put it in another directory. This program can also be used to select files of a particular extention type. Here is the code package com.cs.unzipexample; /** * This program is for decompress … Continue reading
          Welcome to the new map of the Isle of Man        

Re-posted from the blog. With less than a week to go until the third Isle of Man mapping day - to be held in Douglas on Saturday 2nd October - we are launching a new version of with the aim of helping to promote the new map of the Isle of Man.

For over four years, volunteers have been building up the map of the Isle of Man as part of the OpenStreetMap project, creating a map of the Island that can be used by anyone, not only as an online map, but also as a source of information for their own projects. is one such project, built by Dan Karran with the aim of promoting this new map of the Isle of Man and showing what can be done with open data to help promote local businesses and organisations both within the Island and to a wider audience.

The site will continue to grow from this initial stage, to include an online directory of much of the information contained within the map, and an ability to simply update any of that information, which we will then use to update the OpenStreetMap project itself.

If you are interested in this new map of the Isle of Man, please do come along to the Velvet Lobster at 10am (or 1pm) on Saturday to the mapping day for an introduction to the OpenStreetMap project, what it's all about, how to update the map, and how to use the information in various ways.

          shadowsocks 安装        

Install the Command Line Client

If you prefer command line client, then you can install it on your Linux with the following command.


sudo apt-get install python-pip
sudo pip install shadowsocks


Yes, you can use the above commands to install the shadowsocks client on Ubuntu. But it will install it under the ~/.local/bin/ directory, which causes loads of trouble. So I suggest using su to become root first and then issuing the following two commands.

apt-get install python-pip pip install shadowsocks


sudo yum install python-setuptools   or   sudo dnf install python-setuptools sudo easy_install pip sudo pip install shadowsocks


sudo zypper install python-pip sudo pip install shadowsocks


sudo pacman -S python-pip sudo pip install shadowsocks

As you can see the command of installing shadowsocks client is the same to the command of installing shadowsocks server, because the above command will install both the client and the server. You can verify this by looking at the installation script output

Downloading/unpacking shadowsocks Downloading shadowsocks-2.8.2.tar.gz Running (path:/tmp/pip-build-PQIgUg/shadowsocks/ egg_info for package shadowsocks  Installing collected packages: shadowsocks Running install for shadowsocks  Installing sslocal script to /usr/local/bin Installing ssserver script to /usr/local/bin Successfully installed shadowsocks Cleaning up...

sslocal is the client software and ssserver is the server software. On some Linux distros such as ubuntu, the shadowsocks client sslocal is installed under /usr/local/bin. On Others such as Archsslocal is installed under /usr/bin/. Your can use whereis command to find the exact location.

user@debian:~$ whereis sslocal sslocal: /usr/local/bin/sslocal

Create a Configuration File

we will create a configuration file under /etc/

sudo vi /etc/shadowsocks.json

Put the following text in the file. Replace server-ip with your actual IP and set a password.

"local_address": "",

Save and close the file. Next start the client using command line

sslocal -c /etc/shadowsocks.json

To run in the background

sudo sslocal -c /etc/shadowsocks.json -d start

Auto Start the Client on System Boot

Edit /etc/rc.local file

sudo vi /etc/rc.local

Put the following line above the exit 0 line:

sudo sslocal -c /etc/shadowsocks.json -d start

Save and close the file. Next time you start your computer, shadowsocks client will automatically start and connect to your shadowsocks server.

Check if It Works

After you rebooted your computer, enter the following command in terminal:

sudo systemctl status rc-local.service

If your sslocal command works then you will get this ouput:

● rc-local.service - /etc/rc.local 

Compatibility Loaded: loaded (/etc/systemd/system/rc-local.service; enabled; vendor preset: enabled)
Active: active (running) since Fri 2015-11-27 03:19:25 CST; 2min 39s ago
Process: 881 ExecStart=/etc/rc.local start (code=exited, status=0/SUCCESS)
CGroup: /system.slice/rc-local.service
├─ 887 watch -n 60 su matrix -c ibam
└─1112 /usr/bin/python /usr/local/bin/sslocal -c /etc/shadowsocks....

As you can see from the last line, the sslocal command created a process whose pid is 1112 on my machine. It means shadowsocks client is running smoothly. And of course you can tell your browser to connect through your shadowsocks client to see if everything goes well.

If for some reason your /etc/rc.local script won’t run, then check the following post to find the solution.

How to enable /etc/rc.local with SystemdInstall the Command Line Client

If you prefer the command line client, you can install it on Debian/Ubuntu with the following commands.


sudo apt-get install python-pip
sudo pip install shadowsocks


Yes, you can use the above commands to install the shadowsocks client on Ubuntu. But it will be installed under the ~/.local/bin/ directory, which causes loads of trouble. So I suggest using su to become root first and then issuing the following two commands.

apt-get install python-pip
pip install shadowsocks


On CentOS/Fedora:

sudo yum install python-setuptools   or   sudo dnf install python-setuptools
sudo easy_install pip
sudo pip install shadowsocks


On openSUSE:

sudo zypper install python-pip
sudo pip install shadowsocks


On Arch Linux:

sudo pacman -S python-pip
sudo pip install shadowsocks

As you can see, the command for installing the shadowsocks client is the same as the command for installing the shadowsocks server, because the command above installs both the client and the server. You can verify this by looking at the installation output:

Downloading/unpacking shadowsocks
Downloading shadowsocks-2.8.2.tar.gz
Running (path:/tmp/pip-build-PQIgUg/shadowsocks/ egg_info for package shadowsocks

Installing collected packages: shadowsocks
Running install for shadowsocks

Installing sslocal script to /usr/local/bin
Installing ssserver script to /usr/local/bin
Successfully installed shadowsocks
Cleaning up...

sslocal is the client software and ssserver is the server software. On some Linux distros, such as Ubuntu, the shadowsocks client sslocal is installed under /usr/local/bin. On others, such as Arch, sslocal is installed under /usr/bin/. You can use the whereis command to find the exact location.

user@debian:~$ whereis sslocal
sslocal: /usr/local/bin/sslocal

Create a Configuration File

We will create a configuration file under /etc/:

sudo vi /etc/shadowsocks.json

Put the following text in the file. Replace server-ip with your server's actual IP and set a password. (The field names below follow the standard shadowsocks.json layout.)

{
    "server": "server-ip",
    "server_port": 8388,
    "local_address": "127.0.0.1",
    "local_port": 1080,
    "password": "your-password",
    "timeout": 300,
    "method": "aes-256-cfb"
}

Save and close the file. Next, start the client from the command line:

sslocal -c /etc/shadowsocks.json

To run it in the background:

sudo sslocal -c /etc/shadowsocks.json -d start
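If sslocal fails to start, a malformed or incomplete shadowsocks.json is a common cause, and a quick sanity check can be scripted. A minimal sketch in Python — note that `check_config` is my own helper, not part of shadowsocks, and the field list follows the commonly documented config layout:

```python
import json

# Fields a typical shadowsocks.json is expected to carry.
REQUIRED = ("server", "server_port", "local_address", "local_port",
            "password", "method")

def check_config(path: str) -> list:
    """Parse the JSON config and return the list of missing fields
    (an empty list means the basic shape looks fine)."""
    with open(path) as f:
        cfg = json.load(f)
    return [key for key in REQUIRED if key not in cfg]
```

Running `check_config('/etc/shadowsocks.json')` before invoking sslocal names any fields still to be added; a JSON syntax error will surface as an exception from `json.load`.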

Auto Start the Client on System Boot

Edit the /etc/rc.local file:

sudo vi /etc/rc.local

Put the following line above the exit 0 line:

sudo sslocal -c /etc/shadowsocks.json -d start

Save and close the file. The next time you start your computer, the shadowsocks client will start automatically and connect to your shadowsocks server.

Check if It Works

After rebooting your computer, enter the following command in a terminal:

sudo systemctl status rc-local.service

If your sslocal command works, you will get output like this:

● rc-local.service - /etc/rc.local Compatibility
Loaded: loaded (/etc/systemd/system/rc-local.service; enabled; vendor preset: enabled)
Active: active (running) since Fri 2015-11-27 03:19:25 CST; 2min 39s ago
Process: 881 ExecStart=/etc/rc.local start (code=exited, status=0/SUCCESS)
CGroup: /system.slice/rc-local.service
├─ 887 watch -n 60 su matrix -c ibam
└─1112 /usr/bin/python /usr/local/bin/sslocal -c /etc/shadowsocks....

As you can see from the last line, the sslocal command created a process whose PID is 1112 on my machine. That means the shadowsocks client is running smoothly. And of course, you can point your browser at your shadowsocks client to see if everything goes well.

If for some reason your /etc/rc.local script won't run, check the following post for the solution.

How to enable /etc/rc.local with Systemd


          Samba Standalone Server Installation on Debian 9 (Stretch)        
This tutorial explains the installation of a Samba fileserver on Debian 9 and shows you how to configure Samba to share files over the SMB/CIFS protocol. Samba is configured as a standalone server, not as a domain controller. In the resulting setup, every user has their own home directory, all users share a group directory with read/write access, and optionally an anonymous share is added.
          Why you should consider a Windows Phone for your next phone - part 2        
In part 1 of this article I outlined a few of the reasons why I am really starting to enjoy my Windows phone.  In this follow-up I'll continue that and describe a few more of those reasons.  I'll reiterate here that this is not a review of Windows Phone 8, nor is it a treatise on why it's the best phone OS.  I happen to think that each of the three major phone systems is great and has its target audience.  I'm only intending to outline what makes me smile about Windows Phone.

Office and SkyDrive

The next area I'd like to highlight is Office.  Like it or not, the world runs on Microsoft Office.  My company slings around Excel and Word documents.  My daughter complained the other day that she didn't have Office on her computer and that meant she couldn't interact properly with her college professors.  Office runs business, plain and simple.  And Windows Phone has Office built right in.

Samsung and the other Android phone makers often include office suites that do a remarkably good job with Office compatibility.  However, all it takes is one bad experience with formatting, or losing a page or two in your PowerPoint, to realize that "almost 100% compatible" can be very frustrating.

Combine Office with the seamless SkyDrive integration and you get a very nice mobile workplace.  I'm not spending much time talking about Office, as it really "just works" and is one of the best spreadsheet and document editing experiences you'll find on a mobile phone.  Where it really shines is when you mix in SkyDrive.

When you open up Office on the phone you are met with a Recents list that spans documents on your phone and in the cloud.  No distinction is made, and opening a document from the cloud will check that you have the latest version before opening and automatically save it back to the cloud when done.  You don't ever have to worry about manually syncing a folder.  Trying to set up something similar on my Galaxy is harder.  You can install an office suite (like Kingsoft) and even open documents right from sites like Dropbox, but saving the files didn't appear to push them back to Dropbox automatically.  Yes, I could set up some auto-syncing of Dropbox with a folder, but none of this is automatic, and it can be challenging for a new user (like your mom!) to set up.  And even once you get it running, you still have something that is "mostly compatible" with Office.

Being 100% compatible with Office and seamlessly integrated with the cloud makes Office and SkyDrive a killer story for Windows Phone 8.  Windows 8.1 is making the story even sweeter with even better integration of SkyDrive with Windows on tablets, laptops, and desktops.  Installing Office on those other computers means that you can edit your Office files wherever you are without any concern about breaking compatibility or editing an older version.  Peace of mind is worth a lot!

Photo Integration, Automatic Uploads, and Live Tile

Windows Phone does a great job of pulling photos together from several different sources into a single location.  Everyone manages their photos in a different way.  On my Lumia, go into the Photos app and choose Albums and you'll see all the photo albums I have on my phone, on my SkyDrive, and on Facebook, all in one location.  No need to open each of these apps separately.  And while it hasn't been utilized a great deal, I think other apps can take advantage of this as well.

The next call out here is automatic, full resolution, upload of your photos and videos to SkyDrive.  It's a ton of fun to take a bunch of pics, come home and grab some dinner (while your phone uploads all the stuff you shot as soon as it hits your WiFi), and then grab your Surface RT or tablet and swish through all the shots you got right there on SkyDrive.

The last thing I want to mention here is how the Photos app updates the Live Tile.  It's always fun to unlock your phone and see a fresh, rotating set of pictures from your camera roll right there on the Live Tile.

Facebook Integration

There is a lot to say about Facebook integration in Windows Phone so I will just highlight a few of the areas that I particularly enjoy.

In the Me tile I can post a status update to all of my social networks at one time.  I can update my status on Live Messenger, Facebook, Twitter, and LinkedIn all at once and in one place.  This is a great time saver.  Yes, I know that Android has apps that can do this, but anything integrated and built in is better in my book.  Also in the Me tile I can check in on Facebook and see all my Facebook and Twitter notifications and interactions.  Very handy!

I've already highlighted that I can see all my Facebook photo albums just by opening the Photos app and looking in my albums.  However, as you can see in the photo above, the Photos app has a What's New section that shows you all the photos that your Facebook friends are posting.  Want to see that cool photo  your sister posted this morning?  No need to open Facebook. Just open Photos and hit What's New and there it is!

One pet peeve I have with the Facebook app on Android is that I keep having to search for the family members I want to tag.  Not so on Windows Phone.  Right from the picture I can choose to share to Facebook, then click the add-tag button to see a list of the most recently used tags.  No need to search.  Love this feature!  And if I want to do something more complicated I can always just crack open the Facebook app.

The last thing I want to mention about Facebook integration is one of my favorites.  I originally didn't think I would like it, but boy, have I changed my mind.  Windows Phone allows you to specify that an app will control what the lock screen looks like, and Facebook supports this.  When you install Facebook and run it the first time, you are given the choice to have Facebook manage the lock screen and how it should look.  Now, every time I wake my phone I'm greeted with a new photo right out of my Facebook photo albums.  I really can't tell you how many times I've chuckled or smiled at a photo that was on my lock screen.  It rotates them many times throughout the day too, so it's always fresh.

Contact Handling

Contact handling on Windows Phone is really quite nice.  It has all the same features you would expect such as grouping contacts from Facebook, Google, and others into a single directory, the ability to set custom ringtones for a contact, and edit details like birthdays, spouses, etc.  However there are a couple things that really make it stand out.

The first is contact grouping.  You can create groups of contacts with a given name.  Once you have the group you can then go into the group and email them as a group, SMS to them as a group, or see what they have been posting to their Facebook or Twitter accounts as a group. You can view their shared photos as a group.  You get all the same functionality as when you are looking at all your contacts but it is filtered down to just that group.  This can be very handy!

And my favorite is contact profile pics from social networks.  Yes, I know that some Android phone makers have done this for Facebook, but I have never seen anyone do it with Facebook, Twitter, and LinkedIn and have that information flow everywhere in the system, including SMS and email.  It's very cool to get a phone call from your wife, see her picture full screen on your phone, and realize that she has changed her profile picture.  Yes, I could manually set her contact photo to anything I like, but I enjoy having my contacts set their own profile pictures.

And that wraps up part 2.  In the next (and likely last) installment I'll finish up going over my favorite features with two of the best, battery life and voice commands.  Talk to you soon!

          Why you should consider a Windows Phone for your next phone - part 1        
Back in November of 2012 I gave out a warning that I didn't think Windows Phone would survive.  I'm still not certain it will, but I am more optimistic today than ever before.  I've been carrying a Lumia 920 as my daily driver for most of the last two months and really have no strong desire to go back to my Galaxy S3.  So I decided I should write a lengthy article on what is keeping me on the Lumia, as it might be helpful to others.  First let me say that you really can't go wrong with any of the current flagship devices, the iPhone 5, Galaxy S3/S4, or Lumia 920/1020, so this post is not at all about deciding which one is better.  I would say that each one of them is better for different types of people.  I'm only writing about what I like about the Lumia 920.

Build Quality And Size

The Lumia devices are extremely well built.  They don't have removable back plates and, like the iPhone, feel solid through and through.  The color is baked into the phone, so dropping it and even taking a gouge out of the body will not leave you with a scratch that is discolored.

The height and width of the 920 are fine, but the device really is thicker and heavier than necessary.  This is due in large part to the built-in wireless charging.  While I prefer the thinness and weight of my Galaxy S3, I don't find it a deal breaker on the Lumia.  It spends most of its time in my pocket and I don't spend hours at a time holding the device to my ear.

Phone Dialer and Integration

Two things that I really like about the phone dialer are the tight integration of visual voicemail and the ability to see phone numbers I have looked up by business name rather than by number.  First, visual voicemail is simply a swipe away.  Just tap the phone icon and swipe over to the voicemail section.  Some Android phones may have something similar, but I didn't really see it on my Galaxy S3 (which still does visual voicemail, but it's a separate app from the carrier).

The second thing I really like is how Windows Phone saves the name of a location I looked up to call in the call log.  On most phones, when you use maps to look up a business to call, the entry that appears in the call log is just the number.  Later you will look at that number and not remember who you called.  With Windows Phone you will see the actual business name in the call log, making it very clear.  One other small thing I like about the call log is how I can call back a number with just a simple touch of the phone icon next to the entry.  I know that with the Galaxy S3/S4 you can swipe one direction to call and the other to text, but there is something I prefer about a simple touch over a swipe motion.

One thing that Windows Phone is missing is a T-9 dialer.  It can be nice to just pull up a dial pad and start dialing the name of the person you want to call.  You can do this with Android but it's missing on WP8.


Maps

While Google Maps is an excellent product, there are two features of Nokia Maps on Windows Phone devices that I really want to highlight.

The first is offline maps.  In this connected world it's easy to think that offline maps are not important.  However, just a few weeks ago my wife was taking my daughter on a college visit.  She was driving on local surface streets and discovered she had no cell coverage in that area.  Her "blue ball" had kept moving, but her maps would not update.  She was lost.  She had to stop and ask directions.  Yes, I am aware that Google Maps has (had?) some type of offline map caching, but it's not like Nokia Maps, where you can pre-download entire states and countries of mapping data that is always available to you even when you are offline.  This even includes local business names.  It's really impressive to show your friends that you can turn off all WiFi and cell service to your phone and *still* look up directions to a local restaurant and navigate without any issue.

The second one is really small but shows the completeness of the maps.  I've heard Google make noise about "inside maps" so I did the following quick test.  On both my phones I zoomed in on my local mall, Rivergate.  On my Galaxy S3 I could see the outline of the entire mall and the names of the 3-4 major retailers on the corners of the mall.  On the Lumia I could see all that *plus* every single shop on the inside of the mall too.  The complete directory with every shop correctly sized and placed.  Very cool to also show your friends that you have the mall directory in your pocket.  You start getting questions like "what kind of phone is that again?"


Music

Music is one area where I think Windows Phone 8 really shines right now.  There are plenty of apps for playing your music.  The Pandora app (now that it is finally here) is probably the best implementation of Pandora I have ever seen.  I also have Spotify and iHeartRadio installed.  I am using an app called CloudMuzik to access my Google music cloud.  And while I'm not a subscriber, Xbox Music is baked in and is an excellent alternative to Spotify.  I have also heard that Amazon is bringing an Amazon Cloud Player to WP8 very soon.  The point is that unless you depend on Amazon Cloud Player for your music, you will really not have any trouble rocking your tunes.  That being said, I want to call out two things that really make it shine for me.

The first is an app, Nokia Music.  I have been a Pandora user for a long time and I really love the service.  However, I find myself using Nokia Music more and more.  The main reason is there are no commercials!  And speaking of commercials, it is important to point out that Pandora is ad-free on WP8 until 2014.  While Nokia doesn't allow you to fine-tune mixes by removing songs you don't like, they do provide lots of pre-built mixes and allow you to create your own from a set of artists.  You can also take these mixes offline, *all for free*.  This offline capability is really nice, as I recently found out on my trip to TechEd.  I was about to board one of my flights and wanted to keep listening.  So I just tapped the mix I was listening to and told it to take it offline.  Within just a few minutes I had 3 mixes offline that gave me between 1-2 hours of non-repetitive listening while on the plane.  Very nice!  Yes, I know that Spotify can do this too, but Nokia Music does it for free!!

The other thing to call out about Nokia Music is their Music+ offering.  For $3.99 per month you can skip as many tracks as you like in your mixes and you get lyrics to all the songs.  While this is not as good as the on-demand ability you get with Spotify, it is also less than half the price.  Nokia Music, Pandora, and Spotify make my Lumia a musical powerhouse.

The second thing I want to mention about music is the controls.  No matter what app is playing background music you can get access to controls to play, pause, and (if supported) next and previous tracks just by pressing either of the volume controls and the controls slide down from the top.  Yes, I realize that on Android the controls are generally available on the lock screen and in the pull down shade.  However, anyone who has used Android has discovered that lock screen music controls are not a given.  Pandora didn't have lock screen controls for some time on my GS3.  In any case, I think the music situation on WP8 is very nice and easily able to satisfy all comers.

That's it for part 1.  In the next installment we'll look at a few more features that make Windows Phone an excellent choice for your next phone!
          Administrative Assistant        

The Administrative Assistant will be an assistant to the ministry staff and ministry teams to achieve the goals of FBCSD. Essential duties and responsibilities include the following. Other duties may be assigned.

Personal Qualifications
For fruitful work it is essential that the Administrative Assistant have the following qualifications:
1. Be a growing Christian
2. Be self-motivated
3. Ability to work independently and as a team player
4. Be flexible, cheerful & patient
5. Ability to meet people comfortably and confidently
6. Be organized, competent, efficient & creative
7. Be computer proficient in Microsoft Office
8. Ability to multi-task
9. Writing and editing ability, possess proficiency in grammar, usage and style
10. Able to safeguard confidential material

Reception Responsibilities
1. Greet visitors to the church office, providing assistance whenever possible
2. Answer the telephone cordially, providing assistance whenever possible
3. Handle miscellaneous details (unscheduled, unspecified, unexpected as assigned by the Pastor(s))

Secretarial Responsibilities
1. Handle all church mail, including pick-up, drop-off, opening, distribution, and responding as appropriate
2. Maintain church membership records, including posting weekly attendance
3. Prepare and mail bulletins, and newsletters
4. Arrange and organize material for Sunday and Wednesday distribution
5. Keep church calendar up to date
6. Prepare business meeting agenda items and file meeting minutes
7. File bulletins and other church related documents
8. Keep office organized/neat

Financial Responsibilities
1. Enter and post contributions in ACS
2. Enter and post all accounts payables into ACS and print checks
3. Post timesheet and leave request information for payroll; print payroll checks
4. Make sure checks are signed by appropriate personnel; mail checks and file check records
5. Enter and post journal entries as needed
6. Perform teller duties as needed, make bank deposits; post all deposit information to ACS
7. Balance checking account in ACS
8. Provide financial reports as needed
9. Work with Treasurer to update budget
10. Work with Treasurer to close out financial year in ACS

Managerial Responsibilities
1. Keep an inventory of and order supplies for office and staff
2. Supervise the maintenance of church office equipment in consultation with the building managers
3. Recruit and supervise volunteer help for routine tasks as needed
4. Take initiative in problem solving

Media Responsibilities
1. Design and prepare the weekly bulletin
2. Design and prepare public relations pieces (mailers and fliers)
3. Design and prepare Church Directory
4. Update and prepare Deacon Manuals

To perform this job successfully, an individual must be able to perform each essential duty satisfactorily. The requirements and responsibilities are representative of the knowledge, skill, and/or ability required. Reasonable accommodations may be made to enable individuals with disabilities to perform essential functions.

Physical Demands
The physical demands described here are representative of those that must be met by an employee to successfully perform the essential function of this job. While performing the duties of this job, the employee may need to lift boxes of paper around 40 pounds. Reasonable accommodations may be made to enable individuals with disabilities to perform the essential functions.

Position Supervision:
The Administrative Assistant will be under the direct supervision of the pastoral staff. The desire is that a good, team oriented working arrangement will be expressed between the pastoral staff and administrative assistant.

Hours: 32 hours a week over four days, Monday - Thursday.

          proof that curiousj =/= (doesn't equal) V_flashbang        
Table of contents:
a) differences between me and him
b) our chat conversation
c) proof based on logic

a) differences

im in V Alliance, he is not. I have played with him. no offense curiousj, you dont build fast =P and i cannot play myself in a battle room; technology is limited and my computer unable, its impossible. search how you would with my old smurfs, you will see he is not me. ask stchurdak too.

b) chat conversation

i was chatting with him on GR. can't chat with myself; i now have only one computer thats usable, and GR does not cope with logins from the same comp.

V_flashbang232: rawr
V_flashbang232: escalation?
curiousj: wow, how long has this been here?
V_flashbang232: what been here?
V_flashbang232: escalation?
V_flashbang232: idk
V_flashbang232: a while
curiousj: this pm
curiousj: srry
V_flashbang232: idk
curiousj: btw, i tried to download TADR...
curiousj: It set off my antivirus
V_flashbang232: lol
V_flashbang232: its cuz its a .exe file
V_flashbang232: thats the main file type that carries a virus
curiousj: I've downloaded others, but they did not set it off
V_flashbang232: others include zip files, archive files (zip, rar, sfx)
curiousj: still no set off
V_flashbang232: which site did you dl it from
curiousj: FU
V_flashbang232: well gosh then....if you want it that way. =P ahahha jk
curiousj: wtf?
curiousj: i saw this coming
V_flashbang232: lolspeak
V_flashbang232: lol
curiousj: ?
V_flashbang232: but FU doesnt carry viruses
V_flashbang232: FU in lolspeak (lol, lmfao, wtf) is Fuck you
curiousj: I had a simialar experence with ESC
curiousj: btw, i know
V_flashbang232: oh you were all like wtf so sorry...
curiousj: nvm, lets move on
curiousj: my first thread? it talked about viruses and TAU
curiousj: ESC destroyed two of my computers
V_flashbang232: ???
V_flashbang232: esc has no virus
V_flashbang232: like i downloaded it on like at least 10 computers
curiousj: 7 trojans?
V_flashbang232: with like the latest security stuff
V_flashbang232: your computer was just bein retarded, it happened to my old XP computer too, worked fine still
curiousj: killing a laptop in twenty minutes w/ vista?
V_flashbang232: detected 47 trojans in ta, impossible, there were barely any files in directory
V_flashbang232: umm thats probably hardware related
curiousj: ya, no.
V_flashbang232: esc is widely used on computers dating back to a decade ago.
V_flashbang232: and prove it isnt
V_flashbang232: its probably something wrong with your power button
V_flashbang232: its happened to me before
V_flashbang232: ta has 0 trojans
curiousj: we did extensive testing on it
curiousj: anyways, what news on the project?
V_flashbang232: \whmm
V_flashbang232: removing shields
V_flashbang232: oh dont mind if im idle for a bit its likely to happen
V_flashbang232: i had to do CHORES?!??!?? D:
V_flashbang232: balancing units
curiousj: thats fine, nobody is going to get in trouble over TA
V_flashbang232: oh and gamma asked if curiousj = flashbang232
V_flashbang232: :|
curiousj: ya, what's his problem?
V_flashbang232: he pm'd me on moddb
V_flashbang232: oh idk what happened
curiousj: ya, but if i'm accused of being you, there's a problem
V_flashbang232: he accused me of being you
V_flashbang232: i explained everything to him
curiousj: one round of ta and it's obvious we are not the same
curiousj: thx!
V_flashbang232: np
V_flashbang232: i even told him to ask stchurdak
V_flashbang232: the other day when you guys were playing
V_flashbang232: i was online/active
curiousj: ha!
V_flashbang232: cannot be playing ta and chatting on GR at same time. GR would give me that wierdo warning message, and i would have to abort ta to chat on GR
curiousj: ya, i've seen that before, "(username) did not recieve your private message because they are in a game"
V_flashbang232: nah
V_flashbang232: your in a room
V_flashbang232: playin ta
V_flashbang232: cannot alt tab to chat
V_flashbang232: try it next time
curiousj: kk
V_flashbang232: window comes up sayin you gotta exit ta to do that or whatever
curiousj: i tried to talk to you once, but you were in a game
V_flashbang232: im also cuttin + pastin our conversation to pm to prove to him
curiousj: that's where i got it
curiousj: thx dude!
curiousj: all he has to do is play one game of ta with me and he'll realize that we are not the same
V_flashbang232: im putting it in a blog
V_flashbang232: its too long for a pm xD
curiousj: ha!
V_flashbang232: and imah hand him the link
V_flashbang232: lol your all excited
V_flashbang232: =P
curiousj: again, tell him to play me and it will settle it out, i am terrible at TA compared to ypou
curiousj: you*

c) logical reasoning

cannot play myself at ta; as mentioned before, computers are limited and stuff. yes yes, ask stchurdak. cant chat with myself on GameRanger; i have 1 comp and GR does not support logging in from multiple comps.

a couple of days ago i was online with GR. curiousj was too. he was playing stchurdak. i believe i was talking to V_SA1GON (PlayerV). due to the way GR runs, people cannot chat while they're playing a game; a message pops up saying they gotta abort ta to continue chatting. so i could not be curiousj. reasoning and the course of events create a stacked deck against the claim that i am.
          Total Annihilation - Free up RAM space / lessen lag (for better pathfinding, etc...)        
alright, dont mind my typos, i got a new laptop with a new keyboard and i tend to type fast without looking at the keys. Its different. But anyways, over the past 7.5 months i have been experimenting online and in skirmish with ta and it's faults, and im here to present some of the knowledge i aquired to you. Well i have windows seven now, so no need to use my old xp computer for ta, but it served good in experimenting with lag. it had over 1.5 gig ram but still was, in fact, a slow piece of shit, so i decided to experiment. (you can skip this part) well here are some of the things i did to slow lag. (i used mods and the regular install of ta btw) i actually pressed "ctrl" repeatedly and lowered lag remove tademo.ufo if found from ta directory. not needed. remove every dll file except: (if found) dplayx.dll, ddraw.dll. do not remove smackw32.dll as well or else you will make ta non-functional. this is like only if you are running ancient systems such as 95, 98, or 2000, or even a very slow version of xp for online: you can use multidump, found on download, install anywhere where you can find it, run multidump.exe now, go to ta directory. [if found] rename ccdata.ccx to ccdata.ufo rename totala1.hpi to totala1.ufo. if you have rev3.gp3 in there and it isn't found on multidump (or scanned), rename it to rev31.ufo you might want to back up these files now, open multidump, (if you closed it out), scan ta directory, extract all files. choose to pack into a monolithic archive (name it totala1.hpi) pack it up! (into ta directory) in-game this solution is really cool. in-game, ta uses RAM space to control unit pathfinding. (ex, how a unit moves around a hill) when you have a shitload of units, pathfinding gets worse. not only does this solution [im about to explain] free up ram for pathfinding, but it also reduces unit count in ta AND frees up a bit of RAM. 
Once you go Tech 2 and have some Mohos/fusions up, either reclaim or self-destruct your Tech 1 resource buildings (Tech 1 metal extractors, solars, tidal generators...). Reclaiming is the preferred choice, because self-destructing your units causes collateral damage to nearby structures. This cuts lag by about 5-10%, and in some cases 15%. Also: empty the recycle bin, delete unnecessary files from your computer (frees up hard drive space), defragment your hard disk drive and, if you have to, get a new computer.
          Spoof Mailing - Send fake email        
Spoof mailing is a technique for sending email as someone else. It means that if my email id is
then sending someone an email that appears as if it was sent by me; in other words, it appears to the receiver that he/she has received an email from
There are many free sites available which enable you to send fake emails for free. Some of them are:
and many more...
But the problem is that most of these services do not mail the message instantly. Moreover, most of them can easily be identified. I prefer to send mail from my own script.
It's really easy if you know a little bit of PHP, but I have shared a download link for the script.
All you have to do is register and get a free hosting account (say on and unzip the file.
Keep both the files in the same folder (or you may upload the folder itself, if possible). Suppose you got and uploaded the folder in the main directory; then you can access the prank mail sender on
Please note that you may be violating the terms and conditions you agreed to while signing up. Also, some hosting sites do not allow such scripts to run.
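The core trick such a script relies on can be shown in a few lines. Here is a minimal Python sketch (not the PHP script from the post; all addresses and the server name are hypothetical placeholders): SMTP itself does not verify the From header, so a spoof mailer simply sets it to another person's address.

```python
from email.message import EmailMessage

# Build a message whose From header is forged. The addresses below are
# placeholders, not real accounts.
msg = EmailMessage()
msg["From"] = "someone.else@example.com"   # the forged sender
msg["To"] = "recipient@example.com"
msg["Subject"] = "Hello"
msg.set_content("This message will appear to come from someone else.")

print(msg["From"])

# Actually sending it would require an SMTP relay willing to accept the mail:
# import smtplib
# with smtplib.SMTP("smtp.example.com") as s:
#     s.send_message(msg)
```

Most modern receivers check SPF/DKIM/DMARC, which is exactly why such messages "can easily be identified", as noted above.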


You may also want to read: Backtracking Emails

          Netatalk 2.2.2 updates UAM naming convention        
Netatalk 2.2.x renames the DHX UAM modules. People should check their uams directory and update afpd.conf appropriately.
          ABSPD Module 4 / / Card Crazy Live Brief!        
We are bursting with excitement about some of the incredible ***LIVE BRIEFS*** that we have in store for you! Set by a hand-picked selection of ***TRULY AMAZING COMPANIES*** in Module 4 of 'The Art and Business of Surface Pattern Design (the e-course)'! Module 4 is an advanced seven-week course in which you will learn everything you need to know about building your professional portfolio. The course is crammed with masterclass video tutorials, regular live briefs from real companies, templates, tons of brilliant inspiration, invaluable content, a weekly Q&A with Rachael, & individual work reviews from either Rachael or another flourishing designer! PLUS you will get six months’ FREE access to the MOYO Directory, where you can create a bespoke profile, upload unlimited portfolios and attract potential clients. This is a rare opportunity to build your commercial portfolio in a structured way, with the support & guidance to boost your confidence & make you raring to go!
Card Crazy are a UK-based greetings card publisher focused on making designers stand out from the crowd! They invest a lot of time in designing beautiful greetings cards & providing a no-nonsense service that delivers at all times. They love to celebrate & showcase their designers, ensuring that they share in the success of their designs! We love our own collection with Card Crazy & we are very excited for them to share their Live Brief! You can register here for March 2014, when Module 4 begins! This is an amazing opportunity to gain REAL design experience from a brilliant brief along with the chance to present your work to an innovative company. Rachael has worked with Card Crazy on her own illustration & pattern card ranges & loved designing for them!
Many of our graduates are now big players within the design industry & we're proud to say we have really nurtured some amazing talent! You can see more success stories from current students & Alumni in this Facebook album or read some of our lovely testimonials. We are so pleased that the course & our Alumni have received outstanding industry recognition, as we have been going from strength to strength since our initial launch at the end of 2011.
So far we have lined up wonderful textile company Dashwood Studio (see the announcement post here) & fantastic modern homewares company DENY Designs (see the reveal post here).

          ABSPD Module 4 / / DENY Designs Live Brief!        
We are bursting with excitement about some of the incredible ***LIVE BRIEFS*** that we have in store for you! Set by a hand-picked selection of ***TRULY AMAZING COMPANIES*** in Module 4 of 'The Art and Business of Surface Pattern Design (the e-course)'! Module 4 is an advanced seven-week course in which you will learn everything you need to know about building your professional portfolio. The course is crammed with masterclass video tutorials, regular live briefs from real companies, templates, tons of brilliant inspiration, invaluable content, a weekly Q&A with Rachael, & individual work reviews from either Rachael or another flourishing designer! PLUS you will get six months’ FREE access to the MOYO Directory, where you can create a bespoke profile, upload unlimited portfolios and attract potential clients. This is a rare opportunity to build your commercial portfolio in a structured way, with the support & guidance to boost your confidence & make you raring to go!


We are so delighted to announce that modern, think-outside-the-box home furnishings company DENY Designs will be writing a Live Brief for Module 4! This global, prestigious company offers products to transform the home while supporting art communities all over the world. They believe in each product being custom-made for each & every customer using state of the art printing methods to create fresh, vibrant colours & high-end quality products. DENY Designs are a passionate group of people who want to inspire & be inspired, create & be visionaries & they want to share this with designers & customers from across the globe!

This is an amazing opportunity to gain REAL design experience from a brilliant brief along with the chance to present your work to an innovative company. DENY Designs is a company that really shows the impact & power of surface pattern design & the range it can have. Currently DENY Designs produces over 30 homewares products with plans to expand even more in 2014! It is wonderful to see the versatility you can get from one design! Their team of artists even includes some of our ABSPD Alumni such as Loni Harris, Wendy Kendall, Tammie Bennett & Vy La! Rachael has collaborated with DENY Designs herself & cannot sing their praises enough!

Many of our graduates are now big players within the design industry & we're proud to say we have really nurtured some amazing talent! You can see more success stories from current students & Alumni in this Facebook album or read some of our lovely testimonials. We are so pleased that the course & our Alumni have received outstanding industry recognition, as we have been going from strength to strength since our initial launch at the end of 2011.

          USA Work Trip / / ABSPD Alumni Meet Up        
On a recent work trip to Boston, I was fortunate enough to meet up with a lovely group of the students & Alumni of 'The Art and Business of Surface Pattern Design (the e-course)' (from left to right: Alik, Margaret, me, Hannah & Mary). It is always wonderful to get to meet the students in person & chat about their experiences & successes during & after the course & I feel so grateful to be part of their journeys. We met at Wagamama & enjoyed a lovely meal together; it was such a fun evening & they were all such a fab group of talented ladies.

Are you having your own ABSPD meet-up? Let us know by leaving a comment or on the ABSPD Facebook page & we will share them with our fantastic community!

You can find out more about my trip to the USA on previous posts including my inspirational trips to the Boston Marina & Boston Aquarium, the Pumpkin Patch & getting to meet my agent Lilla Rogers for the first time!

Alik Arzoumanian is an illustrator & surface pattern designer living in Cambridge, MA, USA. She was trained as an illustrator at the Massachusetts College of Art and Design. Since graduating in 2004, Alik has mainly been illustrating picture books, until she decided to pursue her love of surface pattern design. Alik just loves making patterns & likes to think of her patterns as being simple, bold, & playful. If you want to know more about Alik, visit her blog, website or find her on The MOYO Directory.

Margaret Applin is an artist/designer living in Lowell, Massachusetts. Born & raised in New England, Margaret's art is full of inspiration taken from the natural landscape & the changing seasons. Margaret explores various techniques such as stamp & stencil making, drawing, mono printing, screen printing, or straight digital design to develop the building blocks that define her artwork. Margaret's style is characterised by simple images that are transformed into quirky, casually elegant, fresh & modern design expressions. See more of Margaret's work on her website, blog or on The MOYO Directory.

Hannah Milkins is a lover of all things decorative, swirly and beautiful. Currently she is an aspiring pattern designer & illustrator based out of New York's Capital Region. Since taking the Art and Business of Surface Pattern Design course she has been having fun experimenting, refining her style & building a professional portfolio. She looks forward to continuing her pattern journey as she makes new friends & takes next steps into the wonderful world of design! Find more of Hannah's work on her website or her MOYO Directory profile.

Mary Tanana is a designer who is inspired by many things: art, photography, gardening, landscapes and nature. She’s had a life-long love affair with anything patterned, & after a long & successful career as an award-winning jewellery designer, she has once again embraced her first love of surface pattern design. Mary originally studied fashion illustration, & has adapted her fascination with textures & patterns into a unique design style. Her style is influenced by the diverse art & architecture she observed while traveling & living overseas. Follow Mary on her blog or Facebook or take a look at her portfolio.

          MOYO Magazine / / Christmas Gift Guide!        
We are excited to bring you the downloadable MOYO Festive Gift Guide. Packed full of unique gift ideas from talented designers, we hope this guide gives you ideas for presents to delight this holiday season. From pretty Christmas tree baubles to gorgeous gift wrap, you are sure to find something just perfect for friends and loved ones. The products come from around the world and all the featured designers are members of the MOYO Directory, or alumni of The Art and Business of Surface Pattern Design.
This special downloadable version is a gorgeous selection of our favourite products that have been sent in. All of the entries are being featured on the Make it in Design blog here until Tuesday 17th December. Below are just a few of the beautiful pages inside the guide.
To download the gift guide simply go here. If you are on an iPad click here to view the magazine. If you want to download this guide or any of our MOYO magazines, here are some simple guidelines to help you: Downloading MOYO.
Please feel free to share this page and link to the Gift Guide on your blog, in your newsletters and tell all your friends about it. Wishing you all a wonderful festive season, from the Make it in Design team!

          ABSPD Module 4 / / Dashwood Studio Live Brief!        
We are bursting with excitement about some of the incredible ***LIVE BRIEFS*** that we have in store for you! Set by a hand-picked selection of ***TRULY AMAZING COMPANIES*** in Module 4 of 'The Art and Business of Surface Pattern Design (the e-course)'! Module 4 is an advanced seven-week course in which you will learn everything you need to know about building your professional portfolio. The course is crammed with masterclass video tutorials, templates, tons of brilliant inspiration, invaluable content & individual work reviews from either Rachael or another flourishing designer! PLUS you will get six months’ FREE access to the MOYO Directory, where you can create a bespoke profile, upload unlimited portfolios and attract potential clients. This is a rare opportunity to build your commercial portfolio in a structured way, with the support & guidance to boost your confidence & make you raring to go!

We are thrilled to announce that wonderful textile company 'Dashwood Studio' will be writing a Live Brief! Dashwood Studio specialise in producing beautiful, design-led fabric collections for today's quilt & home-sewing communities. They are passionate about what they do, taking a fresh approach to the design process, seeking & collaborating with the best design talent the UK has to offer! Their team of designers includes three of our talented ABSPD Alumni: Phyllida Coroneo, Wendy Kendall & Bethan Janine!
Many of our graduates are now big players within the design industry & we're proud to say we have really nurtured some amazing talent! You can see more success stories from current students & Alumni in this Facebook album or read some of our lovely testimonials. We are so pleased that the course & our Alumni have received outstanding industry recognition, as we have been going from strength to strength since our initial launch at the end of 2011.

          Comment on User report by Sean Marx        
Hi Stephan, this report is part of a custom set of reports that I have built for the client. I normally create a reporting dashboard plugin in the /local directory with a few simple reports, and let it evolve around the client's needs. Custom reports are a common request, so building a dashboard to hold them makes sense.
          Grails application now working on GlassFish v3        

Recently Guillaume reported to me that his Grails app was not deploying on GlassFish v3 Preview 2. The problem reported was that the Grails app was taking a very long time to deploy. Such failures are not acceptable, but consider that GlassFish v3 is a completely new architecture, is still under development and feature-incomplete, and above all that the preview releases do not go through the normal test cycle, so such bugs can appear.

This issue was discussed at the GlassFish mailing list, see the discussion here and the corresponding bug.

The good news is that Jerome quickly found out what the problem was; after the code went through review, it was checked in and the fix went into yesterday's nightly build.

Here is how I created and deployed a Grails application on GlassFish v3:

Create a simple Grails app

Setup Grails. This is the standard way you would set up the Grails environment:

    export GRAILS_HOME=/tools/grails
    export PATH=$GRAILS_HOME/bin:$PATH



Create a simple Grails app
    vivekmz@boson(555)> grails create-app MyFirstGrailsApp
Welcome to Grails 1.0.1 -
Licensed under Apache Standard License 2.0
Grails home is set to: /tools/grails
Base Directory: /ws/sb
Environment set to development
Note: No plugin scripts found
Running script /tools/grails/scripts/CreateApp.groovy
[mkdir] Created dir: /ws/sb/MyFirstGrailsApp/src
[mkdir] Created dir: /ws/sb/MyFirstGrailsApp/src/java
[mkdir] Created dir: /ws/sb/MyFirstGrailsApp/src/groovy
[mkdir] Created dir: /ws/sb/MyFirstGrailsApp/grails-app
[mkdir] Created dir: /ws/sb/MyFirstGrailsApp/grails-app/controllers
[mkdir] Created dir: /ws/sb/MyFirstGrailsApp/grails-app/services
[mkdir] Created dir: /ws/sb/MyFirstGrailsApp/grails-app/domain
[mkdir] Created dir: /ws/sb/MyFirstGrailsApp/grails-app/taglib
[mkdir] Created dir: /ws/sb/MyFirstGrailsApp/grails-app/utils
[mkdir] Created dir: /ws/sb/MyFirstGrailsApp/grails-app/views
[mkdir] Created dir: /ws/sb/MyFirstGrailsApp/grails-app/views/layouts
[mkdir] Created dir: /ws/sb/MyFirstGrailsApp/grails-app/i18n
[mkdir] Created dir: /ws/sb/MyFirstGrailsApp/grails-app/conf
[mkdir] Created dir: /ws/sb/MyFirstGrailsApp/test
[mkdir] Created dir: /ws/sb/MyFirstGrailsApp/test/unit
[mkdir] Created dir: /ws/sb/MyFirstGrailsApp/test/integration
[mkdir] Created dir: /ws/sb/MyFirstGrailsApp/scripts
[mkdir] Created dir: /ws/sb/MyFirstGrailsApp/web-app
[mkdir] Created dir: /ws/sb/MyFirstGrailsApp/web-app/js
[mkdir] Created dir: /ws/sb/MyFirstGrailsApp/web-app/css
[mkdir] Created dir: /ws/sb/MyFirstGrailsApp/web-app/images
[mkdir] Created dir: /ws/sb/MyFirstGrailsApp/web-app/META-INF
[mkdir] Created dir: /ws/sb/MyFirstGrailsApp/lib
[mkdir] Created dir: /ws/sb/MyFirstGrailsApp/grails-app/conf/spring
[mkdir] Created dir: /ws/sb/MyFirstGrailsApp/grails-app/conf/hibernate
[propertyfile] Creating new property file:
[copy] Copying 2 files to /ws/sb/MyFirstGrailsApp
[copy] Copying 2 files to /ws/sb/MyFirstGrailsApp/web-app/WEB-INF
[copy] Copying 5 files to /ws/sb/MyFirstGrailsApp/web-app/WEB-INF/tld
[copy] Copying 87 files to /ws/sb/MyFirstGrailsApp/web-app
[copy] Copying 17 files to /ws/sb/MyFirstGrailsApp/grails-app
[copy] Copying 1 file to /ws/sb/MyFirstGrailsApp
[copy] Copying 1 file to /ws/sb/MyFirstGrailsApp
[copy] Copying 1 file to /ws/sb/MyFirstGrailsApp
[propertyfile] Updating property file:
Created Grails Application at /ws/sb/MyFirstGrailsApp

Start GlassFish v3

    vivekmz@boson(735)> glassfish/bin/asadmin start-domain
Mar 7, 2008 7:43:04 PM com.sun.enterprise.v3.server.AppServerStartup run
INFO: HK2 initialized in 281 ms
Mar 7, 2008 7:43:04 PM com.sun.enterprise.v3.server.AppServerStartup run
INFO: com.sun.enterprise.naming.impl.ServicesHookup@51b48197 Init done in 307 ms
Mar 7, 2008 7:43:04 PM com.sun.enterprise.v3.server.AppServerStartup run
INFO: Init done in 310 ms
Mar 7, 2008 7:43:04 PM com.sun.enterprise.v3.server.AppServerStartup run
INFO: com.sun.enterprise.v3.server.SystemTasks@1fd0fafc Init done in 382 ms
Mar 7, 2008 7:43:04 PM com.sun.enterprise.v3.server.AppServerStartup run
INFO: Init done in 411 ms
Mar 7, 2008 7:43:04 PM com.sun.enterprise.v3.server.AppServerStartup run
INFO: Init done in 413 ms
Mar 7, 2008 7:43:04 PM start
INFO: Listening on port 8080
Mar 7, 2008 7:43:04 PM start
INFO: Listening on port 8181
Mar 7, 2008 7:43:04 PM start
INFO: Listening on port 4848
Mar 7, 2008 7:43:04 PM com.sun.enterprise.v3.server.AppServerStartup run
INFO: startup done in 630 ms
Mar 7, 2008 7:43:04 PM com.sun.enterprise.v3.server.AppServerStartup run
INFO: startup done in 732 ms
Mar 7, 2008 7:43:04 PM com.sun.enterprise.v3.server.AppServerStartup run
INFO: Glassfish v3 started in 733 ms

You can see above it took 733ms to boot up!

Deploy the Grails App

Now that I have built MyFirstGrailsApp, it is time to deploy. So first I will create a war file:

    vivekmz@boson(558)> cd MyFirstGrailsApp/
vivekmz@boson(559)> grails war


Now let's deploy to GlassFish v3:
    vivekmz@boson(749)> ../glassfish/bin/asadmin deploy MyFirstGrailsApp-0.1.war
SUCCESS : MyFirstGrailsApp-0.1 deployed successfully

The server log shows it took about 9.7 seconds to deploy:

[#|2008-03-07T20:19:03.580+0000|INFO|GlassFish10.0||_ThreadID=12;_ThreadName=Thread-4;|Deployment of MyFirstGrailsApp-0.1 done is 9765 ms|#]

Now when I access http://localhost:8080/MyFirstGrailsApp-0.1/ my Grails app appears in Firefox:


GlassFish v3 has been going through continuous improvements and the development team is busy making it rock solid while adding new features. Continue sending your feedback to


          more domain issues        
Originally Published 2003-10-03 08:08:05

From: Alison Stone

To: fuery

Sent: Thursday, October 02, 2003 12:27 PM

Subject: Continuing website problems

Dear Johnny,

I'm writing on behalf of Lisa. It appears the is once again linked to the porno site. Any suggestions?




Hello Alison,

The problem hasn't changed. Since your company doesn't own the domain, the new owner can post (and has posted) anything there he or she wishes.

What we were able to do was get it removed for a while and update the search engines' databases so that a search for your company name no longer sends potential customers to

In my last correspondence with Lisa, I touched on this:


Date: Thu, 28 Aug 2003 11:55:39 -0700 (PDT)

From: "Johnny Fuery"

Subject: RE: questions

To: "Lisa F

CC: "Stephen N

> 1) If the site is now being redirected to, do we need to do anything else??

Well, it's not being redirected. Links directly to are simply going nowhere -- your browser displays a "Page Not Found" error. The domain is effectively dead.

> 2) Do we have to buy back the site from Slutnames.whatever or will the redirect stand? And do we know whether or not they actually "own" it or stole it?

I'm pretty sure they own the lease to fair and square. Unethically, perhaps, but not

If you actually want a redirect, i.e., typing in "" yields your site located at, then yes, control of the domain must be reobtained from Smutnames.

Please note that much of the damage has been done, however. The prominence of mncabinet in the google search results went away when we asked google to remove it from its directory and cache (remember, it was linking to the smutnames obscenities). I'm not sure if you agree, but it seems to me that the largest value of that domain was in its prominence in the google engine. Even if it were recovered, I'm unfortunately not confident that the google placement would return. It very well may, but it's difficult to guarantee that behavior.

[end snip]

In terms of options, there are only two choices at this point:

+ continue the efforts to phase out the dead domain. Since the search engines have been updated already, this means making sure that all of the sites you control do not refer to it. It also means keeping your eyes open for links to it in "the wild", i.e., on the internet at large on sites you do not control, and contacting the webmasters of those sites to ask them to update their sites. This actually tends to happen on its own (webmasters don't appreciate linking to smut either), but you should nonetheless make certain your own sites are completely updated. This effort is largely complete already; based on the lack of response to my last message to Lisa (quoted above), I had thought this course of action was the decision.

+ buy control of the domain from its new legitimate owner. He quoted me $500 in my initial correspondence with him. I can handle this, or I can simply forward you his contact information.

Let me know what I can do for you.

btw, thanks for the opportunity to work with you.

Johnny Fuery


          Swagger IncludeXmlComments PlatformServices (obsolete) replacement        

We migrated a project to ASP.NET Core 2 (preview) and needed to configure swagger.

In ASP.NET Core v1 we used this code to load the auto generated xml file into swagger:

services.ConfigureSwaggerGen(options =>
{
    // Determine base path for the application.
    var basePath = PlatformServices.Default.Application.ApplicationBasePath;

    // Set the comments path for the swagger json and ui.
    options.IncludeXmlComments(System.IO.Path.Combine(basePath, "ProjectName.xml"));
});


Since PlatformServices is obsolete, you shouldn't use it in your (new) projects anymore.
As the GitHub page states, you should use the equivalent .NET API instead.

Here is what I use instead of PlatformAbstractions (PlatformServices) for loading the xml into swagger:

services.ConfigureSwaggerGen(options =>
{
    // Determine base path for the application.
    var basePath = AppContext.BaseDirectory;

    var assemblyName = System.Reflection.Assembly.GetEntryAssembly().GetName().Name;
    var fileName = System.IO.Path.GetFileName(assemblyName + ".xml");

    // Set the comments path for the swagger json and ui.
    options.IncludeXmlComments(System.IO.Path.Combine(basePath, fileName));
});


Hint: you could add a check here that the file really exists before loading it.

          General Directory        
A good general directory to submit your site to is the General Directory. It has good page strength, and inclusion in the directory is free with a reciprocal link, or very cheap ($4 for a regular link and $8 for a featured link) without one.
The directory submission category contains details of directory submission services - companies who will carry out directory submission for your website. These services are a real time saver and can be quite cost-effective; if you value your time, I suggest you consider using one.
          5000 Web Directory Listings        
There are a whole lot of web directories on the internet, both free web directories and paid web directories. The site lists over 5000 different web directories, and is a great place to start if you are thinking of promoting a website through directory submissions.
The site is designed to help directory submitters find what they want, including information relevant to directory submission decisions.
One of the best parts of the site is the niche directory lists; using these lists you can find directories that have a high relevance to your site.
          Comment on Active Directory troubleshooting tools by Adding first Windows Server 2012 Domain Controller within Windows 2003/2008/2008R2 network | vmwindows        
[…] more about Active Directory Troubleshooting Tools check one of my articles on this […]
           Install Nginx, PHP7 and PHP-FPM on CentOS 7        

  1. Install nginx 
    CentOS 7 does not ship nginx in its base repositories, so first go to the official nginx site, find the link to the nginx-release package file for CentOS 7, and install it as follows:
    rpm -Uvh
    After installation, a yum repository definition is created automatically (in /etc/yum.repos.d/nginx.repo), 
    and nginx can then be installed with the yum command:
    yum install nginx
  2. Start nginx 
    Services used to be managed with chkconfig; CentOS 7 manages system services with systemctl instead: 
    systemctl start nginx
    systemctl status nginx
    Check the current startup setting of the nginx service:
    systemctl list-unit-files | grep nginx
    If it is disabled, you can make it start automatically at boot:
    systemctl enable nginx
    If a firewall is configured, check its state and whether the port nginx uses is open:
    firewall-cmd --state
    Permanently open the firewall's http service:
    firewall-cmd --permanent --zone=public --add-service=http
    firewall-cmd --reload
    List the firewall settings for the public zone:
    firewall-cmd --list-all --zone=public
    After the settings above, you should be able to reach nginx's default page in a browser.
  3. Install PHP-FPM 
    Install php, php-fpm and php-mysql with yum:
    yum install php php-fpm php-mysql
    Check the current startup setting of the php-fpm service: 
    systemctl list-unit-files | grep php-fpm
    systemctl enable php-fpm
    systemctl start php-fpm
    systemctl status php-fpm
  4. Change how PHP-FPM listens 
    To make PHP-FPM listen on a unix socket instead, edit /etc/php-fpm.d/www.conf and change 
    listen =
    to
    listen = /var/run/php-fpm/php-fpm.sock
    Then restart php-fpm:
    systemctl restart php-fpm
    Note: do not use listen = /tmp/php-fcgi.sock (placing php-fcgi.sock under /tmp), because when the system creates the socket it ends up in a random private directory such as /tmp/systemd-private-*/tmp/php-fpm.sock, unless you change PrivateTmp=true to PrivateTmp=false in the php-fpm unit file under /usr/lib/systemd/system/. Even then other problems can arise, so the easiest fix is simply to use a different location. 
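For nginx to actually serve PHP through the socket configured in step 4, it needs a matching fastcgi_pass. A minimal server block sketch (the server_name and root below are placeholders, not part of the original notes):

```nginx
server {
    listen 80;
    server_name example.com;              # placeholder
    root /usr/share/nginx/html;
    index index.php index.html;

    location ~ \.php$ {
        include fastcgi_params;
        # must match the listen = path set in /etc/php-fpm.d/www.conf
        fastcgi_pass unix:/var/run/php-fpm/php-fpm.sock;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    }
}
```

Using a unix socket here avoids the TCP loopback overhead of the default listen address when nginx and php-fpm run on the same host.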


    # yum remove php*

    Install the corresponding yum sources for PHP7 via rpm:

    CentOS/RHEL 7.x:

    # rpm -Uvh # rpm -Uvh

    CentOS/RHEL 6.x:
    # rpm -Uvh


    yum install php70w php70w-opcache


    Configure, compile (make) and install (make install)

    Use configure --help


    Configuration:
      --cache-file=FILE       cache test results in FILE
      --help                  print this message
      --no-create             do not create output files
      --quiet, --silent       do not print `checking...' messages
      --version               print the version of autoconf that created configure
    Directory and file names:
      --prefix=PREFIX         install architecture-independent files in PREFIX [/usr/local]
      --exec-prefix=EPREFIX   install architecture-dependent files in EPREFIX



    The directory for executable programs; the default is EXEC-PREFIX/bin
    The directory for read-only files needed by the installed programs; the default is PREFIX/share
    The directory for miscellaneous configuration files; the default is PREFIX/etc
    The directory for library files and dynamically loadable modules; the default is EXEC-PREFIX/lib
    The directory for C and C++ header files; the default is PREFIX/include
    Documentation files (other than man pages) will be installed into this directory; the default is PREFIX/doc
    The man pages shipped with the programs will be installed into this directory, in their respective manx subdirectories; the default is PREFIX/man
    Note: to reduce pollution of shared installation locations (such as /usr/local/include), configure automatically appends the string "/postgresql" to datadir, sysconfdir, includedir and docdir, unless the fully expanded directory name already contains the string "postgres" or "pgsql". For example, if you choose /usr/local as the prefix, the C header files will be installed in /usr/local/include/postgresql, but if the prefix is /opt/postgres they will go into /opt/postgres/include
    DIRECTORIES is a colon-separated list of directories that will be added to the compiler's header file search path. If you have optional packages (such as GNU Readline) installed in non-standard locations, you must use this option, and probably the corresponding --with-libraries option as well.
    DIRECTORIES is a colon-separated list of directories to search for library files. You will probably need this option (and the corresponding --with-includes option) if you have packages installed in non-standard locations.

    • PHP-FPM configuration reference
      pid = /usr/local/php/var/run/
      error_log = /usr/local/php/var/log/php-fpm.log
      listen = /var/run/php-fpm/php-fpm.sock
      user = www
      group = www
      pm = dynamic
      pm.max_children = 800
      pm.start_servers = 200
      pm.min_spare_servers = 100
      pm.max_spare_servers = 800
      pm.max_requests = 4000
      rlimit_files = 51200
      listen.backlog = 65536
      ; set to 65536 because -1 may not actually mean unlimited
      slowlog = /usr/local/php/var/log/slow.log
      request_slowlog_timeout = 10
    • nginx.conf configuration reference 
      user  nginx;
      worker_processes  8;
      error_log  /var/log/nginx/error.log warn;
      pid		/var/run/;
      worker_rlimit_nofile 65535;
      # without this you may see the error: 65535 worker_connections exceed open file resource limit: 1024
      events {
        use epoll;
        worker_connections  65535;
      }
      http {
        include	   /etc/nginx/mime.types;
        default_type  application/octet-stream;
        log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                  '$status $body_bytes_sent "$http_referer" '
                  '"$http_user_agent" "$http_x_forwarded_for"';
        access_log  /var/log/nginx/access.log  main;
        sendfile		on;
        tcp_nopush	 on;
        keepalive_timeout  65;
        server_names_hash_bucket_size 128;
        client_header_buffer_size 32k;
        large_client_header_buffers 4 32k;
        client_max_body_size 8m;
        server_tokens  off;
        client_body_buffer_size  512k;
        # fastcgi
        fastcgi_connect_timeout 300;
        fastcgi_send_timeout 300;
        fastcgi_read_timeout 300;
        fastcgi_buffer_size 64k;
        fastcgi_buffers 4 64k;
        fastcgi_busy_buffers_size 128k;
        fastcgi_temp_file_write_size 128k;
        fastcgi_intercept_errors on;
        # gzip
        gzip  off;
        gzip_min_length  1k;  # only compress responses of at least 1k
        gzip_buffers 32  4k;
          # use `getconf PAGESIZE` to get the system's memory page size
        gzip_http_version  1.0;
        gzip_comp_level  2;
        gzip_types  text/css text/xml application/javascript application/atom+xml application/rss+xml text/plain application/json;
          # see nginx's mime.types file (/etc/nginx/mime.types) for the definitions of the various types
        gzip_vary  on;
        include /etc/nginx/conf.d/*.conf;
      }
      If the error "setrlimit(RLIMIT_NOFILE, 65535) failed (1: Operation not permitted)" appears, check the current limit: 
      ulimit -n
      If the value is too small, edit /etc/security/limits.conf:
      vi /etc/security/limits.conf
      * soft nofile 65535
      * hard nofile 65535



DNS stands for Domain Name System. Its job is to resolve hostnames to IP addresses (forward resolution) and to look up the hostname for a given IP address (reverse resolution).







bind bind-libs bind-utils bind-chroot caching-nameserver




yum install bind bind-libs bind-utils bind-chroot

The version in the update repository here is bind 9.3.6-16.P1.el5; the DNS configuration files live under the /var/named/chroot directory.


cp /usr/share/doc/bind-9.3.6/sample/etc/* /var/named/chroot/etc
cp -a /usr/share/doc/bind-9.3.6/sample/var/named/* /var/named/chroot/var/named

Main configuration file: /var/named/chroot/etc/named.conf — sets general named parameters and points to the sources of the zone databases this server uses.
Root name server hints file: /var/named/chroot/var/named/named.root — points at the root name servers; used for the initial configuration of a caching-only name server.
Forward zone file: /var/named/chroot/var/named/ — the localhost zone file, used to translate the name localhost into the local loopback IP address (forward resolution).
Reverse zone file: /var/named/chroot/var/named/named.local — the localhost zone file, used to translate the local loopback IP address back into the name localhost (reverse resolution).


service named restart


Stopping named: [ OK ]
Starting named: [FAILED]


cat /var/log/messages |grep named


my named[1384]: /etc/named.conf:100 configuring key ‘ddns_key’: bad base64 encoding

The failure is caused by the missing ddns_key. Run /usr/sbin/dns-keygen to generate TSIG keys, then replace the quoted content of
secret "use /usr/sbin/dns-keygen to generate TSIG keys"; in named.conf.
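If /usr/sbin/dns-keygen is not available, a TSIG secret is simply random bytes encoded as base64. A minimal Python sketch (the function name and the 45-byte length are my own choices, picked so that the output is a 60-character string like the secret used in this named.conf):

```python
import base64
import secrets

def make_tsig_secret(nbytes: int = 45) -> str:
    """Return a random base64-encoded secret; 45 bytes encode to 60 chars."""
    return base64.b64encode(secrets.token_bytes(nbytes)).decode("ascii")

print(make_tsig_secret())
```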



vim /var/named/chroot/etc/named.conf


key ddns_key {
algorithm hmac-md5;
secret "5L6JQccNVZ53CHA3iW4VnPgDZXdcX3U3pnhL2txKUsaPqwBRddE58LpA7uiI";
};


options // sets the data-related files; named needs write permission on the data/ directory
logging // debug log
view "localhost_resolver" // local resolution, caching-only nameserver
view "internal" // restricted to internal users on the same LAN
key ddns_key // sets the DDNS key
view "external" // for external users querying this DNS server


cd /var/named/chroot/var/named
chown named:named data


cd /var/named/chroot/var
chmod g+w named

If this directory is not writable, the named service will still start, but the system log will contain a "the working directory is not writable" error.

7. Modify the settings inside view "external" in named.conf:

vim /var/named/chroot/etc/named.conf
recursion yes; // enable recursion
allow-query-cache { any; }; // allow queries to the cache


service named restart
Stopping named: [ OK ]
Starting named: [ OK ]



tail -30 /var/log/messages |grep named



chkconfig --level 345 named on


Take the test.com domain as an example: // web service // name service // mail service // file service


vim /var/named/chroot/etc/named.rfc1912.zones


zone "" IN {
type master;
file "";
allow-update { none; };
};


zone "" IN {
type master;
file "";
allow-update { none; };
};


cd /var/named/chroot/var/named




$TTL  86400
@    IN SOA @  root (
42     ; serial (d. adams)
3H     ; refresh
15M     ; retry
1W     ; expiry
1D )     ; minimum

www   IN A
ns    IN A
work   IN CNAME   www
mail    IN A
@     IN MX 10
ftp     IN A


cp named.local




$TTL  86400
@  IN  SOA  localhost. root.localhost. (
1997022700 ; Serial
28800  ; Refresh
14400  ; Retry
3600000 ; Expire
86400 ) ; Minimum
100  IN  PTR
101  IN  PTR
103  IN  PTR
104  IN  PTR

6. Edit named.conf and add the test.com zone to view "external":

vim /var/named/chroot/etc/named.conf

Since zone "" is defined in the named.rfc1912.zones file, add the following inside view "external":

include "/etc/named.rfc1912.zones";


service named restart




vim /var/named/chroot/etc/named.rfc1912.zones


zone "" IN {
type slave;
file "slaves/";
masters {; }; // fill in the master DNS server's IP here
};


zone "" IN {
type slave;
file "slaves/";
masters {; }; // fill in the master DNS server's IP here
};

2. Edit named.conf and add the named.rfc1912.zones file to view "external":

vim /var/named/chroot/etc/named.conf

Add the following inside view "external":

include "/etc/named.rfc1912.zones";


cd /var/named/chroot/var/named
chown root:named slaves


chmod g+w slaves



service named restart




Alpha 2011-11-21 16:25 Post a comment

          Staff Directory unavailable        

Aug 3, 09:27 AEST
Resolved - This incident has been resolved.

Aug 3, 09:25 AEST
Update - Staff Directory is fully restored and all services are operational.

OneHelp Reference Number: INC0047662
Services Affected: Staff Directory
Impact: Staff/Students/public

Thank you for your patience during this service disruption.

For more information please contact the IT Service Desk on 02 9850 HELP (4357) Opt 2, or email

Aug 3, 09:03 AEST
Investigating - Staff Directory is currently unavailable.

We are working to resolve this and apologise for any inconvenience caused.

OneHelp Reference Number: INC0047662
Services Affected: Staff Directory
Impact: Staff/Students/public

          Firewall maintenance window 6pm - 8pm Wednesday 2 August 2017        

Aug 2, 20:00 AEST
Completed - The scheduled maintenance has been completed.

Aug 2, 18:01 AEST
In progress - Scheduled maintenance is currently in progress. We will provide updates as necessary.

Aug 2, 16:00 AEST
Scheduled - Scheduled IT maintenance is taking place on Wednesday 2 August 2017 from 6pm to 8pm to carry out essential Firewall remediation work.

As a result, the following systems and services may be impacted by intermittent outages that will last up to one hour:

- Truth
- Wiki
- iTeach
- iPrint
- Staff Directory

Stay informed:
- Check the IT Maintenance Window calendar (, or subscribe here ( for dates and times of planned Information Technology downtime windows for 2017.
- Keep up-to-date on the operational status of all IT systems and services by subscribing to real-time notifications at
- Manage notifications by clicking 'manage your subscription' below, or reply 'manage' to SMS notifications.
- Follow @mqu_it on Twitter for breaking alerts and status updates
- Contact the IT Service Desk on 02 9850 HELP (4357), or email

          Truth, Wiki, iTeach, iPrint and Staff Directory are currently unavailable        

Aug 1, 19:59 AEST
Resolved - Access to Truth, Wiki, iTeach, iPrint and Staff Directory is fully restored and all services are operational.

OneHelp Reference Number: INC0047546
Services Affected: Truth, Wiki, iTeach, iPrint and Staff Directory
Impact: Staff and Students

Thank you for your patience during this service disruption.

For more information please contact the IT Service Desk on 02 9850 HELP (4357) Opt 2, or email

Aug 1, 17:46 AEST
Investigating - We are currently investigating an issue impacting Truth, Wiki, iTeach, iPrint and Staff Directory

OneHelp Reference Number: INC0047546
Services Affected: Truth, Wiki, iTeach, iPrint and Staff Directory
Impact: Staff and Students

For more information please contact the IT Service Desk on 02 9850 HELP (4357) Opt 2, or email

          Staff Directory unavailable        

Jul 28, 10:08 AEST
Resolved - Staff Directory is fully restored and all services are operational.

OneHelp Reference Number: INC0047200
Services Affected: Staff Directory
Impact: Staff/Students/public

Thank you for your patience during this service disruption.

For more information please contact the IT Service Desk on 02 9850 HELP (4357) Opt 2, or email

Jul 28, 08:54 AEST
Investigating - Staff Directory is currently unavailable.

We are working to resolve this and apologise for any inconvenience caused.

OneHelp Reference Number: INC0047200
Services Affected: Staff Directory
Impact: Staff/Students/public

For more information please contact the IT Service Desk on 02 9850 HELP (4357) Opt 2, or email

          Hot Pic Of Indian Actress        
Hot Pic Of Indian Actress Biography

Lara Dutta Bhupathi (born 16 April 1978) is an Indian Bollywood actress and former Miss Universe (2000).
She made her Hindi debut in 2003 with the film Andaaz which was a box office success and won her a Filmfare Best Female Debut Award. Dutta next appeared in a series of successful films such as Masti (2004), No Entry (2005), Kaal (2005), Bhagam Bhag (2006), Partner (2007), Housefull (2010), Chalo Dilli (2011) and Don 2 (2011), thus establishing herself as a bankable Bollywood actress.
As of September 2012, Dutta was filming David, which is expected to be released in either late 2012 or early 2013.

Dutta was born to an Indian father and an Anglo-Indian mother. Her father is Wing Commander L.K. Dutta (retired) and her mother is Jennifer Dutta. She has an elder sister, Sabrina, who serves in the Indian Air Force, and a younger sister, Cheryl. Composer and DJ Nitin Sawhney is Dutta's cousin. The Dutta family moved to Bangalore in 1981, where she completed high school at St. Francis Xavier Girls' High School and the Frank Anthony Public School. Dutta graduated in economics with a minor in communications from the University of Mumbai. She is fluent in English, Hindi, Bengali, Kannada, and French.

Dutta won the annual Gladrags Megamodel competition in her native India in 1995, thus winning the right to enter the 1997 Miss Intercontinental Pageant, in which she took first place. Later, she was crowned Femina Miss India Universe in 2000.
At Miss Universe 2000 in Cyprus, she achieved the highest score in the swimsuit competition, and her finalist interview score was the highest individual score in any category in the history of the Miss Universe contest, as a majority of the judges gave her the maximum 9.99 mark. After her final question, in which she delivered a defense of the Miss Universe contest (and of beauty pageants in general), she became the second Indian Miss Universe. Dutta's win led to her appointment as a UNFPA Goodwill Ambassador in 2001.
In the same year, Priyanka Chopra and Dia Mirza won their respective Miss World and Miss Asia Pacific titles which gave India a rare triple victory in the world of beauty pageants.

Dutta signed up for the Tamil film Arasatchi in 2002, but due to financial problems it was only released in mid-2004. She made her Hindi debut in 2003 with the film Andaaz, which was a box office success and won her a Filmfare Best Female Debut Award. She then went on to appear in Bardaasht, which failed to do well at the box office. Her next release, Aan: Men at Work, was also a flop in India. Insan, Elaan and Jurm likewise failed to do well at the box office. However, she then appeared in the highly successful comedy Masti opposite Ajay Devgn.

In 2005, Sania Mirza and Dutta both participated in Kaun Banega Crorepati (season 2) on 13 November 2005. That year, Dutta appeared in Kaal, which was a moderate success at the box office. She then appeared in 2005's biggest hit, No Entry, opposite Anil Kapoor, Salman Khan, Fardeen Khan, Bipasha Basu, Esha Deol and Celina Jaitley. She next appeared in Dosti: Friends Forever, which was only an average grosser in India but became the biggest hit of the year (for a Bollywood film) in overseas markets.
In 2006, Dutta appeared alongside Akshay Kumar in Bhagam Bhag, which was one of the biggest hits of the year, as it collected over 40 crore (US$7.56 million) in India alone.
Dutta's first release of 2007 was Shaad Ali's Jhoom Barabar Jhoom. The film was a box office failure in India but did better overseas, especially in the U.K. She received mixed reviews for her performance in the film. Her later release, Partner, became a "Blockbuster" in India and the third biggest grosser of the year. Dutta then made a special appearance in Om Shanti Om.
In 2008, Dutta lent her voice for the animated film, Jumbo. However, the film failed to do well at the box office. She also made a special appearance in Rab Ne Bana Di Jodi.
Her 2009 release, Blue, was one of the most expensive movies in Indian cinema. Dutta had initially walked away from the project because the movie was shot entirely in the ocean and she did not know how to swim. However, the male protagonist, Akshay Kumar, encouraged her to learn, and she immediately started training with a special coach. Blue was released on 16 October 2009. "The moment I got to know of it, I called Akshay and told him that I wouldn't be able to accept the assignment. He knew the reason behind my decision. Not many people are aware that I had almost drowned while shooting for Andaaz; Akshay had rescued me. When I reminded him I couldn't swim, he told me to forget my phobia and learn swimming pronto," said Dutta. "Today, I feel Blue has not merely made me overcome my phobias, but has also taught me something that will stay with me for the rest of my life." Despite a promising opening, the film failed to do well at the box office. She also appeared in Do Knot Disturb, which likewise opened well at the box office but dropped off in the following days and failed to do well. Her 2010 release Housefull was a major success across India. She starred opposite Akshay Kumar, Deepika Padukone and Riteish Deshmukh. It was the fifth biggest hit in the country, collecting 114 crore (US$21.55 million) at the box office. She played Hetal Petal, one of the main characters in the movie.

In 2011, her first movie as a producer, Chalo Dilli, was released. The film was a decent success, as it was made on a modest budget of 5 crore (US$0.95 million).[citation needed] She then played Ayesha (Don's girlfriend and accomplice in the team) in Don 2: The King Is Back. The film was a major success, collecting approximately 206 crore (US$38.93 million) worldwide in its Hindi version alone.

Dutta at the Launch of Nivea India.
Following her post-pregnancy break from Hindi cinema, Lara Dutta has signed a new movie entitled David. Filming will start by August 2012, and she will have a very important role in the movie, according to its director, Bejoy Nambiar. The movie will also star Vikram, Neil Nitin Mukesh, Tabu and Isha Sharvani.

In September 2010, Dutta became engaged to Indian tennis player Mahesh Bhupathi. They married on 16 February 2011 in a civil ceremony in Bandra, and followed it with a Christian ceremony on 20 February 2011 at Sunset Point in Goa.
On 1 August 2011, Dutta confirmed that she was pregnant with their first child. On 20 January 2012, Bhupathi confirmed via a popular networking site that Dutta had given birth to a baby girl, whom they named Saira.


          Classic cartoons channel        

          Custom Taxonomies In WordPress Plugins        

Taxonomy Support I wrote recently about building a brand directory using a fantastic feature of WordPress called “Taxonomies”. Now that that feature is live I’ve realised that a number of my favourite WordPress plugins simply don’t support taxonomies. Among the … Continue reading

The post Custom Taxonomies In WordPress Plugins appeared first on Lee Willis.

          Macto, a module features spec sheet for authentication        

I am going to talk about a few ways of trying to organize a project, mostly as a way to lay out the high-level requirements for a feature or a module.

I consider filling in one of those to be the act of actually sitting down and thinking about the concrete design of the system. It is not final design, and it is not set in stone, but in general it forces me to think about things in a structured way.

It is not very hard to do, so let us try to do this for the authentication part of the application. Authentication itself is a fairly simple task. In a real corporate environment I'll probably need integration with Active Directory, but I think that we can do with a simple username & password in the sample.

Module: Authentication

Tasks: Authenticate users using name/pass combination

Integration: Publish notifications for changes to users

Scaling constraints:

Up to 100,000 users, with several million authentication calls per day.

Physical layout:

Since the system needs to handle only a small amount of users, we have two separate deployment options, Centralized Service* and Localized Component*. Both options are going to be developed, to show both approaches.




authenticate user
- Based on name & password; lock a user after 5 failed logins
- SLA: less than 50 ms per authentication request 99.9% of the time, for 100 requests per second per server

create new user
- User name, password, email
- SLA: less than 250 ms per request 99.9% of the time, for 10 requests per second globally

change password
- SLA: less than 250 ms per request 99.9% of the time, for 10 requests per second globally

reset password
- SLA: less than 250 ms per request 99.9% of the time, for 10 requests per second globally

enable / disable user
- Disable / enable the option to log in to the system
- SLA: less than 250 ms per request 99.9% of the time, for 10 requests per second globally

You should note that while I don’t expect to have that many users in the system, or have to handle that load, for the purpose of the sample, I think it would be interesting to see how to deal with such requirements.

The implication of this spec sheet is that the system can handle about 8.6 million authentication requests per day per server, and about 860,000 user-modification requests.
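These daily totals follow directly from the per-second SLAs and the 86,400 seconds in a day; a quick back-of-the-envelope check:

```python
SECONDS_PER_DAY = 60 * 60 * 24        # 86,400

# 100 authentication requests/second per server
auth_per_day = 100 * SECONDS_PER_DAY

# 10 modification requests/second globally
mods_per_day = 10 * SECONDS_PER_DAY

print(auth_per_day)  # 8640000
print(mods_per_day)  # 864000
```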

There are a few important things to observe about the spec sheet. It is extremely high level; it provides no actual implementation semantics, but it does provide a few key data items. First, we know what the expected data size and load are. Second, we know what the SLAs for those are.

* Centralized Service & Localized Component are two topics that I’ll talk about in the future.
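The "lock a user after 5 failed logins" rule from the task list can be sketched as a minimal in-memory service. All the names here (AuthService, MAX_FAILURES, the PBKDF2 parameters) are my own illustrative choices, not part of the spec:

```python
import hashlib
import hmac
import os

class AuthService:
    MAX_FAILURES = 5  # lock the account after this many failed logins

    def __init__(self):
        self._users = {}  # name -> record dict

    def create_user(self, name: str, password: str) -> None:
        salt = os.urandom(16)
        digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
        self._users[name] = {"salt": salt, "hash": digest,
                             "failures": 0, "locked": False}

    def authenticate(self, name: str, password: str) -> bool:
        user = self._users.get(name)
        if user is None or user["locked"]:
            return False
        digest = hashlib.pbkdf2_hmac("sha256", password.encode(),
                                     user["salt"], 100_000)
        if hmac.compare_digest(digest, user["hash"]):
            user["failures"] = 0  # a success resets the counter
            return True
        user["failures"] += 1
        if user["failures"] >= self.MAX_FAILURES:
            user["locked"] = True
        return False
```

A real deployment would persist the records and publish the user-change notifications the Integration line calls for; this only demonstrates the lockout semantics.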

          Great Ranking For Boca Raton Regional Hospital, No Ranking For West Boca Hospital        
BOCA RATON, FL ( — Boca Raton Regional Hospital is again ranked a top hospital in Florida by US News and World Report in its annual report.  Neither West Boca Medical Center nor Delray Medical Center made the list (unless you count the directory section). In fact, Delray Medical Center received just one star for…
          Internet Advertising Techniques to Help Get Your Site to the Top        
Internet advertising is a popular way for offline businesses to pull in customers, but online marketing campaigns can utilize more than just the standard article directory promotion methods to pull in traffic and increase revenue. No internet marketing campaign is going to take its advertising efforts offline.
          No more “unknown” icons        
In recent versions of Dolphin, the view sometimes looked like this just after entering a directory. Some of the files and sub-directories have “unknown” icons, which are replaced by the correct icons later. This will not happen any more in Dolphin 4.11. Why did we show you “unknown” icons at all in the first place? […]
          The “S” Trilogy in Search Industry.        
We are all familiar with the search marketing industry. The "S" trilogy means the Search Trilogy, which has three parts.

Search Engine Optimization (SEO) – Organic Marketing
Search engine optimization is the process of improving the quality and volume of web traffic to a website by employing a series of proven SEO techniques that help the website achieve a higher ranking with the major search engines when certain keywords and phrases are entered in the search field.

Popular SEO Techniques

1) Phase I - (Evaluation, Planning & Research)
a) Basic Analysis of Website
b) Analysis Report of Website
c) Keyword Research
d) Competitor Analysis
2) Phase II - (On Page Optimization)
a) Page Optimization
b) Website Design or Re-design
c) Content Pages (Keyword Rich Pages)
d) Blogs (Online Journal)
e) News Feeds
f) Site Usability
g) Optimize Website
h) Editing Content
i) SEO Articles
j) Programming
k) Sitemap (General HTML Sitemap)
l) Google Sitemap (XML Sitemap)
m) Internal Linking
n) Text link navigation
o) Footers
p) Inline text links
q) Installation of Tracking Tools
3) Phase III - (Off Page Optimization)
a) Article Submission
b) Directory Submissions
c) Reciprocal Link Building
d) Press Release Distribution
e) Forums
f) Blogs
g) RSS Feeds
h) Email Marketing
i) Banner Advertising
j) Podcasting
k) Social Networking
l) Social Bookmarking
m) Wikis
n) Non-Orthodox Link Building
iii) Google Groups
iv) Yahoo Groups
o) By commenting in relative blogs.
4) Phase IV - (SEO Monitoring & Reporting)
a) Keyword Position Reporting (SERP)
b) Pages Indexed Reporting
c) Page Rank Reporting
d) Back Link Reporting
e) Web Analytics Reporting

Search Engine Marketing (SEM) – Paid Marketing (PPC)
Pay per click (PPC) search marketing refers to a company paying for text ads to be displayed on the search engine results pages when a specific key phrase is entered by search users. It is so called because the marketer pays each time the hypertext link in the ad is clicked. As a search user, you can identify pay per click text ads because they usually appear under the heading 'Sponsored links'. You can see them on the right side of the web page in Figure 7-4. The position in these paid listings, as they are also known, is determined by the amount a company bids to be near the top of the listing.

To participate in PPC campaigns, clients or their agencies commonly use PPC ad networks or brokers to place and report on pay per click ads on different search engines. It is also necessary to deal directly with Google, which has its own PPC ad programmes such as AdWords Select.

Popular PPC Techniques
1) Market Research
2) Keyword Research
3) Ad copy writing
4) Account set up
5) Bid Management
6) A/B Ad Copy Testing
7) Landing Page Optimization
8) A/B Landing Page Tests
9) Multivariate testing (if required)
10) Conversion Tracking
11) Reporting

Social Media Optimization (SMO) – Online Reputation Management (ORM)
Social media optimization (SMO) is a set of methods for generating publicity through social media, online communities and community websites. Methods of SMO include adding RSS feeds, social news buttons, blogging, and incorporating third-party community functionalities like images and videos. Social media optimization is related to search engine marketing, but differs in several ways, primarily the focus on driving traffic from sources other than search engines, though improved search ranking is also a benefit of successful SMO.

Popular SMO Techniques
1) Preparation
Preparing tags, descriptions, multimedia items, and text content for use in the social Web
2) Optimizing Your Web Site
Creating a social media and Web 2.0 optimized Web site using RSS feeds and other tools, including using WordPress to power your entire site
3) Starting a Blog
Outlines options and best-practices for starting a blog
4) Podcasting and Vidcasting
Creating a podcast or vidcast using blogging platforms
5) Optimizing Your Blog
Tips and instructions for getting best exposure for your blog or RSS feed
6) Social Bookmarking Sites
Getting the most out of sites like Technorati; purpose-built pages explained
7) Crowd-Sourced News Aggregators
Sharing your content on sites like Digg and Propeller
8) Media Communities & Social Calendars
Sharing your multimedia on Web 2.0 sites like Flickr, YouTube, and Upcoming
9) Social Networking & Similar Tools
Tapping the power of social networking sites like Facebook, MySpace, and Squidoo
10) Social Media Newsrooms
Creating the ultimate Web 2.0 tool for your book or business
11) Social Media News Releases
A multimedia version of the standard press release
12) Preparatory Elements for a Social Media Newsroom
Checklist for gathering the content necessary to build a social media newsroom
13) Widgets and Badges
Using widgets and badges from sites like Facebook, MySpace, Widgetbox, and Amazon, that can make your site more interactive, enhance your own Web presence, and promote your blog or RSS feed
14) Advanced Social Media Technologies
Implementing technologies that require a bit more time and commitment, including virtual worlds, Wikis, and Webcasting

          What is SEO?? A Project or Process !!!        
Organic search engine optimization is not a one-off project; it is a process. Like a garden, it needs regular tending. Sometimes you have an existing garden where the plants are overgrown or nothing grows at all. Sometimes you just rope off a portion of the yard and plant things in it. Like a garden, organic search engine optimization takes time and attention. It can be as simple as setting a timer on the watering or as complex as starting from scratch.
Organic SEO is like a garden: if you do not water it, it dies.

Soil considerations:

What is the history of what has been done to your website for SEO purposes?
Do you have pages full of links to websites that have nothing to do with your company, where some of the sites no longer exist, or that even link to an "adult" site? Has someone hidden a bunch of keywords in the same color as the background at the bottom of a page? Perhaps a page stuffs a group of keywords into the HTML code?
(I know this is 2009, but some company, somewhere, has this on their website even now.)

Sunlight, rain and other factors:

Traffic analysis - Are there any statistics for the company's website? What information can the statistics give you on keywords, referrers, exit pages, entry pages, bounce rates, etc.? What statistics program does your company use? We ask our customers to use the website analysis tool that we use, simply because we need the measurements for our customers and also for consistent reporting across all customers.

What can we plant in our garden?

Keyword Research - Collect the client's keywords using the current website metrics and the client's input on what they think someone would type to look for them. We have a method that I have used for many, many years: ask the manager (whoever is at the top), the receptionist, the sales manager, the marketing director, the CFO and the COO (involving those last two helps get buy-in, and also ensures that you get paid!) for search terms, 5-10 apiece, without conferring with each other. It is statistically significant when different people come up with the same phrase more than once. Take these and put them in tables, so we know which terms carry traffic now and later on. This is the main keyword stage.

You can also correlate the keywords with traffic and the lowest bounce rates on the website, compared to what the customer expects. Of course, you should take into account the fact that if the content is not on their website now, they will not be found for it.

What kind of pesticides would be best for the bugs?

Competitive Analysis - Look at the competitors' HTML code: can you see a competitor's SEO strategy? Do they have one? What keywords are the competitors using? How do these differ from your customer's, and how do they compare to the customer directly? Are the competitors using CMS systems, hand-coded HTML, or the services of an SEO company?

These are the companies your customers believe are their competitors in the business, not necessarily on the Internet. Sometimes that works to your advantage: the companies your customers think of as competitors may not be their SEO competitors, and you can use that to take market share from the competition quite fast.

What plants grow best in your city?

Competitive Keyword Research - Which keywords are overcrowded? Which keywords are open? Is your client in a niche with very few people in the channel doing SEO? Or is your client a start-up going into a crowded market such as real estate or insurance, where the competition is brutal?

Audience Metrics - What do customers think of when they are searching? Beyond the website statistics, what does your client think their customers search for? Can you get contact info for some of your client's customers? (Say that three times fast...)

Buy your plants.

Keyword finalization - With the market up and down so much these days, if you tie your KPIs (Key Performance Indicators) to your customer's website traffic, you are indeed asking for a problem. Although some people say keyword search engine ranking is a poor measure of SEO success, I would disagree and say that tracking qualified keyword rankings is important. If 3 out of 10 times a certain keyword phrase motivates visitors to complete your customer's contact form, this is a "qualified conversion phrase". Whether you are #1 or #31 for this term matters to your customers and their potential customers. This is only the beginning of SEO; we have not even started the review of the site yet. Send the final round of keywords to the customer for sign-off before you base your calculations on them. Even at this point, it is a good idea to know what the company's "bread and butter" is and which areas of their organization matter most.

That knowledge will help you select your first keyword focus for rapid ROI, and also tells you which areas you need to grow into. The cost of SEO should bring the customer at least a three-fold return on their investment.

Baseline - Where does the client rank in the search results of Google, Yahoo! and MSN for the agreed-upon, targeted keyword phrases? You can shoot yourself in the foot if you try to measure anything vague; your best bet is to make the cash register ring and track qualified keywords on a monthly basis to show progress. You can also measure the customer's market share for the specified keywords and show them a competitive baseline.


Planning - What should be done, and who are the people involved on the client and agency side? You do not want two people watering the crops at the same time.

Priorities - What should be done first, so that no effort or expense of the customer is wasted? Is there a new site in the company's future? Plan, so that you are not wasting the customer's money.

Garden Care

The biggest mistake in SEO is thinking that you "do" SEO once on a website and can then just walk away. Like a garden, if there is no water, the plants die. The nature of SEO is simply this: the rules or procedures that hold today may or may not hold tomorrow. Google is usually the target we aim for in SEO, but the target moves fairly often. Google, in the fight