Widescreen Gaming Forum

A Web community dedicated to ensuring PC games run properly on your tablet, netbook, personal computer, HDTV and multi-monitor gaming rig.

All times are UTC [ DST ]




Post new topic Reply to topic  [ 107 posts ]  Go to page Previous  1, 2, 3, 4, 5, 6, 7 ... 11  Next
PostPosted: 22 Jan 2009, 11:59 
Insiders

Joined: 14 Apr 2007, 02:13
Posts: 1514
Yeah, I read into how to compile your own 64-bit version. It's supposed to be about 10% faster, but offers fewer options if you use the 64-bit version.


You don't lose any options with a 64-bit x264. You have the same arguments and capabilities as before. The only area that really suffers is when dealing with Avisynth. You need a 64-bit Avisynth to use with a 64-bit x264. While 64-bit builds of Avisynth do exist, almost all of the useful plugins are 32-bit only. This is why you use the avs2yuv pipe. The pipe basically reads the Avisynth script and pipes it to x264. This lets you use 32-bit Avisynth with 64-bit x264. There is some performance loss, but it is still faster than using a 32-bit x264.

Also, if you compile it with ICC instead of GCC, you can gain even more performance.


Man, 150 or 300 FPS game files? Mine are massive at only 30 FPS. I wonder if my computer would explode trying to write five times the amount of data while recording at that rate. Also, since you cut it down to probably 30 FPS anyway, is there a real noticeable advantage?


Like I said, you capture at a high framerate to calculate the motion blur. The higher the FPS, the more accurate and better looking the motion blur is. There is definitely a visual difference on the final output.


Also, I did my research on trellis like I said. I have multiple sources that say trellis is not a good idea for CRF or any other kind of constant quantizer, as it can give odd and unpredictable results. If anything, use trellis 1, they say.


If you want maximum quality, you should use the line I posted before. Using trellis 2 will give you better results. Your sources might be outdated or inaccurate. Keep in mind that other arguments affect it, as well. However, for maximum quality, the line I provided is really as far as you want to take it.


So here is a question for you: the command to make the video more web-compatible so that it can download in progressive scan mode. I know this helps if you host your file directly, but do you think this aids or worsens things for YouTube? I can't seem to find the answer on Google.


Progressive as opposed to interlaced? Always go progressive. Games are recorded in a progressive manner, so leave them that way. Making them interlaced will only make playback worse. If you have interlaced content, deinterlace it.


Are these native x64 builds here any good? - http://forum.doom9.org/showthread.php?t=89979 Or do you think making your own or using the pipeline method is better?


If you want to use someone else's builds, then use either of these:
http://x264.nl/
http://techouse.project357.com/

I prefer to compile my own for Windows using ICC. If you use Linux and want to use Avisynth, then you'll need to use Wine and pipe that to x264.

_________________
Widescreen Fixer - https://www.widescreenfixer.org/

Widescreen Fixer Twitter - https://twitter.com/widescreenfixer
Personal Twitter - https://twitter.com/davidrudie




PostPosted: 22 Jan 2009, 14:14 
Editors

Joined: 31 Jul 2006, 14:58
Posts: 1497
Not progressive scan; it is some method of streaming the file.

I think it means it can stream and play rather than have to download completely before it can play, but I'm not sure.

The trellis stuff is new, and it's the people who make the programs on Doom9 saying some of it. I can't see how it was higher quality with a lower file size, and I think it looked worse.

I'll try your exact config though to see what it comes out like.

Edit: man, it's slow :P

I actually added one thing that I know is slowing it down: I put a merange of 32, which is what I use for umh, but for tesa the default of 16 would probably have been fine and much faster.

The encode finished with a file of 25 MB, a bit smaller than my previous files (again, trellis 2 at work mostly, I think). I cannot see any quality improvement, but I do not have a sharp eye, and it's just a Fraps capture. With some true HD footage it would make more of a difference, I think.

I'll have to see what happens if I try to Fraps at 150 FPS. I do not think it will work well; my games do not even run that fast.

If it works, do I just drop the framerate down when I encode it, or is there some special filter you are running in Avisynth for an effect?

_________________
ViciousXUSMC on the Web - YouTube :: FaceBook :: Website


PostPosted: 23 Jan 2009, 02:50 
Insiders

Joined: 14 Apr 2007, 02:13
Posts: 1514
Not progressive scan; it is some method of streaming the file.

I think it means it can stream and play rather than have to download completely before it can play, but I'm not sure.


There are different things that can affect streaming, but I wouldn't call it progressive. ;) The container format, various atoms, etc. can affect streaming and compatibility. However, if you just upload videos to YouTube, other hosting sites, or host them yourself, you don't need to do anything special. If you're going to stream it yourself, use an MP4 container, because I don't think JW FLV Media Player supports MKV.


The trellis stuff is new, and it's the people who make the programs on Doom9 saying some of it. I can't see how it was higher quality with a lower file size, and I think it looked worse.


I actually hang out with the developers of x264 directly and deal with them on a daily basis. I've seen many people come and ask questions about it and the answers are usually pretty clear. :)


I actually added one thing that I know is slowing it down: I put a merange of 32, which is what I use for umh, but for tesa the default of 16 would probably have been fine and much faster.


16 is actually the default merange for any motion estimation method. tesa will provide higher quality than umh, but it's a lot slower. Raising merange will greatly slow down your encode time, especially with tesa. Using umh and the default of 16 is more than fine. Switching to tesa can provide better quality, but there's really no need to raise merange.


The encode finished with a file of 25 MB, a bit smaller than my previous files (again, trellis 2 at work mostly, I think). I cannot see any quality improvement, but I do not have a sharp eye, and it's just a Fraps capture. With some true HD footage it would make more of a difference, I think.


I'm sure some of it has to do with being able to know exactly what to look for. It's like MP3: I can't stand the MP3 format because I can easily hear the defects in it over something like AAC (a far superior format). This is because I know just what to listen for. Likewise, with videos, I know just what to look for, as well.


I'll have to see what happens if I try to Fraps at 150 FPS. I do not think it will work well; my games do not even run that fast.

If it works, do I just drop the framerate down when I encode it, or is there some special filter you are running in Avisynth for an effect?


As I said before, it depends on if the game has a built-in method of dumping frames. Call of Duty, Counter-Strike, World in Conflict, Team Fortress 2, and several other games have built-in methods of recording. Using these methods, the game doesn't actually run at full speed. It will render each frame slowly. What you do is record a demo, and then play it back later and dump the frames of it.

I showed the script I used earlier. You use ImageSource() to load the images in at a specified framerate. You calculate the motion blur from this, and then you cut it down to 30 FPS (or another rate if you want). It should be outputting 30 FPS before you encode it.
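A rough sketch of what such a script can look like (the filename pattern, frame counts, and the use of TemporalSoften as the blur filter are assumptions here, not the exact script from earlier):

```avisynth
# Load a 300 FPS image sequence dumped by the game (pattern is hypothetical).
ImageSource("frames\frame%06d.tga", start=0, end=8999, fps=300)
ConvertToYV12()
# Blend neighbouring frames to approximate motion blur; TemporalSoften with
# maxed-out thresholds acts as a plain temporal average (radius 4 = 9 frames).
TemporalSoften(4, 255, 255, mode=2)
# Keep 1 frame in 10: SelectEvery also scales the frame rate, so 300 FPS
# becomes 30 FPS with the running time unchanged.
SelectEvery(10, 0)
```

The exact blur filter varies; the point is that the averaging happens at the high capture rate, before the decimation to 30 FPS.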

PostPosted: 23 Jan 2009, 07:09 
Editors

Joined: 31 Jul 2006, 14:58
Posts: 1497
Cool, thanks for all the help, dopefish.

I'm doing pretty well now; today's experiment was with psy-rd and psy-trellis. :P

The first time I encoded at all with HandBrake, I had some jumpy framerate issues associated with very fast motion when I panned my view. I think raising the merange to 32 is what fixed it. So I'm not sure, but I think there is some merit in raising the merange for very high-motion stuff.

I think I still have my samples that show this behavior. umh is pretty fast even with the higher range, so I do not mind, but yeah, running a 32 tesa is like shooting yourself in the face, twice.

PostPosted: 23 Jan 2009, 07:39 
Insiders

Joined: 14 Apr 2007, 02:13
Posts: 1514
Cool, thanks for all the help, dopefish.


No problem.


I'm doing pretty well now; today's experiment was with psy-rd and psy-trellis. :P


psy-rdo + trellis = greatness

I actually have a bunch of screenshots from when they first implemented psy-rdo and I was comparing it against fgo. fgo was better at preserving noise at high bitrates, but psy-rdo is good for everything else.


The first time I encoded at all with HandBrake, I had some jumpy framerate issues associated with very fast motion when I panned my view. I think raising the merange to 32 is what fixed it. So I'm not sure, but I think there is some merit in raising the merange for very high-motion stuff.


It could help out for sure. Using tesa with the default of 16 will probably be better than umh with a range of 32.


I think I still have my samples that show this behavior. umh is pretty fast even with the higher range, so I do not mind, but yeah, running a 32 tesa is like shooting yourself in the face, twice.


If you have decent hosting, just host a file yourself with JW FLV player. Otherwise, YouTube, NoobFlicks, WeGame, etc. will just hurt the quality. If you upload it to NoobFlicks, they actually keep the original file so people can optionally download that.

PostPosted: 24 Jan 2009, 16:19 
Editors

Joined: 31 Jul 2006, 14:58
Posts: 1497
Hmm, I wanted to start my guide, but I found one small problem: HandBrake cannot open Fraps files. The ones I had used before were first opened and edited in Sony Vegas and rendered out as HuffYUV .avi files.

So I need to find an easy way to make the files work with HB. I'm thinking Avisynth is going to be the way to do this with a free tool.

Have any hints for me? Avisynth can work as a frame server, can't it? Or is that DGIndex that I am thinking of?

Edit: never mind, it was VirtualDub I was thinking of, and it did not work. But VirtualDub would make an easy-to-use free program that can process the file between the Fraps original and something HB can compress.

You're supposed to be able to use x264 from VirtualDub according to this: http://teek.info/guides/video/x264encode.html

However, I do not have it as an option in my drop-down.

PostPosted: 24 Jan 2009, 19:11 
Insiders

Joined: 14 Apr 2007, 02:13
Posts: 1514
You can use DirectShowSource() to open the FRAPS video with Avisynth. FFmpegSource() might work, too.
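A minimal script along those lines (the path is just an example):

```avisynth
# DirectShowSource() decodes through whatever DirectShow filters are
# installed, so it can usually open a Fraps AVI.
DirectShowSource("C:\Fraps\Movies\clip.avi", audio=false)
# x264 expects 4:2:0, so convert before handing the script to the encoder.
ConvertToYV12()
```

With FFmpegSource(), you would swap the first line for something like FFmpegSource("C:\Fraps\Movies\clip.avi") instead.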

PostPosted: 24 Jan 2009, 20:09 
Editors

Joined: 31 Jul 2006, 14:58
Posts: 1497
You can use DirectShowSource() to open the FRAPS video with Avisynth. FFmpegSource() might work, too.


Yep, I was using

AVISource("C:\Fraps\Movies\MirrorsEdgeOpening.avi", Audio=False)
ConvertToYV12()

as per what Lord Mulder told me to do to get the file to work in Avidemux. It has a built-in proxy function to work with Avisynth, and it worked as far as opening the file, but then of course I have no audio. Also, when I rendered the file, I still had the issues I have been having with Avidemux where the file starts off too fast for a few seconds.

So I have been trying all day to find a way to use that same functionality with HB. I have tried a few different frame-serving methods with VirtualDub, and tried to use that script above with HB.

I'll try DirectShowSource and see if it works. I was told not to use that unless I had to.

Edit: not working; both Avidemux and HB won't open my .avs file with the DirectShowSource script.

Edit 2: not sure how Avidemux does it with the proxy, but the HB guys told me you cannot use a frame server with the program because it requires some kind of platform-specific thing that it does not use.

So I am slowly trying to get to the point where I can do things the way you do.

I'm looking at this part of things:

1. Create an Avisynth script with ImageSource(), AssumeFPS(), and ConvertToYV12()
2. Use mplayer to play the avs script and dump it to a yuv4mpeg (y4m) file
3. Use RawSource() to load the y4m
4. Do anything else I want with the clip in Avisynth
5. Use avs2yuv to pipe the script to 64-bit x264 and encode the video
6. Import the .264 file into mkvmerge, set settings, and save as a .mkv file
7. Done
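For reference, the middle of that list — loading the y4m and doing further processing in Avisynth — might look something like this (the filename is assumed, and RawSource() comes from a separate plugin):

```avisynth
# Load the y4m intermediate that mplayer dumped (requires the RawSource plugin).
RawSource("intermediate.y4m")
# Any further processing goes here, e.g. resizing for a 720p encode.
LanczosResize(1280, 720)
```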


For now I want to keep it simple, so no need for filters, and I won't be doing the y4m conversion since I have no massive 300 FPS file like you do. (Though, small question: AssumeFPS to drop it to 30 FPS, right? But it changes the length of the clip, so how do you get your sound to match again?)

I have been using Avisynth a bit now. I did not even know how to use it, since it's command-based, but after I found out all you have to do is make a text file and save it as a .avs, it's been easy. So to experiment, I make a script and open it with Media Player Classic.

Still, it's only a middleman. Say I load up my Fraps file with this code:

Code:
AVISource("C:\Fraps\Movies\MirrorsEdgeOpening.avi", Audio=False)
ConvertToYV12()


It opens in my player but not my encoding programs...

So I want to send it off to x264 directly like you. x264 is also command-line based, right? So how do you run it without a GUI?

As for your final step of mkvmerge, I heard that's a good one and better than its MP4 equivalent. Have any reason you prefer MKV to MP4? For what I am doing, not sure if it matters; either should work.

Just a few more steps to complete the process, and once I have the basics I should just be able to experiment and use guides off Google to get better.

PostPosted: 25 Jan 2009, 08:07 
Insiders

Joined: 14 Apr 2007, 02:13
Posts: 1514
AVISource("C:\Fraps\Movies\MirrorsEdgeOpening.avi", Audio=False)
ConvertToYV12()

as per what Lord Mulder told me to do to get the file to work in Avidemux. It has a built-in proxy function to work with Avisynth, and it worked as far as opening the file, but then of course I have no audio. Also, when I rendered the file, I still had the issues I have been having with Avidemux where the file starts off too fast for a few seconds.


You can use AVISource(), as well. There are many ways to open it.

I always work with audio outside of any program. It's often faster to work that way, too. So you dump the video to one file, and you dump the audio to another. Then you mux them together later. It's a typical process.

I've never had any experience where the file starts out too fast. If it really bothers you, just use Trim() to cut off the beginning part.
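For example, assuming a 30 FPS capture, a hypothetical script like this would drop the first three seconds:

```avisynth
AVISource("C:\Fraps\Movies\clip.avi", audio=false)
ConvertToYV12()
# Trim(first, last); last=0 means "through the end", so this keeps
# everything from frame 90 (3 seconds at 30 FPS) onward.
Trim(90, 0)
```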


For now I want to keep it simple so no need for filters, and I wont be doing the y4m convert since I have no massive 300fps file like you do (tho small question, assumefps to drop it to 30fps right, but it changes the length of the clip, so how do you get your sound to match again?)


Converting it to y4m really only has benefits in certain situations. If you want to apply effects to the video, they can slow down the processing immensely. If you're going to encode it multiple times, like once for 1080p and once for 720p, then you're really going to hurt your encoding time, since it's going to apply those effects both times. It would be more beneficial to make one pass with the effects and output the video to y4m, huffyuv, etc. Now you have your high-quality, lossless source with the effects already applied. With y4m, you can encode the file directly with x264 (either 32-bit or 64-bit). To resize the video, simply create a simple avs script with LanczosResize() in it and either use that avs file directly with 32-bit x264 or pipe it to 64-bit x264 with avs2yuv (still quicker than using 32-bit x264 directly).
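So a resize script for one of the passes could be as small as this (the filename is assumed, and RawSource() is a separate plugin; one such script per target resolution):

```avisynth
# Load the lossless intermediate with the effects already baked in.
RawSource("master.y4m")
# Cheap compared to re-running the effects: only the resize runs per pass.
LanczosResize(1280, 720)
```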

As for AssumeFPS(), you use this on your source. If your source video is 300 FPS, then when you load the video in, you use AssumeFPS(300). When you cut it down, you don't specify another AssumeFPS(); you use SelectEvery() to pull every 10th frame, which automatically drops it down to 30 FPS. This causes absolutely zero issues with audio sync. The audio doesn't know anything about the FPS of the video, nor does it need to. The audio is going to play over one second, so whether there are 30 frames or 300 frames in that second, it won't affect the audio. It won't change the length of the clip at all.


Still its only a middle man, say I load up my FRAPS file with this code

...

It opens in my player but not my encoding programs...


Yeah, you can play it directly to check any effects you apply, how it will look, etc. If you want to use it as a source to encode with, you can almost always just open the .avs file from inside the program.


So I want to send it off to x264 directly like you, x264 is also command line based right? So how do you run it without a GUI?


Just run:

Code:
x264 -o outputfile.264 inputfile.avs

If you're using a 64-bit x264, then run:

Code:
avs2yuv -raw "inputfile.avs" - | x264 -o outputfile.264 - 1920x1080

Replace 1920x1080 with whatever resolution the video is, make note of the hyphens, and add whatever options you want.

This will encode to a raw H.264 bitstream that you will import into an MP4 or MKV container.


As for your final step of MKVmerge I heard thats a good one and better than its MP4 equivalent. Have any reason you prefer MKV to MP4?? For what I am doing not sure if it matters ether should work.


MKV has smaller overhead than MP4, supports far more features, maintains better audio/video sync, supports chapters, etc. There are a lot of good reasons to use it.

The only drawback I've seen so far is that there don't seem to be any players that support streaming it online yet (I haven't looked in months). You can easily pull the raw video and audio out of MP4 and MKV files and switch the containers they use, so it's not like all hope is lost if you use MKV. This makes it easy for me when I do want to stream something online: I use MKV for everything I have, and when I want to stream it, I simply extract the video and audio out of the MKV file and remux it into an MP4 for online streaming.

PostPosted: 25 Jan 2009, 12:17 
Editors

Joined: 31 Jul 2006, 14:58
Posts: 1497
Cool info yet again.

Considering the nature of my problem, I ran into yet another program today: StaxRip. This one is interesting; it's like a mix of Avidemux, because it has a player/editor/filters, and HandBrake, because it's simple to use.

Now, what's interesting about this one is that it uses Avisynth for input. It also uses mkvmerge, MP4Box, and those others too. So it's almost like the automatic version of what you do.

It generates logs and shows you the script of what it does, and that's making it easy for me to learn the scripting for the programs.

The only issue I have with it is you can't edit the script directly in the GUI like you can in HandBrake. For example, it came with an older version of x264, and back then the highest subme setting was 7, and psy-rd did not exist. There is a window where you can add code, but adding subme 9 still made it default to the GUI's 7, and psy-rd failed totally due to the older version of x264.

No biggie though; I just swapped out the x264 file the program uses. It has its own local version in its program folder.

Both the subme and psy-rd issues were fixed in the latest beta of the program, which uses a newer x264.exe. However, I noticed how it keeps all the extra files after the encode.

I have a .wav file from the demuxing and two .avs scripts: one for the input and the second, I guess, for all your filters before the encoding. I did not see the x264 script, but I have not looked super close yet.

So my idea is, if the GUI won't let me edit it, and if it produces the scripts before the action starts, I could just go edit the scripts and have a fully automatic, semi-manual way of doing things. It would make a great learning tool.

Last question for tonight.

Code:
Just run: x264 <options I listed before> -o outputfile.264 inputfile.avs


When I have multiple x264.exe files like I have now, don't I need to specify a path to the exe or something for that to work?
