Team Updates


A VERY INSIGHTFUL CONVERSATION BETWEEN US, AND EVIDENCE POINTING TOWARDS THE FEASIBILITY OF OUR PROPOSED APPROACH!


look at this reddit page

4:23 PM

answer from Ian Goodfellow himself on using GANs in NLP... although we are not doing NLP, he provides some clarification on GANs here: https://www.reddit.com/r/MachineLearning/comments/40ldq6/generative_adversarial_networks_for_text/

Generative Adversarial Networks for Text

What are some papers where Generative Adversarial Networks have been applied to NLP models? I see plenty for images.


"Hi there, this is Ian Goodfellow, inventor of GANs (verification: http://imgur.com/WDnukgP).

GANs have not been applied to NLP because GANs are only defined for real-valued data.

GANs work by training a generator network that outputs synthetic data, then running a discriminator network on the synthetic data. The gradient of the output of the discriminator network with respect to the synthetic data tells you how to slightly change the synthetic data to make it more realistic.

You can make slight changes to the synthetic data only if it is based on continuous numbers. If it is based on discrete numbers, there is no way to make a slight change.

For example, if you output an image with a pixel value of 1.0, you can change that pixel value to 1.0001 on the next step.

If you output the word "penguin", you can't change that to "penguin + .001" on the next step, because there is no such word as "penguin + .001". You have to go all the way from "penguin" to "ostrich".

Since all NLP is based on discrete values like words, characters, or bytes, no one really knows how to apply GANs to NLP yet.

In principle, you could use the REINFORCE algorithm, but REINFORCE doesn't work very well, and no one has made the effort to try it yet as far as I know.

I see other people have said that GANs don't work for RNNs. As far as I know, that's wrong; in theory, there's no reason GANs should have trouble with RNN generators or discriminators. But no one with serious neural net credentials has really tried it yet either, so maybe there is some obstacle that comes up in practice.

BTW, VAEs work with discrete visible units, but not discrete hidden units (unless you use REINFORCE, like with DARN/NVIL). GANs work with discrete hidden units, but not discrete visible units (unless, in theory, you use REINFORCE). So the two methods have complementary advantages and disadvantages."
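
To make Ian's point concrete for ourselves, here is a minimal sketch of a single generator update (assuming PyTorch purely for illustration; not necessarily the framework we will use). The update only works because the synthetic data is real-valued, which is exactly the property our image and sensor data have:

# Minimal sketch (assumes PyTorch; toy shapes, not our actual models) of why the
# discriminator's gradient w.r.t. continuous synthetic data gives the generator
# a usable training signal.
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 2))   # toy generator
D = nn.Sequential(nn.Linear(2, 64), nn.ReLU(), nn.Linear(64, 1))    # toy discriminator
opt_G = torch.optim.Adam(G.parameters(), lr=1e-3)

z = torch.randn(32, 16)          # latent noise
fake = G(z)                      # continuous synthetic data (real-valued, so differentiable)
score = D(fake)                  # discriminator's "realness" score for the fake batch
loss_G = -score.mean()           # generator wants that score to go up

opt_G.zero_grad()
loss_G.backward()                # gradient of D's output w.r.t. the fakes flows back into G
opt_G.step()                     # nudges G so its outputs become slightly more realistic

# With discrete outputs (e.g. word IDs) this backward pass has no meaningful
# gradient, which is the obstacle Ian describes above.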


4:26 PM

good news!

4:27 PM

This is from Ian's answer. So our approach is correct so far: "GANs work by training a generator network that outputs synthetic data, then running a discriminator network on the synthetic data. The gradient of the output of the discriminator network with respect to the synthetic data tells you how to slightly change the synthetic data to make it more realistic. You can make slight changes to the synthetic data only if it is based on continuous numbers."

RISHABH 4:27 PM

https://arxiv.org/pdf/1511.06349v2.pdf

4:27 PM

damn this is good

Shamir 4:27 PM

we are gonna be working on some time series data sets at one point....and those are continuous numbers

4:27 PM

GAN should be good with it

RISHABH 4:27 PM

Yeah Ian Goodfellow has confirmed it

Shamir 4:28 PM

haha

RISHABH 4:28 PM

but this is more towards the NLP side

Shamir 4:28 PM

and for categorical variables...we can just encode them as numeric

RISHABH 4:28 PM

I have done some work on word embeddings and semantic "contextual" search

4:28 PM

yeah

Shamir 4:29 PM

yeah....but the concept is that it works with continuous data...and sensor data is continuous

RISHABH 4:29 PM

"and for categorical variables...we can just encode them as numeric" --> has someone done anything like this?

Shamir 4:29 PM

be it image or just some other form of data

RISHABH 4:29 PM

yes continuous latent space

4:29 PM

embedding then

Shamir 4:29 PM

oh...encoding categorical variables as numeric is standard practice
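
For reference, a minimal sketch of that standard practice (assuming pandas; the "instrument" and "reading" columns are made-up examples, not from our datasets):

# Hypothetical example: turning a categorical column into numbers before feeding
# it to a model such as a GAN. Column names are made up for illustration only.
import pandas as pd

df = pd.DataFrame({"instrument": ["MODIS", "VIIRS", "MODIS", "ASTER"],
                   "reading": [0.42, 0.37, 0.91, 0.15]})

# Option 1: integer codes (compact, but imposes an artificial ordering)
df["instrument_code"] = df["instrument"].astype("category").cat.codes

# Option 2: one-hot encoding (no ordering; one real-valued column per category)
one_hot = pd.get_dummies(df["instrument"], prefix="instrument").astype(float)

features = pd.concat([df[["reading"]], one_hot], axis=1)
print(features)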

RISHABH 4:29 PM

images are easy due to the DCGAN implementation

4:29 PM

done in DCGAN as well




exynosRishabh

UPDATES TO THE GITHUB REPOSITORY:


commit 26278a318c3f649b134a57431e99ae27699476a2 (HEAD -> master, origin/master, origin/dg1223, origin/HEAD)
Author: Shamir Alavi <dg1223@users.noreply.github.com>
Date:   Mon Oct 21 15:15:36 2019 -0400

    Fixed minor typo in parameter description

commit 4d7bc183ce465423a6e718c9d189bd8336f73c81
Author: Shamir Alavi <dg1223@users.noreply.github.com>
Date:   Mon Oct 21 15:13:15 2019 -0400

    Final version of the code

    Final working version that was used during the hackathon (on the last day) to download
    around 6500 NASA (MODIS Terra) Earth images using the GIBS RESTful API

commit d5e849d5d033fe70e4e5c7201d60ba14e4e889f6
Author: RISHABH <55334249+EXYNOS-999@users.noreply.github.com>
Date:   Mon Oct 21 16:19:01 2019 +0800

    NASA_DISCRETE_DATASETS

commit 148e243f15dc0cfd3d0805cda42f84238ed1a244
Author: RISHABH <55334249+EXYNOS-999@users.noreply.github.com>
Date:   Mon Oct 21 12:01:06 2019 +0530

    Autoencoder to Beta-VAE

    DISENTANGLED VARIATIONAL AUTOENCODERS
    https://lilianweng.github.io/lil-log/2018/08/12/from-autoencoder-to-beta-vae.html

commit 3146c70bebc23cc7b5f93a679c79f9c528823ee5
Author: RISHABH <55334249+EXYNOS-999@users.noreply.github.com>
Date:   Mon Oct 21 11:50:30 2019 +0530

    Enhanced Super-Resolution Generative Adversarial Networks

    ESRGAN: Enhanced Super-Resolution Generative Adversarial Networks
    Xintao Wang, Ke Yu, Shixiang Wu, Jinjin Gu, Yihao Liu, Chao Dong, Chen Change Loy, Yu Qiao, Xiaoou Tang
    (CUHK-SenseTime Joint Lab, The Chinese University of Hong Kong; SIAT-SenseTime Joint Lab, Shenzhen
    Institutes of Advanced Technology, Chinese Academy of Sciences; The Chinese University of Hong Kong,
    Shenzhen; University of Chinese Academy of Sciences; Nanyang Technological University, Singapore)

    Abstract. The Super-Resolution Generative Adversarial Network (SRGAN) [1] is a seminal work that is
    capable of generating realistic textures during single image super-resolution. However, the
    hallucinated details are often accompanied with unpleasant artifacts. To further enhance the visual
    quality, we thoroughly study three key components of SRGAN - network architecture, adversarial loss
    and perceptual loss, and improve each of them to derive an Enhanced SRGAN (ESRGAN). In particular,
    we introduce the Residual-in-Residual Dense Block (RRDB) without batch normalization as the basic
    network building unit. Moreover, we borrow the idea from relativistic GAN [2] to let the discriminator
    predict relative realness instead of the absolute value. Finally, we improve the perceptual loss by
    using the features before activation, which could provide stronger supervision for brightness
    consistency and texture recovery. Benefiting from these improvements, the proposed ESRGAN achieves
    consistently better visual quality with more realistic and natural textures than SRGAN and won the
    first place in the PIRM2018-SR Challenge [3]. The code is available at
    https://github.com/xinntao/ESRGAN.

commit 3ecbbde00e50d523eee6018c46799ec4609d9970
Author: RISHABH <55334249+EXYNOS-999@users.noreply.github.com>
Date:   Mon Oct 21 11:36:28 2019 +0530

    SdA ext. classical autoencoder

commit 8633d763ce12fbe20852689a952603b5ec57e67f
Author: RISHABH <55334249+EXYNOS-999@users.noreply.github.com>
Date:   Mon Oct 21 11:31:36 2019 +0530

    Add files via upload

    Implementation of Y. Bengio, P. Lamblin, D. Popovici and H. Larochelle, "Greedy Layer-Wise Training
    of Deep Networks", in Advances in Neural Information Processing Systems 19 (NIPS'06), pages 153-160,
    MIT Press, 2007. Introduced in P. Vincent, H. Larochelle, Y. Bengio and P.A. Manzagol, "Extracting
    and Composing Robust Features with Denoising Autoencoders", Proceedings of the Twenty-fifth
    International Conference on Machine Learning (ICML'08), pages 1096-1103, ACM, 2008.

commit 649fa0433549c80a528c3338a458d6eaeecd10c7
Author: RISHABH <55334249+EXYNOS-999@users.noreply.github.com>
Date:   Mon Oct 21 10:07:11 2019 +0530

    Update README.md

commit 129ebe852ddde0ee9b6cdd1073ebf5e86be0ddc6
Author: RISHABH <55334249+EXYNOS-999@users.noreply.github.com>
Date:   Mon Oct 21 10:06:11 2019 +0530

    Update README.md

commit 02abe9d56e2ef06350d9eca59a899e7f97295c5f
Author: Shamir Alavi <dg1223@users.noreply.github.com>
Date:   Sun Oct 20 23:56:19 2019 -0400

    NASA image downloader, GIBS RESTful API

commit c7a7cfb0d9dc5af1be57f4aeb30e75893bae4262
Author: Shamir Alavi <dg1223@users.noreply.github.com>
Date:   Sun Oct 20 23:39:12 2019 -0400

    IPython notebook, it has our demos

commit b981c48fd184aab1153adb7203b16f50312da07d
Author: Shamir Alavi <dg1223@users.noreply.github.com>
Date:   Sun Oct 20 23:15:36 2019 -0400

    Visualizations of the features

    Meteorite dataset, Near-earth comets dataset

commit e7e459c53332e9d7f1819d15645c1bf812f55e31
Author: Shamir Alavi <dg1223@users.noreply.github.com>
Date:   Sun Oct 20 21:12:49 2019 -0400

    bulk images

commit e3371683e90e77195eb01c9ed07ae9056a516e6d
Author: Shamir Alavi <dg1223@users.noreply.github.com>
Date:   Sun Oct 20 15:41:52 2019 -0400

    Image downloader using GIBS REST API - NASA MODIS earth

commit c68b2748384332f0f67c5d4f16ff6f0a847c4512
Author: Shamir Alavi <dg1223@users.noreply.github.com>
Date:   Sun Oct 20 13:48:15 2019 -0400

    28x28 good and bad images - NASA earth - separate

commit d574e8aec3440bdb98f8c4e9fb7174c6d2a56159
Author: Shamir Alavi <dg1223@users.noreply.github.com>
Date:   Sun Oct 20 12:23:47 2019 -0400

    56x56 images with missing portions

commit f9b3aa2ec8ba60b717f21994c8e0df6379eed889
Author: Shamir Alavi <dg1223@users.noreply.github.com>
Date:   Sun Oct 20 12:18:15 2019 -0400

    All images - good and bad - 56x56

commit e0260fb31a1bb88c0e46017a19a753116f37a010
Author: Shamir Alavi <dg1223@users.noreply.github.com>
Date:   Sun Oct 20 00:25:15 2019 -0400

    DCGAN tutorial

commit 67f3a18bcb89f425423692de3caee19f52df4bd0
Author: Shamir Alavi <dg1223@users.noreply.github.com>
Date:   Sat Oct 19 12:56:53 2019 -0400

    2019-03-15

commit 54a05b9f5bc6fba4662cde1ad21e3e5a4da289b7
Author: Shamir Alavi <dg1223@users.noreply.github.com>
Date:   Sat Oct 19 11:54:45 2019 -0400

    MODIS images

commit ed5f587b7c9f3ef60556b427d492a77b9643e0d3
Author: RISHABH <55334249+EXYNOS-999@users.noreply.github.com>
Date:   Sat Oct 19 19:20:52 2019 +0530

    Initial commit


exynosRishabh
well...
Shamir Alavi
import requests
import shutil
import os


def download_MODIS_image(num_images, year, month, day, max_day, max_month, end_date):
    # input params
    # num_images: number of images to download per date (for this URL, don't go over 80)
    # year/month/day: start date of the images, e.g. 2012-07-09
    # max_day/max_month: last day and month to cover (day 31 is skipped; future improvement)
    # end_date: stop date string, e.g. '2019-09-15'
    image_num = 0
    for m in range(month, max_month + 1):
        for d in range(day, max_day + 1):
            # build the zero-padded date string for the current month and day
            date = '{}-{:02d}-{:02d}'.format(year, m, d)
            if date == end_date:
                print('end date', end_date, 'reached')
                return
            for i in range(num_images):
                image_id = i
                url = ('https://gibs.earthdata.nasa.gov/wmts/epsg4326/best/'
                       'MODIS_Terra_CorrectedReflectance_TrueColor/default/'
                       + date + '/250m/6/13/' + str(image_id) + '.jpg')
                # Save file to the local hard drive
                filepath = 'D:\\SpaceApps2019\\Chasers_of_lost_data\\downloads\\images_modis_nasa\\'
                filename = 'nasa_modis_image_' + date + '_' + str(image_num) + '.jpg'
                full_filepath = filepath + filename
                # Open the url image; stream=True returns the stream content.
                response = requests.get(url, stream=True)
                # Open a local file with wb (write binary) permission.
                local_file = open(full_filepath, 'wb')
                # Set decode_content to True, otherwise the downloaded image file's size will be zero.
                response.raw.decode_content = True
                # Copy the response stream raw data to the local image file.
                shutil.copyfileobj(response.raw, local_file)
                local_file.close()
                # Remove the image url response object.
                del response
                # Heuristic: very small files are empty/invalid downloads.
                filesize = os.path.getsize(full_filepath)
                if filesize > 428:
                    print('image #', image_num, 'downloaded')
                else:
                    print('image #', image_num, 'is a zero sized file --> invalid image')
                image_num += 1


#### MAIN ####
# Loop over dates in a month to download in larger batches
# num_images = 80
# day = 1
# month = 7
# year = 2019
# max_day = 30
# max_month = 9
# end_date = '2019-09-15'
# download_MODIS_image(num_images, year, month, day, max_day, max_month, end_date)
Shamir Alavi
Our Slack workspace
Shamir Alavi
Generative Adversarial Network
Shamir Alavi

The number of epochs should be proportional to the amount of patience you have.

Shamir Alavi

After long hours of relentless research and trial implementations, we now have an idea of what we can deliver.

We have a "Plan of Action" for our deliverables!
